Similar Articles
1.
This study presents a novel approach to investigating the knowledge diffusion structure of the data quality field through an analysis of its main paths. We study a dataset of 1880 papers, using their citation data to build a citation network, and then investigate and visualize the main paths via social network analysis. The paper applies three variants of main path analysis, namely local, global, and key-route, to depict the knowledge diffusion path, and additionally uses the g-index and h-index to identify the most important journals and researchers in the data quality domain.
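All three variants rest on traversal weights computed over the citation network, most commonly the Search Path Count (SPC). The sketch below is a minimal illustration of that mechanism, not the authors' implementation: it computes SPC weights for a small citation DAG (edges point in the direction of knowledge flow, from cited to citing paper) and greedily extracts a local main path.

```python
from collections import defaultdict

def spc_weights(edges):
    """Search Path Count weights for a citation DAG whose edges run from
    cited (earlier) to citing (later) paper: SPC(u, v) is the number of
    source-to-sink paths passing through the edge (u, v)."""
    succ, pred, nodes = defaultdict(list), defaultdict(list), set()
    for u, v in edges:
        succ[u].append(v); pred[v].append(u); nodes.update((u, v))

    order, seen = [], set()              # post-order DFS: successors first
    def visit(n):
        if n in seen: return
        seen.add(n)
        for m in succ[n]: visit(m)
        order.append(n)
    for n in nodes: visit(n)

    n_plus = {n: 1 if not succ[n] else 0 for n in nodes}   # paths to sinks
    for n in order:
        for m in succ[n]: n_plus[n] += n_plus[m]
    n_minus = {n: 1 if not pred[n] else 0 for n in nodes}  # paths from sources
    for n in reversed(order):
        for m in pred[n]: n_minus[n] += n_minus[m]
    return {(u, v): n_minus[u] * n_plus[v] for u, v in edges}

def local_main_path(edges):
    """Local main path: start from the source edge with the largest SPC,
    then repeatedly follow the heaviest outgoing edge."""
    w = spc_weights(edges)
    out, indeg = defaultdict(list), defaultdict(int)
    for u, v in edges:
        out[u].append((u, v)); indeg[v] += 1
    edge = max((e for e in w if indeg[e[0]] == 0), key=w.get)
    path = [edge]
    while out[edge[1]]:
        edge = max(out[edge[1]], key=w.get)
        path.append(edge)
    return path

edges = [("A", "B"), ("A", "C"), ("B", "D"), ("C", "D"), ("D", "E")]
print(local_main_path(edges))   # [('A', 'B'), ('B', 'D'), ('D', 'E')]
```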

2.
To take into account the different bibliometric features of scientific fields and the different sizes of both the publication set evaluated and the reference-standard set, two new impact indicators are introduced. The Percentage Rank Position (PRP) indicator relates the ordinal rank position of the article assessed to the total number of papers in the publishing journal, with the journal's publications ranked by decreasing citation frequency. The Relative Elite Rate (RER) indicator relates the number of citations obtained by the article assessed to the mean citation rate of the papers in the elite set of the publishing journal. The indices are preferably calculated from the publications in the elite set of journal papers of individuals, teams, institutes, or countries. The number of papers in the elite set is given by P(π_v) = (10 log P) − 10, where P is the total number of papers. The means of the PRP and RER indicators of the journal papers assessed may be applied to compare the eminence of publication sets across fields.
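As a minimal sketch of these definitions (assuming a base-10 logarithm in the elite-set formula and expressing PRP as a percentage, both my reading of the abstract):

```python
import math

def elite_set_size(p_total):
    """Elite set size from the formula P(pi_v) = (10 log P) - 10,
    taking the base-10 logarithm and rounding to whole papers (assumptions)."""
    return max(1, round(10 * math.log10(p_total) - 10))

def prp(rank_in_journal, papers_in_journal):
    """Percentage Rank Position: the article's ordinal rank (papers ranked
    by decreasing citation frequency) relative to the journal's size."""
    return 100.0 * rank_in_journal / papers_in_journal

def rer(citations, elite_citation_counts):
    """Relative Elite Rate: the article's citations relative to the mean
    citation rate of the publishing journal's elite set."""
    return citations * len(elite_citation_counts) / sum(elite_citation_counts)

journal = sorted([3 * i for i in range(500)], reverse=True)  # toy journal, 500 papers
k = elite_set_size(len(journal))                             # -> 17 elite papers
print(k, prp(5, 500), rer(120, journal[:k]))
```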

3.
The scientific impact of a publication can be determined not only by the number of times it is cited but also by the speed with which its content is noted by the scientific community. Here we present the citation speed index as a meaningful complement to the h index: whereas the h index bases the impact of publications on citation counts, the speed index is based on the number of months that have elapsed since the first citation, i.e., the speed with which the results of publications find reception in the scientific community. The speed index is defined as follows: a group of papers has the index s if for s of its N_p papers the first citation was at least s months ago, and for the other (N_p − s) papers the first citation was ≤ s months ago.
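A short sketch of this definition (the function and variable names are mine):

```python
def speed_index(months_since_first_citation):
    """Speed index s: the largest s such that s of the papers received
    their first citation at least s months ago (an h-index analogue
    computed on first-citation ages instead of citation counts)."""
    s = 0
    for i, age in enumerate(sorted(months_since_first_citation, reverse=True), 1):
        if age >= i:
            s = i
        else:
            break
    return s

# Papers first cited 40, 18, 7, 3 and 1 months ago -> s = 3.
print(speed_index([40, 18, 7, 3, 1]))
```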

4.
Journal of Informetrics, 2019, 13(2): 515–539
Counts of papers, counts of citations, and the h-index are the simplest bibliometric indices of research impact. We discuss some improvements. First, we replace citations with individual citations, fractionally shared among co-authors, to account for the fact that different papers and different fields have largely different average numbers of co-authors and of references. Next, we improve on citation counting by applying the PageRank algorithm to citations among papers. Because citations are time-ordered, this reduces to a weighted counting of citation descendants that we call PaperRank. We compute a related AuthorRank by applying the PageRank algorithm to citations among authors. These metrics quantify the impact of an author or paper while taking into account the impact of those who cite it. Finally, we show how self- and circular citations can be eliminated by defining a closed market of Citation-coins. We apply these metrics to the InSpire database, which covers fundamental physics, presenting results for papers, authors, journals, institutes, towns, and countries, both all-time and for recent time periods.
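Because a citation DAG is time-ordered, the PageRank recursion collapses into a single backward pass. The sketch below shows that reduced computation under my own simplifications (unit starting credit, credit split equally among references, no co-author sharing); the paper's exact construction on InSpire may differ.

```python
def paper_rank(references):
    """PaperRank-style weighted descendant count: each paper starts with one
    unit of credit and passes its accumulated credit, split equally, to the
    papers it cites. One pass from newest to oldest suffices because
    citations only point backward in time. `references` maps each paper to
    its cited papers and must be ordered from oldest to newest."""
    score = {p: 1.0 for p in references}
    for p in reversed(list(references)):
        if references[p]:
            share = score[p] / len(references[p])
            for cited in references[p]:
                score[cited] += share
    return score

# A is cited by B and C; D cites B and C: A accumulates the most credit.
print(paper_rank({"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}))
```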

5.
Metrics based on percentile ranks (PRs) for measuring scholarly impact involve complex treatment because of various defects: percentile ranking schemes can overvalue or devalue an object, precise citation variation among items ranked next to each other is ignored, and additional papers or citations can introduce inconsistency. These defects are especially obvious in small datasets. To avoid the complicated treatment required by PR-based metrics, we propose two new indicators, the citation-based indicator (CBI) and the combined impact indicator (CII), which take the document types of publications into account. With these two indicators, one is no longer troubled by the complex issues encountered by PR-based indicators, and no special calculation is needed for a small dataset of fewer than 100 papers. The CBI is based solely on citation counts, while the CII measures the integrated contributions of publications and citations. Both virtual and empirical data are used to compare the effects of the related indicators. The CII and the PR-based indicator I3 are highly correlated, but the former reflects citation impact more, while the latter relates more to publications.

6.
We propose two new indices that measure a scientific researcher's overall influence and the degree to which his or her work is associated with the mainstream research subjects of a scientific field. These two new measures, the total influence index and the mainstream index, differ from traditional performance measures such as the simple citation count and the h-index in that they take into account the indirect influence of an author's work, i.e., a publication's impact on subsequent works that do not reference it directly. The two measures capture indirect influence from the knowledge-emanating paths embedded in the citation network of a target scientific field. We use the Hirsch index, data envelopment analysis, and the lithium iron phosphate battery technology field to examine the characteristics of these two measures. The results show that the total influence index favors earlier researchers and successfully highlights those who have made crucial contributions to the target field. The mainstream index, in addition to underlining total influence, also spotlights active researchers who enter a field at a later stage of its development. In summary, these two new measures are valuable complements to traditional scientific performance measures.
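The notion of indirect influence is easy to make concrete: a paper indirectly influences every work reachable from it through chains of citations. The sketch below computes those influence sets (a hedged illustration of the concept only; the authors' indices weight the paths they extract, which this does not do).

```python
def influence_sets(citers):
    """For each paper, the set of papers it influences directly or
    indirectly: everything reachable by following citation links forward.
    `citers` maps each paper to the papers that cite it directly."""
    def reach(p, seen):
        for q in citers.get(p, ()):
            if q not in seen:
                seen.add(q)
                reach(q, seen)
        return seen
    return {p: reach(p, set()) for p in citers}

# B cites A, and C cites B only: C is still indirectly influenced by A.
sets = influence_sets({"A": ["B"], "B": ["C"], "C": []})
print({p: sorted(s) for p, s in sets.items()})   # A -> ['B', 'C']
```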

7.
This paper explores a new indicator of journal citation impact, denoted as source normalized impact per paper (SNIP). It measures a journal's contextual citation impact, taking into account characteristics of its properly defined subject field, especially the frequency at which authors cite other papers in their reference lists, the rapidity of maturing of citation impact, and the extent to which the database used for the assessment covers the field's literature. It further develops Eugene Garfield's notion of a field's 'citation potential', defined as the average length of reference lists in a field, which determines the probability of being cited, and the need in fair performance assessments to correct for differences between subject fields. A journal's subject field is defined as the set of papers citing that journal. SNIP is defined as the ratio of the journal's citation count per paper to the citation potential in its subject field. It aims to allow direct comparison of sources in different subject fields. Citation potential is shown to vary not only between journal subject categories (groupings of journals sharing a research field) or disciplines (e.g., journals in mathematics, engineering, and the social sciences tend to have lower values than titles in the life sciences), but also between journals within the same subject category. For instance, basic journals tend to show higher citation potentials than applied or clinical journals, and journals covering emerging topics higher than periodicals in classical subjects or more general journals. SNIP corrects for such differences. Its strengths and limitations are critically discussed, and suggestions are made for further research. All empirical results are derived from Elsevier's Scopus.
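Stripped of Scopus's refinements, the core ratio is simple. A minimal sketch, assuming citation potential is proxied by the mean reference-list length of the papers citing the journal (the field definition used above):

```python
def snip(citations_per_paper, citing_side_reference_lengths):
    """SNIP as the ratio of a journal's citations per paper to the citation
    potential of its subject field, the latter taken here as the mean
    reference-list length of the papers that cite the journal."""
    potential = sum(citing_side_reference_lengths) / len(citing_side_reference_lengths)
    return citations_per_paper / potential

# A mathematics journal (short reference lists in its field) can reach a
# higher SNIP than a life-science journal with more raw citations per paper.
print(snip(2.0, [12, 15, 10, 13]))   # -> 0.16
print(snip(5.0, [45, 50, 38, 47]))   # -> 0.11
```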

8.
The h index is a widely used indicator for quantifying an individual's scientific research output, but it has been criticized for insufficient accuracy, that is, for a limited ability to discriminate reliably between meaningful amounts of research output: as a single measure it cannot capture the complete information in the citation distribution over a scientist's publication list. An extensive dataset on scientists working in the field of molecular biology is taken as an example to introduce two approaches that provide additional information beyond the h index. (1) The quantities h2 lower, h2 center, and h2 upper are proposed, which quantify three areas within a scientist's citation distribution: the low-impact area (h2 lower), the area captured by the h index (h2 center), and the area of publications with the highest visibility (h2 upper). (2) Given the existence of different areas in the citation distribution, the segmented regression model (sRM) is proposed as a method to statistically estimate the number of papers in a scientist's publication list with the highest visibility, although such sRM values should be compared across individuals with great care.
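The three areas follow directly from the geometry of the h index. A minimal sketch (expressing each area as a share of all citations, which is one plausible normalization; the paper may report them differently):

```python
def h_areas(citations):
    """Partition a citation distribution into the three areas described
    above: the h-square itself (h2 center), the excess citations of the
    h-core papers above the square (h2 upper), and the citations of the
    remaining low-impact papers (h2 lower)."""
    c = sorted(citations, reverse=True)
    h = sum(1 for i, x in enumerate(c, 1) if x >= i)     # the h index
    total, center = sum(c), h * h
    upper = sum(c[:h]) - center
    lower = total - center - upper
    return {"h2_lower": lower / total,
            "h2_center": center / total,
            "h2_upper": upper / total}

print(h_areas([25, 12, 8, 5, 3, 1, 0]))   # h = 4; most citations sit in h2_upper
```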

9.
Bibliometricians have long resorted to citation counts to measure the impact of publications on the advancement of science. Since the earliest days of the field, however, some scholars have questioned whether all citations should be worth the same, and have gone on to weight them by a variety of factors. However sophisticated the operationalization of the measures, the methodologies used in weighting citations still rest on limiting assumptions. This work takes an alternative approach to the underlying problem: the proposal is to value citations by the impact of the citing articles, regardless of the length of their reference lists. As well as conceptualizing a new indicator of impact, the work illustrates its application to the 2004–2012 Italian scientific production indexed in the WoS. The proposed impact indicator is highly correlated with the traditional citation count; however, shifts between the two measures are frequent and the number of outliers is not negligible. Moreover, the new indicator shows greater "sensitivity" when used to identify highly cited papers.
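The key departure from PageRank-style schemes is that the citing article's impact is not divided by the length of its reference list. A minimal sketch of that idea (using the citing articles' own citation counts as their "impact", which is my simplification of the indicator):

```python
def citing_side_value(cited_by, citation_counts):
    """Value each paper by the summed impact of its citing articles,
    without fractionalizing over the citers' reference lists."""
    return {p: sum(citation_counts[q] for q in citers)
            for p, citers in cited_by.items()}

counts = {"A": 2, "B": 40, "C": 1}
cited_by = {"A": ["B", "C"], "B": ["C"], "C": []}
# A is cited only twice, but one citer is highly cited: A scores 41, B only 1.
print(citing_side_value(cited_by, counts))
```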

10.
The standard impact factor allows one to compare scientific journals only within a particular scientific subject. To overcome this limitation, another citation indicator, the thematically weighted impact factor (TWIF), is proposed. This indicator allows one to compare journals across subjects and takes into account the fact that a journal may belong to several subjects. Only the journal's thematic headings and its standard impact factor are needed to calculate the indicator. The TWIF, calculated from the citation data of Journal Citation Reports, is investigated in this article.
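The abstract does not give the formula, so the following is only a hedged guess at its shape: normalize the journal's standard impact factor by the mean impact factor of each JCR heading it belongs to, then average over headings, so that multi-subject journals are handled and cross-subject comparison becomes meaningful.

```python
def twif(journal_if, heading_mean_ifs):
    """A hypothetical thematically weighted impact factor: the journal's
    impact factor relative to the mean impact factor of each of its subject
    headings, averaged over all headings (assumed formula, not the paper's)."""
    return sum(journal_if / m for m in heading_mean_ifs) / len(heading_mean_ifs)

# A journal listed under two headings with mean impact factors 1.2 and 3.0.
print(twif(2.4, [1.2, 3.0]))   # -> (2.0 + 0.8) / 2 = 1.4
```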

11.
12.
Research on the academic impact of scientific papers with high Altmetrics scores
引入"公平性测试"方法以消除时间窗口对被引次数的影响。以高Altmetrics指标论文作为样本,选取与样本论文发表在同一期刊同一期上前后两篇论文作为参照。利用Altmetric.com、Web of Science分别获取273篇样本及参照论文的Altmetric分数、底层数据值和被引用次数。通过比较分析后发现:Altmetrics和引文数两种指标反映出读者对文献的不同关注方向,底层数据源中大众媒体对于Altmetric分数的影响最明显,高Altmetrics指标论文同时具有较高的学术影响力。作为一种早期指标,高Altmetrics指标在一定程度上能够被视作文章在未来获得高被引的风向标。  相似文献   

13.
A review of research on the effect of open access on journal impact
This article summarizes the main OA performance research methods in China and abroad and classifies them into three categories: comparisons of the impact factors of OA and non-OA journals within a journal group; statistical comparisons of the citation frequencies of large samples of OA and non-OA papers in a given field; and comparisons of the mean impact of OA and non-OA papers within a single hybrid OA journal. The methods and conclusions of five representative studies are introduced. These studies show that OA has a positive and immediate effect on raising journal impact. Looking ahead, the article proposes research plans on the evolution of the proportion of OA papers, the evolution of the proportion of OA documents among cited references, and the effect of search engines on OA performance. This article is one of the papers in the "Open Access" topic of Digital Library Forum, 2009, No. 11.

14.
The disruption (D) index is a network-based indicator that quantifies the extent to which a focal paper disrupts its predecessors. This study examines what disruption means through example articles related to "sleeping beauties in science" and term frequency-inverse document frequency (TF-IDF). We investigated the structure of the citation network and subsequent papers' motivations for citing the focal papers. Based on the observation that conceptual work is more likely to disrupt science than technical work, we hypothesize that disruption reflects the mechanism by which paradigms shift in the development of science. We also argue that the disruption identified by the D index indicates more than the generation of a new direction. Disruptive contributions include revolutionary studies such as Nobel-prize-winning papers, as suggested in previous work, but they also include the scientific dissemination of new terminology created by popular proposals, such as "sleeping beauties in science". Such contributions redefine and popularize phenomena in science.
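The D index is typically computed in its CD-index form from three sets of citing papers. A short sketch of that standard computation (the exact variant used in this study is not specified in the abstract):

```python
def disruption_index(citers_of_focal, citers_of_refs):
    """D = (nF - nB) / (nF + nB + nR): nF papers cite the focal paper but
    none of its references, nB cite both, nR cite only the references.
    Values near +1 signal disruption, values near -1 consolidation."""
    citers_of_focal, citers_of_refs = set(citers_of_focal), set(citers_of_refs)
    nB = len(citers_of_focal & citers_of_refs)
    nF = len(citers_of_focal - citers_of_refs)
    nR = len(citers_of_refs - citers_of_focal)
    return (nF - nB) / (nF + nB + nR)

# Three papers cite only the focal work, one cites it together with its
# references, one cites the references alone: D = (3 - 1) / 5 = 0.4.
print(disruption_index({"p1", "p2", "p3", "p4"}, {"p4", "p5"}))
```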

15.
16.
Main path analysis (MPA) is an effective and widely accepted method in science and technology studies for extracting knowledge diffusion paths. Traditional citation analysis assumes that all citations should be treated equally. In contrast, this paper proposes a new MPA framework built on citation structure and content. Three indicators are used to adjust edge weights: (1) structural similarity, (2) topic similarity, and (3) sentiment analysis. The study takes the bullwhip effect and Internet of Things domains as examples to verify the reliability and feasibility of the improved MPA. The results show that the improved main path uncovers knowledge trajectories appropriately, with the ability to distinguish among citations and detect important papers. This research enriches MPA theory and suggests future research directions from the perspective of citation structure and content.
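A hedged sketch of the edge-weight adjustment (the combination rule and the equal coefficients are my assumptions; the paper's formula is not given in the abstract): scale a traversal count such as SPC by a convex combination of the three indicators.

```python
def adjusted_weight(spc, structural_sim, topic_sim, sentiment,
                    a=1/3, b=1/3, c=1/3):
    """Adjust an SPC edge weight by structural similarity, topic similarity
    and citation sentiment, each assumed to lie in [0, 1]; the coefficients
    a, b, c are placeholders, not the authors' calibration."""
    return spc * (a * structural_sim + b * topic_sim + c * sentiment)

# The same heavily traversed edge is down-weighted when the citing paper
# is critical of the cited one (low sentiment score).
print(adjusted_weight(12, 0.6, 0.8, 0.2))   # -> 6.4
print(adjusted_weight(12, 0.6, 0.8, 0.9))   # -> 9.2
```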

17.
Normalized citation indicators may not be sufficiently reliable when a short citation time window is used, because the citation counts of recently published papers are less reliable than those of papers published many years ago: within a limited period, recent publications have had insufficient time to accumulate citations. Normalization methods by themselves cannot solve this problem. We therefore introduce a weighting factor into the commonly used normalized indicator, the Category Normalized Citation Impact (CNCI), at the paper level. The weighting factor, calculated as the correlation coefficient between the citation counts of papers in the given short citation window and those in a fixed long citation window, reflects the degree of reliability of a paper's CNCI value. To verify the effect of the proposed weighted CNCI indicator, we compared the CNCI scores and CNCI rankings of 500 universities before and after introducing the weighting factor. Although the scores before and after were strongly positively correlated, the performance and rankings of some universities changed dramatically.
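A minimal sketch of the weighting step, assuming one weight is computed per cohort of papers (e.g., a publication year) from a sample for which both windows are observable; `statistics.correlation` requires Python 3.10+.

```python
from statistics import correlation   # Pearson r, Python 3.10+

def weighted_cnci(cnci_scores, short_window_cites, long_window_cites):
    """Discount CNCI scores by the reliability of the short citation
    window: the weight is the correlation between citation counts in the
    short window and those in a fixed long window."""
    w = correlation(short_window_cites, long_window_cites)
    return [w * s for s in cnci_scores]

short = [1, 0, 3, 2, 5]        # citations one year after publication
long_ = [10, 2, 25, 15, 60]    # citations over the full ten-year window
print(weighted_cnci([1.2, 0.4, 2.1, 1.0, 3.3], short, long_))
```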

18.
In this paper, we propose two methods for scoring scientific output based on statistical quantile plotting. First, a rescaling of journal impact factors for scoring scientific output at the macro level is proposed. It is based on normal quantile plotting, which allows impact data spanning several subject categories to be transformed to a standardized distribution; this can be used to compare the scientific output of larger entities, such as departments working in quite different areas of research. Next, as an alternative to the Hirsch index [Hirsch, J. E. (2005). An index to quantify an individual's scientific research output. Proceedings of the National Academy of Sciences of the United States of America, 102(46), 16569–16572], the extreme value index is proposed as an indicator for assessing the research performance of individual scientists. In the case of Lotkaian–Zipf–Pareto behaviour of an individual's citation counts, the extreme value index can be interpreted as the slope in a Pareto–Zipf quantile plot. This index, in contrast to the Hirsch index, is not influenced by the number of publications but stresses the decay of the statistical tail of the citation counts, and it appears to be much less sensitive to the science field than the Hirsch index.
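The slope of a Pareto–Zipf quantile plot is commonly estimated with the Hill estimator on the k largest observations. A minimal sketch under that assumption (the paper's exact estimator may differ):

```python
import math

def hill_estimator(citations, k):
    """Extreme value index of the citation tail: the mean log-excess of the
    k largest positive citation counts over the (k+1)-th largest, i.e. the
    Hill estimator of the Pareto tail slope."""
    x = sorted((c for c in citations if c > 0), reverse=True)
    threshold = x[k]                      # requires len(x) > k
    return sum(math.log(c / threshold) for c in x[:k]) / k

cites = [120, 80, 45, 30, 22, 15, 9, 7, 4, 3, 2, 1, 1]
print(hill_estimator(cites, k=5))   # heavier tails give a larger index
```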

19.
Journal of Informetrics, 2014, 8(4): 997–1004
The percentile-based rating scale P100 describes citation impact in terms of the distribution of unique citation values. This approach has recently been refined to also consider the frequency of papers with the same citation count. Here I compare the resulting P100′ with P100 for an empirical dataset and for a simple fictitious model dataset. It is shown that, in terms of citation frequencies, P100′ is not much different from standard percentile-based ratings. A new indicator P100″ is introduced.
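For concreteness, a sketch of the underlying P100 scale (ranks are assigned to the distinct citation values of a reference set, with the lowest at 0 and the highest at 100; the frequency of tied papers, which P100′ adds, is deliberately ignored here):

```python
def p100(citations):
    """P100: rank the unique citation values ascending and map them linearly
    onto [0, 100]; papers with the same citation count share one rank."""
    unique = sorted(set(citations))
    step = 100.0 / (len(unique) - 1)
    rank = {v: i * step for i, v in enumerate(unique)}
    return [round(rank[c], 1) for c in citations]

print(p100([0, 1, 1, 4, 10]))   # -> [0.0, 33.3, 33.3, 66.7, 100.0]
```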

20.
Across the various scientific domains, significant differences occur in research publishing formats, publication frequencies, and citing practices, in the nature and organisation of research, and in the number and impact of a given domain's academic journals. Consequently, the citations and h-indices of researchers differ across domains. This paper attempts to identify such cross-domain differences using quantitative and qualitative measures, focusing on the relationships among citations, most-cited papers, and h-indices across domains and research group sizes. The analysis is based on the research output of approximately 10,000 researchers in Slovenia, of whom we focus on the 6536 researchers working in 284 research group programmes in 2008–2012. As comparative measures of cross-domain research output, we propose the research impact cube (RIC) representation and the analysis of most-cited papers, highest impact factors, and citation distribution graphs (Lorenz curves). The analysis of Lotka's model leads to the proposal of a binary citation frequencies (BCF) distribution model that describes publishing frequencies well. The results may be used as a model to measure, compare, and evaluate fields of science at the global, national, and research community levels, to streamline research policies, and to evaluate progress over a definite time period.
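The Lorenz curves used in the citation distribution graphs are straightforward to compute. A minimal sketch (the names are mine):

```python
def lorenz_curve(citations):
    """Lorenz curve of a citation distribution: the cumulative share of all
    citations held by the bottom fraction of papers. Perfect equality
    would follow the diagonal from (0, 0) to (1, 1)."""
    c = sorted(citations)
    total, n, cum = sum(c), len(c), 0
    points = [(0.0, 0.0)]
    for i, x in enumerate(c, 1):
        cum += x
        points.append((i / n, cum / total))
    return points

# One paper holds 40 of 48 citations: the curve hugs the x-axis.
print(lorenz_curve([0, 1, 2, 5, 40]))
```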
