Similar documents
20 similar documents found (search time: 31 ms)
1.
The arbitrariness of the h-index becomes evident when one requires q × h rather than h citations as the threshold for the definition of the index, thereby changing the size of the core of the most influential publications in a dataset. I analyze the citation records of 26 physicists in order to determine how much the prefactor q influences the ranking. Likewise, the arbitrariness of the highly-cited-publications indicator is due to its threshold value, given either as an absolute number of citations or as a percentage of highly cited papers. The analysis of the 26 citation records shows that the ranking changes induced by these thresholds are rather large and comparable with the corresponding changes for the h-index.
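The generalized index described above folds the prefactor q into the usual h-index computation. A minimal sketch (the function name and the sample citation record are my own illustration):

```python
def generalized_h(citations, q=1.0):
    """Largest h such that h papers each have at least q*h citations."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= q * rank:
            h = rank
        else:
            break  # ranked counts only decrease, so no later rank qualifies
    return h

record = [50, 30, 20, 8, 6, 5, 2, 1]  # hypothetical citation record
print(generalized_h(record))          # q = 1 recovers the ordinary h-index: 5
print(generalized_h(record, q=2))     # a stricter core threshold: 4
```

Varying q shrinks or enlarges the core of most influential publications, which is exactly the degree of freedom whose effect on rankings the study measures.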

2.
3.
The scientific impact of a publication can be determined not only from the number of times it is cited but also from the citation speed with which its content is noted by the scientific community. Here we present the citation speed index as a meaningful complement to the h index: whereas the h index bases the impact of publications on the number of citations, the speed index uses the number of months that have elapsed since the first citation, i.e., the speed with which the results of publications find reception in the scientific community. The speed index is defined as follows: a group of papers has the index s if for s of its Np papers the first citation was at least s months ago, and for the other (Np − s) papers the first citation was no more than s months ago.
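The definition above is an h-type rule applied to months since first citation rather than to citation counts. A short sketch (function name and data are illustrative):

```python
def speed_index(months_since_first_citation):
    """s papers received their first citation at least s months ago."""
    ranked = sorted(months_since_first_citation, reverse=True)
    s = 0
    for rank, months in enumerate(ranked, start=1):
        if months >= rank:
            s = rank
    return s

# one entry per paper: months elapsed since its first citation
print(speed_index([40, 25, 10, 6, 3]))  # -> 4
```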

4.
The h index is a widely used indicator for quantifying an individual's scientific research output. It has, however, been criticized for insufficient accuracy: the ability to discriminate reliably between meaningful amounts of research output. As a single measure it cannot capture the complete information in the citation distribution over a scientist's publication list. An extensive data set with bibliometric data on scientists working in the field of molecular biology is taken as an example to introduce two approaches that provide information beyond the h index: (1) h2 lower, h2 center, and h2 upper are proposed, which quantify three areas within a scientist's citation distribution: the low-impact area (h2 lower), the area captured by the h index (h2 center), and the area of the publications with the highest visibility (h2 upper). (2) Given the existence of different areas in the citation distribution, the segmented regression model (sRM) is proposed as a method for statistically estimating the number of papers in a scientist's publication list with the highest visibility. Such sRM values should, however, be compared across individuals with great care.
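One plausible reading of the three areas (my own sketch; the paper's exact normalization may differ) splits a scientist's total citations into the h × h square, the excess citations of the core papers, and the citations of the remaining papers:

```python
def h_square_areas(citations):
    """Percentage of total citations in the low-impact area (h2_lower),
    the h-index square (h2_center) and the core's excess (h2_upper)."""
    ranked = sorted(citations, reverse=True)
    h = sum(1 for rank, c in enumerate(ranked, 1) if c >= rank)
    total = sum(ranked)
    center = h * h                          # the h x h square
    upper = sum(c - h for c in ranked[:h])  # core citations above the square
    lower = total - center - upper          # citations outside the core
    return {name: 100.0 * value / total for name, value in
            [("h2_lower", lower), ("h2_center", center), ("h2_upper", upper)]}

print(h_square_areas([10, 8, 5, 4, 2, 1]))
```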

5.
The aim of this study is to explore how a single additional journal paper or citation changes several impact indicators. The possible change of the h-index, A-index, R-index, π-index, π-rate, Journal Paper Citedness (JPC), and Citation Distribution Score (CDS) is traced with models. Particular attention is given to the increase of the indices caused by a single additional citation. The results obtained with the “successively built-up indicator” model show that, with an increasing number of citations or self-citations, the indices may increase substantially.

6.
Reliable methods for assessing research success are still under discussion. One method, which uses the likelihood of publishing very highly cited papers, has been validated in terms of Nobel prizes garnered. However, this method cannot be applied widely because it uses the fraction of publications in the upper, power-law tail of the citation distribution, which contains few publications in most countries and institutions. To achieve the same purpose without this restriction, we have developed double rank analysis, in which publications with a low number of citations are also included. Ranking publications by their number of citations from highest to lowest gives each publication from an institution or country two ranking numbers: one for its internal position and another for its world position; the internal ranking number can be expressed as a function of the world ranking number. In log–log double rank plots, a large number of publications fit a straight line; extrapolation allows estimating the likelihood of publishing the highest cited publication. The straight line derives from the power-law behavior of the double rank, which occurs because citations follow lognormal distributions with values of μ and σ that vary within narrow limits.
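A sketch of the double-rank construction (function name and synthetic data are mine): each of an institution's papers receives a world rank and an internal rank, and a straight line is fitted to the pairs in log–log space:

```python
import math

def double_rank(world_citations, is_ours):
    """(world_rank, internal_rank) pairs for one institution's papers, plus
    the slope and intercept of a least-squares line in log-log space."""
    order = sorted(range(len(world_citations)),
                   key=lambda i: -world_citations[i])
    pairs, internal = [], 0
    for world_rank, idx in enumerate(order, start=1):
        if is_ours[idx]:
            internal += 1
            pairs.append((world_rank, internal))
    xs = [math.log(w) for w, _ in pairs]
    ys = [math.log(i) for _, i in pairs]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return pairs, slope, my - slope * mx

counts = [100, 90, 80, 70, 60, 50, 40, 30, 20, 10]  # world citation counts
ours = [False, True, False, True, False, False, True, False, True, False]
pairs, slope, intercept = double_rank(counts, ours)
print(pairs)  # [(2, 1), (4, 2), (7, 3), (9, 4)]
```

Extrapolating the fitted line toward internal rank 1 is what lets the method estimate where the institution's highest cited publication would land in the world ranking.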

7.
Hirsch's h-index seeks to give a single number that in some sense summarizes an author's research output and its impact. Essentially, the h-index identifies the most productive core of an author's output in terms of citations received. We refer to this most productive set as the Hirsch core, or h-core. Jin's A-index measures the average impact, in terms of the average number of citations, of this “most productive” core. In this paper we investigate both the total productivity of the Hirsch core – what we term the size of the h-core – and the A-index, using a previously proposed stochastic model of the publication/citation process, emphasising the importance of the dynamic, or time-dependent, nature of these measures. We also look at the inter-relationships between these measures. Numerical investigations suggest that the A-index is a linear function of time and of the h-index, while the size of the Hirsch core has an approximate square-law relationship with time, and hence also with the A-index and the h-index.
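In code, under the reading that the "size" of the h-core is the total number of citations received by the core papers (names and data are illustrative, not the paper's stochastic model):

```python
def h_core_stats(citations):
    """h-index, total citations of the h-core, and Jin's A-index
    (the average number of citations per core paper)."""
    ranked = sorted(citations, reverse=True)
    h = sum(1 for rank, c in enumerate(ranked, 1) if c >= rank)
    core_size = sum(ranked[:h])
    a_index = core_size / h if h else 0.0
    return h, core_size, a_index

h, size, a = h_core_stats([10, 8, 5, 4, 2, 1])
print(h, size, a)  # 4 27 6.75
```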

8.
The definition of the g-index is as arbitrary as that of the h-index, because the threshold number g² of citations to the g most cited papers can be modified by a prefactor at one's discretion, thereby taking into account more or fewer of the highly cited publications in a dataset. In a case study I investigate the citation records of 26 physicists and show that the prefactor influences the ranking in terms of the generalized g-index less than it does for the generalized h-index. I specifically propose a prefactor of 2 for the g-index, because the resulting values are then of the same order of magnitude as for the common h-index. In this way one avoids a disadvantage of the original g-index, namely that its values are usually substantially larger than those of the h-index, which aggravates the precision problem, while the advantages of the g-index over the h-index are kept. As for the generalized h-index, different prefactors for the generalized g-index may be more useful for investigations that concentrate only on top scientists with high citation frequencies or on junior researchers with few citations.

9.
Metrics based on percentile ranks (PRs) for measuring scholarly impact involve complex treatment because of various defects, such as overvaluing or devaluing an object because of the percentile ranking scheme, ignoring precise citation variation among objects ranked next to each other, and inconsistency caused by additional papers or citations. These defects are especially obvious in small datasets. To avoid the complicated treatment required by PR-based metrics, we propose two new indicators: the citation-based indicator (CBI) and the combined impact indicator (CII). Document types of publications are taken into account. With these two indicators one is no longer troubled by the complex issues encountered with PR-based indicators, and for a small dataset with fewer than 100 papers no special calculation is needed. The CBI is based solely on citation counts, while the CII measures the integrated contributions of publications and citations. Both virtual and empirical data are used to compare the related indicators. The CII and the PR-based indicator I3 are highly correlated, but the former reflects citation impact more, while the latter relates more to publications.

10.
Most current h-type indicators use a single number to measure a scientist's productivity and the impact of his or her published works. Although a single number is simple to calculate, it fails to show how academic performance varies with time. We empirically study the basic h-index sequence for cumulative publications with consideration of yearly citation performance (for convenience, referred to as the L-sequence). The L-sequence consists of a series of L factors. Every factor along a scientist's career span is calculated with the h-index formula, based on the citations received in the corresponding year. The L-sequence thus shows the scientist's dynamic research trajectory and provides insight into his or her scientific performance in different periods. Furthermore, L, the sum of all factors of the L-sequence, can be used to evaluate the whole research career as an alternative to other h-index variants. Importantly, partial factors of the L-sequence can be adapted to different evaluation tasks. Moreover, the L-sequence can be used to highlight outstanding scientists in a specific period, whose research interests can be used to study the history and trends of a specific discipline.
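A minimal sketch of the L-sequence (the data layout is my assumption): each year's factor is the h-index computed from the citations papers received in that year alone, and L is the sum of the factors:

```python
def yearly_h(counts):
    """h-index of a single year's citation counts."""
    ranked = sorted(counts, reverse=True)
    return sum(1 for rank, c in enumerate(ranked, 1) if c >= rank)

def l_sequence(citations_per_year):
    """citations_per_year[y] lists, per paper, the citations received in
    year y; returns the sequence of L factors and their sum L."""
    factors = [yearly_h(year) for year in citations_per_year]
    return factors, sum(factors)

career = [[3, 1], [5, 4, 2], [1]]  # three years of a toy career
print(l_sequence(career))  # ([1, 2, 1], 4)
```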

11.
Journal of Informetrics, 2019, 13(2): 515–539
Counting the number of papers, the number of citations, and the h-index are the simplest bibliometric indices of research impact. We discuss some improvements. First, we replace citations with individual citations, fractionally shared among co-authors, to take into account that different papers and different fields have largely different average numbers of co-authors and of references. Next, we improve on citation counting by applying the PageRank algorithm to citations among papers. Because citations are time-ordered, this reduces to a weighted counting of citation descendants that we call PaperRank. We compute a related AuthorRank by applying the PageRank algorithm to citations among authors. These metrics quantify the impact of an author or paper taking into account the impact of those who cite it. Finally, we show how self- and circular citations can be eliminated by defining a closed market of Citation-coins. We apply these metrics to the InSpire database, which covers fundamental physics, presenting results for papers, authors, journals, institutes, towns, and countries, both for all time and for recent periods.
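A toy power-iteration sketch of the idea (graph and names are mine, not the InSpire implementation): running a PageRank-style update over a citation graph pushes credit from citing papers to the papers they cite, so a paper cited by well-cited papers ranks higher:

```python
def paper_rank(cites, damping=0.85, iterations=50):
    """cites[p] lists the papers that p cites; returns PageRank-style scores.
    On a time-ordered (acyclic) citation graph this behaves as a weighted
    count of citation descendants."""
    scores = {p: 1.0 for p in cites}
    for _ in range(iterations):
        incoming = {p: 1.0 - damping for p in cites}
        for paper, references in cites.items():
            if references:
                share = damping * scores[paper] / len(references)
                for ref in references:
                    incoming[ref] += share
        scores = incoming
    return scores

graph = {"A": [], "B": ["A"], "C": ["A", "B"]}  # B cites A; C cites A and B
ranks = paper_rank(graph)
print(max(ranks, key=ranks.get))  # A: cited both directly and via descendants
```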

12.
OBJECTIVE: To quantify the impact of Pakistani medical journals using the principles of citation analysis. METHODS: References of articles published in 2006 in three selected Pakistani medical journals were collected and examined. The number of citations to each Pakistani medical journal was totalled. The first ranking of journals was based on the total number of citations; the second on the 2006 impact factor; and the third on the 5-year impact factor. Self-citations were excluded from all three ratings. RESULTS: A total of 9079 citations in 567 articles were examined. Forty-nine separate Pakistani medical journals were cited. The Journal of the Pakistan Medical Association remains at the top in all three rankings, while the Journal of the College of Physicians and Surgeons-Pakistan attains second position in the ranking based on the total number of citations. The Pakistan Journal of Medical Sciences moves to second position in the ranking based on the 2006 impact factor, and the Journal of Ayub Medical College, Abbottabad moves to second position in the ranking based on the 5-year impact factor. CONCLUSION: This study examined the citation pattern of Pakistani medical journals. The impact factor, despite its limitations, is a valid indicator of journal quality.

13.
Across the various scientific domains, significant differences occur in research publishing formats, frequencies and citing practices, in the nature and organisation of research, and in the number and impact of a given domain's academic journals. Consequently, differences occur in the citations and h-indices of researchers. This paper attempts to identify cross-domain differences using quantitative and qualitative measures. The study focuses on the relationships among citations, most-cited papers and h-indices across domains and research group sizes. The analysis is based on the research output of approximately 10,000 researchers in Slovenia, of whom we focus on 6536 researchers working in 284 research group programmes in 2008–2012. As comparative measures of cross-domain research output, we propose the research impact cube (RIC) representation and the analysis of most-cited papers, highest impact factors and citation distribution graphs (Lorenz curves). The analysis of Lotka's model resulted in the proposal of a binary citation frequencies (BCF) distribution model that describes publishing frequencies well. The results may be used as a model to measure, compare and evaluate fields of science at the global, national and research community level, to streamline research policies and to evaluate progress over a given time period.

14.
This study describes the meaning of and the formula for the S-index, a novel evaluation index based on the number of citations of each article in a particular journal and the rank of the article according to that number. The study compares the S-index with the Impact Factor (IF), the best-known evaluation index, using Korea Citation Index data. The S-index is shown to be positively correlated with the number of articles published in a journal. The tapered h-index (hT-index), which, like the S-index, is based on all articles of a journal, is also compared with the S-index, and a very strong positive correlation between the two is found. Although the S-index is similar to the hT-index, it has slightly better differentiating power and ranks journals with evenly cited articles higher.
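The S-index formula itself is not reproduced in the abstract, but the comparison baseline, the tapered h-index, has a standard definition that can be sketched (under the usual convention that the j-th citation of the article at rank i is worth 1/(2·max(i, j) − 1)):

```python
def tapered_h(citations):
    """Tapered h-index: every citation of every article contributes, the
    j-th citation of the rank-i article scoring 1 / (2*max(i, j) - 1)."""
    ranked = sorted(citations, reverse=True)
    return sum(1.0 / (2 * max(i, j) - 1)
               for i, cites in enumerate(ranked, start=1)
               for j in range(1, cites + 1))

# a full h-core of n articles with n citations each scores exactly n
print(tapered_h([2, 2]))  # -> 2.0
```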

15.
Scholarly citations – widely seen as tangible measures of the impact and significance of academic papers – guide critical decisions by research administrators and policy makers. Citation distributions form characteristic patterns that can be revealed by big-data analysis. However, citation dynamics vary significantly among subject areas, countries, etc. The problem is how to quantify those differences and separate global from local citation characteristics. Here, we carry out an extensive analysis of the power-law relationship between the total citation count and the h-index to detect a functional dependence among its parameters for different science domains. The results demonstrate that the statistical structure of the citation indicators admits representation by a global scale and a set of local exponents. The scale parameters are evaluated for different research actors – individual researchers and entire countries – employing subject- and affiliation-based divisions of science into domains. The results can inform research assessment and classification into subject areas; the proposed divide-and-conquer approach can be applied to hidden scales in other power-law systems.
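A power-law relationship C ≈ A·h^m between total citations and the h-index can be checked with an ordinary least-squares fit in log–log space (a sketch; the variable names and data are mine):

```python
import math

def fit_power_law(h_values, citation_totals):
    """Fit C = A * h**m by linear regression of log C on log h."""
    xs = [math.log(h) for h in h_values]
    ys = [math.log(c) for c in citation_totals]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    m = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = math.exp(my - m * mx)
    return a, m

# exact synthetic data C = 3 * h**2 recovers A = 3, m = 2
a, m = fit_power_law([2, 4, 8], [12, 48, 192])
print(round(a, 6), round(m, 6))
```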

16.
Previous research has shown that citation data from different types of Web sources can potentially be used for research evaluation. Here we introduce a new combined Integrated Online Impact (IOI) indicator. For a case study, we selected research articles published in the Journal of the American Society for Information Science & Technology (JASIST) and Scientometrics in 2003. We compared the citation counts from Web of Science (WoS) and Scopus with five online sources of citation data: Google Scholar, Google Books, Google Blogs, PowerPoint presentations and course reading lists. The mean and median IOI were nearly twice as high as for both WoS and Scopus, confirming that online citations are sufficiently numerous to be useful for the impact assessment of research. We also found significant correlations between conventional and online impact indicators, confirming that both assess something similar in scholarly communication. Further analysis showed that the overall percentages of unique Google Scholar citations outside the WoS were 73% and 60% for the articles published in JASIST and Scientometrics, respectively. An important conclusion is that in subject areas where wider types of intellectual impact indicators beyond the WoS and Scopus databases are needed for research evaluation, the IOI can be used to help monitor research performance.

17.
To take into account the different bibliometric features of scientific fields and the different sizes of both the publication set evaluated and the set used as a reference standard, two new impact indicators are introduced. The Percentage Rank Position (PRP) indicator relates the ordinal rank position of the article assessed to the total number of papers in the publishing journal, the publications in the journal being ranked by decreasing citation frequency. The Relative Elite Rate (RER) indicator relates the number of citations obtained by the article assessed to the mean citation rate of the papers in the elite set of the publishing journal. The indices are preferably calculated from the data of the publications in the elite set of journal papers of individuals, teams, institutes or countries. The number of papers in the elite set is calculated by the equation P(πv) = (10 log P) − 10, where P is the total number of papers. The means of the PRP and RER indicators of the journal papers assessed may be applied for comparing the eminence of publication sets across fields.
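The elite-set size equation from the abstract, together with the PRP ratio, in a short sketch (taking the logarithm as base 10 is my assumption, as are the function names):

```python
import math

def elite_set_size(total_papers):
    """P(pi_v) = (10 * log10(P)) - 10 papers form the elite set."""
    return 10 * math.log10(total_papers) - 10

def percentage_rank_position(rank, papers_in_journal):
    """PRP: the article's citation rank as a share of the journal's papers
    (articles ranked by decreasing citation frequency; lower = more cited)."""
    return 100.0 * rank / papers_in_journal

print(elite_set_size(100))               # 10.0: a 100-paper set, 10 elite
print(percentage_rank_position(5, 200))  # 2.5
```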

18.
19.
One aspect of faculty effectiveness can be measured through research productivity, and publication and citation rates can serve as an indicator of that productivity. This study, the fourth in a series examining LIS faculty and program productivity as measured by publication and citation, uses the same methodology as the previous investigations. A consistent data instrument (the Social Science Citation Index) provided publication and citation data for LIS faculty, covering the years 1999 to 2004. Tables show the faculty and programs with the highest publication and citation rates, both overall and per capita, as well as a cumulative ranking of LIS programs based on faculty research productivity. This study, in conjunction with the three previous ones, documents an increase in LIS research productivity, suggesting an increase in faculty effectiveness.

20.
Citation identity examines the citing actor from the perspective of its citations. Applying the citation-identity method to research institutions, we study the set of institutions cited by a given institution, taking the Indiana University School of Library and Information Science as an example. The school's citation identity is analysed from three angles: the country/region distribution of the cited institutions, the distribution of citation frequencies to the cited institutions (including self-citations), and the disciplinary distribution of the cited papers. The results show that the citation-identity method can be used to analyse an institution's citation patterns, the layout of its research fields and its research trends, and to discover potential research collaborators.
