Similar Literature
20 similar records retrieved.
1.
To take into account the impact of the different bibliometric features of scientific fields and the different sizes of both the publication set evaluated and the set used as reference standard, two new impact indicators are introduced. The Percentage Rank Position (PRP) indicator relates the ordinal rank position of the article assessed to the total number of papers in the publishing journal, where the publications in the publishing journal are ranked by decreasing citation frequency. The Relative Elite Rate (RER) indicator relates the number of citations obtained by the article assessed to the mean citation rate of the papers in the elite set of the publishing journal. The indices are preferably calculated from the data of the publications in the elite set of journal papers of individuals, teams, institutes or countries. The number of papers in the elite set is calculated by the equation P(π_v) = (10 log P) − 10, where P is the total number of papers. The mean of the PRP and RER indicators of the journal papers assessed may be applied for comparing the eminence of publication sets across fields.
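Below is a minimal Python sketch of how the elite-set size and the two indicators could be computed for a single article, assuming a base-10 logarithm in the elite-set formula and assuming the elite set is simply the top P(π_v) papers of the publishing journal ranked by decreasing citations; the function names and sample data are illustrative, not taken from the paper.

    import math

    def elite_set_size(total_papers):
        # P(pi_v) = (10 * log10 P) - 10, rounded to the nearest integer
        return max(1, round(10 * math.log10(total_papers) - 10))

    def prp(article_rank, total_papers):
        # Percentage Rank Position: rank of the assessed article (1 = most cited)
        # relative to the total number of papers in the publishing journal
        return article_rank / total_papers

    def rer(article_citations, journal_citations):
        # Relative Elite Rate: citations of the assessed article relative to the
        # mean citation rate of the publishing journal's elite set
        ranked = sorted(journal_citations, reverse=True)
        elite = ranked[:elite_set_size(len(ranked))]
        return article_citations / (sum(elite) / len(elite))

    journal = [120, 80, 55, 40, 33, 25, 20, 14, 9, 7, 5, 3, 2, 1, 0] * 10  # 150 papers
    print(elite_set_size(len(journal)))                     # 12 elite papers
    print(prp(article_rank=5, total_papers=len(journal)))
    print(rer(article_citations=40, journal_citations=journal))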

2.
This paper explores a new indicator of journal citation impact, denoted source normalized impact per paper (SNIP). It measures a journal's contextual citation impact, taking into account characteristics of its properly defined subject field, especially the frequency at which authors cite other papers in their reference lists, the speed at which citation impact matures, and the extent to which the database used for the assessment covers the field's literature. It further develops Eugene Garfield's notions of a field's 'citation potential', defined as the average length of reference lists in a field and determining the probability of being cited, and of the need in fair performance assessments to correct for differences between subject fields. A journal's subject field is defined as the set of papers citing that journal. SNIP is defined as the ratio of the journal's citation count per paper to the citation potential in its subject field. It aims to allow direct comparison of sources in different subject fields. Citation potential is shown to vary not only between journal subject categories (groupings of journals sharing a research field) or disciplines (e.g., journals in mathematics, engineering and the social sciences tend to have lower values than titles in the life sciences), but also between journals within the same subject category. For instance, basic journals tend to show higher citation potentials than applied or clinical journals, and journals covering emerging topics higher than periodicals in classical subjects or more general journals. SNIP corrects for such differences. Its strengths and limitations are critically discussed, and suggestions are made for further research. All empirical results are derived from Elsevier's Scopus.
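As a toy illustration of the core ratio behind SNIP, the sketch below assumes that 'citation potential' is simply the mean reference-list length of the papers citing the journal; the real indicator is computed from Scopus with additional refinements, so this is only a schematic of the normalisation idea.

    def snip(citations_per_paper, citing_reference_list_lengths):
        # Raw impact per paper divided by the citation potential of the
        # journal's subject field (the papers that cite the journal).
        citation_potential = (sum(citing_reference_list_lengths)
                              / len(citing_reference_list_lengths))
        return citations_per_paper / citation_potential

    # A mathematics-style journal: short reference lists, low citation potential.
    print(snip(2.1, [12, 15, 9, 20, 14]))
    # A life-sciences-style journal: long reference lists, high citation potential.
    print(snip(4.8, [45, 60, 38, 52, 47]))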

3.
In this paper we attempt to assess the impact of journals in the field of forestry, in terms of bibliometric data, by providing an evaluation of forestry journals based on data envelopment analysis (DEA). In addition, based on the results of the conducted analysis, we provide suggestions for improving the impact of the journals in terms of widely accepted measures of journal citation impact, such as the journal impact factor (IF) and the journal h-index. More specifically, by modifying certain inputs associated with the productivity of forestry journals, we illustrate how this method could be utilized to raise their efficiency, which in terms of research impact can then be translated into an increase of their bibliometric indices, such as the h-index, IF or eigenfactor score.

4.
This study presents a unique approach to investigating the knowledge diffusion structure of the field of data quality through an analysis of the main paths. We study a dataset of 1880 papers to explore the knowledge diffusion path, using citation data to build the citation network. The main paths are then investigated and visualized via social network analysis. This paper applies three different main path analyses, namely local, global, and key-route, to depict the knowledge diffusion path, and additionally uses the g-index and h-index to evaluate the most important journals and researchers in the data quality domain.

5.
Journal of Informetrics, 2019, 13(2), 515–539
Counts of the number of papers, of citations, and the h-index are the simplest bibliometric indices of the impact of research. We discuss some improvements. First, we replace citations with individual citations, fractionally shared among co-authors, to take into account that different papers and different fields have widely different average numbers of co-authors and of references. Next, we improve on citation counting by applying the PageRank algorithm to citations among papers. Being time-ordered, this reduces to a weighted counting of citation descendants that we call PaperRank. We compute a related AuthorRank by applying the PageRank algorithm to citations among authors. These metrics quantify the impact of an author or paper while taking into account the impact of those authors that cite it. Finally, we show how self- and circular citations can be eliminated by defining a closed market of Citation-coins. We apply these metrics to the InSpire database that covers fundamental physics, presenting results for papers, authors, journals, institutes, towns, and countries, both for all time and in recent time periods.
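The following sketch shows the basic idea of running PageRank over a citation graph, using networkx on a toy set of papers; the actual PaperRank/AuthorRank computation on InSpire uses the paper's own weighting and the fractional sharing of citations among co-authors, which are not reproduced here.

    import networkx as nx

    # Toy citation graph: an edge A -> B means "paper A cites paper B".
    # PageRank on this directed graph pushes credit towards papers that are
    # cited by papers which are themselves well cited.
    citations = [
        ("paper_D", "paper_A"), ("paper_C", "paper_A"),
        ("paper_D", "paper_B"), ("paper_E", "paper_C"),
        ("paper_E", "paper_D"),
    ]
    G = nx.DiGraph(citations)
    scores = nx.pagerank(G, alpha=0.85)
    for paper, score in sorted(scores.items(), key=lambda kv: -kv[1]):
        print(paper, round(score, 3))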

6.
Citation averages, and Impact Factors (IFs) in particular, are sensitive to sample size. Here, we apply the Central Limit Theorem to IFs to understand their scale-dependent behavior. For a journal of n randomly selected papers from a population of all papers, we expect from the Theorem that its IF fluctuates around the population average μ and spans a range of values proportional to σ/√n, where σ² is the variance of the population's citation distribution. The 1/√n dependence has profound implications for IF rankings: the larger a journal, the narrower the range around μ where its IF lies. IF rankings therefore allocate an unfair advantage to smaller journals in the high IF ranks, and to larger journals in the low IF ranks. As a result, we expect a scale-dependent stratification of journals in IF rankings, whereby small journals occupy the top, middle, and bottom ranks; mid-sized journals occupy the middle ranks; and very large journals have IFs that asymptotically approach μ. We obtain qualitative and quantitative confirmation of these predictions by analyzing (i) the complete set of 166,498 IF and journal-size data pairs in the 1997–2016 Journal Citation Reports of Clarivate Analytics, (ii) the top-cited portion of 276,000 physics papers published in 2014–2015, and (iii) the citation distributions of an arbitrarily sampled list of physics journals. We conclude that the Central Limit Theorem is a good predictor of the IF range of actual journals, while sustained deviations from its predictions are a mark of true, non-random citation impact. IF rankings are thus misleading unless one compares like-sized journals or adjusts for these effects. We propose the Φ index, a rescaled IF that accounts for size effects and can be readily generalized to account also for different citation practices across research fields. Our methodology applies to other citation averages that are used to compare research fields, university departments or countries in various types of rankings.
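A small Monte Carlo sketch of the scale effect is given below: journals of n randomly drawn papers have IFs whose spread shrinks roughly as σ/√n. The lognormal citation population is a stand-in chosen here for illustration, not the distribution used in the paper.

    import numpy as np

    rng = np.random.default_rng(0)
    # Hypothetical heavy-tailed population of per-paper citation counts.
    population = rng.lognormal(mean=1.0, sigma=1.2, size=200_000).round()
    mu, sigma = population.mean(), population.std()
    print(f"population mean mu = {mu:.2f}, sigma = {sigma:.2f}")

    for n in (50, 500, 5000):                    # journal sizes (papers per journal)
        # 1000 simulated "journals" of n random papers each; each journal's IF
        # is just the mean citation count of its papers.
        ifs = rng.choice(population, size=(1000, n)).mean(axis=1)
        print(f"n={n:5d}  IF spread={ifs.std():.3f}  sigma/sqrt(n)={sigma/np.sqrt(n):.3f}")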

7.
The aim of the study is to explore the effects of the increase in the number of publications or citations, by a single journal paper or citation, on several impact indicators. The possible changes of the h-index, A-index, R-index, π-index, π-rate, Journal Paper Citedness (JPC), and Citation Distribution Score (CDS) are traced with models. Particular attention is given to the increase of the indices caused by a single additional citation. The results obtained with the “successively built-up indicator” model show that with an increasing number of citations or self-citations the indices may increase substantially.
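To make the single-citation effect concrete, here is a short sketch that recomputes the h-index together with the A-index (mean citations of the h-core) and the R-index (square root of the h-core citation sum) after one extra citation; these standard definitions of A and R are used only for illustration, and the paper's other indices (π-index, JPC, CDS) are not reproduced.

    import math

    def h_index(cites):
        ranked = sorted(cites, reverse=True)
        return sum(1 for i, c in enumerate(ranked, start=1) if c >= i)

    def a_and_r(cites):
        # A-index: mean citations of the h-core; R-index: sqrt of its citation sum.
        ranked = sorted(cites, reverse=True)
        h = h_index(cites)
        core = ranked[:h]
        return sum(core) / h, math.sqrt(sum(core))

    record = [8, 7, 6, 5, 4, 2, 1]
    bumped = record.copy()
    bumped[4] += 1                # one extra citation to the 5th-ranked paper
    for cites in (record, bumped):
        a, r = a_and_r(cites)
        print(f"h={h_index(cites)}  A={a:.2f}  R={r:.2f}")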

8.
Citation-based approaches, such as the impact factor and the h-index, have been used to measure the influence or impact of journals for journal rankings. A survey of the related literature for different disciplines shows that the level of correlation between these citation-based approaches is domain-dependent. We analyze the correlation between the impact factors and h-indices of the top-ranked computer science journals for five different subjects. Our results show that the correlation between these citation-based approaches is very low. Since using a different approach can result in different journal rankings, we further combine the different results and then re-rank the journals using a combination method. These new ranking results can be used as a reference for researchers to choose their publication outlets.
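A brief sketch of how such a correlation and a simple combined ranking could be computed with SciPy is shown below; the journal names, values, and the averaged-rank combination scheme are all illustrative assumptions, not the paper's exact method.

    from scipy.stats import rankdata, spearmanr

    # Hypothetical impact factors and h-indices for five journals.
    journals = ["J1", "J2", "J3", "J4", "J5"]
    impact_factor = [3.2, 1.1, 2.5, 0.9, 4.0]
    h_index = [40, 35, 22, 30, 28]

    rho, p = spearmanr(impact_factor, h_index)
    print(f"Spearman rho = {rho:.2f} (p = {p:.2f})")

    # One simple combination: average the two rank positions and re-rank.
    avg_rank = (rankdata([-x for x in impact_factor])
                + rankdata([-x for x in h_index])) / 2
    for journal, rank in sorted(zip(journals, avg_rank), key=lambda jr: jr[1]):
        print(journal, rank)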

9.
Examining a comprehensive set of papers (n = 1837) that were accepted for publication by the journal Angewandte Chemie International Edition (one of the prime chemistry journals in the world) or rejected by the journal but then published elsewhere, this study tested the extent to which the use of the freely available database Google Scholar (GS) can be expected to yield valid citation counts in the field of chemistry. Analyses of citations for the set of papers returned by three fee-based databases (Science Citation Index, Scopus, and Chemical Abstracts) were compared to the analysis of citations found using GS data. Whereas the analyses using citations returned by the three fee-based databases show very similar results, the results of the analysis using GS citation data differed greatly from the findings based on the fee-based databases. Our study therefore supports, on the one hand, the convergent validity of citation analyses based on data from the fee-based databases and indicates, on the other hand, a lack of convergent validity for the citation analysis based on the GS data.

10.
This study describes the meaning of, and the formula for, the S-index, a novel evaluation index based on the number of citations of each article in a particular journal and the rank of the article according to the number of citations. The study compares the S-index with the Impact Factor (IF), the most well-known evaluation index, using Korea Citation Index data. It is shown that the S-index is positively correlated with the number of articles published in a journal. The tapered h-index (hT-index), which, like the S-index, is based on all articles of a journal, is also compared with the S-index. There is a very strong positive correlation between the S-index and the hT-index. Although the S-index is similar to the hT-index, the S-index has slightly better differentiating power and ranks journals with evenly cited articles higher.

11.
Based on the rank-order citation distribution of, e.g., a researcher, one can define certain points on this distribution, thereby summarizing the citation performance of this researcher. Previous work of Glänzel and Schubert defined these so-called “characteristic scores and scales” (CSS) based on average citation data of samples of this ranked publication–citation list. In this paper we define another version of CSS, based on diverse h-type indices such as the h-index, the g-index, Kosmulski's h(2)-index and its g-variant, the g(2)-index. Mathematical properties of these new CSS are proved in a Lotkaian framework. These CSS also provide an improvement over the single h-type indices in the sense that they give h-type index values for different parts of the ranked publication–citation list.
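For reference, the h-type indices that the new CSS build on can be computed from a ranked citation list as in the short sketch below (the g(2)-variant and the CSS construction itself follow analogously and are omitted); the sample citation list is made up.

    def h_index(cites):
        # largest h such that h papers have at least h citations each
        ranked = sorted(cites, reverse=True)
        return sum(1 for i, c in enumerate(ranked, start=1) if c >= i)

    def g_index(cites):
        # largest g (capped at the number of papers) such that the top g papers
        # together have at least g^2 citations
        ranked = sorted(cites, reverse=True)
        cum, g = 0, 0
        for i, c in enumerate(ranked, start=1):
            cum += c
            if cum >= i * i:
                g = i
        return g

    def h2_index(cites):
        # Kosmulski's h(2): largest h such that h papers have at least h^2 citations each
        ranked = sorted(cites, reverse=True)
        return sum(1 for i, c in enumerate(ranked, start=1) if c >= i * i)

    cites = [50, 33, 20, 15, 12, 9, 7, 5, 3, 2, 1, 0]
    print(h_index(cites), g_index(cites), h2_index(cites))   # 7 12 3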

12.
Based on an idea by Kosmulski, Franceschini et al. (2012, Scientometrics 92(3), 621–641) propose to classify a publication as “successful” when it receives more citations than a specific comparison term (CT). In the authors' intention, CT should be a suitable estimate of the number of citations that a publication, in a certain scientific context and period of time, should potentially achieve. According to this definition, the success-index is the number of successful papers among a group of publications examined, such as those associated with a scientist or a journal. In the first part of the paper, the success-index is recalled and its properties and limitations are discussed. Next, relying on the theory of Information Production Processes (IPPs), an informetric model of the index is formulated, for a better comprehension of the index and its properties. Particular emphasis is given to a theoretical sensitivity analysis of the index.
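The counting step itself is trivial once a comparison term is available for each paper; the sketch below assumes each publication comes with its own CT, since the estimation of CT is the substantive part of the method and is not reproduced here.

    def success_index(publications):
        # publications: iterable of (citations, comparison_term) pairs;
        # a paper is "successful" when its citations exceed its CT.
        return sum(1 for citations, ct in publications if citations > ct)

    pubs = [(12, 8.4), (3, 5.1), (7, 6.9), (0, 4.2), (25, 10.0)]
    print(success_index(pubs))   # 3 successful papers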

13.
The arbitrariness of the h-index becomes evident when one requires q × h instead of h citations as the threshold for the definition of the index, thus changing the size of the core of the most influential publications of a dataset. I analyze the citation records of 26 physicists in order to determine how much the prefactor q influences the ranking. Likewise, the arbitrariness of the highly-cited-publications indicator is due to its threshold value, given either as an absolute number of citations or as a percentage of highly cited papers. The analysis of the 26 citation records shows that the changes in the rankings in dependence on these thresholds are rather large and comparable with the respective changes for the h-index.
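A generalized h-index with an adjustable prefactor q can be sketched as follows; the citation record is fictitious and only serves to show how the core shrinks as q grows.

    def generalized_h(cites, q=1.0):
        # largest h such that h papers have at least q*h citations each
        # (q = 1 recovers the ordinary h-index)
        ranked = sorted(cites, reverse=True)
        return sum(1 for i, c in enumerate(ranked, start=1) if c >= q * i)

    record = [95, 60, 44, 30, 21, 16, 12, 9, 7, 5, 4, 3, 2, 1]
    for q in (0.5, 1, 2, 5):
        print(q, generalized_h(record, q))   # 10, 8, 6, 4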

14.
This paper provides a ranking of 69 marketing journals using a new Hirsch-type index, the hg-index, which is the geometric mean of the h- and g-indices. The applicability of this index is tested on data retrieved from Google Scholar on marketing journal articles published between 2003 and 2007. The authors investigate the relationship between the hg-ranking, the ranking implied by Thomson Reuters' Journal Impact Factor for 2008, and rankings in previous citation-based studies of marketing journals. They also test two models of consumption of marketing journals that take into account measures of citing (based on the hg-index), prestige, and reading preference.
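A compact sketch of the hg-index, computed as the geometric mean of the h- and g-indices of a (fictitious) citation record:

    import math

    def hg_index(cites):
        # hg = sqrt(h * g), the geometric mean of the h-index and the g-index
        ranked = sorted(cites, reverse=True)
        h = sum(1 for i, c in enumerate(ranked, start=1) if c >= i)
        cum, g = 0, 0
        for i, c in enumerate(ranked, start=1):
            cum += c
            if cum >= i * i:
                g = i
        return math.sqrt(h * g)

    print(hg_index([45, 30, 22, 14, 9, 6, 4, 2, 1, 0]))   # sqrt(6 * 10) ~ 7.75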

15.
A review of research on the effect of open access on journal impact
This article reviews the main approaches used in China and abroad to study the performance of open access (OA) and groups them into three categories: comparisons of the impact factors of OA and non-OA journals within a given journal set, statistical comparisons of the citation frequencies of large samples of OA and non-OA articles in a given field, and comparisons of the average impact-factor contribution of OA and non-OA articles within a single hybrid OA journal. The methods and conclusions of five representative studies are introduced. These results show that OA has a positive and immediate effect on raising journal impact. Looking ahead, the article proposes research plans on the evolution of the proportion of OA articles, the evolution of the proportion of OA documents among cited references, and the influence of search engines on OA performance. This paper is one of the articles for the topic "Open Access" in the Digital Library Forum (《数字图书馆论坛》), issue 11, 2009.

16.
Journal metrics are employed for the assessment of scholarly journals from a general bibliometric perspective. In this context, the Thomson Reuters journal impact factors (JIFs) are the most widely used citation-based indicators. The 2-year journal impact factor (2-JIF) counts citations to one- and two-year-old articles, while the 5-year journal impact factor (5-JIF) counts citations to one- to five-year-old articles. Nevertheless, these indicators are not comparable among fields of science for two reasons: (i) each field has a different impact maturity time, and (ii) there are systematic differences in publication and citation behavior across disciplines. In fact, the 5-JIF first appeared in the Journal Citation Reports (JCR) in 2007 with the purpose of making impact more comparable for fields in which impact matures slowly. However, there is no optimal fixed impact maturity time valid for all fields: in some of them two years works well, whereas in others three or more years are necessary. There is therefore a problem when comparing a journal from a field in which impact matures slowly with a journal from a field in which impact matures rapidly. In this work, we propose the 2-year maximum journal impact factor (2M-JIF), a new impact indicator that considers the 2-year rolling citation time window of maximum impact instead of the previous 2-year time window. Finally, an empirical application comparing 2-JIF, 5-JIF, and 2M-JIF shows that the maximum rolling target window reduces the between-group variance with respect to the within-group variance in a random sample of about six hundred journals from eight different fields.
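A rough sketch of the difference between the fixed and the rolling target window is given below; the assumption that the 2-year window slides over the last five publication years, as well as the journal data, are illustrative only and not taken from the paper.

    def jif_2(articles, citations, census_year):
        # Classic 2-JIF: citations received in the census year to articles of the
        # two preceding years, divided by the number of those articles.
        years = (census_year - 1, census_year - 2)
        return sum(citations[y] for y in years) / sum(articles[y] for y in years)

    def jif_2_max(articles, citations, census_year, lookback=5):
        # 2M-JIF sketch: maximum over rolling 2-year publication windows,
        # here assumed to slide over the last `lookback` years.
        best = 0.0
        for start in range(census_year - lookback, census_year - 1):
            years = (start, start + 1)
            best = max(best, sum(citations[y] for y in years) / sum(articles[y] for y in years))
        return best

    # Hypothetical journal: papers published per year, and citations received in
    # the 2023 census year to the papers of each publication year.
    articles  = {2018: 100, 2019: 110, 2020: 120, 2021: 130, 2022: 140}
    citations = {2018: 260, 2019: 300, 2020: 420, 2021: 250, 2022: 180}
    print(jif_2(articles, citations, 2023))       # 1.59: slow-maturing impact
    print(jif_2_max(articles, citations, 2023))   # 3.13: rolling window catches the peak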

17.
The definition of the g-index is as arbitrary as that of the h-index, because the threshold number g² of citations to the g most cited papers can be modified by a prefactor at one's discretion, thus taking into account more or less of the highly cited publications within a dataset. In a case study I investigate the citation records of 26 physicists and show that the prefactor influences the ranking in terms of the generalized g-index less than for the generalized h-index. I propose specifically a prefactor of 2 for the g-index, because then the resulting values are of the same order of magnitude as for the common h-index. In this way one can avoid the disadvantage of the original g-index, namely that its values are usually substantially larger than those of the h-index and the precision problem is therefore larger, while the advantages of the g-index over the h-index are kept. As for the generalized h-index, different prefactors of the generalized g-index might be more useful for investigations that concentrate only on top scientists with high citation frequencies or on junior researchers with small numbers of citations.
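Following the description above, a generalized g-index with prefactor q can be sketched as the largest g whose top g papers jointly collect at least q·g² citations; the citation record is fictitious.

    def generalized_g(cites, q=1.0):
        # largest g (capped at the number of papers) such that the top g papers
        # together have at least q*g^2 citations; q = 1 gives the usual g-index
        ranked = sorted(cites, reverse=True)
        cum, g = 0, 0
        for i, c in enumerate(ranked, start=1):
            cum += c
            if cum >= q * i * i:
                g = i
        return g

    record = [95, 60, 44, 30, 21, 16, 12, 9, 7, 5, 4, 3, 2, 1]
    for q in (1, 2, 4):
        print(q, generalized_g(record, q))   # 14, 12, 8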

18.
The non-citation rate is the proportion of papers that do not attract any citation over a period of time following their publication. After reviewing the related papers in the Web of Science, Google Scholar and Scopus databases, we find that the current literature on citation distributions focuses mainly on the distribution of the percentages and citations of papers receiving at least one citation, while there are fewer studies on the time-dependent pattern of the percentage of never-cited papers, on which distribution model can fit this pattern, and on the factors influencing the non-citation rate. Here, we perform an empirical pilot analysis of the time-dependent distribution of the percentage of never-cited papers in a series of consecutive citation time windows following publication for six selected sample journals, and study the influence of paper length on the chance of a paper getting cited. The analysis leads to the following general conclusions: (1) a three-parameter negative exponential model fits the time-dependent distribution curve of the percentage of never-cited papers well; (2) in the initial citation time window, the percentage of never-cited papers in each journal is very high, but as the citation time window widens, this percentage drops rapidly at first and then more slowly, and the total decline for most journals is very large; (3) for wide citation time windows, the percentage of never-cited papers in each journal approaches a stable value, beyond which there are very few further changes unless a large number of "Sleeping Beauty" type papers appear; (4) the length of a paper has a great influence on whether it will be cited or not.
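A fit of the kind described in conclusion (1) can be sketched with SciPy as below; the specific functional form p(t) = a·exp(−b·t) + c and the data points are assumptions made for illustration, not the journals or parameters of the study.

    import numpy as np
    from scipy.optimize import curve_fit

    def never_cited_pct(t, a, b, c):
        # Three-parameter negative exponential: decays from a + c towards the
        # asymptote c (the stable percentage) as the window t widens.
        return a * np.exp(-b * t) + c

    # Hypothetical data: citation-window width (years) vs. % of never-cited papers.
    t = np.array([1, 2, 3, 4, 5, 6, 7, 8])
    pct = np.array([62.0, 41.0, 30.5, 24.0, 20.5, 18.8, 17.9, 17.5])

    (a, b, c), _ = curve_fit(never_cited_pct, t, pct, p0=(50.0, 0.5, 15.0))
    print(f"fit: {a:.1f} * exp(-{b:.2f} * t) + {c:.1f}")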

19.
Despite the huge amount of literature concerning the h-index, few papers have been devoted to its statistical analysis when a probabilistic distribution is assumed for citation counts. The present contribution mainly aims to present the inferential techniques recently introduced by Pratelli et al. (2012), explaining the details of proper point and set estimation of the theoretical h-index. Moreover, some new results on simultaneous inference, aimed at producing suitable comparisons between scholars, are presented. Finally, the citation datasets of the Nobel Laureates of the last five years and of the Fields medallists from 2002 onward are analyzed in order to exemplify the theoretical issues.

20.
In the present work we introduce a modification of the h-index for multi-authored papers with contribution-based author name ranking. The modified index, denoted the hmc-index, employs the framework of the hm-index, which in turn is a straightforward modification of the Hirsch index proposed by Schreiber. To retain the merit of requiring no additional rearrangement of papers in the hm-index, and to overcome its shortcoming of benefiting secondary authors at the expense of primary authors, the hmc-index uses combined credit allocation (CCA) in place of the fractionalized counting of the hm-index. The hm-index is a special case of the hmc-index and is suited to papers with equally important authors or alphabetically ordered authorship. Because an author with a lower contribution to the whole scientific community may nevertheless obtain a higher hmc-index, a rational variant, the hmcr-index, is introduced to avoid this. A fictitious example as a model case and two empirical cases are analyzed, and the correlations of the hmcr-index with the h-index and several of its variants that consider multiple co-authorship are inspected with the citation data of 30 researchers. The results show that the hmcr-index is more reasonable for authors with different contributions: a researcher playing more important roles in significant work obtains a higher hmcr-index.
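For context, the underlying hm-index with fractionalized counting can be sketched as below; the combined credit allocation that distinguishes the hmc-index is not specified in this abstract and is therefore not reproduced, and the sample records are fictitious.

    def hm_index(papers):
        # papers: list of (citations, n_authors) pairs. Schreiber's hm-index keeps
        # the citation ranking but advances the effective rank of each paper by
        # only 1/n_authors (fractionalized counting).
        ranked = sorted(papers, key=lambda p: p[0], reverse=True)
        r_eff, hm = 0.0, 0.0
        for citations, n_authors in ranked:
            r_eff += 1.0 / n_authors
            if citations >= r_eff:
                hm = r_eff
        return hm

    papers = [(30, 3), (22, 1), (15, 4), (9, 2), (6, 2), (3, 5), (1, 1)]
    print(hm_index(papers))   # about 2.78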
