Similar Documents
1.
Bibliometrics has become an indispensable tool in the evaluation of institutions (in the natural and life sciences); an evaluation report without bibliometric data has become a rarity. However, evaluations are often required to measure the citation impact of publications from very recent years in particular. Since a citation analysis is only meaningful for publications with a guaranteed citation window of at least three years, very recent years cannot (and should not) be included in the analysis. This study presents several options for dealing with this problem in statistical analysis. The publications of two universities from 2000 to 2011 serve as a sample dataset (n = 2652, univ 1 = 1484 and univ 2 = 1168). One option is to plot the citation impact data (percentiles) and regress the percentiles on the 'distant' publication years, extending the regression line (with its confidence interval) to show the trend for the 'very recent' publication years. Another way of dealing with the problem is to work with the concept of samples and populations. The third option (closely related to the second) is the application of the counterfactual concept of causality.
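A minimal sketch of the first option: regress mean citation percentiles on the 'distant' publication years with ordinary least squares and extrapolate the line, with its confidence interval, into the 'very recent' years. The percentile values below are hypothetical, not taken from the study:

    import numpy as np
    import statsmodels.api as sm

    # Hypothetical mean citation percentile per 'distant' publication year:
    years = np.arange(2000, 2009)          # years with a citation window >= 3 years
    percentiles = np.array([52.0, 53.5, 51.8, 54.2, 55.0, 54.6, 56.1, 55.4, 56.8])

    fit = sm.OLS(percentiles, sm.add_constant(years)).fit()

    # Extend the regression line into the 'very recent' years (2009-2011):
    recent = sm.add_constant(np.arange(2009, 2012))
    print(fit.get_prediction(recent).summary_frame(alpha=0.05))  # mean + CI bounds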

2.
We evaluate article-level metrics along two dimensions. Firstly, we analyse metrics' ranking bias in terms of fields and time. Secondly, we evaluate their performance based on test data consisting of (1) papers that have won high-impact awards and (2) papers that have won prizes for outstanding quality. We consider different citation impact indicators and indirect ranking algorithms in combination with various normalisation approaches (mean-based, percentile-based, co-citation-based, and post hoc rescaling). We execute all experiments on two publication databases which use different field categorisation schemes (author-chosen concept categories and categories based on papers' semantic information).

In terms of bias, we find that citation counts are always less time-biased but always more field-biased than PageRank. Furthermore, rescaling paper scores by a constant number of similarly aged papers reduces time bias more effectively than normalising by calendar years. We also find that percentile citation scores are less field- and time-biased than mean-normalised citation counts.

In terms of performance, we find that time-normalised metrics identify high-impact papers better shortly after publication than their non-normalised variants. However, after 7 to 10 years, the non-normalised metrics perform better. A similar trend exists for the set of high-quality papers, where these performance cross-over points occur after 5 to 10 years.

Lastly, we find that personalising PageRank with papers' citation counts reduces time bias but increases field bias. Similarly, using papers' associated journal impact factors to personalise PageRank increases its field bias. In terms of performance, PageRank should always be personalised with papers' citation counts and time-rescaled for citation windows smaller than 7 to 10 years.
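As a rough illustration of the personalisation discussed above, the sketch below runs PageRank on a toy citation graph with the personalisation vector set proportional to citation counts (in-degrees); the graph is invented and the time-rescaling step is omitted:

    import networkx as nx

    # Toy citation graph: an edge u -> v means paper u cites paper v.
    G = nx.DiGraph([("p1", "p2"), ("p1", "p3"), ("p2", "p3"),
                    ("p4", "p3"), ("p4", "p2")])

    # Personalisation proportional to each paper's citation count (in-degree);
    # a small floor keeps uncited papers from getting zero weight.
    weight = {n: max(G.in_degree(n), 1) for n in G}
    total = sum(weight.values())
    personalization = {n: w / total for n, w in weight.items()}

    scores = nx.pagerank(G, alpha=0.85, personalization=personalization)
    print(sorted(scores, key=scores.get, reverse=True))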

3.
A standard procedure in citation analysis is that all papers published in one year are assessed at the same later point in time, implicitly treating all publications as if they were published on the exact same date. This leads to a systematic bias in favor of early-months publications and against late-months publications. This contribution analyses the size of this distortion for a large body of publications from all disciplines over citation windows of up to 15 years. It is found that early-month publications enjoy a substantial citation advantage, which arises from citations received in the first three years after publication. While the advantage is stronger for author self-citations than for citations from others, it cannot be eliminated by excluding self-citations. The bias decreases only slowly over longer citation windows because of the continuing influence of the earlier years' citations. Given the substantial extent and long persistence of the distortion, it would be useful to remove or control for this bias in research and evaluation studies that use citation data. It is demonstrated that this can be achieved with the newly introduced concept of month-based citation windows.
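A minimal sketch of the proposed remedy, assuming month-resolution publication and citation dates are available; the helper names and dates are illustrative only:

    from datetime import date

    def months_between(d1, d2):
        return (d2.year - d1.year) * 12 + (d2.month - d1.month)

    def citations_in_window(pub_date, citation_dates, window_months=36):
        # Count citations received within a fixed number of months of publication,
        # so January and December papers get equal exposure time.
        return sum(0 <= months_between(pub_date, c) < window_months
                   for c in citation_dates)

    print(citations_in_window(date(2010, 1, 1),
                              [date(2010, 5, 1), date(2013, 2, 1)]))   # 1
    print(citations_in_window(date(2010, 12, 1),
                              [date(2011, 3, 1), date(2013, 11, 1)]))  # 2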

4.
The normalized citation indicator may not be sufficiently reliable when a short citation time window is used, because recently published papers have had little time to accumulate citations, and their citation counts are therefore not reliable enough to be used in citation impact indicators. Normalization methods themselves cannot solve this problem. To address it, we introduce a weighting factor into the commonly used normalized indicator, the Category Normalized Citation Impact (CNCI), at the paper level. The weighting factor, calculated as the correlation coefficient between the citation counts of papers in the given short citation window and those in a fixed long citation window, reflects the reliability of a paper's CNCI value. To verify the effect of the proposed weighted CNCI indicator, we compared the CNCI scores and rankings of 500 universities before and after introducing the weighting factor. Although the results before and after were strongly positively correlated, the performance and rankings of some universities changed dramatically.
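A toy sketch of the weighting idea, with invented citation counts and a simplified CNCI (the real indicator normalizes by category, publication year, and document type):

    import numpy as np

    # Hypothetical citation counts for papers from one field and publication year:
    short_window = np.array([2, 5, 1, 8, 0, 3])      # e.g. 2-year window
    long_window = np.array([10, 30, 4, 45, 2, 15])   # e.g. 10-year window

    # Reliability weight: correlation between short- and long-window counts.
    w = np.corrcoef(short_window, long_window)[0, 1]

    # Simplified CNCI: observed citations over the reference-set mean.
    cnci = short_window / short_window.mean()
    weighted_cnci = w * cnci
    print(round(w, 2), weighted_cnci.round(2))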

5.
Citation-based approaches, such as the impact factor and the h-index, have been used to measure the influence or impact of journals for journal rankings. A survey of the related literature across disciplines shows that the level of correlation between these citation-based approaches is domain dependent. We analyze the correlation between the impact factors and h-indices of the top-ranked computer science journals for five different subjects. Our results show that the correlation between these citation-based approaches is very low. Since using a different approach can result in different journal rankings, we further combine the different results and re-rank the journals using a combination method. These new ranking results can serve as a reference for researchers choosing their publication outlets.
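A small sketch of the two steps involved: measuring the rank correlation between two citation-based rankings and combining them. The rankings are invented, and Borda-style rank summation stands in for whichever combination method the paper actually uses:

    from scipy.stats import spearmanr

    journals = ["J1", "J2", "J3", "J4"]
    rank_by_if = {"J1": 1, "J2": 2, "J3": 3, "J4": 4}  # hypothetical IF ranking
    rank_by_h = {"J1": 3, "J2": 1, "J3": 4, "J4": 2}   # hypothetical h-index ranking

    rho, _ = spearmanr([rank_by_if[j] for j in journals],
                       [rank_by_h[j] for j in journals])

    # Borda-style combination: sum the ranks, lower is better.
    combined = sorted(journals, key=lambda j: rank_by_if[j] + rank_by_h[j])
    print(rho, combined)  # ['J2', 'J1', 'J4', 'J3']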

6.
The findings of Bornmann, Leydesdorff, and Wang (2013b) revealed that considering journal impact improves the prediction of long-term citation impact. This paper further explores the possibility of improving citation impact measurements based on a short citation window by considering journal impact and other variables, such as the number of authors, the number of cited references, and the number of pages. The dataset contains 475,391 journal papers published in 1980 and indexed in Web of Science (WoS, Thomson Reuters), together with all annual citation counts (from 1980 to 2010) for these papers. As an indicator of citation impact, we used percentiles of citations calculated with the approach of Hazen (1914). Our results show that citation impact measurement can indeed be improved: if factors generally influencing citation impact are considered in the statistical analysis, the explained variance in long-term citation impact increases considerably. However, this increase is visible only when using the years shortly after publication, not the later years.
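The Hazen (1914) percentile of the i-th ranked paper among n is 100 * (i - 0.5) / n. A minimal sketch with invented citation counts, using average ranks for ties:

    from scipy.stats import rankdata

    def hazen_percentiles(citations):
        # Hazen (1914): P = 100 * (rank - 0.5) / n; ties get the average rank.
        ranks = rankdata(citations)  # 1-based ranks, smallest count first
        return 100.0 * (ranks - 0.5) / len(citations)

    print(hazen_percentiles([0, 3, 3, 12, 45]))  # [10. 40. 40. 70. 90.]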

7.
Journal metrics are employed for the assessment of scholarly journals from a general bibliometric perspective. In this context, the Thomson Reuters journal impact factors (JIFs) are the most widely used citation-based indicators. The 2-year journal impact factor (2-JIF) counts citations to one- and two-year-old articles, while the 5-year journal impact factor (5-JIF) counts citations to one- to five-year-old articles. Nevertheless, these indicators are not comparable across fields of science for two reasons: (i) each field has a different impact maturity time, and (ii) there are systematic differences in publication and citation behavior across disciplines. In fact, the 5-JIF first appeared in the Journal Citation Reports (JCR) in 2007 with the purpose of making impacts more comparable in fields in which impact matures slowly. However, there is no optimal fixed impact maturity time valid for all fields: in some of them two years provide a good performance, whereas in others three or more years are necessary. There is therefore a problem when comparing a journal from a field in which impact matures slowly with a journal from a field in which impact matures rapidly. In this work, we propose the 2-year maximum journal impact factor (2M-JIF), a new impact indicator that considers the 2-year rolling citation time window of maximum impact instead of the previous 2-year time window. Finally, an empirical application comparing 2-JIF, 5-JIF, and 2M-JIF shows that the maximum rolling target window reduces the between-group variance with respect to the within-group variance in a random sample of about six hundred journals from eight different fields.
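A minimal sketch of the 2M-JIF idea, with invented per-year data and an assumed lookback of five publication years (matching the 5-JIF horizon):

    def two_year_if(cites, items, start):
        # Impact factor over the 2-year target window [start, start + 1]:
        # citations received in the JCR year to items published in those years,
        # divided by the number of citable items published in them.
        return (cites[start] + cites[start + 1]) / (items[start] + items[start + 1])

    # Hypothetical data for one journal, keyed by years before the JCR year:
    cites = {1: 120, 2: 150, 3: 180, 4: 90, 5: 40}   # citations received
    items = {1: 100, 2: 110, 3: 105, 4: 95, 5: 100}  # citable items published

    jif_2 = two_year_if(cites, items, 1)                              # classic 2-JIF
    jif_2m = max(two_year_if(cites, items, s) for s in (1, 2, 3, 4))  # 2M-JIF
    print(round(jif_2, 2), round(jif_2m, 2))  # 1.29 1.53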

8.
The journal impact factor (JIF) has been questioned considerably over its half-century of development because of its inconsistency with reputation-based evaluations of scientific journals. This paper proposes a publication delay adjusted impact factor (PDAIF), which takes publication delay into consideration to reduce its negative effect on the accuracy of the impact factor. Based on citation data collected from Journal Citation Reports and publication delay data extracted from the journals' official websites, PDAIFs are calculated for journals from business-related disciplines. The results show that PDAIF values are, on average, more than 50% higher than JIF values. Furthermore, journal rankings based on PDAIF show very high consistency with reputation-based journal rankings. Moreover, a case study of journals published by ELSEVIER and INFORMS shows that PDAIF brings a greater impact factor increase for journals with longer publication delays, since the negative influence removed is larger. Finally, insightful and practical suggestions for shortening publication delay are provided.
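The paper's PDAIF formula is not reproduced here; purely as an illustration of why delay inflates an adjusted value, the toy sketch below rescales a JIF by the fraction of the nominal citation window left after the average publication delay:

    def delay_adjusted_if(jif, delay_years, window=2.0):
        # Toy adjustment (not the paper's formula): citations could only accrue
        # over (window - delay) years, so rescale to the full nominal window.
        effective = max(window - delay_years, 1e-9)
        return jif * window / effective

    print(round(delay_adjusted_if(2.0, 0.5), 2))  # 2.67: a 6-month delay, +33%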

9.
The journal impact factor is not comparable among fields of science and social science because of systematic differences in publication and citation behavior across disciplines. In this work, a source normalization of the journal impact factor is proposed. We use the aggregate impact factor of the citing journals as a measure of the citation potential of the journal's topic, and we employ this citation potential to normalize the journal impact factor and make it comparable between scientific fields. An empirical application comparing several impact indicators with our topic-normalized impact factor in a set of 224 journals from four different fields shows that our normalization, using the citation potential of the journal's topic, reduces the between-group variance with respect to the within-group variance by a higher proportion than the other indicators analyzed. The effect of journal self-citations on the normalization process is also studied.
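A compact sketch of the source normalization, assuming the citation potential is a citation-weighted aggregate impact factor of the citing journals; all numbers are invented:

    import numpy as np

    jif = 3.2                          # hypothetical journal impact factor
    citing_ifs = [2.1, 4.0, 3.5, 1.8]  # IFs of the citing journals
    citing_counts = [40, 10, 25, 25]   # citations arriving from each

    # Citation potential of the journal's topic:
    potential = np.average(citing_ifs, weights=citing_counts)
    print(round(jif / potential, 2))   # topic-normalized impact factor, 1.25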

10.
Objective: Systematic reviews and other evidence syntheses, the pinnacle of the evidence pyramid, embody comprehensiveness and rigor; however, retracted data are being incorporated into these publications. This study examines the use of retracted publications in the field of pharmacy, describes characteristics of retracted publications cited by systematic reviews, and discusses factors associated with citation likelihood.
Methods: Using data from Retraction Watch, we identified retracted publications in the pharmacy field. We identified all articles citing these retracted publications in Web of Science and Scopus and limited the results to systematic reviews. We classified the retraction reason, determined whether each citation occurred before or after retraction, and analyzed factors associated with the likelihood of a systematic review citing a retracted publication.
Results: Of 1,396 retracted publications, 283 were cited 1,096 times in systematic reviews. Most citations (65.0%, 712/1,096) occurred before retraction. Citations were most often to items retracted due to data falsification or manipulation (39.2%), followed by items retracted due to ethical misconduct including plagiarism (30.4%) and concerns about or errors in data or methods (26.2%). Compared to those not cited in systematic reviews, cited items were significantly more likely to have been retracted due to data falsification and manipulation, were published in higher-impact-factor journals, and had longer delays between publication and retraction.
Conclusions: Further analysis of systematic reviews citing retracted publications is needed to determine the impact of flawed data. Librarians understand the nuances involved and can advocate for greater transparency around the retraction process and increase awareness of the challenges posed by retractions.

11.
The purpose of scientific research is to create knowledge and to apply theoretical results to practical problems in China's social, economic, and cultural development. Publishing papers in international journals allows more international peers to learn about China's latest research results and earns the country greater international influence, which is why SCI papers have been an important indicator in Chinese research assessment over the past two decades. Under this evaluation orientation, Chinese scholars now publish more international papers than any other country, and the large volume of citations from domestic peers has pushed the citation count of China's international papers to second place worldwide. This paper extracts data on Web of Science papers and their citations from 1990 to 2015, analyses country-level self-citation across countries and disciplines, and compares the results between countries and between disciplines. It finds that, once citations from domestic peers are excluded, the real international influence of China's international papers remains limited: apart from a few disciplines such as clinical medicine and physics, most disciplines are still below the global average.

12.
The influence of human factors on the impact factor as an evaluation indicator for scientific journals   Cited: 7 (self-citations: 1, citations by others: 6)
Dong Jianjun (董建军), Acta Editologica (《编辑学报》), 2008, 20(4): 365-366
This paper discusses how manipulated citing behavior, duplicate submission and publication, and plagiarized publication undermine the citation-based evaluation system of bibliometrics, and analyses how such human factors distort the calculation of the impact factor. It advocates respecting authors' citing behavior in their writing and strictly prohibiting any restriction or prescription of it, and suggests that editorial offices of journals in the same discipline establish contacts to eliminate duplicate and plagiarized publication, so that reference citing and the evaluation system can be put back on track.

13.
Questionable publications have been accused of 'greedy' practices; however, their influence on academia has not been gauged. Here, we probe the impact of questionable publications through a systematic and comprehensive analysis covering various participants in academia and compare the results with those of their unaccused counterparts, using billions of citation records covering both liaisons, i.e., journals and publishers, and prosumers, i.e., authors. Questionable publications direct publisher-level self-citations to their journals while limiting journal-level self-citations; yet conventional journal-level metrics are unable to detect these publisher-level self-citations. We propose a hybrid journal-publisher metric for detecting self-favouring citations that questionable journals (QJs) receive from their publishers. Additionally, we demonstrate that questionable publications are less disruptive and less influential than their counterparts. Our findings indicate an inflated citation impact of suspicious academic publishers and provide a basis for actionable policy-making against questionable publications.
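A toy sketch showing why a journal-level self-citation rate misses publisher-level self-citations, and how a journal-to-publisher mapping exposes them. The citation list and mapping are invented, and the paper's hybrid metric combines such signals rather than merely printing them:

    publisher = {"J1": "P1", "J2": "P1", "J3": "P2"}  # journal -> publisher
    citations = [("J1", "J2"), ("J2", "J2"), ("J3", "J2"), ("J1", "J1")]

    def self_citation_rates(cited, citations, publisher):
        received = [(s, d) for s, d in citations if d == cited]
        journal_self = sum(s == d for s, d in received)
        publisher_self = sum(publisher[s] == publisher[d] for s, d in received)
        return journal_self / len(received), publisher_self / len(received)

    # J2 looks modest at the journal level but not at the publisher level:
    print(self_citation_rates("J2", citations, publisher))  # (0.33..., 0.66...)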

14.
The purpose of the Kazakh publication citation index, under development in Kazakhstan since 2005, is to carry out scientometric analysis of scientific publications and determine their citation rates. At present, the bibliographic database (BDB) on citation includes information on the publication activity and citation indices of approximately 30,000 Kazakh scientists and specialists, who have published over 18,000 scientific papers in more than 500 domestic and foreign journals; the total number of references to papers by Kazakh scientists exceeds 28,000. The Kazakh analogue of the science citation index is an efficient tool for analytical work with the BDB of scientific publications, making it possible to calculate publication activity and citation parameters, which are used to determine the value of, and demand for, the results of scientific work in various fields of domestic science.

15.
In this paper we deal with the problem of aggregating numeric sequences of arbitrary length that represent, e.g., the citation records of scientists. Impact functions are aggregation operators that express as a single number not only the quality of individual publications but also their author's productivity. We examine some fundamental properties of these aggregation tools. It turns out that each impact function which always gives indisputable valuations must necessarily be trivial. Moreover, it is shown that for any set of citation records in which none is dominated by another, we may construct an impact function that yields any a priori established ordering of the authors. Theoretically, then, there is considerable room for manipulation in the hands of decision makers. We also discuss the differences between the impact-function-based and the multicriteria-decision-making-based approaches to scientific quality management, and study how introducing new properties of impact functions affects the assessment process. We argue that simple mathematical tools like the h- or g-index (as well as other bibliometric impact indices) may not necessarily be a good choice for assessing scientific achievements.
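For concreteness, minimal implementations of the two indices mentioned; the citation record is invented:

    def h_index(citations):
        # Largest h such that h papers have at least h citations each.
        cs = sorted(citations, reverse=True)
        return sum(c >= i + 1 for i, c in enumerate(cs))

    def g_index(citations):
        # Largest g such that the g most cited papers total at least g * g citations.
        cs = sorted(citations, reverse=True)
        total, g = 0, 0
        for i, c in enumerate(cs, start=1):
            total += c
            if total >= i * i:
                g = i
        return g

    record = [10, 8, 5, 4, 3]
    print(h_index(record), g_index(record))  # 4 5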

16.
A comparative study of the impact of computer science papers from China and India   Cited: 1 (self-citations: 0, citations by others: 1)
Using the Science Citation Index Expanded for 1999-2008 as the data source, this article compares the publications of China and India in the field of computer science in terms of publication counts, citation frequency, the temporal distribution of the publishing journals, and journal impact factors.

17.
The scientific impact of a publication can be determined not only by the number of times it is cited but also by the speed with which its content is noted by the scientific community. Here we present the citation speed index as a meaningful complement to the h-index: whereas the h-index is calculated from publications' citation counts, the speed index is calculated from the number of months that have elapsed since each publication's first citation, i.e., the speed with which the results of publications find reception in the scientific community. The speed index is defined as follows: a group of papers has the index s if for s of its Np papers the first citation occurred at least s months ago, while for the other (Np - s) papers the first citation occurred at most s months ago.
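Since the definition mirrors the h-index with months in place of citation counts, a direct sketch (input values invented):

    def speed_index(months_since_first_citation):
        # Largest s such that s papers received their first citation
        # at least s months ago; uncited papers enter as 0.
        ms = sorted(months_since_first_citation, reverse=True)
        return sum(m >= i + 1 for i, m in enumerate(ms))

    # Five papers, first cited 20, 14, 6, 3 and 1 months ago:
    print(speed_index([20, 14, 6, 3, 1]))  # 3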

18.
Scientific production is growing steadily, exhibiting 4% annual growth in publications and 1.8% annual growth in the number of references per publication, which together produce a 12-year doubling period in the total supply of references, i.e., links in the science citation network. This growth has far-reaching implications for how academic knowledge is connected, accessed, and evaluated. Against this background, we analyzed a citation network comprising 837 million references produced by 32.6 million publications over the period 1965-2012, allowing for a detailed analysis of the 'attention economy' in science. Our results show how growth relates to 'citation inflation', to increased connectivity in the citation network resulting from decreased levels of uncitedness, and to a narrowing range of attention, as both very classic and very recent literature are cited increasingly less. The decreasing attention to recent literature published within the last six years suggests that science has become stifled by a publication deluge destabilizing the balance between production and consumption. To better understand these patterns together, we developed a generative model of the citation network featuring exponential growth, the redirection of scientific attention via publications' reference lists, and the crowding out of old literature by the new. We validate our model against several empirical benchmarks and then use perturbation analysis to measure the impact of shifts in citing behavior on the synthetic system's properties, thereby providing insights into the functionality of the science citation network as an infrastructure supporting the memory of science.
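A stripped-down sketch of such a generative model: each new paper cites earlier papers uniformly, but with some probability redirects a citation by copying an entry from the cited paper's own reference list. Network growth here is linear rather than exponential, and all parameters are invented:

    import random

    def grow_citation_network(n_papers=2000, refs_per_paper=5,
                              p_redirect=0.5, seed=1):
        rng = random.Random(seed)
        refs = {0: []}
        for new in range(1, n_papers):
            cited = set()
            while len(cited) < min(refs_per_paper, new):
                target = rng.randrange(new)              # pick any earlier paper
                if rng.random() < p_redirect and refs[target]:
                    target = rng.choice(refs[target])    # redirect via its references
                cited.add(target)
            refs[new] = list(cited)
        return refs

    refs = grow_citation_network()
    indegree = {}
    for lst in refs.values():
        for t in lst:
            indegree[t] = indegree.get(t, 0) + 1
    # Redirection concentrates attention: a few papers absorb many citations.
    print(max(indegree.values()))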

19.
Altmetrics promise useful support for assessing the impact of scientific works, including impact beyond the scholarly community and with very limited citation windows. Unfortunately, altmetrics scores are currently available only for recent articles and cannot yet be used as covariates in predicting the long-term impact of publications; nevertheless, the study of their statistical properties is of evident interest to scientometricians. Applying the same approaches used in the literature to assess the universality of citation distributions, the intention here is to test whether a universal distribution also holds for Mendeley readerships. Results of the analysis, carried out on a sample of publications randomly extracted from the Web of Science, confirm that readerships share similar shapes across fields and can be rescaled to a common, universal form. The rescaling turns out not to be particularly effective on the right tails; in other regions, it causes a good collapse of the field-specific distributions, even for very recent publications.
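A minimal sketch of the rescaling used in the citation-universality literature, applied to invented readership counts: dividing each count by its field mean puts the field-specific distributions on a common scale:

    import numpy as np

    field_a = np.array([3, 7, 12, 25, 60, 140])   # hypothetical readerships
    field_b = np.array([1, 2, 4, 9, 20, 45])

    rescaled_a = field_a / field_a.mean()
    rescaled_b = field_b / field_b.mean()
    print(rescaled_a.round(2))
    print(rescaled_b.round(2))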

20.
Following a brief introduction of citation-based journal rankings as potential serials management tools, the most frequently used citation measure, the impact factor, is explained. This paper then demonstrates a methodological bias inherent in averaging Social Sciences Citation Index Journal Citation Reports (SSCI JCR) impact factor data from two or more consecutive years. A possible method for correcting the bias, termed the adjusted impact factor, is proposed. For illustration, a set of political science journals is ranked according to three different methods of combining SSCI JCR impact factor data from successive years: crude averaging, weighted averaging, and the adjusted impact factor. Although the correlations among the three methods are quite high, there are noteworthy differences in the rankings that could affect collection development decisions.
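To make the averaging bias concrete, the sketch below contrasts crude averaging of two yearly impact factors with a weighted average that pools citations and citable items first. The figures are invented, and the paper's adjusted impact factor is a separate, third method not reproduced here:

    # Hypothetical JCR-style data for one journal in two consecutive years:
    years = {
        2019: {"citations": 300, "items": 120},
        2020: {"citations": 450, "items": 200},
    }

    crude = sum(y["citations"] / y["items"] for y in years.values()) / len(years)

    # Weighted averaging: pool numerators and denominators so that years with
    # more citable items count proportionally more.
    weighted = (sum(y["citations"] for y in years.values())
                / sum(y["items"] for y in years.values()))
    print(round(crude, 2), round(weighted, 2))  # 2.38 2.34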
