Similar articles
1.
In this paper we assess the impact of journals in the field of forestry, in terms of bibliometric data, by evaluating forestry journals with data envelopment analysis (DEA). In addition, based on the results of the analysis, we offer suggestions for improving the journals' impact in terms of widely accepted measures of journal citation impact, such as the journal impact factor (IF) and the journal h-index. More specifically, by modifying certain inputs associated with the productivity of forestry journals, we illustrate how this method could be used to raise their efficiency, which in terms of research impact translates into an increase in their bibliometric indices, such as the h-index, IF or eigenfactor score.
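The abstract does not give the paper's exact DEA specification, but the core of any DEA evaluation is a small linear program per journal. The sketch below is a minimal input-oriented CCR model; the indicator choices and all numbers are illustrative, not the paper's data:

```python
# Minimal input-oriented CCR DEA sketch (hypothetical data): each journal
# consumes "inputs" (e.g., papers published, self-citation rate) and
# produces "outputs" (e.g., total citations, h-index).
import numpy as np
from scipy.optimize import linprog

X = np.array([[120, 0.15], [300, 0.10], [80, 0.20]])  # inputs, one row per journal
Y = np.array([[450, 25], [900, 40], [200, 12]])       # outputs, one row per journal

def ccr_efficiency(o, X, Y):
    """Efficiency of journal o: minimize theta such that some combination
    of peers uses at most theta * o's inputs while matching o's outputs."""
    n = X.shape[0]
    c = np.r_[1.0, np.zeros(n)]                  # objective: minimize theta
    A_in = np.c_[-X[o], X.T]                     # sum_j lam_j * x_j <= theta * x_o
    A_out = np.c_[np.zeros(Y.shape[1]), -Y.T]    # sum_j lam_j * y_j >= y_o
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.r_[np.zeros(X.shape[1]), -Y[o]]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (n + 1))
    return res.fun

for j in range(len(X)):
    print(f"journal {j}: efficiency = {ccr_efficiency(j, X, Y):.3f}")
```

An efficiency of 1.0 marks a journal on the frontier; lower values indicate how far its inputs could shrink while preserving its outputs, which is the lever the paper uses to suggest improvements.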

2.
The journal impact factor (JIF) has been questioned repeatedly over the past half-century because of its inconsistency with reputation-based evaluations of scientific journals. This paper proposes a publication delay adjusted impact factor (PDAIF), which takes publication delay into account to reduce its negative effect on the impact factor. Based on citation data collected from Journal Citation Reports and publication delay data extracted from the journals' official websites, PDAIFs are calculated for journals from business-related disciplines. The results show that PDAIF values are, on average, more than 50% higher than JIF results. Furthermore, journal rankings based on PDAIF show very high consistency with reputation-based journal rankings. Moreover, a case study of journals published by ELSEVIER and INFORMS shows that PDAIF yields a greater impact factor increase for journals with longer publication delays, because it removes that negative influence. Finally, insightful and practical suggestions for shortening publication delays are provided.
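The abstract states the idea (adjust the citation window for the journal's publication delay) but not the exact formula. The sketch below encodes one plausible reading with toy numbers, shifting the two-year window back by the mean delay; the function names and all data are illustrative, not the paper's PDAIF definition:

```python
import math

def jif(cites_to, items, year):
    """Classic 2-year JIF: citations received in `year` to items from the
    two preceding years, divided by the number of those items."""
    win = [year - 1, year - 2]
    return sum(cites_to[y] for y in win) / sum(items[y] for y in win)

def pdaif(cites_to, items, year, delay_years):
    """One plausible reading of the PDAIF idea (an assumption, not the
    paper's exact formula): shift the window back by the mean delay so
    papers are assessed over years in which they could actually be cited."""
    d = math.ceil(delay_years)
    win = [year - 1 - d, year - 2 - d]
    return sum(cites_to[y] for y in win) / sum(items[y] for y in win)

# toy data: cites_to[y] = citations received in 2006 to items published in y
cites_to = {2001: 90, 2002: 120, 2003: 150, 2004: 80, 2005: 60}
items = {2001: 40, 2002: 45, 2003: 50, 2004: 55, 2005: 60}
print(jif(cites_to, items, 2006))            # ~1.22
print(pdaif(cites_to, items, 2006, 1.0))     # ~2.19, higher as the paper reports
```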

3.
Following a brief introduction to citation-based journal rankings as potential serials management tools, the most frequently used citation measure – the impact factor – is explained. This paper then demonstrates a methodological bias inherent in averaging Social Sciences Citation Index Journal Citation Reports (SSCI JCR) impact factor data from two or more consecutive years. A possible method for correcting the bias, termed the adjusted impact factor, is proposed. For illustration, a set of political science journals is ranked according to three different methods (crude averaging, weighted averaging, and adjusted impact factor) for combining SSCI JCR impact factor data from successive years. Although the correlations among the three methods are quite high, there are noteworthy differences in the rankings that could affect collection development decisions.
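The bias being corrected is easy to see in a toy example: the crude mean of two annual impact factors weights both years equally even when the number of citable items differs sharply, whereas a weighted average pools numerators and denominators first. A minimal illustration with invented numbers:

```python
# Crude vs. weighted averaging of two annual impact factors (toy data).
years = {2020: {"cites": 300, "items": 100},   # annual IF = 3.0
         2021: {"cites": 150, "items": 300}}   # annual IF = 0.5

crude = sum(y["cites"] / y["items"] for y in years.values()) / len(years)
weighted = (sum(y["cites"] for y in years.values())
            / sum(y["items"] for y in years.values()))

print(f"crude average:    {crude:.2f}")     # 1.75 -- inflated by the small year
print(f"weighted average: {weighted:.2f}")  # 1.12 -- the pooled ratio
```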

4.
Citation averages, and Impact Factors (IFs) in particular, are sensitive to sample size. Here, we apply the Central Limit Theorem to IFs to understand their scale-dependent behavior. For a journal of n randomly selected papers from a population of all papers, the Theorem predicts that its IF fluctuates around the population average μ and spans a range of values proportional to σ/√n, where σ² is the variance of the population's citation distribution. The 1/√n dependence has profound implications for IF rankings: the larger a journal, the narrower the range around μ where its IF lies. IF rankings therefore allocate an unfair advantage to smaller journals in the high IF ranks, and to larger journals in the low IF ranks. As a result, we expect a scale-dependent stratification of journals in IF rankings, whereby small journals occupy the top, middle, and bottom ranks; mid-sized journals occupy the middle ranks; and very large journals have IFs that asymptotically approach μ. We obtain qualitative and quantitative confirmation of these predictions by analyzing (i) the complete set of 166,498 IF & journal-size data pairs in the 1997–2016 Journal Citation Reports of Clarivate Analytics, (ii) the top-cited portion of 276,000 physics papers published in 2014–2015, and (iii) the citation distributions of an arbitrarily sampled list of physics journals. We conclude that the Central Limit Theorem is a good predictor of the IF range of actual journals, while sustained deviations from its predictions are a mark of true, non-random citation impact. IF rankings are thus misleading unless one compares like-sized journals or adjusts for these size effects. We propose the Φ index, a rescaled IF that accounts for size effects and can readily be generalized to account also for different citation practices across research fields. Our methodology applies to other citation averages used to compare research fields, university departments or countries in various types of rankings.
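The σ/√n prediction is simple to check by simulation. The sketch below draws "journals" of different sizes from one skewed citation population (all parameters invented) and shows the IF spread narrowing as journal size grows:

```python
# Simulating the CLT argument: small journals reach extreme IFs by chance.
import numpy as np

rng = np.random.default_rng(0)
# skewed citation-count population, roughly mimicking real citation data
population = rng.lognormal(mean=1.0, sigma=1.2, size=1_000_000)

for n in (50, 500, 5000):
    ifs = [rng.choice(population, size=n).mean() for _ in range(200)]
    print(f"n={n:5d}: IF range {min(ifs):.2f}-{max(ifs):.2f} "
          f"(expected spread ~ sigma/sqrt(n) = "
          f"{population.std() / np.sqrt(n):.2f})")
```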

5.
This study uses citation data and survey data for 55 library and information science journals to identify three factors underlying a set of 11 journal ranking metrics (six citation metrics and five stated preference metrics). The three factors—three composite rankings—represent (1) the citation impact of a typical article, (2) subjective reputation, and (3) the citation impact of the journal as a whole (all articles combined). Together, they account for 77% of the common variance within the set of 11 metrics. Older journals (those founded before 1953) and nonprofit journals tend to have high reputation scores relative to their citation impact. Unlike previous research, this investigation shows no clear evidence of a distinction between the journals of greatest importance to scholars and those of greatest importance to practitioners. Neither group's subjective journal rankings are closely related to citation impact.

6.
This paper explores a new indicator of journal citation impact, denoted source normalized impact per paper (SNIP). It measures a journal's contextual citation impact, taking into account characteristics of its properly defined subject field: the frequency at which authors cite other papers in their reference lists, the rapidity with which citation impact matures, and the extent to which the database used for the assessment covers the field's literature. It further develops Eugene Garfield's notion of a field's 'citation potential', defined as the average length of reference lists in a field, which determines the probability of being cited, and his argument that fair performance assessments must correct for differences between subject fields. A journal's subject field is defined as the set of papers citing that journal. SNIP is defined as the ratio of the journal's citation count per paper to the citation potential in its subject field. It aims to allow direct comparison of sources in different subject fields. Citation potential is shown to vary not only between journal subject categories – groupings of journals sharing a research field – or disciplines (e.g., journals in mathematics, engineering and the social sciences tend to have lower values than titles in the life sciences), but also between journals within the same subject category. For instance, basic journals tend to show higher citation potentials than applied or clinical journals, and journals covering emerging topics higher than periodicals in classical subjects or more general journals. SNIP corrects for such differences. Its strengths and limitations are critically discussed, and suggestions are made for further research. All empirical results are derived from Elsevier's Scopus.
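As a rough illustration of the SNIP ratio defined above – and only that; the production SNIP indicator handles database coverage and field delimitation more carefully – the following toy computation divides citations per paper by a citation potential proxied by the mean reference-list length of the citing papers:

```python
# Toy SNIP-style normalization: raw impact divided by field citation potential.
def snip(citations_per_paper, citing_reference_lengths):
    """Citation potential proxied by the mean reference-list length of
    the papers that cite the journal (a simplification)."""
    potential = sum(citing_reference_lengths) / len(citing_reference_lengths)
    return citations_per_paper / potential

# a math journal: few citations, but short reference lists in its field
print(f"{snip(2.0, [12, 15, 10, 18]):.3f}")   # ~0.145
# a life-science journal: more citations, but much longer reference lists
print(f"{snip(6.0, [45, 60, 50, 55]):.3f}")   # ~0.114
```

Despite receiving three times as many citations per paper, the second journal ends up with the lower normalized score, which is exactly the field correction the indicator is designed to make.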

7.
8.
Research was undertaken to examine what correlation, if any, exists between the h-index and rankings by peer assessment, and between the 2008 UK RAE rankings and the collective h-index of submitting departments. About 100 international scholars in Library and Information Science were ranked by their peers on the quality of their work. These rankings were correlated with the h and g scores the scholars had achieved, and the results showed a correlation between the median peer rankings and the indexes. The 2008 RAE grade point averages (GPA) achieved by departments from three UoAs – Anthropology, Library and Information Management, and Pharmacy – were compared with their collective h and g index scores. Results were mixed: there was a strong correlation between pharmacy departments' GPAs and index scores, a weaker one for library and information management, and negative, non-significant results for anthropology. Taken together, the findings indicate that agreement between individuals' peer-assessment rankings and their h-index or its variants was generally good. For the RAE 2008, correlations between GPA and successive versions of the h-index varied in strength, except for anthropology, where, it is suggested, detailed cited-reference searches must be undertaken to maximise citation counts.

9.
This paper studies the correlations between peer review and citation indicators when evaluating research quality in library and information science (LIS). Forty-two LIS experts judged, on a 5-point scale, the quality of research published by 101 scholars; the median rankings resulting from these judgments were then correlated with h-, g- and H-index values computed using three different sources of citation data: Web of Science (WoS), Scopus and Google Scholar (GS). The two variants of the basic h-index correlated more strongly with peer judgment than did the h-index itself; citation data from Scopus correlated more strongly with the expert judgments than data from GS, which in turn correlated more strongly than data from WoS; correlations from a carefully cleaned version of the GS data were little different from those obtained using swiftly gathered GS data; the indices from the three citation databases produced broadly similar rankings of the LIS academics; GS disadvantaged researchers in bibliometrics compared with the other two citation databases, while WoS disadvantaged researchers in the more technical aspects of information retrieval; and experts from the UK and other European countries rated UK academics more highly than did experts from the USA.
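Studies of this kind typically reduce to rank correlations between expert judgments and index values. A minimal sketch with invented ratings and h-indexes (the study's actual data are not reproduced here):

```python
# Rank-correlating peer judgments with h-indexes from two sources (toy data).
from scipy.stats import spearmanr

peer_median = [4.5, 4.0, 3.5, 3.0, 2.5, 2.0]   # expert ratings, 5-point scale
h_scopus    = [22, 18, 19, 12, 9, 5]
h_wos       = [15, 16, 10, 11, 6, 7]

for name, h in (("Scopus", h_scopus), ("WoS", h_wos)):
    rho, p = spearmanr(peer_median, h)
    print(f"{name}: rho = {rho:.2f} (p = {p:.3f})")
```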

10.
This paper introduces the Hirsch spectrum (h-spectrum) for analyzing the academic reputation of a scientific journal. The h-spectrum is a novel tool based on the Hirsch (h) index and is easy to construct: for a specific journal in a specific interval of time, the h-spectrum is defined as the distribution of the h-indexes of the authors of the journal's articles. This tool makes it possible to define a reference profile of the journal's typical author, to compare different journals within the same scientific field, and to provide a rough indication of a journal's prestige/reputation in the scientific community. An h-spectrum can be associated with every journal. Ten journals in the Quality Engineering/Quality Management field are analyzed in a preliminary investigation of the h-spectrum's characteristics.
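Because the h-spectrum is just the distribution of author h-indexes, it can be sketched in a few lines. The author names and citation counts below are invented:

```python
# Building a toy h-spectrum: the distribution of h-indexes over a
# journal's authors in some time window.
from collections import Counter

def h_index(citations):
    """Largest h such that the author has h papers with >= h citations."""
    ranked = sorted(citations, reverse=True)
    return sum(1 for i, c in enumerate(ranked, start=1) if c >= i)

authors = {"A": [25, 8, 5, 3, 0], "B": [12, 10, 2], "C": [4, 4, 4, 1]}
spectrum = Counter(h_index(c) for c in authors.values())
print(sorted(spectrum.items()))   # list of (h value, number of authors)
```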

11.
12.
Journal metrics are employed for the assessment of scientific scholarly journals from a general bibliometric perspective. In this context, the Thomson Reuters journal impact factors (JIFs) are the most widely used citation-based indicators. The 2-year journal impact factor (2-JIF) counts citations to one- and two-year-old articles, while the 5-year journal impact factor (5-JIF) counts citations to one- to five-year-old articles. Nevertheless, these indicators are not comparable among fields of science for two reasons: (i) each field has a different impact maturity time, and (ii) there are systematic differences in publication and citation behavior across disciplines. In fact, the 5-JIF first appeared in the Journal Citation Reports (JCR) in 2007 with the purpose of making impacts more comparable in fields in which impact matures slowly. However, there is no optimal fixed impact maturity time valid for all fields: in some of them two years performs well, whereas in others three or more years are necessary. There is therefore a problem when comparing a journal from a field in which impact matures slowly with a journal from a field in which impact matures rapidly. In this work, we propose the 2-year maximum journal impact factor (2M-JIF), a new impact indicator that considers the 2-year rolling citation time window of maximum impact instead of the previous 2-year time window. Finally, an empirical application comparing 2-JIF, 5-JIF, and 2M-JIF shows that the maximum rolling target window reduces the between-group variance with respect to the within-group variance in a random sample of about six hundred journals from eight different fields.
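A minimal sketch of the rolling-maximum idea with toy data (the paper's exact window bookkeeping may differ): compute every 2-year window over a short lookback and keep the best one. By construction, 2M-JIF is never below the classic 2-JIF:

```python
# Classic 2-JIF vs. a rolling-maximum 2-year window (toy data).
def two_year_if(cites_to, items, y1, y2):
    return (cites_to[y1] + cites_to[y2]) / (items[y1] + items[y2])

def max_rolling_2jif(cites_to, items, census_year, lookback=5):
    """Slide the 2-year window over the recent past, keep the maximum."""
    windows = [(census_year - k - 1, census_year - k - 2)
               for k in range(lookback - 1)]
    return max(two_year_if(cites_to, items, a, b) for a, b in windows)

# cites_to[y] = citations received in 2018 to items published in year y
cites_to = {2014: 40, 2015: 90, 2016: 130, 2017: 70}
items = {2014: 50, 2015: 50, 2016: 50, 2017: 50}
print(two_year_if(cites_to, items, 2017, 2016))          # classic 2-JIF: 2.0
print(max_rolling_2jif(cites_to, items, 2018, lookback=4))  # 2.2 (2016+2015)
```

For a slow-maturing field the maximum window sits further back in time, which is how the indicator absorbs differences in impact maturity.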

13.
The use of field-normalized citation scores is a bibliometric standard. Different methods for field normalization are in use, but the choice of field-classification system also determines the resulting field-normalized citation scores. Using Web of Science data, we calculated field-normalized citation scores using the same formula but different field-classification systems, to answer the question of whether the resulting scores are different or similar. Six field-classification systems were used: three based on citation relations, one on semantic similarity scores (i.e., a topical relatedness measure), one on journal sets, and one on intellectual classifications. The systems based on journal sets and intellectual classifications agree at least at a moderate level. Two of the three systems based on citation relations also agree at least at a moderate level. Larger differences were observed for the third citation-relation-based system and for the system based on semantic similarity scores. The main policy implication is that normalized citation impact scores, or rankings based on them, should not be compared without deeper knowledge of the classification systems used to derive these values or rankings.
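The abstract's point can be reproduced in miniature: the same normalization formula (citations divided by the field's mean citation rate) gives different scores when papers are assigned to fields by different systems. All papers, field labels and counts below are invented:

```python
# Same normalization formula, two field-classification systems (toy data).
import statistics

papers = [
    {"id": 1, "cites": 30, "journal_field": "oncology", "cluster_field": "genomics"},
    {"id": 2, "cites": 10, "journal_field": "oncology", "cluster_field": "oncology"},
    {"id": 3, "cites": 5,  "journal_field": "genomics", "cluster_field": "genomics"},
]

def normalized(papers, field_key):
    """Citations divided by the mean citation rate of the paper's field."""
    by_field = {}
    for p in papers:
        by_field.setdefault(p[field_key], []).append(p["cites"])
    means = {f: statistics.mean(v) for f, v in by_field.items()}
    return {p["id"]: round(p["cites"] / means[p[field_key]], 2) for p in papers}

print(normalized(papers, "journal_field"))   # {1: 1.5, 2: 0.5, 3: 1.0}
print(normalized(papers, "cluster_field"))   # {1: 1.71, 2: 1.0, 3: 0.29}
```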

14.
[Purpose/Significance] To explore different methods of evaluating academic journals in order to obtain more effective, fair and valuable information, and to advance the theory and practice of journal evaluation. [Method/Process] The free disposal hull (FDH) efficiency-evaluation model from economics is introduced into bibliometrics. The traditional FDH model is adjusted to construct a new journal-evaluation method: seven indicators are selected to evaluate 41 library and information science journals, and the results are correlated with other journal rankings. [Result/Conclusion] The evaluation results of the FDH-based method are more objective and correlate significantly with other journal rankings. The method fuses the journal-evaluation indicators: it not only computes a score for each evaluated journal but also provides additional valuable information to support decision-making by journal departments. For example, the FDH model can identify an evaluated journal's "benchmark" and dominating journals; by comparing itself with journals whose indicator profiles are close to its own, a journal can identify its own weaknesses and gaps, enabling continuous improvement and eventually surpassing its peers.
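The defining feature of FDH – no convex combinations of units, only dominance by actually observed units – makes a bare-bones version easy to sketch. The journals and indicator values below are hypothetical, and this omits the paper's adjustments to the traditional model:

```python
# FDH-style dominance check (toy data, single unit input assumed): a journal
# is FDH-inefficient if another journal is at least as good on every
# indicator and strictly better on one; that journal is its benchmark.
import numpy as np

names = ["J1", "J2", "J3", "J4"]
Y = np.array([            # rows: journals, cols: indicators (higher is better)
    [3.0, 120, 15],
    [2.5, 140, 15],
    [3.5, 100, 10],
    [2.0,  90,  8],
])

for i, name in enumerate(names):
    dominators = [names[j] for j in range(len(names))
                  if j != i and np.all(Y[j] >= Y[i]) and np.any(Y[j] > Y[i])]
    status = "efficient" if not dominators else f"dominated by {dominators}"
    print(f"{name}: {status}")   # J4 is dominated by J1, J2 and J3
```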

15.
OBJECTIVE: To quantify the impact of Pakistani medical journals using the principles of citation analysis. METHODS: References of articles published in 2006 in three selected Pakistani medical journals were collected and examined. The number of citations for each Pakistani medical journal was totalled. The first ranking of journals was based on the total number of citations, the second on the 2006 impact factor, and the third on the 5-year impact factor. Self-citations were excluded from all three rankings. RESULTS: A total of 9079 citations in 567 articles were examined. Forty-nine separate Pakistani medical journals were cited. The Journal of the Pakistan Medical Association remains at the top in all three rankings, while the Journal of the College of Physicians and Surgeons-Pakistan attains second position in the ranking based on the total number of citations. The Pakistan Journal of Medical Sciences moves to second position in the ranking based on the 2006 impact factor, and the Journal of Ayub Medical College, Abbottabad moves to second position in the ranking based on the 5-year impact factor. CONCLUSION: This study examined the citation pattern of Pakistani medical journals. The impact factor, despite its limitations, is a valid indicator of journal quality.

16.
Wenli Gao, The Serials Librarian, 2016, 70(1-4): 121-127
This article outlines a methodology for generating a list of local core journal titles through citation analysis and details the process of retrieving and downloading data from Scopus. It analyzes correlations among citation counts, journal rankings, and journal usage. The results reveal significant correlations between journal rankings and journal usage; no correlation with citation counts was found. Limitations and implications for collection development and outreach are also discussed.

17.
Journal of Informetrics, 2019, 13(2): 515-539
Counts of papers, counts of citations, and the h-index are the simplest bibliometric indices of the impact of research. We discuss some improvements. First, we replace citations with individual citations, fractionally shared among co-authors, to take into account that different papers and different fields have widely different average numbers of co-authors and of references. Next, we improve on citation counting by applying the PageRank algorithm to citations among papers. Because citations are time-ordered, this reduces to a weighted counting of citation descendants that we call PaperRank. We compute a related AuthorRank by applying the PageRank algorithm to citations among authors. These metrics quantify the impact of an author or paper taking into account the impact of those who cite it. Finally, we show how self- and circular citations can be eliminated by defining a closed market of Citation-coins. We apply these metrics to the InSpire database, which covers fundamental physics, presenting results for papers, authors, journals, institutes, towns and countries, both all-time and in recent time periods.
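Since papers cite only earlier papers, the citation graph is acyclic and a plain PageRank power iteration converges quickly. The sketch below (toy citation graph, standard damping factor 0.85; not the paper's InSpire pipeline) shows the mechanics:

```python
# PageRank over a toy time-ordered citation graph.
import numpy as np

papers = ["P1", "P2", "P3", "P4"]
cites = {"P2": ["P1"], "P3": ["P1", "P2"], "P4": ["P2", "P3"]}  # newer -> older

n, d = len(papers), 0.85
idx = {p: i for i, p in enumerate(papers)}
M = np.zeros((n, n))                       # column-stochastic link matrix
for src, targets in cites.items():
    for t in targets:
        M[idx[t], idx[src]] = 1 / len(targets)
for p in papers:                           # papers citing nothing (dangling)
    if p not in cites:                     # spread their weight evenly
        M[:, idx[p]] = 1 / n

r = np.full(n, 1 / n)
for _ in range(100):                       # power iteration
    r = (1 - d) / n + d * M @ r
print(dict(zip(papers, r.round(3))))       # P1, cited by well-cited papers, tops
```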

18.
Examining a comprehensive set of papers (n = 1837) that were accepted for publication by the journal Angewandte Chemie International Edition (one of the prime chemistry journals in the world) or rejected by the journal and then published elsewhere, this study tested the extent to which the freely available database Google Scholar (GS) can be expected to yield valid citation counts in the field of chemistry. Citation analyses for the set of papers based on three fee-based databases – Science Citation Index, Scopus, and Chemical Abstracts – were compared to an analysis based on GS data. Whereas the analyses using citations returned by the three fee-based databases showed very similar results, the results of the analysis using GS citation data differed greatly from the findings based on the fee-based databases. Our study therefore supports, on the one hand, the convergent validity of citation analyses based on data from the fee-based databases and, on the other hand, the lack of convergent validity of citation analyses based on GS data.

19.
20.
俞立平, 张矿伟, 《图书馆杂志》 (Library Journal), 2021(1): 93-103, 106
Academic journal impact is currently evaluated mostly from a static perspective; research from a dynamic perspective is comparatively scarce. Drawing on the principle of Newton's second law, this article explores indicators and methods for evaluating the dynamic impact of academic journals from three angles – journal impact velocity, impact acceleration, and impact strength – and proposes the concept of journal impact strength. Taking CSSCI economics journals as the research object and using the CNKI citation database, the study applies correlation analysis, regression analysis, and Kappa consistency tests. The results show that journal impact strength can serve as a new journal-evaluation indicator; it discriminates well between journals and is positively correlated with the h-index and with mean citations per paper. The article recommends evaluating journals jointly by impact velocity, impact acceleration, and impact strength.
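The abstract does not give the impact-strength formula, but the Newtonian analogy suggests first and second differences of a journal's yearly impact, with acceleration weighted by a "mass". The sketch below encodes that reading with invented numbers; the mass proxy (publication volume) is an assumption, not the paper's definition:

```python
# A hedged reading of the F = m*a analogy for journal impact (toy series):
# yearly impact as position, its first difference as velocity, its second
# difference as acceleration, acceleration times "mass" as strength.
yearly_cpp = {2016: 1.8, 2017: 2.1, 2018: 2.6, 2019: 3.4}  # citations per paper
papers_per_year = 150                                       # "mass" proxy (assumption)

years = sorted(yearly_cpp)
velocity = {y: yearly_cpp[y] - yearly_cpp[y - 1] for y in years[1:]}
acceleration = {y: velocity[y] - velocity[y - 1] for y in years[2:]}
strength = {y: papers_per_year * a for y, a in acceleration.items()}

print({y: round(v, 2) for y, v in velocity.items()})       # {2017: 0.3, 2018: 0.5, 2019: 0.8}
print({y: round(a, 2) for y, a in acceleration.items()})   # {2018: 0.2, 2019: 0.3}
print({y: round(f, 1) for y, f in strength.items()})       # {2018: 30.0, 2019: 45.0}
```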
