Similar Literature
 20 similar documents found; search time: 304 ms
1.
This study uses citation data and survey data for 55 library and information science journals to identify three factors underlying a set of 11 journal ranking metrics (six citation metrics and five stated preference metrics). The three factors—three composite rankings—represent (1) the citation impact of a typical article, (2) subjective reputation, and (3) the citation impact of the journal as a whole (all articles combined). Together, they account for 77% of the common variance within the set of 11 metrics. Older journals (those founded before 1953) and nonprofit journals tend to have high reputation scores relative to their citation impact. Unlike previous research, this investigation shows no clear evidence of a distinction between the journals of greatest importance to scholars and those of greatest importance to practitioners. Neither group's subjective journal rankings are closely related to citation impact.

2.
A journal's academic influence, its acceptance standards for manuscripts, and the academic influence of the articles it publishes reinforce one another; consequently, citations from higher-impact journals carry greater evaluative weight. Because authors cite selectively and choose where to publish selectively, journals of lower academic influence are cited less often by higher-impact journals. A cited journal's academic influence can therefore be evaluated by jointly examining the academic influence of the citing journals that make up its citation image and the frequency with which they cite it. Taking the 2010 citation images of the multidisciplinary journals Nature and Science as examples, and using the journal impact factor as a preliminary estimate of academic influence, this paper proposes a method that weights each citing journal's impact factor by its citing frequency, so that journals can be evaluated through a quantified citation image.
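A minimal sketch of the weighting scheme described above (the function name and data layout are assumptions, not the paper's actual implementation): each citing journal's impact factor is weighted by the frequency with which it cites the journal under evaluation.

```python
def weighted_citation_score(citing_profile):
    """Score a cited journal from its citation image.

    citing_profile: list of (impact_factor, citing_frequency) pairs,
    one per citing journal. Returns the citing-frequency-weighted
    mean impact factor of the citing journals.
    """
    total_citations = sum(freq for _, freq in citing_profile)
    if total_citations == 0:
        return 0.0
    return sum(jif * freq for jif, freq in citing_profile) / total_citations

# Hypothetical citation image: three citing journals with their
# impact factors and the number of times each cites our journal.
profile = [(36.1, 120), (31.4, 80), (2.5, 50)]
score = weighted_citation_score(profile)
```

The effect matches the abstract's premise: citations coming from high-impact journals pull the score up more than the same number of citations from low-impact journals.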

3.
There are many indicators of journal quality and prestige. Although acceptance rates are discussed anecdotally, there has been little systematic exploration of the relationship between acceptance rates and other journal measures. This study examines the variability of acceptance rates for a set of 5094 journals in five disciplines and the relationship between acceptance rates and JCR measures for 1301 journals. The results show statistically significant differences in acceptance rates by discipline, country affiliation of the editor, and number of reviewers per article. Negative correlations are found between acceptance rates and citation-based indicators. Positive correlations are found with journal age. These relationships are most pronounced in the most selective journals and vary by discipline. Open access journals were found to have statistically significantly higher acceptance rates than non-open access journals. Implications in light of changes in the scholarly communication system are discussed.

4.
Citation based approaches, such as the impact factor and h-index, have been used to measure the influence or impact of journals for journal rankings. A survey of the related literature for different disciplines shows that the level of correlation between these citation based approaches is domain dependent. We analyze the correlation between the impact factors and h-indices of the top ranked computer science journals for five different subjects. Our results show that the correlation between these citation based approaches is very low. Since using a different approach can result in different journal rankings, we further combine the different results and then re-rank the journals using a combination method. These new ranking results can be used as a reference for researchers to choose their publication outlets.
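The abstract does not specify the combination method used; as an illustrative sketch under that caveat, one common choice is to re-rank journals by their mean position across the individual rankings (the journal names below are placeholders, not results from the study):

```python
def combine_rankings(rankings):
    """Combine several journal rankings by mean rank position.

    rankings: list of lists, each an ordering of the same journals
    from best to worst (e.g. one by impact factor, one by h-index).
    Returns the journals re-ranked by average position.
    """
    positions = {}
    for ranking in rankings:
        for pos, journal in enumerate(ranking):
            positions.setdefault(journal, []).append(pos)
    return sorted(positions, key=lambda j: sum(positions[j]) / len(positions[j]))

# Two disagreeing rankings of the same three journals.
by_impact_factor = ["TODS", "TKDE", "VLDBJ"]
by_h_index = ["TKDE", "VLDBJ", "TODS"]
combined = combine_rankings([by_impact_factor, by_h_index])
```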

5.
Do academic journals favor authors who share their institutional affiliation? To answer this question we examine citation counts, as a proxy for paper quality, for articles published in four leading international relations journals during the years 2000–2015. We compare citation counts for articles written by “in-group members” (authors affiliated with the journal’s publishing institution) versus “out-group members” (authors not affiliated with that institution). Articles written by in-group authors received 18% to 49% fewer Web of Science citations when published in their home journal (International Security or World Politics) vs. an unaffiliated journal, compared to out-group authors. These results are mainly driven by authors who received their PhDs from Harvard or MIT. The findings show evidence of a bias within some journals towards publishing papers by faculty from their home institution, at the expense of paper quality.

6.
Rankings of journals and rankings of scientists are usually discussed separately. We argue that a consistent approach to both rankings is desirable because both the quality of a journal and the quality of a scientist depend on the papers they publish. We present a pair of consistent rankings (impact factor for the journals and total number of citations for the authors) and we provide an axiomatic characterization thereof.

7.
Publishing English-language articles in Chinese medical academic journals: practice and reflections   Cited: 1 (self-citations: 0; citations by others: 1)
Wang Li, Li Xinxin, Liu Li. Acta Editologica (《编辑学报》), 2005, 17(4): 284-286
It is common for Chinese medical academic journals to publish English-language articles. We analyzed 112 English-language articles published in the Journal of Jilin University (Medicine Edition) with respect to publication date, column, author distribution, and citations, and compared the number and citation counts of English-language articles published in three other authoritative medical journals. Although the English-language articles published in Chinese journals have considerable academic value, their citation rate in domestic journals is markedly lower than that of Chinese-language articles published in the same period, and SCI-E shows zero citations for them. We conclude that publishing English-language articles in Chinese medical academic journals neither broadens their dissemination nor advances the journals' internationalization, and it wastes valuable information resources. We therefore suggest that Chinese-language medical journals refrain from publishing English-language articles, and that within China, English-language articles should preferentially be submitted to English-language journals.

8.
Wenli Gao. The Serials Librarian, 2016, 70(1-4): 121-127
This article outlines a methodology to generate a list of local core journal titles by doing a citation analysis and details the process for retrieving and downloading data from Scopus. It analyzes correlations among citation count, journal rankings, and journal usage. The results of this study reveal significant correlations between journal rankings and journal usage. No correlation with citation count has been found. Limitations and implications for collection development and outreach are also discussed.

9.
The journal impact factor (JIF) has been questioned considerably during its development in the past half-century because of its inconsistency with scholarly reputation evaluations of scientific journals. This paper proposes a publication delay adjusted impact factor (PDAIF) which takes publication delay into consideration to reduce the negative effect on the quality of the impact factor determination. Based on citation data collected from Journal Citation Reports and publication delay data extracted from the journals’ official websites, the PDAIFs for journals from business-related disciplines are calculated. The results show that PDAIF values are, on average, more than 50% higher than JIF results. Furthermore, journal ranking based on PDAIF shows very high consistency with reputation-based journal rankings. Moreover, based on a case study of journals published by ELSEVIER and INFORMS, we find that PDAIF will bring a greater impact factor increase for journals with longer publication delay because of reducing that negative influence. Finally, insightful and practical suggestions to shorten the publication delay are provided.
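The abstract does not give the PDAIF formula, so the adjustment below is a purely illustrative model (the function name and the exposure-scaling idea are assumptions): citations are scaled up by the fraction of the citation window lost to publication delay, which reproduces the qualitative behavior reported, namely that journals with longer delays gain more over their plain JIF.

```python
def pdaif(citations, items, delay_months, window_months=24):
    """Illustrative delay-adjusted impact factor (NOT the paper's formula).

    citations:    citations received in the census year to items
                  published in the preceding window.
    items:        number of citable items in that window.
    delay_months: average submission-to-publication delay.
    Divides the plain citations-per-item ratio by the share of the
    window during which papers were actually available, so journals
    with longer delays are compensated rather than penalized.
    """
    jif = citations / items
    exposure = max(window_months - delay_months, 1) / window_months
    return jif / exposure

# With a 12-month delay, half the two-year window is lost, so the
# adjusted value is twice the plain JIF; with no delay they coincide.
```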

10.
11.
12.
13.
14.
Tong Jianguo, Yan Shuai, Chen Haoyuan. Acta Editologica (《编辑学报》), 2013, 25(3): 208-210
University natural science journals form a distinctive group among China's scientific and technical periodicals. Using statistical data, we demonstrate the good reputation these journals enjoy. A comparison with specialized scientific journals, based on citation data and website download data, suggests that their academic quality is on a par with that of Chinese scientific journals overall. We call for journal reform that follows the operating characteristics of this category of journals, so as to promote their healthy development.

15.
Citation averages, and Impact Factors (IFs) in particular, are sensitive to sample size. Here, we apply the Central Limit Theorem to IFs to understand their scale-dependent behavior. For a journal of n randomly selected papers from a population of all papers, we expect from the Theorem that its IF fluctuates around the population average μ, and spans a range of values proportional to σ/√n, where σ² is the variance of the population's citation distribution. The 1/√n dependence has profound implications for IF rankings: The larger a journal, the narrower the range around μ where its IF lies. IF rankings therefore allocate an unfair advantage to smaller journals in the high IF ranks, and to larger journals in the low IF ranks. As a result, we expect a scale-dependent stratification of journals in IF rankings, whereby small journals occupy the top, middle, and bottom ranks; mid-sized journals occupy the middle ranks; and very large journals have IFs that asymptotically approach μ. We obtain qualitative and quantitative confirmation of these predictions by analyzing (i) the complete set of 166,498 IF & journal-size data pairs in the 1997–2016 Journal Citation Reports of Clarivate Analytics, (ii) the top-cited portion of 276,000 physics papers published in 2014–2015, and (iii) the citation distributions of an arbitrarily sampled list of physics journals. We conclude that the Central Limit Theorem is a good predictor of the IF range of actual journals, while sustained deviations from its predictions are a mark of true, non-random, citation impact. IF rankings are thus misleading unless one compares like-sized journals or adjusts for these effects. We propose the Φ index, a rescaled IF that accounts for size effects, and which can be readily generalized to account also for different citation practices across research fields.
Our methodology applies to other citation averages that are used to compare research fields, university departments or countries in various types of rankings.
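The Theorem's prediction for the IF range can be sketched directly (a minimal illustration; the μ, σ, and k values below are assumed, not taken from the paper):

```python
import math

def if_fluctuation_band(mu, sigma, n, k=2.0):
    """Expected IF band for a journal of n papers drawn at random
    from a population with mean citation rate mu and standard
    deviation sigma: roughly mu +/- k * sigma / sqrt(n)."""
    half_width = k * sigma / math.sqrt(n)
    return (mu - half_width, mu + half_width)

# A 100-paper journal fluctuates in a band 10x wider than a
# 10,000-paper journal from the same citation population, which
# is why only small journals can reach the extreme IF ranks.
small = if_fluctuation_band(mu=5.0, sigma=10.0, n=100)
large = if_fluctuation_band(mu=5.0, sigma=10.0, n=10_000)
```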

16.
This paper explores a new indicator of journal citation impact, denoted as source normalized impact per paper (SNIP). It measures a journal's contextual citation impact, taking into account characteristics of its properly defined subject field, especially the frequency at which authors cite other papers in their reference lists, the rapidity of maturing of citation impact, and the extent to which a database used for the assessment covers the field's literature. It further develops Eugene Garfield's notions of a field's ‘citation potential’ defined as the average length of references lists in a field and determining the probability of being cited, and the need in fair performance assessments to correct for differences between subject fields. A journal's subject field is defined as the set of papers citing that journal. SNIP is defined as the ratio of the journal's citation count per paper and the citation potential in its subject field. It aims to allow direct comparison of sources in different subject fields. Citation potential is shown to vary not only between journal subject categories – groupings of journals sharing a research field – or disciplines (e.g., journals in mathematics, engineering and social sciences tend to have lower values than titles in life sciences), but also between journals within the same subject category. For instance, basic journals tend to show higher citation potentials than applied or clinical journals, and journals covering emerging topics higher than periodicals in classical subjects or more general journals. SNIP corrects for such differences. Its strengths and limitations are critically discussed, and suggestions are made for further research. All empirical results are derived from Elsevier's Scopus.
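A simplified sketch of the SNIP ratio as defined above (the real indicator additionally normalizes citation potential against a database-wide median; that step and the function names here are omitted/assumed):

```python
def snip(cites_per_paper, citing_ref_list_lengths):
    """Simplified source normalized impact per paper.

    cites_per_paper: the journal's raw citations per paper.
    citing_ref_list_lengths: reference-list lengths of the papers
    that cite the journal; their mean approximates the field's
    citation potential in Garfield's sense.
    """
    citation_potential = sum(citing_ref_list_lengths) / len(citing_ref_list_lengths)
    return cites_per_paper / citation_potential

# Two journals with equal raw impact: the one cited from a field
# with short reference lists (low citation potential) gets the
# higher normalized score.
math_journal = snip(4.0, [10, 12, 14])   # citation potential 12
bio_journal = snip(4.0, [40, 44, 36])    # citation potential 40
```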

17.
Although there are at least six dimensions of journal quality, Beall's List identifies predatory Open Access journals based almost entirely on their adherence to procedural norms. The journals identified as predatory by one standard may be regarded as legitimate by other standards. This study examines the scholarly impact of the 58 accounting journals on Beall's List, calculating citations per article and estimating CiteScore percentile using Google Scholar data for more than 13,000 articles published from 2015 through 2018. Most Beall's List accounting journals have only modest citation impact, with an average estimated CiteScore in the 11th percentile among Scopus accounting journals. Some have a substantially greater impact, however. Six journals have estimated CiteScores at or above the 25th percentile, and two have scores at or above the 30th percentile. Moreover, there is considerable variation in citation impact among the articles within each journal, and high-impact articles (cited up to several hundred times) have appeared even in some of the Beall's List accounting journals with low citation rates. Further research is needed to determine how well the citing journals are integrated into the disciplinary citation network—whether the citing journals are themselves reputable or not.

18.
We investigated the self‐citation rates of 884 Chinese biomedical journals, including 185 general medicine journals, 96 preventive medicine journals, 103 Chinese traditional medicine journals, 66 basic medicine journals, 370 clinical medicine journals, and 64 pharmaceutical journals. The average self‐citation rates of these journals for the years 2005–2007 were 0.113 ± 0.124, 0.099 ± 0.098 and 0.092 ± 0.089, respectively, i.e. a downward trend year by year. The upper limits of normal values of self‐citation rates for the same period were 0.316, 0.260 and 0.238, respectively. A significant difference was found in self‐citation rate between biomedical journals of different subjects. 52 Chinese biomedical journals had no self‐citation in 2007. The total citation frequency and impact factor of these 52 biomedical journals were 263 and 0.206, respectively, which were very much lower than the average levels of all Chinese biomedical journals in 2007. A self‐citation rate higher than the upper limit was considered as excessive self‐citation: 62 (7.01%), 68 (7.69%) and 66 (7.47%) biomedical journals showed excessive self‐citation in the years 2005–2007, respectively. However, a certain amount of self‐citation is reasonable and necessary.

19.
20.
In the past, recursive algorithms, such as PageRank originally conceived for the Web, have been successfully used to rank nodes in the citation networks of papers, authors, or journals. They have proved to determine prestige and not popularity, unlike citation counts. However, bibliographic networks, in contrast to the Web, have some specific features that enable the assigning of different weights to citations, thus adding more information to the process of finding prominence. For example, a citation between two authors may be weighed according to whether and when those two authors collaborated with each other, which is information that can be found in the co-authorship network. In this study, we define a couple of PageRank modifications that weigh citations between authors differently based on the information from the co-authorship graph. In addition, we put emphasis on the time of publications and citations. We test our algorithms on the Web of Science data of computer science journal articles and determine the most prominent computer scientists in the 10-year period of 1996–2005. Besides a correlation analysis, we also compare our rankings to the lists of ACM A. M. Turing Award and ACM SIGMOD E. F. Codd Innovations Award winners and find the new time-aware methods to outperform standard PageRank and its time-unaware weighted variants.
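A plain weighted PageRank over an author citation graph is the natural starting point for such modifications (a minimal sketch; the co-authorship- or time-based weights themselves are left to the caller, and the author names below are hypothetical):

```python
def weighted_pagerank(nodes, edges, damping=0.85, iters=50):
    """Weighted PageRank on a directed citation graph.

    nodes: list of author identifiers.
    edges: dict mapping (citing, cited) -> weight; the weights are
    where co-authorship or citation-age information would be
    encoded, as in the study above.
    """
    rank = {n: 1.0 / len(nodes) for n in nodes}
    out_weight = {n: 0.0 for n in nodes}
    for (src, _), w in edges.items():
        out_weight[src] += w
    for _ in range(iters):
        new = {n: (1.0 - damping) / len(nodes) for n in nodes}
        for (src, dst), w in edges.items():
            if out_weight[src] > 0:
                new[dst] += damping * rank[src] * w / out_weight[src]
        # Authors who cite nobody spread their rank uniformly.
        dangling = sum(rank[n] for n in nodes if out_weight[n] == 0)
        for n in nodes:
            new[n] += damping * dangling / len(nodes)
        rank = new
    return rank

# Hypothetical three-author graph: alice and carol cite bob,
# bob cites carol; alice's citation carries double weight.
nodes = ["alice", "bob", "carol"]
edges = {("alice", "bob"): 2.0, ("carol", "bob"): 1.0, ("bob", "carol"): 1.0}
rank = weighted_pagerank(nodes, edges)
```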
