Similar Articles (20 results)
1.
The journal impact factor (JIF) reported in journal citation reports has been used to represent the influence and prestige of a journal. Whereas the consideration of the stochastic nature of a statistic is a prerequisite for statistical inference, the estimation of JIF uncertainty is necessary yet unavailable for comparing the impact among journals. Using journals in the Database of Research in Science Education (DoRISE), the current study proposes bootstrap methods to estimate the JIF variability. The paper also provides a comprehensive exposition of the sources of JIF variability. The collections of articles in the year of interest and in the preceding years both contribute to JIF variability. In addition, the variability estimate differs depending on the way a database selects its journals for inclusion. In the bootstrap process, the nested structure of articles in a journal was accounted for to ensure that each bootstrap replication reflects the actual citation characteristics of articles in the journal. In conclusion, the proposed point and interval estimates of the JIF statistic are obtained and more informative inferences on the impact of journals can be drawn.
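To illustrate the bootstrap idea described in this abstract, the following sketch resamples a journal's articles (with their citation counts) with replacement and recomputes the JIF for each replicate; the percentile interval of the replicates then serves as an interval estimate. The input data and the simple mean-citations JIF formula are illustrative assumptions, not the DoRISE data or the exact procedure used in the study.

```python
import numpy as np

def bootstrap_jif(citations_per_article, n_boot=10_000, alpha=0.05, seed=0):
    """Percentile bootstrap interval for a journal impact factor.

    citations_per_article: citations received in the census year by each
    article the journal published in the two preceding years (hypothetical
    input); the point estimate of the JIF is their mean.
    """
    rng = np.random.default_rng(seed)
    cites = np.asarray(citations_per_article, dtype=float)
    n = len(cites)
    # Resample whole articles with replacement so each replicate keeps the
    # journal's article-level citation characteristics.
    idx = rng.integers(0, n, size=(n_boot, n))
    jif_reps = cites[idx].mean(axis=1)
    point = cites.mean()
    lo, hi = np.percentile(jif_reps, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return point, (lo, hi)

# Hypothetical citation counts for one journal's recent articles.
point, ci = bootstrap_jif([0, 1, 1, 2, 0, 5, 3, 0, 1, 7, 2, 0])
print(f"JIF = {point:.2f}, 95% bootstrap CI = ({ci[0]:.2f}, {ci[1]:.2f})")
```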

2.
Journal weighted impact factor: A proposal (cited 3 times: 0 self-citations, 3 external citations)
The impact factor of a journal reflects the frequency with which the journal's articles are cited. It is the best available measure of journal quality. In the calculation of the impact factor, we simply count the number of citations, no matter how prestigious the citing journal is. We think that the impact factor, as a measure of journal quality, may be improved if its calculation not only takes into account the number of citations but also incorporates a factor reflecting the prestige of the citing journals relative to the cited journal. In the calculation of this proposed "weighted impact factor," each citation carries a coefficient (weight) whose value is 1 if the citing journal is as prestigious as the cited journal, greater than 1 if the citing journal is more prestigious than the cited journal, and less than 1 if the citing journal has a lower standing than the cited journal. In this way, journals receiving many citations from prestigious journals are considered prestigious themselves, whereas those cited mainly by low-status journals gain little credit. By considering both the number of citations and the prestige of the citing journals, we expect the weighted impact factor to be a better scientometric measure of journal quality.
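A minimal sketch of such a weighted count is given below. The abstract only constrains the weight to be 1, above 1, or below 1 depending on the relative prestige of the citing journal; using the ratio of the citing journal's JIF to the cited journal's JIF is one assumption that satisfies this, not necessarily the authors' exact formula.

```python
def weighted_impact_factor(citing_journal_jifs, cited_jif, n_citable_items):
    """Hypothetical weighted impact factor.

    citing_journal_jifs: JIF of the journal issuing each individual citation.
    cited_jif: JIF of the journal being evaluated.
    Each citation is weighted by (citing JIF / cited JIF), which is 1 for an
    equally prestigious citer, >1 for a more prestigious one and <1 for a
    less prestigious one -- an assumed weight function for illustration.
    """
    weighted_cites = sum(jif / cited_jif for jif in citing_journal_jifs)
    return weighted_cites / n_citable_items

# A journal with JIF 2.0 and 50 citable items, cited by journals of varying prestige.
print(weighted_impact_factor([4.0, 2.0, 2.0, 0.5, 1.0], cited_jif=2.0, n_citable_items=50))
```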

3.
This study compares the two-year impact factor (JIF2), JIF2 without journal self-citations (JIF2_noJSC), the five-year impact factor (JIF5), the eigenfactor score and the article influence score (AIS), and investigates their relative changes over time. JIF2 increased faster than JIF5 overall. The relative change between JIF2 and JIF2_noJSC shows that the control exercised by the JCR over journal self-citation is effective to some extent. JIF5 is more discriminative than JIF2. The correlation between JIF5 and AIS is stronger than that between JIF5 and the eigenfactor score. The relative change in journal rank according to different indicators varies with the ratio of the indicators and can involve up to 60% of the journals in a subject category. There are discrepancies across subject categories in the average AIS and its change over time. By screening journals according to variations in the ratio of JIF2 to JIF5 within individual subject categories, we found that journals in the same subject category can have considerably different citation patterns. To provide a fair comparison of journals within individual subject categories, we argue that it is better to replace JIF2 with the ready-made JIF5 when ranking journals.

4.
Applicability of journal citation identity indicators in journal evaluation (cited 1 time: 1 self-citation, 0 external citations)
Taking 18 library and information science journals indexed in CSSCI as an example, this paper studies the references of all articles these journals published in 2009, using data retrieved from the CSSCI database, and statistically analyzes each journal's citation identity. The results show that journal citation identity indicators (number of references, references per article, proportion of English-language references, breadth of journals cited, self-citing rate, citing half-life, journal concentration factor, influence of the cited journals, and so on) are not clearly correlated with the quantitative and qualitative evaluation indicators of CSSCI source journals. However, these indicators can reflect the content characteristics and preferences of a journal's articles, the extent to which foreign scientific literature and literature from other disciplines are used, the journal's editorial positioning, and the development patterns of the discipline, and are therefore of some value for the comprehensive evaluation of journals.

5.
Journal evaluation based on journal citation image and journal citation identity (cited 2 times: 0 self-citations, 2 external citations)
This paper introduces the concepts of journal citation image and journal citation identity; revises the journal evaluation indicators selected by Bonnevie-Nebelong; classifies existing journal evaluation indicators from the perspectives of citation image and citation identity; proposes new citation identity indicators for journal evaluation: a new discipline diffusion indicator, a new discipline influence indicator, a new immediacy index, a new count of citing journals, and a new external citation rate; analyzes the significance of the new indicators for journal evaluation; and finally conducts an empirical analysis of three library and information science journals.

6.
The Journal Impact Factor (JIF) is linearly sensitive to self-citations because each self-citation adds to the numerator, whereas the denominator is not affected. Pinski and Narin's (1976) Influence Weights (IW) are not, or only marginally, sensitive to these outliers on the main diagonal of a citation matrix and thus provide an alternative to JIFs. Whereas JIFs are based on raw citation counts normalized by the number of publications in the previous two years, IWs are based on the eigenvectors of the matrix of aggregated journal-journal citations without reference to size: the cited and citing sides are normalized and combined by a matrix approach. Upon normalization, IWs emerge as a vector; after recursive multiplication of the normalized matrix, IWs can be considered a network measure of prestige among the journals in the (sub)graph under study. As a consequence, the self-citations are integrated at the field level and no longer disturb the analysis as outliers. In our opinion, this independence from the diagonal values is a very desirable property of a measure of quality or impact. As an example, we elaborate Price's (1981b) matrix of aggregated citations among eight biochemistry journals in 1977. Routines for the computation of IWs are made available at http://www.leydesdorff.net/iw.
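The eigenvector idea can be sketched as follows: normalize the citing side of the journal-journal citation matrix and extract the principal eigenvector by power iteration. This is a simplified sketch of the size-independent, matrix-based prestige measure the abstract describes, not the authors' exact recursive formula or the routines linked above; the example matrix is hypothetical.

```python
import numpy as np

def influence_weights(C, n_iter=1000, tol=1e-12):
    """Eigenvector-style prestige weights for a journal citation matrix.

    C[i, j] = citations from journal j to journal i.  Each column is divided
    by its sum (the citing side is normalized), so self-citations on the
    diagonal are folded into the field-level normalization rather than
    inflating any single journal.  The principal eigenvector is obtained by
    power iteration.
    """
    C = np.asarray(C, dtype=float)
    M = C / C.sum(axis=0, keepdims=True)          # column-stochastic matrix
    w = np.full(M.shape[0], 1.0 / M.shape[0])
    for _ in range(n_iter):
        w_next = M @ w
        w_next /= w_next.sum()
        if np.abs(w_next - w).sum() < tol:
            break
        w = w_next
    return w

# Hypothetical 3-journal citation matrix (rows = cited, columns = citing).
print(influence_weights([[10, 2, 1],
                         [ 4, 8, 3],
                         [ 1, 2, 6]]))
```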

7.
The journal impact factor is not comparable among fields of science and social science because of systematic differences in publication and citation behavior across disciplines. In this work, a source normalization of the journal impact factor is proposed. We use the aggregate impact factor of the citing journals as a measure of the citation potential in the journal's topic, and we employ this citation potential in the normalization of the journal impact factor to make it comparable between scientific fields. An empirical application comparing several impact indicators with our topic-normalized impact factor in a set of 224 journals from four different fields shows that our normalization, using the citation potential in the journal's topic, reduces the between-group variance relative to the within-group variance by a greater proportion than the other indicators analyzed. The effect of journal self-citations on the normalization process is also studied.
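One plausible way to implement the normalization this abstract describes is to divide a journal's impact factor by the citation-weighted aggregate impact factor of the journals that cite it, as in the sketch below. The exact aggregation used in the paper may differ; the function and its inputs are illustrative assumptions.

```python
def topic_normalized_jif(jif, citing_counts, citing_jifs):
    """Hypothetical source normalization of a journal impact factor.

    citing_counts[k]: citations received from citing journal k.
    citing_jifs[k]:   impact factor of citing journal k.
    The citation potential of the topic is approximated by the
    citation-weighted mean impact factor of the citing journals, and the
    JIF is divided by it.
    """
    total = sum(citing_counts)
    citation_potential = sum(c * f for c, f in zip(citing_counts, citing_jifs)) / total
    return jif / citation_potential

# A JIF of 3.0 in a topic where the citing journals average a JIF around 2.4.
print(topic_normalized_jif(3.0, citing_counts=[40, 30, 30], citing_jifs=[3.0, 2.0, 2.0]))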

8.
Journal metrics are employed for the assessment of scientific scholarly journals from a general bibliometric perspective. In this context, the Thomson Reuters journal impact factors (JIFs) are the most widely used citation-based indicators. The 2-year journal impact factor (2-JIF) counts citations to one- and two-year-old articles, while the 5-year journal impact factor (5-JIF) counts citations to articles between one and five years old. Nevertheless, these indicators are not comparable among fields of science for two reasons: (i) each field has a different impact maturity time, and (ii) there are systematic differences in publication and citation behavior across disciplines. In fact, the 5-JIF first appeared in the Journal Citation Reports (JCR) in 2007 with the purpose of making impacts more comparable in fields in which impact matures slowly. However, there is no fixed impact maturity time that is optimal for all fields. In some of them two years gives a good performance, whereas in others three or more years are necessary. Therefore, there is a problem when comparing a journal from a field in which impact matures slowly with a journal from a field in which impact matures rapidly. In this work, we propose the 2-year maximum journal impact factor (2M-JIF), a new impact indicator that considers the 2-year rolling citation time window of maximum impact instead of the previous 2-year time window. Finally, an empirical application comparing 2-JIF, 5-JIF, and 2M-JIF shows that the maximum rolling target window reduces the between-group variance relative to the within-group variance in a random sample of about six hundred journals from eight different fields.
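Under one reading of the indicator, the 2-year window is rolled over the last few publication years and the maximum 2-year impact factor is kept, as sketched below. The data structures, the span of candidate windows, and the exact definition are assumptions made for illustration.

```python
def two_year_max_jif(cites_by_pub_year, pubs_by_pub_year, census_year, span=5):
    """Sketch of a 2-year maximum impact factor (2M-JIF).

    cites_by_pub_year[y]: citations received in `census_year` by articles
    published in year y;  pubs_by_pub_year[y]: articles published in year y.
    Instead of fixing the target window to the two years preceding the
    census year, every consecutive 2-year window within the last `span`
    years is evaluated and the maximum impact factor is returned.
    """
    best = 0.0
    for start in range(census_year - span, census_year - 1):
        years = (start, start + 1)
        cites = sum(cites_by_pub_year.get(y, 0) for y in years)
        pubs = sum(pubs_by_pub_year.get(y, 0) for y in years)
        if pubs:
            best = max(best, cites / pubs)
    return best

# Hypothetical data: impact matures slowly, so an earlier window scores higher.
cites = {2015: 120, 2016: 110, 2017: 90, 2018: 60, 2019: 30}
pubs = {y: 50 for y in range(2015, 2020)}
print(two_year_max_jif(cites, pubs, census_year=2020))
```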

9.
One of the flaws of the journal impact factor (IF) is that it cannot be used to compare journals from different fields or multidisciplinary journals, because the IF differs significantly across research fields. This study proposes a new measure of journal performance that captures field-dependent citation characteristics. We view journal performance from the perspective of the efficiency of a journal's citation generation process. Together with the conventional variables used in calculating the IF, the number of articles as an input and the total number of citations as an output, we additionally consider two field-dependent factors, citation density and citation dynamics, as inputs. We also separately capture the contribution of external citations and self-citations and incorporate their relative importance in measuring journal performance. To accommodate multiple inputs and outputs whose relationships are unknown, this study employs data envelopment analysis (DEA), a multi-factor productivity model for measuring the relative efficiency of decision-making units without any assumption of a production function. The resulting efficiency score, called DEA-IF, can then be used for the comparative evaluation of multidisciplinary journals' performance. A case study of industrial engineering journals is provided to illustrate how to measure DEA-IF and its usefulness.
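The sketch below shows a generic CCR (multiplier-form) DEA efficiency score solved as a linear program with SciPy, which is the kind of model DEA-IF builds on. The choice of inputs and outputs here is a placeholder; the paper's exact DEA-IF specification, including how citation density, citation dynamics and self-citations enter, is not reproduced.

```python
import numpy as np
from scipy.optimize import linprog

def dea_efficiency(X, Y, o):
    """CCR (multiplier form) DEA efficiency of decision-making unit `o`.

    X: (n, m) inputs (e.g. articles, citation density, citation dynamics),
    Y: (n, s) outputs (e.g. external citations, self-citations).
    Returns a score in (0, 1]; 1 means the journal is on the efficient frontier.
    """
    X, Y = np.asarray(X, float), np.asarray(Y, float)
    n, m = X.shape
    s = Y.shape[1]
    # Decision variables: [u (s output weights), v (m input weights)], all >= 0.
    c = np.concatenate([-Y[o], np.zeros(m)])               # maximize u . y_o
    A_eq = np.concatenate([np.zeros(s), X[o]])[None, :]    # v . x_o = 1
    A_ub = np.hstack([Y, -X])                              # u . y_j - v . x_j <= 0
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(n), A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * (s + m), method="highs")
    return -res.fun

# Three hypothetical journals: inputs = (articles, citation density), outputs = (citations,).
X = [[100, 3.0], [80, 2.5], [120, 4.0]]
Y = [[400], [360], [300]]
print([round(dea_efficiency(X, Y, o), 3) for o in range(3)])
```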

10.
The journal impact factor (JIF) is the average number of citations received by the papers published in a journal, calculated according to a specific formula; it is extensively used for the evaluation of research and researchers. The method assumes that all papers in a journal have the same scientific merit, which is measured by the JIF of the publishing journal. This implies that the number of citations measures scientific merit, yet the JIF does not evaluate each individual paper by its own number of citations. Therefore, in the comparative evaluation of two papers, the use of the JIF implies a risk of failure, which occurs when the paper in the journal with the lower JIF actually has more citations than the paper in the journal with the higher JIF. To quantify this risk, this study calculates the failure probabilities, taking advantage of the lognormal distribution of citations. For two journals whose JIFs differ ten-fold, the failure probability is low. However, in most cases when two papers are compared, the JIFs of the journals are not so different; the failure probability can then be close to 0.5, which is equivalent to evaluating by coin flipping.
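Under the lognormal assumption, the failure probability has a simple closed form when the two journals share the same log-scale spread. The sketch below uses that closed form; the common sigma value and the mapping from JIF to the lognormal log-mean are illustrative assumptions, not parameters taken from the paper.

```python
from math import log, sqrt
from scipy.stats import norm

def failure_probability(jif_low, jif_high, sigma=1.1):
    """P(a paper from the lower-JIF journal gets more citations than a paper
    from the higher-JIF journal), assuming citations in each journal are
    lognormal with a common log-scale spread `sigma` (illustrative value).

    The log-mean of each journal is recovered from its JIF via the lognormal
    mean relation  JIF = exp(mu + sigma**2 / 2).
    """
    mu_low = log(jif_low) - sigma**2 / 2
    mu_high = log(jif_high) - sigma**2 / 2
    # ln X_low - ln X_high is normal with mean mu_low - mu_high, variance 2*sigma**2.
    return norm.cdf((mu_low - mu_high) / (sigma * sqrt(2)))

print(failure_probability(2.0, 20.0))   # ten-fold JIF gap: low failure probability
print(failure_probability(2.0, 3.0))    # similar JIFs: probability approaches 0.5
```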

11.
The findings of Bornmann, Leydesdorff, and Wang (2013b) revealed that considering journal impact improves the prediction of long-term citation impact. This paper further explores the possibility of improving citation impact measurements based on a short citation window by considering journal impact and other variables, such as the number of authors, the number of cited references, and the number of pages. The dataset contains 475,391 journal papers published in 1980 and indexed in the Web of Science (WoS, Thomson Reuters), together with all annual citation counts (from 1980 to 2010) for these papers. As an indicator of citation impact, we used percentiles of citations calculated using the approach of Hazen (1914). Our results show that citation impact measurement can indeed be improved: if factors generally influencing citation impact are considered in the statistical analysis, the explained variance in long-term citation impact can be increased substantially. However, this increase is only visible when using the years shortly after publication, not when using later years.
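The Hazen (1914) plotting position mentioned here assigns the i-th ranked value the percentile 100 * (i - 0.5) / n. A minimal sketch, with ties averaged, which is a common convention for citation percentiles:

```python
from scipy.stats import rankdata

def hazen_percentiles(citations):
    """Citation percentiles using Hazen's (1914) plotting position:
    the i-th ranked paper (ties averaged) receives 100 * (i - 0.5) / n."""
    ranks = rankdata(citations, method="average")
    return 100 * (ranks - 0.5) / len(citations)

# Five hypothetical papers: 0, 3, 3, 10 and 42 citations.
print(hazen_percentiles([0, 3, 3, 10, 42]))   # [10. 40. 40. 70. 90.]
```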

12.
The journal impact factor (JIF) has been questioned considerably during its development over the past half-century because of its inconsistency with scholarly reputation evaluations of scientific journals. This paper proposes a publication delay adjusted impact factor (PDAIF), which takes publication delay into consideration to reduce its negative effect on the quality of the impact factor. Based on citation data collected from the Journal Citation Reports and publication delay data extracted from the journals' official websites, PDAIFs for journals from business-related disciplines are calculated. The results show that PDAIF values are, on average, more than 50% higher than JIF results. Furthermore, journal rankings based on the PDAIF show very high consistency with reputation-based journal rankings. Moreover, based on a case study of journals published by ELSEVIER and INFORMS, we find that the PDAIF brings a greater impact factor increase for journals with longer publication delays because it reduces that negative influence. Finally, insightful and practical suggestions for shortening publication delays are provided.
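The abstract does not give the PDAIF formula, so the sketch below is only one hypothetical reading: shift the usual two-year target window back by the journal's average publication delay so that articles held up in the review and production pipeline are not counted as citable too early. Function names, inputs, and the window shift are assumptions, not the authors' method.

```python
def pdaif_sketch(cites_by_pub_year, pubs_by_pub_year, census_year, avg_delay_years):
    """Hypothetical publication-delay adjusted impact factor.

    cites_by_pub_year[y]: citations received in `census_year` by articles
    published in year y;  pubs_by_pub_year[y]: articles published in year y.
    The two-year target window is shifted back by the journal's average
    publication delay, rounded to whole years.
    """
    shift = round(avg_delay_years)
    years = (census_year - 1 - shift, census_year - 2 - shift)
    cites = sum(cites_by_pub_year.get(y, 0) for y in years)
    pubs = sum(pubs_by_pub_year.get(y, 0) for y in years)
    return cites / pubs if pubs else 0.0
```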

13.
This study assesses whether eleven factors are associated with higher-impact research: individual, institutional and international collaboration; journal and reference impacts; abstract readability; reference and keyword totals; and paper, abstract and title lengths. Authors may have some control over these factors, and hence this information may help them to conduct and publish higher-impact research. These factors have been previously researched, but with partially conflicting findings. A simultaneous assessment of these eleven factors for Biology and Biochemistry, Chemistry and Social Sciences used a single negative binomial-logit hurdle model estimating the percentage change in the mean citation count per unit of increase or decrease in the predictor variables. The journal Impact Factor was found to be significantly associated with increased citations in all three areas. The number of cited references and their average citation impact are also significantly associated with higher article citation impact. Individual and international teamwork give a citation advantage in Biology and Biochemistry and in Chemistry, but inter-institutional teamwork is not important in any of the three subject areas. Abstract readability is either not significant or of no practical significance. Among the article size features, abstract length is significantly associated with increased citations, but the number of keywords, title length and paper length are insignificant or of no practical significance. In summary, at least some aspects of collaboration and of journal and document properties are significantly associated with higher citations. The results provide new and particularly strong statistical evidence that authors should consider publishing in high-impact journals, ensure that they do not omit relevant references, engage in the widest possible teamwork when appropriate, and write extensive abstracts. A new finding is that whilst it seems to be useful to collaborate, and to collaborate internationally, there seems to be no particular need to collaborate with other institutions within the same country.

14.
Do academic journals favor authors who share their institutional affiliation? To answer this question we examine citation counts, as a proxy for paper quality, for articles published in four leading international relations journals during the years 2000–2015. We compare citation counts for articles written by “in-group members” (authors affiliated with the journal’s publishing institution) versus “out-group members” (authors not affiliated with that institution). Articles written by in-group authors received 18% to 49% fewer Web of Science citations when published in their home journal (International Security or World Politics) vs. an unaffiliated journal, compared to out-group authors. These results are mainly driven by authors who received their PhDs from Harvard or MIT. The findings show evidence of a bias within some journals towards publishing papers by faculty from their home institution, at the expense of paper quality.

15.
Types of journal citations and their changes in the web environment (cited 1 time: 0 self-citations, 1 external citation)
The references of academic journal articles record and reflect certain patterns in how authors select and use documentary information. In the web environment, have citations undergone a change driven by authors' subjective choices? Examining citation type, maximum citation age, and the decay coefficient, a study of 60,371 references from four representative library and information science journals over 1998-2007 shows that citation types have indeed changed in the web environment: authors tend to choose easily accessible web resources, whereas a tendency to choose more recent resources is not evident.

16.
The journal impact factor (JIF), proposed by Garfield in 1955, is one of the most prominent and common measures of the prestige, position, and importance of a scientific journal. The JIF benefits from its comprehensibility, robustness, methodological reproducibility, simplicity, and rapid availability, but these advantages come at the expense of serious technical and methodological flaws. The paper discusses two core problems with the JIF: first, citations of documents are generally not normally distributed and, furthermore, the distribution is affected by outliers, which has serious consequences for the use of the mean value in the JIF calculation. Second, the JIF is affected by bias factors that have nothing to do with the prestige or quality of a journal (e.g., document type). To solve these two problems, we suggest using McCall's area transformation and the Rubin Causal Model. Citation data for documents of all journals in the ISI Subject Category "Psychology, Mathematical" (Journal Citation Reports) are used to illustrate the proposal.
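A common form of McCall's area transformation converts ranks to percentile ranks and then maps them onto a normal scale (T-scores with mean 50 and standard deviation 10), which removes the skew and outlier problem described above. The sketch below shows that mapping only; the paper's Rubin Causal Model adjustment for bias factors such as document type is not reproduced here.

```python
import numpy as np
from scipy.stats import rankdata, norm

def mccall_area_transformation(citations):
    """Map skewed citation counts onto a normal scale (T-scores, mean 50, SD 10):
    rank the documents, convert ranks to percentile ranks, then apply the
    inverse normal CDF."""
    ranks = rankdata(citations, method="average")
    pct = (ranks - 0.5) / len(citations)     # strictly inside (0, 1)
    return 50 + 10 * norm.ppf(pct)

# A skewed, outlier-heavy citation distribution becomes a well-behaved scale.
print(np.round(mccall_area_transformation([0, 1, 1, 4, 9, 120]), 1))
```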

17.
Which factors affect the citation counts of academic papers is a classic research question in bibliometrics. Existing studies mainly focus on the relationship between citation counts and the content or formal features of papers, and few approach the question from the perspective of text readability. Text readability affects readers' comprehension of a text and their absorption of its knowledge, and is thus an important factor for the efficiency of knowledge diffusion and the recognition of research results. Controlling for the knowledge quality and authority of the papers, this study uses five variables, including a text readability R value, to examine the effect of text readability on citation counts. Taking papers published in well-known Chinese library and information science journals from 2016 to 2020 as the sample, the study finds that the readability R value, the use of a compound title, and the use of formulas and tables have significant effects on citation counts, whereas the use of figures has no significant effect. The study confirms the substantive role of text readability in a paper's impact in the Chinese-language context, and the results provide an important reference for researchers seeking to improve their Chinese academic writing and increase the impact of their research.

18.
This article reports a comparative study of five measures that quantify the degree of research collaboration: the collaborative index, the degree of collaboration, the collaborative coefficient, the revised collaborative coefficient, and degree centrality. The empirical results showed that these measures all capture the notion of research collaboration, which is consistent with prior studies. Moreover, the results showed that degree centrality, the revised collaborative coefficient, and the degree of collaboration had the highest coefficient estimates on research productivity, the average JIF, and the average number of citations, respectively. Overall, this article suggests that the degree of collaboration and the revised collaborative coefficient are superior measures that future researchers can apply in bibliometric studies.
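Three of these measures have standard closed-form definitions based only on the distribution of the number of authors per paper, as sketched below. The revised collaborative coefficient and degree centrality need additional information (the total author count and the co-authorship network) and are omitted from this sketch.

```python
from collections import Counter

def collaboration_measures(authors_per_paper):
    """Classic collaboration measures for a set of papers.

    authors_per_paper: number of authors of each paper.
    Returns (collaborative index, degree of collaboration, collaborative
    coefficient) following the standard bibliometric definitions.
    """
    n = len(authors_per_paper)
    freq = Counter(authors_per_paper)                       # f_j: papers with j authors
    ci = sum(j * f for j, f in freq.items()) / n            # mean authors per paper
    dc = 1 - freq.get(1, 0) / n                             # share of multi-authored papers
    cc = 1 - sum(f / j for j, f in freq.items()) / n        # Ajiferuke et al. coefficient
    return ci, dc, cc

# Six hypothetical papers with 1, 1, 2, 3, 3 and 4 authors.
print(collaboration_measures([1, 1, 2, 3, 3, 4]))
```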

19.
Citation-based approaches, such as the impact factor and the h-index, have been used to measure the influence or impact of journals for journal rankings. A survey of the related literature for different disciplines shows that the level of correlation between these citation-based approaches is domain dependent. We analyze the correlation between the impact factors and h-indices of the top-ranked computer science journals for five different subjects. Our results show that the correlation between these citation-based approaches is very low. Since using a different approach can result in different journal rankings, we further combine the different results and then re-rank the journals using a combination method. These new ranking results can serve as a reference for researchers choosing their publication outlets.

20.
Bibliometricians have long relied on citation counts to measure the impact of publications on the advancement of science. However, since the earliest days of the field, some scholars have questioned whether all citations should be worth the same, and have gone on to weight them by a variety of factors. However sophisticated the operationalization of the measures, the methodologies used in weighting citations still present limits in their underlying assumptions. This work takes an alternative approach to resolving the underlying problem: the proposal is to value citations by the impact of the citing articles, regardless of the length of their reference lists. As well as conceptualizing a new indicator of impact, the work illustrates its application to the 2004–2012 Italian scientific production indexed in the WoS. The proposed impact indicator is highly correlated with the traditional citation count; however, the shifts observed between the two measures are frequent and the number of outliers is not negligible. Moreover, the new indicator shows greater "sensitivity" when used to identify highly-cited papers.
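A minimal sketch of the idea: each citation contributes according to the citing article's own impact and, unlike fractional or PageRank-style schemes, is not divided by the length of the citing article's reference list. The specific weight (1 plus the citing article's citation count) is an assumption of this sketch, not the indicator's exact definition.

```python
def citing_impact_weighted_count(citing_articles_citations):
    """Value each citation by the impact of the citing article.

    citing_articles_citations: for every article that cites the target
    paper, its own citation count.  Each citation contributes 1 plus the
    citing article's citations and is not split across the citing
    article's reference list.
    """
    return sum(1 + c for c in citing_articles_citations)

# A paper cited by three articles with 0, 4 and 12 citations of their own.
print(citing_impact_weighted_count([0, 4, 12]))   # 19, versus a raw count of 3
```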
