Similar Documents
20 similar documents found
1.
Wide differences in publication and citation practices make the direct comparison of raw citation counts across scientific disciplines impossible. Recent research has studied new and traditional normalization procedures aimed at suppressing as much as possible these disproportions in citation numbers among scientific domains. Using the recently introduced IDCP (Inequality due to Differences in Citation Practices) method, this paper rigorously tests the performance of six cited-side normalization procedures based on the Thomson Reuters classification system consisting of 172 sub-fields. We use six yearly datasets from 1980 to 2004, with widely varying citation windows from the publication year to May 2011. Three main findings emerge. First, as observed in previous research, within each year the shapes of sub-field citation distributions are strikingly similar. This paves the way for several normalization procedures to perform reasonably well in reducing the effect on citation inequality of differences in citation practices. Second, independently of the year of publication and the length of the citation window, the effect of such differences represents about 13% of total citation inequality. Third, a recently introduced two-parameter normalization scheme outperforms the other normalization procedures over the entire period, reducing citation disproportions to a level very close to the minimum achievable given the data and the classification system. However, the traditional procedure of using sub-field mean citations as normalization factors also yields good results.
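The traditional cited-side procedure mentioned in the abstract above — dividing each paper's citation count by the mean citation rate of its sub-field — can be sketched in a few lines. This is a minimal illustration only; the sub-field names and citation counts are invented, not taken from the study's data.

```python
# Sketch of traditional cited-side normalization: each paper's citation
# count is divided by the mean citation rate of its sub-field, so that
# high- and low-citation sub-fields become comparable.

def mean_normalize(citations_by_subfield):
    """Return normalized citation scores: c / mean(sub-field)."""
    normalized = {}
    for subfield, counts in citations_by_subfield.items():
        mean = sum(counts) / len(counts)
        normalized[subfield] = [c / mean for c in counts]
    return normalized

data = {
    "oncology": [10, 30, 50],    # a high-citation sub-field (invented)
    "mathematics": [1, 3, 5],    # a low-citation sub-field (invented)
}
norm = mean_normalize(data)
# After normalization each sub-field has a mean score of 1.0.
```

After normalization, a paper cited at its sub-field's average rate scores 1.0 regardless of discipline, which is exactly the property that makes cross-field comparison defensible.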

2.
We address the question of how citation-based bibliometric indicators can best be normalized to ensure fair comparisons between publications from different scientific fields and different years. In a systematic large-scale empirical analysis, we compare a traditional normalization approach based on a field classification system with three source normalization approaches. We pay special attention to the selection of the publications included in the analysis: publications in national scientific journals, popular scientific magazines, and trade magazines are excluded. Unlike earlier studies, we use algorithmically constructed classification systems to evaluate the different normalization approaches. Our analysis shows that a source normalization approach based on the recently introduced idea of fractional citation counting does not perform well. Two other source normalization approaches generally outperform the classification-system-based normalization approach that we study. Our analysis therefore offers considerable support for the use of source-normalized bibliometric indicators.
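The fractional citation counting idea evaluated in the abstract above weights each incoming citation by the length of the citing paper's reference list, so a citation from a reference-heavy paper counts for less. A minimal sketch, with invented reference-list lengths:

```python
# Sketch of fractional citation counting (a source normalization idea):
# each citation contributes 1 / (number of references in the citing
# publication) instead of a full count of 1.

def fractional_citation_count(citing_ref_list_lengths):
    """Sum the fractional weight of each incoming citation."""
    return sum(1.0 / n for n in citing_ref_list_lengths)

# A paper cited by three papers whose reference lists hold
# 10, 20 and 40 references (invented numbers):
score = fractional_citation_count([10, 20, 40])   # 0.1 + 0.05 + 0.025
```

Under plain counting the paper would score 3; under fractional counting it scores 0.175, because citations originating in long reference lists are discounted.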

3.
《Journal of Informetrics》2019,13(2):738-750
An aspect of citation behavior that has received longstanding attention in research is how the citations an article receives evolve as time passes since its publication (i.e., citation ageing). Citation ageing has been studied mainly through the formulation and fitting of mathematical models of diverse complexity. Commonly, these models restrict the shape of citation ageing functions and explicitly take into account factors known to influence citation ageing. An alternative, and less studied, approach is to estimate citation ageing functions using data-driven strategies. However, research following the latter approach has not been consistent in taking into account those factors known to influence citation ageing. In this article, we propose a model-free approach for estimating citation ageing functions that combines quantile regression with a non-parametric specification able to capture citation inflation. The proposed strategy takes into account field-of-research effects, impact-level effects, citation-inflation effects, and skewness in the citation distribution. To test our methodology, we collected a large dataset consisting of more than five million citations to 59,707 research articles spanning 12 dissimilar fields of research and, with these data in hand, tested the proposed strategy.

4.
This paper investigates the citation impact of three large geographical areas – the U.S., the European Union (EU), and the rest of the world (RW) – at different aggregation levels. The difficulty is that 42% of the 3.6 million articles in our Thomson Scientific dataset are assigned to several sub-fields among a set of 219 Web of Science categories. We follow a multiplicative approach in which every article is wholly counted as many times as it appears at each aggregation level. We compute the crown indicator and the Mean Normalized Citation Score (MNCS) using, for the first time, sub-field normalization procedures for the multiplicative case. We also compute a third indicator that does not correct for differences in citation practices across sub-fields. It is found that: (1) No geographical area is systematically favored (or penalized) by either of the two normalized indicators. (2) According to the MNCS, only in six out of 80 disciplines – and in none of 20 fields – is the EU ahead of the U.S. In contrast, the normalized U.S./EU gap is greater than 20% in 44 disciplines, 13 fields, and for all sciences as a whole. The dominance of the EU over the RW is even greater. (3) The U.S. appears to devote relatively more – and the RW less – publication effort to sub-fields with a high mean citation rate, which explains why the U.S./EU and EU/RW gaps for all sciences as a whole increase by 4.5 and 5.6 percentage points in the un-normalized case. The results with a fractional approach are very similar.
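The MNCS mentioned above is, in its standard form, the average over a unit's papers of each paper's citation count divided by the expected (mean) citation rate of its sub-field. A minimal sketch with invented numbers (the multiplicative multi-category counting used in the study is omitted here for brevity):

```python
# Sketch of the Mean Normalized Citation Score (MNCS): the average of
# c_i / e_i, where c_i is paper i's citations and e_i is the mean
# citation rate of paper i's sub-field.

def mncs(papers):
    """papers: iterable of (citations, subfield_mean) pairs."""
    return sum(c / e for c, e in papers) / len(papers)

# Three papers with invented (citations, sub-field mean) pairs:
papers = [(10, 5.0), (2, 4.0), (6, 6.0)]
score = mncs(papers)   # (2.0 + 0.5 + 1.0) / 3
```

A score above 1 means the unit's papers are cited above the world average of their sub-fields; here the toy unit scores about 1.17.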

5.
In citation network analysis, complex behavior is reduced to a simple edge: node A cites node B. The implicit assumption is that A is giving credit to, or acknowledging, B. All citations are also treated as equal contributions, even though some citations appear multiple times in a text while others appear only once. In this study, we apply text-mining algorithms to a relatively large dataset (866 information science articles containing 32,496 bibliographic references) to demonstrate the differential contributions made by references. We (1) look at the placement of citations across the different sections of a journal article, and (2) identify highly cited works using two different counting methods (CountOne and CountX). We find that (1) the most highly cited works appear in the Introduction and Literature Review sections of citing papers, and (2) the citation rankings produced by CountOne and CountX differ. That is to say, counting the number of times a bibliographic reference is cited in a paper, rather than treating all references the same no matter how many times they are invoked in the citing article, reveals the differential contributions made by the cited works to the citing paper.
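The two counting methods contrasted above can be sketched directly: CountOne credits a cited reference once per citing paper, while CountX credits it once per in-text mention. The reference names and mention counts below are invented for illustration.

```python
# Sketch of the CountOne vs. CountX distinction. Input: a mapping from
# cited reference to its number of in-text mentions in one citing paper.

def count_one(in_text_mentions):
    """CountOne: every cited reference counts once, no matter how
    often it is mentioned in the text."""
    return {ref: 1 for ref in in_text_mentions}

def count_x(in_text_mentions):
    """CountX: a reference counts as many times as it is mentioned."""
    return dict(in_text_mentions)

mentions = {"Smith 2001": 4, "Lee 2010": 1}   # invented example
```

Aggregated over a corpus, the two schemes can rank the same works differently, which is the effect the study reports.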

6.
The use of field-normalized citation scores is a bibliometric standard. Different methods of field normalization are in use, but the choice of field classification system also determines the resulting field-normalized citation scores. Using Web of Science data, we calculated field-normalized citation scores using the same formula but different field classification systems, to answer the question of whether the resulting scores are different or similar. Six field classification systems were used: three based on citation relations, one on semantic similarity scores (i.e., a topical relatedness measure), one on journal sets, and one on intellectual classifications. The systems based on journal sets and intellectual classifications agree at least at a moderate level. Two of the three systems based on citation relations also agree at least at a moderate level. Larger differences were observed for the third system based on citation relations and for the one based on semantic similarity scores. The main policy implication is that normalized citation impact scores, or rankings based on them, should not be compared without deeper knowledge of the classification systems used to derive these values or rankings.

7.
[Purpose/Significance] This paper examines the differences among subject classification systems in evaluating institutional research impact and their effect on evaluation results. [Method/Process] Using the InCites database as the data source, five classification systems and eight classification schemes were selected. First, a correlation analysis of the Category Normalized Citation Impact (CNCI) of 14,955 institutions under the different classification schemes was conducted to examine the overall similarity of evaluation results across classification systems. Then, taking the 36 universities in China's "Double First-Class" initiative as examples, changes in institutional CNCI values under the different classification schemes, and the specific causes of those differences, were compared and analysed to study the impact of classification systems on the evaluation of individual institutions. [Result/Conclusion] CNCI values obtained under different classification schemes are significantly correlated (the lowest correlation reaches 0.85); that is, the overall evaluation results across classification systems are highly similar. However, the results also cluster: OECD, ESI, SCADC, and CT1 are highly correlated with one another and yield closer results, while WoS, CT2, and CT3 yield closer results; the granularity of a classification system is an important determinant of evaluation outcomes. For the 36 universities, the overall correlation of results across classification systems is high, but the CNCI values of some individual universities change considerably, especially at institutions with prominent output on hot topics. The fundamental cause of the large differences is that papers are assigned to different categories, and the citation baselines differ across categories. Finer-grained classification systems are therefore recommended for evaluation, to reduce the influence of hot topics on citation baselines.

8.
In an age of intensifying scientific collaboration, the counting of papers by multiple authors has become an important methodological issue in scientometrics-based research evaluation. In particular, how counting methods influence institution-level research evaluation has not been studied in the existing literature. In this study, we selected the top 300 universities in physics in the 2011 HEEACT Ranking as our study subjects. We compared the university rankings generated by four different counting methods (whole counting, straight counting using the first author, straight counting using the corresponding author, and fractional counting) to show how paper counts, citation counts, and the resulting university ranks were affected by the choice of counting method. The counting was based on the 1988–2008 physics paper records indexed in ISI WoS. We also observed how paper and citation counts were inflated by whole counting. The results show that counting methods affected universities in the middle range more than those in the upper or lower ranges. Citation counts were also more affected than paper counts. The correlation between the rankings generated by whole counting and those generated by the other methods was low or negative in the middle ranges. Based on these findings, the study concludes that straight counting and fractional counting are better choices for paper counts and citation counts in institution-level research evaluation.
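The four counting methods compared in the study above can be sketched for a single paper; the institutions and byline below are invented, and the per-author-slot version of fractional counting is one common variant (the study does not specify its exact fractionation rule, so treat this as an assumption).

```python
# Sketch of four institution-level counting methods for one paper.
# affiliations: institutions in author byline order (may repeat);
# corresponding: the corresponding author's institution.

def credit(affiliations, corresponding, method):
    """Return {institution: credit} for one paper under a method."""
    unique = list(dict.fromkeys(affiliations))
    if method == "whole":             # every institution gets a full 1
        return {u: 1.0 for u in unique}
    if method == "straight_first":    # first author's institution only
        return {affiliations[0]: 1.0}
    if method == "straight_corr":     # corresponding institution only
        return {corresponding: 1.0}
    if method == "fractional":        # one credit split over author slots
        share = 1.0 / len(affiliations)
        out = {}
        for a in affiliations:
            out[a] = out.get(a, 0.0) + share
        return out
    raise ValueError(f"unknown method: {method}")

byline = ["MIT", "MIT", "ETH"]        # invented three-author paper
```

Note how whole counting hands out two full credits for one paper (the inflation the study measures), while fractional counting always distributes exactly one.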

9.
The journal impact factor is not comparable across fields of science and social science because of systematic differences in publication and citation behavior across disciplines. In this work, a source normalization of the journal impact factor is proposed. We use the aggregate impact factor of the citing journals as a measure of the citation potential in the journal's topic, and we employ this citation potential in the normalization of the journal impact factor to make it comparable between scientific fields. An empirical application comparing several impact indicators with our topic-normalized impact factor in a set of 224 journals from four different fields shows that our normalization, using the citation potential in the journal's topic, reduces the between-group variance with respect to the within-group variance in a higher proportion than the rest of the indicators analyzed. The effect of journal self-citations on the normalization process is also studied.

10.
The objective assessment of the prestige of an academic institution is a difficult and hotly debated task. In the last few years, different types of university rankings have been proposed to quantify it, yet the debate on what exactly rankings are measuring endures. To address the issue, we have measured a quantitative and reliable proxy of the academic reputation of a given institution and compared our findings with well-established impact indicators and academic rankings. Specifically, we study citation patterns among universities in five different Web of Science Subject Categories and use the PageRank algorithm on the five resulting citation networks. The rationale behind our work is that scientific citations are driven by the reputation of the reference, so the PageRank algorithm is expected to yield a rank that reflects the reputation of an academic institution in a specific field. Given the volume of the data analysed, our findings are statistically sound and less prone to bias than, for instance, the ad hoc surveys often employed by ranking bodies to attain similar outcomes. The approach proposed in our paper may help enhance ranking methodologies by reconciling the qualitative evaluation of academic prestige with its quantitative measurement via publication impact.
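The core computation described above — PageRank on an inter-university citation network — can be sketched with a dependency-free power iteration. The universities and edges below are invented; the study's actual networks are built from Web of Science citation data.

```python
# Minimal power-iteration PageRank over a toy citation network.
# The intuition matches the study: reputation flows along citations,
# so being cited by reputable nodes raises your own score.

def pagerank(links, damping=0.85, iters=100):
    """links: list of (citing, cited) pairs. Returns {node: score}."""
    nodes = sorted({n for edge in links for n in edge})
    out = {n: [b for a, b in links if a == n] for n in nodes}
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iters):
        new = {n: (1 - damping) / len(nodes) for n in nodes}
        for n in nodes:
            targets = out[n] or nodes       # dangling node: spread evenly
            for t in targets:
                new[t] += damping * rank[n] / len(targets)
        rank = new
    return rank

# U_B is cited by both U_A and U_C, so it should rank highest:
edges = [("U_A", "U_B"), ("U_C", "U_B"), ("U_B", "U_A")]
r = pagerank(edges)
```

Unlike a raw citation count, the score of `U_A` benefits from being cited by the highly ranked `U_B`, which is the indirect-reputation effect the authors rely on.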

11.
It is well known that the distribution of citations to articles in a journal is skewed. We ask whether journal rankings based on the impact factor are robust with respect to this fact. We exclude the most cited paper, and then the top 5 and top 10 most cited papers, for 100 economics journals and recalculate the impact factor. We then compare the resulting rankings with the original ones from 2012. Our results show that the rankings are relatively robust. This holds for both the 2-year and the 5-year impact factor.
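The robustness check described above amounts to recomputing a mean-citations-per-item indicator after dropping a journal's most cited items. A minimal sketch with invented citation counts (the real impact factor also fixes a two- or five-year citation window, which is omitted here):

```python
# Sketch of the exclusion exercise: recompute a journal's mean
# citations per item after dropping its top-cited items.

def impact_factor(citations_per_item, drop_top=0):
    """Mean citations per item, excluding the drop_top most cited."""
    kept = sorted(citations_per_item, reverse=True)[drop_top:]
    return sum(kept) / len(kept)

cites = [120, 8, 5, 4, 3, 2, 1, 1, 0, 0]   # one outlier dominates
full = impact_factor(cites)                # pulled up by the outlier
robust = impact_factor(cites, drop_top=1)  # outlier removed
```

In this toy journal a single blockbuster paper inflates the indicator from about 2.7 to 14.4; the study's finding is that, across 100 real journals, the resulting *rankings* nevertheless change little.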

12.
13.
Using CNKI's Chinese Citation Database (CCD) as the data source, this study examines how literature in three disciplines is cited under different carrier environments. It finds that, whether in the mixed print-and-online environment or in the print-only environment, citation curves follow broadly similar trends; however, literature in the mixed environment is cited markedly more than in the print-only period, and the citation peak appears slightly later than in the print-only period. Regression analysis shows that the diachronic distributions of citations in different carrier environments and different disciplines must be described by different mathematical models.

14.
A normalized citation indicator may not be sufficiently reliable when a short citation window is used, because the citation counts of recently published papers are less reliable than those of papers published many years ago. Within a limited time period, recent publications usually have had insufficient time to accumulate citations, and their citation counts are not reliable enough to be used in citation impact indicators. Normalization methods themselves cannot solve this problem. To address it, we introduce a weighting factor into the commonly used normalized indicator, the Category Normalized Citation Impact (CNCI), at the paper level. The weighting factor, calculated as the correlation coefficient between citation counts of papers in the given short citation window and those in a fixed long citation window, reflects the degree of reliability of a paper's CNCI value. To verify the effect of the proposed weighted CNCI indicator, we compared the CNCI scores and CNCI rankings of 500 universities before and after introducing the weighting factor. The results show that although there is a strong positive correlation before and after the introduction of the weighting factor, the performance and rankings of some universities change dramatically.
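One plausible way to aggregate paper-level CNCI values with such reliability weights is a weighted mean, where each paper's weight reflects how trustworthy its short-window citation count is. This sketch is an assumption about the aggregation step — the abstract specifies the weight (a short-/long-window correlation coefficient) but not the exact aggregation formula — and the scores and weights below are invented.

```python
# Sketch of a reliability-weighted CNCI aggregate for one university.
# Each paper contributes (cnci, w), where w in [0, 1] downweights
# papers whose short-window citation counts are unreliable.

def weighted_cnci(papers):
    """papers: iterable of (cnci, weight) pairs; weighted mean CNCI."""
    num = sum(c * w for c, w in papers)
    den = sum(w for _, w in papers)
    return num / den

# An older, reliable paper and a very recent, unreliable one:
papers = [(2.0, 0.9), (0.5, 0.4)]
score = weighted_cnci(papers)   # pulled toward the reliable paper
```

The unweighted mean here would be 1.25; downweighting the unreliable recent paper moves the aggregate toward the well-established score, illustrating how rankings can shift once weights are introduced.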

15.
A similarity comparison is made between 120 journals from five allied Web of Science disciplines (Communication, Computer Science-Information Systems, Education & Educational Research, Information Science & Library Science, Management) and a more distant discipline (Geology) across three time periods, using a novel method called citing discipline analysis that relies on the frequency distribution of Web of Science Research Areas for citing articles. Similarities among journals are evaluated using multidimensional scaling with hierarchical cluster analysis and Principal Component Analysis. The resulting visualizations and groupings reveal clusters that align with the discipline assignments of the journals for four of the six disciplines, but also greater overlaps among some journals for two of the disciplines, or categorizations that do not necessarily align with their assigned disciplines. Some journals categorized into a single discipline were found to be more closely aligned with other disciplines, and some journals assigned to multiple disciplines were more closely aligned with only one of their assigned disciplines. The proposed method offers a complement to more traditional methods, such as journal co-citation analysis, for comparing journal similarity using data that are readily available through Web of Science.

16.
We performed a citation analysis of Web of Science publications comprising more than 63 million articles and over a billion citations across 254 subjects from 1981 to 2020. We proposed the Article's Scientific Prestige (ASP) metric and compared it with the number of citations (#Cit) and journal grade in measuring the scientific impact of individual articles in this large-scale, hierarchical, multi-disciplinary citation network. In contrast to #Cit, ASP, which is computed from eigenvector centrality, considers both direct and indirect citations and provides a steady-state evaluation across different disciplines. We found that ASP and #Cit are not aligned for most articles, with a growing mismatch among the less cited articles. While both metrics are reliable for evaluating the prestige of articles such as Nobel Prize-winning articles, ASP tends to provide more persuasive rankings than #Cit when articles are not highly cited. The journal grade, which is ultimately determined by a few highly cited articles, is unable to properly reflect the scientific impact of individual articles. The numbers of references and coauthors are less relevant to scientific impact, but subjects do make a difference.

17.
A multi-angle analysis of 36,998 cited references from 3,116 papers in four top library science journals in China and the United States finds that library science theory in both countries borrows widely from other disciplines, with a long-tail pattern in the disciplines borrowed from. In terms of borrowed fields, the United States pays more attention to life-system fields while China focuses on non-life-system fields; in terms of the form of borrowed knowledge, China values dynamic knowledge while the United States values stable knowledge; in terms of the age of borrowed knowledge, the United States values accumulated knowledge while China values current knowledge. The paper concludes that the Chinese library science knowledge system is closer to that of an applied discipline, while the American one is closer to that of a life-system discipline, and that Chinese library science research emphasises absorbing new knowledge while neglecting knowledge accumulation. Chinese library science research should strengthen basic theoretical research, increase its borrowing of pure-science knowledge and of knowledge from life-system disciplines, and attend to knowledge accumulation and academic inheritance. 1 figure. 6 tables. 14 references.

18.
Subject classification is an important topic for bibliometrics and scientometrics, which seek to develop reliable and consistent tools and outputs. These objectives also call for a well-delimited underlying subject classification scheme that adequately reflects scientific fields. Within the broad ensemble of classification techniques, clustering analysis is one of the most successful. Two clustering algorithms based on modularity – the VOS and Louvain methods – are presented here for the purpose of updating and optimizing the journal classification of the SCImago Journal & Country Rank (SJR) platform. We used network analysis and the Pajek visualization software to run both algorithms on a network of more than 18,000 SJR journals, combining three citation-based measures: direct citation, co-citation, and bibliographic coupling. The resulting clusters were labelled using the category labels assigned to SJR journals and significant words from journal titles. Although the two algorithms exhibited slight differences in performance, the results show similar behaviour in grouping journals; consequently, both are deemed appropriate solutions for classification purposes. The two newly generated algorithm-based classifications were compared with other bibliometric classification systems, including the original SJR and the WoS Subject Categories, in order to validate their consistency, adequacy, and accuracy. In addition to some noteworthy differences, we found a certain coherence and homogeneity among the four classification systems analysed.

19.
Comparative analysis as a method for reviewing the scientific soundness of data tables   Total citations: 2 (self-citations: 0, citations by others: 2)
王贵春  钱文霖 《编辑学报》2002,14(2):105-107
Content errors in data tables occur from time to time in scientific manuscripts and monographs. Reviewing the scientific soundness of tables has become a major difficulty for science editors, yet editorial studies have paid little attention to methods for such review. The authors propose applying the comparative analysis method from editorial methodology to examine the scientific soundness of data tables, and elaborate concrete review methods around four objects of comparison: table versus text, table versus table, table versus figure, and the elements within a table.

20.
There is an increasing consensus in the Recommender Systems community that the dominant error-based evaluation metrics are insufficient, and mostly inadequate, for properly assessing the practical effectiveness of recommendations. Seeking to evaluate recommendation rankings – which largely determine the effective accuracy in matching user needs – rather than predicted rating values, researchers have started to apply Information Retrieval metrics to the evaluation of recommender systems. In this paper we analyse the main issues and potential divergences in the application of Information Retrieval methodologies to recommender system evaluation, and provide a systematic characterisation of experimental design alternatives for this adaptation. We lay out an experimental configuration framework upon which we identify and analyse specific statistical biases arising in the adaptation of Information Retrieval metrics to recommendation tasks, namely sparsity and popularity biases. These biases considerably distort the empirical measurements, hindering the interpretation and comparison of results across experiments. We develop a formal characterisation and analysis of the biases, examine their causes and main factors as well as their impact on evaluation metrics under different experimental configurations, and illustrate the theoretical findings with empirical evidence. We propose two experimental design approaches that effectively neutralise such biases to a large extent. We report experiments validating our proposed experimental variants and comparing them to alternative approaches and metrics that have been defined in the literature with similar or related purposes.
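A small sketch makes the sparsity bias discussed above concrete: when an IR metric such as precision@k is computed against a sparse test set, recommended items the user never rated are counted as misses even if the user might have liked them, deflating the measurement. The item names and test set below are invented.

```python
# Sketch of an IR-style metric applied to a recommendation ranking.
# 'relevant' holds only the observed relevant items from a sparse test
# set; unobserved items count as misses, which is the sparsity bias.

def precision_at_k(ranked_items, relevant, k):
    """Fraction of the top-k recommended items found relevant."""
    hits = sum(1 for item in ranked_items[:k] if item in relevant)
    return hits / k

ranking = ["i1", "i7", "i3", "i9"]        # system's top-4 for one user
relevant_in_test = {"i1", "i3"}           # observed relevance only
p_at_2 = precision_at_k(ranking, relevant_in_test, 2)
```

Here `i7` and `i9` drag the score down even though their true relevance is simply unknown; popularity bias compounds this, since popular items are more likely to appear in the test set at all.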


Copyright©北京勤云科技发展有限公司  京ICP备09084417号