Similar Documents
20 similar documents retrieved.
1.
Research on the Academic Influence of Scientific Papers with High Altmetrics Scores   Total citations: 9, self-citations: 0, other citations: 9
引入"公平性测试"方法以消除时间窗口对被引次数的影响。以高Altmetrics指标论文作为样本,选取与样本论文发表在同一期刊同一期上前后两篇论文作为参照。利用Altmetric.com、Web of Science分别获取273篇样本及参照论文的Altmetric分数、底层数据值和被引用次数。通过比较分析后发现:Altmetrics和引文数两种指标反映出读者对文献的不同关注方向,底层数据源中大众媒体对于Altmetric分数的影响最明显,高Altmetrics指标论文同时具有较高的学术影响力。作为一种早期指标,高Altmetrics指标在一定程度上能够被视作文章在未来获得高被引的风向标。  相似文献   

2.
With the advancement of science and technology, the number of academic papers published each year has increased almost exponentially. While the large volume of research papers highlights the prosperity of science and technology, it also gives rise to some problems. Academic papers are the most direct embodiment of scholars' research results and can reflect the level of the researchers; they also serve as a basis for evaluation and decision-making about researchers, such as promotion and the allocation of funds. Therefore, how to measure the quality of an academic paper is critical. The most common standard for measuring the quality of academic papers is their citation count, as this indicator is widely used in the evaluation of scientific publications and serves as the basis for many other indicators (such as the h-index). It is therefore very important to be able to accurately predict the citation counts of academic papers. To improve the effectiveness of citation count prediction, we approach the problem from the perspective of information cascade prediction and take advantage of deep learning techniques. We propose an end-to-end deep learning framework (DeepCCP), consisting of a graph structure representation module and a recurrent neural network module. DeepCCP directly takes the citation network formed in the early stage of a paper as input and outputs the citation count of the corresponding paper after a period of time. It exploits only the structural and temporal information of the citation network and requires no additional information. Experiments on two real academic citation datasets show that DeepCCP is superior to state-of-the-art methods in terms of the accuracy of citation count prediction.

3.
Microsoft Academic is a free academic search engine and citation index that is similar to Google Scholar but can be automatically queried. Its data is potentially useful for bibliometric analysis if it is possible to search effectively for individual journal articles. This article compares different methods to find journal articles in its index by searching for a combination of title, authors, publication year and journal name, and uses the results for the most extensive published correlation analysis of Microsoft Academic citation counts for journal articles so far. Based on 126,312 articles from 323 Scopus subfields in 2012, the optimal strategy to find articles with DOIs is to search for them by title and filter out those with incorrect DOIs. This finds 90% of journal articles. For articles without DOIs, the optimal strategy is to search for them by title and then filter out matches with dissimilar metadata. This finds 89% of journal articles, with an additional 1% incorrect matches. The remaining articles seem to be mainly not indexed by Microsoft Academic or indexed with a different language version of their title. Across the matches, Scopus citation counts and Microsoft Academic counts have an average Spearman correlation of 0.95, with the lowest for any single field being 0.63. Thus, Microsoft Academic citation counts are almost universally equivalent to Scopus citation counts for articles that are not recent, although there are national biases in the results.
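A minimal sketch of the field-level comparison this abstract describes, assuming matched per-article citation counts from both sources are already in hand. The field names, counts, and the `matched_counts` structure below are invented for illustration, not data from the study.

```python
from scipy.stats import spearmanr

# Citation counts keyed by Scopus subfield: (scopus_counts, microsoft_academic_counts),
# one entry per matched article. All values are hypothetical.
matched_counts = {
    "Information Science": ([12, 0, 5, 33, 7], [14, 1, 4, 30, 9]),
    "Organic Chemistry":   ([40, 2, 18, 6, 11], [38, 3, 20, 5, 13]),
}

correlations = {}
for field, (scopus, msa) in matched_counts.items():
    rho, _ = spearmanr(scopus, msa)   # rank correlation, robust to skewed counts
    correlations[field] = rho

average_rho = sum(correlations.values()) / len(correlations)
print(correlations, round(average_rho, 3))
```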

4.
This paper presents an empirical analysis of two different methodologies for calculating national citation indicators: whole counts and fractionalised counts. The aim of our study is to investigate the effect on relative citation indicators when citations to documents are fractionalised among the authoring countries. We have performed two analyses: a time series analysis of one country and a cross-sectional analysis of 23 countries. The results show that all countries’ relative citation indicators are lower when fractionalised counting is used. Further, the difference between whole and fractionalised counts is generally greatest for the countries with the highest proportion of internationally co-authored articles. In our view there are strong arguments in favour of using fractionalised counts to calculate relative citation indexes at the national level, rather than using whole counts, which is the most common practice today.

5.
Identifying the future influential papers among the newly published ones is an important yet challenging issue in bibliometrics. As newly published papers have no or limited citation history, linear extrapolation of their citation counts—which is motivated by the well-known preferential attachment mechanism—is not applicable. We translate the recently introduced notion of discoverers to the citation network setting, and show that there are authors who frequently cite recent papers that become highly-cited in the future; these authors are referred to as discoverers. We develop a method for early identification of highly-cited papers based on the early citations from discoverers. The results show that the identified discoverers have a consistent citing pattern over time, and the early citations from them can be used as a valuable indicator to predict the future citation counts of a paper. The discoverers themselves are potential future outstanding researchers as they receive more citations than average.
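An illustrative sketch of the general "discoverer" idea, not the authors' exact procedure: flag authors whose early citations tend to go to papers that later become highly cited, then score new papers by the early citations they receive from such authors. All records, names, and thresholds below are hypothetical.

```python
from collections import defaultdict

# A citation record is (citing_author, cited_paper, citation_age_in_years);
# final_citations maps each paper to its eventual citation count.
citations = [
    ("author_a", "p1", 0), ("author_a", "p2", 1), ("author_b", "p1", 4),
    ("author_a", "p3", 0), ("author_c", "p3", 0), ("author_b", "p4", 0),
]
final_citations = {"p1": 250, "p2": 180, "p3": 300, "p4": 5}
HIGHLY_CITED, EARLY = 100, 1          # assumed thresholds

# An author is a candidate discoverer if most of their early citations go to
# papers that later become highly cited.
early_hits, early_total = defaultdict(int), defaultdict(int)
for author, paper, age in citations:
    if age <= EARLY:
        early_total[author] += 1
        if final_citations[paper] >= HIGHLY_CITED:
            early_hits[author] += 1

discoverers = {a for a in early_total
               if early_total[a] >= 2 and early_hits[a] / early_total[a] >= 0.8}

def early_discoverer_citations(paper, citations, discoverers):
    """Count early citations a paper receives from identified discoverers."""
    return sum(1 for author, p, age in citations
               if p == paper and age <= EARLY and author in discoverers)

print(discoverers, early_discoverer_citations("p3", citations, discoverers))
```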

6.
In this paper we present a first large-scale analysis of the relationship of Mendeley readership and citation counts with particular bibliographic characteristics of documents. A data set of 1.3 million publications from different fields published in journals covered by the Web of Science (WoS) has been analyzed. This work reveals that document types that are often excluded from citation analysis due to their lower citation values, like editorial materials, letters, news items, or meeting abstracts, are strongly covered and saved in Mendeley, suggesting that Mendeley readership can reliably inform the analysis of these document types. Findings show that collaborative papers are frequently saved in Mendeley, which is similar to what is observed for citations. The relationship between readership and the length of titles and number of pages, however, is weaker than the same relationship observed for citations. The analysis of different disciplines also points to different patterns in the relationship between several document characteristics, readership, and citation counts. Overall, results highlight that although disciplinary differences exist, readership counts are related to similar bibliographic characteristics as those related to citation counts, reinforcing the idea that Mendeley readership and citations capture a similar concept of impact, although they cannot be considered as equivalent indicators.

7.
Studying the relationship between the citation and download counts of scientific publications is beneficial for understanding the mechanism of citation patterns and for research evaluation. However, few studies have considered directionality between downloads and citations or adopted a case-by-case time lag length between the download and citation time series of each individual publication. In this paper, we introduce a Granger-causal inference strategy to study the directionality between downloads and citations and set the length of the time lag between the time series for each case. Examining publications in The Lancet, we find that publications exhibit various directionality patterns, but highly cited publications are more likely to show Granger causality. We introduce the Granger-causal inference method to information science in four steps: conducting stationarity tests, determining the time lag between time series, performing cointegration tests, and implementing Granger-causality inference. We hope that this method can be applied by future information scientists in their own research contexts.
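A condensed sketch of the four-step procedure named above, applied to one publication's monthly download and citation series. The series are synthetic, and lag selection is reduced here to picking the lag with the smallest Granger p-value; the paper's own lag-selection criterion may differ.

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller, coint, grangercausalitytests

downloads = np.array([30, 42, 55, 48, 60, 72, 68, 80, 77, 90, 95, 101,
                      110, 104, 118, 125], float)
citations = np.array([0, 1, 1, 2, 3, 3, 5, 6, 8, 9, 11, 12, 14, 15, 17, 20], float)

# Step 1: stationarity tests (difference once if the ADF test does not reject).
def stationary(series, alpha=0.05):
    return adfuller(series)[1] < alpha

d_downloads = downloads if stationary(downloads) else np.diff(downloads)
d_citations = citations if stationary(citations) else np.diff(citations)
n = min(len(d_downloads), len(d_citations))
d_downloads, d_citations = d_downloads[-n:], d_citations[-n:]

# Step 3: cointegration test on the original levels.
coint_p = coint(citations, downloads)[1]

# Steps 2 and 4: pick the per-publication lag and test "downloads -> citations".
results = grangercausalitytests(np.column_stack([d_citations, d_downloads]), maxlag=3)
best_lag, best_p = min(((lag, res[0]["ssr_ftest"][1]) for lag, res in results.items()),
                       key=lambda item: item[1])
print(f"cointegration p={coint_p:.3f}, best lag={best_lag}, Granger p={best_p:.3f}")
```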

8.
The normalized citation indicator may not be sufficiently reliable when a short citation time window is used, because the citation counts for recently published papers are not as reliable as those for papers published many years ago. In a limited time period, recent publications usually have insufficient time to accumulate citations and the citation counts of these publications are not sufficiently reliable to be used in the citation impact indicators. However, normalization methods themselves cannot solve this problem. To solve this problem, we introduce a weighting factor to the commonly used normalization indicator Category Normalized Citation Impact (CNCI) at the paper level. The weighting factor, which is calculated as the correlation coefficient between citation counts of papers in the given short citation window and those in the fixed long citation window, reflects the degree of reliability of the CNCI value of one paper. To verify the effect of the proposed weighted CNCI indicator, we compared the CNCI score and CNCI ranking of 500 universities before and after introducing the weighting factor. The results showed that although there was a strong positive correlation before and after the introduction of the weighting factor, some universities’ performance and rankings changed dramatically.
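A schematic version of the weighting idea described above, assuming a reference cohort of older papers whose short- and long-window citation counts are both known. The baselines, citation figures, and cohort are all invented; the real study derives them from Web of Science data.

```python
import numpy as np

# Reference cohort: citations in a short (e.g. 2-year) vs. a long (e.g. 10-year)
# window, used only to estimate how reliable the short window is.
short_window = np.array([3, 0, 5, 8, 1, 12, 4, 6])
long_window  = np.array([15, 2, 22, 40, 6, 55, 18, 30])
weight = np.corrcoef(short_window, long_window)[0, 1]   # reliability of the short window

def cnci(citations, field_year_baseline):
    """Category Normalized Citation Impact: observed / expected citations."""
    return citations / field_year_baseline

paper_citations, baseline = 9, 4.5        # hypothetical recent paper
score = cnci(paper_citations, baseline)
weighted_score = weight * score           # down-weights the less reliable recent CNCI
print(round(score, 2), round(weight, 2), round(weighted_score, 2))
```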

9.
Citations are increasingly used for research evaluations. It is therefore important to identify factors affecting citation scores that are unrelated to scholarly quality or usefulness so that these can be taken into account. Regression is the most powerful statistical technique to identify these factors and hence it is important to identify the best regression strategy for citation data. Citation counts tend to follow a discrete lognormal distribution and, in the absence of alternatives, have been investigated with negative binomial regression. Using simulated discrete lognormal data (continuous lognormal data rounded to the nearest integer) this article shows that a better strategy is to add one to the citations, take their log and then use the general linear (ordinary least squares) model for regression (e.g., multiple linear regression, ANOVA), or to use the generalised linear model without the log. Reasonable results can also be obtained if all the zero citations are discarded, the log is taken of the remaining citation counts and then the general linear model is used, or if the generalised linear model is used with the continuous lognormal distribution. Similar approaches are recommended for altmetric data, if it proves to be lognormally distributed.
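A minimal sketch of the recommended strategy: add one to the citation counts, take the log, and fit an ordinary least squares model. The covariate (number of authors) and the simulated discrete lognormal data are placeholders, not the article's data.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n_authors = rng.integers(1, 10, size=200)
# Simulated discrete lognormal citation counts that depend weakly on n_authors.
citations = np.round(np.exp(0.5 + 0.15 * n_authors + rng.normal(0, 1, 200))).astype(int)

y = np.log(citations + 1)                  # log(1 + citations)
X = sm.add_constant(n_authors.astype(float))
model = sm.OLS(y, X).fit()                 # general linear model on the logged counts
print(model.params, round(model.rsquared, 3))
```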

10.
Scholarly citations – widely seen as tangible measures of the impact and significance of academic papers – guide critical decisions by research administrators and policy makers. The citation distributions form characteristic patterns that can be revealed by big-data analysis. However, citation dynamics vary significantly among subject areas, countries, etc. The problem is how to quantify those differences and separate global from local citation characteristics. Here, we carry out an extensive analysis of the power-law relationship between the total citation count and the h-index to detect a functional dependence among its parameters for different science domains. The results demonstrate that the statistical structure of the citation indicators admits representation by a global scale and a set of local exponents. The scale parameters are evaluated for different research actors – individual researchers and entire countries – employing subject- and affiliation-based divisions of science into domains. The results can inform research assessment and classification into subject areas; the proposed divide-and-conquer approach can be applied to hidden scales in other power-law systems.
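A toy illustration of estimating a power-law relation h ≈ A · C^alpha between an actor's total citation count C and its h-index h via a log-log fit. The (C, h) pairs are fabricated, and this is only one plausible way to estimate such an exponent, not the paper's exact estimation procedure.

```python
import numpy as np

C = np.array([120, 450, 900, 2300, 5200, 15000, 42000], float)   # total citations
h = np.array([6, 11, 16, 25, 37, 62, 100], float)                # h-index

alpha, logA = np.polyfit(np.log(C), np.log(h), 1)   # slope = estimated exponent
A = np.exp(logA)                                     # scale parameter
print(f"h ~ {A:.2f} * C^{alpha:.2f}")
```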

11.
Despite the increasing use of citation-based metrics for research evaluation purposes, we do not know yet which metrics best deliver on their promise to gauge the significance of a scientific paper or a patent. We assess 17 network-based metrics by their ability to identify milestone papers and patents in three large citation datasets. We find that traditional information-retrieval evaluation metrics are strongly affected by the interplay between the age distribution of the milestone items and age biases of the evaluated metrics. Outcomes of these metrics are therefore not representative of the metrics’ ranking ability. We argue in favor of a modified evaluation procedure that explicitly penalizes biased metrics and allows us to reveal metrics’ performance patterns that are consistent across the datasets. PageRank and LeaderRank turn out to be the best-performing ranking metrics when their age bias is suppressed by a simple transformation of the scores that they produce, whereas other popular metrics, including citation count, HITS and Collective Influence, produce significantly worse ranking results.

12.
Citation behaviour is the source driver of scientific dynamics, and it is essential to understand its effect on knowledge diffusion and intellectual structure. This study explores the effect of citation behaviour on disciplinary knowledge diffusion and intellectual structure by comparing three types of citation behaviour trends, namely the high citation trend, medium citation trend, and low citation trend. The diffusion power, diffusion speed, and diffusion breadth were calculated to quantify knowledge diffusion. The properties of the global and local citation network structure were used to reflect the particular influences of citation behaviour on the scientific intellectual structure. The primary empirical results show that (a) the high citation behaviour trend could improve the knowledge diffusion speed for papers with a short citation history span. Additionally, the medium citation trend has the broadest diffusion breadth whereas the low citation behaviour trend might make the citation counts take off for papers with a long citation history span; (b) the high citation trend has a stronger influence and greater control over the intellectual structure, but this relationship is true only for papers with a short or normal citation history span. These findings could play important roles in scientific research evaluation and impact prediction.

13.
The objective assessment of the prestige of an academic institution is a difficult and hotly debated task. In the last few years, different types of university rankings have been proposed to quantify it, yet the debate on what rankings are exactly measuring is enduring. To address the issue we have measured a quantitative and reliable proxy of the academic reputation of a given institution and compared our findings with well-established impact indicators and academic rankings. Specifically, we study citation patterns among universities in five different Web of Science Subject Categories and use the PageRank algorithm on the five resulting citation networks. The rationale behind our work is that scientific citations are driven by the reputation of the reference, so that the PageRank algorithm is expected to yield a rank which reflects the reputation of an academic institution in a specific field. Given the volume of the data analysed, our findings are statistically sound and less prone to bias than, for instance, ad hoc surveys often employed by ranking bodies in order to attain similar outcomes. The approach proposed in our paper may contribute to enhancing ranking methodologies, by reconciling the qualitative evaluation of academic prestige with its quantitative measurement via publication impact.
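A sketch of the reputation ranking described above: build a directed, weighted inter-university citation network for one subject category and run PageRank on it. The university names and edge weights are made up; `networkx` is used as a convenient stand-in for whatever implementation the authors used.

```python
import networkx as nx

G = nx.DiGraph()
# Edge (A, B, w): papers from university A cite papers from university B w times.
G.add_weighted_edges_from([
    ("Univ A", "Univ B", 120), ("Univ A", "Univ C", 30),
    ("Univ B", "Univ C", 80),  ("Univ C", "Univ A", 15),
    ("Univ B", "Univ A", 60),
])

reputation = nx.pagerank(G, alpha=0.85, weight="weight")
for university, score in sorted(reputation.items(), key=lambda kv: -kv[1]):
    print(university, round(score, 3))
```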

14.
This study investigates the use, citation and diffusion of three bibliometric mapping software tools (CiteSpace, HistCite and VOSviewer) in scientific papers. We first conduct a content analysis of a sample of 481 English core journal papers—i.e., papers from journals deemed central to their respective disciplines—in which at least one of these tools is mentioned. This allows us to understand the predominant mention and citation practices surrounding these tools. We then employ several diffusion indicators to gain insight into the diffusion patterns of the three software tools. Overall, we find that researchers mention and cite the tools in diverse ways, many of which fall short of a traditional formal citation. Our results further indicate a clear upward trend in the use of all three tools, though VOSviewer is more frequently used than CiteSpace or HistCite. We also find that these three software tools have seen the fastest and most widespread adoption in library and information science research, where the tools originated. They have since been gradually adopted in other areas of study, initially at a lower diffusion speed but afterward at a rapidly growing rate.

15.
The journal impact factor is not comparable among fields of science and social science because of systematic differences in publication and citation behavior across disciplines. In this work, a source normalization of the journal impact factor is proposed. We use the aggregate impact factor of the citing journals as a measure of the citation potential in the journal's topic, and we employ this citation potential in the normalization of the journal impact factor to make it comparable between scientific fields. An empirical application comparing several impact indicators with our topic-normalized impact factor in a set of 224 journals from four different fields shows that our normalization, using the citation potential in the journal's topic, reduces the between-group variance relative to the within-group variance by a higher proportion than the rest of the indicators analyzed. The effect of journal self-citations on the normalization process is also studied.
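One plausible reading of the normalization above, sketched in code: divide a journal's impact factor by the citation potential of its topic, measured as the citation-weighted aggregate impact factor of the journals that cite it. All figures are invented and the exact formula in the paper may differ.

```python
def citation_potential(citing_journals):
    """Citation-weighted mean impact factor of the citing journals."""
    total_citations = sum(count for _, count in citing_journals)
    return sum(impact * count for impact, count in citing_journals) / total_citations

# (impact factor of a citing journal, number of citations it gives to the target journal)
citing = [(3.2, 120), (1.1, 40), (5.6, 15), (0.8, 60)]

journal_if = 2.4
normalized_if = journal_if / citation_potential(citing)
print(round(citation_potential(citing), 2), round(normalized_if, 2))
```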

16.
A Preliminary Study on the Quantitative Relationship between the Number of References in Scientific Journals and Selected Citation Indicators   Total citations: 1, self-citations: 0, other citations: 1
In view of the lack of quantitative analysis in existing research, this study examines the quantitative relationship between the number of references in Chinese scientific journals and their total citation frequency and impact factor. Based on statistical data from the 《中国科技期刊引证报告》 (Chinese S&T Journal Citation Reports) for 2000-2013, the corresponding curves and formulas were fitted, and the relationships and trends between the average number of references per paper and the average total citation frequency, and between the average number of references per paper and the average impact factor, were analyzed. The results show that the average number of references per paper has a good linear relationship with the average total citation frequency and an approximately cubic-polynomial relationship with the average impact factor; the fitted curves agree well with the statistical data. Predictions based on these two fitted formulas indicate that when the average number of references per paper reaches 20, the average total citation frequency of Chinese core scientific journals is expected to exceed 1,700 and the average impact factor to exceed 1.0. According to these predictions, if the average number of references per paper increases by 26% from its current level, the average total citation frequency could increase by 44% and the average impact factor by 90%.
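A sketch of the curve fitting used above: a degree-1 (linear) fit for average total citation frequency versus references per paper, and a degree-3 (cubic) fit for average impact factor versus references per paper, then extrapolation to 20 references per paper. The yearly data points are placeholders, not the values from the Chinese S&T Journal Citation Reports.

```python
import numpy as np

refs_per_paper    = np.array([10.2, 11.0, 11.9, 12.8, 13.5, 14.6, 15.4, 15.9])
avg_total_cites   = np.array([620, 700, 790, 880, 950, 1060, 1140, 1190])
avg_impact_factor = np.array([0.32, 0.35, 0.39, 0.44, 0.48, 0.55, 0.60, 0.63])

linear_fit = np.polyfit(refs_per_paper, avg_total_cites, 1)    # degree-1 polynomial
cubic_fit  = np.polyfit(refs_per_paper, avg_impact_factor, 3)  # degree-3 polynomial

# Extrapolate both fitted curves to 20 references per paper.
print(np.polyval(linear_fit, 20.0), np.polyval(cubic_fit, 20.0))
```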

17.
This paper presents a statistical analysis of the relationship between three science indicators applied in earlier bibliometric studies, namely research leadership based on corresponding authorship, international collaboration using international co-authorship data, and field-normalized citation impact. Indicators at the level of countries are extracted from the SIR database created by SCImago Research Group from publication records indexed for Elsevier’s Scopus. The relationship between authorship and citation-based indicators is found to be complex, as it reflects a country’s phase of scientific development and the coverage policy of the database. Moreover, one should distinguish a genuine leadership effect from a purely statistical effect due to fractional counting. Further analyses at the level of institutions and qualitative validation studies are recommended.

18.
In recent decades, the United States Patent and Trademark Office (USPTO) has been granting more and more patents with more and more references, which has led to patent citation inflation. Citation counts are a fundamental consideration in decisions about research funding, academic promotions, commercializing IP, investing in technologies, etc. With so much at stake, we must be sure we are valuing citations at their true worth. In this article, we reveal two types of patent citation inflation and analyze its causes and cumulative effects. Further, we propose some alternative indicators that more accurately reflect the true worth of a citation. A case study on the patents held by eight universities demonstrates that the relative indicators outlined in this paper are an effective way to account for citation inflation as an alternative approach to evaluating patent activity.

19.
In an age of intensifying scientific collaboration, the counting of papers by multiple authors has become an important methodological issue in scientometrics-based research evaluation. In particular, how counting methods influence institutional-level research evaluation has not been studied in the existing literature. In this study, we selected the top 300 universities in physics in the 2011 HEEACT Ranking as our study subjects. We compared the university rankings generated from four different counting methods (i.e., whole counting, straight counting using the first author, straight counting using the corresponding author, and fractional counting) to show how paper counts, citation counts, and the resulting university ranks were affected by the choice of counting method. The counting was based on the 1988–2008 physics paper records indexed in ISI WoS. We also observed how paper and citation counts were inflated by whole counting. The results show that counting methods affected the universities in the middle range more than those in the upper or lower ranges. Citation counts were also more affected than paper counts. The correlation between the rankings generated from whole counting and those from the other methods was low or negative in the middle ranges. Based on these findings, this study concludes that straight counting and fractional counting are better choices for paper counts and citation counts in institutional-level research evaluation.
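A compact illustration of the four counting methods compared above, applied to institutional paper counts. Each record lists the affiliations in author order plus the corresponding author's affiliation; the records and institution names are invented.

```python
from collections import defaultdict

papers = [
    {"affils": ["Univ A", "Univ B", "Univ A"], "corresponding": "Univ B"},
    {"affils": ["Univ B"],                     "corresponding": "Univ B"},
    {"affils": ["Univ C", "Univ A"],           "corresponding": "Univ C"},
]

whole, first, corresponding, fractional = (defaultdict(float) for _ in range(4))
for p in papers:
    for inst in set(p["affils"]):          # whole counting: every institution gets 1
        whole[inst] += 1
    first[p["affils"][0]] += 1             # straight counting by first author
    corresponding[p["corresponding"]] += 1 # straight counting by corresponding author
    for inst in p["affils"]:               # fractional counting: 1/n per author slot
        fractional[inst] += 1 / len(p["affils"])

print(dict(whole), dict(first), dict(corresponding), dict(fractional))
```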

20.
We address the question of how citation-based bibliometric indicators can best be normalized to ensure fair comparisons between publications from different scientific fields and different years. In a systematic large-scale empirical analysis, we compare a traditional normalization approach based on a field classification system with three source normalization approaches. We pay special attention to the selection of the publications included in the analysis. Publications in national scientific journals, popular scientific magazines, and trade magazines are not included. Unlike earlier studies, we use algorithmically constructed classification systems to evaluate the different normalization approaches. Our analysis shows that a source normalization approach based on the recently introduced idea of fractional citation counting does not perform well. Two other source normalization approaches generally outperform the classification-system-based normalization approach that we study. Our analysis therefore offers considerable support for the use of source-normalized bibliometric indicators.

