Similar documents
20 similar documents found (search time: 62 ms)
1.
The citation-identity method is applied to analyse the research characteristics and research networks of agricultural and forestry university libraries. Drawing on the Chinese Citation Database, citation-identity indicators for the "211" agricultural and forestry university libraries over 2009-2011 are computed and their inter-citation identity network is constructed; their normalized (h, g, R) indices are calculated, and the correlations between the citation-identity indicators and these indices are analysed. The results show that citation identity reflects the libraries' research characteristics and preferences as well as their absorption of foreign research results; the foreign-institution citation ratio, mean citing half-life, and mean citing immediacy index all correlate positively with the normalized (h, g, R) indices, with the mean citing immediacy index showing the strongest correlation.
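The normalized (h, g, R) indices mentioned above build on the standard h- and g-index definitions. As a minimal sketch of those two base indicators (the citation counts below are invented, not data from the study):

```python
# Illustrative sketch of the h- and g-index; the normalized variants in
# the study rescale these, which is not shown here.

def h_index(citations):
    """Largest h such that h papers each have at least h citations."""
    cites = sorted(citations, reverse=True)
    h = 0
    for rank, c in enumerate(cites, start=1):
        if c >= rank:
            h = rank
    return h

def g_index(citations):
    """Largest g such that the top g papers together have >= g^2 citations."""
    cites = sorted(citations, reverse=True)
    total, g = 0, 0
    for rank, c in enumerate(cites, start=1):
        total += c
        if total >= rank * rank:
            g = rank
    return g

cites = [10, 8, 5, 4, 3]
print(h_index(cites))  # 4
print(g_index(cites))  # 5
```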

2.
The outgrow index measures the extent to which an article outgrows, in terms of citations, the references on which it is based. In this article, three types of time series of outgrow indices and one outgrow index matrix are introduced, with examples illustrating the newly introduced concepts. These time series expand the toolbox for citation analysis by focusing on a specific subnetwork of the global citation network. Citation analysis has three application areas: information retrieval, research evaluation, and structural citation network studies; this contribution is explicitly placed among the structural network studies.
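One plausible reading of an outgrow-style measure, sketched with invented numbers, is the article's citation count divided by the mean citation count of its references; the article's actual definition and its time-series variants may differ:

```python
# Hedged sketch of an outgrow-style ratio: values above 1 mean the
# article has outgrown (in citations) the references it builds on.
# This is an assumed simplification, not the paper's exact formula.

def outgrow_ratio(article_citations, reference_citations):
    mean_ref = sum(reference_citations) / len(reference_citations)
    return article_citations / mean_ref

# An article with 60 citations whose references have 10, 20 and 30:
print(outgrow_ratio(60, [10, 20, 30]))  # 3.0
```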

3.
Previous research has shown that citation data from different types of Web sources can potentially be used for research evaluation. Here we introduce a new combined Integrated Online Impact (IOI) indicator. For a case study, we selected research articles published in the Journal of the American Society for Information Science & Technology (JASIST) and Scientometrics in 2003. We compared the citation counts from Web of Science (WoS) and Scopus with five online sources of citation data: Google Scholar, Google Books, Google Blogs, PowerPoint presentations, and course reading lists. The mean and median IOI were nearly twice as high as both WoS and Scopus, confirming that online citations are sufficiently numerous to be useful for the impact assessment of research. We also found significant correlations between conventional and online impact indicators, confirming that both assess something similar in scholarly communication. Further analysis showed that the percentages of unique Google Scholar citations outside the WoS were 73% and 60% for the articles published in JASIST and Scientometrics, respectively. An important conclusion is that in subject areas where wider types of intellectual-impact indicators outside the WoS and Scopus databases are needed for research evaluation, IOI can help monitor research performance.

4.
The purpose of scientific research is to create knowledge and to apply theoretical results to practical problems in China's social, economic, and cultural development. Publishing in international journals lets more international peers learn of China's latest research results and earns the country greater international influence, so over the past two decades and more, SCI papers have become an important indicator in China's research assessment. Under this evaluation orientation, Chinese scholars now publish the largest number of international papers in the world, and heavy citation by domestic peers has pushed the citation count of China's international papers to second place worldwide. This paper extracts Web of Science papers and their citations from 1990 to 2015 and analyses country-level self-citation across countries and disciplines. The study finds that, once self-citations by domestic peers are excluded, the real international influence of China's international papers remains limited: apart from a few disciplines such as clinical medicine and physics, most disciplines still fall below the global average.

5.
It is widely accepted that data is fundamental for research and should therefore be cited like textual scientific publications. However, issues like data citation, and handling and counting the credit generated by such citations, remain open research questions. Data credit is a new measure of value built on top of data citation, which enables us to annotate data with a value representing its importance. Data credit can be considered a new tool that, together with traditional citations, helps to recognize the value of data and its creators in a world that is ever more dependent on data. In this paper we define data credit distribution (DCD) as a process by which the credit generated by citations is given to the individual elements of a database. We focus on a scenario where a paper cites data from a database obtained by issuing a query. The citation generates credit, which is then divided among the database entities responsible for generating the query output. One key aspect of our work is to credit not only the explicitly cited entities, but also those that contribute to their existence yet are not accounted for in the query output. We propose a credit distribution strategy (CDS) based on data provenance and implement a system that uses the information provided by data citations to distribute the credit in a relational database accordingly. As a use case and for evaluation purposes, we adopt the IUPHAR/BPS Guide to Pharmacology (GtoPdb), a curated relational database. We show how credit can be used to highlight areas of the database that are frequently used. Moreover, we underline how credit rewards data and authors based on their research impact, not merely on the number of citations. This can lead to the design of new bibliometrics for data citations.
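The paper's provenance-based strategy is more elaborate, but the core distribution step can be sketched as a proportional split of one citation's credit across the responsible tuples. The responsibility weights and tuple names below are assumptions for illustration only:

```python
# Hedged sketch: split the credit of one data citation among database
# tuples in proportion to assumed responsibility weights. The actual
# CDS in the paper derives these shares from data provenance.

def distribute_credit(credit, weights):
    """Return each tuple's share of `credit`, proportional to its weight."""
    total = sum(weights.values())
    return {tup: credit * w / total for tup, w in weights.items()}

# Hypothetical tuples t1..t3; t1 contributed twice as much provenance.
shares = distribute_credit(1.0, {"t1": 2, "t2": 1, "t3": 1})
print(shares)
```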

6.
The ability to predict the long-term impact of a scientific article soon after its publication is of great value for the accurate assessment of research performance. In this work we test the hypothesis that good predictions of long-term citation counts can be obtained by combining a publication's early citations with the impact factor of the hosting journal. The test is performed on a corpus of 123,128 WoS publications authored by Italian scientists, using linear regression models. The average accuracy of the prediction is good for citation time windows above two years, decreases for lowly cited publications, and varies across disciplines. As expected, the role of the impact factor in the combination becomes negligible after only two years from publication.
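The core of such a model is an ordinary least-squares fit of long-term citations on early citations (the full model in the abstract also includes the journal impact factor as a second predictor). A minimal one-predictor sketch with invented data:

```python
# Hedged sketch: OLS fit of long-term citations on early citations.
# Data are illustrative, not from the 123,128-publication corpus.

def fit_line(x, y):
    """Ordinary least squares for y = a + b*x; returns (a, b)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    b = sxy / sxx
    a = my - b * mx
    return a, b

early = [2, 5, 8, 12]        # citations in the first two years
longterm = [10, 30, 35, 70]  # citations after, say, ten years
a, b = fit_line(early, longterm)
predicted = a + b * 6        # prediction for a paper with 6 early citations
print(round(predicted, 1))
```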

7.
徐琳宏, 丁堃, 陈娜, 李冰. 《情报学报》, 2020, 39(1): 25-37
Content-based citation sentiment analysis overcomes the traditional practice of treating all citations as equivalent counts, and it is an important research focus in citation content analysis. It depends, however, on annotated datasets, and the current lack of large-scale, high-quality citation sentiment corpora severely constrains the field. Building on an analysis of how citation sentiment is expressed, this paper proposes an annotation scheme suited to representing citation sentiment and describes the techniques and methods of corpus construction in detail. Using a combined human-machine annotation strategy and a mature citation annotation system, a relatively large citation sentiment corpus of Chinese literature was built. Statistics show that in the fields of Chinese information processing and science and technology management, positive and negative citations account for 22% and 6% of all citations, respectively, and the kappa value of the sentiment annotation reaches 0.852, indicating that the corpus objectively reflects authors' sentiment orientation and can support research on paper evaluation, citation network analysis, sentiment analysis, and related areas.
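The kappa value of 0.852 reported above is an inter-annotator agreement statistic; assuming it is Cohen's kappa for two annotators (the abstract does not say which variant), it can be computed as follows, with invented labels:

```python
# Hedged sketch of Cohen's kappa: observed agreement corrected for the
# agreement expected by chance. Labels are illustrative only.

def cohens_kappa(a, b):
    """Cohen's kappa for two annotators' label sequences."""
    n = len(a)
    labels = set(a) | set(b)
    p_obs = sum(x == y for x, y in zip(a, b)) / n
    p_exp = sum((a.count(l) / n) * (b.count(l) / n) for l in labels)
    return (p_obs - p_exp) / (1 - p_exp)

ann1 = ["pos", "pos", "neg", "neu", "pos", "neg"]
ann2 = ["pos", "neu", "neg", "neu", "pos", "neg"]
print(round(cohens_kappa(ann1, ann2), 2))  # 0.75
```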

8.
The article provides details of a faculty citation analysis study conducted at the University of Nevada, Las Vegas. The study analyzed faculty citations for publications published from 2002 to 2010. The citation analysis was used for a collection assessment project and continues to be used, along with other data, to help inform collection management decisions.

9.
To address the problems in evaluating the influence of academic papers within current research assessment systems, this paper proposes an "objective peer review" method based on citation analysis: building on citation counts, high-quality evaluative citations are screened out, and their concrete evaluative content is used to assess the influence and academic value of a paper objectively. The method is demonstrated and explained in detail through a case study.

10.
The aim of this study was to develop a model to evaluate the retrieval quality of search queries performed by Dutch general practitioners using the printed Index Medicus, MEDLINE on CD-ROM, and MEDLINE through GRATEFUL MED. Four search queries related to general practice were formulated for a continuing medical education course in literature searching. The potentially relevant citations selected by the course instructor and the 103 course participants together served as the basic set for the three judges to evaluate for (a) relevance and (b) quality, the latter based on journal ranking, research design, and publication type. Relevant individual citations received a citation quality score from 1 (low) to 4 (high). The overall search quality was expressed in a formula that included the individual quality scores of the selected and missed relevant citations, and the number of selected non-relevant citations. The outcome measures were the number and quality of relevant citations and the agreement between the judges. Out of 864 citations, 139 were assessed as relevant, of which 44 received an individual citation quality score of 1, 76 of 2, 19 of 3, and none of 4. The level of agreement between the judges was 68% for the relevant citations and 88% for the non-relevant citations. We describe a model for the evaluation of search queries based not only on the relevance but also on the quality of the citations retrieved. With adaptation, this model could be generalized to other professional users and to other bibliographic sources.
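The abstract does not give the formula itself, but a score in its spirit can be sketched: reward the quality scores of the selected relevant citations, discount missed relevant ones via the denominator, and penalize selected non-relevant ones. The weighting below is an assumption for illustration, not the paper's formula:

```python
# Hedged sketch of an overall search-quality score in the spirit of
# the study: invented weighting, invented example scores (1-4 scale).

def search_quality(selected_rel_scores, missed_rel_scores, n_nonrel):
    """Quality gained minus a non-relevance penalty, over quality possible."""
    gained = sum(selected_rel_scores)
    possible = gained + sum(missed_rel_scores)
    return (gained - n_nonrel) / possible

# A searcher who found citations scoring 3, 2, 2; missed ones scoring
# 2 and 1; and selected 1 non-relevant citation:
print(search_quality([3, 2, 2], [2, 1], 1))  # 0.6
```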

11.
Journal of Informetrics, 2019, 13(2): 485-499
With the growing number of published scientific papers worldwide, the need for evaluation and quality-assessment methods for research papers is increasing. Scientific fields such as scientometrics, informetrics, and bibliometrics establish quantified analysis methods and measurements for evaluating scientific papers. In this area, an important problem is predicting the future influence of a published paper; in particular, early discrimination between influential and insignificant papers may find important applications. One of the most important metrics here is the number of citations to the paper, since this metric is widely utilized in the evaluation of scientific publications and, moreover, serves as the basis for many other metrics such as the h-index. In this paper, we propose a novel method for predicting the long-term citations of a paper based on the number of its citations in the first few years after publication. To train a citation count prediction model, we employed an artificial neural network, a powerful machine learning tool with growing applications in many domains including image and text processing. The empirical experiments show that our proposed method outperforms state-of-the-art methods with respect to prediction accuracy in both yearly and total prediction of the number of citations.

12.
Identifying the future influential papers among newly published ones is an important yet challenging issue in bibliometrics. As newly published papers have no or limited citation history, linear extrapolation of their citation counts, motivated by the well-known preferential attachment mechanism, is not applicable. We translate the recently introduced notion of discoverers to the citation network setting, and show that there are authors who frequently cite recent papers that become highly cited in the future; these authors are referred to as discoverers. We develop a method for the early identification of highly cited papers based on the early citations from discoverers. The results show that the identified discoverers have a consistent citing pattern over time, and that their early citations can be used as a valuable indicator to predict the future citation counts of a paper. The discoverers themselves are potential future outstanding researchers, as they receive more citations than average.

13.
In this work we investigate the sensitivity of individual researchers' productivity rankings to the time of citation observation. The analysis is based on the research products of the 2001-2003 triennium for all research staff of Italian universities in the hard sciences, with the year of citation observation varying from 2004 to 2008. The 2008 rankings list is assumed to be the most accurate, as citations have had the longest time to accumulate and thus represent the best possible proxy of impact. By comparing each year's rankings list against the 2008 benchmark, we provide policy-makers and research-organization managers with a measure of the trade-off between the timeliness of an evaluation and the accuracy of the resulting performance rankings. The results show that as the citation window varies, the rates of inaccuracy vary across researchers' disciplines; the inaccuracy proves negligible for Physics, Biology, and Medicine.

14.
Across the various scientific domains, significant differences occur with respect to research publishing formats, frequencies, and citing practices, the nature and organisation of research, and the number and impact of a given domain's academic journals. Consequently, differences occur in the citations and h-indices of researchers. This paper attempts to identify cross-domain differences using quantitative and qualitative measures. The study focuses on the relationships among citations, most-cited papers, and h-indices across domains and for research group sizes. The analysis is based on the research output of approximately 10,000 researchers in Slovenia, of which we focus on 6536 researchers working in 284 research group programmes in 2008-2012. As comparative measures of cross-domain research output, we propose the research impact cube (RIC) representation and the analysis of most-cited papers, highest impact factors, and citation distribution graphs (Lorenz curves). The analysis of Lotka's model resulted in the proposal of a binary citation frequencies (BCF) distribution model that describes publishing frequencies well. The results may serve as a model to measure, compare, and evaluate fields of science at the global, national, and research-community level, to streamline research policies, and to evaluate progress over a definite time period.
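A Lorenz curve for citation distributions, as used above, plots the cumulative share of citations against the cumulative share of papers, ordered from least to most cited. A minimal sketch with invented counts:

```python
# Hedged sketch: Lorenz-curve points for a citation distribution.
# A perfectly equal distribution traces the diagonal; skewed citation
# data (as is typical) bows well below it. Counts are illustrative.

def lorenz_points(values):
    """(cumulative paper share, cumulative citation share) pairs."""
    vals = sorted(values)
    total = sum(vals)
    n = len(vals)
    points, cum = [(0.0, 0.0)], 0
    for i, v in enumerate(vals, start=1):
        cum += v
        points.append((i / n, cum / total))
    return points

# Five papers, one of which collects most of the citations:
print(lorenz_points([0, 0, 1, 3, 16]))
```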

15.
This paper introduces a new impact indicator for the research effort of a university, nh3. The number of documents or the number of citations obtained by an institution is frequently used in international rankings of institutions. However, these measures depend strongly on size, which is inducing mergers whose apparent sole goal is to improve research rankings. The alternative is to use the ratio of the two measures, the mean citation rate, which is size-independent but has been shown to fluctuate over time because it depends on a very small number of documents with extremely good citation performance. In recent years, the popularity of the Hirsch index as an indicator of the research performance of individual researchers has led to its application to journals and institutions. However, the original aim of the h index, giving a mixed measure of the number of documents published and their impact as measured by the citations collected over time, is wholly undesirable for institutions, as overall size may be considered irrelevant for the impact evaluation of research. Furthermore, the h index applied to institutions tends to retain a very small number of documents, making all other research production irrelevant to the indicator. The nh3 index proposed here is designed to measure solely the impact of research, in a way that is independent of the size of the institution, and is made relatively stable by making a 20-year estimate of the citations of the documents produced in a single year.

16.
An empirical study of the validity of the p index for talent evaluation    Cited: 2 (self-citations: 0, by others: 2)
The h index works well for evaluating scholars with many publications and many citations, but it is flawed for scholars with few publications and high citations, and its values tend to cluster, making scholars hard to distinguish. The p index shares the h index's dimensionality for evaluating scholarly research performance, but it considers not only a scholar's citation count (C) but also a research quality indicator, the mean citation rate (C/N). Taking 49 experts in library and information science and documentation as an example, the paper compares the experts' publication counts (N), citation counts (C), mean citation rates, and h, g, and p indices, and performs a correlation analysis. Conclusion: the p index is superior to the existing h and g indices, evaluates more reasonably, and should see wider use.
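As commonly defined (Prathap's formulation, which the abstract appears to follow given its use of C and C/N), the p index is the cube root of C multiplied by the mean citation rate, i.e. p = (C²/N)^(1/3). A minimal sketch with invented numbers:

```python
# Hedged sketch of the p index as commonly defined:
# p = (C * (C/N)) ** (1/3) = (C^2 / N) ** (1/3).
# Example values are invented, not from the 49-expert dataset.

def p_index(total_citations, n_papers):
    return (total_citations ** 2 / n_papers) ** (1 / 3)

# A scholar with 1000 citations over 50 papers:
print(round(p_index(1000, 50), 2))  # cube root of 20000, ~27.14
```

Because it rewards C/N as well as C, two scholars with equal citation totals but different output sizes get different p values, which is exactly the discrimination the abstract says the h index lacks.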

17.
To address the problem of evaluating documents by citation counts alone, this paper introduces the power index from social network analysis, treating all documents as a network of citing and cited relations. It describes in detail the concepts of social network analysis and its power index and how to apply this indicator to evaluate documents, with an illustrative example. The aim is to analyse and evaluate documents from the perspective of citation quality and the continuity of scientific research, offering a new approach to citation analysis and evaluation.

18.
This study provides a conceptual overview of the literature dealing with the process of citing documents (focusing on the literature of the recent decade). It presents theories that have been proposed to explain the citation process, and studies that have empirically analyzed this process. The overview is referred to as conceptual because it is structured around core elements of the citation process: the context of the cited document, the processes from selection to citation of documents, and the context of the citing document. These core elements are presented in a schematic representation. The overview can be used to find answers to basic questions about the practice of citing documents. Besides an understanding of the citing process, it provides basic information for the proper application of citations in research evaluation.

19.
Three basic elements in assessing the citation quality of references    Cited: 1 (self-citations: 0, by others: 1)
朱大明. 《编辑学报》, 2015, 27(4): 334-335
To give references their full role in scholarly argumentation, and to ensure the accuracy, authenticity, and validity of academic evaluation based on citation metrics, the citation quality of references should be assessed during manuscript review. The paper proposes three basic elements for assessing the citation quality of references, namely citation format, cited content, and citing function, and briefly outlines the key review points for each.

20.
A correlation study of peer review and bibliometrics based on F1000 and WoS    Cited: 1 (self-citations: 1, by others: 0)
To compare the validity of, and correlation between, peer review and bibliometric methods in scientific evaluation, this study draws on F1000 and the Web of Science database and uses SPSS 16.0 to correlate the F1000 factors of nearly 2000 papers with Web of Science indicators. The results show a significant positive correlation between F1000 factors and citation counts within the statistical window, yet some papers with very high F1000 factors are not highly cited, and vice versa. The conclusion: statistically, bibliometric indicators and peer review results correlate positively, but neither peer review nor bibliometrics alone is an adequate standard for scientific evaluation; combining quantitative indicators, exemplified by citation analysis, with peer review will be the mainstream of future scientific evaluation.
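The correlation in such studies is typically a rank correlation, since F1000 factors and citation counts are ordinal and skewed. A minimal Spearman sketch (the study itself used SPSS 16.0; ties are ignored here for simplicity, and the data are invented):

```python
# Hedged sketch of Spearman's rank correlation via the classic
# 1 - 6*sum(d^2)/(n*(n^2-1)) formula; assumes no tied values.

def spearman(x, y):
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order, start=1):
            r[i] = rank
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))

f1000 = [1, 2, 3, 6, 8]       # hypothetical F1000 factors
cites = [5, 9, 20, 35, 60]    # hypothetical WoS citation counts
print(spearman(f1000, cites))  # perfectly monotone -> 1.0
```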
