Similar Literature
20 similar documents found.
1.
[Purpose/Significance] To analyse the feasibility of the Category Normalized Citation Impact in scientific evaluation and its correlation with peer review, providing a reference for responsible metrics and for peer review supported by them. [Method/Process] Using the F1000 and InCites platforms, correlation analyses were run between the CNCI (Category Normalized Citation Impact) and citation counts of 29,850 cell-biology and 30,326 biotechnology publications, and a Spearman correlation test was performed between the CNCI and F1000 scores of 956 of the cell-biology papers. [Result/Conclusion] Statistically, CNCI is highly positively correlated with citation counts and significantly positively correlated with F1000 scores, although cases where the two diverge also exist. CNCI can therefore reflect peer-review outcomes to some extent, can substitute for the attribution of academic-impact credit, and is suitable for cross-disciplinary comparison; but either peer review or CNCI alone is a biased standard for scientific evaluation, and peer review supported by a new generation of responsible metrics, represented by CNCI, is likely to become the mainstream of future scientific evaluation.
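For illustration, a minimal Python sketch of the Spearman test used in this study; the CNCI values and F1000 scores below are hypothetical, not the study's data.

```python
# Minimal sketch: Spearman rank correlation between CNCI values and
# F1000 scores, as tested on the 956 cell-biology papers.
# All values below are hypothetical illustration data.
from scipy.stats import spearmanr

cnci_values  = [0.8, 1.2, 2.5, 0.4, 3.1, 1.0, 0.9, 2.2]  # hypothetical CNCI
f1000_scores = [1, 2, 3, 1, 3, 2, 1, 2]                   # hypothetical F1000 scores

rho, p_value = spearmanr(cnci_values, f1000_scores)
print(f"Spearman rho = {rho:.3f}, p = {p_value:.3f}")
```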

2.
Recently, two new indicators (Equalized Mean-based Normalized Proportion Cited, EMNPC; Mean-based Normalized Proportion Cited, MNPC) were proposed that are intended for sparse scientometric data, e.g., alternative metrics (altmetrics). The indicators compare the proportion of mentioned papers (e.g., on Facebook) of a unit (e.g., a researcher or institution) with the proportion of mentioned papers in the corresponding fields and publication years (the expected values). In this study, we propose a third indicator (Mantel-Haenszel quotient, MHq) belonging to the same indicator family. The MHq is based on Mantel-Haenszel (MH) analysis, an established statistical method for the comparison of proportions. We test (using citations and assessments by peers, i.e., F1000Prime recommendations) whether the three indicators can distinguish between different quality levels as defined on the basis of the assessments by peers; that is, we test their convergent validity. We find that the MHq is able to distinguish between the quality levels in most cases, while MNPC and EMNPC are not. Since the MHq is shown in this study to be a valid indicator, we apply it to six types of zero-inflated altmetrics data and test whether different altmetrics sources are related to quality. The results for the various altmetrics demonstrate that the relationship between altmetrics (Wikipedia, Facebook, blogs, and news data) and assessments by peers is not as strong as the relationship between citations and assessments by peers; in fact, the relationship between citations and peer assessments is about two to three times stronger than the association between altmetrics and assessments by peers.
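A minimal sketch of the Mantel-Haenszel pooling the MHq builds on: the standard MH common-ratio estimator over (field, year) strata. All counts are hypothetical, and the published MHq may differ in detail.

```python
# Minimal sketch of a Mantel-Haenszel pooled ratio of proportions.
# Each stratum is one (field, publication year) cell; within it the unit's
# proportion of mentioned papers is compared with the reference set's.

def mantel_haenszel_ratio(strata):
    """strata: list of (a, n1, c, n2) tuples:
    a = unit's mentioned papers,     n1 = unit's papers,
    c = reference mentioned papers,  n2 = reference papers."""
    num = sum(a * n2 / (n1 + n2) for a, n1, c, n2 in strata)
    den = sum(c * n1 / (n1 + n2) for a, n1, c, n2 in strata)
    return num / den

# two hypothetical (field, year) strata
strata = [(12, 40, 300, 2000), (5, 25, 150, 1800)]
print(f"MH ratio = {mantel_haenszel_ratio(strata):.2f}")  # >1: above expectation
```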

3.
The main objective of this study is to describe the life cycle of altmetric and bibliometric indicators in a sample of publications. Altmetrics (Downloads, Views, Readers, Tweets, and Blog mentions) and bibliometric counts (Citations) of 5185 publications (19,186 observations) were extracted from PlumX to observe their distribution according to publication age (in this study, indicator names are capitalized to distinguish them from general language). Correlations between these metrics were calculated month by month to observe the evolution of these relationships. The results showed that mention metrics (Tweets and Blog mentions) become available earliest and have the shortest life cycle. Readers have the highest prevalence and the second-fastest growth. Views and Downloads show continuous growth, making them the indicators with the longest life cycles. Finally, Citations are the slowest indicators and have a low prevalence. Correlations show a strong relationship between mention metrics and Readers and Downloads, and between Readers and Citations. These results enable us to create a schematic diagram of the relationships between these metrics from a longitudinal view.
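A minimal pandas sketch of the month-by-month correlation design, assuming hypothetical monthly observations with columns age_months, readers, and citations (not the PlumX extraction itself).

```python
# Minimal sketch: Spearman correlation between two metrics, computed
# separately for each month of publication age, as in the study's
# longitudinal design. The DataFrame below is hypothetical.
import pandas as pd

df = pd.DataFrame({
    "age_months": [1, 1, 1, 2, 2, 2, 3, 3, 3],
    "readers":    [5, 0, 2, 9, 1, 4, 14, 3, 6],
    "citations":  [0, 0, 0, 1, 0, 0, 2, 0, 1],
})

monthly_rho = (
    df.groupby("age_months")
      .apply(lambda g: g["readers"].corr(g["citations"], method="spearman"))
)
print(monthly_rho)  # one correlation per month of publication age
```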

4.
InCites Essential Science Indicators is increasingly used to identify top-performing research and evaluate the impact of institutes. Unfortunately, our study shows that ESI indicators, as well as other normalized citation indicators, have the following flaws. First, the publication month affects a paper's probability of becoming a Highly Cited Paper (HCP): papers published in the earlier months of a year are more likely to accumulate enough citations to rank in the top 1% than those published later in the year. Second, papers with longer online-to-print delays have a clear advantage in being selected as HCPs. Third, research-field normalization is problematic: different research fields have different citation thresholds for HCPs, making a journal's field classification consequential. In addition, ESI's uniform thresholds for both articles and reviews undermine the reliability of HCP selection because, on average, reviews tend to have higher citation rates than articles. ESI's selection of HCPs gives an intuitive feel for the problems of normalized citation impact indicators, such as those provided in InCites and SciVal.
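A minimal sketch of the top-1% HCP mechanism the study probes: compute a field-year citation threshold and flag papers that reach it. The simulated counts and the percentile call are illustrative assumptions, not ESI's exact procedure.

```python
# Minimal sketch of ESI-style Highly Cited Paper selection: a paper is an
# HCP if its citation count reaches the top-1% threshold of its field and
# publication year. The simulated field-year counts are hypothetical.
import numpy as np

def hcp_threshold(citation_counts, percentile=99):
    """Citation count needed to enter the top 1% of a field-year set."""
    return np.percentile(citation_counts, percentile)

# skewed, zero-heavy citation counts typical of a field-year set
field_year_citations = np.random.default_rng(0).negative_binomial(1, 0.05, 5000)
threshold = hcp_threshold(field_year_citations)
paper_citations = 180
print(f"threshold = {threshold:.0f}, is HCP: {paper_citations >= threshold}")
```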

5.
The new web-based academic communication platforms not only enable researchers to better advertise their academic outputs, making them more visible than ever before, but also provide a wide range of metrics to help authors better understand the impact their work is making. This study has three objectives: a) to analyse the uptake of some of the most popular platforms (Google Scholar Citations, ResearcherID, ResearchGate, Mendeley, and Twitter) by a specific scientific community (bibliometrics, scientometrics, informetrics, webometrics, and altmetrics); b) to compare the metrics available from each platform; and c) to determine the meaning of all these new metrics. To do this, the data available in these platforms about a sample of 811 authors (researchers in bibliometrics for whom a public Google Scholar Citations profile was found) were extracted, and a total of 31 metrics were analysed. The results show that many of the analysed researchers had a profile only in Google Scholar Citations (159) or only in Google Scholar Citations and ResearchGate (142). We find two kinds of online-impact metrics: first, metrics related to connectivity (followers), and second, metrics associated with academic impact; this second group can be further divided into usage metrics (reads, views) and citation metrics. The results suggest that Google Scholar Citations is the source that provides the most comprehensive citation-related data, whereas Twitter stands out in connectivity-related metrics.

6.
Combining citation-based evaluation with peer review, 131 papers with peer-review indicators were randomly drawn from the F1000 database, and the common webometric indicators of each paper were retrieved via WoS, JCR, ESI, and ImpactStory; webometric indicators associated with peer evaluation were identified and substituted for peer review in a comprehensive model of academic-impact evaluation. The results show that comprehensive evaluation can offset the shortcomings of evaluation with a single type of indicator; using relative indicators and normalization in practice removes discipline-specific factors and differences in journal numbers, making the evaluation comparable across disciplines and over time; and analysing the correlations and similarities among indicators allows indicators to be simplified, substituted, or extended. Adjusting the indicator weights highlights the role of peer review in the evaluation model and keeps the model operationally feasible.

7.
陈斯斯  刘春丽 《情报学报》2022,41(2):142-154
Against the background of major public-health emergencies, the clinical application value of scientific papers has been given unprecedented weight, yet how to evaluate this type of impact, and which indicators are effective, still needs in-depth exploration. The Citation Laureates are a citation-count-based prediction of the Nobel Prize in Physiology or Medicine, and papers that were not so predicted but whose authors ultimately won may indicate that traditional citation indicators cannot detect a paper's latent clinical impact. This paper introduces the approximate potential to translate (APT) score proposed by the US NIH (National Institutes of Health), takes the publication sets of Nobel laureates in Physiology or Medicine as the sample, and compares the authors predicted by the Citation Laureates with those not predicted on seven indicators: total citations, weighted RCR (relative citation ratio), citations from clinical papers, and mean APT, Human, Animal, and Mol/Cell (Molecular/Cellular) scores, together with differences in the translational triangle model and the correlations among the indicators. The two groups differed significantly in total citations, weighted RCR, and mean Mol/Cell scores; the non-predicted group's Human and Animal means and medians were both higher than...

8.
An expert ranking of forestry journals was compared with Journal Impact Factors and h-indices computed from the ISI Web of Science and internet-based data. Citations reported by Google Scholar offer an efficient way to rank all journals objectively, in a manner consistent with other indicators. This h-index exhibited a high correlation with the Journal Impact Factor (r = 0.92) but is not confined to journals selected by any particular commercial provider. A ranking of 180 forestry journals based on this index is presented.

9.
F1000 is a new online system for evaluating research literature, providing a systematic, structured expert-review mechanism. Compared with the high-impact literature identified by citation counts in ISI Web of Science, F1000's expert-review mechanism can promptly and accurately recommend outstanding papers, supplying recommendation comments and importance ratings; it is a highly useful reference for judging literature quality and helps researchers quickly select...

10.
A new size-independent indicator of scientific journal prestige, the SJR2 indicator, is proposed. This indicator takes into account not only the prestige of the citing journal but also its closeness to the cited journal, measured by the cosine of the angle between the two journals' cocitation profile vectors. To eliminate the size effect, the accumulated prestige is divided by the fraction of the journal's citable documents, removing the decreasing tendency of this type of indicator and giving meaning to the scores. Its method of computation is described, and the results of its implementation on the Scopus 2008 dataset are compared with those of an ad hoc Journal Impact Factor, JIF(3y), and SNIP, both overall and within specific scientific areas. The SJR2, SNIP, and JIF distributions were all found to fit a logarithmic law well. Although the three metrics were strongly correlated, there were major changes in rank. In addition, the SJR2 was more evenly distributed across Subject Areas than the JIF and almost as evenly as SNIP, and more evenly than both at the lower level of Specific Subject Areas. The incorporation of the cosine increased the prestige flows between thematically close journals.
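A minimal sketch of the closeness term described here: the cosine between two journals' cocitation profiles. The vectors are hypothetical, and the iterative prestige computation of SJR2 is not reproduced.

```python
# Minimal sketch of the SJR2 closeness weight: the cosine of the angle
# between two journals' cocitation profile vectors. Profiles are hypothetical.
import numpy as np

def cocitation_cosine(profile_a, profile_b):
    a, b = np.asarray(profile_a, float), np.asarray(profile_b, float)
    return a.dot(b) / (np.linalg.norm(a) * np.linalg.norm(b))

journal_a = [120, 30, 0, 5, 60]   # cocitation counts with journals 1..5
journal_b = [100, 25, 2, 0, 75]
print(f"cosine closeness = {cocitation_cosine(journal_a, journal_b):.3f}")
```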

11.
By comparing F1000 scores with citation counts and with journal evaluation indicators, and running correlation analyses on the main indicators, the correlation between peer review and citation analysis is tested. The results show that F1000 scores are positively correlated with citation counts, i.e., expert scores and citation counts move in the same direction; nevertheless, some papers with high expert scores are cited little, and some with low expert scores are cited heavily. The correlation analysis shows that, among the main indicators including the Eigenfactor and SNIP, the SJR and the IF correlate most strongly with expert-review scores.

12.
Can altmetric data be validly used for the measurement of societal impact? The current study seeks to answer this question with a comprehensive dataset (about 100,000 records) from very disparate sources (F1000, Altmetric, and an in-house database based on Web of Science). In the F1000 peer review system, experts attach particular tags to scientific papers which indicate whether a paper could be of interest for science or rather for other segments of society. The results show that papers with the tag "good for teaching" do achieve higher altmetric counts than papers without this tag, if the quality of the papers is controlled. At the same time, a higher citation count is shown especially by papers with a tag that is specifically scientifically oriented ("new finding"). The findings indicate that papers tailored for a readership outside the area of research should lead to societal impact. If altmetric data is to be used for the measurement of societal impact, the question of its normalization arises. In bibliometrics, citations are normalized for the papers' subject area and publication year. This study has taken a second analytic step involving a possible normalization of altmetric data. As the results show, there are particular scientific topics of special interest to a wide audience. Since these more or less interesting topics are not completely reflected in Thomson Reuters' journal sets, a normalization of altmetric data should be based not on the level of subject categories but on the level of topics.
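A minimal sketch of the mean-based field (or topic) normalization discussed here, applied to an altmetric count; the records and grouping keys are hypothetical.

```python
# Minimal sketch of mean-based normalization as used for citations in
# bibliometrics, applied here to an altmetric count: divide each paper's
# count by the mean count of its topic and publication year. The study
# argues the reference set should be topics rather than subject categories.
from collections import defaultdict

papers = [  # (topic, year, altmetric_count) -- hypothetical records
    ("climate", 2020, 40), ("climate", 2020, 10), ("topology", 2020, 2),
    ("topology", 2020, 0), ("climate", 2021, 25),
]

sums, counts = defaultdict(float), defaultdict(int)
for topic, year, c in papers:
    sums[(topic, year)] += c
    counts[(topic, year)] += 1

for topic, year, c in papers:
    mean = sums[(topic, year)] / counts[(topic, year)]
    score = c / mean if mean > 0 else 0.0  # 1.0 = exactly at expectation
    print(topic, year, f"normalized = {score:.2f}")
```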

13.
To examine the validity of peer review, influmetrics, and traditional bibliometric indicators in scientific evaluation, this paper draws on F1000, Mendeley, Web of Science, and Google Scholar and uses SPSS 19.0 to correlate, for 1,3 psychology and ecology papers, the peer-review results (F1000 scores), Mendeley readership statistics, journal impact factors, and citation counts from Web of Science and Google Scholar. The results show a low positive correlation among the peer-review results, the traditional citation indicators, and the influmetric indicators represented by Mendeley, which implies that these indicators view scientific evaluation from different angles and that evaluation in the digital age is multidimensional. In the psychology subset, the correlation between F1000 scores and the journal impact factor is close to zero, further confirming the serious divergence between the journal impact factor and the impact of individual papers; and the different correlation results for ecology and psychology reflect the differences between the natural and social sciences in scientific evaluation. 3 figs. 4 tabs. 10 refs.

14.
[Purpose/Significance] To compare the differences in impact of foreign-language academic e-books across disciplines, enrich e-book evaluation methods, and provide a reference for a discipline-specific scientific evaluation system for e-books. [Method/Process] Using Bookmetrix and taking academic e-books in economics/management and in education as the research objects, the correlation and consistency between traditional citation indicators, altmetric indicators (Mendeley readers, mentions, downloads), and book-review counts were analysed quantitatively, and the indicator differences between the two disciplines were compared with nonparametric tests. [Result/Conclusion] Citations, readers, and downloads show high indicator coverage. Two-sample Kolmogorov-Smirnov (K-S Z) tests show that citations and downloads differ significantly between economics/management and education e-books, while mentions, readers, and review counts do not (p = 0.05). Indicator correlations differ by discipline: the correlation between citations and Mendeley readers is higher for economics/management books than for education books. Citations measure the scholarly impact of academic e-books, whereas usage data (downloads, etc.) and altmetric data largely reflect their societal impact. Evaluating Chinese academic e-books should transform and integrate multi-source heterogeneous data, build a multi-indicator comprehensive evaluation system, and combine qualitative and quantitative methods to make the evaluation more comprehensive and scientific.
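A minimal sketch of the two-sample Kolmogorov-Smirnov test used for the discipline comparison; the download samples are hypothetical.

```python
# Minimal sketch: two-sample Kolmogorov-Smirnov test of whether download
# counts differ between two disciplines' e-books. Samples are hypothetical.
from scipy.stats import ks_2samp

downloads_econ_mgmt = [12, 40, 7, 55, 23, 9, 31, 18]
downloads_education = [3, 8, 15, 2, 6, 11, 4, 9]

stat, p_value = ks_2samp(downloads_econ_mgmt, downloads_education)
print(f"KS statistic = {stat:.3f}, p = {p_value:.3f}")  # p < 0.05 -> significant
```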

15.
The process of assessing individual authors should rely upon a proper aggregation of reliable and valid paper-quality metrics. Citations are merely one possible way to measure appreciation of publications. In this study we propose some new SJR- and SNIP-based indicators, which take into account not only the broadly conceived popularity of a paper (manifested by the number of citations) but also other factors, such as its potential or the quality of the papers that cite it. We explore the relation and correlation between different metrics and study how they affect the values of a real-valued generalized h-index calculated for 11 prominent scientometricians. We note that the h-index is a very unstable impact function, highly sensitive to scaling of its input elements. Our analysis is not only of theoretical significance: data scaling is often performed to normalize citations across disciplines, and uncontrolled application of this operation may lead to unfair decisions biased toward some groups. This puts the validity of author assessment and ranking using the h-index into question. Clearly, a good impact function to be used in practice should not be as sensitive to changes in the input data as the one analysed here.
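A minimal sketch of the sensitivity discussed here: the h-index is computed from hypothetical citation counts, then from the same counts rescaled by 0.5; the ranking is unchanged, but the h-index drops.

```python
# Minimal sketch of the instability the study describes: the h-index of a
# citation record changes under a simple rescaling of the input counts
# (e.g., a cross-discipline normalization). Counts are hypothetical.
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    cs = sorted(citations, reverse=True)
    return sum(1 for i, c in enumerate(cs, start=1) if c >= i)

citations = [25, 18, 12, 9, 7, 5, 4, 2, 1, 0]
scaled = [c * 0.5 for c in citations]  # rescaled; rank order unchanged

print(h_index(citations))  # 5
print(h_index(scaled))     # 4 -- same ranking, different h-index
```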

16.
Faculty of 1000 (www.facultyof1000.com) is a new online literature-awareness and assessment service for research papers, based on selections by 1400 of the world's top biologists, that combines metrics with judgement. The service offers a systematic and comprehensive form of post-publication peer review that focuses on the best papers regardless of the journal in which they are published. It is now possible to draw some conclusions about how this new form of post-publication peer review meets the needs of scientists, and of the organizations that fund them, in practice. In addition, inferences about the relative importance of journals are set out, which should also interest publishers and librarians.

17.
The journal impact factor (JIF), proposed by Garfield in 1955, is one of the most prominent and common measures of the prestige, position, and importance of a scientific journal. The JIF benefits from its comprehensibility, robustness, methodological reproducibility, simplicity, and rapid availability, but at the expense of serious technical and methodological flaws. The paper discusses two core problems with the JIF. First, citations of documents are generally not normally distributed, and the distribution is affected by outliers, which has serious consequences for the use of the mean value in the JIF calculation. Second, the JIF is affected by bias factors that have nothing to do with the prestige or quality of a journal (e.g., document type). To address these two problems, we suggest using McCall's area transformation and the Rubin Causal Model. Citation data for documents of all journals in the ISI Subject Category "Psychology, Mathematical" (Journal Citation Reports) are used to illustrate the proposal.
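For reference, the standard two-year JIF whose mean-based construction the paper critiques, written in our own notation (P for a journal's publication sets, C for citations received):

```latex
% Two-year JIF of journal J in year y: citations received in year y by
% items J published in the two preceding years, divided by the number of
% citable items published in those two years.
\[
\mathrm{JIF}_{y}(J) \;=\;
\frac{C_{y}\!\left(P_{y-1} \cup P_{y-2}\right)}
     {\left|P_{y-1}^{\mathrm{citable}}\right| + \left|P_{y-2}^{\mathrm{citable}}\right|}
\]
```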

18.
This paper examines F1000, a new online literature-evaluation system, and analyses its principles and mechanisms for evaluating academic papers. Compared with traditional academic evaluation systems, F1000's expert-review mechanism can promptly and accurately recommend outstanding literature and is a highly useful reference for assessing the academic level of young researchers in biomedicine.

19.
To identify the mainstream research and hot topics of library and information science in leading SCI journals over the past five years and to suggest potential research directions, this study extracts papers and their citation data from journals ranked highly in the 2018 Journal Citation Reports in Web of Science, applies the "DEAN" data-cleaning workflow, and uses CiteSpace to analyse the data and draw visualization maps. The state and output of international library and information science research are analysed from the perspectives of publishing institutions, authors, and research hotspots, representative papers of each cluster are examined, and the field's mainstream research hotspots are summarized.

20.
刘晶晶 《编辑学报》2017,29(2):200-203
Through web investigation and a literature review, combined with concrete cases such as Elsevier, Nature, PLoS, and F1000 Research, this paper studies the peer-review modes of open-access journals abroad. It concludes that structured peer review, open post-publication peer review, and independent third-party peer review each have advantages and drawbacks; they should complement one another so that peer review can be optimized and scientific journals can better play their role as gatekeepers and filters of academic quality.

