Similar Articles
20 similar articles found.
1.
Using 17 open-access journals published without interruption between 2000 and 2004 in the field of library and information science, this study compares the pattern of cited/citing hyperlinked references of Web-based scholarly electronic articles under various citation ranges in terms of language, file format, source and top-level domain. While the patterns of cited references were manually examined by counting the live hyperlinked cited references, the patterns of citing references were examined using the "Cited by" feature in Google Scholar. The analysis indicates that although language, top-level domain, and file format of citations did not differ significantly for articles under different citation ranges, sources of citation differed significantly for articles in different citation ranges. Articles with fewer citations mostly cite less-scholarly sources such as Web pages, whereas articles with a higher number of citations mostly cite scholarly sources such as journal articles. The findings suggest that 8 out of 17 OA journals in LIS have significant research impact in the scholarly communication process.
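A minimal sketch of the kind of independence test that could underlie the reported difference in citation sources across citation ranges; the contingency-table counts and category labels below are hypothetical, not the study's data.

```python
# Illustrative sketch: testing whether citation-source type depends on an
# article's citation range. The counts are made up for demonstration only.
from scipy.stats import chi2_contingency

# Rows: citation ranges (low, medium, high); columns: source types.
observed = [
    # journal articles, web pages, other
    [40, 120, 30],   # low-cited articles
    [80,  70, 25],   # medium-cited articles
    [150, 40, 20],   # highly cited articles
]

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi2={chi2:.2f}, dof={dof}, p={p_value:.4f}")
if p_value < 0.05:
    print("Citation-source composition differs across citation ranges.")
```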

2.
Although there are at least six dimensions of journal quality, Beall's List identifies predatory Open Access journals based almost entirely on their adherence to procedural norms. The journals identified as predatory by one standard may be regarded as legitimate by other standards. This study examines the scholarly impact of the 58 accounting journals on Beall's List, calculating citations per article and estimating CiteScore percentile using Google Scholar data for more than 13,000 articles published from 2015 through 2018. Most Beall's List accounting journals have only modest citation impact, with an average estimated CiteScore in the 11th percentile among Scopus accounting journals. Some have a substantially greater impact, however. Six journals have estimated CiteScores at or above the 25th percentile, and two have scores at or above the 30th percentile. Moreover, there is considerable variation in citation impact among the articles within each journal, and high-impact articles (cited up to several hundred times) have appeared even in some of the Beall's List accounting journals with low citation rates. Further research is needed to determine how well the citing journals are integrated into the disciplinary citation network—whether the citing journals are themselves reputable or not.

3.
Unlike Impact Factors (IF), Article Influence (AI) scores assign greater weight to citations that appear in highly cited journals. The natural sciences tend to have higher citation rates than the social sciences. We might therefore expect that relative to IF, AI overestimates the citation impact of social science journals in subfields that are related to (and presumably cited in) higher-impact natural science disciplines. This study evaluates that assertion through a set of simple and multiple regressions covering seven social science disciplines: anthropology, communication, economics, education, library and information science, psychology, and sociology. Contrary to expectations, AI underestimates 5IF (five-year Impact Factor) for journals in science-related subfields such as scientific communication, science education, scientometrics, biopsychology, and medical sociology. Journals in these subfields have low AI scores relative to their 5IF values. Moreover, the effect of science-related status is considerable—typically 0.60 5IF units or 0.50 SD. This effect is independent of the more general finding that AI scores underestimate 5IF for higher-impact journals. It is also independent of the very modest curvilinearity in the relationship between AI and 5IF.
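A minimal sketch of the simple and multiple regressions the abstract describes, predicting 5IF from AI plus a science-related indicator; the column names and example values are hypothetical, not the study's data.

```python
# Illustrative sketch: regressing five-year Impact Factor (5IF) on Article
# Influence (AI) score and a science-related subfield indicator.
# The data frame and column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

journals = pd.DataFrame({
    "five_if":         [2.1, 1.4, 3.0, 0.9, 2.6, 1.1, 1.8, 2.9],
    "ai_score":        [0.9, 0.6, 1.2, 0.4, 1.0, 0.5, 0.8, 1.3],
    "science_related": [1,   0,   1,   0,   1,   0,   0,   1],
})

# Simple regression: 5IF explained by AI alone.
simple = smf.ols("five_if ~ ai_score", data=journals).fit()

# Multiple regression: the science_related dummy tests whether such journals
# have systematically higher or lower 5IF at a given AI score.
multiple = smf.ols("five_if ~ ai_score + science_related", data=journals).fit()

print(simple.params)
print(multiple.params)  # coefficient on science_related is the effect of interest
```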

4.
This study examines how the social sciences' debate between qualitative and quantitative methods is reflected in the citation patterns of sociology journal articles. Citation analysis revealed that quantitative articles were more likely to cite journal articles than monographs, while qualitative articles were more likely to cite monographs than journals. Quantitative articles cited other articles from their own quantitative-dominated journals but virtually excluded citations to articles from qualitative journals, while qualitative articles cited articles from the quantitative-dominated journals as well as their own qualitative-specialized journals. Discussion and conclusions include this study's implications for library collection development.

5.
Past studies of citation coverage of Web of Science, Scopus, and Google Scholar do not demonstrate a consistent pattern that can be applied to the interdisciplinary mix of resources used in social work research. To determine the utility of these tools to social work researchers, an analysis of citing references to well-known social work journals was conducted. Web of Science had the fewest citing references and almost no variety in source format. Scopus provided higher citation counts, but the pattern of coverage was similar to Web of Science. Google Scholar provided substantially more citing references, but only a relatively small percentage of them were unique scholarly journal articles. The patterns of database coverage were replicated when the citations were broken out for each journal separately. The results of this analysis demonstrate the need to determine what resources constitute scholarly research and reflect the need for future researchers to consider the merits of each database before undertaking their research. This study will be of interest to scholars in library and information science as well as social work, as it facilitates a greater understanding of the strengths and limitations of each database and brings to light important considerations for conducting future research.

6.
Dissertations can be the single most important scholarly output of junior researchers. Whilst sets of journal articles are often evaluated with the help of citation counts from the Web of Science or Scopus, these do not index dissertations and so their impact is hard to assess. In response, this article introduces a new multistage method to extract Google Scholar citation counts for large collections of dissertations from repositories indexed by Google. The method was used to extract Google Scholar citation counts for 77,884 American doctoral dissertations from 2013 to 2017 via ProQuest, with a precision of over 95%. Some ProQuest dissertations that were dual indexed with other repositories could not be retrieved with ProQuest-specific searches but could be found with Google Scholar searches of the other repositories. The Google Scholar citation counts were then compared with Mendeley reader counts, a known source of scholarly-like impact data. A fifth of the dissertations had at least one citation recorded in Google Scholar and slightly fewer had at least one Mendeley reader. Based on numerical comparisons, the Mendeley reader counts seem to be more useful for impact assessment purposes for dissertations that are less than two years old, whilst Google Scholar citations are more useful for older dissertations, especially in social sciences, arts and humanities. Google Scholar citation counts may reflect a more scholarly type of impact than that of Mendeley reader counts because dissertations attract a substantial minority of their citations from other dissertations. In summary, the new method now makes it possible for research funders, institutions and others to systematically evaluate the impact of dissertations, although additional Google Scholar queries for other online repositories are needed to ensure comprehensive coverage.

7.
The purpose of this study was to identify objectively a hierarchical ranking of journals for health sciences librarians with faculty status. Such a guideline can indicate a journal's value for promotion and tenure consideration. Lists of recent research articles (1982-1986) in health sciences librarianship, and articles written by health sciences librarians, were compiled by searching Social SCISEARCH and MEDLINE. The journals publishing those articles are presented. Results show BMLA as the most prominent journal in the field. Therefore, citations from articles in BMLA from 1982 to 1986 were chosen as a sample for citation analysis. Citation analysis was employed to identify the most frequently cited journals. Some characteristics of the citations in BMLA are also discussed. As a result, a ranking of journals based on citation frequency was identified.

8.
Collection development in college and university libraries most often occurs using longstanding traditional selection methods, such as favorable book reviews or local user needs. This study uses citation analysis as a tool to select books for the social science book collection in one academic library and compares the circulation of books selected using traditional methods to that of books selected using citation analysis. The journal impact factor was used to determine those journals and authors cited the most in the disciplines of business, anthropology, education, political science, psychology, and sociology. If those authors published books, the books were purchased, and circulation data on the books were tabulated and compared to books chosen using traditional methods. Findings indicate that books purchased using traditional methods of selection circulated more, except when individual disciplines were measured. In the areas of business, political science, and psychology, there was no significant difference in circulation statistics, and together both the traditional and citation analysis methods accounted for circulation of nearly 95% of the social science collection. Since it is based on scholarly activity, citation analysis is a collection development method that could be used in all academic libraries.

9.
This guide describes several information sources that can be used to assist faculty interested in quantitative and qualitative assessments of journal reputation and scholarly impact: Journal Citation Reports, Eigenfactor, Google Scholar Metrics, Elsevier Journal Metrics, Excellence in Research for Australia, Cabell’s International, Web of Science, Scopus, Google Scholar, and Beall’s List. It also introduces the indicators most often used to represent citation impact: impact factor, article influence score, eigenfactor, h5-index, source normalized impact per paper, impact per publication, and SCImago journal rank. Methods of assessing the influence of individual articles are also presented, along with strategies for the identification of predatory or low-quality journals.

10.
11.
The promotion of scholarly journal articles to journalists and bloggers via the dissemination of press releases generates a positive impact on the number of citations that publicized journal articles receive. Research by John Wiley & Sons, Inc. shows that article-level publicity efforts and media coverage boosted downloads by an average of 1.8 times and increased citations by as much as 2.0–2.2 times in the articles analyzed in this study. We evaluated scholarly journal articles published in nearly 100 Wiley journals, which were also covered in 296 press releases. The results in this case study suggest a need for greater investment in media support for scholarly journals publishing research that sparks interest among a broad news audience, as it could increase citations.

12.
Influence and capital are two concepts used to evaluate scholarly outputs, and both can be measured with the Scholarly Capital Model. The model captures connectedness, venue representation, and ideational influence using centrality measures within a social network. This research used co-authorships and h-indices to investigate authors who published papers in the field of information behaviour between 1980 and 2015, as extracted from Web of Science. The findings show a relationship between authors' connectedness and venue (journal) representation: the venue appears to influence the chance of citation, and the prestige (centrality) of authors likely raises the citation counts of the journals in which they publish. The research also shows a significant positive relationship between venue representation and ideational influence, meaning that a research work published in a highly cited journal gains more visibility and receives more citations.
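A minimal sketch of measuring author connectedness as centrality in a co-authorship network, one way the connectedness concept could be operationalized; the choice of degree centrality and the toy author data are assumptions for illustration, not the study's method or data.

```python
# Illustrative sketch: author "connectedness" as centrality in a
# co-authorship network. Degree centrality and the toy data are assumed.
import networkx as nx
from itertools import combinations

# Hypothetical papers, each listed as its set of authors.
papers = [
    {"Author A", "Author B"},
    {"Author A", "Author C", "Author D"},
    {"Author B", "Author C"},
    {"Author E"},
]

G = nx.Graph()
for authors in papers:
    G.add_nodes_from(authors)
    G.add_edges_from(combinations(sorted(authors), 2))  # link every co-author pair

connectedness = nx.degree_centrality(G)
for author, score in sorted(connectedness.items(), key=lambda kv: -kv[1]):
    print(f"{author}: {score:.2f}")
```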

13.
Previous research has shown that citation data from different types of Web sources can potentially be used for research evaluation. Here we introduce a new combined Integrated Online Impact (IOI) indicator. For a case study, we selected research articles published in the Journal of the American Society for Information Science & Technology (JASIST) and Scientometrics in 2003. We compared the citation counts from Web of Science (WoS) and Scopus with five online sources of citation data, including Google Scholar, Google Books, Google Blogs, PowerPoint presentations and course reading lists. The mean and median IOI were nearly twice as high as both WoS and Scopus, confirming that online citations are sufficiently numerous to be useful for the impact assessment of research. We also found significant correlations between conventional and online impact indicators, confirming that both assess something similar in scholarly communication. Further analysis showed that the overall percentages of unique Google Scholar citations outside the WoS were 73% and 60% for the articles published in JASIST and Scientometrics, respectively. An important conclusion is that in subject areas where wider types of intellectual impact indicators outside the WoS and Scopus databases are needed for research evaluation, IOI can be used to help monitor research performance.
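A minimal sketch of how the share of Google Scholar citations falling outside WoS could be computed once the citing documents from each source have been identified; the document identifiers below are hypothetical.

```python
# Illustrative sketch: share of Google Scholar citations not found in the
# Web of Science, computed from sets of citing-document identifiers.
# The identifiers are hypothetical.
google_scholar_citers = {"doc01", "doc02", "doc03", "doc04", "doc05", "doc06"}
wos_citers = {"doc02", "doc05"}

unique_to_gs = google_scholar_citers - wos_citers
share_unique = len(unique_to_gs) / len(google_scholar_citers)

print(f"{share_unique:.0%} of Google Scholar citations are not found in WoS")
```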

14.
An analysis of the number of Web citations   Total citations: 6 (self-citations: 0, citations by others: 6)
Through a quantitative analysis of the Web citations in the references of papers published over the past five years in seven core Chinese journals of library and information science, this paper reveals the influence of networked information resources on scholarly communication and the extent to which scholars make use of them, and compares the results with Web citation studies conducted abroad.

15.
Journal weighted impact factor: A proposal   Total citations: 3 (self-citations: 0, citations by others: 3)
The impact factor of a journal reflects the frequency with which the journal's articles are cited. It is the best available measure of journal quality. In the calculation of the impact factor, we simply count the number of citations, no matter how prestigious the citing journal is. We think that the impact factor, as a measure of journal quality, may be improved if its calculation not only takes into account the number of citations but also incorporates a factor reflecting the prestige of the citing journals relative to the cited journal. In the calculation of this proposed “weighted impact factor,” each citation has a coefficient (weight) whose value is 1 if the citing journal is as prestigious as the cited journal, greater than 1 if the citing journal is more prestigious than the cited journal, and less than 1 if the citing journal has a lower standing than the cited journal. In this way, journals receiving many citations from prestigious journals are considered prestigious themselves, and those cited mainly by low-status journals receive little credit. By considering both the number of citations and the prestige of the citing journals, we expect the weighted impact factor to be a better scientometric measure of journal quality.
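A minimal sketch of how such a weighted impact factor could be computed. The abstract does not specify the weight function, so the prestige-ratio weight below is an assumed concrete choice; the journal names and counts are hypothetical.

```python
# Illustrative sketch of a weighted impact factor. The abstract only states
# that a citation's weight is 1, >1 or <1 depending on the relative prestige
# of the citing journal; the prestige-ratio weight used here is an assumed
# concrete choice, and all journals and counts are hypothetical.

# Hypothetical prestige scores (any existing prestige indicator could be used).
prestige = {"cited_journal": 2.0, "journal_a": 4.0, "journal_b": 2.0, "journal_c": 1.0}

# Citations received by the cited journal: (citing_journal, number_of_citations).
citations = [("journal_a", 20), ("journal_b", 15), ("journal_c", 10)]

citable_items = 30  # articles the cited journal published in the counting window


def weight(citing: str, cited: str = "cited_journal") -> float:
    """Assumed weight: prestige of the citing journal relative to the cited one."""
    return prestige[citing] / prestige[cited]


unweighted_if = sum(n for _, n in citations) / citable_items
weighted_if = sum(weight(j) * n for j, n in citations) / citable_items

print(f"unweighted impact factor: {unweighted_if:.2f}")  # 45 / 30 = 1.50
print(f"weighted impact factor:   {weighted_if:.2f}")    # (2*20 + 1*15 + 0.5*10) / 30 = 2.00
```

Citations from the more prestigious journal_a count double here, so the weighted value exceeds the plain count-based value even though the raw number of citations is unchanged.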

16.
Many journals post accepted articles online before they are formally published in an issue. Early citation impact evidence for these articles could be helpful for timely research evaluation and to identify potentially important articles that quickly attract many citations. This article investigates whether Microsoft Academic can help with this task. For over 65,000 Scopus in-press articles from 2016 and 2017 across 26 fields, Microsoft Academic found 2–5 times as many citations as Scopus, depending on year and field. From manual checks of 1122 Microsoft Academic citations not found in Scopus, Microsoft Academic’s citation indexing was faster but not much wider than Scopus for journals. It achieved this by associating citations to preprints with their subsequent in-press versions and by extracting citations from in-press articles. In some fields its coverage of scholarly digital libraries, such as arXiv.org, was also an advantage. Thus, Microsoft Academic seems to be a more comprehensive automatic source of citation counts for in-press articles than Scopus.

17.
The references cited in scientific articles are as important as any other part of the paper, because of their usefulness to the scientific community and to abstracting and indexing services and citation databases. I studied inaccuracies in references and in-text citations in a sample of 97 of the 519 peer-reviewed journals accredited by the Iranian National Commission for Journal Accreditation Policy (Ministry of Research, Science and Technology). The target journals published 2,980 articles with 74,577 cited references and 108,151 in-text citations. The results showed an average error rate of 36.6% (range: 5.6% to 61.3%). The mean number of errors in cited references and in-text citations was 2.7 per article, and the mean number of errors per journal was 690. For the entire sample of articles, 4,369 in-text citations did not match any source in the list of references (4%), and 8,683 cited references did not match any in-text citation (11.6%). The stakeholders in scholarly communication, especially authors, pay insufficient attention to the accuracy of bibliographic references. Peer-reviewed journals should encourage the use of standardized journal policies and quality-control measures regarding peer review, data quality and accuracy.

18.
This study, conducted at the Indiana University School of Dentistry Library using citation analysis, examined citations in graduate dental students' theses to determine the nature of materials cited, journal ownership, journal citation frequency, and citation age distribution. The results were compared to citation analyses in other scientific disciplines. Study results indicated that for the period studied, master's dental students, like medical and science students, cited recently published scholarly dental journal literature. The majority of the journals cited were owned by Indiana University system libraries. Areas for further research include faculty resource usage, e-journal impact, and interdisciplinary resource use.

19.
[Purpose/Significance] To explore the role of citation-frequency position indicators in the evaluation of scientific journals and to determine the optimal position indicator under an appropriate citation time window. [Method/Process] Fourteen eligible ophthalmology journals were selected from the Web of Science database, and various 2014 position indicators were calculated for each journal, including the h-index under 2-, 5-, 8- and 10-year citation time windows (CTW) (h2, h5, h8 and h10), the cumulative h-index (a-h2, a-h5, a-h8 and a-h10), the corresponding 2014 citation-frequency percentage rank position (PRP) indicators (Top1%, Top5%, Top10%, Top25% and Top50%) and the cumulative PRP indicators (a-Top1%, a-Top5%, a-Top10%, a-Top25% and a-Top50%). The correlations of the impact factor and the position indicators under different CTWs with journal questionnaire-survey scores were compared to determine how well each position indicator performs in journal evaluation. [Results/Conclusions] Well-chosen position indicators outperform the impact factor and the five-year impact factor in evaluating journal influence; cumulative citation-frequency position indicators are generally better than annual ones; the h-index under a 2-year CTW is better than the h-index under other CTWs; and a-h2 and h2 under a 5-year CTW, as well as a-Top50% and Top50% under 5- and 8-year CTWs, yield better journal-evaluation results than the impact factor and the five-year impact factor.
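A minimal sketch of an h-index restricted to a citation time window, assuming the convention that only citations received within the window after publication are counted; the citation data are hypothetical.

```python
# Illustrative sketch: h-index of a journal computed from citations accrued
# within a given citation time window (CTW). The counting convention (only
# citations received within `window_years` of publication count) is assumed;
# the data are hypothetical.
def h_index(citation_counts: list[int]) -> int:
    """Largest h such that at least h papers have at least h citations each."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, c in enumerate(counts, start=1):
        if c >= rank:
            h = rank
        else:
            break
    return h

# citations_by_year[paper] = citations received 1, 2, 3, ... years after publication
citations_by_year = [
    [5, 3, 2, 1, 0],
    [1, 4, 4, 2, 1],
    [0, 1, 0, 0, 0],
    [8, 6, 3, 2, 2],
]

for window_years in (2, 5):
    windowed = [sum(per_year[:window_years]) for per_year in citations_by_year]
    print(f"h{window_years} = {h_index(windowed)}")
```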

20.
付中静. 《出版科学》, 2016, 24(4): 77-82
Data on highly cited retracted papers were collected from the Web of Science (WoS) database to analyse their distribution patterns and citation characteristics. The results show that the top 20% of highly cited retracted papers comprise 430 articles distributed across 31 countries, with the multidisciplinary field accounting for the most, and 35 journals each publishing more than 3 of them. The retraction time lag of highly cited retracted papers is weakly correlated with their total citation counts (P=0.014) and more strongly correlated with their pre-retraction citation counts (P=0.000). Journal IF is positively correlated with the number of retracted papers, their total citation counts and their mean citations per paper (P=0.017, P=0.000 and P=0.005, respectively). The mean annual citation rate after retraction is lower than before retraction (P=0.000). This study suggests that retracted papers published in high-IF journals have a greater negative impact on the academic community, that longer retraction lags increase pre-retraction citations, and that retraction has a certain cleansing effect, although that effect is still unsatisfactory; scholars at home and abroad are therefore advised to pay more attention to retracted papers and their adverse effects.
