Similar Literature
20 similar records found.
1.
We have developed a (freeware) routine for “Referenced Publication Years Spectroscopy” (RPYS) and apply this method to the historiography of “iMetrics,” that is, the junction of the journals Scientometrics, Informetrics, and the relevant subset of JASIST (approx. 20%) that shapes the intellectual space for the development of information metrics (bibliometrics, scientometrics, informetrics, and webometrics). The application to information metrics (our own field of research) provides us with the opportunity to validate this methodology and to add a reflection on using citations for historical reconstruction. The results show that the field is rooted in individual contributions of the 1920s to 1950s (e.g., Alfred J. Lotka), and was then shaped intellectually in the early 1960s by a confluence of the history of science (Derek de Solla Price), documentation (e.g., Michael M. Kessler's “bibliographic coupling”), and “citation indexing” (Eugene Garfield). Institutional development at the interfaces between science studies and information science has been reinforced by the new journal Informetrics since 2007. In a concluding reflection, we return to the question of how a historiography of science using algorithmic means—in terms of citation practices—can differ from an intellectual history of the field based, for example, on reading source materials.
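
A minimal sketch of the counting step behind RPYS, assuming the cited references of the citing corpus have already been extracted into a flat list of publication years. The input name and the 5-year median window are illustrative assumptions, not the authors' exact routine:

```python
# Hypothetical input: one publication year per cited reference in the corpus.
from collections import Counter
from statistics import median

def rpys_spectrogram(cited_ref_years):
    counts = Counter(cited_ref_years)
    years = range(min(counts), max(counts) + 1)
    spectrogram = {}
    for y in years:
        # Deviation of each year's reference count from the median of the
        # 5-year window centred on it (a common RPYS smoothing choice).
        window = [counts.get(y + d, 0) for d in range(-2, 3)]
        spectrogram[y] = counts.get(y, 0) - median(window)
    return spectrogram  # peaks mark historically influential years

peaks = rpys_spectrogram([1926, 1926, 1963, 1963, 1963, 1965, 1979])
print(max(peaks, key=peaks.get))  # -> 1963
```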

2.
Past studies of the citation coverage of Web of Science, Scopus, and Google Scholar do not demonstrate a consistent pattern that can be applied to the interdisciplinary mix of resources used in social work research. To determine the utility of these tools to social work researchers, an analysis of citing references to well-known social work journals was conducted. Web of Science had the fewest citing references and almost no variety in source format. Scopus provided higher citation counts, but its pattern of coverage was similar to Web of Science. Google Scholar provided substantially more citing references, but only a relatively small percentage of them were unique scholarly journal articles. The patterns of database coverage were replicated when the citations were broken out for each journal separately. The results of this analysis demonstrate the need to determine what resources constitute scholarly research and the need for future researchers to consider the merits of each database before undertaking their research. This study will be of interest to scholars in library and information science as well as social work, as it facilitates a greater understanding of the strengths and limitations of each database and brings to light important considerations for conducting future research.

3.
A citation analysis of 61 library science and information science dissertations revealed some interesting publication patterns. About 80% of the citations are to single authors, and, as in analyses of periodical literature, males are cited more than females overall (about 61% to 39%). In dissertations related to school or public libraries, the male/female distribution is less disparate; for studies in academic or special libraries two thirds of the cited authors are male, and male authorship rises to 75% when only information science dissertations are analyzed. Journal articles are cited more than books, book chapters, proceedings, theses, and other formats, with College & Research Libraries and the Journal of the American Society for Information Science used most. Library and information science draws on several other disciplines, primarily education, computer science, health/medicine, psychology, communications, and business. In terms of citing U.S. publications, authors cited in dissertations represent a somewhat less parochial list than authors cited in studies analyzing journal citations; over half of all works cited were published within the last 10 years.

4.
The Web of Science is no longer the only database that offers citation indexing of the social sciences: Scopus, CSA Illumina and Google Scholar are new entrants in this market. The holdings and citation records of these four databases were assessed against two sets of data, one drawn from the 2001 Research Assessment Exercise and the other from the International Bibliography of the Social Sciences. Initially, CSA Illumina's coverage at journal title level appeared to be the most comprehensive. But when recall and average citation counts were tested at article level and rankings were extrapolated by submission frequency to individual journal titles, Scopus was ranked first. When issues of functionality, the quality of record processing and depth of coverage are taken into account, Scopus and Web of Science have a significant advantage over the other two databases. From this analysis, Scopus offers the best coverage among these databases and could be used as an alternative to the Web of Science as a tool to evaluate research impact in the social sciences.

5.
The non-citation rate is the proportion of papers that attract no citations over a period of time following their publication. After reviewing the related papers in the Web of Science, Google Scholar and Scopus databases, we find that the current literature on citation distributions focuses mainly on the percentages and citation counts of papers receiving at least one citation; there are fewer studies of the time-dependent pattern of the percentage of never-cited papers, of what distribution model can fit this pattern, or of the factors influencing the non-citation rate. Here, we perform an empirical pilot analysis of the time-dependent distribution of the percentage of never-cited papers across a series of consecutive citation time windows following publication in six sample journals, and study the influence of paper length on the chance of a paper getting cited. The following general conclusions are drawn: (1) a three-parameter negative exponential model fits the time-dependent distribution curve of the percentage of never-cited papers well; (2) in the initial citation time window, the percentage of never-cited papers in each journal is very high, but as the citation time window widens, this percentage drops rapidly at first and then more slowly, with a very large total decline for most journals; (3) with wider citation time windows, the percentage of never-cited papers for each journal approaches a stable value, after which there is little further change unless a large number of “Sleeping Beauty”-type papers emerge; (4) the length of a paper has a great influence on whether it will be cited or not.
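
A hedged sketch of conclusion (1): fitting a three-parameter negative exponential model, here taken as P(t) = c + a·exp(−b·t), to the percentage of never-cited papers as a function of citation-window width t. The data points and fitted parameters below are invented for illustration; the paper's actual model form and estimates are not reproduced:

```python
import numpy as np
from scipy.optimize import curve_fit

def never_cited_model(t, a, b, c):
    # Three-parameter negative exponential: decays from (a + c) toward c.
    return c + a * np.exp(-b * t)

t = np.array([1, 2, 3, 4, 5, 6, 7, 8], dtype=float)            # window width (years)
pct = np.array([62, 41, 28, 20, 15, 12, 11, 10], dtype=float)  # % never cited (invented)

params, _ = curve_fit(never_cited_model, t, pct, p0=(60.0, 0.5, 10.0))
a, b, c = params
print(f"asymptotic never-cited share ≈ {c:.1f}%")
```

The parameter c corresponds to the stable value described in conclusion (3): the share of papers that remain uncited no matter how wide the window grows.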

6.
What does science do, what could science do, and how can science be made to work? To answer these questions, we need to uncover the mechanisms of science, going beyond metrics that are easily collectible and quantifiable. In this perspective piece, we link metrics to mechanisms by demonstrating how emerging metrics of science not only complement existing ones but also shed light on the hidden structure and mechanisms of science. Based on fundamental properties of science, we classify existing theories and findings into: hot and cold science, referring to attention shifts between scientific fields; fast and slow science, reflecting the productivity of scientists and teams; and soft and hard science, revealing the reproducibility of scientific research. We suggest that interest in the mechanisms of science, running from Derek J. de Solla Price, Robert K. Merton, Eugene Garfield, and many others onward, complements the current zeitgeist of pursuing new, complex metrics without understanding the underlying processes. We propose that understanding and modeling the mechanisms of science is a precondition for the effective development and application of metrics.

7.
Citation-based approaches, such as the impact factor and h-index, have been used to measure the influence or impact of journals for journal rankings. A survey of the related literature across disciplines shows that the level of correlation between these citation-based approaches is domain dependent. We analyze the correlation between the impact factors and h-indices of the top-ranked computer science journals for five different subjects. Our results show that the correlation between these citation-based approaches is very low. Since using a different approach can result in different journal rankings, we further combine the different results and re-rank the journals using a combination method. These new ranking results can serve as a reference for researchers choosing their publication outlets.
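
A hedged sketch of the two steps described: measuring the rank correlation between impact factor and h-index for a set of journals, then combining the two rankings. The combination here is a simple average of ranks (a Borda-style scheme); the paper's actual combination method and all values are invented for illustration:

```python
from scipy.stats import rankdata, spearmanr

journals = ["J1", "J2", "J3", "J4", "J5"]
impact_factor = [4.1, 1.2, 2.7, 0.9, 3.3]   # invented values
h_index       = [35,  20,  22,  18,  40]

# Step 1: rank correlation between the two indicators.
rho, p = spearmanr(impact_factor, h_index)
print(f"Spearman rho = {rho:.2f} (p = {p:.2f})")

# Step 2: combine the two rankings by average rank (rank 1 = best).
r_if = rankdata([-x for x in impact_factor])
r_h  = rankdata([-x for x in h_index])
combined = sorted(zip(journals, (r_if + r_h) / 2), key=lambda t: t[1])
print(combined)  # re-ranked journals, lowest combined rank first
```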

8.
Using a dataset drawn from Thomson Reuters' Web of Science, we investigated the distributions of some well-known indicators, such as the h-index and g-index, and found that citation behavior differs across scientific fields because of the indicators' field dependence. To develop a field-independent index, two scaling methods, based on the average citations of the subject category and of the journal, were used to normalize the citations received by each paper of a given author. The distributions of the generalized h-indices in different fields were found to follow a lognormal function with mean and standard deviation of approximately −0.8 and 0.8, respectively. A field-independent index, the fi-index, was then proposed, and its distribution was found to satisfy a universal power-law function with scaling exponent α approaching 3.0. Both the power-law and the lognormal universality of the distributions verify the field independence of these indicators. However, deciding which of the scaling methods is better remains necessary for the validation of the field-independent index.
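
A hedged sketch of the scaling idea only: each paper's citation count is divided by the average citation rate of its subject category (or journal), and an h-type index is then computed over the normalized counts. The fi-index construction itself is more elaborate; the function name and all numbers below are illustrative:

```python
def generalized_h_index(citations, field_averages):
    # Normalize each paper's citations by its field's mean citation rate,
    # then apply the usual rank-based h rule to the normalized values.
    normalized = sorted(
        (c / avg for c, avg in zip(citations, field_averages)),
        reverse=True,
    )
    h = 0
    for rank, value in enumerate(normalized, start=1):
        if value >= rank:
            h = rank
        else:
            break
    return h

# One entry per paper: raw citations and the mean citation rate of its field.
print(generalized_h_index([30, 12, 8, 4], [10.0, 3.0, 8.0, 2.0]))  # -> 2
```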

9.
In the present paper the Percentage Rank Position (PRP) index, derived from the principle of the Similar Distribution of Information Impact in different fields of science (Vinkler, 2013), is suggested for comparing journals across research fields. The publications in the journals dedicated to a field are ranked by citation frequency, and the PRP-index of the papers in the field's elite set is calculated. The PRP-index relates the citation rank number of a paper to the total number of papers in the corresponding set. The sum of the PRP-indices of the elite papers in a journal, PRP(j,F), may represent the eminence of the journal in the field. The non-parametric and non-dimensional PRP(j,F) index of journals is believed to be comparable across fields.
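
A hedged sketch of the mechanics. The abstract states only that the PRP-index "relates the citation rank number of the paper to the total number of papers in the corresponding set," so the formula below, PRP = 100 × (N − rank + 1) / N with rank 1 the most-cited of the N field papers, is an illustrative assumption rather than Vinkler's published definition:

```python
def prp_journal_score(field_citations, journal_paper_ids, elite_size):
    # Rank all papers of the field by citation count (rank 1 = most cited).
    ranked = sorted(field_citations.items(), key=lambda kv: -kv[1])
    n = len(ranked)
    elite = {pid for pid, _ in ranked[:elite_size]}      # the field's elite set
    score = 0.0
    for rank, (pid, _) in enumerate(ranked, start=1):
        if pid in elite and pid in journal_paper_ids:
            score += 100.0 * (n - rank + 1) / n          # assumed PRP of one paper
    return score  # PRP(j, F): summed PRP of the journal's elite papers

field = {"p1": 90, "p2": 75, "p3": 40, "p4": 5, "p5": 1}
print(prp_journal_score(field, journal_paper_ids={"p1", "p3"}, elite_size=3))  # -> 160.0
```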

10.
Questions of definition and measurement continue to constrain a consensus on the measurement of interdisciplinarity. Using Rao-Stirling (RS) diversity sometimes produces anomalous results. We argue that these unexpected outcomes can be related to the use of “dual-concept diversity,” which combines “variety” and “balance” in the definitions (ex ante). We propose to modify RS diversity into a new indicator (DIV) which operationalizes “variety,” “balance,” and “disparity” independently and then combines them ex post. “Balance” can be measured using the Gini coefficient. We apply DIV to the aggregated citation patterns of 11,487 journals covered by the Journal Citation Reports 2016 of the Science Citation Index and the Social Sciences Citation Index as an empirical domain and, in more detail, to the citation patterns of 85 journals assigned to the Web of Science category “information science & library science” in both the cited and citing directions. We compare the results of the indicators and show that DIV provides improved results in terms of distinguishing interdisciplinary knowledge integration (citing references) from knowledge diffusion (cited impact). The new diversity indicator and RS diversity measure different features. A routine for measuring the various operationalizations of diversity (in any data matrix) is made available online.
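
A hedged sketch of an ex-post indicator in the spirit of DIV: variety (share of categories used), balance (1 − Gini coefficient over the used categories) and disparity (mean pairwise distance between used categories) are computed separately and then multiplied. The published DIV may differ in detail, and the citation counts and distance matrix here are invented:

```python
import numpy as np

def gini(x):
    x = np.sort(np.asarray(x, dtype=float))  # ascending
    n = len(x)
    # Standard form: G = 2 * sum(i * x_i) / (n * sum(x)) - (n + 1) / n
    return 2 * np.sum(np.arange(1, n + 1) * x) / (n * x.sum()) - (n + 1) / n

def div_indicator(counts, distance_matrix, n_categories_total):
    used = [i for i, c in enumerate(counts) if c > 0]
    variety = len(used) / n_categories_total
    balance = 1.0 - gini([counts[i] for i in used])
    pairs = [(i, j) for i in used for j in used if i < j]
    disparity = np.mean([distance_matrix[i][j] for i, j in pairs])
    return variety * balance * disparity      # combined ex post

counts = [10, 5, 0, 1]                        # citations per category (invented)
dist = np.array([[0, .2, .9, .8],             # pairwise category distances
                 [.2, 0, .7, .6],
                 [.9, .7, 0, .4],
                 [.8, .6, .4, 0]])
print(div_indicator(counts, dist, n_categories_total=4))  # -> 0.25
```

Keeping the three components separate before multiplying is exactly what lets one diagnose which of variety, balance, or disparity drives an anomalous score.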

11.
Previous research has shown that citation data from different types of Web sources can potentially be used for research evaluation. Here we introduce a new combined indicator, the Integrated Online Impact (IOI) indicator. For a case study, we selected research articles published in the Journal of the American Society for Information Science & Technology (JASIST) and Scientometrics in 2003. We compared the citation counts from Web of Science (WoS) and Scopus with five online sources of citation data: Google Scholar, Google Books, Google Blogs, PowerPoint presentations and course reading lists. The mean and median IOI were nearly twice as high as those of both WoS and Scopus, confirming that online citations are sufficiently numerous to be useful for the impact assessment of research. We also found significant correlations between conventional and online impact indicators, confirming that both assess something similar in scholarly communication. Further analysis showed that the percentages of unique Google Scholar citations outside the WoS were 73% and 60% for the articles published in JASIST and Scientometrics, respectively. An important conclusion is that in subject areas where wider types of intellectual impact indicators beyond the WoS and Scopus databases are needed for research evaluation, IOI can be used to help monitor research performance.
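
A hedged sketch of the comparison. The abstract does not give the exact formula for IOI, so here it is taken simply as the per-article sum of the five online citation counts; the published indicator may weight or filter the sources differently, and all counts are invented:

```python
from statistics import mean, median

# One row per article:
# (WoS, Google Scholar, Google Books, Google Blogs, PowerPoint, reading lists)
articles = [
    (10, 14, 1, 0, 2, 1),
    (4,  6,  0, 1, 1, 0),
    (25, 30, 3, 2, 4, 2),
]
wos = [row[0] for row in articles]
ioi = [sum(row[1:]) for row in articles]   # assumed: plain sum of online sources

print(f"mean:   WoS = {mean(wos):.1f}, IOI = {mean(ioi):.1f}")
print(f"median: WoS = {median(wos)}, IOI = {median(ioi)}")
```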

12.
This article reviews how, in its first 10 years, the journal Archival Science: International Journal on Recorded Information has endeavoured to be integrated, interdisciplinary, and intercultural in promoting the development of archival science as an autonomous scientific discipline.

13.
The principle of a new type of impact measure, called the “Audience Factor” (AF), was introduced recently. It is a variant of the journal impact factor in which emitted citations are weighted inversely to the citing source's propensity to cite. In the initial design, propensity was calculated from the average bibliography length at the source level, with two options: a journal-level average or a field-level average. This citing-side normalization controls for propensity to cite, the main determinant of impact-factor variability across fields. The AF retains the variability due to exports and imports of citations across fields and to growth differences. It does not account for influence chains, the powerful approach taken in the wake of Pinski and Narin's influence weights. Here we introduce a robust variant of the audience factor that tries to combine the respective advantages of the two options for calculating bibliography lengths: the classification-free scheme, where bibliography length is calculated at the individual journal level, and the robustness and avoidance of ad hoc settings obtained when bibliography length is averaged at the field level. The proposed variant relies on the relative neighborhood of a citing journal, regarded as its micro-field and assumed to reflect citation behavior in this area of science. The methodology adopted allows a large range of variation in the neighborhood, reflecting the local citation network, and partly alleviates the “cross-scale” normalization issue. Citing-side normalization is a general principle that may be extended to other citation counts.
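
A hedged sketch of citing-side normalization in the spirit of the Audience Factor: each incoming citation is weighted by the inverse of the citing journal's average bibliography length, rescaled by the overall average so that a "typical" citer has weight near 1. The published AF differs in detail, and all names and counts below are invented:

```python
from statistics import mean

# Average reference-list length per citing journal (hypothetical).
avg_bib_len = {"J-math": 18.0, "J-bio": 55.0, "J-cs": 30.0}
global_avg = mean(avg_bib_len.values())

# Citations received by the target journal: (citing_journal, count).
incoming = [("J-math", 100), ("J-bio", 200), ("J-cs", 30)]
papers_published = 50

# A citation from a short-bibliography field counts for more than one
# from a long-bibliography field.
weighted = sum(count * global_avg / avg_bib_len[j] for j, count in incoming)
audience_factor = weighted / papers_published
raw_impact = sum(count for _, count in incoming) / papers_published
print(f"raw impact = {raw_impact:.2f}, audience factor = {audience_factor:.2f}")
```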

14.
This paper explores a new indicator of journal citation impact, denoted source normalized impact per paper (SNIP). It measures a journal's contextual citation impact, taking into account characteristics of its properly defined subject field: especially the frequency at which authors cite other papers in their reference lists, the rapidity with which citation impact matures, and the extent to which the database used for the assessment covers the field's literature. It further develops Eugene Garfield's notions of a field's ‘citation potential’, defined as the average length of reference lists in a field and determining the probability of being cited, and of the need in fair performance assessments to correct for differences between subject fields. A journal's subject field is defined as the set of papers citing that journal. SNIP is defined as the ratio of the journal's citation count per paper to the citation potential in its subject field. It aims to allow direct comparison of sources in different subject fields. Citation potential is shown to vary not only between journal subject categories – groupings of journals sharing a research field – or disciplines (e.g., journals in mathematics, engineering and the social sciences tend to have lower values than titles in the life sciences), but also between journals within the same subject category. For instance, basic journals tend to show higher citation potentials than applied or clinical journals, and journals covering emerging topics higher than periodicals in classical subjects or more general journals. SNIP corrects for such differences. Its strengths and limitations are critically discussed, and suggestions are made for further research. All empirical results are derived from Elsevier's Scopus.
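
A minimal sketch of the ratio the abstract defines: raw impact per paper divided by the citation potential of the journal's subject field, the field being the set of papers citing the journal. Moed's published SNIP includes further refinements (citation windows, database-coverage corrections, rescaling) that are omitted here; all numbers are invented:

```python
def snip(citations_to_journal, papers_in_journal, citing_papers_ref_lengths):
    raw_impact_per_paper = citations_to_journal / papers_in_journal
    # Citation potential: average reference-list length among the papers
    # that cite the journal, i.e., its de-facto subject field.
    citation_potential = (
        sum(citing_papers_ref_lengths) / len(citing_papers_ref_lengths)
    )
    return raw_impact_per_paper / citation_potential

# Two journals with identical raw impact but different subject fields:
print(snip(200, 100, [12, 15, 18, 11]))  # short reference lists -> higher SNIP
print(snip(200, 100, [45, 60, 52, 48]))  # long reference lists  -> lower SNIP
```

Dividing by the field's citation potential is what makes a mathematics journal and a life-sciences journal with the same "contextual" impact receive comparable scores.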

15.
Dynamic development is an intrinsic characteristic of research topics. To study this, this paper proposes two sets of topic attributes for examining the dynamic characteristics of topics: topic continuity and topic popularity. Topic continuity comprises six attributes: steady, concentrating, diluting, sporadic, transforming, and emerging topics; topic popularity comprises three attributes: rising, declining, and fluctuating topics. These attributes are applied to a data set of library and information science publications covering the past 11 years (2001–2011). Results show that topics on “web information retrieval”, “citation and bibliometrics”, “system and technology”, and “health science” have the highest average popularity, while topics on “h-index”, “online communities”, “data preservation”, “social media”, and “web analysis” are becoming increasingly popular in library and information science.

16.
The study explores the publication trends of scholarly journal articles in two core Library and Information Science (LIS) journals indexed in the ScienceDirect database for the period 2000–2010, and for the “Top 25 Hottest Papers” for 2006–2010. It examines and presents an analysis of 1000 research papers in the area of LIS published in two journals: The International Information & Library Review (IILR) and Library & Information Science Research (LISR). The study examines the content of the journals, including growth of the literature, authorship patterns, geographical distribution of authors, distribution of papers by journal, citation patterns, ranking patterns, length of articles, and most-cited authors. Collaboration was calculated using Subramanyam's formula, and Lotka's law was used to characterize author productivity; the results indicated that the authors' distribution did not follow Lotka's law. The study identified the eight most productive authors, with a high of 19 publications in this field. The findings indicate that these publications experienced rapid, exponential growth in literature production. The contributions of scientists from India are also examined.
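
A hedged sketch of the two formulas named in the abstract: Subramanyam's degree of collaboration, C = Nm / (Nm + Ns), and Lotka's inverse-square law, under which the expected share of authors with n papers is proportional to 1/n². The counts below are invented for illustration:

```python
def degree_of_collaboration(multi_authored, single_authored):
    # Subramanyam: C = Nm / (Nm + Ns), in [0, 1].
    return multi_authored / (multi_authored + single_authored)

def lotka_expected_share(n, alpha=2.0, n_max=50):
    # Expected share of authors with exactly n papers under Lotka's law,
    # normalized over productivity levels 1..n_max.
    norm = sum(1 / k ** alpha for k in range(1, n_max + 1))
    return (1 / n ** alpha) / norm

print(f"C = {degree_of_collaboration(620, 380):.2f}")   # -> 0.62
for n in (1, 2, 3):
    print(f"expected share of authors with {n} paper(s): "
          f"{lotka_expected_share(n):.2%}")
```

Comparing these expected shares against the observed author-productivity distribution (e.g., with a goodness-of-fit test) is the usual way a study concludes that its authors "did not follow Lotka's law."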

17.
Using 17 open-access journals published without interruption between 2000 and 2004 in the field of library and information science, this study compares the patterns of cited and citing hyperlinked references of Web-based scholarly electronic articles across various citation ranges in terms of language, file format, source and top-level domain. The patterns of cited references were examined manually by counting live hyperlinked cited references, while the patterns of citing references were examined using the “cited by” feature in Google Scholar. The analysis indicates that although the language, top-level domain, and file format of citations did not differ significantly for articles in different citation ranges, the sources of citations differed significantly. Articles with fewer citations mostly cite less scholarly sources such as Web pages, whereas articles with higher numbers of citations mostly cite scholarly sources such as journal articles. The findings suggest that 8 of the 17 OA journals in LIS have a significant research impact in the scholarly communication process.

18.
In this study, we identified and analyzed the characteristics of top-cited single-author articles published in the Science Citation Index Expanded from 1991 to 2010. A top-cited single-author article was defined as one cited at least 1000 times from its publication to 2012. Results showed that 1760 top-cited single-author articles were published in 539 journals listed in 130 Web of Science categories between 1901 and 2010. The most productive journal was Science, and the most productive category was multidisciplinary physics. Most of the articles were not published in high-impact journals. Harvard University led all other institutions in publishing top-cited single-author articles. Nobel Prize winners contributed 7.0% of the articles; in total, 72 Nobel Prize winners published 124 single-author articles. Single-authored papers published in different periods exhibited different citation trends, but top-cited articles consistently showed repeated citation peaks regardless of publication period. “Theory (or theories)” was the most frequently appearing title word overall. The leading title words varied across periods, and only five title words – method(s), protein(s), structure(s), molecular, and quantum – consistently remained in the top 20.

19.
The aim of the study is to explore the effects of an increase in the number of publications or citations, by a single journal paper or citation, on several impact indicators. The possible changes of the h-index, A-index, R-index, π-index, π-rate, Journal Paper Citedness (JPC), and Citation Distribution Score (CDS) are followed with models. Particular attention is given to the increase of the indices caused by a single additional citation. The results obtained with the “successively built-up indicator” model show that, with an increasing number of citations or self-citations, the indices may increase substantially.
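
A hedged sketch showing how a single additional citation can move the h-, A- and R-indices, using their standard definitions (A = mean, and R = square root of the total, of the citations in the h-core). The π-index, π-rate, JPC and CDS from the abstract are omitted, and the citation counts are invented:

```python
import math

def h_index(cites):
    s = sorted(cites, reverse=True)
    # h = number of ranks whose citation count is at least the rank.
    return sum(1 for rank, c in enumerate(s, start=1) if c >= rank)

def a_r_indices(cites):
    s = sorted(cites, reverse=True)
    h = h_index(cites)
    core = s[:h]                       # the h most-cited papers
    return sum(core) / h, math.sqrt(sum(core))

before = [9, 7, 5, 5, 4, 2, 1]
after = before.copy()
after[4] += 1                          # one extra citation to the 5th paper

for label, cites in (("before", before), ("after", after)):
    a, r = a_r_indices(cites)
    print(f"{label}: h={h_index(cites)}, A={a:.2f}, R={r:.2f}")
```

The single extra citation raises h from 4 to 5 and increases R, while A actually drops (the enlarged h-core admits a less-cited paper), illustrating why the study tracks several indices side by side.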

20.
The aim of this brief communication is to reply to a letter by Kosmulski (Journal of Informetrics 6(3):368–369, 2012), which criticizes a recent indicator called the “success-index”. The most interesting features of this indicator, presented in Franceschini et al. (Scientometrics, in press), are that (i) it allows the selection of an “elite” subset from a set of publications and (ii) it implements field normalization at the level of the individual publication. We show that Kosmulski's criticism is unfair and inappropriate, being the result of a misinterpretation of the indicator.
