Similar Documents
 20 similar documents found (search time: 140 ms)
1.
An Analysis of Evaluation Criteria for Chinese Search Engines   Total citations: 24 (self: 2, by others: 22)
宛玲  杨秀丹  杜晓静 《情报科学》2000,18(1):28-31,38
Search engines are Web retrieval tools. This paper discusses evaluation criteria for Chinese search engines. The authors argue that evaluation of their retrieval capability should cover multiple aspects: coverage, quality of the feedback given with query results, informativeness of retrieved entries, error rate, update and reporting speed, query functionality, friendliness of the search interface, featured recommendations, reciprocal links with other search engines, and response time.

2.
张士靖 《情报杂志》2002,21(5):61-62,64
PubMed and Gateway are online biomedical information databases launched by the U.S. National Library of Medicine (NLM). This paper compares and evaluates the two systems comprehensively across six aspects: general profile, retrieval principles, basic search, subject heading search, result display, and other functions, and on that basis proposes selection guidelines to help users choose and use the two databases in a more targeted way.

3.
A Brief Analysis of Retrieval Effectiveness Evaluation Criteria in the Networked Environment   Total citations: 8 (self: 0, by others: 8)
孙昊  刘玉照 《情报杂志》2003,22(1):56-58
Discusses how the main retrieval effectiveness measures, such as recall, precision, and coverage, have changed and developed in the networked environment, and summarizes the trends in retrieval effectiveness evaluation criteria under this new environment.

4.
杨爱群  罗任秀 《现代情报》2005,25(3):137-138,140
The rapid growth of Web information resources makes it increasingly difficult for people to obtain useful information, and Web retrieval tools have emerged in response. The article introduces the types and functions of Web information retrieval tools and outlines their development trends.

5.
A Successful Search Strategy: Structured Searching   Total citations: 5 (self: 1, by others: 5)
吴江文 《情报科学》2002,20(1):90-92
The article introduces the basic principles of the structured search strategy and its theoretical basis. Adopting a structured search strategy, that is, searching according to a rational search procedure and technique, is the key to successful retrieval. A successful search strategy consists of five consecutive stages: task, resources, terms, method, and evaluation, together with seven basic guidelines: (1) define the task; (2) locate the resources; (3) choose search terms; (4) select a classification scheme; (5) execute the search; (6) evaluate the results; (7) if necessary, search again by refining earlier decisions. Searches are carried out according to this plan.

6.
Using the author search functions of Google Scholar, EI and SCIE, this paper compares and analyzes the retrieval results of these three tools. The study shows that Google Scholar is a relatively authoritative and comprehensive free academic search tool, but that in searches of Chinese academic literature its duplicate-record rate reaches 28.37%, and that, owing to the limitations of its source databases, missed records are a fairly serious problem.

7.
A Review of Research on Web Information Retrieval   Total citations: 7 (self: 0, by others: 7)
周丽霞 《情报科学》2004,22(4):395-399
This paper analyzes and evaluates 40 outstanding papers on Web information retrieval published between 1995 and 2002, covering retrieval tools, search techniques, and development trends. It presents the current state and achievements of Chinese scholars' research on Web information retrieval, showing that China has reached a certain level in Web retrieval research.

8.
潘莉  王红兵 《情报杂志》2001,20(6):64-65
Describes the two main search functions, quick search and advanced search, of the Web retrieval tool Cambridge Scientific Abstracts (《剑桥科学文摘》), and objectively evaluates its retrieval effectiveness.

9.
Performance Evaluation of the China Journal Net Subject Literature Databases   Total citations: 1 (self: 0, by others: 1)
戚敏 《情报杂志》2001,20(7):21-23
From the perspective of meeting user needs, the subject literature databases of China Journal Net (CJN) are evaluated experimentally with respect to coverage, update frequency and timeliness, recall and precision of the commonly used search entries, system response time, user search time, Boolean search functions, output formats, retention of search strategies, ease of use, and administrative and service functions. The current version 2.0 and the new version 3.0 are compared, and suggestions for further improvement are given.

10.
Design of a New Indicator System for Evaluating Web Information Retrieval Effectiveness   Total citations: 8 (self: 1, by others: 8)
金玉坚  刘焱 《现代情报》2005,25(4):184-186
Based on the shortcomings of the traditional indicator system for evaluating information retrieval effectiveness in the Web era, this paper proposes a design for a new indicator system for evaluating Web information retrieval effectiveness, divided into four parts: index database, search functionality, search results, and user burden. It also discusses new ways of computing recall and precision for the Web era.
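As a point of reference for the new recall and precision measures the paper discusses, the classical set-based definitions can be written as a short sketch; the document sets below are illustrative placeholders, not data from the paper.

    # Classical precision and recall; Web-era variants generalize these.
    def precision(retrieved: set, relevant: set) -> float:
        """Fraction of retrieved documents that are relevant."""
        return len(retrieved & relevant) / len(retrieved) if retrieved else 0.0

    def recall(retrieved: set, relevant: set) -> float:
        """Fraction of relevant documents that were retrieved."""
        return len(retrieved & relevant) / len(relevant) if relevant else 0.0

    retrieved = {"d1", "d2", "d3", "d4"}   # illustrative document IDs
    relevant = {"d2", "d3", "d5"}
    print(precision(retrieved, relevant))  # 0.5
    print(recall(retrieved, relevant))     # 0.666...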

11.
赵发珍 《现代情报》2013,33(6):91-95
Using the Yahoo! and Bing search engines, the paper collects the total page count, total link count, internal and external link counts, and PageRank value for 30 online community websites, computes their Web impact factors, and applies grey relational analysis to rank the sites comprehensively on these link indicators. The results show that the most influential of the 30 community sites are 51.com, 腾讯微博, 腾讯博客, 腾讯论坛, 网易微博, 网易博客, 新浪博客 and 豆瓣网. Finally, a comparison of the link data obtained from Yahoo! and Bing confirms that both engines are feasible for website link analysis, although the data collected with Yahoo yields somewhat more accurate results.
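As a rough sketch of one of the link indicators used above, the external Web impact factor of a site is commonly computed as the number of inlinks from outside the site divided by the number of its pages indexed by the search engine; the counts below are invented for illustration, not the paper's data.

    # Simple external Web impact factor (WIF): external inlinks / pages indexed.
    # Counts are invented placeholders, not measurements from the paper.
    def web_impact_factor(external_inlinks: int, pages_indexed: int) -> float:
        return external_inlinks / pages_indexed if pages_indexed else 0.0

    sites = {
        "community_site_a": (12400, 3100),  # (external inlinks, pages indexed)
        "community_site_b": (8900, 1200),
    }
    for name, (inlinks, pages) in sites.items():
        print(name, round(web_impact_factor(inlinks, pages), 2))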

12.
Ecommerce is developing into a fast-growing channel for new business, so a strong presence in this domain could prove essential to the success of numerous commercial organizations. However, there is little research examining ecommerce at the individual customer level, particularly on the success of everyday ecommerce searches. This is critical for the continued success of online commerce. The purpose of this research is to evaluate the effectiveness of search engines in the retrieval of relevant ecommerce links. The study examines the effectiveness of five different types of search engines in response to ecommerce queries by comparing the quality of the engines’ ecommerce links using topical relevancy ratings. This research employs 100 ecommerce queries, five major search engines, and more than 3540 Web links. The findings indicate that links retrieved using an ecommerce search engine are significantly better than those obtained from most other engine types but do not significantly differ from links obtained from a Web directory service. We discuss the implications for Web system design and ecommerce marketing campaigns.

13.
袁毅 《情报科学》2005,23(10):1499-1504
In the evaluation of academic websites, one important class of indicators concerns a site's authority, and among the various authority indicators the most important is the authority of the site's authors. This paper reviews domestic and international research on website author authority, proposes a quantitative indicator for measuring it, author influence degree, and demonstrates the reliability and measurability of author influence degree in academic website evaluation.

14.
Drawing on the ideas of the Sense-Making approach, this study analyzes the ways in which people face and bridge gaps in Web searching. The empirical study is based on videotaped Web searches conducted by seven participants. Altogether 11 gaps and 13 search tactics of various types were identified. The gaps faced by the searchers originated from three major factors: problematic content of information, insufficient search competence and problems caused by the search environment. Of individual gaps, no relevant material available, inaccessible content and confusion were most frequent. Of the search tactics used in gap-bridging, following links and activating the Back button were most popular. Gaps related to the problematic content of information led the informants to redirect the search to find Web pages that focus better on the search topic. If the movement was stopped by insufficient search competence, the searchers tended to return to material that was familiar from earlier use contexts in order to regain control of the search process. Alternatively, they tried to make the search terms more specific. In cases where the search was interrupted by technical problems or other factors originating from the search system, gap-bridging aimed at returning to familiar and technically reliable links. The Sense-Making theory provides relevant conceptual tools to approach the dynamic and discontinuous nature of Web searching in terms of gap-facing and gap-bridging. The concept of gap-facing enables a context-sensitive analysis of the ways in which Web search processes may be stopped. Gap-bridging indicates a general-level motive to find alternative ways to continue searching.

15.
This article presents conceptual navigation and NavCon, an architecture that implements this navigation in World Wide Web pages. The NavCon architecture uses an ontology as metadata to contextualize the user's search for information. Based on ontologies, NavCon automatically inserts conceptual links in Web pages. By using these links, the user may navigate in a graph representing ontology concepts and their relationships. By browsing this graph, it is possible to reach documents associated with the ontology concept the user is interested in. We call this Web navigation supported by ontology concepts conceptual navigation. Conceptual navigation is a technique for browsing Web sites within a context. The context filters the retrieved information for relevance and drives user navigation through paths that meet the user's needs. A company may implement conceptual navigation to improve users' search for information in a knowledge management environment. We suggest that using an ontology to conduct navigation in an Intranet may help users gain a better understanding of the knowledge structure of the company.

16.
Parsing Web pages with HTMLParser makes it possible to extract information such as link, image, meta and title elements between tags. The paper discusses using HTMLParser to extract the title, keywords, abstract, author, and source of Web documents, and storing the cleaned data in a MySQL database for later data mining.
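A minimal sketch of the kind of extraction described, using Python's standard html.parser as a stand-in for the HTMLParser library named in the abstract; the field names and sample HTML are assumptions for illustration.

    # Extract title and meta fields from a Web document with Python's html.parser
    # (a stand-in for the HTMLParser library the paper uses).
    from html.parser import HTMLParser

    class MetaExtractor(HTMLParser):
        def __init__(self):
            super().__init__()
            self.in_title = False
            self.record = {"title": "", "keywords": "", "abstract": ""}

        def handle_starttag(self, tag, attrs):
            if tag == "title":
                self.in_title = True
            elif tag == "meta":
                attr = dict(attrs)
                name = (attr.get("name") or "").lower()
                if name == "keywords":
                    self.record["keywords"] = attr.get("content", "")
                elif name == "description":
                    self.record["abstract"] = attr.get("content", "")

        def handle_endtag(self, tag):
            if tag == "title":
                self.in_title = False

        def handle_data(self, data):
            if self.in_title:
                self.record["title"] += data.strip()

    page = '<html><head><title>Sample</title><meta name="keywords" content="IR, Web"></head></html>'
    extractor = MetaExtractor()
    extractor.feed(page)
    print(extractor.record)  # a cleaned record like this could then be inserted into MySQL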

17.
Commercial search engines are now playing an increasingly important role in Web information dissemination and access. Of particular interest to business and national governments is whether the big engines have coverage biased towards the US or other countries. In our study we tested for national biases in three major search engines and found significant differences in their coverage of commercial Web sites. The US sites were much better covered than the others in the study: sites from China, Taiwan and Singapore. We then examined the possible technical causes of the differences and found that the language of a site does not affect its coverage by search engines. However, the visibility of a site, measured by the number of links to it, affects its chance of being covered by search engines. We conclude that the coverage bias does exist, but that it is not due to deliberate choices by the search engines; rather, it occurs as a natural result of cumulative advantage effects of US sites on the Web. Nevertheless, the bias remains a cause for international concern.

18.
In “Shaping the Web: Why the Politics of Search Engines Matters,” Introna and Nissenbaum (2000) introduced scholars to the political, as well as technical, issues central to the development of online search engines. Since that time, scholars have critically evaluated the role that search engines play in structuring the scope of online information access for the rest of society, with an emphasis on the implications for a democratic and diverse Web. This article describes the thought behind search engine regulation, online diversity, and information bias, and it places these issues within the context of the technical and societal changes that have occurred in the online search industry. The author assesses which of the initial concerns expressed about online search engines remain relevant today and discusses how technical changes demand a new approach to measuring online diversity and democracy. The author concludes with a proposal to direct the research and thought in online search going forward.

19.
Stochastic simulation has been very effective in many domains but never applied to the WWW. This study is a first attempt at using neural networks in stochastic simulation of the number of rejected Web pages per search query. The evaluation of the quality of search engines should involve not only the resulting set of Web pages but also an estimate of the rejected set of Web pages. The iterative radial basis functions (RBF) neural network developed by Meghabghab and Nasr [Iterative RBF neural networks as meta-models for stochastic simulations, in: Second International Conference on Intelligent Processing and Manufacturing of Materials, IPMM’99, Honolulu, Hawaii, 1999, pp. 729–734] was adapted to the actual evaluation of the number of rejected Web pages on four search engines, i.e., Yahoo, Alta Vista, Google, and Northern Light. Nine input variables were selected for the simulation: (1) precision, (2) overlap, (3) response time, (4) coverage, (5) update frequency, (6) boolean logic, (7) truncation, (8) word and multi-word searching, (9) portion of the Web pages indexed. Typical stochastic simulation meta-modeling uses regression models in response surface methods. RBF networks are a natural choice for such an attempt because they use a family of surfaces, each of which naturally divides an input space into two regions X+ and X−, and the n patterns for testing will be assigned either class X+ or X−. This technique divides the resulting set of responses to a query into accepted and rejected Web pages. To test the hypothesis that the evaluation of any search engine query should involve an estimate of the number of rejected Web pages as part of the evaluation, the RBF meta-model was trained on 937 examples from a set of 9000 different simulation runs on the nine different input variables. Results show that two of the variables, response time and portion of the Web indexed, can be eliminated without affecting evaluation results. Results also show that the number of rejected Web pages for a specific set of search queries on these four engines is very high. In addition, a goodness measure of a search engine for a given set of queries can be designed as a function of the coverage of the search engine and the normalized age of a new document in the result set for the query. This study concludes that unless search engine designers address the issue of rejected Web pages, indexing, and crawling, the usage of the Web as a research tool for academic and educational purposes will stay hindered.
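The following is a minimal sketch, not the authors' implementation, of a Gaussian RBF meta-model that splits responses into two classes in the spirit of X+ (accepted) and X− (rejected); the synthetic data, number of centers, and kernel width are all assumptions.

    # Gaussian RBF meta-model sketch: RBF features + least-squares output weights,
    # thresholded to split inputs into accepted (X+) and rejected (X-) classes.
    # Data, centers and width are synthetic placeholders, not the paper's setup.
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.random((200, 7))                       # e.g. 7 retained input variables
    y = (X[:, 0] + X[:, 3] > 1.0).astype(float)    # synthetic accept/reject label

    centers = X[rng.choice(len(X), 20, replace=False)]  # 20 RBF centers
    width = 0.5

    def rbf_features(data):
        d2 = ((data[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        return np.exp(-d2 / (2 * width ** 2))

    Phi = rbf_features(X)
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)    # fit output weights
    pred = rbf_features(X) @ w > 0.5               # class X+ if above threshold
    print("training accuracy:", (pred == y.astype(bool)).mean())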

20.
Addressing the traditional Web-graph-based vertical search strategy of authorities and hubs, a heuristic vertical search strategy is proposed that combines Web page content evaluation with the Web graph. In addition, a vector space model is introduced to judge the topical relevance of page content, further improving the precision of topical page downloads. Experiments show that the algorithm effectively improves the clustering of topical pages; as the number of downloaded pages grows, the precision of the vertical search engine gradually increases and then stabilizes once a certain number of pages have been downloaded. The algorithm is robust and can be applied in related vertical search engine systems.
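A minimal sketch of the vector-space relevance check described above, cosine similarity between term-frequency vectors of a topic profile and a fetched page; the topic terms and the download threshold are illustrative assumptions.

    # Cosine similarity between term-frequency vectors (vector space model),
    # used to decide whether a crawled page is on-topic. Terms and threshold
    # are illustrative, not taken from the paper.
    import math
    from collections import Counter

    def cosine(a: Counter, b: Counter) -> float:
        dot = sum(a[t] * b[t] for t in a.keys() & b.keys())
        norm_a = math.sqrt(sum(v * v for v in a.values()))
        norm_b = math.sqrt(sum(v * v for v in b.values()))
        return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

    topic = Counter("vertical search engine crawler topic".split())
    page = Counter("a focused crawler ranks each page by topic relevance".split())
    score = cosine(topic, page)
    print(score, score >= 0.2)   # download the page only if it clears the threshold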
