Similar Articles — 20 results
1.
The goal of the study presented in this article is to investigate to what extent the classification of a web page by a single genre matches the users’ perspective. The extent of agreement on a single genre label for a web page can help us understand whether there is a need for a different classification scheme that overrides single-genre labelling. My hypothesis is that a single genre label does not account for the users’ perspective. In order to test this hypothesis, I submitted a restricted number of web pages (25) to a large number of web users (135 subjects), asking them to assign only a single genre label to each page. Users could choose from a list of 21 genre labels, or select one of the two ‘escape’ options, i.e. ‘Add a label’ and ‘I don’t know’. The rationale was to observe the level of agreement on a single genre label per web page, and to draw some conclusions about the appropriateness of limiting the assignment to a single label when classifying web pages by genre. Results show that users largely disagree on the label to be assigned to a web page.

2.
With the rapid growth of the web, the number of web pages has expanded dramatically, increasing exponentially in recent years. Search engines face ever more serious challenges: it is hard to find, accurately and quickly, the pages that meet a user's needs among this mass of pages. Web page classification is one effective way to address the problem. Topic-based classification and genre-based classification are its two mainstreams, and both have improved search engine retrieval effectiveness. Web genre classification means classifying web pages by their form of presentation and their purpose. This paper introduces the definition of web genre, the features commonly used in web genre classification research, several common feature selection methods, classification models, and methods for evaluating classifiers, giving researchers an overview of web genre classification.

3.
Due to the proliferation and abundance of information on the web, ranking algorithms play an important role in web search. Currently, there are some ranking algorithms based on content and connectivity, such as BM25 and PageRank. Unfortunately, these algorithms have low precision and are not always satisfying for users. In this paper, we propose an adaptive method, called A3CRank, based on the triple of content, connectivity, and click-through data. Our method tries to aggregate ranking algorithms such as BM25, PageRank, and TF-IDF. We have used reinforcement learning to incorporate user behavior and find a measure of user satisfaction for each ranking algorithm. Furthermore, OWA, an aggregation operator, is used for merging the results of the various ranking algorithms. A3CRank adapts itself to user needs and makes use of user clicks to aggregate the results of ranking algorithms. A3CRank is designed to overcome some of the shortcomings of existing ranking algorithms by combining them and producing an overall better ranking criterion. Experimental results indicate that A3CRank outperforms other combinational ranking algorithms such as Ranking SVM in terms of P@n and NDCG metrics. We have used 130 queries on the University of California at Berkeley's web to train and evaluate our method.
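A minimal sketch of the OWA aggregation step described above, assuming each base ranker's scores for a document have already been normalized to [0, 1]; the weight vector shown here is purely illustrative and is not the weighting A3CRank actually learns from click-through data.

```python
def owa(scores, weights):
    """Ordered Weighted Averaging: sort the per-ranker scores for one document
    in descending order, then take their weighted sum."""
    ordered = sorted(scores, reverse=True)
    return sum(w * s for w, s in zip(weights, ordered))

# Hypothetical normalized scores for one document from BM25, PageRank and TF-IDF.
doc_scores = {"bm25": 0.72, "pagerank": 0.35, "tfidf": 0.58}
weights = [0.5, 0.3, 0.2]  # illustrative OWA weights, not the learned ones
print(owa(doc_scores.values(), weights))
```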

4.
Broken hypertext links are a frequent problem on the Web. Sometimes the page a link points to has disappeared forever, but in many other cases the page has simply been moved to another location in the same web site or to another one. In some cases the page, besides being moved, is also updated, becoming somewhat different from the original yet still similar to it. In all these cases it can be very useful to have a tool that provides pages highly related to the broken link, from which we could select the most appropriate one. The relationship between the broken link and its possible linkable pages can be defined as a function of many factors. In this work we have employed several resources, both in the context of the link and in the Web, to look for pages related to a broken link. From the resources in the context of a link, we have analyzed several sources of information such as the anchor text, the text surrounding the anchor, the URL and the page containing the link. We have also extracted information about a link from the Web infrastructure, such as search engines, Internet archives and social tagging systems. We have combined all of these resources to design a system that recommends pages that can be used to recover the broken link. A novel methodology is presented to evaluate the system without resorting to user judgments, thus increasing the objectivity of the results and helping to adjust the parameters of the algorithm. We have also compiled a web page collection with true broken links, which has been used to test the full system by humans.
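A minimal sketch of one ingredient of such a recommender, assuming candidate replacement pages have already been retrieved (for example from a search engine or a web archive) and are available as plain text: rank them by cosine similarity between their term vectors and the text found in the link's context (anchor text plus surrounding words). This is an illustrative baseline only, not the combined multi-resource ranking the paper uses.

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two term-frequency Counters."""
    num = sum(a[t] * b[t] for t in set(a) & set(b))
    den = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def rank_candidates(link_context, candidates):
    """Order candidate pages (url, text) by similarity to the broken link's anchor context."""
    ctx = Counter(link_context.lower().split())
    scored = [(cosine(ctx, Counter(text.lower().split())), url) for url, text in candidates]
    return sorted(scored, reverse=True)
```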

5.
This research is part of an ongoing study to better understand citation analysis on the Web. It builds on Kleinberg's research (J. Kleinberg, R. Kumar, P. Raghavan, P. Rajagopalan, A. Tomkins, Invited survey at the International Conference on Combinatorics and Computing, 1999) that hyperlinks between web pages constitute a web graph structure, and tries to classify different web graphs in the new coordinate space: out-degree, in-degree. The out-degree coordinate is defined as the number of outgoing links from a given web page. The in-degree coordinate is the number of web pages that point to a given web page. In this new coordinate space a metric is built to classify how close or far apart different web graphs are. Kleinberg's web algorithm (J. Kleinberg, Proceedings of the ACM-SIAM Symposium on Discrete Algorithms, 1998, pp. 668–677) for discovering “hub web pages” and “authority web pages” is applied in this new coordinate space. Some very uncommon phenomena have been discovered and new interesting results interpreted. This study does not look at enhancing web retrieval by adding context information. It only considers web hyperlinks as a source for analyzing citations on the web. The author believes that understanding the underlying web pages as a graph will help design better web algorithms, enhance retrieval and web performance, and recommends using graphs as part of the visual aids for search engine designers.
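A minimal sketch of placing pages in the (out-degree, in-degree) coordinate space described above, assuming the web graph is given as a list of (source, target) hyperlink pairs; the metric between whole web graphs defined in the paper is not reproduced here.

```python
from collections import Counter

def degree_coordinates(edges):
    """Map each page to its (out-degree, in-degree) coordinates,
    given hyperlinks as (source, target) pairs."""
    out_deg = Counter(src for src, _ in edges)
    in_deg = Counter(dst for _, dst in edges)
    pages = {p for edge in edges for p in edge}
    return {p: (out_deg[p], in_deg[p]) for p in pages}

edges = [("A", "B"), ("A", "C"), ("B", "C"), ("C", "A")]
print(degree_coordinates(edges))  # {'A': (2, 1), 'B': (1, 1), 'C': (1, 2)}
```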

6.
All over the world, the internet is used by millions of people every day for information retrieval. Even for small tasks, such as fixing a fan, cooking food or ironing clothes, people opt to search the web. To fulfil these information needs there are billions of web pages, each having a different degree of relevance to the topic of interest (TOI), scattered throughout the web, but this huge size makes manual information retrieval impossible. The page ranking algorithm is an integral part of search engines, as it arranges the web pages associated with a queried TOI in order of their relevance. It therefore plays an important role in regulating search quality and the user experience of information retrieval. PageRank, HITS, and SALSA are well-known page ranking algorithms based on link structure analysis of a seed set, but the rankings they give have not yet been efficient. In this paper, we propose a variant of SALSA, called sNorm(p), for the efficient ranking of web pages. Our approach relies on a p-Norm from the vector norm family in a novel way, as vector norms can efficiently reduce the impact of low authority weights in hub weight calculation. Our study then compares the rankings given by PageRank, HITS, SALSA, and sNorm(p) to the same pages for the same queries. The effectiveness of the proposed approach over state-of-the-art methods is shown using the performance measures Mean Reciprocal Rank (MRR), Precision, Mean Average Precision (MAP), Discounted Cumulative Gain (DCG) and Normalized DCG (NDCG). The experimentation is performed on a dataset acquired after pre-processing of the results collected from the first few pages retrieved for a query by the Google search engine. Based on the type and amount of in-hand domain expertise, 30 queries are designed. The extensive evaluation and result analysis are performed using MRR, P@n, MAP, DCG, and NDCG as the performance-measuring statistical metrics. Furthermore, results are statistically verified using a significance test. Findings show that our approach outperforms state-of-the-art methods by attaining an MRR value of 0.8666 and a MAP value of 0.7957, thus ranking web pages more efficiently than its counterparts.
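An illustrative sketch of the core idea — using a vector p-norm instead of a plain sum when accumulating authority scores into hub scores — in a HITS-like iteration. The exact sNorm(p) update rule and its SALSA-style normalization are defined in the paper and may well differ from this simplification.

```python
import numpy as np

def pnorm_hits(adj, p=2.0, iters=50):
    """HITS-like iteration where a page's hub score is the p-norm (not the sum)
    of the authority scores of the pages it links to, damping the contribution
    of many low-authority targets. adj[i, j] = 1 if page i links to page j."""
    n = adj.shape[0]
    hub = np.ones(n)
    auth = np.ones(n)
    for _ in range(iters):
        auth = adj.T @ hub  # authority mass from incoming hubs
        auth /= max(np.linalg.norm(auth), 1e-12)
        hub = np.array([np.linalg.norm(auth[adj[i] > 0], ord=p) if adj[i].any() else 0.0
                        for i in range(n)])
        hub /= max(np.linalg.norm(hub), 1e-12)
    return hub, auth
```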

7.
This paper introduces the concept of a network monitoring system and, driven by practical needs, proposes a web page classification technique suited to such systems. The technique builds on the structure inherent in a web site, which is well expressed by page URLs, and is fundamentally different from traditional web page classification techniques based on data mining. It emphasizes practicality: the algorithm needs only a small amount of computing resources, making it a web page classification technique well suited to network monitoring systems.

8.
With the rapid development of the Internet, the harm caused by malicious web pages keeps growing. This paper analyzes and categorizes typical malicious web pages and, by comparing and classifying existing malicious web page detection techniques, analyzes the strengths and weaknesses of each detection technique.

9.
Modern web search engines are expected to return the top-k results efficiently. Although many dynamic index pruning strategies have been proposed for efficient top-k computation, most of them tend to ignore some especially important factors in ranking functions, such as term proximity (the distance relationship between query terms in a document). In our recent work [Zhu, M., Shi, S., Li, M., & Wen, J. (2007). Effective top-k computation in retrieving structured documents with term-proximity support. In Proceedings of 16th CIKM conference (pp. 771–780)], we demonstrated that, when term proximity is incorporated into ranking functions, most existing index structures and top-k strategies become quite inefficient. To solve this problem, we built an inverted index based on web page structure and proposed query processing strategies accordingly. The experimental results indicate that the proposed index structures and query processing strategies significantly improve top-k efficiency. In this paper, we study the possibility of adopting additional techniques to further improve top-k computation efficiency. We propose a Proximity-Probe Heuristic to make our top-k algorithms more efficient. We also test the efficiency of our approaches in various settings (linear or non-linear ranking functions, exact or approximate top-k processing, etc.).
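A minimal sketch of a term-proximity signal of the kind such ranking functions incorporate: the smallest token window in a document that covers every query term at least once (a smaller window means tighter proximity). This is a generic illustration, not the paper's specific proximity-aware ranking function or its Proximity-Probe Heuristic.

```python
import heapq

def min_cover_span(positions):
    """Smallest window (in tokens) covering one occurrence of every query term.
    positions: one sorted list of token positions per query term."""
    heads = [(pos[0], term, 0) for term, pos in enumerate(positions)]
    heapq.heapify(heads)
    cur_max = max(p for p, _, _ in heads)
    best = float("inf")
    while True:
        p, term, i = heapq.heappop(heads)
        best = min(best, cur_max - p + 1)
        if i + 1 == len(positions[term]):
            return best  # one term's occurrences exhausted: no smaller window exists
        nxt = positions[term][i + 1]
        cur_max = max(cur_max, nxt)
        heapq.heappush(heads, (nxt, term, i + 1))

# Query terms occur at these token positions in one document.
print(min_cover_span([[1, 15], [4, 20], [5, 30]]))  # -> 5 (window covering tokens 1..5)
```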

10.
Aiming at cleaning web pages and extracting their main topic content, this paper proposes a topic-content extraction algorithm based on web page layout. Following the layout of the original page, the algorithm builds a tag tree to segment the page into blocks and classify them, then computes a topic-relevance score for each content block in order to identify the page's topic, discard irrelevant information, and extract the topic content. Experiments show that the algorithm is well suited to de-noising and content extraction for topic-oriented pages and performs well in practical applications.
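A minimal sketch of block segmentation and relevance scoring in this spirit, assuming BeautifulSoup is available for building the tag tree; treating top-level div elements as blocks and scoring by topic-keyword overlap penalized by link density are illustrative assumptions, not the paper's exact algorithm.

```python
from bs4 import BeautifulSoup

def score_blocks(html, topic_terms):
    """Segment a page into <div> blocks and score each block by topic-keyword
    overlap, penalized by link density (link text vs. all text)."""
    soup = BeautifulSoup(html, "html.parser")
    scored = []
    for block in soup.find_all("div"):
        text = block.get_text(" ", strip=True)
        if not text:
            continue
        words = text.lower().split()
        link_text = " ".join(a.get_text(" ", strip=True) for a in block.find_all("a"))
        link_density = len(link_text.split()) / max(len(words), 1)
        overlap = sum(w in topic_terms for w in words) / len(words)
        scored.append((overlap * (1.0 - link_density), text[:60]))
    return sorted(scored, reverse=True)
```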

11.
Despite a number of studies looking at Web experience and Web searching tactics and behaviours, the specific relationships between experience and cognitive search strategies have not been widely researched. This study investigates how the cognitive search strategies of 80 participants might vary with Web experience as they engaged in two researcher-defined tasks and two participant-defined information seeking tasks. The researcher-defined and participant-defined tasks each included a directed search task and a general-purpose browsing task. While there were almost no significant performance differences between experience levels on any of the four tasks, there were significant differences in the use of cognitive search strategies. Participants with higher levels of Web experience were more likely to use “Parallel player”, “Parallel hub-and-spoke”, “Known address search domain” and “Known address” strategies, whereas participants with lower levels of Web experience were more likely to use “Virtual tourist”, “Link-dependent”, “To-the-point”, “Sequential player”, “Search engine narrowing”, and “Broad first” strategies. The patterns of use and the differences between researcher-defined and participant-defined tasks and between directed search tasks and general-purpose browsing tasks are also discussed, although the distribution of search strategies by Web experience was not statistically significant for each individual task.

12.
DIV+CSS page layout is used more and more widely in web design. This article explains the use of DIV+CSS in detail by building a sample web page with the technique.

13.
Micro-blogging services such as Twitter allow anyone to publish anything, anytime. Needless to say, much of the available content can be dismissed as babble or spam. However, given the number and diversity of users, some valuable pieces of information should arise from the stream of tweets. Thus, such services can develop into valuable sources of up-to-date information (the so-called real-time web), provided a way to find the most relevant, trustworthy and authoritative users is available. Hence, this is a highly pertinent question to which graph centrality methods can provide an answer. In this paper the author offers a comprehensive survey of feasible algorithms for ranking users in social networks, examines their vulnerabilities to linking malpractice in such networks, and suggests an objective criterion against which to compare such algorithms. Additionally, he suggests a first step towards “desensitizing” prestige algorithms against cheating by spammers and other abusive users.
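A minimal sketch of one such graph-centrality prestige score — a PageRank-style power iteration over a follower graph, where following someone passes prestige to them. The damping factor and the follower matrix here are illustrative assumptions; the survey's actual algorithms and its anti-spam desensitization are not reproduced.

```python
import numpy as np

def follower_prestige(follows, d=0.85, iters=100):
    """PageRank-style prestige on a follower graph.
    follows[i, j] = 1 means user i follows user j."""
    n = follows.shape[0]
    out = follows.sum(axis=1, keepdims=True)
    # Users who follow nobody spread their vote uniformly (dangling nodes).
    trans = np.where(out > 0, follows / np.maximum(out, 1), 1.0 / n)
    rank = np.full(n, 1.0 / n)
    for _ in range(iters):
        rank = (1 - d) / n + d * (trans.T @ rank)
    return rank
```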

14.
Data mining is the discovery of implicit regularities in large amounts of data. Starting from Web data mining, this paper studies personalized recommendation methods for web site optimization in a fairly systematic way, uses suitable association rules to analyze the associations among the pages a user browses, and finally verifies the performance of the personalized recommendation service.
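A minimal sketch of mining page-to-page association rules from browsing sessions, assuming sessions are given as lists of page identifiers; the support and confidence thresholds are illustrative, and the paper's actual rule-mining setup is not specified in the abstract.

```python
from itertools import combinations
from collections import Counter

def pairwise_rules(sessions, min_support=0.1, min_confidence=0.5):
    """Mine simple page -> page association rules from browsing sessions."""
    n = len(sessions)
    item_counts = Counter()
    pair_counts = Counter()
    for session in sessions:
        pages = set(session)
        item_counts.update(pages)
        pair_counts.update(combinations(sorted(pages), 2))
    rules = []
    for (a, b), count in pair_counts.items():
        support = count / n
        if support < min_support:
            continue
        for x, y in ((a, b), (b, a)):           # rule x -> y
            confidence = count / item_counts[x]
            if confidence >= min_confidence:
                rules.append((x, y, support, confidence))
    return rules
```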

15.
JavaScript has traditionally been single-threaded: a long-running script in an HTML page blocks all page functionality until the script completes. Web Worker is the multi-threading solution for JavaScript provided by HTML5. This paper analyzes how Web Workers work and what happens during their lifecycle, gives Web Worker code examples and debugging methods, and explains how Web Workers can be used to improve the performance of web applications. Because Web Workers are relatively new, examples and literature on them are still limited. This study provides reference application scenarios for Web Workers and directions for further research and application.

16.
This paper is concerned with the quality of training data in learning to rank for information retrieval. While many data selection techniques have been proposed to improve the quality of training data for classification, the study of the same issue for ranking appears to be insufficient. As pointed out in this paper, it is inappropriate to extend technologies for classification to ranking, and the development of novel technologies is sorely needed. In this paper, we study the development of such technologies. To begin with, we propose the concept of “pairwise preference consistency” (PPC) to describe the quality of a training data collection from the ranking point of view. PPC takes into consideration the ordinal relationship between documents as well as the hierarchical structure on queries and documents, which are both unique properties of ranking. Then we select a subset of the original training documents by maximizing the PPC of the selected subset. We further propose an efficient solution to the maximization problem. Empirical results on the LETOR benchmark datasets and a web search engine dataset show that with the subset of training data selected by our approach, the performance of the learned ranking model can be significantly improved.
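An illustrative proxy for the idea of pairwise preference consistency within one query — the fraction of document pairs whose human relevance labels agree in order with a reference score. The paper's actual PPC definition also accounts for the hierarchical query/document structure and drives a subset-selection optimization that is not shown here.

```python
from itertools import combinations

def pairwise_consistency(docs):
    """docs: list of (relevance_label, reference_score) pairs for one query.
    Returns the fraction of label-ordered pairs whose score order agrees."""
    concordant = total = 0
    for (l1, s1), (l2, s2) in combinations(docs, 2):
        if l1 == l2:
            continue                      # ties in labels carry no preference
        total += 1
        if (l1 - l2) * (s1 - s2) > 0:
            concordant += 1
    return concordant / total if total else 1.0
```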

17.
Building on existing related work, this paper designs a deep web crawler based on database classification. The crawler first identifies deep web database entry forms in the fetched pages, then classifies the databases automatically by query probing, selects a suitable set of keywords as query terms according to the classification result, automatically fills in the text boxes of the entry form, and submits queries to the database. Experimental results show that the database-classification-based deep web crawler crawls better than a deep web crawler driven by a fixed set of query terms.

18.
王成 《人天科学研究》2011,(10):126-127
With the rapid development of the Internet, malicious web pages have become one of the main threats to network security. Rootkit is a technique based on the Windows layered driver model. This paper describes the design of a malicious-web-page protection system based on Rootkit technology, which offers a useful reference for research on protection against malicious web pages.

19.
Through in-depth analysis and identification of web page structure and content features, this paper studies and experiments with methods for filtering noise web pages. Pages with obvious noise features are first filtered out with thresholds; then a feature vector is built for each remaining page and an SVM is used to classify the pages. The experiments are run on web page data collected from the Web, conclusions are drawn, and the next steps of the work are outlined.
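A minimal sketch of the two-stage filtering idea, assuming scikit-learn is available and that each page has already been reduced to a numeric feature vector (for instance link density, text length, tag ratios); the threshold rule and the feature layout are illustrative assumptions, not the paper's actual feature set.

```python
import numpy as np
from sklearn.svm import LinearSVC

def filter_pages(features, labels, threshold_col=0, threshold=0.9):
    """Stage 1: drop pages whose first feature (e.g. link density) exceeds a
    threshold, treating them as obvious noise. Stage 2: train a linear SVM
    on the remaining pages to separate noise from content pages."""
    features = np.asarray(features, dtype=float)
    labels = np.asarray(labels)
    keep = features[:, threshold_col] <= threshold
    clf = LinearSVC()
    clf.fit(features[keep], labels[keep])
    return keep, clf
```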

20.
Metrics derived from user visits or sessions provide a means of evaluating Websites and an important insight into online information seeking behaviour, the most important of them being the duration of sessions and the number of pages viewed in a session, a possible busyness indicator. However, the identification of sessions (often termed ‘sessionization’) is fraught with difficulty, in that there is no way of determining from a transactional log file that a user has ended their session. No one logs out. Instead a session delimiter has to be applied, and this is typically done on the basis of a standard period of inactivity. To date researchers have discussed the issue of a timeout delimiter in terms of a single value: if a page view time exceeds the cut-off value, the session is deemed to have ended. This approach assumes that page view time follows a single distribution and that the cut-off value is one point on that distribution. The authors, however, argue that the page view time distribution is composed of a number of quite separate view time distributions, because of the marked differences in view times between pages (abstract, contents page, full text). This implies that a number of timeout delimiters should be applied. Employing data from a study of the OhioLINK digital journal library, the authors demonstrate how the setting of a timeout delimiter impacts the estimate of page view time and the number of estimated sessions. Furthermore, they also show how a number of timeout delimiters might be applied, and they argue that this gives a better and more robust estimate of the number of sessions, session time and page view time compared to the application of a single timeout delimiter.
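A minimal sketch of sessionization with per-page-type timeouts, assuming a log of (timestamp, page_type) events for one user sorted by time; the timeout values are illustrative, not those estimated from the OhioLINK data.

```python
# Illustrative per-page-type inactivity timeouts (seconds), not the paper's estimates.
TIMEOUTS = {"abstract": 300, "contents": 600, "fulltext": 1800}

def sessionize(events, timeouts=TIMEOUTS, default=1800):
    """Split one user's (timestamp, page_type) events into sessions, ending a
    session when the gap after a page exceeds the timeout for that page type."""
    sessions, current = [], []
    for ts, page_type in events:
        if current:
            prev_ts, prev_type = current[-1]
            if ts - prev_ts > timeouts.get(prev_type, default):
                sessions.append(current)
                current = []
        current.append((ts, page_type))
    if current:
        sessions.append(current)
    return sessions

log = [(0, "contents"), (120, "abstract"), (300, "fulltext"), (4000, "contents")]
print(len(sessionize(log)))  # -> 2: the long gap after the fulltext view ends session 1
```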
