Similar Articles
20 similar articles found (search time: 7 ms)
1.
Eigentaste: A Constant Time Collaborative Filtering Algorithm   (cited 32 times: 2 self-citations, 30 by others)
Eigentaste is a collaborative filtering algorithm that uses universal queries to elicit real-valued user ratings on a common set of items and applies principal component analysis (PCA) to the resulting dense subset of the ratings matrix. PCA facilitates dimensionality reduction for offline clustering of users and rapid computation of recommendations. For a database of n users, standard nearest-neighbor techniques require O(n) processing time to compute recommendations, whereas Eigentaste requires O(1) (constant) time. We compare Eigentaste to alternative algorithms using data from Jester, an online joke recommending system. Jester has collected approximately 2,500,000 ratings from 57,000 users. We use the Normalized Mean Absolute Error (NMAE) measure to compare the performance of different algorithms. In the Appendix we use Uniform and Normal distribution models to derive analytic estimates of NMAE when predictions are random. On the Jester dataset, Eigentaste computes recommendations two orders of magnitude faster with no loss of accuracy. Jester is online at: http://eigentaste.berkeley.edu
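As a rough illustration of the evaluation measure mentioned above, the sketch below computes NMAE over predicted and observed ratings; the [-10, 10] rating scale and the toy numbers are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

def nmae(predicted, actual, r_min=-10.0, r_max=10.0):
    """Normalized Mean Absolute Error over observed ratings.

    MAE is divided by the rating range so that scores are
    comparable across systems that use different rating scales.
    """
    predicted = np.asarray(predicted, dtype=float)
    actual = np.asarray(actual, dtype=float)
    mae = np.mean(np.abs(predicted - actual))
    return mae / (r_max - r_min)

# Toy example (hypothetical ratings on a Jester-style [-10, 10] scale).
actual = [3.5, -7.0, 1.2, 9.0]
predicted = [2.0, -5.5, 0.0, 8.0]
print(f"NMAE = {nmae(predicted, actual):.3f}")
```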

2.
Privacy-preserving collaborative filtering algorithms are successful approaches. However, they are susceptible to shilling attacks. Recent research has increasingly focused on collaborative filtering that protects against both privacy violations and shilling attacks. Malicious users may add fake profiles to manipulate the output of privacy-preserving collaborative filtering systems, which reduces the accuracy of these systems. Thus, it is imperative to detect fake profiles for overall success. Many methods have been developed for detecting attack profiles to keep them out of the system. However, these techniques have all been established for non-private collaborative filtering schemes; the detection of shilling attacks in privacy-preserving recommendation systems has not been deeply examined. In this study, we examine the detection of shilling attacks in privacy-preserving collaborative filtering systems. We utilize four attack-detection methods to filter out fake profiles produced by six well-known shilling attacks on perturbed data. We evaluate these detection methods with respect to their ability to identify bogus profiles. Real data-based experiments are performed. Empirical outcomes demonstrate that some of the detection methods are very successful at filtering out fake profiles in privacy-preserving collaborative filtering schemes.

3.
Recommender systems have dramatically changed the way we consume content. Internet applications rely on these systems to help users navigate among the ever-increasing number of choices available. However, most current systems ignore the fact that user preferences can change according to context, resulting in recommendations that do not fit user interests. This research addresses these issues by proposing the (CF)^2 architecture, which uses local learning techniques to embed contextual awareness into collaborative filtering models. The proposed architecture is demonstrated on two large-scale case studies involving over 130 million and over 7 million unique samples, respectively. Results show that contextual models trained with a small fraction of the data provided similar accuracy to collaborative filtering models trained with the complete dataset. Moreover, the impact of taking into account context in real-world datasets has been demonstrated by the higher accuracy of context-based models in comparison to random selection models.

4.
Machine translation for the patent domain has become one of the important application areas of machine translation in recent years. This paper proposes a hybrid Chinese-English patent text machine translation system, built with a rule-based system as the backbone and combining the rule-based translation method with a phrase-based statistical translation system. In the hybrid system, the rule-based system is mainly responsible for source-language analysis and the transfer stage, generating the corresponding source-language parse tree and transfer tree and determining the basic syntactic frame of the target language. In the target-language generation stage, the statistical translation system searches for appropriate target word forms according to the generated target-language syntactic structure and produces the final candidate translations. Evaluated with automatic metrics, the hybrid system outperforms both the stand-alone rule-based system and the stand-alone statistical system, demonstrating the effectiveness and feasibility of the hybrid approach, which improves system translation performance and translation quality.

5.
To address the shortcomings of traditional TF-IDF in text filtering, a text filtering algorithm based on feature-term extraction is proposed. The principle and workflow of document information filtering are briefly analysed, with emphasis on the design and technical implementation of the filtering algorithm. Experimental results show that the proposed algorithm can filter document information effectively and improve the quality of information retrieval.
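The abstract above does not spell out the algorithm, so the following is only a generic sketch of TF-IDF-based feature-term extraction followed by a cosine-threshold filter; the tokenised toy documents, the top-k cut-off and the threshold are all illustrative assumptions.

```python
import math
from collections import Counter

def tf_idf(docs):
    """TF-IDF weights for a list of tokenised documents."""
    n, df = len(docs), Counter()
    for doc in docs:
        df.update(set(doc))
    return [{t: (c / len(doc)) * math.log(n / df[t])
             for t, c in Counter(doc).items()} for doc in docs]

def top_terms(weights, k=5):
    """Feature-term extraction: keep only the k highest-weighted terms."""
    return dict(sorted(weights.items(), key=lambda x: -x[1])[:k])

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a.keys() & b.keys())
    norm = (math.sqrt(sum(v * v for v in a.values())) *
            math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def filter_docs(docs, profile, threshold=0.1, k=5):
    """Pass through only documents whose extracted feature terms are
    sufficiently similar to the interest profile."""
    weighted = [top_terms(w, k) for w in tf_idf(docs)]
    return [d for d, w in zip(docs, weighted) if cosine(w, profile) >= threshold]

docs = [["text", "filtering", "feature", "extraction", "tfidf"],
        ["music", "retrieval", "onset", "pitch"],
        ["shilling", "attack", "detection", "filtering"]]
profile = {"filtering": 1.0, "text": 0.8}
print(filter_docs(docs, profile))
```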

6.
Multilingual retrieval (querying of multiple document collections, each in a different language) can be achieved by combining several individual techniques which enhance retrieval: machine translation to cross the language barrier, relevance feedback to add words to the initial query, decompounding for languages with complex term structure, and data fusion to combine monolingual retrieval results from different languages. Using the CLEF 2001 and CLEF 2002 topics and document collections, this paper evaluates these techniques within the context of a monolingual document ranking formula based upon logistic regression. Each individual technique yields improved performance over runs which do not utilize that technique. Moreover, the techniques are complementary, in that combining the best techniques outperforms individual technique performance. An approximate but fast document translation using bilingual wordlists created from machine translation systems is presented and evaluated. The fast document translation is as effective as query translation in multilingual retrieval. Furthermore, when fast document translation is combined with query translation in multilingual retrieval, the performance is significantly better than that of query translation or fast document translation alone.
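As a minimal illustration of the data-fusion step mentioned above, the sketch below min-max normalises each monolingual run's scores and merges the runs into one ranked list (a CombSUM-style fusion, which reduces to simple score merging when the collections are disjoint); the run names and scores are made up and this is not necessarily the fusion formula used in the paper.

```python
def fuse_runs(result_lists):
    """Min-max normalise each monolingual run's scores, then sum the
    normalised scores per document and rank the merged list."""
    fused = {}
    for run in result_lists:
        scores = list(run.values())
        lo, hi = min(scores), max(scores)
        for doc, s in run.items():
            norm = (s - lo) / (hi - lo) if hi > lo else 1.0
            fused[doc] = fused.get(doc, 0.0) + norm
    return sorted(fused.items(), key=lambda x: -x[1])

# Hypothetical monolingual runs for one multilingual topic.
english_run = {"en_12": 14.2, "en_07": 9.8, "en_33": 4.0}
german_run = {"de_03": 7.1, "de_19": 6.4}
print(fuse_runs([english_run, german_run]))
```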

7.
Collaborative filtering is a general technique for exploiting the preference patterns of a group of users to predict the utility of items for a particular user. Three different components need to be modeled in a collaborative filtering problem: users, items, and ratings. Previous research on applying probabilistic models to collaborative filtering has shown promising results. However, there is a lack of systematic studies of different ways to model each of the three components and their interactions. In this paper, we conduct a broad and systematic study on different mixture models for collaborative filtering. We discuss general issues related to using a mixture model for collaborative filtering, and propose three properties that a graphical model is expected to satisfy. Using these properties, we thoroughly examine five different mixture models, including Bayesian Clustering (BC), the Aspect Model (AM), the Flexible Mixture Model (FMM), the Joint Mixture Model (JMM), and the Decoupled Model (DM). We compare these models both analytically and experimentally. Experiments over two datasets of movie ratings under different configurations show that, in general, whether a model satisfies the proposed properties tends to be correlated with its performance. In particular, the Decoupled Model, which satisfies all three desired properties, outperforms the other mixture models as well as many other existing approaches for collaborative filtering. Our study shows that graphical models are powerful tools for modeling collaborative filtering, but careful design is necessary to achieve good performance.

8.
When speaking of information retrieval, we often mean text retrieval. But there exist many other forms of information retrieval applications. A typical example is collaborative filtering, which suggests interesting items to a user by taking into account other users’ preferences or tastes. Due to the uniqueness of the problem, it has been modeled and studied differently in the past, mainly from the preference-prediction and machine-learning viewpoint. A few attempts have nevertheless been made to bring collaborative filtering back to information (text) retrieval modeling, and new, interesting collaborative filtering techniques have subsequently been derived. In this paper, we show that from the algorithmic viewpoint, there is an even closer relationship between collaborative filtering and text retrieval. Specifically, major collaborative filtering algorithms, such as memory-based methods, essentially calculate the dot product between the user vector (as the query vector in text retrieval) and the item rating vector (as the document vector in text retrieval). Thus, if we properly structure user preference data and employ the target user’s ratings as query input, major text retrieval algorithms and systems can be used directly without any modification. In this regard, we propose a unified formulation under a common notational framework for memory-based collaborative filtering, and a technique to use any text retrieval weighting function with collaborative filtering preference data. Besides confirming the rationale of the framework, our preliminary experimental results have also demonstrated the effectiveness of the approach in using text retrieval models and systems to perform item ranking tasks in collaborative filtering.
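To make the dot-product view above concrete, here is a minimal sketch of memory-based scoring in which the target user's ratings play the role of the query vector; the toy matrix and the unweighted dot-product similarity are illustrative assumptions (real systems typically normalise and weight the vectors).

```python
import numpy as np

# Toy user-item rating matrix (rows: users, columns: items); 0 = unrated.
R = np.array([
    [5, 3, 0, 1],
    [4, 0, 0, 1],
    [1, 1, 0, 5],
    [0, 0, 5, 4],
], dtype=float)

def user_based_scores(R, target):
    """Memory-based CF as a dot product: the target user's ratings act as
    the 'query vector' and every other user's ratings as a 'document vector'."""
    q = R[target]
    sims = R @ q                      # dot-product similarity to every user
    sims[target] = 0.0                # ignore self-similarity
    scores = sims @ R                 # similarity-weighted vote for each item
    scores[R[target] > 0] = -np.inf   # do not re-recommend already rated items
    return scores

print(user_based_scores(R, target=0))
```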

9.
Collaborative filtering is a popular recommendation technique. Although researchers have focused on the accuracy of the recommendations, real applications also need efficient algorithms. An index structure can be used to store the rating matrix and compute recommendations very fast. In this paper we study how compression techniques can reduce the size of this index structure and, at the same time, speed up recommendations. We show how coding techniques commonly used in Information Retrieval can be effectively applied to collaborative filtering, reducing the matrix size by up to 75% and almost doubling the recommendation speed. Additionally, we propose a novel identifier reassignment technique that achieves high compression rates, reducing the size of an already compressed matrix by a further 40%. It is a very simple approach based on assigning the smallest identifiers to the items and users with the highest number of ratings, and it can be efficiently computed using two-pass indexing. The usage of the proposed compression techniques can significantly reduce the storage and time costs of recommender systems, which are two important factors in many real applications.
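The sketch below illustrates the two ideas described above: identifiers are reassigned so that the most frequently rated items receive the smallest numbers, and a sorted identifier list is then stored as variable-byte-encoded d-gaps. The coder and the toy ratings are illustrative; the paper's exact codes and index layout may differ.

```python
from collections import Counter

def reassign_ids(ratings):
    """Give the smallest identifiers to the most-rated items, so that
    posting lists are dominated by small numbers that compress well."""
    freq = Counter(item for _, item in ratings)
    remap = {item: new_id for new_id, (item, _) in
             enumerate(freq.most_common(), start=1)}
    return [(user, remap[item]) for user, item in ratings], remap

def vbyte_encode(sorted_ids):
    """Variable-byte encode a strictly increasing identifier list as d-gaps
    (the last byte of each gap has its high bit set)."""
    out, prev = bytearray(), 0
    for n in sorted_ids:
        gap, chunk = n - prev, []
        prev = n
        while True:
            chunk.append(gap & 0x7F)
            gap >>= 7
            if not gap:
                break
        chunk[0] |= 0x80
        out.extend(reversed(chunk))
    return bytes(out)

ratings = [("u1", "itemA"), ("u2", "itemA"), ("u1", "itemB"), ("u3", "itemA")]
_, mapping = reassign_ids(ratings)
print(mapping)                          # {'itemA': 1, 'itemB': 2}
print(vbyte_encode([1, 5, 20, 300]).hex())
```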

10.
11.
We introduce fast filtering methods for content-based music retrieval problems, where the music is modeled as sets of points in the Euclidean plane, formed by (onset time, pitch) pairs. The filters exploit a precomputed index for the database and run in time dependent on the query length and the intermediate output sizes of the filters, being almost independent of the database size. With a quadratic-size index, the filters are provably lossless for general point sets of this kind. In the context of music, the search space can be narrowed down, which enables the use of a linear-sized index for effective and efficient lossless filtering. For the checking phase, which dominates the overall running time, we exploit previously designed algorithms suitable for local checking. In our experiments on a music database, our best filter-based methods performed several orders of magnitude faster than the previously designed solutions.
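To illustrate the point-set view of music described above, here is a brute-force sketch of a translation-invariant filter that votes for candidate (time shift, transposition) vectors; the paper's filters work over a precomputed index rather than by this quadratic scan, and the toy note sets are made up.

```python
from collections import Counter

def translation_filter(db_points, query_points, min_matches=None):
    """Count, for every candidate translation vector, how many query
    points it maps exactly onto database points; vectors aligning enough
    points are handed to a more expensive checking phase."""
    if min_matches is None:
        min_matches = len(query_points)          # exact-occurrence filter
    votes = Counter()
    for qt, qp in query_points:
        for dt, dp in db_points:
            votes[(dt - qt, dp - qp)] += 1
    return [shift for shift, count in votes.items() if count >= min_matches]

# Toy (onset time, pitch) sets: the query occurs in the database shifted
# by 4 time units and transposed up 2 semitones.
db = [(0, 60), (4, 62), (5, 64), (6, 67), (9, 71)]
query = [(0, 60), (1, 62), (2, 65)]
print(translation_filter(db, query))             # -> [(4, 2)]
```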

12.
Evolving local and global weighting schemes in information retrieval   (cited 1 time: 0 self-citations, 1 by others)
This paper describes a method, using Genetic Programming, to automatically determine term weighting schemes for the vector space model. Based on a set of queries and their human-determined relevant documents, weighting schemes are evolved which achieve a high average precision. In Information Retrieval (IR) systems, useful information for term weighting schemes is available from the query, individual documents and the collection as a whole. We evolve term weighting schemes in both local (within-document) and global (collection-wide) domains which interact with each other correctly to achieve a high average precision. These weighting schemes are tested on well-known test collections and are compared to the traditional tf-idf weighting scheme and to the BM25 weighting scheme using standard IR performance metrics. Furthermore, we show that the global weighting schemes evolved on small collections also increase average precision on larger TREC data. These global weighting schemes are shown to adhere to Luhn’s resolving power, as both high- and low-frequency terms are assigned low weights. However, the local weightings evolved on small collections do not perform as well on large collections. We conclude that in order to evolve improved local (within-document) weighting schemes it is necessary to evolve these on large collections.
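As context for the local/global split discussed above, the sketch below shows the standard BM25 baseline mentioned in the abstract, factored into a within-document part and a collection-wide part; the example numbers are hypothetical.

```python
import math

def bm25_weight(tf, dl, avgdl, df, n_docs, k1=1.2, b=0.75):
    """Classic BM25 term weight: a local (within-document) part built on
    term frequency tf and document length dl, multiplied by a global
    (collection-wide) inverse-document-frequency part."""
    local = (tf * (k1 + 1)) / (tf + k1 * (1 - b + b * dl / avgdl))
    global_ = math.log((n_docs - df + 0.5) / (df + 0.5) + 1)
    return local * global_

# Hypothetical numbers: a term occurring 3 times in a 120-word document,
# in a 10,000-document collection where 50 documents contain the term.
print(bm25_weight(tf=3, dl=120, avgdl=150, df=50, n_docs=10_000))
```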

13.
A computerized English/Spanish correlation index to five biomedical library classification schemes and computerized English/Spanish, Spanish/English listings of MeSH are described. The index was accomplished by supplying the appropriate classification numbers of five classification schemes (National Library of Medicine; Library of Congress; Dewey Decimal; Cunningham; Boston Medical) to MeSH and to a Spanish translation of MeSH. The data were keypunched, merged on magnetic tape, and sorted in a computer alphabetically by English and Spanish subject headings and sequentially by classification number. Some benefits and uses of the index are: a complete index to classification schemes based on MeSH terms; a tool for the conversion of classification numbers when reclassifying collections; a Spanish index and a crude Spanish translation of five classification schemes; and a data base for future applications, e.g., automatic classification. Other classification schemes, such as the UDC, and translations of MeSH into other languages can be added.

14.
Modern information retrieval systems use several levels of caching to speed up computation by exploiting frequent, recent or costly data used in the past. Previous studies show that the use of caching techniques is crucial in search engines, as it helps reduce query response times and processing workloads on search servers. In this work we propose and evaluate a static cache that acts simultaneously as a list and intersection cache, offering a more efficient way of handling cache space. We also use a query resolution strategy that takes advantage of the existence of this cache to reorder the query execution sequence. In addition, we propose effective strategies to select the term pairs that should populate the cache. We also represent the data in the cache in both raw and compressed forms and evaluate the differences between them using different cache-size configurations. The results show that the proposed Integrated Cache outperforms the standard posting-list cache in most cases, taking advantage not only of the intersection cache but also of the query resolution strategy.
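A minimal sketch of the list-plus-intersection idea described above: single-term posting lists and precomputed intersections for selected term pairs live in the same static structure, and a two-term lookup is served from a pair entry when one exists. The class name, the pair-selection input and the toy postings are assumptions for illustration.

```python
class IntegratedCache:
    """Simplified static cache storing posting lists for single terms and
    precomputed intersections for selected term pairs, so that a two-term
    query can be answered from one cache entry when possible."""

    def __init__(self, postings, frequent_pairs):
        self.lists = dict(postings)                   # term -> sorted doc ids
        self.pairs = {tuple(sorted(p)): sorted(set(self.lists[p[0]]) &
                                               set(self.lists[p[1]]))
                      for p in frequent_pairs}

    def intersect(self, t1, t2):
        key = tuple(sorted((t1, t2)))
        if key in self.pairs:                         # intersection-cache hit
            return self.pairs[key]
        return sorted(set(self.lists.get(t1, [])) &   # fall back to term lists
                      set(self.lists.get(t2, [])))

postings = {"cache": [1, 2, 5, 9], "query": [2, 3, 5], "speed": [5, 9]}
cache = IntegratedCache(postings, frequent_pairs=[("cache", "query")])
print(cache.intersect("cache", "query"))   # served from the pair entry
print(cache.intersect("query", "speed"))   # computed from the term lists
```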

15.
We evaluate article-level metrics along two dimensions. Firstly, we analyse metrics’ ranking bias in terms of fields and time. Secondly, we evaluate their performance based on test data that consists of (1) papers that have won high-impact awards and (2) papers that have won prizes for outstanding quality. We consider different citation impact indicators and indirect ranking algorithms in combination with various normalisation approaches (mean-based, percentile-based, co-citation-based, and post hoc rescaling). We execute all experiments on two publication databases which use different field categorisation schemes (author-chosen concept categories and categories based on papers’ semantic information). In terms of bias, we find that citation counts are always less time biased but always more field biased compared to PageRank. Furthermore, rescaling paper scores by a constant number of similarly aged papers reduces time bias more effectively compared to normalising by calendar years. We also find that percentile citation scores are less field and time biased than mean-normalised citation counts. In terms of performance, we find that time-normalised metrics identify high-impact papers better shortly after their publication compared to their non-normalised variants. However, after 7 to 10 years, the non-normalised metrics perform better. A similar trend exists for the set of high-quality papers where these performance cross-over points occur after 5 to 10 years. Lastly, we also find that personalising PageRank with papers’ citation counts reduces time bias but increases field bias. Similarly, using papers’ associated journal impact factors to personalise PageRank increases its field bias. In terms of performance, PageRank should always be personalised with papers’ citation counts and time-rescaled for citation windows smaller than 7 to 10 years.
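To illustrate two of the normalisation approaches mentioned above, the sketch below rescales each paper's citation count by the mean of a fixed number of similarly aged papers and computes simple percentile scores; the window size and the toy data are illustrative, and the paper's exact normalisation formulas may differ.

```python
import numpy as np

def rescale_by_age(citations, years, window=50):
    """Rescale each paper's citation count by the mean count of the
    'window' papers closest to it in publication year (a rough stand-in
    for normalising against a constant number of similarly aged papers)."""
    citations, years = np.asarray(citations, float), np.asarray(years)
    scores = np.empty_like(citations)
    for i, y in enumerate(years):
        peers = np.argsort(np.abs(years - y))[:window]
        scores[i] = citations[i] / max(citations[peers].mean(), 1e-9)
    return scores

def percentile_scores(citations):
    """Percentile of each paper within the whole set (0 = lowest cited)."""
    citations = np.asarray(citations, float)
    return np.array([(citations < c).mean() for c in citations])

cites = [120, 4, 35, 0, 15]
years = [2005, 2014, 2010, 2016, 2012]
print(rescale_by_age(cites, years, window=3))
print(percentile_scores(cites))
```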

16.
Some insight into the behavior of adaptive filtering systems may be gained by comparing them with similar ranked-output retrieval systems. This is not easy; however, a new optimization measure, introduced for the TREC-9 filtering track, makes some such comparison possible. A series of experiments using the TREC-9 filtering data shows that filtering effectiveness is comparable to routing effectiveness, and demonstrates the gains to be made from adaptation.

17.
This paper describes and evaluates different retrieval strategies that are useful for search operations on document collections written in various European languages, namely French, Italian, Spanish and German. We also suggest and evaluate different query translation schemes based on freely available translation resources. In order to cross language barriers, we propose a combined query translation approach that has resulted in interesting retrieval effectiveness. Finally, we suggest a collection merging strategy based on logistic regression that tends to perform better than other merging approaches.
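As a rough illustration of logistic-regression-based collection merging, the sketch below maps each document's (score, log rank) pair from its per-language run through a logistic function and sorts the union by the resulting probability; the feature set, the coefficients and the run data are illustrative assumptions rather than the model fitted in the paper.

```python
import math

def merge_runs(runs, weights=(1.0, 2.5, -0.8)):
    """Merge per-collection ranked lists by mapping each (score, rank) pair
    through a logistic function and sorting the union by the estimated
    probability of relevance. The coefficients here are made up; in
    practice they are fitted on training topics."""
    b0, b_score, b_rank = weights
    merged = []
    for collection, ranking in runs.items():
        for rank, (doc, score) in enumerate(ranking, start=1):
            z = b0 + b_score * score + b_rank * math.log(rank)
            prob = 1.0 / (1.0 + math.exp(-z))
            merged.append((prob, collection, doc))
    return sorted(merged, reverse=True)

runs = {
    "fr": [("doc_fr_1", 0.91), ("doc_fr_2", 0.55)],
    "de": [("doc_de_7", 0.88), ("doc_de_3", 0.47)],
}
for prob, coll, doc in merge_runs(runs):
    print(f"{prob:.3f}  {coll}  {doc}")
```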

18.
[Objective/Significance] As an emerging social media platform, microblogs attract wide attention from Internet users. Microblog data contain large amounts of user information, user behaviour and user-generated content; automatically recognising book titles in microblog content helps analyse users' reading interests, collect user evaluations of books and mine book-related knowledge. [Method/Process] Based on the characteristics of microblog data, a representation-learning method based on deep neural networks is proposed, which uses continuous vector representations of the context of candidate book titles to automatically recognise book titles in microblog content. [Result/Conclusion] Experimental results show that the proposed method significantly outperforms traditional supervised machine learning methods based on feature engineering, reaching a precision of 91.92%.

19.
Li Wei, Kong Tao. 《情报学报》, 2000, 19(5): 458-463
This paper uses a fuzzy inheritance mechanism to extend the conceptual schema of the database structure. While preserving the data independence of the individual IR systems, a common mapping is formed. During retrieval, queries are expressed against this common mapping and passed down to each IR system in an inheritance-like fashion, without affecting the external schema of each database structure.

20.
Users often issue all kinds of queries to look for the same target, due to the intrinsic ambiguity and flexibility of natural languages. Some previous work clusters queries based on co-clicks; however, the intents of queries in one cluster are not that similar but only roughly related. It is desirable to automatically mine queries with equivalent intents from large-scale search logs. In this paper, we take account of the similarities between query strings. There are two issues associated with such similarities: it is too costly to compare every pair of queries in large-scale search logs, and two queries with a similar formulation, such as “SVN” (Apache Subversion) and “SVM” (support vector machine), are not necessarily similar in their intents. To address these issues, we propose using the similarities of query strings on top of the co-click-based clustering results. Our method improves precision over the co-click-based clustering method (lifting precision from 0.37 to 0.62), and outperforms a commercial search engine’s query alteration (lifting the F1 measure from 0.42 to 0.56). As an application, we consider web document retrieval. We aggregate similar queries’ click-throughs with the query’s own click-throughs and evaluate them on a large-scale dataset. Experimental results indicate that our proposed method significantly outperforms the baseline method of using a query’s own click-throughs in all metrics.
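To make the idea of layering string similarity on top of co-click clusters concrete, here is a small sketch that splits a co-click cluster into groups of queries whose strings are also similar; the SequenceMatcher ratio, the threshold and the example cluster are illustrative, not the measure used in the paper.

```python
from difflib import SequenceMatcher

def refine_coclick_cluster(queries, threshold=0.6):
    """Split a co-click cluster into groups of queries whose strings are
    also similar, as a rough proxy for intent equivalence."""
    groups = []
    for q in queries:
        for group in groups:
            if any(SequenceMatcher(None, q.lower(), g.lower()).ratio() >= threshold
                   for g in group):
                group.append(q)
                break
        else:
            groups.append([q])
    return groups

# A hypothetical co-click cluster: related by clicks, but not all equivalent.
cluster = ["apache subversion", "apache subversion download",
           "svn", "support vector machine"]
print(refine_coclick_cluster(cluster))
```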
