Similar Documents
20 similar documents found (search time: 36 ms)
1.
Evolving local and global weighting schemes in information retrieval (Cited: 1; self-citations: 0; citations by others: 1)
This paper describes a method, using Genetic Programming, to automatically determine term weighting schemes for the vector space model. Based on a set of queries and their human-determined relevant documents, weighting schemes are evolved which achieve a high average precision. In Information Retrieval (IR) systems, useful information for term weighting schemes is available from the query, individual documents and the collection as a whole. We evolve term weighting schemes in both the local (within-document) and global (collection-wide) domains which interact with each other to achieve a high average precision. These weighting schemes are tested on well-known test collections and compared to the traditional tf-idf weighting scheme and to the BM25 weighting scheme using standard IR performance metrics. Furthermore, we show that the global weighting schemes evolved on small collections also increase average precision on larger TREC data. These global weighting schemes are shown to adhere to Luhn's resolving power, as both high- and low-frequency terms are assigned low weights. However, the local weightings evolved on small collections do not perform as well on large collections. We conclude that in order to evolve improved local (within-document) weighting schemes, it is necessary to evolve them on large collections.
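The tf-idf baseline that the evolved schemes are compared against can be sketched as follows (a minimal illustration of the classic formula, not the paper's evolved weighting):

```python
import math

def tf_idf(tf: int, df: int, n_docs: int) -> float:
    """Classic tf-idf: log-dampened term frequency (local weight)
    times inverse document frequency (global weight)."""
    if tf == 0 or df == 0:
        return 0.0
    return (1 + math.log(tf)) * math.log(n_docs / df)
```

Genetic Programming would search the space of such local-times-global combinations, keeping the schemes that score the best average precision on the training queries.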

2.
Effectively representing the relationship between documents and their contents is crucial to increasing the classification performance of text documents. Term weighting is a preprocessing step that aims to represent text documents better in the vector space by assigning proper weights to terms. Since the calculation of appropriate weight values directly affects classification performance, term weighting remains one of the important sub-areas of text classification research. In this study, we propose a novel term weighting strategy (MONO) which uses the non-occurrence information of terms more effectively than existing term weighting approaches in the literature. The proposed strategy also performs intra-class document scaling to better represent the distinguishing capability of terms that occur in different numbers of documents within classes of the same size. Based on the MONO weighting strategy, two novel supervised term weighting schemes, TF-MONO and SRTF-MONO, are proposed for text classification. The proposed schemes were tested with two different classifiers, SVM and KNN, on three datasets: Reuters-21578, 20-Newsgroups, and WebKB. Their classification performance was compared with five existing term weighting schemes: TF-IDF, TF-IDF-ICF, TF-RF, TF-IDF-ICSDF, and TF-IGM. The results for the seven schemes show that SRTF-MONO generally outperformed the other schemes on all three datasets. Moreover, TF-MONO yielded promising Micro-F1 and Macro-F1 results compared with the other five benchmark term weighting methods, especially on Reuters-21578 and 20-Newsgroups.
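Among the benchmarks listed, TF-RF illustrates the supervised idea of weighting a term by how its documents distribute over classes; a minimal sketch (the document counts in the usage line are illustrative):

```python
import math

def tf_rf(tf: int, pos_df: int, neg_df: int) -> float:
    """TF-RF: term frequency times relevance frequency.
    pos_df: documents of the positive class containing the term,
    neg_df: documents of the other classes containing the term."""
    rf = math.log2(2 + pos_df / max(1, neg_df))
    return tf * rf

# A term concentrated in the positive class outweighs a balanced one:
skewed, balanced = tf_rf(2, 30, 3), tf_rf(2, 10, 10)
```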

3.
In many probabilistic modeling approaches to Information Retrieval we are interested in estimating how well a document model “fits” the user’s information need (query model). In statistics, on the other hand, goodness-of-fit tests are well-established techniques for assessing assumptions about the underlying distribution of a data set. Supposing that the query terms are randomly distributed across the documents of the collection, we want to know whether the occurrences of the query terms in a particular document are more frequent than chance would predict. This can be quantified by goodness-of-fit tests. In this paper, we present a new document ranking technique based on Chi-square goodness-of-fit tests. Given the null hypothesis that there is no association between the query terms q and the document d beyond chance occurrences, we perform a Chi-square goodness-of-fit test of this hypothesis and calculate the corresponding Chi-square values. Our retrieval formula ranks the documents in the collection by these Chi-square values. The method was evaluated over the entire TREC test collection on disks 4 and 5, using the topics of the TREC-7 and TREC-8 conferences (50 topics each). It performs well, steadily outperforming the classical Okapi term frequency weighting formula, though falling below the KL-divergence method from the language modeling approach. Despite this, we believe the technique is an important non-parametric way of thinking about retrieval, offering the possibility of trying simple alternative retrieval formulas within the framework of goodness-of-fit statistical tests, modeling the data in various ways by estimating or assigning arbitrary theoretical distributions to terms.
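The ranking idea reduces to Pearson's Chi-square statistic over observed versus expected query-term counts; a minimal sketch with made-up counts (the paper's exact expected-count estimation is not reproduced here):

```python
def chi_square(observed, expected):
    """Pearson's chi-square statistic: sum((O - E)^2 / E)."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected) if e > 0)

# Observed counts of two query terms in a document vs. counts expected
# from their collection-wide frequencies (illustrative numbers):
score = chi_square([8, 5], [2.0, 1.5])
```

Documents are then ranked by this score: the larger the deviation from the chance expectation, the higher the document ranks for the query.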

4.
雷晓  常春  刘伟 《图书情报工作》2019,63(20):121-128
[Purpose/Significance] To keep a thesaurus useful, new terms emerging in the domain must be continually added to it. During updating and maintenance, exploring the distribution characteristics of new terms from the perspectives of time and term frequency can inform methods for new-term detection. [Method/Process] Based on features of new terms, combined with the development of their document frequencies at time points and over time spans, statistical analysis is used to study the distribution of terms in different growth stages, in particular the differences between the emergence stage and the growth stage. [Result/Conclusion] Empirical analysis shows that new terms are generally in the growth stage of term development; when a candidate new term maintains a positive growth trend beyond a certain number of years, it can be regarded as simultaneously exhibiting novelty, temporal persistence, and termhood. Identifying new domain terms from these distribution characteristics, combined with the judgment of thesaurus-construction experts, achieves high precision in deciding which new terms to include, and can effectively identify the low-frequency terms that account for a large share in practice.
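The sustained positive growth criterion described in the conclusion can be sketched as a simple check over yearly document frequencies (the `min_years` threshold and input format are assumptions, not the paper's parameters):

```python
def sustained_growth(yearly_df, min_years=3):
    """True if the yearly document frequencies keep a positive
    growth trend for at least min_years consecutive year-to-year steps."""
    run = 0
    for prev, cur in zip(yearly_df, yearly_df[1:]):
        run = run + 1 if cur > prev else 0
        if run >= min_years:
            return True
    return False
```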

5.
The need to cluster small text corpora composed of a few hundreds of short texts rises in various applications; e.g., clustering top-retrieved documents based on their snippets. This clustering task is challenging due to the vocabulary mismatch between short texts and the insufficient corpus-based statistics (e.g., term co-occurrence statistics) due to the corpus size. We address this clustering challenge using a framework that utilizes a set of external knowledge resources that provide information about term relations. Specifically, we use information induced from the resources to estimate similarity between terms and produce term clusters. We also utilize the resources to expand the vocabulary used in the given corpus and thus enhance term clustering. We then project the texts in the corpus onto the term clusters to cluster the texts. We evaluate various instantiations of the proposed framework by varying the term clustering method used, the approach of projecting the texts onto the term clusters, and the way of applying external knowledge resources. Extensive empirical evaluation demonstrates the merits of our approach with respect to applying clustering algorithms directly on the text corpus, and using state-of-the-art co-clustering and topic modeling methods.
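The projection step, representing each text as a vector over term clusters, can be sketched as follows (names and data shapes are hypothetical):

```python
def project_onto_term_clusters(doc_terms, term_clusters):
    """Represent a document as a vector over term clusters:
    component i counts occurrences of terms from cluster i."""
    return [sum(doc_terms.count(t) for t in cluster) for cluster in term_clusters]

clusters = [{"car", "engine"}, {"bank", "loan"}]
vec = project_onto_term_clusters(["car", "engine", "car", "loan"], clusters)
```

A standard clustering algorithm can then be run over these low-dimensional cluster-space vectors instead of the sparse term vectors.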

6.
We evaluate five term scoring methods for automatic term extraction on four different types of text collections: personal document collections, news articles, scientific articles and medical discharge summaries. Each collection has its own use case: author profiling, Boolean query term suggestion, personalized query suggestion and patient query expansion. The term scoring methods proposed in the literature were designed with a specific goal in mind. However, it is as yet unclear how these methods perform on collections with characteristics different from those they were designed for, and which method is the most suitable for a given (new) collection. In a series of experiments, we evaluate, compare and analyse the output of the five term scoring methods for the collections at hand. We found that the most important factors in the success of a term scoring method are the size of the collection and the importance of multi-word terms in the domain. Larger collections lead to better terms; all methods are hindered by small collection sizes (below 1000 words). The most flexible method for the extraction of single-word and multi-word terms is pointwise Kullback–Leibler divergence for informativeness and phraseness. Overall, we have shown that extracting relevant terms with unsupervised term scoring methods is possible in diverse use cases, and that the methods are applicable in more contexts than their original design purpose.
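The method the authors single out, pointwise Kullback–Leibler divergence for informativeness and phraseness, scores a term by p·log2(p/q); a minimal sketch with illustrative probabilities:

```python
import math

def pointwise_kl(p: float, q: float) -> float:
    """Pointwise KL contribution: p * log2(p / q)."""
    return p * math.log2(p / q)

# Informativeness: foreground probability vs. background probability.
# Phraseness: phrase probability vs. the product of its unigram probabilities.
info = pointwise_kl(0.01, 0.001)            # term 10x more frequent in foreground
phrase = pointwise_kl(0.004, 0.002 * 0.003) # bigram far above independence
score = info + phrase
```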

7.
A nonlinear weighting method for topic extraction (Cited: 15; self-citations: 0; citations by others: 15)
韩客松  王永成 《情报学报》2000,19(6):650-653
Topic extraction is an important task in text processing. This paper first analyzes several quantitative issues that arise when weighting schemes for topic extraction are formed, and then proposes a nonlinear weighting method for topic-related terms. Comparative experiments show that it is not only a fairly robust method but can also improve topic extraction accuracy to some extent.

8.
Improving the relevance of results returned by information retrieval systems has long been an important research topic in IR. This paper first introduces the concept of query term occurrence information, then gives a formal representation of query term occurrence weight, and combines it with the BM25 model. Two methods are used to compute the query term occurrence weight: linear weighting and factor weighting. Experiments on the GOV2 dataset show that, with either method, adding the query term occurrence weight effectively improves the relevance of retrieval results. For the TREC 2005 queries, MAP improved by 15.78% and P@10 by 34.68%. The method described here has been applied in the TREC 2009 Web Track.
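A hedged sketch of the combination described: a standard BM25 term score interpolated linearly with an occurrence-based weight (the occurrence formula and the mixing parameter `alpha` are placeholders, not the paper's definitions):

```python
import math

def bm25_term(tf, df, n_docs, doc_len, avg_len, k1=1.2, b=0.75):
    """Standard BM25 contribution of one query term."""
    idf = math.log((n_docs - df + 0.5) / (df + 0.5) + 1)
    return idf * tf * (k1 + 1) / (tf + k1 * (1 - b + b * doc_len / avg_len))

def combined_score(bm25, occurrence, alpha=0.8):
    """Linear combination of the BM25 score with an occurrence-based
    weight; the paper's exact occurrence formula is not given here."""
    return alpha * bm25 + (1 - alpha) * occurrence
```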

9.
Pseudo-relevance feedback query expansion based on feature term extraction and relevance fusion (Cited: 2; self-citations: 0; citations by others: 2)
To address the term mismatch problem in existing information retrieval systems, this paper proposes a pseudo-relevance feedback query expansion algorithm based on feature term extraction and relevance fusion, together with a new method for computing expansion term weights. The algorithm extracts feature terms related to the original query from the top n documents of the initial retrieval, and selects the final expansion terms according to each feature term's frequency in the initially retrieved document set and its relevance to the original query. Experimental results show that the method is effective and improves information retrieval performance.
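The selection step, ranking candidate expansion terms by feedback-document frequency combined with query relatedness, can be sketched as follows (the combination formula is a hypothetical stand-in for the paper's weight):

```python
def expansion_score(term_df, n_feedback_docs, relatedness, alpha=0.5):
    """Score a candidate expansion term by its document frequency in the
    top-n feedback documents, mixed with its relatedness to the query."""
    freq_part = term_df / n_feedback_docs
    return alpha * freq_part + (1 - alpha) * relatedness

# Candidates as {term: (df in top-10 feedback docs, relatedness)}:
ranked = sorted(
    {"retrieval": (8, 0.9), "system": (9, 0.2)}.items(),
    key=lambda kv: expansion_score(kv[1][0], 10, kv[1][1]),
    reverse=True,
)
```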

10.
On the research methods of library science (Cited: 19; self-citations: 1; citations by others: 18)
This paper reviews how the understanding of library science research methods has evolved and how library science has sought to establish its own scientific character. It argues that the choice among different methods depends not only on the specific object and task at hand, but also on whether the concepts and terminology of library science are well defined. It proposes that a discipline's independence is closely tied to the distinctiveness of its methods, and that methods and methodology are the key to library science attaining scientific status.

11.
Every information retrieval (IR) model embeds in its scoring function a form of term frequency (TF) quantification. The contribution of the term frequency is determined by the properties of the chosen TF quantification function and by its TF normalization. The first defines how independent the occurrences of multiple terms are, while the second mitigates the a priori probability of observing a high term frequency in a document (an estimate usually based on document length). New test collections from different domains (e.g. medical, legal) give evidence that not only document length but also the verboseness of documents should be explicitly considered. We therefore propose and investigate a systematic combination of document verboseness and length. To justify the combination theoretically, we show the duality between document verboseness and length. In addition, we investigate the duality between verboseness and other components of IR models. We test these new TF normalizations on four suitable test collections, over a well-defined spectrum of TF quantifications. Finally, based on the theoretical and experimental observations, we show how the two components of this new normalization, document verboseness and length, interact with each other. Our experiments demonstrate that the new models never underperform existing models, while sometimes producing statistically significantly better results, at no additional computational cost.
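Verboseness here is the average number of occurrences per distinct term in a document; one way to mix it with length in a pivoted TF normalization can be sketched as follows (the mixing formula is illustrative, not the paper's exact family):

```python
def verboseness(doc_len: int, n_unique_terms: int) -> float:
    """Average occurrences per distinct term in the document."""
    return doc_len / n_unique_terms

def normalized_tf(tf, doc_len, avg_len, verb, avg_verb, b=0.5):
    """Illustrative TF normalization: divide tf by a pivot that blends
    relative document length and relative verboseness."""
    pivot = (1 - b) * (doc_len / avg_len) + b * (verb / avg_verb)
    return tf / pivot
```

Documents that are long or unusually repetitive get their raw term frequencies damped, while average documents pass through unchanged.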

12.
曾文  徐红姣  李颖  王莉军  赵婧 《情报工程》2016,2(3):037-042
Text similarity is mostly computed by modeling texts as term frequency vectors in a vector space model (VSM) using TF-IDF. Drawing on the characteristics of scientific journal and patent literature, this paper improves the TF-IDF computation by replacing word frequency statistics with frequency statistics over technical terms, and proposes a similarity measure tailored to scientific and technical documents. The method first preprocesses the documents with natural language processing techniques, automatically extracts technical terms, builds a vector space model using the term weighting formula proposed in the paper, and then computes the similarity between journal articles and patent documents. Experiments on real journal and patent data show that the proposed method outperforms the traditional TF-IDF approach.
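The final similarity step, cosine similarity between term-weight vectors built over the extracted technical terms, can be sketched as (the weights themselves would come from the paper's term weighting formula):

```python
import math

def cosine(u, v):
    """Cosine similarity between two term-weight vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0
```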

13.
Based on a survey of measurement methods related to the development and utilization of information resources, this paper analyzes the design principles, theoretical frameworks, variable selection, weight design, and validity analysis methods these measures follow, and discusses methods for data acquisition, standardization, estimation, forecasting, tracing, and result analysis. On this basis, it further analyzes the basic characteristics and development trends of such measurement methods and forecasts how the development and utilization of information resources will change.

14.
For the purposes of classification it is common to represent a document as a bag of words. Such a representation consists of the individual terms making up the document together with the number of times each term appears in it. All classification methods make use of the terms; it is also common to use the local term frequencies, at the price of some added complication in the model. Examples are the naïve Bayes multinomial model (MM), the Dirichlet compound multinomial model (DCM) and the exponential-family approximation of the DCM (EDCM), as well as support vector machines (SVM). Although it is usually claimed that incorporating local word frequency improves text classification performance, we test here whether such claims hold. We show experimentally that simplified forms of the MM, EDCM, and SVM models which ignore the frequency of each word in a document perform at about the same level as the MM, DCM, EDCM and SVM models which incorporate local term frequency. We also present a new form of the naïve Bayes multivariate Bernoulli model (MBM) which is able to make use of local term frequency, and show again that it offers no significant advantage over the plain MBM. We conclude that word burstiness is so strong that additional occurrences of a word add essentially no useful information to a classifier.
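The simplification the paper tests amounts to collapsing local term frequencies to presence/absence before training; a minimal sketch:

```python
def binarize(term_counts):
    """Collapse local term frequencies to presence/absence,
    as in the simplified models the paper compares."""
    return {t: 1 for t, c in term_counts.items() if c > 0}

bow = {"gene": 7, "cell": 1, "the": 42}
binary = binarize(bow)
```

Feeding `binary` instead of `bow` into a multinomial or SVM classifier is the frequency-blind variant the experiments find to be nearly as accurate.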

15.
[Purpose/Significance] To overcome the limitations of analyzing absolute keyword frequencies, domain hotspots and trends are explored through multi-factor keyword weighting and score ranking. [Method/Process] A year-by-keyword frequency matrix is built; keyword frequencies are processed with horizontal (across-year) and vertical (within-year) weighting; a relative frequency model is designed; and a weighted composite score is computed for each keyword to obtain a more effective keyword ranking. [Result/Conclusion] Weighted keyword ranking can identify high-volume high-quality, low-volume high-quality, and burst keywords, which helps mine research hotspots and analyze trends.
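A hedged sketch of scoring keywords from a year-by-keyword frequency matrix; the within-year share used here is an illustrative stand-in for the paper's horizontal/vertical weighting scheme:

```python
def weighted_scores(matrix):
    """matrix is a list of {keyword: count} dicts, one per year.
    Score each keyword by its share within each year, summed over years,
    so frequencies are relative rather than absolute."""
    scores = {}
    for year_counts in matrix:
        total = sum(year_counts.values()) or 1
        for kw, c in year_counts.items():
            scores[kw] = scores.get(kw, 0.0) + c / total
    return scores

m = [{"a": 3, "b": 1}, {"a": 1, "b": 1}]
s = weighted_scores(m)
```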

16.
Starting from the practical needs of information analysis, this paper studies term extraction, rare term identification, and field comparison on 5,405 patent records related to electric vehicles. The results show that key phrase extraction is feasible; the average length of documents containing terms extracted by mutual information is closer to the collection's average document length; terms from the abstract and First Claim fields differ somewhat but are equally important for classification and clustering; and the rare term identification algorithm can discover correspondences between rare terms and high-frequency terms. The conclusions provide results and methods for patent text mining and patent information analysis, and supply reference terms for information analysis work.

17.
Terminology extraction is an essential task in domain knowledge acquisition, as well as for information retrieval. It is also a mandatory first step in building or enriching terminologies and ontologies. As often proposed in the literature, existing terminology extraction methods combine linguistic and statistical aspects and solve some, but not all, of the problems related to term extraction, e.g. noise, silence, low frequency, large corpora, and the complexity of multi-word term extraction. In contrast, we propose a cutting-edge methodology to extract and rank biomedical terms that covers all of these problems. This methodology offers several measures based on linguistic, statistical, graph and web aspects. These measures extract and rank candidate terms with excellent precision: we demonstrate that they outperform previously reported precision results for automatic term extraction, and work with different languages (English, French, and Spanish). We also demonstrate how using graphs and the web to assess the significance of a term candidate improves precision further. We evaluated our methodology on the biomedical GENIA and LabTestsOnline corpora and compared it with previously reported measures.
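One classic statistical measure in this family, C-value, rewards frequent multi-word candidates while discounting occurrences nested inside longer candidate terms; a simplified sketch (the +1 inside the log is a smoothing choice here so single-word terms score above zero, not part of the original definition):

```python
import math

def c_value(term_len, freq, nested_freqs):
    """Simplified C-value: log of the term length times its frequency,
    discounted by the mean frequency of the longer candidates it nests in."""
    base = math.log2(term_len + 1)
    if nested_freqs:
        return base * (freq - sum(nested_freqs) / len(nested_freqs))
    return base * freq
```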

18.
Using a genetic algorithm to identify research hotspots in the medical literature (Cited: 3; self-citations: 0; citations by others: 3)
Subject-heading feature vectors are extracted from medical articles, and each article's most recent citation count two years after publication is retrieved as an indicator of hot papers. A genetic algorithm is then programmed to optimize the selection of subject-heading combinations that express the content of hot papers. With cancer stem cells as the test topic and the results of word frequency analysis as the baseline, experts in the relevant field compared and evaluated the two sets of results; the results obtained by the genetic algorithm were superior to those of frequency analysis.

19.
Patent term translation demands high accuracy and domain expertise, and the automatic acquisition of patent term translations is of great practical value for natural language processing tasks such as machine translation, automatic dictionary compilation, and cross-language information retrieval. This paper extracts terms separately from bilingual patent abstracts, fuses multiple term recognition methods, and uses rule-based translation and statistical machine translation to dynamically assist a lexicalized alignment method, so as to obtain as many accurate patent term translation pairs as possible from bilingual patent documents. Experiments on patent abstracts show that the precision of the extracted patent term translation pairs reaches 80%.

20.
We investigate the effect of feature weighting on document clustering, including a novel investigation of Okapi BM25 feature weighting. Using eight document datasets and 17 well-established clustering algorithms we show that the benefit of tf-idf weighting over tf weighting is heavily dependent on both the dataset being clustered and the algorithm used. In addition, binary weighting is shown to be consistently inferior to both tf-idf weighting and tf weighting. We investigate clustering using both BM25 term saturation in isolation and BM25 term saturation with idf, confirming that both are superior to their non-BM25 counterparts under several common clustering quality measures. Finally, we investigate estimation of the k1 BM25 parameter when clustering. Our results indicate that typical values of k1 from other IR tasks are not appropriate for clustering; k1 needs to be higher.
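BM25 term saturation in isolation is the transformation tf·(k1+1)/(tf+k1); a minimal sketch showing that a higher k1 keeps the mapping closer to raw tf, which is consistent with the finding that clustering needs a larger k1 than typical IR settings:

```python
def bm25_saturation(tf, k1):
    """BM25 term-frequency saturation (idf and length normalization omitted).
    The value grows with tf but is bounded above by k1 + 1."""
    return tf * (k1 + 1) / (tf + k1)

low = [bm25_saturation(tf, 1.2) for tf in (1, 2, 10)]   # typical IR k1
high = [bm25_saturation(tf, 5.0) for tf in (1, 2, 10)]  # larger k1, less saturation
```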


Copyright©北京勤云科技发展有限公司  京ICP备09084417号