Similar Documents
 20 similar documents found (search time: 531 ms)
1.
Cross-language information retrieval (CLIR) has so far been studied with the assumption that some rich linguistic resources such as bilingual dictionaries or parallel corpora are available. But creation of such high quality resources is labor-intensive and they are not always at hand. In this paper we investigate the feasibility of using only comparable corpora for CLIR, without relying on other linguistic resources. Comparable corpora are text documents in different languages that cover similar topics and are often naturally attainable (e.g., news articles published in different languages at the same time period). We adapt an existing cross-lingual word association mining method and incorporate it into a language modeling approach to cross-language retrieval. We investigate different strategies for estimating the target query language models. Our evaluation results on the TREC Arabic–English cross-lingual data show that the proposed method is effective for the CLIR task, demonstrating that it is feasible to perform cross-lingual information retrieval with just comparable corpora.
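The query-model estimation described above can be sketched as follows. This is a minimal illustration, not the paper's actual mining method: a Dice-style association score stands in for the cross-lingual word association measure, and the co-occurrence counts are assumed to come from aligned comparable document pairs (all inputs here are hypothetical).

```python
from collections import Counter

def association_scores(cooccur, src_freq, tgt_freq):
    """Score (source, target) word pairs with a Dice-style association
    mined from comparable document pairs (hypothetical count data)."""
    scores = {}
    for (s, t), c in cooccur.items():
        scores.setdefault(s, {})[t] = 2.0 * c / (src_freq[s] + tgt_freq[t])
    return scores

def target_query_model(query_terms, scores):
    """Estimate a target-language query model p(t | query) by spreading
    each source term's normalized association mass over its candidates."""
    model = Counter()
    for s in query_terms:
        trans = scores.get(s, {})
        total = sum(trans.values())
        if total == 0:
            continue
        for t, w in trans.items():
            model[t] += w / total / len(query_terms)
    return dict(model)
```

The resulting distribution can then be plugged into any language-modeling retrieval function in place of a dictionary-translated query.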

2.
The increasing trend of cross-border globalization and acculturation requires text summarization techniques to work equally well for multiple languages. However, only some of the automated summarization methods can be defined as “language-independent,” i.e., not based on any language-specific knowledge. Such methods can be used for multilingual summarization, defined in Mani (Automatic summarization. Natural language processing. John Benjamins Publishing Company, Amsterdam, 2001) as “processing several languages, with a summary in the same language as input”, but, their performance is usually unsatisfactory due to the exclusion of language-specific knowledge. Moreover, supervised machine learning approaches need training corpora in multiple languages that are usually unavailable for rare languages, and their creation is a very expensive and labor-intensive process. In this article, we describe cross-lingual methods for training an extractive single-document text summarizer called MUSE (MUltilingual Sentence Extractor)—a supervised approach, based on the linear optimization of a rich set of sentence ranking measures using a Genetic Algorithm. We evaluated MUSE’s performance on documents in three different languages: English, Hebrew, and Arabic using several training scenarios. The summarization quality was measured using ROUGE-1 and ROUGE-2 Recall metrics. The results of the extensive comparative analysis showed that the performance of MUSE was better than that of the best known multilingual approach (TextRank) in all three languages. Moreover, our experimental results suggest that using the same sentence ranking model across languages results in a reasonable summarization quality, while saving considerable annotation efforts for the end-user. On the other hand, using parallel corpora generated by machine translation tools may improve the performance of a MUSE model trained on a foreign language. 
Comparative evaluation of an alternative optimization technique—Multiple Linear Regression—justifies the use of a Genetic Algorithm.
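The core optimization step, a linear combination of sentence ranking measures tuned by a Genetic Algorithm, can be sketched as follows. This is a generic GA sketch, not the MUSE implementation; the fitness function (for instance, ROUGE agreement with gold summaries) is assumed to be supplied by the caller, and the population size, mutation scale, and crossover scheme are illustrative choices.

```python
import random

def evolve_weights(fitness, dim, pop_size=30, generations=40, seed=0):
    """Evolve a linear weight vector for combining sentence-ranking
    measures: truncation selection, one-point crossover, and a small
    Gaussian point mutation."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-1, 1) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)       # fittest first
        survivors = pop[: pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(dim)              # one-point crossover
            child = a[:cut] + b[cut:]
            i = rng.randrange(dim)                # point mutation
            child[i] += rng.gauss(0, 0.1)
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)
```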

3.
Word form normalization through lemmatization or stemming is a standard procedure in information retrieval because morphological variation needs to be accounted for and several languages are morphologically non-trivial. Lemmatization is effective but often requires expensive resources. Stemming is also effective in most contexts, generally almost as good as lemmatization and typically much less expensive; besides, it also has a query expansion effect. However, in both approaches the idea is to turn many inflectional word forms into a single lemma or stem, both in the database index and in queries. This means extra effort in creating database indexes. In this paper we take the opposite approach: we leave the database index un-normalized and enrich the queries to cover the surface-form variation of keywords. A potential penalty of the approach would be long queries and slow processing. However, we show that covering only a negligible number of possible surface forms, even in morphologically complex languages, suffices to arrive at a performance almost as good as that delivered by stemming or lemmatization. Moreover, we show that, at least for typical test collections, it suffices to cover nouns and adjectives in queries. Furthermore, we show that our approach works particularly well for the short queries that resemble the normal searches of web users. Our approach is called FCG (for Frequent Case (form) Generation). It can be implemented relatively easily for Latin/Greek/Cyrillic-alphabet languages by examining their (typically very skewed) nominal form statistics in a small text sample and by creating surface form generators for the 3–9 most frequent forms. We demonstrate the potential of our FCG approach for several languages of varying morphological complexity: Swedish, German, Russian, and Finnish, in well-known test collections. Applications include in particular Web IR in languages poor in morphological resources.
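The FCG idea, replacing each query keyword with its few most frequent surface forms instead of normalizing the index, can be sketched as follows. The form table here is a hypothetical hand-built stand-in for the skewed nominal form statistics that the paper estimates from a small text sample.

```python
def fcg_expand(query, form_generators, k=4):
    """Frequent Case (form) Generation sketch: map each keyword to its
    k most frequent surface forms.  `form_generators` maps a lemma to
    (form, frequency) pairs; unknown terms pass through unchanged."""
    expanded = []
    for term in query:
        forms = sorted(form_generators.get(term, [(term, 1)]),
                       key=lambda fc: fc[1], reverse=True)[:k]
        expanded.append([f for f, _ in forms])
    return expanded
```

Each inner list can then be OR-ed together when the enriched query is submitted against the un-normalized index.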

4.
余传明  李浩男  安璐 《情报学报》2020,39(5):521-533
With the rapid growth of big data, knowledge networks have become highly diverse and complex across languages, domains, and modalities, and aligning and integrating heterogeneous knowledge networks from multiple sources has become a serious challenge for researchers. Building on deep representation learning for knowledge networks, this paper proposes a knowledge network alignment (KNA) model composed of three modules: knowledge network construction, cross-lingual network representation learning, and statistical machine learning. To verify the model's effectiveness, an empirical study was conducted on a Chinese–English bilingual knowledge network dataset: network representation learning algorithms embed the heterogeneous knowledge networks into a common space, known alignment links are used to train a statistical machine learning model, and the trained model then predicts unknown node alignment links. On the cross-lingual co-word network alignment task, the KNA model achieved a Precision@1 of 0.7731, exceeding the baseline (0.6806) and confirming its effectiveness for cross-lingual knowledge network alignment. The results are significant for improving node alignment in knowledge networks and for promoting the fusion of heterogeneous knowledge networks from multiple sources.
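The alignment-and-evaluation step can be sketched as follows. This is only an illustration of the task setup: the embeddings are assumed to already live in the shared space produced by the representation-learning module, and a simple nearest-neighbor rule by cosine similarity stands in for the paper's trained statistical classifier.

```python
import math

def cosine(u, v):
    """Cosine similarity of two dense vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def precision_at_1(src_emb, tgt_emb, gold_links):
    """For each source node, predict the nearest target node in the
    shared embedding space and score the prediction against the known
    (gold) alignment link."""
    hits = 0
    for s, gold_t in gold_links.items():
        best = max(tgt_emb, key=lambda t: cosine(src_emb[s], tgt_emb[t]))
        hits += (best == gold_t)
    return hits / len(gold_links)
```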

5.
Selection of MARC Fields for Chinese Books and Data Conversion
This paper discusses problems that arose with MARC data during retrospective conversion of Chinese book records and explains the rationale for the selection of MARC fields. In the actual data conversion, programs written in C and C++ converted the data from FoxPro MEMO format to plain text and then into FoxPro database format, completing the whole conversion successfully.

6.
A Full-Text Retrieval Engine Based on Native XML
王弘蔚  肖诗斌 《情报学报》2003,22(5):550-556
With the growing popularity of XML, demand for XML-based full-text retrieval applications is expanding rapidly, and native XML databases are the direction of development for such applications. Although commercial native XML databases have appeared, their full-text retrieval performance remains unsatisfactory. This paper proposes an approach that, within the framework of a traditional inverted index, indexes XML markup so that a full-text database can store, index, retrieve, and output XML documents natively. The result is a native XML full-text database in the true sense, retaining the strong performance of traditional full-text databases while meeting the needs of native XML applications.
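The key idea, indexing XML markup inside an ordinary inverted index, can be sketched as follows. This is a toy illustration, not the paper's engine: element tags are recorded as pseudo-terms (e.g. `<title>`) with positions, so that element-scoped queries can be answered from the same posting lists as word queries. The tokenizer here is a deliberately crude regular expression.

```python
import re
from collections import defaultdict

def index_xml(doc_id, xml_text, index=None):
    """Add one XML document to an inverted index in which element tags
    and words share a single position space.  Words are lowercased;
    tags are kept verbatim as pseudo-terms."""
    index = index if index is not None else defaultdict(list)
    pos = 0
    for token in re.findall(r"</?\w+>|\w+", xml_text):
        term = token if token.startswith("<") else token.lower()
        index[term].append((doc_id, pos))
        pos += 1
    return index
```

A query such as "words inside `<title>`" can then be answered by intersecting the word's postings with positions bracketed by `<title>` and `</title>` pseudo-terms.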

7.
The morning report reference file was automated at the Stollerman Library because the manual system was time-intensive to maintain and cumbersome to search. A general database management system (DBMS) was chosen so that it could be used in the future for other data management functions in the library. DBMS features that should be examined before use with a bibliographic application include size limitations, data entry forms, data types, search options, index files, sort options, report generation, query and programming languages, command and/or menu files, file interaction, interface with other software, and documentation. Desired requirements for this application are discussed. It is noted that a general database manager probably will not meet all of the desired requirements. For some bibliographic applications, software specifically designed for bibliographic information management and retrieval should be used. A database for the purposes of searching the morning report reference file and producing a weekly reference list and a yearly index was developed using CONDOR 3. The structure of the database is described, and examples of the reports are given. The system has been in operation since December 1984 and has been well-received by staff and patrons.

8.
Text Categorization (TC) is the automated assignment of text documents to predefined categories based on document contents. TC has been an application for many learning approaches, which prove effective. Nevertheless, TC provides many challenges to machine learning. In this paper, we suggest, for text categorization, the integration of external WordNet lexical information to supplement training data for a semi-supervised clustering algorithm which can learn from both training and test documents to classify new unseen documents. This algorithm is the Semi-Supervised Fuzzy c-Means (ssFCM). Our experiments use the Reuters 21578 database and consist of binary classifications for categories selected from the 115 TOPICS classes of the Reuters collection. Using the Vector Space Model, each document is represented by its original feature vector augmented with an external feature vector generated using WordNet. We verify experimentally that the integration of WordNet helps ssFCM improve its performance, effectively addresses the classification of documents into categories with few training documents and does not interfere with the use of training data.

9.
A Cleaning Algorithm for Similar Duplicate Records Based on an Improved Edit Distance
Similarity computation is a key problem in cleaning similar duplicate records, and the edit distance algorithm is widely used for it. Starting from the traditional edit distance algorithm and analyzing factors that affect similarity scores, such as sequence length and synonyms, this paper derives an improved, semantics-aware edit distance algorithm for cleaning similar duplicate records that incorporates both a synonym thesaurus and length normalization, making it suitable for identifying similar records. Experiments show that the improved algorithm's results better reflect sentence semantics, with the great majority matching human judgment, so it can effectively improve the accuracy and precision of similar duplicate record detection.
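The two improvements named above, synonym awareness and length normalization, can be sketched on top of the classic dynamic-programming edit distance. This is an illustrative reconstruction, not the paper's exact algorithm: tokens rather than characters are compared, and `synonyms` is a set of unordered pairs supplied by a hypothetical thesaurus.

```python
def semantic_edit_similarity(a, b, synonyms=frozenset()):
    """Normalized edit-distance similarity over token sequences a, b.
    Substituting a token for its synonym costs nothing; the raw
    distance is divided by the longer length so scores lie in [0, 1]."""
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i                      # delete all of a[:i]
    for j in range(n + 1):
        d[0][j] = j                      # insert all of b[:j]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            same = (a[i-1] == b[j-1]
                    or frozenset((a[i-1], b[j-1])) in synonyms)
            cost = 0 if same else 1
            d[i][j] = min(d[i-1][j] + 1,      # deletion
                          d[i][j-1] + 1,      # insertion
                          d[i-1][j-1] + cost) # (synonym) substitution
    return 1.0 - d[m][n] / max(m, n, 1)
```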

10.
Applying Machine Learning to Text Segmentation for Information Retrieval
We propose a self-supervised word segmentation technique for text segmentation in Chinese information retrieval. This method combines the advantages of traditional dictionary based, character based and mutual information based approaches, while overcoming many of their shortcomings. Experiments on TREC data show this method is promising. Our method is completely language independent and unsupervised, which provides a promising avenue for constructing accurate multi-lingual or cross-lingual information retrieval systems that are flexible and adaptive. We find that although the segmentation accuracy of self-supervised segmentation is not as high as some other segmentation methods, it is enough to give good retrieval performance. It is commonly believed that word segmentation accuracy is monotonically related to retrieval performance in Chinese information retrieval. However, for Chinese, we find that the relationship between segmentation and retrieval performance is in fact nonmonotonic; that is, at around 70% word segmentation accuracy an over-segmentation phenomenon begins to occur which leads to a reduction in information retrieval performance. We demonstrate this effect by presenting an empirical investigation of information retrieval on Chinese TREC data, using a wide variety of word segmentation algorithms with word segmentation accuracies ranging from 44% to 95%, including 70% word segmentation accuracy from our self-supervised word-segmentation approach. It appears that the main reason for the drop in retrieval performance is that correct compounds and collocations are preserved by accurate segmenters, while they are broken up by less accurate (but reasonable) segmenters, to a surprising advantage. This suggests that words themselves might be too broad a notion to conveniently capture the general semantic meaning of Chinese text. 
Our research suggests machine learning techniques can play an important role in building adaptable information retrieval systems and different evaluation standards for word segmentation should be given to different applications.

11.
This study investigates the information seeking behavior of general Korean Web users. The data from transaction logs of selected dates from August 2006 to August 2007 were used to examine characteristics of Web queries and to analyze click logs that consist of a collection of documents that users clicked and viewed for each query. Changes in search topics are explored for NAVER users from 2003/2004 to 2006/2007. Patterns involving spelling errors and queries in foreign languages are also investigated. Search behaviors of Korean Web users are compared to those of users in the United States and other countries. The results show that entertainment is the top-ranked category, followed by shopping, education, games, and computer/Internet. Search topics changed from computer/Internet to entertainment and shopping from 2003/2004 to 2006/2007 in Korea. The ratios of both spelling errors and queries in foreign languages are low. This study reveals differences in search topics among different regions of the world. The results suggest that the analysis of click logs allows for the reduction of unknown or unidentifiable queries by providing actual data on user behaviors and their probable underlying information needs. The implications for system designers and Web content providers are discussed.

12.
This paper presents new data structures and algorithms for index construction and retrieval in a Chinese-character full-text retrieval system. The programs were implemented and applied to searching the titles and abstracts of a Chinese chemistry literature database. Index build time, space consumption, and retrieval response time were measured; the number of high-frequency characters and the index space consumption were computed for documents of different length ranges; and the relationship between index expansion ratio and document length is discussed.

13.
Design of a Lucene-Based FTP Search Engine
Database-backed FTP search engines currently in use lack a standard resource document format and support neither Chinese word segmentation nor dynamic data updates. To address these shortcomings, this paper proposes an FTP search engine design based on Lucene, a powerful full-text indexing toolkit. The engine automatically generates resource documents in a standard XML format and applies dictionary-based forward maximum matching for Chinese word segmentation while dynamically updating the full-text index in Lucene. The design also supports mixed Chinese-English analysis and retrieval of query keywords.
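The segmentation method named above, dictionary-based forward maximum matching, can be sketched in a few lines. This is the generic algorithm, not the engine's code: the dictionary is a hypothetical word set, and the maximum word length is an illustrative parameter.

```python
def fmm_segment(text, dictionary, max_len=4):
    """Forward maximum matching: at each position, take the longest
    dictionary word (up to max_len characters); fall back to a single
    character when nothing matches."""
    words, i = [], 0
    while i < len(text):
        for l in range(min(max_len, len(text) - i), 0, -1):
            if l == 1 or text[i:i+l] in dictionary:
                words.append(text[i:i+l])
                i += l
                break
    return words
```

In a Lucene-based design, a tokenizer wrapping this routine would feed the resulting words into the analyzer chain in place of the default single-character tokenization.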

14.
In the information retrieval process, functions that rank documents according to their estimated relevance to a query typically regard query terms as being independent. However, it is often the joint presence of query terms that is of interest to the user, which is overlooked when matching independent terms. One feature that can be used to express the relatedness of co-occurring terms is their proximity in text. In past research, models that are trained on the proximity information in a collection have performed better than models that are not estimated on data. We analyzed how co-occurring query terms can be used to estimate the relevance of documents based on their distance in text, which is used to extend a unigram ranking function with a proximity model that accumulates the scores of all occurring term combinations. This proximity model is more practical than existing models, since it does not require any co-occurrence statistics, it obviates the need to tune additional parameters, and has a retrieval speed close to competing models. We show that this approach is more robust than existing models, on both Web and newswire corpora, and on average performs equal or better than existing proximity models across collections.
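The accumulation idea can be sketched as follows. This is an illustration of the general scheme, not the paper's exact scoring function: each pair of occurrences of two different query terms contributes a reciprocal-distance score on top of an arbitrary unigram base score, and the interpolation weight is a hypothetical parameter.

```python
def proximity_score(doc_tokens, query_terms, unigram, weight=0.5):
    """Extend a unigram ranking function with a proximity component:
    sum 1/distance over every occurrence pair of two distinct query
    terms.  No collection-level co-occurrence statistics are needed."""
    positions = {t: [i for i, w in enumerate(doc_tokens) if w == t]
                 for t in query_terms}
    prox = 0.0
    terms = list(query_terms)
    for a in range(len(terms)):
        for b in range(a + 1, len(terms)):
            for i in positions[terms[a]]:
                for j in positions[terms[b]]:
                    prox += 1.0 / abs(i - j)
    return unigram(doc_tokens, query_terms) + weight * prox
```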

15.
A Density-Based Improvement of KNN for Text Categorization
Feature dimensionality reduction and classifier performance are the two central problems in automatic text categorization. The KNN algorithm is widely used for text categorization because it is simple, effective, and non-parametric, but an uneven distribution of training documents, which is common in practice, degrades its classification quality. Targeting this setting, this paper first proposes an improved tf-idf weighting scheme for feature dimensionality reduction and then, building on it, a density-based improvement of KNN for text categorization that enlarges the distances between samples lying in densely populated regions. Subsequent text categorization experiments show that the proposed density-based KNN method yields good classification results.
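The density adjustment can be sketched as follows. This is an illustrative reconstruction, not the paper's formula: the distance to a training point is stretched in proportion to how many other training points fall within a radius of it, so neighbors from densely sampled regions count for less. Plain Euclidean distance, the radius, and the linear stretch factor are all assumptions made for the sketch.

```python
import math
from collections import Counter

def density_knn(train, query, k=3, r=1.0):
    """Classify `query` with a density-adjusted KNN vote over `train`,
    a list of (point, label) pairs."""
    def dist(u, v):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

    def density(x):
        # Number of other training points within radius r of x.
        return sum(1 for y, _ in train if y is not x and dist(x, y) <= r)

    scored = sorted(((dist(query, x) * (1 + density(x)), label)
                     for x, label in train))
    votes = Counter(label for _, label in scored[:k])
    return votes.most_common(1)[0][0]
```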

16.
This paper combines latent semantic indexing (LSI) with support vector machines: features capturing the semantic relationship between each dimension of the document vectors and the documents themselves are extracted, and a complete LSI-based SVM text categorization model is built. The relationships among this method, the dimensionality of the segmented feature space, and the choice of the SVM penalty factor are analyzed. Building on the NN-SVM algorithm, an improved variant, KCNN-SVM, is proposed: confusable points are pruned by comparing the class labels of each sample's nearest neighbors and by computing the sample's average distance in kernel space to its k same-class neighbors. This algorithm is used to prune the dimensionality-reduced training set. Experiments show that text categorization with the new model is less sensitive to the segmentation dimensionality and the SVM penalty factor than a plain SVM, and achieves higher classification accuracy.

17.
Most current methods for automatic text categorization are based on supervised learning techniques and, therefore, they face the problem of requiring a great number of training instances to construct an accurate classifier. In order to tackle this problem, this paper proposes a new semi-supervised method for text categorization, which considers the automatic extraction of unlabeled examples from the Web and the application of an enriched self-training approach for the construction of the classifier. This method, even though language independent, is more pertinent for scenarios where large sets of labeled resources do not exist. That, for instance, could be the case of several application domains in different non-English languages such as Spanish. The experimental evaluation of the method was carried out in three different tasks and in two different languages. The achieved results demonstrate the applicability and usefulness of the proposed method.
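The self-training loop at the core of such methods can be sketched as follows. This is the generic scheme, not the paper's enriched variant: a tiny multinomial Naive Bayes with add-one smoothing stands in for the underlying classifier, and the confidence threshold is an illustrative parameter.

```python
import math
from collections import Counter, defaultdict

def train_nb(docs):
    """docs: list of (tokens, label).  Returns a smoothed multinomial
    Naive Bayes model as (class_counts, per-class word counts, vocab)."""
    class_counts = Counter(label for _, label in docs)
    word_counts = defaultdict(Counter)
    vocab = set()
    for tokens, label in docs:
        word_counts[label].update(tokens)
        vocab.update(tokens)
    return class_counts, word_counts, vocab

def predict_nb(model, tokens):
    """Return (best_label, softmax confidence of the best label)."""
    class_counts, word_counts, vocab = model
    total = sum(class_counts.values())
    logp = {}
    for c in class_counts:
        denom = sum(word_counts[c].values()) + len(vocab)
        lp = math.log(class_counts[c] / total)
        for t in tokens:
            lp += math.log((word_counts[c][t] + 1) / denom)
        logp[c] = lp
    c = max(logp, key=logp.get)
    conf = 1.0 / sum(math.exp(v - logp[c]) for v in logp.values())
    return c, conf

def self_train(labeled, unlabeled, threshold=0.9, rounds=5):
    """Self-training: repeatedly train, then move confidently labeled
    unlabeled documents into the training set."""
    labeled, unlabeled = list(labeled), list(unlabeled)
    for _ in range(rounds):
        model = train_nb(labeled)
        keep = []
        for tokens in unlabeled:
            c, conf = predict_nb(model, tokens)
            if conf >= threshold:
                labeled.append((tokens, c))
            else:
                keep.append(tokens)
        if len(keep) == len(unlabeled):
            break  # nothing new was labeled; stop early
        unlabeled = keep
    return train_nb(labeled)
```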

18.
19.
Using Borland IDAPI relational-database integration technology, multiple relational database systems are integrated and then managed with the information storage and retrieval software QUICKIMS, providing full-text retrieval over relational databases. The data structures, access methods, and data types of PC-based and SQL-based relational databases are integrated; base tables and the results of single- or multi-database queries are transferred to generate the files and indexes QUICKIMS requires; and Boolean, prefix-match, field-restricted, adjacency, and positional retrieval are provided over the relational data. Converting relational database data dynamically reduces wasted storage space.

20.
The Web contains a tremendous amount of information. It is challenging to determine which Web documents are relevant to a user query, and even more challenging to rank them according to their degrees of relevance. In this paper, we propose a probabilistic retrieval model using logistic regression for recognizing multiple-record Web documents against an application ontology, a simple conceptual modeling approach. We notice that many Web documents contain a sequence of chunks of textual information, each of which constitutes a record. This type of documents is referred to as multiple-record documents. In our categorization approach, a document is represented by a set of term frequencies of index terms, a density heuristic value, and a grouping heuristic value. We first apply the logistic regression analysis on relevant probabilities using the (i) index terms, (ii) density value, and (iii) grouping value of each training document. Hereafter, the relevant probability of each test document is interpolated from the fitting curves. Contrary to other probabilistic retrieval models, our model makes only a weak independent assumption and is capable of handling any important dependent relationships among index terms. In addition, we use logistic regression, instead of linear regression analysis, because the relevance probabilities of training documents are discrete. Using a test set of car-ads and another one for obituary Web documents, our probabilistic model achieves the averaged recall ratio of 100%, precision ratio of 83.3%, and accuracy ratio of 92.5%.


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号