Similar Documents
20 similar documents retrieved (search time: 31 ms)
1.
Research on Chinese Information Processing Techniques for Full-Text Search Engines   Total citations: 2 (self-citations: 0, others: 2)
唐培丽  胡明  解飞  刘钢 《情报科学》2006,24(6):895-899,909
This paper analyzes in depth the key technologies of full-text Chinese search engines and proposes a Chinese word segmentation scheme suited to full-text retrieval that both improves segmentation accuracy and recognizes out-of-vocabulary words in the text. For the vector space retrieval model, it designs a term-weighting function that jointly considers important factors such as the position, length, and frequency of a Chinese term in a Web document and expresses the term's importance quantitatively, so that the weight reflects fairly accurately how important the term is to the document. Finally, the segmentation algorithm was tested; the tests show that the method improves segmentation accuracy well enough for practical use.
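The abstract does not spell out the weighting function; the following minimal sketch shows one way a composite weight could combine a term's frequency, position, and length with an IDF factor. The function name, arguments, and coefficients are illustrative assumptions, not the paper's actual formula.

```python
import math

def term_weight(tf, doc_len_terms, first_pos, word_len,
                df, n_docs, alpha=1.0, beta=0.5, gamma=0.5):
    """Composite weight for a Chinese term in a Web document.

    tf            -- raw frequency of the term in the document
    doc_len_terms -- total number of terms in the document
    first_pos     -- index of the term's first occurrence (0-based)
    word_len      -- length of the term in characters
    df, n_docs    -- document frequency and collection size (for IDF)
    alpha/beta/gamma are illustrative coefficients, not from the paper.
    """
    freq = tf / doc_len_terms                 # normalized frequency
    pos = 1.0 - first_pos / doc_len_terms     # earlier occurrence -> higher weight
    length = math.log(1 + word_len)           # longer terms tend to be more specific
    idf = math.log((n_docs + 1) / (df + 1))
    return (alpha * freq + beta * pos + gamma * length) * idf
```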

2.
郑阳  莫建文 《大众科技》2012,14(4):20-23
In scientific and technical literature, out-of-vocabulary words and other specialized terms vary widely and are difficult to recognize during Chinese word segmentation, which hurts segmentation accuracy on texts from specialized domains. To address this, the paper presents a practical Chinese word segmentation method based on technical-term extraction. Drawing on a large domain-specific corpus, it extracts out-of-vocabulary words and other technical terms using mutual information and statistical measures, builds a technical-term dictionary, combines it with a general-purpose dictionary, and segments text with the maximum matching method. Experiments show that the method extracts the relevant technical terms fairly accurately and thereby improves segmentation precision, giving it practical application value.
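As a rough illustration of the pipeline described above, the sketch below extracts candidate terms by pointwise mutual information over adjacent tokens and then segments text by forward maximum matching against the merged dictionary. The thresholds and the PMI-over-bigrams simplification are assumptions; the paper's own statistical measures may differ.

```python
import math
from collections import Counter

def extract_terms(tokens, min_count=5, pmi_threshold=3.0):
    """Score adjacent token pairs by pointwise mutual information (PMI);
    high-PMI pairs become candidate domain terms. Thresholds illustrative."""
    uni, bi = Counter(tokens), Counter(zip(tokens, tokens[1:]))
    n = len(tokens)
    terms = set()
    for (a, b), c in bi.items():
        if c >= min_count:
            pmi = math.log((c / n) / ((uni[a] / n) * (uni[b] / n)))
            if pmi >= pmi_threshold:
                terms.add(a + b)
    return terms

def fmm_segment(text, dictionary, max_len=6):
    """Forward maximum matching against the merged (domain + general) dictionary."""
    out, i = [], 0
    while i < len(text):
        for j in range(min(len(text), i + max_len), i, -1):
            # accept the longest dictionary match, else fall back to one character
            if text[i:j] in dictionary or j == i + 1:
                out.append(text[i:j])
                i = j
                break
    return out
```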

3.
【Purpose/Significance】Extracting accurate topic words from the flood of microblog posts can provide valuable input for public-opinion analysis by governments and enterprises. 【Method/Process】After analyzing the strengths and weaknesses of traditional microblog topic-word extraction methods, we propose a method based on semantic concepts and word co-occurrence. It uses a text-expansion strategy to turn a microblog post from a short text into a longer one, expands the semantic concepts of the words in the text with the help of a semantic dictionary, assigns word weights according to the structural characteristics of microblog text, and then factors in word co-occurrence degree to extract topic words. 【Result/Conclusion】Experimental results show that the proposed algorithm outperforms traditional methods and effectively improves the performance of microblog topic-word extraction. 【Innovation/Limitation】Combining semantic concepts with word co-occurrence for microblog topic-word extraction is a new line of exploration; because the segmentation step may split some new Internet words incorrectly, keyword extraction accuracy is slightly affected.
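A minimal sketch of the scoring idea, assuming a hypothetical `synsets` map standing in for the semantic dictionary and a hashtag-based boost standing in for the structural weights; the real method's expansion strategy and weighting scheme are more elaborate.

```python
from collections import Counter
from itertools import combinations

def topic_words(posts, synsets, hashtag_boost=2.0, cooc_weight=0.5, top_k=10):
    """posts: list of token lists (one per microblog post); synsets:
    hypothetical map word -> semantic concept label (stands in for a
    semantic dictionary). All constants are illustrative assumptions."""
    tf, cooc = Counter(), Counter()
    for tokens in posts:
        concepts = [synsets.get(w, w) for w in tokens]   # concept expansion
        for c in concepts:
            # crude structural weight: boost hashtag-marked topics
            tf[c] += hashtag_boost if c.startswith('#') else 1.0
        for a, b in combinations(set(concepts), 2):
            cooc[a] += 1          # co-occurrence degree: number of pairs
            cooc[b] += 1          # a concept participates in per post
    score = {c: tf[c] + cooc_weight * cooc[c] for c in tf}
    return sorted(score, key=score.get, reverse=True)[:top_k]
```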

4.
In Mongolian, two different alphabets are used, Cyrillic and Mongolian. In this paper, we focus solely on the Mongolian language using the Cyrillic alphabet, in which a content word can be inflected when concatenated with one or more suffixes. Identifying the original form of content words is crucial for natural language processing and information retrieval. We propose a lemmatization method for Mongolian. The advantage of our lemmatization method is that it does not rely on noun dictionaries, enabling us to lemmatize out-of-dictionary words. We also apply our method to indexing for information retrieval. We use newspaper articles and technical abstracts in experiments that show the effectiveness of our method. Our research is the first significant exploration of the effectiveness of lemmatization for information retrieval in Mongolian.

5.
This work addresses the information retrieval problem of auto-indexing Arabic documents. Auto-indexing a text document refers to automatically extracting words that are suitable for building an index for the document. In this paper, we propose an auto-indexing method for Arabic text documents. This method is mainly based on morphological analysis and on a technique for assigning weights to words. The morphological analysis uses a number of grammatical rules to extract stem words that become candidate index words. The weight assignment technique computes weights for these words relative to the container document. The weight is based on how widely the word is spread throughout a document and not only on its rate of occurrence. The candidate index words are then sorted in descending order by weight so that information retrievers can select the more important index words. We empirically verify the usefulness of our method using several examples. For these examples, we obtained an average recall of 46% and an average precision of 64%.
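The paper weights a word by how spread out it is across the document rather than by occurrence rate alone. A minimal way to operationalize that idea, with an illustrative binning scheme that is not taken from the paper:

```python
def spread_weight(positions, doc_len, n_bins=10):
    """Weight a word by how widely it is spread across the document.

    positions -- 0-based token offsets of the word's occurrences
    doc_len   -- total number of tokens in the document
    n_bins    -- illustrative granularity for 'regions' of the document
    """
    if not positions or doc_len == 0:
        return 0.0
    occupied = {min(p * n_bins // doc_len, n_bins - 1) for p in positions}
    tf = len(positions) / doc_len
    spread = len(occupied) / n_bins      # fraction of document regions covered
    return tf * spread                   # frequency alone is not enough
```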

6.
In this article, we focus on Chinese word segmentation by systematically incorporating non-local information based on latent variables and word-level features. Differing from previous work which captures non-local information by using semi-Markov models, we propose an alternative method for modeling non-local information: a latent variable word segmenter employing word-level features. In order to reduce the computational complexity of learning non-local information, we further present an improved online training method, which can arrive at the same objective optimum with a significantly accelerated training speed. We find that the proposed method can help the learning of long range dependencies and improve the segmentation quality of long words (for example, complicated named entities). Experimental results demonstrate that the proposed method is effective. With this improvement, evaluations on the data of the second SIGHAN CWS bakeoff show that our system is competitive with the state-of-the-art systems.

7.
宁琳 《现代情报》2014,34(1):155-158
Cross-language retrieval is one of the important means of information retrieval. To improve its efficiency, this paper adopts semantic expansion: after analyzing the design ideas and workflow, it builds a cross-language automatic retrieval model based on semantic expansion and elaborates on the design of its semantic-expansion, knowledge-base, and result-clustering components. It proposes a segmentation method based on semantic understanding for Chinese word segmentation and uses the Single-Pass algorithm for clustering. Experimental results show that the model effectively improves both the recall and the precision of cross-language retrieval.
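Single-Pass clustering, named in the abstract, is simple enough to sketch: each document joins the most similar existing cluster or founds a new one. The threshold value and the term-sum centroid are illustrative choices, not the paper's settings.

```python
def single_pass(vectors, threshold=0.3):
    """Single-Pass clustering for result clustering: assign each document
    to the most similar existing cluster, or start a new cluster when no
    similarity reaches the threshold. vectors: list of {term: weight} dicts."""
    def cos(u, v):
        dot = sum(w * v.get(t, 0.0) for t, w in u.items())
        nu = sum(w * w for w in u.values()) ** 0.5
        nv = sum(w * w for w in v.values()) ** 0.5
        return dot / (nu * nv) if nu and nv else 0.0

    clusters = []                      # [centroid_term_sums, member_ids]
    for i, v in enumerate(vectors):
        best, best_sim = None, threshold
        for c in clusters:
            sim = cos(v, c[0])
            if sim >= best_sim:
                best, best_sim = c, sim
        if best is None:
            clusters.append([dict(v), [i]])
        else:
            for t, w in v.items():     # cosine ignores scale, so plain term
                best[0][t] = best[0].get(t, 0.0) + w   # sums act as the centroid
            best[1].append(i)
    return [members for _, members in clusters]
```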

8.
Documents in computer-readable form can be used to provide information about other documents, i.e. those they cite. To do this efficiently requires procedures for computer recognition of citing statements. This is not easy, especially for multi-sentence citing statements. Computer recognition procedures have been developed which are accurate to the following extent: 73% of the words in statements selected by computer procedures as being citing statements are words which are correctly attributable to the corresponding documents. The retrieval effectiveness of computer-recognized citing statements was tested in the following way. First, for eight retrieval requests in inorganic chemistry, average recall by search of Chemical Abstracts Service indexing and Chemical Abstracts abstract text words was found to be 50%. Words from citing statements referring to the papers to be retrieved were then added to the index terms and abstract words as additional access points, and searching was repeated. Average recall increased to 70%. Only words from citing statements published within a year of the cited papers were used. The retrieval effect of citing statement words alone (published within a year) without index or abstract terms was the following: average recall was 40%. When just the words of the titles of the cited papers were added to those citing statement words, average recall increased to 50%.

9.
OCR errors in text harm information retrieval performance. Much research has been reported on modelling and correction of Optical Character Recognition (OCR) errors. Most of the prior work employs language-dependent resources or training texts in studying the nature of errors. However, not much research has been reported that focuses on improving retrieval performance from erroneous text in the absence of training data. We propose a novel approach for detecting OCR errors and improving retrieval performance from an erroneous corpus in a situation where training samples are not available to model errors. In this paper we propose a method that automatically identifies erroneous term variants in the noisy corpus, which are used for query expansion, in the absence of clean text. We employ an effective combination of contextual information and string matching techniques. Our proposed approach automatically identifies the erroneous variants of query terms and consequently leads to improvement in retrieval performance through query expansion. Our proposed approach does not use any training data or any language-specific resources like a thesaurus for identification of error variants. It also does not assume any knowledge about the language except that the word delimiter is the blank space. We have tested our approach on erroneous Bangla (Bengali) and Hindi FIRE collections, and also on the TREC Legal IIT CDIP and TREC 5 Confusion track English corpora. Our proposed approach has achieved statistically significant improvements over the state-of-the-art baselines on most of the datasets.
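A minimal sketch of the idea of combining string matching with contextual evidence to find erroneous variants of query terms without training data. The `contexts` co-occurrence map, the thresholds, and the use of `difflib` are assumptions for illustration; the paper's actual matching techniques may differ.

```python
import difflib
from collections import Counter

def error_variants(query_term, vocab, contexts, max_variants=5, sim_cut=0.8):
    """Find likely OCR-corrupted variants of a query term in a noisy corpus.
    contexts: hypothetical map word -> Counter of co-occurring words;
    variants of the same underlying word should share neighbours."""
    candidates = []
    for w in vocab:
        if w == query_term:
            continue
        string_sim = difflib.SequenceMatcher(None, query_term, w).ratio()
        if string_sim < sim_cut:
            continue
        a = contexts.get(query_term, Counter())
        b = contexts.get(w, Counter())
        shared = sum((a & b).values())           # contextual overlap
        total = sum((a | b).values()) or 1
        candidates.append((string_sim * shared / total, w))
    return [w for _, w in sorted(candidates, reverse=True)[:max_variants]]

def expand_query(terms, vocab, contexts):
    """Expanded query = original terms plus their detected error variants."""
    expanded = list(terms)
    for t in terms:
        expanded.extend(error_variants(t, vocab, contexts))
    return expanded
```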

10.
11.
彭秋茹  王东波  黄水清 《情报科学》2021,39(11):103-109
【Purpose/Significance】Statistics and analysis of Chinese word segmentation results on recent People's Daily corpora help summarize the regularities of segmentation ambiguity in new-era Chinese corpora, improve segmentation quality, and advance research and technology in Chinese information processing. 【Method/Process】Taking four months of segmented new-era People's Daily text from 2015 onward as the research object, the paper computes word frequency, word length, consistency degree, and other statistics, and discusses the segmentation regularities of variant words across word types: nouns, verbs, numerals, classifiers, adverbs, adjectives, distinguishing words, localizers, place words, time words, pronouns, prepositions, conjunctions, particles, idiomatic expressions, negatives, and prefixes/suffixes. 【Result/Conclusion】The results show that most segmentation variation in the new-era People's Daily corpus is false ambiguity, and that two-character words with the same grammatical structure show higher segmentation-variation consistency than three- and four-character words. 【Innovation/Limitation】This is the first study of Chinese word segmentation ambiguity on new-era People's Daily corpora, but it lacks a comparative analysis against older corpora.

12.
Research Progress and Applications of Chinese Word Segmentation Technology in Natural-Language Retrieval   Total citations: 1 (self-citations: 0, others: 1)
何莘  王琬芜 《情报科学》2008,26(5):787-791
Chinese word segmentation is an essential foundation for natural-language retrieval and a key research topic in the field of information retrieval; specialized retrieval systems and search engines alike depend on advances in segmentation research. Based on searches of well-known domestic and international databases, this paper analyzes research on Chinese word segmentation technology and its application in major search engines.

13.
刘爱琴  安婷 《现代情报》2019,39(8):52-58
[Purpose/Significance] Knowledge association across non-related literature can foster the creation of new knowledge and provides an effective aid to scientific research. [Method/Process] Using the Chinese Classified Thesaurus (《中国分类主题词表》) as the controlled vocabulary of subject terms, the method first applies Chinese word segmentation to document abstracts and extracts subject terms, analyzes the similarity and dissimilarity of document features with bibliometric and clustering techniques, and then retrieves documents for the user, returning precise results with a TOP-K algorithm. [Result/Conclusion] A knowledge-association retrieval system for non-related literature is designed that reveals knowledge associations between documents at a finer granularity and provides users with a high-quality service.
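The TOP-K feedback step reduces, at its core, to ranking documents by similarity to the extracted subject terms. A minimal sketch under that assumption (the paper's actual feature analysis is richer); `doc_vectors` is a hypothetical map from document id to a term-weight dict.

```python
import heapq
import math

def top_k(query_terms, doc_vectors, k=10):
    """Return the k documents most similar to the extracted subject terms."""
    q = {t: 1.0 for t in query_terms}

    def cosine(u, v):
        dot = sum(w * v.get(t, 0.0) for t, w in u.items())
        norm = math.sqrt(sum(w * w for w in u.values())) * \
               math.sqrt(sum(w * w for w in v.values()))
        return dot / norm if norm else 0.0

    # heapq.nlargest keeps only the k best, avoiding a full sort
    return heapq.nlargest(k, doc_vectors, key=lambda d: cosine(q, doc_vectors[d]))
```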

14.
李旭晖  周怡 《情报科学》2022,40(3):99-108
【Purpose/Significance】The essence of keyword extraction is finding the words that express the core semantic content of a document, so analyzing at the level of meaning rather than surface words better matches real needs. Building on the TextRank word-graph model, this paper analyzes at the semantic level and proposes a keyword extraction method based on semantic clustering. 【Method/Process】First, word vectors trained with HowNet sememe information are clustered so that words with similar meanings are grouped together and each word receives a semantic category. Node scores are then computed using the window co-occurrence frequency of the words' semantic categories as the transition probabilities between nodes. Finally, the TF-IDF value and the node score are combined as a weighted sum to adjust the extraction results. 【Result/Conclusion】Overall, the proposed method improves extraction quality, outperforming the TextRank algorithm by 12.66%, 13.77%, and 13.16% in precision P, recall R, and F-measure respectively. 【Innovation/Limitation】The novelty lies in replacing words with semantics and analyzing the relatedness network at the semantic level, and in the first use of word vectors fused with HowNet sememe information for keyword extraction. The limitation is that the method depends on HowNet information and is therefore applicable only to Chinese text.
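A compact sketch of the pipeline described above. KMeans over generic word vectors stands in for the clustering of sememe-informed vectors, cluster-level window co-occurrence supplies the transition weights, and the final score mixes PageRank output with TF-IDF; all constants are illustrative assumptions.

```python
import numpy as np
from collections import Counter
from sklearn.cluster import KMeans

def semantic_textrank(tokens, vectors, tfidf, n_clusters=50,
                      window=5, d=0.85, iters=50, lam=0.5, top_k=10):
    """tokens: segmented document; vectors: word -> embedding;
    tfidf: word -> TF-IDF value. Returns the top_k keywords."""
    words = sorted(set(tokens) & set(vectors))
    X = np.array([vectors[w] for w in words])
    k = min(n_clusters, len(words))
    label = dict(zip(words, KMeans(n_clusters=k, n_init=10).fit_predict(X)))
    # window co-occurrence counted between semantic categories, not words
    cooc = Counter()
    for i, a in enumerate(tokens):
        for b in tokens[i + 1:i + window]:
            if a in label and b in label:
                cooc[(label[a], label[b])] += 1
                cooc[(label[b], label[a])] += 1
    idx = {w: i for i, w in enumerate(words)}
    W = np.zeros((len(words), len(words)))
    for a in words:                     # edge weight = co-occurrence of the
        for b in words:                 # two words' semantic categories
            if a != b:
                W[idx[a], idx[b]] = cooc[(label[a], label[b])]
    rowsum = W.sum(axis=1, keepdims=True)
    P = np.divide(W, rowsum, out=np.zeros_like(W), where=rowsum > 0)
    r = np.full(len(words), 1.0 / len(words))
    for _ in range(iters):              # standard PageRank power iteration
        r = (1 - d) / len(words) + d * P.T.dot(r)
    score = {w: lam * r[idx[w]] + (1 - lam) * tfidf.get(w, 0.0)
             for w in words}            # weighted sum with TF-IDF
    return sorted(score, key=score.get, reverse=True)[:top_k]
```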

15.
The study of query performance prediction (QPP) in information retrieval (IR) aims to predict retrieval effectiveness. The specificity of the underlying information need of a query often determines how effectively a search engine can retrieve relevant documents at top ranks. The presence of ambiguous terms makes a query less specific to the sought information need, which in turn may degrade IR effectiveness. In this paper, we propose a novel word embedding based pre-retrieval feature which measures the ambiguity of each query term by estimating how many ‘senses’ each word is associated with. Assuming each sense roughly corresponds to a Gaussian mixture component, our proposed generative model first estimates a Gaussian mixture model (GMM) from the word vectors that are most similar to the given query terms. We then use the posterior probabilities of generating the query terms themselves from this estimated GMM in order to quantify the ambiguity of the query. Previous studies have shown that post-retrieval QPP approaches often outperform pre-retrieval ones because they use additional information from the top ranked documents. To achieve the best of both worlds, we formalize a linear combination of our proposed GMM based pre-retrieval predictor with NQC, a state-of-the-art post-retrieval QPP. Our experiments on the TREC benchmark news and web collections demonstrate that our proposed hybrid QPP approach (in linear combination with NQC) significantly outperforms a range of other existing pre-retrieval approaches in combination with NQC used as baselines.
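A minimal sketch of the proposed pre-retrieval predictor and its linear combination with NQC, assuming pre-trained word vectors are available; the neighborhood size, the number of mixture components, and the mixing weight are illustrative, not the paper's settings.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def query_specificity(query_terms, vectors, n_neighbors=100, n_senses=3):
    """Fit a GMM to the word vectors nearest the query (each component
    roughly one 'sense'), then score the query terms themselves under it.
    A lower likelihood suggests a more ambiguous, less specific query."""
    Q = np.array([vectors[t] for t in query_terms if t in vectors])
    if len(Q) == 0:
        return 0.0
    vocab = list(vectors)
    X = np.array([vectors[w] for w in vocab])
    # nearest neighbours of the query centroid by cosine similarity
    c = Q.mean(axis=0)
    sims = X @ c / (np.linalg.norm(X, axis=1) * np.linalg.norm(c) + 1e-9)
    nn = X[np.argsort(-sims)[:n_neighbors]]
    gmm = GaussianMixture(n_components=n_senses).fit(nn)
    return float(gmm.score_samples(Q).mean())   # mean log-likelihood

def hybrid_qpp(pre_score, nqc_score, alpha=0.5):
    """Linear combination with a post-retrieval predictor such as NQC."""
    return alpha * pre_score + (1 - alpha) * nqc_score
```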

16.
A main challenge in Cross-Language Information Retrieval (CLIR) is to estimate a proper translation model from available translation resources, since translation quality directly affects the retrieval performance. Among different translation resources, we focus on obtaining translation models from comparable corpora, because they provide appropriate translations for both languages and domains with limited linguistic resources. In this paper, we employ a two-step approach to build an effective translation model from comparable corpora, without requiring any additional linguistic resources, for the CLIR task. In the first step, translations are extracted by deriving correlations between source–target word pairs. These correlations are used to estimate word translation probabilities in the second step. We propose a language modeling approach for the first step, where modeling based on probability distribution provides two key advantages. First, our approach can be tuned more easily than previous, heuristically adjusted work. Second, it provides a principled basis for integrating additional lexical and translational relations to improve the accuracy of translations from comparable corpora. As an indication, we integrate monolingual relations of word co-occurrences into the process of translation extraction, which helps to extract more reliable translations for low-frequency words in a comparable corpus. Experimental results on an English–Persian comparable corpus show that our method outperforms the previous approaches in terms of both translation quality and the performance of CLIR. Indeed, the proposed method is naturally applicable to any comparable corpus, regardless of its languages. In addition, we demonstrate the significant impact of word translation probabilities, estimated in the second step of our approach, on the performance of CLIR.

17.
In this paper, we propose a new learning method for extracting bilingual word pairs from parallel corpora in various languages. In cross-language information retrieval, the system must deal with various languages. Therefore, automatic extraction of bilingual word pairs from parallel corpora with various languages is important. However, previous works based on statistical methods are insufficient because of the sparse data problem. Our learning method automatically acquires rules, which are effective to solve the sparse data problem, only from parallel corpora without any prior preparation of a bilingual resource (e.g., a bilingual dictionary, a machine translation system). We call this learning method Inductive Chain Learning (ICL). Moreover, the system using ICL can extract bilingual word pairs even from bilingual sentence pairs for which the grammatical structures of the source language differ from the grammatical structures of the target language because the acquired rules have the information to cope with the different word orders of source language and target language in local parts of bilingual sentence pairs. Evaluation experiments demonstrated that the recalls of systems based on several statistical approaches were improved through the use of ICL.

18.
The quality of stemming algorithms is typically measured in two different ways: (i) how accurately they map the variant forms of a word to the same stem; or (ii) how much improvement they bring to Information Retrieval systems. In this article, we evaluate various stemming algorithms, in four languages, in terms of accuracy and in terms of their aid to Information Retrieval. The aim is to assess whether the most accurate stemmers are also the ones that bring the biggest gain in Information Retrieval. Experiments in English, French, Portuguese, and Spanish show that this is not always the case, as stemmers with higher error rates yield better retrieval quality. As a byproduct, we also identified the most accurate stemmers and the best for Information Retrieval purposes.

19.
The absence of diacritics in text documents or search queries is a serious problem for Turkish information retrieval because it creates homographic ambiguity. Thus, the inappropriate handling of diacritics reduces the retrieval performance in search engines. A straightforward solution to this problem is to normalize tokens by replacing diacritic characters with their American Standard Code for Information Interchange (ASCII) counterparts. However, this so-called ASCIIfication produces either synthetic words that are not legitimate Turkish words or legitimate words with meanings that are completely different from those of the original words. These non-valid synthetic words cannot be processed by morphological analysis components (such as stemmers or lemmatizers), which expect the input to be valid Turkish words. By contrast, synthetic words are not a problem when no stemmer or a simple first-n-characters-stemmer is used in the text analysis pipeline. This difference emphasizes the notion of the diacritic sensitivity of stemmers. In this study, we propose and evaluate an alternative solution based on the application of deASCIIfication, which restores accented letters in query terms or text documents. Our risk-sensitive evaluation results showed that the diacritics restoration approach yielded more effective and robust results compared with normalizing tokens to remove diacritics.
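ASCIIfication is a straightforward character-folding step, while deASCIIfication must restore the diacritics it removes. The sketch below shows both, with a hypothetical lexicon lookup standing in for the paper's context-sensitive restoration.

```python
# ASCIIfication: fold Turkish diacritics to ASCII (loses information)
FOLD = str.maketrans('çğıöşüÇĞİÖŞÜ', 'cgiosuCGIOSU')

def asciify(text):
    return text.translate(FOLD)

def deasciify(token, lexicon):
    """Restore diacritics: return the lexicon word whose ASCII folding
    matches the token. lexicon is a hypothetical map from folded form to
    the valid Turkish word, e.g. {'dogum': 'doğum'}; real systems need
    context to break ties between candidates, which this sketch ignores."""
    return lexicon.get(asciify(token).lower(), token)
```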

20.
Word sense disambiguation (WSD) is meant to assign the most appropriate sense to a polysemous word according to its context. We present a method for automatic WSD using only two resources: a raw text corpus and a machine-readable dictionary (MRD). The system learns the similarity matrix between word pairs from the unlabeled corpus, and it uses the vector representations of sense definitions from the MRD, which are derived based on the similarity matrix. In order to disambiguate all occurrences of polysemous words in a sentence, the system separately constructs an acyclic weighted digraph (AWD) for every occurrence of a polysemous word in the sentence. The AWD is structured based on consideration of the senses of context words which occur with a target word in a sentence. After building the AWD for each polysemous word, we can search for the optimal path of the AWD using the Viterbi algorithm. We assign to the target word the sense on the optimal path in the AWD. In experiments, our system shows 76.4% accuracy for semantically ambiguous Korean words.
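The optimal-path search over the acyclic weighted digraph is a standard dynamic program; here is a sketch, with `layers` holding the candidate senses per word and a hypothetical `edge_weight` similarity between senses of adjacent words.

```python
def best_sense_path(layers, edge_weight):
    """Optimal path through the acyclic weighted digraph (AWD), in the
    spirit of the Viterbi step described above.

    layers      -- list of lists of candidate senses, one list per word
    edge_weight -- edge_weight(a, b): similarity between adjacent senses
    """
    # trellis[i][s] = (score of best path ending in sense s, back-pointer)
    trellis = [{s: (0.0, None) for s in layers[0]}]
    for prev, layer in zip(layers, layers[1:]):
        stage = {}
        for s in layer:
            score, back = max(
                ((trellis[-1][p][0] + edge_weight(p, s), p) for p in prev),
                key=lambda t: t[0])
            stage[s] = (score, back)
        trellis.append(stage)
    # trace back from the highest-scoring final sense
    sense = max(trellis[-1], key=lambda s: trellis[-1][s][0])
    path = [sense]
    for stage in reversed(trellis[1:]):
        sense = stage[sense][1]
        path.append(sense)
    return list(reversed(path))
```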
