Similar Documents
17 similar documents found (search time: 281 ms)
1.
孟旭阳  白海燕  梁冰  王莉 《情报杂志》2021,(3):125-131,7
[Purpose/Significance] In the era of digitized resources, document services are shifting toward knowledge services, and high-quality automatic indexing of documents is the foundation of and key to improving document-based knowledge services. To address the currently low accuracy of automatic indexing of English scientific literature, a semantics-aware concept selection and optimization method is proposed. [Method/Process] Building on automatic subject indexing driven by a knowledge organization system (KOS), the method uses neural-network word embeddings from natural language processing to represent the semantics of both the candidate concepts and the English document content, performs semantic matching and evaluation, and thereby filters concept-indexing results at the semantic level. Combining KOS-based indexing with NLP techniques compensates for the lack of semantic-level processing, further reduces the influence of irrelevant concepts, and improves the accuracy of concept indexing. [Result/Conclusion] Experimental results show that the method performs well at semantic matching, effectively filters out irrelevant concepts during concept selection, and substantially improves the document relevance of the indexing results, providing a valuable reference for building knowledge-oriented services over scientific literature resources and for related research.
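As a rough, minimal sketch of the semantic-filtering idea described above: represent the document and each candidate concept as embedding vectors and keep only concepts whose cosine similarity clears a threshold. All vectors, names, and the threshold below are illustrative stand-ins, not the paper's actual model.

```python
import numpy as np

# Toy stand-ins for pre-trained embeddings of KOS concepts and document
# words; in the paper's setting these come from a trained neural model.
rng = np.random.default_rng(0)
concept_vecs = {"machine translation": rng.normal(size=100),
                "protein folding": rng.normal(size=100)}
doc_word_vecs = [rng.normal(size=100) for _ in range(50)]

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Represent the document as the mean of its word vectors, then keep only
# candidate concepts whose similarity clears a threshold.
doc_vec = np.mean(doc_word_vecs, axis=0)
THRESHOLD = 0.1  # would be tuned on labelled indexing data
scores = {c: cosine(v, doc_vec) for c, v in concept_vecs.items()}
selected = {c: s for c, s in scores.items() if s >= THRESHOLD}
print(selected)
```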

2.
Analyzes the role of word embeddings in natural language processing. Pre-trained word embeddings are used for simple analogy reasoning, word-to-word distance, and sentence-similarity analysis. A sentence-similarity formula, WCos, is given that combines the strengths of word embeddings and traditional semantic parsing: degree adverbs are extracted and handled separately through traditional semantic analysis, their weights are set with a weight-modification lookup table, and the cosine similarity formula is corrected accordingly. For sentence-similarity analysis, this method clearly outperforms methods that rely on word embeddings alone.
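One plausible reading of the WCos idea, sketched below: fold a degree-adverb weight into the word-vector average before taking the cosine. The weight table and the exact form of the correction are guesses; the paper's actual lookup table is not reproduced here.

```python
import numpy as np

# Illustrative degree-adverb weight table; the paper's actual
# weight-modification lookup table is not reproduced here.
DEGREE_WEIGHTS = {"very": 1.3, "slightly": 0.7, "extremely": 1.5}

def sentence_vector(tokens, embeddings, dim=100):
    """Weighted mean of word vectors; degree adverbs scale their weight."""
    vec, total = np.zeros(dim), 0.0
    for tok in tokens:
        if tok not in embeddings:
            continue
        w = DEGREE_WEIGHTS.get(tok, 1.0)
        vec += w * embeddings[tok]
        total += w
    return vec / total if total else vec

def wcos(tokens_a, tokens_b, embeddings):
    a = sentence_vector(tokens_a, embeddings)
    b = sentence_vector(tokens_b, embeddings)
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0
```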

3.
[Purpose/Significance] Targeting the main problems and weak links in building technology-effect maps, a construction method based on SAO structures and word embeddings is proposed. [Method/Process] A Python program extracts SAO structures from patent abstracts, from which technology terms and effect terms are identified; combining a domain dictionary with a patent-domain corpus, semantic similarity between terms is computed with Word2Vec and WordNet; a network-based topic clustering algorithm labels topics automatically; and a technology-effect matrix is built from the co-occurrence relations of SAO structures. [Result/Conclusion] Automatic construction of technology-effect maps from SAO structures and word embeddings is realized. The method improves the soundness of the derived technology-effect topics and the accuracy of patent classification labeling, offering a new route to automated construction of technology-effect maps.
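A small sketch of the term-similarity step, blending corpus-based (Word2Vec) and lexicon-based (WordNet) scores; the blend weight alpha and the toy corpus are assumptions, and NLTK's WordNet data must be downloaded first.

```python
from gensim.models import Word2Vec
from nltk.corpus import wordnet as wn  # requires nltk.download("wordnet")

# Train a tiny Word2Vec model on a stand-in patent corpus
# (real usage would train on the full patent-domain corpus).
corpus = [["rotor", "reduces", "vibration"], ["blade", "improves", "efficiency"]]
w2v = Word2Vec(corpus, vector_size=50, min_count=1, seed=1)

def wordnet_sim(w1, w2):
    """Max path similarity over the words' WordNet synsets (0 if none)."""
    scores = [s1.path_similarity(s2) or 0.0
              for s1 in wn.synsets(w1) for s2 in wn.synsets(w2)]
    return max(scores, default=0.0)

def combined_sim(w1, w2, alpha=0.5):
    """Blend corpus-based and lexicon-based similarity; alpha is a guess."""
    w2v_sim = w2v.wv.similarity(w1, w2) if w1 in w2v.wv and w2 in w2v.wv else 0.0
    return alpha * float(w2v_sim) + (1 - alpha) * wordnet_sim(w1, w2)
```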

4.
[Purpose/Significance] To provide a reference for research on cross-lingual text clustering. [Method/Process] First, each document is split into sentences and a semantic feature value is computed for each sentence to determine the document's feature-sentence set and build its vector representation. Second, the idea of Word Rotator's Distance (WRD) is carried into the similarity-computation step, yielding a Semantic Feature Sentence Vectors' Distance (SFSVD) measure of inter-document similarity. Finally, an HAC (hierarchical agglomerative clustering) algorithm produces the clustering result. [Result/Conclusion] Compared with existing methods, the proposed Chinese-Russian cross-lingual text clustering method markedly and consistently improves Purity and NMI. Semantic feature sentences combined with the SFSVD similarity measure represent text information more accurately and thus further improve Chinese-Russian cross-lingual text clustering.
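A skeleton of the clustering stage: build a pairwise document-distance matrix with a custom measure and run hierarchical agglomerative clustering on it. The distance function below is plain cosine standing in for SFSVD, and the document vectors are random placeholders.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

# Stand-in document vectors; the paper builds these from feature sentences.
rng = np.random.default_rng(0)
doc_vecs = rng.normal(size=(6, 100))

def sfsvd_like_distance(a, b):
    """Placeholder distance (cosine); the actual SFSVD measure differs."""
    return 1.0 - a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

n = len(doc_vecs)
dist = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        dist[i, j] = dist[j, i] = sfsvd_like_distance(doc_vecs[i], doc_vecs[j])

# Hierarchical agglomerative clustering on the precomputed distances.
labels = fcluster(linkage(squareform(dist), method="average"),
                  t=2, criterion="maxclust")
print(labels)
```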

5.
李海蓉 《情报科学》2012,(6):852-857
Briefly introduces the notion of semantic templates and proposes an automatic document classification model based on a semantic-template vector space. Using the support vector machine (SVM) classification algorithm, classification experiments on a document test set are run in both the semantic-template space and the word-vector space; the results show that text classification in the semantic-template space outperforms classification in the word-vector space.
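For reference, a minimal word-vector-space SVM baseline of the kind this entry compares against, using scikit-learn (the corpus is a toy):

```python
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC

# Tiny stand-in corpus; the paper contrasts a semantic-template space
# against this kind of plain word-vector space.
docs = ["stock market rises", "team wins final",
        "shares fall sharply", "coach praises players"]
labels = ["finance", "sports", "finance", "sports"]

clf = make_pipeline(TfidfVectorizer(), LinearSVC())
clf.fit(docs, labels)
print(clf.predict(["market shares climb"]))
```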

6.
To address the absence of semantics in the vector space model, the semantic lexicon HowNet (知网) is brought into the text classification process to improve accuracy. For polysemy in Chinese text, an improved word semantic-similarity measure is proposed: word-sense disambiguation first selects the appropriate sense, similarity is then computed between senses, words whose similarity exceeds a threshold are clustered, and the text feature vector is thereby reduced in dimensionality. A semantics-based text classification algorithm is given and analyzed experimentally; the results show that it effectively improves Chinese text classification.
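A sketch of the threshold-based word clustering used for dimensionality reduction; hownet_sim is a toy stand-in for the paper's HowNet-based similarity computed after word-sense disambiguation.

```python
# Greedy threshold clustering of words by pairwise similarity.
def hownet_sim(w1, w2):
    # Toy similarity table standing in for HowNet-based computation.
    toy = {frozenset(("汽车", "轿车")): 0.9, frozenset(("美丽", "漂亮")): 0.85}
    return toy.get(frozenset((w1, w2)), 0.1)

def cluster_words(words, threshold=0.8):
    clusters = []
    for w in words:
        for c in clusters:
            if all(hownet_sim(w, m) >= threshold for m in c):
                c.append(w)
                break
        else:
            clusters.append([w])
    return clusters

print(cluster_words(["汽车", "轿车", "美丽", "漂亮", "经济"]))
```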

7.
[Purpose/Significance] Patent keyword extraction is an important upstream subtask of patent mining, and graph-based extraction is currently the most effective class of algorithms. Traditional graph models consider only a word's local context; to capture global information, a patent keyword extraction algorithm based on graph neural networks is proposed, combining word embeddings with a graph model. [Method/Process] First, a heterogeneous network is built from the patent dataset and a graph neural network is trained with patent classification codes as labels, so that words under the same topic obtain similar vector representations, yielding topic-aware word embeddings. Then a text graph fusing word-topic information is built from word co-occurrence within a sliding window over patent abstracts and from embedding similarity, with the topic information in the embeddings capturing global relations between words. Finally, PageRank is run on the text graph to obtain its key nodes, which form the patent's keywords. [Result/Conclusion] Compared with the baselines, the algorithm detects more novel and more accurate patent keywords.
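A bare-bones version of the final ranking stage: a sliding-window co-occurrence graph scored with PageRank via networkx. The GNN-derived, topic-aware edge weighting that the paper adds is omitted, and the abstract is a toy.

```python
import networkx as nx

# Sliding-window co-occurrence graph over a (toy) patent abstract.
tokens = "rotor blade vibration damping rotor blade efficiency".split()
WINDOW = 3

G = nx.Graph()
for i in range(len(tokens)):
    for j in range(i + 1, min(i + WINDOW, len(tokens))):
        if tokens[i] != tokens[j]:
            G.add_edge(tokens[i], tokens[j])

# Rank nodes and take the top-scoring words as keywords.
scores = nx.pagerank(G)
keywords = sorted(scores, key=scores.get, reverse=True)[:3]
print(keywords)
```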

8.
Research on a Document Retrieval System Based on a Semantic Vector Space Model    Cited by: 1 (self-citations: 0, others: 1)
To address the vector space model's lack of semantic similarity, a semantic vector space model is established and a document retrieval system based on it is designed. The two core techniques, semantic similarity computation and query expansion, are studied in depth, and the effectiveness of the retrieval system is verified with worked examples.
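A tiny sketch of the query-expansion component under one common design: expand each query term with its nearest neighbours in an embedding space before retrieval. The vocabulary and vectors are random stand-ins, and the paper's actual expansion rules may differ.

```python
import numpy as np

# Random stand-ins for trained term embeddings.
rng = np.random.default_rng(0)
vocab = ["car", "automobile", "engine", "flower"]
emb = {w: rng.normal(size=50) for w in vocab}

def expand(query_terms, k=1):
    """Add each query term's k nearest neighbours (by dot product)."""
    expanded = set(query_terms)
    for q in query_terms:
        sims = sorted((w for w in vocab if w not in query_terms),
                      key=lambda w: -float(emb[q] @ emb[w]))
        expanded.update(sims[:k])
    return expanded

print(expand(["car"]))
```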

9.
Application of the LDA Model to Patent Text Classification    Cited by: 1 (self-citations: 0, others: 1)
To address problems with the vector-space text representation used in traditional automatic patent classification, a patent text classification method based on the LDA model is proposed. The method models the patent text corpus with an LDA topic model and extracts the document-topic and topic-term matrices, thereby reducing dimensionality and capturing semantic relations between documents. A class-topic matrix is introduced to give each class a topic-level semantic extension; topic similarity is used to build a hierarchical classification, with KNN applied to the fine-grained classes. Experimental results: compared with a KNN patent classifier built on the vector-space text representation, this method achieves higher classification evaluation scores.
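A compressed sketch of the pipeline's fine-grained stage: LDA reduces (toy) patents to topic distributions, and KNN classifies in that topic space. The class-topic extension and the hierarchical layer are not shown.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.neighbors import KNeighborsClassifier

docs = ["rotor blade turbine", "engine piston cylinder",
        "turbine blade cooling", "piston ring engine"]
labels = ["turbines", "engines", "turbines", "engines"]

# Document-topic matrix from LDA acts as the reduced representation.
tf = CountVectorizer().fit_transform(docs)
doc_topics = LatentDirichletAllocation(n_components=2,
                                       random_state=0).fit_transform(tf)

knn = KNeighborsClassifier(n_neighbors=1).fit(doc_topics, labels)
print(knn.predict(doc_topics[:1]))
```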

10.
A Chinese Keyword Extraction Algorithm Based on the TFIDF Method    Cited by: 4 (self-citations: 1, others: 3)
Building on the 海量 intelligent Chinese word segmentation system, this paper proposes a Chinese keyword extraction algorithm based on the vector space model and the TFIDF method. After automatically segmenting a text, the algorithm computes a TFIDF weight for every word in the document space and extracts the keywords of a scientific document from the results. Tests with purpose-built software show that the algorithm extracts keywords from Chinese scientific literature effectively.
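An approximation of the algorithm with off-the-shelf parts: jieba stands in for the 海量 segmenter, and scikit-learn computes the TFIDF weights from which the top-weighted words are taken as keywords.

```python
import jieba
from sklearn.feature_extraction.text import TfidfVectorizer

# TFIDF keyword extraction after Chinese segmentation.
docs = ["自动分词是中文信息处理的基础", "关键词抽取依赖于分词和权重计算"]
segmented = [" ".join(jieba.cut(d)) for d in docs]

vec = TfidfVectorizer()
tfidf = vec.fit_transform(segmented)
terms = vec.get_feature_names_out()

# Take the top-weighted terms of each document as its keywords.
for row in tfidf.toarray():
    top = sorted(zip(terms, row), key=lambda t: -t[1])[:3]
    print([w for w, s in top if s > 0])
```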

11.
Today, due to the vast amount of textual data, automated extractive text summarization is one of the most common and practical techniques for organizing information. Extractive summarization selects the most appropriate sentences from the text and provides a representative summary. Sentences, as individual textual units, are usually too short for mainstream text processing techniques to perform well on, so it is vital to bridge the gap between short text units and conventional text processing methods. In this study, we propose a semantic method for implementing an extractive multi-document summarizer system using a combination of statistical, machine-learning-based, and graph-based methods. It is a language-independent and unsupervised system. The proposed framework learns the semantic representation of words from a set of given documents via the word2vec method. It expands each sentence, through an innovative method, with the most informative and least redundant words related to the sentence's main topic. Sentence expansion implicitly performs word-sense disambiguation and tunes the conceptual densities towards the central topic of each sentence. The framework then estimates the importance of sentences using a graph representation of the documents. To identify the most important topics of the documents, we propose an inventive clustering approach that autonomously determines the number of clusters and their initial centroids, and clusters sentences accordingly. The system selects the best sentences from appropriate clusters for the final summary with respect to information salience, minimum redundancy, and adequate coverage. A set of extensive experiments on the DUC2002 and DUC2006 datasets was conducted to investigate the proposed scheme. Experimental results showed that the proposed sentence-expansion algorithm and clustering approach considerably enhance the performance of the summarization system. Comparative experiments also demonstrated that the proposed framework outperforms most state-of-the-art summarizer systems and can impressively assist the task of extractive text summarization.
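A skeleton of the rank-then-cluster selection loop (sentence expansion omitted): embed sentences, rank them with PageRank on a similarity graph, cluster them, and take the best-ranked sentence per cluster. Sentence vectors are random stand-ins for word2vec averages, and KMeans with a fixed k replaces the paper's self-tuning clustering.

```python
import numpy as np
import networkx as nx
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
sent_vecs = rng.normal(size=(8, 100))
norms = np.linalg.norm(sent_vecs, axis=1)
sims = np.clip(sent_vecs @ sent_vecs.T / np.outer(norms, norms), 0, None)

# Rank sentences on the cosine-similarity graph.
G = nx.from_numpy_array(np.triu(sims, k=1))
rank = nx.pagerank(G)

# Cluster sentences by topic; the fixed k is purely illustrative.
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(sent_vecs)
summary = [max((i for i in range(len(sent_vecs)) if clusters[i] == c),
               key=rank.get)
           for c in sorted(set(clusters))]
print(summary)
```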

12.
In existing unsupervised methods, Latent Semantic Analysis (LSA) is used for sentence selection. However, the obtained results are less meaningful, because singular vectors are used as the bases for sentence selection from the given documents, and singular vector components can have negative values. We propose a new unsupervised method using Non-negative Matrix Factorization (NMF) to select sentences for automatic generic document summarization. The proposed method uses non-negative constraints, which are closer to the human cognition process. As a result, the method selects more meaningful sentences for generic document summarization than those selected using LSA.
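A minimal version of NMF-based sentence selection: factorize the term-sentence matrix and score each sentence by its non-negative topic loadings, weighted by topic mass. The corpus and the exact scoring rule are illustrative.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import NMF

sentences = ["the cat sat on the mat", "dogs chase cats",
             "the stock market fell", "investors sold shares quickly"]
A = CountVectorizer().fit_transform(sentences).T  # terms x sentences

nmf = NMF(n_components=2, random_state=0)
W = nmf.fit_transform(A)   # terms x topics
H = nmf.components_        # topics x sentences

# Generic relevance of each sentence: topic loadings weighted by topic mass.
topic_weight = H.sum(axis=1) / H.sum()
scores = topic_weight @ H
best = np.argsort(scores)[::-1][:2]
print([sentences[i] for i in best])
```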

13.
Most knowledge accumulated through scientific discoveries in genomics and related biomedical disciplines is buried in the vast amount of biomedical literature. Since understanding gene regulation is fundamental to biomedical research, summarizing all the existing knowledge about a gene based on literature is highly desirable to help biologists digest the literature. In this paper, we present a study of methods for automatically generating gene summaries from biomedical literature. Unlike most existing work on automatic text summarization, in which the generated summary is often a list of extracted sentences, we propose to generate a semi-structured summary which consists of sentences covering specific semantic aspects of a gene. Such a semi-structured summary is more appropriate for describing genes and poses special challenges for automatic text summarization. We propose a two-stage approach to generate such a summary for a given gene: first retrieving articles about the gene and then extracting sentences for each specified semantic aspect. We address the issue of gene name variation in the first stage and propose several different methods for sentence extraction in the second stage. We evaluate the proposed methods using a test set with 20 genes. Experimental results show that the proposed methods can generate useful semi-structured gene summaries automatically from biomedical literature, and our proposed methods outperform general-purpose summarization methods. Among all the proposed methods for sentence extraction, a probabilistic language modeling approach that models gene context performs the best.
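A toy rendering of the two-stage idea: retrieve articles matching any gene-name variant, then pick one sentence per semantic aspect by cue-word overlap. The variant lists and cue words are invented for illustration, and the paper's best method uses language modeling rather than this simple overlap count.

```python
# Illustrative name variants and aspect cue words, not the paper's data.
GENE_VARIANTS = {"p53": {"p53", "tp53", "trp53"}}
ASPECT_CUES = {"expression": {"expressed", "expression", "mrna"},
               "mutation": {"mutation", "mutant", "variant"}}

def retrieve(articles, gene):
    """Stage 1: keep articles mentioning any variant of the gene name."""
    variants = GENE_VARIANTS[gene]
    return [a for a in articles if variants & set(a.lower().split())]

def summarize(articles, gene):
    """Stage 2: per aspect, pick the sentence with the most cue words."""
    sentences = [s.strip() for a in retrieve(articles, gene)
                 for s in a.split(".") if s.strip()]
    summary = {}
    for aspect, cues in ASPECT_CUES.items():
        scored = [(len(cues & set(s.lower().split())), s) for s in sentences]
        best = max(scored)
        if best[0] > 0:
            summary[aspect] = best[1]
    return summary

docs = ["TP53 mutation is common in tumors. TP53 expression rises under stress."]
print(summarize(docs, "p53"))
```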

14.
The rise in the amount of textual resources available on the Internet has created the need for tools for automatic document summarization. The main challenges of query-oriented extractive summarization are (1) to identify the topics of the documents and (2) to recover query-relevant sentences of the documents that together cover these topics. Existing graph- or hypergraph-based summarizers use graph-based ranking algorithms to produce individual relevance scores for the sentences. Hence, these systems fail to measure the topics jointly covered by the sentences forming the summary, which tends to produce redundant summaries. To address the issue of selecting non-redundant sentences that jointly cover the main query-relevant topics of a corpus, we propose a new method using the powerful theory of hypergraph transversals. First, we introduce a new topic model based on the semantic clustering of terms in order to discover the topics present in a corpus. Second, these topics are modeled as the hyperedges of a hypergraph in which the nodes are the sentences. A summary is then produced by generating a transversal of nodes in the hypergraph. Algorithms based on the theory of submodular functions are proposed to generate the transversals and to build the summaries. The proposed summarizer outperforms existing graph- or hypergraph-based summarizers by at least 6% in ROUGE-SU4 F-measure on the DUC 2007 dataset. Moreover, it is cheaper than existing hypergraph-based summarizers in terms of computational time complexity.
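A greedy transversal (hitting-set) sketch of the selection step: repeatedly add the sentence that covers the most still-uncovered topic hyperedges. This is a plain stand-in for the paper's submodular-function algorithms; the topic map is a toy.

```python
topics = {  # hyperedge -> set of sentence ids covering that topic
    "pricing": {0, 2},
    "regulation": {1},
    "mergers": {2, 3},
}

def greedy_transversal(hyperedges):
    uncovered = set(hyperedges)
    chosen = []
    while uncovered:
        # Pick the sentence hitting the most uncovered hyperedges.
        candidates = {s for e in uncovered for s in hyperedges[e]}
        best = max(candidates,
                   key=lambda s: sum(s in hyperedges[e] for e in uncovered))
        chosen.append(best)
        uncovered = {e for e in uncovered if best not in hyperedges[e]}
    return chosen

print(greedy_transversal(topics))  # e.g. [2, 1]
```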

15.
Automatic text summarization attempts to provide an effective solution to today's unprecedented growth of textual data. This paper proposes an innovative graph-based text summarization framework for generic single- and multi-document summarization. The summarizer benefits from two well-established text semantic representation techniques: Semantic Role Labelling (SRL) and Explicit Semantic Analysis (ESA), as well as the constantly evolving collective human knowledge in Wikipedia. SRL is used for sentence-level semantic parsing, with word tokens represented as vectors of weighted Wikipedia concepts using the ESA method. The essence of the developed framework is to construct a unique concept-graph representation underpinned by semantic-role-based multi-node (sub-sentence-level) vertices for summarization. We have empirically evaluated the summarization system using the standard publicly available dataset from the Document Understanding Conference 2002 (DUC 2002). Experimental results indicate that the proposed summarizer outperforms all related state-of-the-art comparators in single-document summarization based on the ROUGE-1 and ROUGE-2 measures, while also ranking second in the ROUGE-1 and ROUGE-SU4 scores for multi-document summarization. The testing also demonstrates the scalability of the system: varying the evaluation data size has little impact on summarizer performance, particularly for the single-document summarization task. In a nutshell, the findings demonstrate the power of the role-based and vectorial semantic representation when combined with the crowd-sourced knowledge base in Wikipedia.
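A miniature ESA-style representation of the kind this entry builds on: a text becomes a vector of similarities to concept articles (Wikipedia articles in the paper, toy strings here). The SRL parsing and concept-graph construction are not shown.

```python
from sklearn.feature_extraction.text import TfidfVectorizer

# Toy concept "articles" standing in for Wikipedia pages.
concepts = {"Economy": "market trade inflation stock economy",
            "Biology": "cell gene protein organism biology"}

vec = TfidfVectorizer()
concept_matrix = vec.fit_transform(concepts.values())  # concepts x terms

def esa_vector(text):
    """Represent text as its similarity to each concept article."""
    return (concept_matrix @ vec.transform([text]).T).toarray().ravel()

print(dict(zip(concepts, esa_vector("gene expression in the cell"))))
```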

16.
Text summarization is the process of generating a brief version of a document while preserving its fundamental information as much as possible. Although most text summarization research has focused on supervised learning solutions, few datasets have actually been created for summarization tasks, and most existing summarization datasets lack human-generated goal summaries, which are vital for both summary generation and evaluation. Therefore, this study presents a new dataset for abstractive and extractive summarization tasks. The dataset contains academic publications, the abstracts written by their authors, and extracts in two sizes, which were generated by human readers for this research. The resulting extracts were then evaluated to ensure the validity of the human extract-production process. Moreover, the extractive summarization problem was reinvestigated on the proposed summarization dataset. The main point taken into account was to analyze the feature vector in order to generate more informative summaries. To that end, a comprehensive syntactic feature space was generated for the proposed dataset, and the impact of these features on the informativeness of the resulting summaries was investigated. The summarization capability of semantic features was also examined using GloVe and word2vec embeddings. Finally, the use of an ensembled feature space, i.e., the joint use of syntactic and semantic features, was proposed with a long short-term memory-based neural network model. The model summaries were evaluated with ROUGE metrics, and the results showed that the proposed ensemble feature space remarkably improves on the use of syntactic or semantic features alone. Additionally, the summaries produced by the proposed approach on ensembled features clearly outperformed, or performed comparably to, summaries obtained by state-of-the-art models for extractive summarization.
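A sketch of the ensembled feature space as a two-input Keras model: a word-sequence branch (whose embedding layer could be initialized with GloVe or word2vec weights) concatenated with a hand-crafted syntactic feature vector, scoring whether a sentence belongs in the summary. All sizes below are guesses, not the paper's configuration.

```python
from tensorflow.keras import layers, Model

SEQ_LEN, VOCAB, EMB_DIM, N_SYN = 40, 5000, 100, 8

tokens = layers.Input(shape=(SEQ_LEN,), dtype="int32")   # word ids
syntax = layers.Input(shape=(N_SYN,))                    # syntactic features

x = layers.Embedding(VOCAB, EMB_DIM)(tokens)  # could load GloVe weights here
x = layers.LSTM(64)(x)                        # semantic sentence encoding
x = layers.Concatenate()([x, syntax])         # ensembled feature space
score = layers.Dense(1, activation="sigmoid")(x)  # include-in-summary score

model = Model(inputs=[tokens, syntax], outputs=score)
model.compile(optimizer="adam", loss="binary_crossentropy")
model.summary()
```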

17.
A high-quality summary is the target of, and the challenge for, any automatic text summarizer. In this paper, we introduce a hybrid model for the automatic text summarization problem that exploits the strengths of several techniques: a diversity-based method filters similar sentences and selects the most diverse ones, a swarm-based method differentiates between more and less important features, and fuzzy logic flexibly tolerates the uncertainty, ambiguity, and imprecision in the weights of text features. The diversity-based method targets redundancy, while the other two techniques concentrate on the sentence-scoring mechanism. We present the proposed model in two forms. In the first form, diversity measures dominate the model's behavior; in the second, the diversity constraint is no longer imposed, so the diversity-based method works the same way as the fuzzy swarm-based method. The results showed that the second form of the model performs better than the first form, the swarm model, the fuzzy swarm method, and the benchmark methods. Overall, the results show that combining diversity measures, swarm techniques, and fuzzy logic can generate a good summary containing the most important parts of the document.
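Maximal Marginal Relevance (MMR), a standard diversity heuristic, sketched below as a plain stand-in for the diversity-based filtering component: trade sentence salience off against similarity to already-selected sentences. Salience scores and vectors are random placeholders for the fuzzy swarm scorer's output.

```python
import numpy as np

rng = np.random.default_rng(0)
sent_vecs = rng.normal(size=(6, 50))
salience = rng.random(6)  # would come from the fuzzy swarm scorer

def cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def mmr_select(k=3, lam=0.7):
    """Greedy MMR: salience minus redundancy against selected sentences."""
    selected = []
    while len(selected) < k:
        remaining = [i for i in range(len(sent_vecs)) if i not in selected]
        def mmr(i):
            redundancy = max((cos(sent_vecs[i], sent_vecs[j])
                              for j in selected), default=0.0)
            return lam * salience[i] - (1 - lam) * redundancy
        selected.append(max(remaining, key=mmr))
    return selected

print(mmr_select())
```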
