Similar documents
 20 similar documents found (search time: 156 ms)
1.
黄莉 (Huang Li), 李湘东 (Li Xiangdong). 《情报杂志》 (Journal of Intelligence), 2012, 31(7): 177-181, 176
KNN (k-nearest neighbors) is one of the most basic and widely used algorithms for automatic text classification, and it requires computing the similarity between texts. Taking Jensen-Shannon divergence as an example, this paper derives and explains its underlying principle and then applies it to computing text similarity; for comparison, the conventional cosine measure is also used. Texts are then classified with KNN to examine how the choice of similarity measure affects automatic classification. Empirical experiments on several test corpora show that, compared with the cosine measure, classification based on Jensen-Shannon divergence achieves higher accuracy but takes more time.
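A minimal sketch of the Jensen-Shannon similarity the abstract describes, assuming texts have already been reduced to term-count dictionaries (the function names and the `1 - JSD` mapping are illustrative, not the paper's exact formulation):

```python
import math

def js_divergence(p, q):
    """Jensen-Shannon divergence (log base 2, so bounded by 1) between
    two discrete distributions given as dicts mapping term -> probability."""
    m = {t: 0.5 * (p.get(t, 0.0) + q.get(t, 0.0)) for t in set(p) | set(q)}
    def kl(dist):
        # KL divergence from dist to the mixture m, skipping zero-probability terms
        return sum(pr * math.log2(pr / m[t])
                   for t, pr in dist.items() if pr > 0)
    return 0.5 * kl(p) + 0.5 * kl(q)

def js_similarity(counts_a, counts_b):
    """Normalize raw term counts into distributions and map JSD
    to a similarity score in [0, 1]."""
    def norm(counts):
        total = sum(counts.values())
        return {t: c / total for t, c in counts.items()}
    return 1.0 - js_divergence(norm(counts_a), norm(counts_b))
```

Because base-2 JSD is bounded by 1, `1 - JSD` gives a similarity in [0, 1] that can drive a KNN classifier in place of the cosine measure.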

2.
A Chinese Keyword Extraction Algorithm Based on the TFIDF Method    Cited by: 4 (self-citations: 1, citations by others: 3)
Building on the 海量 intelligent word-segmentation system, this paper proposes a Chinese keyword extraction algorithm based on the vector space model and the TFIDF method. After automatically segmenting a text, the algorithm computes a TFIDF weight for every word in the document space and extracts the keywords of a scientific document from the results. Tests with self-developed software show that the algorithm is highly effective for automatic keyword extraction from Chinese scientific literature.
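The TFIDF weighting and extraction step can be sketched as follows, assuming the corpus has already been segmented into token lists (the smoothed IDF variant and the function name are assumptions, not the paper's exact formula):

```python
import math
from collections import Counter

def tfidf_keywords(doc_tokens, corpus_tokens, k=3):
    """Rank the terms of one document by TF-IDF against a corpus of
    tokenized documents and return the top-k terms as keywords."""
    n_docs = len(corpus_tokens)
    df = Counter()                      # document frequency of each term
    for doc in corpus_tokens:
        df.update(set(doc))
    tf = Counter(doc_tokens)
    total = len(doc_tokens)
    scores = {
        t: (c / total) * math.log((n_docs + 1) / (df[t] + 1))
        for t, c in tf.items()
    }
    return [t for t, _ in sorted(scores.items(), key=lambda x: -x[1])[:k]]
```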

3.
[Purpose/Significance] Text similarity computation is a foundational topic in natural language processing. By summarizing and analyzing both the classic methods and the latest research results, this paper systematizes the study of text similarity computation so that these methods can be learned and mastered quickly. [Method/Process] Classic literature on text similarity computation from the past 20 years is surveyed; the basic ideas, strengths, and weaknesses of the different methods are analyzed, and the focus of each method and the latest progress in each direction are summarized. [Result/Conclusion] The methods are organized into a fairly comprehensive taxonomy covering surface text similarity and semantic similarity, within which corpus-based semantic similarity methods are the field's main research direction.

4.
To address the lack of semantics in the vector space model, the semantic dictionary HowNet (知网) is applied to text classification to improve accuracy. For polysemy in Chinese text, an improved word semantic-similarity measure is proposed: word-sense disambiguation selects the appropriate sense before computing similarity, words whose similarity exceeds a threshold are clustered, and the feature vector is thereby reduced in dimension. A semantics-based text classification algorithm is given and evaluated experimentally. The results show that the algorithm effectively improves Chinese text classification.

5.
For the classification, clustering, and retrieval of short phrase texts, a new similarity measure for Chinese phrase texts is proposed. Both the similarities it produces and the trend of similarities between one query text and multiple comparison texts are reasonable, so the measure can meet the needs of phrase-text classification, clustering, and information retrieval.

6.
陈旭毅 (Chen Xuyi). 《情报科学》 (Information Science), 2007, 25(10): 1530-1533
Automatic text classification is a very important approach to text classification; this paper examines it from the perspective of models and methods. It first gives a formal definition of automatic text classification and then proposes a process model for it, discussing each of the model's four stages in detail. Because automatic text classification is very widely applied, business data is used as a running example for convenience of exposition, and a concrete case is selected to analyze and study the visualization of classification results.

7.
马思丹 (Ma Sidan), 刘东苏 (Liu Dongsu). 《情报科学》 (Information Science), 2019, 37(11): 38-42
[Purpose/Significance] Exploiting the advantages of word embeddings, a weighted Word2vec text classification method is proposed to achieve better classification performance. [Method/Process] Word vectors are first trained on the texts; using a word-similarity threshold, the keywords of two texts are split into an overlapping part and a non-overlapping part. A weighted similarity is computed for each part, the two parts are combined by parameterized linear weighting into an overall text similarity, and KNN is used for classification. [Result/Conclusion] Experiments show that the proposed weighted Word2vec method improves on both the traditional TF-IDF classification model and the mean-Word2vec model, making it an effective text classification method.
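The overlapping/non-overlapping split and the linear weighting might look like the following sketch; the toy vectors, the threshold, and the weight `alpha` are illustrative stand-ins for trained Word2vec embeddings and the paper's tuned parameters:

```python
import math

def cos(u, v):
    """Cosine similarity of two plain-list vectors."""
    num = sum(a * b for a, b in zip(u, v))
    den = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return num / den if den else 0.0

def split_keywords(kw_a, kw_b, vectors, threshold=0.8):
    """Pair each keyword of kw_a with its most similar keyword in kw_b;
    pairs at or above the threshold form the 'overlapping' part."""
    overlap, rest = [], []
    for w in kw_a:
        best = max(kw_b, key=lambda u: cos(vectors[w], vectors[u]))
        sim = cos(vectors[w], vectors[best])
        (overlap if sim >= threshold else rest).append((w, best, sim))
    return overlap, rest

def weighted_similarity(kw_a, kw_b, vectors, threshold=0.8, alpha=0.7):
    """Linearly combine the mean similarities of the two parts."""
    overlap, rest = split_keywords(kw_a, kw_b, vectors, threshold)
    s_over = sum(s for *_, s in overlap) / len(overlap) if overlap else 0.0
    s_rest = sum(s for *_, s in rest) / len(rest) if rest else 0.0
    return alpha * s_over + (1 - alpha) * s_rest
```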

8.
Taking the multilingual scientific corpora of the National Science and Technology Library (NSTL) as the research object and an English-Chinese bilingual dictionary of science and technology as the resource, this paper proposes a method for building an English-Chinese cross-language text classification system. Experimental results verify the feasibility of cross-language classification with this method and lay the groundwork for building a practical cross-language classification system in the next stage.

9.
A Web Text Classification Method Based on an Improved VSM    Cited by: 2 (self-citations: 0, citations by others: 2)
Automatic Web text classification is one of the key technologies of Web text mining. Observing that text inside different HTML tags expresses a document's content with different strength, an improved feature-weighting scheme is proposed: a term's weight is computed from both its position in the document and its occurrence frequency. A concrete Web text classification algorithm and an evaluation method are given. Experiments show that the improved system's micro-averaged precision exceeds 0.8, clearly outperforming the original.
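A sketch of tag-dependent term weighting under assumed tag weights (the abstract does not give the actual position/frequency formula, so both the weights and the function name are illustrative):

```python
# Hypothetical tag weights: title text counts more than body text.
TAG_WEIGHTS = {"title": 5.0, "h1": 3.0, "b": 2.0, "body": 1.0}

def weighted_term_freq(tagged_tokens):
    """tagged_tokens: list of (term, tag) pairs extracted from a page.
    Each occurrence contributes its tag's weight instead of a plain 1,
    so terms in prominent positions get larger feature weights."""
    weights = {}
    for term, tag in tagged_tokens:
        weights[term] = weights.get(term, 0.0) + TAG_WEIGHTS.get(tag, 1.0)
    return weights
```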

10.
巫桂梅 (Wu Guimei). 《科技通报》 (Bulletin of Science and Technology), 2012, 28(7): 148-151
This paper studies fast and accurate text classification. The same word can carry different meanings in different contexts or when used by different people, yet such words look almost identical as classification features. Traditional methods represent text with the vector space model, which is built only from word occurrence frequencies; when polysemous words and synonyms appear, classification becomes noticeably slower and less accurate. A topic matching method based on latent semantic indexing is therefore proposed: keywords are extracted to build a document-term matrix, which is decomposed by SVD; dimensionality reduction and similarity computation are then balanced through optimization, overcoming the drawbacks of the traditional approach. Experiments show that the method greatly reduces the impact of synonyms and polysemous words, making topic-based classification accurate and efficient, with clearly improved results.
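The SVD step can be sketched with NumPy as below; the rank `k` and the plain cosine comparison of the reduced document vectors are assumptions, since the abstract does not detail its "optimized balancing" of dimensionality reduction and similarity:

```python
import numpy as np

def lsi_similarity(term_doc, k):
    """Project documents into a k-dimensional latent space via truncated
    SVD of the term-document matrix and return the matrix of pairwise
    cosine similarities between the projected document vectors."""
    u, s, vt = np.linalg.svd(term_doc, full_matrices=False)
    docs = (np.diag(s[:k]) @ vt[:k]).T      # one row per document
    norms = np.linalg.norm(docs, axis=1, keepdims=True)
    docs = docs / np.where(norms == 0, 1, norms)
    return docs @ docs.T
```

In the latent space, documents that share correlated terms (e.g. synonyms that co-occur across the corpus) move closer together, which is exactly how LSI mitigates synonymy for classification.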

11.
Digital documents such as books and journal papers have few text features, so their feature vectors express semantics poorly and classify badly. This paper proposes a digital-document classification method based on semantic feature expansion. First, the TF-IDF method selects core feature words with high TF-IDF values that represent the document well; next, the semantic dictionary HowNet and the open knowledge base Wikipedia are used to expand the core feature words into semantic concepts, building a low-dimensional, semantically rich concept vector space; finally, classifiers built with MaxEnt, SVM, and other algorithms classify the documents automatically. Experiments show that, compared with traditional feature-selection-based short-text classification, the method effectively expands short-text features semantically and improves classification performance on digital documents.

12.
Automatic text classification is the problem of automatically assigning predefined categories to free-text documents, requiring less manual labor than traditional classification methods. When binary classification is applied to multi-class text classification, the one-against-the-rest method is usually used: if a document belongs to a particular category it is treated as a positive example of that category; otherwise it is treated as a negative example, so each category ends up with a positive data set and a negative data set. This method has a problem, however: the documents of the positive set are labeled by humans, while those of the negative set are not, so the negative set probably contains a lot of noisy data. This paper proposes applying a sliding-window technique and a revised EM (Expectation-Maximization) algorithm to binary text classification to solve this problem: potentially noisy documents are extracted from the negative set with the sliding window, and actually noisy documents are then removed with the revised EM algorithm. Experiments showed that the method outperformed the original one-against-the-rest method on all data sets and with all classifiers used.
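The one-against-the-rest set construction and a simple sliding-window filter over classifier scores can be sketched as follows; the window width and threshold are illustrative, and the revised EM step is omitted:

```python
def one_against_the_rest(labeled_docs, target):
    """labeled_docs: list of (doc, category) pairs. Returns the
    (positives, negatives) sets for a binary classifier of `target`.
    The negative set is simply everything else, so mislabeled or
    ambiguous documents end up in it unchecked."""
    pos = [d for d, c in labeled_docs if c == target]
    neg = [d for d, c in labeled_docs if c != target]
    return pos, neg

def suspicious_windows(neg_scores, width=3, threshold=0.5):
    """neg_scores: positive-class probabilities of the negative-set
    documents, sorted descending. Windows whose mean score exceeds the
    threshold are flagged as potentially noisy (candidates for removal)."""
    flagged = []
    for i in range(len(neg_scores) - width + 1):
        window = neg_scores[i:i + width]
        if sum(window) / width > threshold:
            flagged.append((i, i + width))
    return flagged
```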

13.
Automatic text classification is the task of organizing documents into predetermined classes, generally using machine learning algorithms. It is one of the most important ways to organize and exploit the gigantic amounts of information that exist in unstructured textual form, and it is a widely studied research area of language processing and text mining. In traditional text classification a document is represented as a bag of words, where terms are cut off from their finer context, i.e. their location in a sentence or document and their relations with neighboring words; only the broader document context is used, with some form of term-frequency information in the vector space. Consequently, the semantics that could be inferred from that finer context are usually ignored. Yet the meaning of words and the semantic connections between words, documents, and even classes clearly matter, since methods that capture semantics generally reach better classification performance. Several surveys have analyzed approaches to traditional text classification, and most cover the application of semantic term-relatedness methods to some degree, but they do not specifically target semantic text classification algorithms and their advantages over traditional text classification. To fill this gap, we undertake a comprehensive discussion of semantic versus traditional text classification. This survey explores past and recent advances in semantic text classification and organizes existing approaches into five fundamental categories: domain-knowledge-based, corpus-based, deep-learning-based, word/character-sequence-enhanced, and linguistically enriched approaches. Furthermore, it highlights the advantages of semantic text classification algorithms over traditional ones.

14.
赵宁 (Zhao Ning). 《现代情报》 (Journal of Modern Information), 2009, 29(6): 21-24
This paper describes the matter-element model and dependent function of a knowledge set, and analyzes the superiority evaluation method from three aspects: its basic concept, its qualified degree and normalized qualified degree, and its superiority value, together with the method's application to the evaluation of scientific journals.

15.
Lexical cohesion is a property of text achieved through lexical-semantic relations between the words in it. Most information retrieval systems make only limited use of such relations. In this paper we empirically investigate whether the degree of lexical cohesion between the contexts in which query terms occur in a document is related to the document's relevance to the query. Lexical cohesion between distinct query terms in a document is estimated from the lexical-semantic relations (repetition, synonymy, hyponymy, and sibling) that hold between their collocates, i.e. the words that co-occur with them in the same windows of text. Experiments suggest that significant differences exist between the lexical cohesion of relevant and non-relevant document sets. A document ranking method based on lexical cohesion shows some performance improvement.
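Collecting each query term's collocates from fixed-size text windows, the first step of the cohesion estimate described above, might look like this sketch (the window size and function name are illustrative):

```python
def collocates(tokens, query_terms, window=3):
    """For each query term, collect the words that co-occur with it
    inside a fixed-size window around each of its occurrences."""
    out = {q: set() for q in query_terms}
    for i, tok in enumerate(tokens):
        if tok in out:
            lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
            out[tok].update(t for t in tokens[lo:hi] if t != tok)
    return out
```

The lexical-semantic relations (repetition, synonymy, hyponymy, sibling) would then be counted between the collocate sets of distinct query terms.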

16.
A new dictionary-based text categorization approach is proposed to classify chemical web pages efficiently. Using a chemistry dictionary, the approach can extract chemistry-related information from web pages more exactly. After automatic segmentation of the documents to find dictionary terms for document expansion, the approach adopts latent semantic indexing (LSI) to produce the final document vectors, and the relevant categories are assigned to each test document by the k-NN text categorization algorithm. The effects of the characteristics of the chemistry dictionary and the test collection on categorization efficiency are discussed, and a new voting method is introduced to further improve categorization performance based on the collection characteristics. Experimental results show that the proposed approach outperforms the traditional categorization method and is applicable to the classification of chemical web pages.

17.
朱学芳 (Zhu Xuefang), 冯曦曦 (Feng Xixi). 《情报科学》 (Information Science), 2012, (7): 1012-1015
Based on a study of the HTML structure and features of agricultural Web pages, this paper describes experiments on content-based information extraction and classification of such pages. The DOM structure is used to extract and preprocess page information; category attributes are computed automatically from the text content to obtain feature terms, and new documents are classified automatically by generalizing the features of sample documents. Experiments show that the extraction has low time complexity and high precision, improving classification accuracy.

18.
In automatic text classification, probabilities are currently estimated either from word frequency or from document frequency, and the choice of estimate directly affects the quality of feature extraction and the accuracy of classification. This paper implements a Chinese text classifier with the K-nearest-neighbor algorithm and runs training and classification experiments on both balanced and unbalanced Chinese corpora. The results indicate that word-frequency-based estimation suits unbalanced corpora while document-frequency-based estimation suits balanced corpora; choosing accordingly extracts high-quality text features and improves classification accuracy.
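The two probability estimates compared in the abstract can be sketched as (function names are illustrative; inputs are tokenized documents):

```python
def term_prob_tf(docs_tokens, term):
    """Word-frequency estimate: occurrences of the term across the
    corpus divided by the total number of tokens."""
    total = sum(len(d) for d in docs_tokens)
    count = sum(d.count(term) for d in docs_tokens)
    return count / total

def term_prob_df(docs_tokens, term):
    """Document-frequency estimate: number of documents containing
    the term divided by the total number of documents."""
    return sum(1 for d in docs_tokens if term in d) / len(docs_tokens)
```

The two estimates diverge exactly when a term is concentrated in few documents, which is why corpus balance matters for the choice between them.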

19.
程雅倩 (Cheng Yaqian), 黄玮 (Huang Wei), 金晓祥 (Jin Xiaoxiang), 贾佳 (Jia Jia). 《情报科学》 (Information Science), 2022, 39(2): 155-161
[Purpose/Significance] Multi-label texts on we-media platforms are high-dimensional and imbalanced, which degrades classification; studying multi-label text classification for university-library we-media platforms in a 5G environment is therefore of real significance. [Method/Process] The collected multi-label texts are first preprocessed, including removal of meaningless data, word segmentation, and stop-word removal; an improved principal component analysis then reduces the dimensionality of the multi-label texts, and a vector space model balances them; finally, classifiers built with the Adaboost and SVM algorithms perform the multi-label classification. [Result/Conclusion] Experiments show that the proposed method lowers the Hamming loss, raises the F1 value, classifies multi-label text well, and runs quickly, demonstrating its reliability. [Innovation/Limitation] Because the data set used in this study is not large, the test and validation results are somewhat limited; future work will use richer databases to further improve and extend the proposed method.

20.
This paper presents a classifier for text data samples consisting of main text and additional components, such as Web pages and technical papers. We focus on multiclass, single-labeled text classification problems and design the classifier as a hybrid of probabilistic generative and discriminative approaches. Our formulation considers individual component generative models and constructs the classifier by combining these trained models under the maximum entropy principle. We use naive Bayes models as the component generative models for the main text and for additional components such as titles, links, and authors, so the formulation applies to document and Web page classification problems. Experimental results on four test collections confirm that the hybrid approach effectively combines main text and additional components and thus improves classification performance.
