Similar Documents
Found 20 similar documents (search time: 453 ms)
1.
Clustering English–Chinese bilingual texts is a valuable line of research. Using a monolingual text clustering algorithm on an English–Chinese bilingual news corpus, this work compares clustering based on Chinese-only text, English-only text, and mixed English–Chinese text. Experimental results show that the mixed English–Chinese approach achieves the best clustering results.

2.
李盼池 《情报杂志》2003,22(4):54-55
To address fuzzy information queries in knowledge discovery, a design method for fuzzy concept clustering and pattern association based on feedback networks is proposed. The concept set to be queried is first quantitatively encoded according to the classification requirements, and the encoded data are then normalized. Concept clustering uses the FP clustering algorithm of a multi-layer feedback neural network, while concept association is implemented with the ellipsoid learning algorithm of a self-feedback neural network. A fuzzy information query system built on these algorithms was applied to book information retrieval; experimental results demonstrate the effectiveness of the method.

3.
Automatic text classification is a fundamental task in text information processing. This work applies case-based reasoning (CBR) to text classification: topic words and frequent co-occurrence itemsets are extracted from texts using word co-occurrence information, and a clustering algorithm is used to index the case base, yielding a CBR-based automatic text classification system. Experiments show that, compared with TFIDF-based text representation and nearest-neighbor classification, the co-occurrence-based representation and the clustered case-base index effectively improve classification accuracy and efficiency, thereby broadening the application domain of case-based reasoning.

4.
This paper applies data mining algorithms to an intelligent question-answering system, proposing and then refining an answering scheme built on them. Traditional K-means clustering is fast and easy to implement for text, but it depends equally on all variables, so its clustering quality is often unsatisfactory. To overcome this, an improved K-means text clustering algorithm is proposed: during clustering, each keyword in a cluster is automatically assigned a weight, with important keywords receiving larger weights. Experiments yielded an improved algorithm, based on automatic subspace variable weighting, that is suited to text clustering: it not only clusters large-scale, high-dimensional, sparse text data effectively but also produces high-quality clusters. The experimental results show that the subspace-weighted K-means algorithm is an effective method for clustering large text collections.
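The subspace-weighting idea can be sketched roughly as follows. This is a minimal toy version, not the paper's algorithm: the weighting rule (a term's weight grows with its share of the cluster centroid) and the deterministic seeding from the first k documents are assumptions made purely for illustration.

```python
import math

def weighted_kmeans(docs, k, n_iter=10):
    """Toy feature-weighted K-means over {term: tf} dicts.

    After each update step, every cluster recomputes per-term weights from
    its own centroid, so terms that dominate a cluster pull harder in the
    next assignment -- a crude stand-in for automatic keyword weighting.
    """
    centers = [dict(docs[i]) for i in range(k)]   # first k docs seed the centers
    weights = [dict() for _ in range(k)]          # per-cluster term weights

    def dist(doc, center, w):
        terms = set(doc) | set(center)
        return math.sqrt(sum(w.get(t, 1.0) * (doc.get(t, 0.0) - center.get(t, 0.0)) ** 2
                             for t in terms))

    labels = [0] * len(docs)
    for _ in range(n_iter):
        labels = [min(range(k), key=lambda j: dist(d, centers[j], weights[j]))
                  for d in docs]
        for j in range(k):
            members = [d for d, lab in zip(docs, labels) if lab == j]
            if not members:
                continue                          # keep old center for empty clusters
            terms = set().union(*members)
            centers[j] = {t: sum(m.get(t, 0.0) for m in members) / len(members)
                          for t in terms}
            total = sum(centers[j].values()) or 1.0
            # terms that dominate this centroid get weight > 1
            weights[j] = {t: 1.0 + centers[j][t] / total for t in terms}
    return labels
```

On two obviously separated topic groups this recovers the expected partition; the real scheme in the paper would learn the weights from the optimization itself.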

5.
A Text Clustering Method Based on the 《现代汉语语义分类词典》 (Modern Chinese Semantic Classification Dictionary)   Cited by: 1 (self-citations: 0, other citations: 1)
An efficient semantic-concept-based method for clustering Chinese texts is presented. Starting from the texts themselves, the method uses the category headwords of the 《现代汉语语义分类词典》 to extract concept tuples from the high-dimensional text vectors, forming high-level concepts that represent the clustering result; samples are then partitioned according to these high-level concepts, completing the clustering process. Experimental results show that the algorithm produces good clusters with high efficiency.

6.
This paper introduces the clustering process and the classification of cluster validity indices, reviews the clustering algorithms used in common scientometric software, analyzes their characteristics, evaluates clustering validity using intra-cluster compactness and inter-cluster separation, summarizes the performance of each algorithm, and presents case analyses of the corresponding software results.

7.
To address fuzzy information queries in knowledge discovery, a design method for fuzzy concept clustering and pattern association based on feedback networks is proposed. The concept set to be queried is quantitatively encoded according to the classification requirements, and the encoded data are normalized. Concept clustering uses the FP clustering algorithm of a multi-layer feedback neural network, while concept association is implemented with the ellipsoid learning algorithm of a self-feedback neural network. A fuzzy information query system built on these algorithms was applied to book information retrieval; experimental results demonstrate the effectiveness of the method.

8.
To improve the training efficiency of fuzzy support vector machines (FSVM) on intrusion detection datasets, a clustering-based FSVM intrusion detection algorithm is proposed. The method prunes the training data, effectively reducing the number of cluster-edge points far from the separating surface while retaining more samples near it; the clusters whose centers lie close to the decision boundary form the effective training set for the FSVM, which shortens training time and improves efficiency. Experimental results show that the method speeds up FSVM training and is highly effective for intrusion detection.

9.
左晓飞  刘怀亮  范云杰  赵辉 《情报杂志》2012,31(5):180-184,191
Traditional keyword-based text clustering algorithms struggle to exploit the semantic features of text, so their results are often unsatisfactory. The authors propose the notion of a concept semantic field and give an algorithm for constructing one from HowNet (知网): first, a sememe masking layer is built with HowNet to mask sememes with weak descriptive power; then, based on an analysis of HowNet's structure, rules for extracting related concepts and construction methods for simple and complex concept semantic fields are given. Finally, a text clustering algorithm based on concept semantic fields is presented. The algorithm makes full use of the semantic relations among feature words and also handles irregularly shaped clusters well. Experiments show that it effectively improves clustering quality.

10.
李法运  农罗锋 《情报科学》2013,(2):34-37,44
To address the shortcomings of the traditional K-Means algorithm and its limitations for text clustering, an improved K-Means algorithm based on the semantic similarity of web page vectors is proposed. The new algorithm determines the initial cluster centers automatically by computing vector semantic similarity, and during clustering only pages whose semantic similarity reaches a threshold are clustered with K-Means. Experiments show that the new algorithm overcomes the traditional K-Means problems of randomly chosen initial centers and inability to handle semantic information, improving clustering quality.
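The similarity-driven seeding described in that abstract might look like the greedy sketch below. The cosine measure over term-frequency dicts and the acceptance threshold are stand-ins, since the paper's actual semantic similarity computation is not reproduced here.

```python
import math

def cosine(a, b):
    """Cosine similarity between two {term: tf} dicts."""
    dot = sum(a.get(t, 0.0) * b.get(t, 0.0) for t in set(a) | set(b))
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def pick_initial_centers(docs, k, threshold=0.3):
    """Greedy seeding: accept a document as a new center only if its
    similarity to every already-chosen center stays below `threshold`,
    so the k seeds start out mutually dissimilar."""
    centers = [docs[0]]
    for d in docs[1:]:
        if len(centers) == k:
            break
        if all(cosine(d, c) < threshold for c in centers):
            centers.append(d)
    return centers
```

The seeds can then be handed to any standard K-Means loop in place of random initialization.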

11.
In information retrieval, cluster-based retrieval is a well-known attempt at resolving the problem of term mismatch. Clustering requires similarity information between the documents, which is difficult to calculate in feasible time. The adaptive document clustering scheme has been investigated by researchers to resolve this problem; however, its theoretical underpinnings have not been fully explored. In this regard, we provide a conceptual view of adaptive document clustering based on query-based similarities, by regarding the user's query as a concept. As a result, the adaptive document clustering scheme can be viewed as an approximation of this similarity. Based on this idea, we derive three new query-based similarity measures in the language modeling framework and evaluate them in the context of cluster-based retrieval, comparing with K-means clustering and full document expansion. Evaluation results show that retrievals based on query-based similarities significantly improve the baseline while being comparable to the other methods. This implies that the newly developed query-based similarities are feasible criteria for adaptive document clustering.

12.
As text documents are explosively increasing on the Internet, hierarchical document clustering has proven useful for grouping similar documents in versatile applications. However, most document clustering methods still struggle with high dimensionality, scalability, accuracy, and meaningful cluster labels. In this paper, we present an effective Fuzzy Frequent Itemset-Based Hierarchical Clustering (F2IHC) approach, which uses a fuzzy association rule mining algorithm to improve the clustering accuracy of the Frequent Itemset-Based Hierarchical Clustering (FIHC) method. In our approach, key terms are extracted from the document set, and each document is pre-processed into the designated representation for the subsequent mining process. Then, a fuzzy association rule mining algorithm for text is employed to discover a set of highly related fuzzy frequent itemsets, whose key terms are regarded as the labels of the candidate clusters. Finally, the documents are clustered into a hierarchical cluster tree by reference to these candidate clusters. We have conducted experiments to evaluate performance on the Classic4, Hitech, Re0, Reuters, and Wap datasets. The experimental results show that our approach not only retains the merits of FIHC, but also improves its clustering accuracy.
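In FIHC-style methods, candidate cluster labels come from frequent term sets. A minimal crisp (non-fuzzy) miner over document term sets, purely illustrative and far simpler than the fuzzy association rule mining the paper uses:

```python
from itertools import combinations

def frequent_term_sets(docs, min_support=0.5, max_size=2):
    """Toy frequent-itemset miner over document term sets.

    docs: list of sets of terms.  Itemsets whose fraction of containing
    documents reaches min_support are returned with their support -- in
    FIHC-style clustering these serve as candidate cluster labels.
    """
    n = len(docs)
    vocab = sorted(set().union(*docs))
    result = []
    for size in range(1, max_size + 1):
        for combo in combinations(vocab, size):
            support = sum(1 for d in docs if set(combo) <= d) / n
            if support >= min_support:
                result.append((combo, support))
    return result
```

A real implementation would prune candidates Apriori-style instead of enumerating every combination, and F2IHC additionally replaces crisp term membership with fuzzy membership degrees.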

13.
Stemming and lemmatization are important steps in English text processing. This paper evaluates two stemming algorithms and one lemmatization algorithm fairly comprehensively using three clustering algorithms. The results show that both stemming and lemmatization can improve the effectiveness and efficiency of English text clustering, but their influence on the clustering results is not significant. Compared with the Snowball Stemmer and the Stanford Lemmatizer, the Porter Stemmer performs better and more stably on Entropy and Purity.
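To illustrate why stemming matters for clustering at all, here is a deliberately crude suffix stripper (not Porter or Snowball; the suffix list is an arbitrary assumption) showing how stemming collapses the vocabulary a clusterer sees:

```python
def crude_stem(word, suffixes=("ization", "ation", "ness", "ing", "ed", "es", "s")):
    """Very crude suffix stripper -- a stand-in for a real stemmer, kept
    only to show the vocabulary-shrinking effect before clustering."""
    for suf in suffixes:
        # strip the first matching suffix, but never below a 3-letter stem
        if word.endswith(suf) and len(word) - len(suf) >= 3:
            return word[:-len(suf)]
    return word

docs = [["clusters", "clustering", "clustered"], ["cluster", "clusterings"]]
vocab_raw = {w for d in docs for w in d}
vocab_stemmed = {crude_stem(w) for d in docs for w in d}
```

Five surface forms collapse to two stems, so documents about the same topic share more features; a proper Porter/Snowball stemmer does this far more carefully (which is what the paper actually compares).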

14.
This paper presents a cluster validation based document clustering algorithm, which is capable of identifying an important feature subset and the intrinsic value of the model order (cluster number). The important feature subset is selected by optimizing a cluster validity criterion subject to some constraint. To achieve model order identification, this feature selection procedure is conducted for each possible value of the cluster number. The feature subset and cluster number that maximize the cluster validity criterion are chosen as the answer. We have evaluated our algorithm on several datasets from the 20Newsgroup corpus. Experimental results show that our algorithm can find the important feature subset, estimate the cluster number, and achieve higher micro-averaged precision than previous document clustering algorithms that require the cluster number to be provided.
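A toy version of scoring candidate partitions with a cluster validity criterion: the ratio used below (mean between-center distance over mean within-cluster spread) is an assumed stand-in, not the paper's criterion, but it shows how a validity score can compare model orders or partitions.

```python
import math

def validity(points, labels):
    """Toy cluster validity score: mean between-center distance divided
    by mean within-cluster spread (higher is better).  Assumes the
    labeling uses at least two clusters."""
    clusters = {}
    for p, lab in zip(points, labels):
        clusters.setdefault(lab, []).append(p)
    # centroid of each cluster, coordinate-wise mean
    centers = {lab: tuple(sum(x) / len(ps) for x in zip(*ps))
               for lab, ps in clusters.items()}
    within = sum(math.dist(p, centers[lab])
                 for p, lab in zip(points, labels)) / len(points)
    cs = list(centers.values())
    pairs = [(a, b) for i, a in enumerate(cs) for b in cs[i + 1:]]
    between = sum(math.dist(a, b) for a, b in pairs) / len(pairs)
    return between / (within + 1e-9)
```

In the paper's setting one would evaluate such a criterion for every candidate cluster number (and feature subset) and keep the maximizer.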

15.
Most document clustering algorithms operate in a high dimensional bag-of-words space. The inherent presence of noise in such representation obviously degrades the performance of most of these approaches. In this paper we investigate an unsupervised dimensionality reduction technique for document clustering. This technique is based upon the assumption that terms co-occurring in the same context with the same frequencies are semantically related. On the basis of this assumption we first find term clusters using a classification version of the EM algorithm. Documents are then represented in the space of these term clusters and a multinomial mixture model (MM) is used to build document clusters. We empirically show on four document collections, Reuters-21578, Reuters RCV2-French, 20Newsgroups and WebKB, that this new text representation noticeably increases the performance of the MM model. By relating the proposed approach to the Probabilistic Latent Semantic Analysis (PLSA) model we further propose an extension of the latter in which an extra latent variable allows the model to co-cluster documents and terms simultaneously. We show on these four datasets that the proposed extended version of the PLSA model produces statistically significant improvements with respect to two clustering measures over all variants of the original PLSA and the MM models.

16.
任燕 《科技通报》2012,28(4):206-208
This paper studies image segmentation by means clustering. To address the low segmentation accuracy of traditional clustering-based image segmentation algorithms, a new fast image segmentation method based on fuzzy-controlled C-means clustering is proposed, and images are segmented with a fast fuzzy C-means clustering algorithm. Experimental results show clear segmentation edges, with segmentation quality clearly superior to traditional clustering-based image segmentation algorithms.
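The fuzzy C-means update at the heart of such segmentation methods can be sketched on 1-D grayscale intensities. This toy omits everything the paper adds (the fuzzy control and any speed-up); the min/max seeding for two clusters is an assumption.

```python
def fuzzy_cmeans_1d(values, c=2, m=2.0, n_iter=30):
    """Minimal fuzzy C-means on grayscale intensities.

    values: list of pixel intensities; c: cluster count; m: fuzzifier.
    Returns (centers, memberships).  No spatial information is used.
    """
    centers = [min(values), max(values)] if c == 2 else list(values[:c])
    for _ in range(n_iter):
        # membership update: u_i = 1 / sum_k (d_i / d_k)^(2/(m-1))
        u = []
        for v in values:
            row = []
            for ci in centers:
                d = abs(v - ci) or 1e-9
                row.append(1.0 / sum((d / (abs(v - ck) or 1e-9)) ** (2 / (m - 1))
                                     for ck in centers))
            u.append(row)
        # center update: mean of intensities weighted by u^m
        centers = [sum((u[j][i] ** m) * values[j] for j in range(len(values))) /
                   sum(u[j][i] ** m for j in range(len(values)))
                   for i in range(len(centers))]
    return centers, u
```

Segmentation then assigns each pixel to its highest-membership cluster; on an image, `values` would be the flattened intensity array.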

17.
We consider a challenging clustering task: the clustering of multi-word terms without document co-occurrence information in order to form coherent groups of topics. For this task, we developed a methodology taking as input multi-word terms and lexico-syntactic relations between them. Our clustering algorithm, named CPCL, is implemented in the TermWatch system. We compared CPCL to other existing clustering algorithms, namely hierarchical and partitioning (k-means, k-medoids). This out-of-context clustering task led us to adapt the multi-word term representation for statistical methods and also to refine an existing cluster evaluation metric, the editing distance, in order to evaluate the methods. Evaluation was carried out on a list of multi-word terms from the genomic field which comes with a hand-built taxonomy. Results showed that while k-means and k-medoids obtained good scores on the editing distance, they were very sensitive to term length. CPCL, on the other hand, obtained a better cluster homogeneity score and was less sensitive to term length. CPCL also showed good adaptability for handling very large and sparse matrices.

18.
The exponential growth of information available on the World Wide Web, and retrievable by search engines, has made it necessary to develop efficient and effective methods for organizing relevant contents. In this field, document clustering plays an important role and remains an interesting and challenging problem in web computing. In this paper we present a document clustering method which takes into account both the content information and the hyperlink structure of a web page collection, where a document is viewed as a set of semantic units. We exploit this representation to determine the strength of the relation between two linked pages and to define a relational clustering algorithm based on a probabilistic graph representation. The experimental results show that the proposed approach, called RED-clustering, outperforms two of the best-known clustering algorithms, k-Means and Expectation Maximization.

19.
聂珍  王华秋 《现代情报》2012,32(7):112-116,121
This paper takes three necessary measures to improve clustering quality. Since the feature attributes of each data dimension affect clustering differently, dimension weighting based on statistical methods is used for feature selection; the pitch-adjustment probability of the harmony search algorithm is improved, and the improved harmony search is combined with fuzzy clustering to quickly find optimal cluster centers; and clustering quality is tested in a loop over different numbers of centers to obtain the best cluster count. The algorithm is then applied to modeling library readers' interests, identifying the types of books each reader borrows during daily library operation; experiments show it outperforms other algorithms. Such reader-interest clustering can support book recommendation and thus improve the efficiency of library operations.

20.
刘高勇  汪会玲 《情报科学》2007,25(6):929-931,937
A self-organizing map (SOM) can be used for text clustering, and on that basis the index terms can be clustered as well, yielding both a text cluster map and an index-term cluster map. With these two maps, ordinary texts can be self-organized into hypertext: hyperlinks are added at certain knowledge points in a plain text, linking them to related Web documents.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号