Similar Articles
20 similar articles retrieved (search time: 140 ms)
1.
Clustering English–Chinese bilingual text is a valuable line of research. Using monolingual text clustering algorithms on an English–Chinese bilingual news corpus, this paper compares clustering based on Chinese-only text, English-only text, and mixed English–Chinese text. Experimental results show that the mixed English–Chinese approach achieves the best clustering results of the three.

2.
Visualization of English Text Based on High-Frequency Words   (total citations: 1; self: 0; other: 1)
To explore how closely high-frequency words are related through their contexts, this paper develops and implements a visualization pipeline for high-frequency words in English text. We first use statistical methods to extract high-frequency words and the contexts they share, then define three types of connections between words and compute a relation degree for every contextually related word pair. The relation degrees are clustered with the k-means algorithm to reflect how closely words are related, and the clustering result is visualized with a radial tree layout. This visualization allows the content of an English text to be grasped quickly.
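The first two steps the abstract describes (extracting high-frequency words, then counting shared context as a crude "relation degree") can be sketched in Python. This is a minimal illustration only: the whitespace tokenization, window size, and pair-counting scheme are my own simplifications, not the paper's actual method.

```python
from collections import Counter

def high_freq_words(text, top_n=5):
    """Return the top_n most frequent words (naive whitespace tokenization)."""
    words = text.lower().split()
    return [w for w, _ in Counter(words).most_common(top_n)]

def cooccurrence(text, vocab, window=2):
    """Count how often pairs of vocabulary words occur within `window` tokens
    of each other; the counts stand in for a 'relation degree'."""
    words = text.lower().split()
    pairs = Counter()
    for i, w in enumerate(words):
        if w not in vocab:
            continue
        for j in range(i + 1, min(i + 1 + window, len(words))):
            if words[j] in vocab and words[j] != w:
                pairs[tuple(sorted((w, words[j])))] += 1
    return pairs
```

The resulting pair counts could then feed any clustering step (the paper uses k-means) before layout.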

3.
A Clustering-Based Task Scheduling Algorithm for Cloud Computing   (total citations: 1; self: 0; other: 1)
Task scheduling is a key problem in cloud computing. To address the load imbalance of the Min-Min algorithm, this paper introduces K-means clustering and proposes a new cloud task scheduling algorithm that combines K-means with Min-Min. The algorithm first preprocesses tasks by clustering them by task length with K-means, and then schedules them following the Min-Min mechanism. Simulation results show that the algorithm achieves better load balancing and system performance.
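The two-stage idea above (group tasks by length, then assign greedily) can be sketched as follows. This is a simplified illustration assuming identical machines and 1-D task lengths, not the paper's implementation; with identical machines the Min-Min step reduces to shortest-task-first onto the least-loaded machine.

```python
import random

def kmeans_1d(values, k, iters=20, seed=0):
    """Tiny 1-D k-means: groups task lengths into k clusters."""
    random.seed(seed)
    centers = random.sample(values, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in values:
            clusters[min(range(k), key=lambda i: abs(v - centers[i]))].append(v)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return clusters

def min_min_schedule(tasks, n_machines):
    """Min-Min style greedy assignment: repeatedly give the shortest
    remaining task to the machine that would finish it earliest."""
    loads = [0.0] * n_machines
    plan = []
    for t in sorted(tasks):  # shortest task first
        m = min(range(n_machines), key=lambda i: loads[i] + t)
        loads[m] += t
        plan.append((t, m))
    return plan, loads
```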

4.
Most existing cluster-ensemble algorithms fail to balance the diversity and the quality of ensemble members in their selection strategies, and they perform poorly on high-dimensional data. To address these problems, this paper proposes an improved projective clustering ensemble algorithm. Building on classic projective clustering, it combines projection with fractal dimension so that high-dimensional datasets can be reduced in dimensionality before clustering. The algorithm also selects an optimal reference member and uses a principled selection strategy to pick a subset of high-quality members, yielding a more accurate final result. Simulation experiments on high-dimensional data show that, compared with classic clustering-ensemble algorithms, the improved algorithm increases clustering validity and greatly improves ensemble performance.

5.
梅梦 《科技广场》2007,(11):26-27
This paper analyzes the basic theory of clustering algorithms in cluster analysis in detail and, on that basis, proposes a general algorithmic framework for clustering algorithms.

6.
This paper introduces the general process of clustering algorithms and a classification of cluster validity indices, reviews several clustering algorithms used in common scientometrics software, analyzes their characteristics, and assesses the validity of clustering results using intra-cluster compactness and inter-cluster separation. It summarizes the effectiveness of each algorithm and presents case studies based on the output of the corresponding software.

7.
This paper applies data-mining algorithms to an intelligent question-answering system, proposing and refining an answering scheme based on data mining. The traditional K-means algorithm clusters quickly and is easy to implement for text, but because it depends equally on all variables, its clustering quality is often unsatisfactory. To overcome this drawback, an improved K-means text clustering algorithm is proposed: during clustering, a weight is automatically computed and assigned to every keyword in each cluster, with important keywords receiving larger weights. Experiments yield an improved algorithm for text clustering based on automatic subspace variable weighting; it clusters large-scale, high-dimensional, sparse text data effectively and produces high-quality clusters. The results show that the subspace-weighted K-means text clustering algorithm is an effective algorithm for large-scale text data.
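The abstract does not specify how the per-cluster keyword weights are computed, so here is one common automatic-weighting scheme (inverse within-cluster dispersion, normalized per cluster) as a hedged sketch: features that vary little inside a cluster get large weights and dominate the weighted distance.

```python
def weighted_dist(x, c, w):
    """Squared Euclidean distance with per-feature weights."""
    return sum(wi * (xi - ci) ** 2 for xi, ci, wi in zip(x, c, w))

def update_weights(cluster, center, eps=1e-6):
    """Give features with low within-cluster spread a larger weight
    (inverse-dispersion weighting, normalized to sum to 1)."""
    dims = len(center)
    disp = [sum((x[j] - center[j]) ** 2 for x in cluster) + eps
            for j in range(dims)]
    inv = [1.0 / d for d in disp]
    s = sum(inv)
    return [v / s for v in inv]
```

In a full algorithm these two functions alternate with the usual assignment and center-update steps of K-means.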

8.
K-means is a partition-based clustering algorithm. After analyzing the traditional K-means algorithm, this paper proposes an improved K-means algorithm and analyzes its time and space complexity. When computing cluster centers, the algorithm uses a nearest-neighbor idea that effectively removes the influence of "noise" and "outliers" on the cluster mean (the cluster center), making the clustering result more reasonable. Experiments confirm the effectiveness and correctness of the algorithm.
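One way to realize the nearest-neighbor idea above is to recenter each cluster on the mean of only the m points closest to the plain mean, so that distant noise points stop dragging the centroid. The sketch below is an illustration of that idea, not the paper's exact formulation.

```python
def robust_center(points, m):
    """Center a cluster on the mean of the m points closest to the plain
    mean, damping the influence of outliers and noise points."""
    dims = len(points[0])
    mean = [sum(p[j] for p in points) / len(points) for j in range(dims)]
    nearest = sorted(
        points,
        key=lambda p: sum((p[j] - mean[j]) ** 2 for j in range(dims)),
    )[:m]
    return [sum(p[j] for p in nearest) / m for j in range(dims)]
```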

9.
To cluster the data objects that remain after boundary extraction, this paper proposes an algorithm that clusters outward from image edges. The algorithm first classifies the known boundary objects with a depth-first search strategy and computes the minimum bounding rectangle of each boundary curve; it then removes inner-boundary classes using an angle-sum method; finally, it assigns each core object to a class by the nearest-neighbor principle. Experimental results show that, on noisy datasets of uniform density, the algorithm can recognize clusters of arbitrary shape with good clustering quality and time performance.

10.
This paper studies how to cluster data efficiently and improve clustering capability. The traditional fuzzy C-means (FCM) algorithm is highly sensitive to initialization and noise, and genetic algorithms tend to converge at local optima. Building on FCM, an improved optimization clustering algorithm is proposed. It exploits the uniform ergodicity of chaotic sequences and the strong global search ability of differential evolution: a Logistic chaotic map drives the optimization search, and chaotic perturbations are injected into the evolving population to compensate for the weaknesses of FCM. Using the improved Logistic-map perturbation search, and taking target recognition as a case study with four classes of target feature parameters, a practical target-recognition expert system was developed. Simulation experiments show that the improved clustering algorithm delivers superior clustering performance with a clear gain in classification accuracy, and that the expert system classifies target features accurately and reliably, giving it practical value.
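The chaotic ingredient the abstract relies on is easy to show concretely. With r = 4 the Logistic map is chaotic and its iterates cover (0, 1) ergodically, which is what makes it useful as a perturbation source. The `perturb` helper is my own illustrative stand-in for how such values might nudge a cluster center; the paper's actual perturbation scheme is not specified here.

```python
def logistic_sequence(x0, n, r=4.0):
    """Generate n iterates of the Logistic map x_{k+1} = r*x_k*(1-x_k).
    With r=4 and x0 in (0,1) the sequence is chaotic."""
    xs, x = [], x0
    for _ in range(n):
        x = r * x * (1.0 - x)
        xs.append(x)
    return xs

def perturb(center, chaos, scale=0.1):
    """Add a chaos-driven perturbation to a cluster center, mapping
    chaos values from (0,1) into (-scale, +scale)."""
    return [c + scale * (2.0 * z - 1.0) for c, z in zip(center, chaos)]
```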

11.
This paper presents an algorithm for generating stemmers from text stemmer specification files. A small study shows that the generated stemmers are computationally efficient, often running faster than stemmers custom written to implement particular stemming algorithms. The stemmer specification files are easily written and modified by non-programmers, making it much easier to create a stemmer, or tune a stemmer's performance, than would be the case with a custom stemmer program. Stemmer generation is thus also human-resource efficient.

12.
Arabic is a morphologically rich language that presents significant challenges to many natural language processing applications because a word often conveys complex meanings decomposable into several morphemes (i.e. prefix, stem, suffix). By segmenting words into morphemes, we could improve the extraction of English/Arabic translation pairs from parallel texts. This paper describes two algorithms, and their combination, that automatically extract an English/Arabic bilingual dictionary from parallel texts in the Internet archive, using an Arabic light stemmer as a preprocessing step. Before applying the Arabic light stemmer, total system precision and recall were 88.6% and 81.5% respectively; after applying the light stemmer to the Arabic documents, precision and recall increased to 91.6% and 82.6% respectively.

13.
Quality Evaluation of Text Clustering Algorithms   (total citations: 4; self: 0; other: 4)
Text clustering is an effective way to build instances of a classification scheme over large text collections. This paper discusses quantitative evaluation of clustering quality using a standard labeled test collection, and experimentally compares the k-means algorithm, suffix tree clustering (STC), and an ant-based clustering algorithm. Analysis of the results shows that STC clusters well because it fully exploits the phrase structure of text; the ant-based algorithm is sensitive to its input parameters; and incorporating text-specific features into the ant algorithm improves the quality of its clustering results.
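The abstract does not name its quality measures, but purity is one standard way to score a clustering against a labeled test collection, so here is a minimal sketch of it as an illustrative stand-in: each cluster votes for its majority gold label, and purity is the fraction of documents matching their cluster's vote.

```python
from collections import Counter

def purity(clusters, labels):
    """Purity of a clustering (lists of document indices) against gold
    labels: the fraction of documents in their cluster's majority class."""
    total = sum(len(c) for c in clusters)
    correct = 0
    for c in clusters:
        if c:
            correct += Counter(labels[i] for i in c).most_common(1)[0][1]
    return correct / total
```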

14.
A suite of computer programs has been developed for representing the full text of lengthy documents in vector form and classifying them by a clustering method. The programs have been applied to the full text of the Conventions and Agreements of the Council of Europe which consist of some 280,000 words in the English version and a similar number in the French. Results of the clustering experiments are presented in the form of dendrograms (tree diagrams) using both the treaty and article as the clustering unit. The conclusion is that vector techniques based on the full text provide an effective method of classifying legal documents.

15.
Multi-label text categorization refers to the problem of assigning each document to a subset of categories by means of multi-label learning algorithms. Unlike English and most other languages, Arabic lacks benchmark datasets, which prevents evaluating multi-label learning algorithms for Arabic text categorization. As a result, only a few recent studies have dealt with multi-label Arabic text categorization, and on non-benchmark, inaccessible datasets. This work therefore aims to promote multi-label Arabic text categorization by (a) introducing "RTAnews", a new benchmark dataset of multi-label Arabic news articles for text categorization and other supervised learning tasks, publicly available in several formats compatible with existing multi-label learning tools such as MEKA and Mulan; and (b) conducting an extensive comparison of most of the well-known multi-label learning algorithms for Arabic text categorization, in order to establish baseline results and show the effectiveness of these algorithms on RTAnews. The evaluation involves four multi-label transformation-based algorithms: Binary Relevance, Classifier Chains, Calibrated Ranking by Pairwise Comparison and Label Powerset, with three base learners (Support Vector Machine, k-Nearest-Neighbors and Random Forest); and four adaptation-based algorithms (Multi-label kNN, Instance-Based Learning by Logistic Regression Multi-label, Binary Relevance kNN and RFBoost). The reported baseline results show that both RFBoost and Label Powerset with Support Vector Machine as base learner outperformed the other compared algorithms. Results also demonstrated that adaptation-based algorithms are faster than transformation-based algorithms.

16.
In recent years, the functionality of services has mainly been described in short natural-language text. Keyword-based search for web service discovery is not efficient at providing relevant results. When services are clustered by similarity, the search space shrinks, and search time in the web service discovery process is reduced accordingly. In the domain of web service clustering, topic modeling techniques such as Latent Dirichlet Allocation (LDA), the Correlated Topic Model (CTM), and Hierarchical Dirichlet Processes (HDP) are therefore adopted for dimensionality reduction and for representing services in vector space. But because services are described as short text, these techniques are inefficient owing to sparse word co-occurrence, limited content, and similar issues. In this paper, the performance of web service clustering is evaluated by applying various topic modeling techniques with different clustering algorithms on a dataset crawled from the ProgrammableWeb repository. The Gibbs Sampling algorithm for the Dirichlet Multinomial Mixture (GSDMM) model is proposed for dimensionality reduction and feature representation of services, to overcome the limitations of short-text clustering. Results show that GSDMM with K-Means or agglomerative clustering outperforms all other methods. Clustering performance is evaluated on three extrinsic and two intrinsic criteria. The dimensionality reduction achieved by GSDMM is 90.88%, 88.84%, and 93.13% on the three crawled real-world datasets, which is satisfactory since clustering performance is also enhanced by deploying this technique.

17.
Research into unsupervised ways of stemming has resulted, in the past few years, in the development of methods that are reliable and perform well. Our approach further shifts the boundaries of the state of the art by providing more accurate stemming results. The idea of the approach consists in building a stemmer in two stages. In the first stage, a stemming algorithm based upon clustering, which exploits the lexical and semantic information of words, is used to prepare large-scale training data for the second-stage algorithm. The second-stage algorithm uses a maximum entropy classifier. The stemming-specific features help the classifier decide when and how to stem a particular word.

18.
The absence of diacritics in text documents or search queries is a serious problem for Turkish information retrieval because it creates homographic ambiguity. Thus, the inappropriate handling of diacritics reduces the retrieval performance in search engines. A straightforward solution to this problem is to normalize tokens by replacing diacritic characters with their American Standard Code for Information Interchange (ASCII) counterparts. However, this so-called ASCIIfication produces either synthetic words that are not legitimate Turkish words or legitimate words with meanings that are completely different from those of the original words. These non-valid synthetic words cannot be processed by morphological analysis components (such as stemmers or lemmatizers), which expect the input to be valid Turkish words. By contrast, synthetic words are not a problem when no stemmer or a simple first-n-characters-stemmer is used in the text analysis pipeline. This difference emphasizes the notion of the diacritic sensitivity of stemmers. In this study, we propose and evaluate an alternative solution based on the application of deASCIIfication, which restores accented letters in query terms or text documents. Our risk-sensitive evaluation results showed that the diacritics restoration approach yielded more effective and robust results compared with normalizing tokens to remove diacritics.
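The lossy ASCIIfication the abstract argues against, and a toy restoration, can be sketched as follows. The lexicon-lookup `deasciify` is a naive stand-in of my own: a real deASCIIfier (as in the paper) must disambiguate among multiple valid restorations using context.

```python
# Turkish diacritic letters and their ASCII counterparts.
ASCII_MAP = str.maketrans("çğıöşüÇĞİÖŞÜ", "cgiosuCGIOSU")

def asciify(text):
    """Replace Turkish diacritic letters with ASCII counterparts
    (the lossy normalization discussed above)."""
    return text.translate(ASCII_MAP)

def deasciify(word, lexicon):
    """Toy diacritics restoration: return the first lexicon entry whose
    ASCIIfied form matches the input, else the input unchanged."""
    for candidate in lexicon:
        if asciify(candidate) == word:
            return candidate
    return word
```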

19.
左晓飞  刘怀亮  范云杰  赵辉 《情报杂志》2012,31(5):180-184,191
Traditional keyword-based text clustering algorithms struggle to exploit the semantic features of text, so their clustering quality leaves much to be desired. This paper proposes the notion of a concept semantic field and gives an algorithm for constructing one from HowNet (知网). First, a sememe masking layer is built with HowNet to mask sememes with weak descriptive power; then, based on an analysis of HowNet's structure, rules for extracting related concepts are given, together with construction methods for simple and complex concept semantic fields. Finally, a text clustering algorithm based on concept semantic fields is presented. The algorithm makes full use of the semantic relations among feature words and also handles irregularly shaped clusters well. Experiments show that it effectively improves clustering quality.

20.
Automatic text categorization is a fundamental task in text information processing. This paper applies case-based reasoning (CBR) to text categorization: word co-occurrence information is used to extract topic words and frequent co-occurrence itemsets from texts, and a clustering algorithm indexes the case base, yielding a CBR-based automatic text categorization system. Experiments show that, compared with TFIDF-based text representation and nearest-neighbor classification, the co-occurrence-based representation and the clustered case-base index effectively improve both classification accuracy and efficiency, thereby broadening the application scope of case-based reasoning.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号