Similar Documents
20 similar documents found (search time: 187 ms)
1.
Since PSVM does not take imbalanced data into account, a classification method based on an improved PSVM (PSVM-2) is proposed. PSVM is first trained on the input set to obtain the normal vector of the separating hyperplane; the input samples are then projected onto this normal vector, and the information carried by the projected sample points is used to improve PSVM and classify the input set a second time. Experiments show that the method performs well on imbalanced data.
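As a rough sketch of the projection step only, the following Python snippet assumes scikit-learn's LinearSVC as a stand-in for PSVM; the class re-weighting rule derived from the projections is illustrative, not the paper's exact update.

```python
# Minimal sketch of the PSVM-2 projection idea (LinearSVC stands in for PSVM;
# the re-weighting rule from the projections is an assumption).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import LinearSVC

X, y = make_classification(n_samples=500, weights=[0.9, 0.1], random_state=0)

# First pass: train and take the normal vector w of the separating hyperplane.
clf = LinearSVC(dual=False).fit(X, y)
w = clf.coef_.ravel()

# Project every input sample onto the normal vector.
proj = X @ w / np.linalg.norm(w)

# Second pass: use the per-class spread of the projections to rebalance.
weights = {c: 1.0 / proj[y == c].std() for c in np.unique(y)}
clf2 = LinearSVC(dual=False, class_weight=weights).fit(X, y)
```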

2.
Most common web page classification techniques are built on ordinary text classification methods and do not fully account for what makes web page classification special: pages are semi-structured, and they contain a large amount of noise that interferes with classification. In addition, most work draws the test and training sets from the same sample collection, ignoring the possibility that the test set contains samples of unseen classes. Based on the vector space model, this work treats the sample collection as consisting of labeled and unlabeled samples, with all samples drawn from the same website. After removing web page noise, it combines a text similarity algorithm with an optimal truncation method to propose LUD (Learning by Unlabeled Data), a web page classification technique for incomplete data sets that improves classification quality and precision. Experiments show that, compared with traditional classification methods, LUD not only improves classification precision on samples of known classes but, more importantly, provides a way to discover samples of new classes.

3.
To improve the training efficiency of fuzzy support vector machines on intrusion detection data sets, a clustering-based fuzzy SVM intrusion detection algorithm is proposed. The method prunes the training data: it effectively reduces the number of cluster-edge points far from the separating surface while keeping more sample points near it, and uses the set of cluster centers close to the decision boundary as the effective training set for the fuzzy SVM, cutting training time and improving efficiency. Experimental results show that the method raises the training efficiency of the fuzzy SVM and is highly effective for intrusion detection.
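A minimal sketch of the pruning idea, assuming plain KMeans plus a standard SVC in place of the paper's fuzzy SVM; the pilot model used to estimate distance from the separating surface is an illustrative device.

```python
# Sketch of clustering-based training-set pruning before SVM training.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_classification
from sklearn.svm import SVC

X, y = make_classification(n_samples=2000, random_state=0)

# Pilot model: rough distances to the decision surface (illustrative).
pilot = SVC(kernel="linear").fit(X[:200], y[:200])
margin = np.abs(pilot.decision_function(X))
near = margin < np.quantile(margin, 0.3)   # keep many points near the surface

# Far-away regions are summarized by per-class cluster centers.
parts_X, parts_y = [X[near]], [y[near]]
for c in np.unique(y):
    Xc = X[(y == c) & ~near]
    km = KMeans(n_clusters=10, n_init=10, random_state=0).fit(Xc)
    parts_X.append(km.cluster_centers_)
    parts_y.append(np.full(10, c))

svm = SVC().fit(np.vstack(parts_X), np.concatenate(parts_y))
```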

4.
A graph-based semi-supervised learning algorithm, label propagation, is combined with an improved K-means step and applied to web page classification. The K-means label propagation method builds a weighted εNN graph from Euclidean distances. In this graph, nodes represent labeled or unlabeled web pages and edge weights represent node similarity; the labels of labeled nodes propagate along the edges to neighboring nodes, so web page classification is formalized as label propagation over the graph. The K-means step improves computation speed and gives the graph method inductive capability. Tests on UCI data sets show that the algorithm outperforms plain label propagation and can be used effectively for semi-supervised web page classification.
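A minimal sketch of label propagation over a weighted εNN graph, assuming toy data; the ε choice and RBF edge weights are illustrative, and the paper's K-means speedup step is omitted.

```python
# Label propagation on a weighted epsilon-NN graph (toy illustration).
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.metrics import pairwise_distances

X, y_true = make_blobs(n_samples=200, centers=3, random_state=0)
n_labeled = 15
y = np.full(len(X), -1)
y[:n_labeled] = y_true[:n_labeled]              # only a few labeled nodes

D = pairwise_distances(X)                       # Euclidean distances
eps = np.quantile(D, 0.05)
W = np.exp(-(D / eps) ** 2) * (D < eps)         # weighted epsilon-NN graph
np.fill_diagonal(W, 0.0)
P = W / np.maximum(W.sum(axis=1, keepdims=True), 1e-12)

F = np.zeros((len(X), 3))
F[np.arange(n_labeled), y[:n_labeled]] = 1.0
for _ in range(100):                            # labels flow along the edges
    F = P @ F
    F[:n_labeled] = 0.0
    F[np.arange(n_labeled), y[:n_labeled]] = 1.0  # clamp labeled nodes

pred = F.argmax(axis=1)
```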

5.
Different classification algorithms often behave differently on corpora with very different characteristics. By studying the root causes of why classification algorithms perform differently on a specialized corpus versus a self-built one, a new way to improve classification performance is proposed. Starting from a comparison of automatic classification across corpora, three metrics are defined to characterize a corpus: class clustering density, class complexity, and class clarity. Multi-factor analysis of variance is used to examine how the three metrics relate to classification performance, establishing how each corpus metric affects the performance of different classification algorithms, and a text classification method for overlapping classes based on class clarity is proposed to verify the effectiveness of the metrics. Experiments show that all three metrics affect classification performance to different degrees: the higher a corpus's class clustering density, the lower its class complexity, and the higher its class clarity, the better the classification results.
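The abstract gives no formulas for the three metrics, so the following is a heavily assumed illustration only: "clustering density" is approximated as mean intra-class cosine similarity and "clarity" as the gap between intra- and inter-class similarity. These definitions are the sketch's, not the paper's.

```python
# Assumed stand-ins for the paper's corpus metrics (definitions are guesses).
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity

def corpus_metrics(X, y):
    sims = cosine_similarity(X)
    out = {}
    for c in np.unique(y):
        m = (y == c)
        intra = sims[np.ix_(m, m)].mean()        # assumed "clustering density"
        inter = sims[np.ix_(m, ~m)].mean()
        out[c] = {"density": intra, "clarity": intra - inter}
    return out
```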

6.
[Objective/Significance] Online news is an important source of intelligence on emergency events. Improving the accuracy of identifying and classifying emergency events in massive volumes of online news, mitigating the open-set recognition problem caused by non-emergency news, and reducing the cost of manually labeling non-emergency news are key topics in current research on emergency event identification and classification. [Method/Process] A BERT pre-trained model produces the text representation, semantic information from different layers is fused to improve representation quality, and an adaptive decision boundary model learns the optimal spherical decision boundary of each emergency event class in the high-dimensional semantic space. Based on the Euclidean distance between a news sample's representation and each class's spherical boundary, the model detects emergency news and determines the event class. Its effectiveness is verified on the public CEC data set and on CEN, a Chinese news data set crawled in real time. [Result/Conclusion] Experiments show macro-F1 scores of 98.46% on CEC and 95.80% on CEN, improvements of 5.15% and 19.69% over the baseline models. An application of the model demonstrates the effectiveness of the proposed approach on real problems. [Limitations] The possibility that emergency news carries multiple labels is not considered.
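A minimal sketch of spherical decision boundaries for open-set detection, assuming text embeddings are already given as vectors; a fixed quantile radius replaces the paper's learned adaptive boundary.

```python
# Spherical class boundaries over embeddings (quantile radius is a stand-in
# for the paper's learned adaptive decision boundary).
import numpy as np

def fit_boundaries(emb, labels, q=0.95):
    centers, radii = {}, {}
    for c in np.unique(labels):
        Z = emb[labels == c]
        centers[c] = Z.mean(axis=0)
        d = np.linalg.norm(Z - centers[c], axis=1)
        radii[c] = np.quantile(d, q)
    return centers, radii

def classify(z, centers, radii):
    dists = {c: np.linalg.norm(z - mu) for c, mu in centers.items()}
    c = min(dists, key=dists.get)
    # Outside every sphere -> treated as non-emergency news (open set).
    return c if dists[c] <= radii[c] else "non-event"
```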

7.
In automatic text classification, two probability estimation methods are in common use: term frequency statistics and document frequency statistics, and the choice between them directly affects the quality of feature extraction and classification accuracy. This paper implements a Chinese text classifier with the K-nearest-neighbor algorithm and runs training and classification experiments on balanced and imbalanced Chinese training corpora. The experimental data show that term-frequency-based probability estimation suits imbalanced corpora while document-frequency-based estimation suits balanced corpora; choosing accordingly extracts high-quality text features and improves classification accuracy.

8.
In the big data era the volume of Chinese text has grown markedly, and classifying such large amounts of text effectively is an important problem. The traditional naive Bayes algorithm assumes that every feature attribute contributes equally to the classification decision, and it also performs poorly on large data sets. To address these problems, this paper proposes a parallel naive Bayes text classification algorithm with TFIDFCF feature weighting, implemented on the MapReduce parallel framework. Text classification experiments on the THUCNews news corpus show that, under the parallel framework, the TFIDFCF-weighted naive Bayes algorithm improves both training speed and prediction accuracy.
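A sketch of the weighting idea, with two loudly flagged assumptions: "CF" is read here as a within-class document-frequency boost, one common interpretation of the acronym but not confirmed by the abstract, and the MapReduce parallelization is omitted.

```python
# TF-IDF-CF-style weighting feeding naive Bayes (CF reading is an assumption).
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB

docs = ["cheap loans now", "meeting agenda attached",
        "win cheap prizes now", "agenda for the meeting"]
y = np.array([1, 0, 1, 0])

X = TfidfVectorizer().fit_transform(docs).toarray()

# CF(t, c): boost a term by how often it appears in documents of its class.
cf = np.ones_like(X)
for c in np.unique(y):
    df_c = (X[y == c] > 0).mean(axis=0)       # within-class document frequency
    cf[y == c] = 1.0 + df_c

model = MultinomialNB().fit(X * cf, y)         # weighting applied at training
```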

9.
黄静  薛书田  肖进 《软科学》2017,(7):131-134
Semi-supervised learning is combined with the multi-classifier ensemble model Bagging to build SSEBI, a Bagging-based semi-supervised ensemble model for class-imbalanced settings, which exploits both labeled and unlabeled samples to improve performance. The model has three stages: (1) selectively label part of the unlabeled data set and train several base classifiers; (2) classify the test samples with the trained base classifiers; (3) combine the classification results into a final prediction. An empirical analysis on five customer credit scoring data sets demonstrates the effectiveness of the proposed SSEBI model.
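A sketch of the three stages in the spirit of SSEBI, assuming confidence-based pseudo-labeling in place of the paper's selective labeling scheme; the base learner and threshold are illustrative.

```python
# Bagging-style semi-supervised ensemble (pseudo-labeling rule is assumed).
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.utils import resample

def ssebi_like(X_l, y_l, X_u, X_test, n_models=10, thresh=0.9):
    # Stage 1: pseudo-label confident unlabeled samples, then bag.
    seed = DecisionTreeClassifier(max_depth=5).fit(X_l, y_l)
    proba = seed.predict_proba(X_u)
    conf = proba.max(axis=1) >= thresh
    X_aug = np.vstack([X_l, X_u[conf]])
    y_aug = np.concatenate([y_l, seed.classes_[proba.argmax(axis=1)][conf]])

    models = [DecisionTreeClassifier(max_depth=5)
              .fit(*resample(X_aug, y_aug, random_state=i))
              for i in range(n_models)]

    # Stages 2-3: classify the test set and combine by majority vote.
    votes = np.stack([m.predict(X_test) for m in models]).astype(int)
    return np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)
```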

10.
周源  刘怀兰  杜朋朋  廖岭 《情报科学》2017,35(5):111-118
【Objective/Significance】Feature extraction strongly affects classification performance, yet traditional TF-IDF considers neither the context of a feature term nor how the term is distributed across classes. 【Method/Process】This paper improves TF-IDF feature extraction in two ways: (1) node importance values are computed from a text network with an improved PageRank algorithm, addressing traditional TF-IDF's neglect of text structure; (2) the variance of a feature term's IDF values is added to measure how a term w is distributed across the document sets of different classes, addressing traditional TF-IDF's neglect of between-class distribution. 【Result/Conclusion】A text classification model built on the improved method is evaluated on 3D-printing data; comparing classification performance before and after the improvement verifies that the method effectively raises the accuracy of text feature term extraction.
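A minimal sketch of the PageRank side of the improvement, assuming a word co-occurrence graph stands in for the paper's "text network" and that multiplying TF-IDF by a PageRank boost is an illustrative way to combine the scores.

```python
# PageRank-boosted TF-IDF features (combination rule is an assumption).
import numpy as np
import networkx as nx
from sklearn.feature_extraction.text import TfidfVectorizer

docs = ["graph based feature extraction", "feature weighting with pagerank",
        "pagerank ranks graph nodes"]

vec = TfidfVectorizer()
X = vec.fit_transform(docs).toarray()
vocab = vec.get_feature_names_out()

# Build a word co-occurrence graph (window = whole document, for brevity).
G = nx.Graph()
for d in docs:
    toks = d.split()
    G.add_edges_from((a, b) for i, a in enumerate(toks) for b in toks[i + 1:])

pr = nx.pagerank(G)
boost = np.array([pr.get(w, 0.0) for w in vocab])
X_weighted = X * (1.0 + boost)            # structure-aware TF-IDF features
```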

11.
In text categorization, it is quite common that the numbers of documents in different categories differ, i.e., the class distribution is imbalanced. We propose a unique approach to improve text categorization under class imbalance by exploiting the semantic context in text documents. Specifically, we generate new samples of rare classes (categories with relatively small amounts of training data) by using global semantic information of classes represented by probabilistic topic models. In this way, the numbers of samples in different categories become more balanced, and the performance of text categorization can be improved using this transformed data set. Indeed, the proposed method differs from traditional re-sampling methods, which try to balance the number of documents in different classes by re-sampling the documents in rare classes; such re-sampling methods can cause overfitting. Another benefit of our approach is the effective handling of noisy samples: since all the new samples are generated by topic models, the impact of noisy samples is dramatically reduced. Finally, as demonstrated by the experimental results, the proposed method achieves better performance under class imbalance and is more tolerant of noisy samples.
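A minimal sketch of topic-model-based sample generation for a rare class, assuming LDA topic mixtures fit on the rare class are resampled into synthetic bag-of-words vectors; the corpus and sizes are toy values.

```python
# Generate synthetic rare-class bag-of-words samples from an LDA topic model.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

rare_docs = ["solar flare disrupts satellites",
             "satellite orbit decay observed",
             "flare activity peaks this cycle"]
vec = CountVectorizer()
X = vec.fit_transform(rare_docs)

lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
topic_word = lda.components_ / lda.components_.sum(axis=1, keepdims=True)
doc_topic = lda.transform(X)

rng = np.random.default_rng(0)

def synth_doc(n_words=8):
    # Draw a topic mixture from a real rare-class document, then sample words.
    mix = doc_topic[rng.integers(len(rare_docs))]
    topics = rng.choice(len(mix), size=n_words, p=mix)
    words = [rng.choice(topic_word.shape[1], p=topic_word[t]) for t in topics]
    return np.bincount(words, minlength=topic_word.shape[1])  # new BoW sample
```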

12.
A new dictionary-based text categorization approach is proposed to classify chemical web pages efficiently. Using a chemistry dictionary, the approach can extract chemistry-related information from web pages more exactly. After automatically segmenting the documents to find dictionary terms for document expansion, the approach adopts latent semantic indexing (LSI) to produce the final document vectors, and the relevant categories are assigned to the test document with the k-NN text categorization algorithm. The effects of the characteristics of the chemistry dictionary and the test collection on categorization efficiency are discussed, and a new voting method is introduced to further improve categorization performance based on the collection characteristics. The experimental results show that the proposed approach outperforms the traditional categorization method and is applicable to the classification of chemical web pages.
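A minimal sketch of the dictionary-expansion + LSI + k-NN pipeline, assuming a toy chemistry dictionary, TruncatedSVD as the LSI step, and term duplication as the expansion rule; the paper's voting refinement is omitted.

```python
# Dictionary expansion, LSI, and k-NN categorization (toy pipeline).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

chem_dict = {"benzene", "ethanol", "polymer"}    # assumed toy dictionary

def expand(doc):
    # Duplicate dictionary terms to emphasize chemistry-related content.
    toks = doc.split()
    return " ".join(toks + [t for t in toks if t in chem_dict])

train = ["benzene ring synthesis", "stock market rally",
         "ethanol polymer blend"]
labels = ["chem", "other", "chem"]

pipe = make_pipeline(TfidfVectorizer(),
                     TruncatedSVD(n_components=2),   # LSI step
                     KNeighborsClassifier(n_neighbors=1))
pipe.fit([expand(d) for d in train], labels)
print(pipe.predict([expand("polymer coating for benzene storage")]))
```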

13.
王煜  王正欧 《情报科学》2006,24(1):96-99,123
This paper first proposes an improved χ² statistic to measure a term's contribution to text classification. Then, following pattern aggregation theory, terms whose contribution proportions across the text classes are similar are merged into a single feature, building a feature vector space model of the text collection. This effectively reduces the dimensionality of the text feature vector space. Finally, a decision tree is used for classification, preserving classification accuracy while keeping the decision tree's advantage of yielding understandable classification rules.
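A sketch of the overall flow, with two stated substitutions: scikit-learn's standard chi2 score replaces the paper's improved statistic, and K-means over per-class score profiles stands in for pattern aggregation.

```python
# Chi-square term profiles -> merge similar terms -> decision tree.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import chi2
from sklearn.cluster import KMeans
from sklearn.tree import DecisionTreeClassifier

docs = ["goal match team", "election vote party",
        "team wins match", "party wins vote"]
y = np.array([0, 1, 0, 1])

X = CountVectorizer().fit_transform(docs).toarray()

# Profile each term by its chi-square contribution per class (one-vs-rest).
profiles = np.stack([chi2(X, (y == c).astype(int))[0] for c in np.unique(y)],
                    axis=1)
profiles = np.nan_to_num(profiles)

# Merge terms with similar contribution profiles into aggregate features.
groups = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(profiles)
X_agg = np.stack([X[:, groups == g].sum(axis=1) for g in range(3)], axis=1)

tree = DecisionTreeClassifier().fit(X_agg, y)   # interpretable rules survive
```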

14.
Authorship analysis of electronic texts assists digital forensics and anti-terror investigation. Author identification can be seen as a single-label multi-class text categorization problem. Very often, there are extremely few training texts for at least some of the candidate authors, or there is a significant variation in text length among the available training texts of the candidate authors. Moreover, in this task there is usually no similarity between the distributions of training and test texts over the classes, that is, a basic assumption of inductive learning does not apply. In this paper, we present methods to handle imbalanced multi-class textual datasets. The main idea is to segment the training texts into text samples according to the size of the class, thus producing a fairer classification model. Hence, minority classes can be segmented into many short samples and majority classes into fewer and longer samples. We explore text sampling methods in order to construct a training set according to a desirable distribution over the classes. Essentially, by text sampling we provide new synthetic data that artificially increase the training size of a class. Based on two text corpora in two languages, namely, newswire stories in English and newspaper reportage in Arabic, we present a series of authorship identification experiments on various multi-class imbalanced cases that reveal the properties of the presented methods.

15.
Text categorization pertains to the automatic learning of a text categorization model from a training set of preclassified documents on the basis of their contents and the subsequent assignment of unclassified documents to appropriate categories. Most existing text categorization techniques deal with monolingual documents (i.e., written in the same language) during the learning of the text categorization model and category assignment (or prediction) for unclassified documents. However, with the globalization of business environments and advances in Internet technology, an organization or individual may generate and organize documents in one language into categories and subsequently archive documents in different languages into existing categories, which necessitates cross-lingual text categorization (CLTC). Specifically, cross-lingual text categorization deals with learning a text categorization model from a set of training documents written in one language (e.g., L1) and then classifying new documents in a different language (e.g., L2). Motivated by the significance of this demand, this study aims to design a CLTC technique with two different category assignment methods, namely, individual- and cluster-based. Using monolingual text categorization as a performance reference, our empirical evaluation results demonstrate the cross-lingual capability of the proposed CLTC technique. Moreover, the classification accuracy achieved by the cluster-based category assignment method is statistically significantly higher than that attained by the individual-based method.

16.
The number of patent documents is currently rising rapidly worldwide, creating the need for an automatic categorization system to replace time-consuming and labor-intensive manual categorization. Because accurate patent classification is crucial for finding relevant existing patents in a given field, patent categorization is a very important and useful area. As patent documents are structured documents with characteristics that distinguish them from general documents, these unique traits should be considered in the patent categorization process. In this paper, we categorize Japanese patent documents automatically, focusing on their characteristics: patents are structured by claims, purposes, effects, embodiments of the invention, and so on. We propose a patent document categorization method that uses the k-NN (k-Nearest Neighbour) approach. In order to retrieve similar documents from a training document set, specific components denoting so-called semantic elements, such as claim, purpose, and application field, are compared instead of the whole texts. Because those specific components are identified by various user-defined tags, all of the components are first clustered into several semantic elements. Such semantically clustered structural components are the basic features of patent categorization. We achieve a 74% improvement in categorization performance over a baseline system that does not use the structural information of the patent.

17.
In this paper, a Generalized Cluster Centroid based Classifier (GCCC) and its variants for text categorization are proposed, using a clustering algorithm to integrate two well-known classifiers: the K-nearest-neighbor (KNN) classifier and the Rocchio classifier. KNN, a lazy learning method, suffers from inefficiency in online categorization while achieving remarkable effectiveness. Rocchio, which offers efficient categorization, fails to produce an expressive categorization model due to its inherent linear separability assumption. Our proposed method focuses on two points: first, we use a clustering algorithm to strengthen the expressiveness of the Rocchio model; second, we employ the improved Rocchio model to speed up the categorization process of KNN. Extensive experiments conducted on both English and Chinese corpora show that GCCC and its variants have better categorization ability than several state-of-the-art classifiers, namely Rocchio, KNN and the Support Vector Machine (SVM).
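A minimal sketch of the cluster-centroid idea, assuming per-class K-means centroids replace the single Rocchio centroid and nearest-centroid assignment stands in for the accelerated KNN stage.

```python
# Multiple cluster centroids per class, then nearest-centroid assignment.
import numpy as np
from sklearn.cluster import KMeans

def fit_centroids(X, y, k=3):
    cents, labs = [], []
    for c in np.unique(y):
        km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X[y == c])
        cents.append(km.cluster_centers_)     # several centroids per class
        labs.extend([c] * k)
    return np.vstack(cents), np.array(labs)

def predict(X, cents, labs):
    d = ((X[:, None, :] - cents[None, :, :]) ** 2).sum(-1)
    return labs[d.argmin(axis=1)]             # nearest cluster centroid wins
```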

18.
Text classification is an important research topic in natural language processing (NLP), and Graph Neural Networks (GNNs) have recently been applied to this task. However, in existing graph-based models, text graphs constructed by rules are not real graph data and introduce massive noise. More importantly, with a fixed corpus-level graph structure, these models cannot sufficiently exploit the labeled and unlabeled information of nodes. Meanwhile, contrastive learning has been developed as an effective method in the graph domain to fully utilize node information. We therefore propose a new graph-based model for text classification, named CGA2TC, which introduces contrastive learning with an adaptive augmentation strategy to obtain more robust node representations. First, we explore word co-occurrence and document-word relationships to construct a text graph. Then, we design an adaptive augmentation strategy for the noisy text graph to generate two contrastive views that effectively address the noise problem and preserve the essential structure. Specifically, we design noise-based and centrality-based augmentation strategies on the topological structure of the text graph to disturb unimportant connections and thus highlight the relatively important edges. For labeled nodes, we take nodes with the same label as multiple positive samples and assign them to the anchor node, while we employ consistency training on unlabeled nodes to constrain model predictions. Finally, to reduce the resource consumption of contrastive learning, we adopt a random sampling method to select some nodes for computing the contrastive loss. Experimental results on several benchmark datasets demonstrate the effectiveness of CGA2TC on the text classification task.
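A rough sketch of centrality-based augmentation only, assuming degree centrality scores the edges and that low-centrality edges are dropped with higher probability; the exact drop rule is illustrative, not the paper's.

```python
# One augmented contrastive view: drop low-centrality edges more often.
import numpy as np
import networkx as nx

def centrality_view(G, drop_frac=0.4, seed=0):
    rng = np.random.default_rng(seed)
    deg = dict(G.degree())
    edges = list(G.edges())                     # assumes a non-empty graph
    score = np.array([(deg[u] + deg[v]) / 2.0 for u, v in edges])
    drop_p = drop_frac * (1.0 - score / score.max())  # important edges survive
    keep = rng.random(len(edges)) >= drop_p
    H = G.copy()
    H.remove_edges_from([e for e, k in zip(edges, keep) if not k])
    return H
```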

19.
Question categorization, which suggests one of a set of predefined categories for a user's question according to the question's topic or content, is a useful technique in user-interactive question answering systems. In this paper, we propose an automatic method for question categorization in a user-interactive question answering system. The method includes four steps: feature space construction, topic-wise word identification and weighting, semantic mapping, and similarity calculation. We first construct the feature space from all accumulated questions and calculate the feature vector of each predefined category, which contains certain accumulated questions. When a new question is posted, the semantic pattern of the question is used to identify and weigh its important words. After that, the question is semantically mapped into the constructed feature space to enrich its representation. Finally, the similarity between the question and each category is calculated from their feature vectors, and the category with the highest similarity is assigned to the question. The experimental results show that our proposed method achieves good categorization precision and outperforms traditional categorization methods on the selected test questions.
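A minimal sketch of the final similarity-based assignment step, assuming plain TF-IDF in place of the paper's semantic mapping and topic-wise weighting, with category vectors taken as mean vectors of accumulated questions.

```python
# Assign a new question to the most similar category (TF-IDF stand-in).
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

accumulated = {"billing": ["how do I pay my bill", "billing cycle question"],
               "tech":    ["my app crashes on start", "cannot install update"]}

vec = TfidfVectorizer().fit(q for qs in accumulated.values() for q in qs)
cat_vecs = {c: np.asarray(vec.transform(qs).mean(axis=0))
            for c, qs in accumulated.items()}

def categorize(question):
    qv = vec.transform([question]).toarray()
    sims = {c: cosine_similarity(qv, v)[0, 0] for c, v in cat_vecs.items()}
    return max(sims, key=sims.get)             # highest-similarity category

print(categorize("the update will not install"))
```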

20.
In this paper, a semantic categorization method for generic home photos is proposed. In recent years, semantic categorization of images has become a challenge due to the proliferation of digital home photos. Our approach is to detect semantically meaningful concepts contained in a photo. The proposed method incorporates an intermediate level of concepts, called local concepts, which captures the semantic meaning of local image regions well and bridges the semantic gap between low-level features and high-level category concepts. To detect the local concepts in home photos, region segmentation by photographic region template and concept merging are also proposed. The efficacy of the proposed semantic categorization method was demonstrated on 3828 general home photos. The experimental results showed that the proposed categorization method is useful for detecting the multiple semantic meanings of home photos.
