Similar Documents
20 similar documents found (search time: 31 ms)
1.
The class distribution of imbalanced data sets is skewed in practical applications. Since traditional clustering methods are mainly designed to improve overall learning performance, the majority class tends to dominate the clustering while the more valuable minority class may be ignored. Moreover, existing clustering methods often perform poorly on imbalanced, high-dimensional data. In this paper, we present one-step spectral rotation clustering for imbalanced high-dimensional data (OSRCIH), which integrates self-paced learning and spectral rotation clustering in a unified learning framework where sample selection and dimensionality reduction are considered simultaneously with mutual, iterative updates. Specifically, the imbalance problem is addressed by selecting the same number of training samples from each intrinsic group of the training data, where the sample-weight vector is obtained by self-paced learning. Dimensionality reduction is conducted by combining subspace learning and feature selection. Experimental analysis on synthetic and real datasets shows that OSRCIH recognizes and increases the weight of important samples and features, preventing the clustering from favoring the majority class and effectively improving clustering performance.

2.
In text categorization it is common for different categories to contain different numbers of documents, i.e., for the class distribution to be imbalanced. We propose a unique approach to improving text categorization under class imbalance by exploiting the semantic context in text documents. Specifically, we generate new samples for rare classes (categories with relatively small amounts of training data) using global semantic information about the classes, represented by probabilistic topic models. In this way the numbers of samples in different categories become more balanced, and the performance of text categorization can be improved using the transformed data set. The proposed method differs from traditional re-sampling methods, which try to balance the number of documents across classes by re-sampling the documents in rare classes and can therefore cause overfitting. Another benefit of our approach is its effective handling of noisy samples: since all new samples are generated by topic models, the impact of noisy samples is dramatically reduced. As demonstrated by the experimental results, the proposed method achieves better performance under class imbalance and is more tolerant of noisy samples.

3.
Learning from imbalanced datasets is difficult. The insufficient information associated with the minority class impedes a clear understanding of the inherent structure of the dataset. Most existing classification methods tend to perform poorly on minority-class examples when the dataset is extremely imbalanced, because they optimize overall accuracy without considering the relative distribution of each class. In this paper, we study the performance of SVMs, which have achieved great success in many real applications, in the imbalanced data context. Through empirical analysis, we show that SVMs may suffer from biased decision boundaries and that their prediction performance drops dramatically when the data is highly skewed. We propose to combine an integrated sampling technique, which incorporates both over-sampling and under-sampling, with an ensemble of SVMs to improve prediction performance. Extensive experiments show that our method outperforms individual SVMs as well as several other state-of-the-art classifiers.
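The integrated-sampling-plus-ensemble idea above can be sketched as follows: each SVM is trained on a different balanced resample (majority class under-sampled, minority class over-sampled), and predictions are combined by majority vote. A minimal sketch on synthetic data, not the paper's exact configuration:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Imbalanced toy data: 200 majority (class 0) vs 20 minority (class 1).
X_maj = rng.normal(0.0, 1.0, size=(200, 2))
X_min = rng.normal(2.5, 1.0, size=(20, 2))

def balanced_resample(n=60):
    """Under-sample the majority and over-sample the minority to n each."""
    maj_idx = rng.choice(len(X_maj), size=n, replace=False)
    min_idx = rng.choice(len(X_min), size=n, replace=True)
    X = np.vstack([X_maj[maj_idx], X_min[min_idx]])
    y = np.array([0] * n + [1] * n)
    return X, y

# Each SVM sees a different balanced view of the data.
models = []
for _ in range(5):
    Xb, yb = balanced_resample()
    models.append(SVC(kernel="rbf", gamma="scale").fit(Xb, yb))

def predict(X):
    """Majority vote across the ensemble."""
    votes = np.stack([m.predict(X) for m in models])
    return (votes.mean(axis=0) >= 0.5).astype(int)

print(predict(X_min).mean())  # fraction of minority points recovered
```

Training each member on a distinct resample gives the ensemble diversity while keeping every member's decision boundary unbiased by the skewed class ratio.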

4.
Sequential recommendation models a user’s historical sequence to predict future items. Existing studies use deep learning methods and contrastive learning for data augmentation to alleviate data sparsity. However, these methods cannot learn accurate, high-quality item representations while augmenting data, and they usually ignore data noise and user cold-start issues. To address these issues, we investigate using a Generative Adversarial Network (GAN) with contrastive learning for sequential recommendation to balance data sparsity and noise. Specifically, we propose a new framework, Enhanced Contrastive Learning with Generative Adversarial Network for Sequential Recommendation (ECGAN-Rec), which models the training process as a GAN and the recommendation task as the main task of the discriminator. We design a sequence augmentation module and a contrastive GAN module to implement both data-level and model-level augmentation. In addition, the contrastive GAN learns more accurate, high-quality item representations to alleviate data noise after augmentation. Furthermore, we propose an enhanced Transformer recommender based on the GAN to optimize model performance. Experimental results on three open datasets validate the efficiency and effectiveness of the proposed model and its ability to balance data noise and data sparsity. Specifically, the improvements of ECGAN-Rec in two evaluation metrics (HR@N and NDCG@N) over the state-of-the-art models on the Beauty, Sports and Yelp datasets are 34.95%, 36.68% and 13.66%, respectively. Our implementation is available via https://github.com/nishawn/ECGANRec-master.

5.
High-resolution probabilistic load forecasting can comprehensively characterize both the uncertainties and the dynamic trends of future load. Such information is key to the reliable operation of the future power grid with a high penetration of renewables. To this end, various high-resolution probabilistic load forecasting models have been proposed in recent decades. It is widely acknowledged that combining different models, called a model ensemble, can further enhance prediction performance compared with a single model. However, existing model ensemble approaches for load forecasting are linear-combination-based, such as mean value ensemble, weighted average ensemble, and quantile regression, and linear combinations may not fully exploit the advantages of different models, seriously limiting ensemble performance. We propose a learning ensemble approach that adopts a machine learning model to directly learn the optimal nonlinear combination from data. We theoretically demonstrate that the proposed learning ensemble approach can outperform conventional ensemble approaches. Based on the proposed learning ensemble model, we also introduce a Shapley-value-based method to evaluate each model's contribution to the ensemble. Numerical studies on field load data verify the remarkable performance of the proposed approach.
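The difference between a linear combination and a learned nonlinear combination can be seen on a tiny synthetic example: when the truth depends nonlinearly on the base forecasts, a weighted average cannot recover it but a learned ensemble can. This sketch uses gradient boosting as the combiner and invented base forecasts; it only illustrates the principle, not the paper's model:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(1)

# Synthetic setting: the target is a nonlinear function of two base forecasts.
n = 500
f1 = rng.uniform(0, 1, n)                        # base model 1 forecast
f2 = rng.uniform(0, 1, n)                        # base model 2 forecast
y = np.maximum(f1, f2) + rng.normal(0, 0.01, n)  # truth: max(), nonlinear

F = np.column_stack([np.ones(n), f1, f2])        # intercept + forecasts

# Linear combination (least-squares weighted average), fit on first 400.
w = np.linalg.lstsq(F[:400], y[:400], rcond=None)[0]
linear_err = np.mean((F[400:] @ w - y[400:]) ** 2)

# Learned nonlinear combination of the same base forecasts.
ens = GradientBoostingRegressor(random_state=0).fit(F[:400, 1:], y[:400])
learned_err = np.mean((ens.predict(F[400:, 1:]) - y[400:]) ** 2)

print(linear_err, learned_err)  # the learned combiner fits max() far better
```

Any regressor could serve as the combiner; the point is that it is fit on the base models' outputs, so it can learn which model to trust in which regime.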

6.
Huang Jing, Xue Shutian, Xiao Jin. Soft Science (《软科学》), 2017, (7): 131-134
Combining semi-supervised learning with the multi-classifier ensemble model Bagging, this paper builds a semi-supervised ensemble model based on Bagging for imbalanced class distributions (SSEBI), which exploits both labeled and unlabeled samples to improve model performance. The model comprises three stages: (1) selectively label a subset of samples from the unlabeled dataset and train several base classifiers; (2) use the trained base classifiers to classify the test samples; (3) combine the classification results to obtain the final prediction. An empirical analysis on five customer credit scoring datasets demonstrates the effectiveness of the proposed SSEBI model.

7.
Dynamic Ensemble Selection (DES) is one of the most common and effective techniques in machine learning for classification problems. DES systems aim to construct an ensemble consisting of the most appropriate classifiers, selected from the candidate classifier pool according to the competence level of each individual classifier. Since several classifiers are selected, their combination becomes crucial. However, most current DES approaches focus on combining the selected classifiers while ignoring the local information surrounding the query sample to be classified. To boost the performance of DES-based classification systems, in this paper we propose a dynamic weighting framework for classifier fusion when obtaining the final output of a DES system. In particular, the proposed method first employs a DES approach to obtain a group of classifiers for a query sample. Then, the hypothesis vector of the selected ensemble is obtained based on consensus analysis. Finally, a distance-based weighting scheme adjusts the hypothesis vector depending on the closeness of the query sample to each class. The proposed method is tested on 30 real-world datasets with six well-known DES approaches based on both homogeneous and heterogeneous ensembles. The obtained results, supported by proper statistical tests, show that our method outperforms the original DES framework in terms of both accuracy and kappa measures.
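The final step above can be sketched directly: given the ensemble's hypothesis (class-probability) vector and a per-class location such as a centroid, up-weight the classes whose region the query sample lies closest to. A minimal sketch under the assumption that classes are summarized by centroids (a simplification of the paper's scheme):

```python
import numpy as np

def distance_weighted_hypothesis(hyp, query, centroids, eps=1e-9):
    """Adjust an ensemble's class-probability (hypothesis) vector by the
    query's closeness to each class centroid: nearer classes get up-weighted,
    and the result is renormalized to a probability vector."""
    d = np.linalg.norm(centroids - query, axis=1)  # distance to each class
    w = 1.0 / (d + eps)                            # closeness weights
    adjusted = hyp * w
    return adjusted / adjusted.sum()

# The ensemble is unsure (0.5 / 0.5), but the query is much closer to class 0,
# so the local information tips the decision.
hyp = np.array([0.5, 0.5])
centroids = np.array([[0.0, 0.0], [4.0, 4.0]])
query = np.array([0.5, 0.5])
print(distance_weighted_hypothesis(hyp, query, centroids))
```

The weighting only breaks ties in favor of locally plausible classes; when the ensemble is already confident, a moderate distance weight leaves the decision unchanged.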

8.
Ranking is a central component in information retrieval systems; as such, many machine learning methods for building rankers have been developed in recent years. An open problem is transfer learning, i.e. how labeled training data from one domain/market can be used to build rankers for another. We propose a flexible transfer learning strategy based on sample selection. Source-domain training samples are selected if the functional relationship between features and labels does not deviate much from that of the target domain. This is achieved through a novel application of recent advances in density ratio estimation. The approach is flexible, scalable, and modular, and allows many existing supervised rankers to be adapted to the transfer learning setting. Results on two datasets (Yahoo’s Learning to Rank Challenge and Microsoft’s LETOR data) show that the proposed method gives robust improvements.
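One standard way to estimate a density ratio, shown here only as an illustration of the general tool the paper applies (not its specific estimator), is to train a probabilistic domain classifier: its odds p(target|x)/p(source|x) are proportional to the ratio p_target(x)/p_source(x), which can then score or re-weight source samples:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Source and target domains with shifted feature distributions.
X_src = rng.normal(0.0, 1.0, size=(300, 2))
X_tgt = rng.normal(1.0, 1.0, size=(300, 2))

# Domain classifier: label 0 = source, 1 = target.
X = np.vstack([X_src, X_tgt])
d = np.array([0] * len(X_src) + [1] * len(X_tgt))
clf = LogisticRegression().fit(X, d)

# Odds of "looks like target" give an (unnormalized) density ratio estimate.
p = clf.predict_proba(X_src)[:, 1]
ratio = p / (1 - p)

# Source samples that resemble the target domain receive large ratios and
# can be kept (or up-weighted) when training the target ranker.
print(ratio[:5])
```

Selection then reduces to thresholding or sorting by `ratio`, which is what makes the strategy modular: any supervised ranker can consume the selected subset unchanged.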

9.
Constructing ensemble models has become a common approach to corporate credit risk early warning; for deep learning models with stronger predictive ability, however, no established theoretical framework has formed in this field, and such models often cannot support further qualitative analysis of their results. This article therefore builds a new two-stage ensemble model for corporate credit risk early warning using a variety of machine learning methods, with deep learning as the representative, which not only effectively improves predictive performance but also qualitatively analyzes the sources of corporate credit risk from multiple angles based on the results. In the first stage, an improved entropy method re-assigns instance weights by correlation degree, based on grey correlation analysis. In the second stage, the study adopts Bagging to integrate multiple one-dimensional convolutional neural networks and borrows the idea of N-fold cross-validation to increase the diversity of the base classifiers. Empirically, the article uses listed companies in the Chinese manufacturing industry between 2012 and 2021 as the dataset, comprising 467 samples with 51 financial indicators. The new ensemble model attains the highest F1-score (87.29%) and G-mean (89.47%) among the comparison models and qualitatively analyzes corporate risk sources. It further analyzes how to increase the early-warning effect from the angles of the number of indicators and the time span.

10.
Authorship analysis of electronic texts assists digital forensics and anti-terror investigation. Author identification can be seen as a single-label, multi-class text categorization problem. Very often there are extremely few training texts for at least some of the candidate authors, or there is significant variation in text length among the available training texts. Moreover, in this task there is usually no similarity between the distributions of training and test texts over the classes; that is, a basic assumption of inductive learning does not apply. In this paper, we present methods to handle imbalanced multi-class textual datasets. The main idea is to segment the training texts into text samples according to the size of the class, producing a fairer classification model: minority classes can be segmented into many short samples and majority classes into fewer, longer ones. We explore text sampling methods for constructing a training set with a desirable distribution over the classes. Essentially, text sampling provides new synthetic data that artificially increase the training size of a class. Based on two text corpora in two languages, namely newswire stories in English and newspaper reportage in Arabic, we present a series of authorship identification experiments on various multi-class imbalanced cases that reveal the properties of the presented methods.
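The segmentation idea above can be sketched in a few lines: cut each class's concatenated training text into the same number of chunks, so a minority author yields many short samples and a majority author the same count of longer ones. A toy sketch with invented texts, not the paper's exact sampling scheme:

```python
def segment_text(text, n_samples):
    """Cut a class's concatenated training text into n_samples roughly
    equal word chunks."""
    words = text.split()
    size = max(1, len(words) // n_samples)
    chunks = [" ".join(words[i:i + size]) for i in range(0, len(words), size)]
    return chunks[:n_samples]

# Minority author: little text -> short samples.
# Majority author: much text -> the same number of longer samples.
minority = "a b c d e f"                          # 6 words
majority = " ".join(f"w{i}" for i in range(60))   # 60 words

balanced = {
    "minority": segment_text(minority, 3),  # 3 samples of ~2 words each
    "majority": segment_text(majority, 3),  # 3 samples of ~20 words each
}
print([len(s.split()) for s in balanced["minority"]],
      [len(s.split()) for s in balanced["majority"]])
```

After segmentation every class contributes the same number of training samples, so the classifier no longer sees a skewed class distribution, at the cost of shorter (and thus noisier) samples for minority classes.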

11.
Text classification is an important research topic in natural language processing (NLP), and Graph Neural Networks (GNNs) have recently been applied to this task. However, in existing graph-based models, text graphs constructed by rules are not real graph data and introduce massive noise. More importantly, for a fixed corpus-level graph structure, these models cannot sufficiently exploit the labeled and unlabeled information of nodes. Meanwhile, contrastive learning has been developed as an effective method in the graph domain to fully utilize node information. Therefore, we propose a new graph-based model for text classification, named CGA2TC, which introduces contrastive learning with an adaptive augmentation strategy to obtain more robust node representations. First, we explore word co-occurrence and document-word relationships to construct a text graph. Then, we design an adaptive augmentation strategy for the noisy text graph to generate two contrastive views that effectively address the noise problem and preserve essential structure. Specifically, we design noise-based and centrality-based augmentation strategies on the topological structure of the text graph to disturb unimportant connections and thus highlight the relatively important edges. As for the labeled nodes, we take the nodes with the same label as multiple positive samples and assign them to the anchor node, while we employ consistency training on unlabeled nodes to constrain model predictions. Finally, to reduce the resource consumption of contrastive learning, we adopt a random sampling method to select some nodes for computing the contrastive loss. Experimental results on several benchmark datasets demonstrate the effectiveness of CGA2TC on the text classification task.

12.
Zhang Jingsu, Wei Mingzhu. Information Science (《情报科学》), 2022, 40(10): 164-170
[Purpose/Significance] This study aims to build sentence-segmentation models for classical Chinese texts from a small number of annotated samples, reducing the annotation cost of model training and offering a fresh perspective on integrating digital technology with the humanities. [Method/Process] Starting from the uncertainty and diversity of classical-text samples, we propose a weighted multi-strategy sample selection method that works with sentence-segmentation models such as BERT-BiLSTM-CRF and BERT-CRF. By introducing concepts such as information entropy and similarity, the method analyzes the uncertainty and diversity of ancient texts and uses weighted scoring to estimate each sample's value for model training; the valuable samples selected by the weighted multi-strategy method are manually annotated and added to the training set for iterative model training. [Results/Conclusions] In a case study on the historical text 《宋史》 (Song Shi), the proposed method reduces the required training samples by 50% and 55% for the BERT-BiLSTM-CRF and BERT-CRF segmentation models respectively, verifying its effectiveness. [Innovation/Limitations] Weighted multi-strategy sample selection offers a new approach to training classical-text sentence-segmentation models; future work will explore its applicability to other tasks in the digitization of ancient books.

13.
To improve the performance of the minimum distance classifier (MDC), this paper proposes an improved variant, NN-MDC: the training samples are first pruned, keeping or discarding each sample according to whether its label matches that of its nearest neighbor, and the classifier is then trained on the remaining samples. Experiments on standard UCI datasets show that the proposed NN-MDC achieves higher classification accuracy than MDC.
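The two steps above are simple enough to sketch end to end: nearest-neighbor editing removes samples whose nearest neighbor disagrees with their label, then a centroid-based minimum distance classifier is fit on what remains. A minimal NumPy sketch on toy data:

```python
import numpy as np

def nn_prune(X, y):
    """Keep a sample only if its nearest neighbor (excluding itself)
    carries the same label; mislabeled or noisy points get dropped."""
    keep = []
    for i in range(len(X)):
        d = np.linalg.norm(X - X[i], axis=1)
        d[i] = np.inf
        if y[np.argmin(d)] == y[i]:
            keep.append(i)
    return X[keep], y[keep]

def mdc_fit(X, y):
    """Minimum distance classifier: one centroid per class."""
    classes = np.unique(y)
    centroids = np.array([X[y == c].mean(axis=0) for c in classes])
    return classes, centroids

def mdc_predict(model, X):
    classes, centroids = model
    d = np.linalg.norm(X[:, None] - centroids[None], axis=-1)
    return classes[np.argmin(d, axis=1)]

# Two clusters plus one mislabeled point sitting inside the wrong cluster.
X = np.array([[0., 0.], [0.1, 0.], [0., 0.1],
              [3., 3.], [3.1, 3.], [3., 2.9], [2.9, 3.1]])
y = np.array([0, 0, 0, 1, 1, 1, 0])   # the last label is noise

Xp, yp = nn_prune(X, y)               # the noisy point is removed
model = mdc_fit(Xp, yp)
print(mdc_predict(model, np.array([[0.05, 0.05], [3., 3.]])))
```

Without pruning, the mislabeled point would drag the class-0 centroid toward the class-1 cluster; editing first keeps the centroids where the clusters actually are.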

14.
This paper is concerned with the quality of training data in learning to rank for information retrieval. While many data selection techniques have been proposed to improve the quality of training data for classification, study of the same issue for ranking appears to be insufficient. As pointed out in this paper, it is inappropriate to directly extend techniques for classification to ranking, and the development of novel techniques is sorely needed. In this paper, we study the development of such techniques. To begin with, we propose the concept of “pairwise preference consistency” (PPC) to describe the quality of a training data collection from the ranking point of view. PPC takes into consideration the ordinal relationship between documents as well as the hierarchical structure of queries and documents, both unique properties of ranking. We then select a subset of the original training documents by maximizing the PPC of the selected subset, and propose an efficient solution to the maximization problem. Empirical results on the LETOR benchmark datasets and a web search engine dataset show that with the subset of training data selected by our approach, the performance of the learned ranking model can be significantly improved.

15.
Lin Ping, Lü Jianchao. Information Science (《情报科学》), 2023, 41(2): 135-142
[Purpose/Significance] This paper proposes a Stacking-ensemble-based strategy for identifying which question-and-answer information will be adopted, in order to promote precise recommendation of Q&A content in online health communities and support the high-quality development of digital medical services. [Method/Process] We build a Stacking ensemble model that uses both ensemble and non-ensemble methods as base learners and logistic regression (LR) as the meta-learner, and compare the prediction accuracy of single models, combinations of same-type models, and combinations of different-type models. A dataset of chronic-disease Q&A from the 寻医问药 platform verifies the model's superiority, and data from the 快速问医生有问必答120 platform verifies its transferability. [Results/Conclusions] Compared with single models, the Stacking ensemble more accurately identifies adopted Q&A information, generalizes well, and can be applied to different online health communities. [Innovation/Limitations] This paper builds a two-stage prediction model based on the Stacking idea and uses machine learning to find the best model combination, significantly improving the accuracy of adoption identification. However, as Q&A data accumulate, the Q&A patterns of online health communities keep evolving; dynamic prediction methods combining historical data with daily updated data are an important direction for future work.

16.
A Discussion of Three Important Issues in Teaching Structural Mechanics
This paper discusses adjustments to the content of the structural mechanics course for civil engineering majors, improvements to its teaching components, and the introduction of new teaching methods and tools. It argues that adjusting the current course content is imperative, and that the content should be divided into three modules and unfolded in teaching from outline to detail. Classroom teaching, out-of-class tutoring, comprehensive training, and assessment of learning outcomes are the four key components of structural mechanics teaching; to improve teaching effectiveness, the latter three particularly need improvement. Since structural mechanics is a relatively mature course, introducing heuristic teaching methods and computer-assisted instruction is essential to enhancing teaching outcomes.

17.
In this article, we focus on Chinese word segmentation, systematically incorporating non-local information based on latent variables and word-level features. Differing from previous work, which captures non-local information using semi-Markov models, we propose an alternative: a latent-variable word segmenter employing word-level features. To reduce the computational complexity of learning non-local information, we further present an improved online training method that arrives at the same objective optimum with significantly accelerated training. We find that the proposed method helps learn long-range dependencies and improves the segmentation quality of long words (for example, complicated named entities). Experimental results demonstrate that the proposed method is effective: with this improvement, evaluations on the data of the second SIGHAN CWS bakeoff show that our system is competitive with state-of-the-art systems.

18.
Semi-supervised anomaly detection methods leverage a few anomaly examples to yield drastically improved performance compared to unsupervised models. However, they still suffer from two limitations: 1) unlabeled anomalies (i.e., anomaly contamination) may mislead the learning process when all unlabeled data are treated as inliers for model training; 2) only discrete supervision information (such as binary or ordinal data labels) is exploited, which leads to suboptimal learning of anomaly scores that essentially follow a continuous distribution. This paper therefore proposes a novel semi-supervised anomaly detection method that devises contamination-resilient continuous supervisory signals. Specifically, we propose a mass interpolation method to diffuse the abnormality of labeled anomalies, creating new data samples labeled with continuous abnormal degrees; meanwhile, the contaminated area can be covered by new samples generated from combinations of correctly labeled data. A feature-learning-based objective is added as an optimization constraint to regularize the network and further enhance robustness with respect to anomaly contamination. Extensive experiments on 11 real-world datasets show that our approach significantly outperforms state-of-the-art competitors by 20%–30% in AUC-PR and obtains more robust and superior performance across different anomaly contamination levels and varying numbers of labeled anomalies.
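The interpolation idea above can be sketched simply: blend a labeled anomaly with an unlabeled (presumed normal) sample and reuse the mixing coefficient as a continuous abnormal degree. A toy sketch of this general mechanism, not the paper's exact mass interpolation scheme:

```python
import numpy as np

rng = np.random.default_rng(0)

def interpolate_anomalies(X_anom, X_unlabeled, n_new=4):
    """Create new samples with continuous abnormal degrees by interpolating
    between labeled anomalies and unlabeled data; the interpolation
    coefficient lam doubles as the continuous supervision signal."""
    samples, degrees = [], []
    for _ in range(n_new):
        a = X_anom[rng.integers(len(X_anom))]
        u = X_unlabeled[rng.integers(len(X_unlabeled))]
        lam = rng.uniform(0.0, 1.0)            # abnormal degree in [0, 1]
        samples.append(lam * a + (1 - lam) * u)
        degrees.append(lam)
    return np.array(samples), np.array(degrees)

X_anom = np.array([[5.0, 5.0], [6.0, 5.5]])    # few labeled anomalies
X_unlab = rng.normal(0.0, 1.0, size=(50, 2))   # mostly inliers
Xs, ds = interpolate_anomalies(X_anom, X_unlab)
print(Xs.shape, ds.round(2))
```

A score regressor trained on `(Xs, ds)` then receives a continuous target instead of binary labels, which is the core of the continuous-supervision idea.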

19.
Sequential minimal optimization (SMO) is quite an efficient algorithm for training the support vector machine. The most important step of this algorithm is the selection of the working set, which greatly affects training speed. The feasible-direction strategy for working set selection can decrease the objective function; however, it may increase the total computation spent selecting the working set in each iteration. In this paper, a new candidate working set (CWS) strategy is presented that considers both the cost of working set selection and cache performance. The new strategy selects several of the most violating samples from the cache as the iterative working sets for the next several optimization steps, which improves the efficiency of kernel cache usage and reduces the computational cost of working set selection. Theoretical analysis and experiments demonstrate that the proposed method reduces training time, especially on large-scale datasets.
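The "several most violating samples" step can be sketched with a common SMO heuristic: rank samples by how strongly they violate the KKT conditions of the dual problem, masking samples at a bound that cannot move in the improving direction. This is a simplified illustration of the general mechanism, not the paper's exact CWS rule:

```python
import numpy as np

def top_violators(grad, y, alpha, C, k=4):
    """Rank samples by KKT violation (a common SMO heuristic) and keep the
    top k as the candidate working set, cached for the next optimization
    steps instead of being recomputed every iteration."""
    # -y_i * grad_i is large for samples that can still improve the objective.
    score = -y * grad
    # Samples at a bound can only move one way; mask infeasible directions.
    feasible = ((alpha > 0) | (score > 0)) & ((alpha < C) | (score < 0))
    score = np.where(feasible, np.abs(score), -np.inf)
    return np.argsort(score)[::-1][:k]

# Toy dual state: gradients, labels, and multipliers for five samples.
grad = np.array([-0.9, 0.2, -0.1, 0.7, -0.5])
y = np.array([1, -1, 1, -1, 1])
alpha = np.array([0.0, 0.5, 1.0, 0.0, 0.3])
print(top_violators(grad, y, alpha, C=1.0, k=2))
```

Keeping the top-k violators cached means several subsequent sub-problems reuse kernel rows already in memory, which is where the cache-efficiency gain comes from.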

20.
In this paper, we introduce a new collection selection strategy for search engines with document-partitioned indexes. Our method selects the document partitions most likely to deliver the best results for the formulated queries, reducing the number of queries submitted to each partition. It employs learning algorithms capable of ranking the partitions, maximizing the probability of recovering documents with high gain. The method operates by building vector representations of each partition in the term space spanned by the queries. The proposed method generalizes to new queries, producing document lists with high precision for queries not seen during training. To update the partition representations, our method employs incremental learning strategies: beginning with an inversion test of the partition lists, we identify queries that contribute new information and add them to the training phase. The experimental results show that our collection selection method compares favorably with state-of-the-art methods. In addition, our method achieves suitable performance with low parameter sensitivity, making it applicable to search engines with hundreds of partitions.
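The core routing step above, ranking partitions by how well their term-space representation matches a query, can be sketched with cosine similarity. A toy illustration with invented partition vectors (the paper learns these representations; here they are simply given):

```python
import numpy as np

def rank_partitions(partition_vecs, query_vec):
    """Rank document partitions by cosine similarity between the query and
    each partition's representation in the query-term space; the query is
    then routed only to the top-ranked partitions."""
    P = partition_vecs / np.linalg.norm(partition_vecs, axis=1, keepdims=True)
    q = query_vec / np.linalg.norm(query_vec)
    return np.argsort(P @ q)[::-1]

# Three partitions represented over a 4-term vocabulary.
partitions = np.array([[5., 0., 1., 0.],    # mostly term 0
                       [0., 4., 0., 1.],    # mostly term 1
                       [1., 1., 5., 5.]])   # terms 2 and 3
query = np.array([0., 1., 0., 0.])          # query about term 1
print(rank_partitions(partitions, query))   # partition 1 ranked first
```

Submitting the query to only the top one or two partitions is what cuts per-partition load while preserving precision for queries whose terms concentrate in few partitions.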


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司)  京ICP备09084417号