Similar Documents
20 similar documents found (search time: 62 ms)
1.
Multi-label classification (MLC) has attracted many researchers in the field of machine learning, as it has a straightforward problem statement with varied solution approaches. Multi-label classifiers predict multiple labels for a single instance. The problem becomes challenging as the number of features grows, especially when many features and labels depend on each other, so dimensionality reduction is required before applying any multi-label learning method. This paper introduces FS-MLC (Feature Selection for Multi-Label Classification using Clustering in feature-space), a wrapper feature selection method that uses clustering to find the similarity among features and uses example-based precision and recall as the metrics for feature ranking, improving the performance of the associated classifier on sample-based measures. First, clusters are created over the features, treating them as instances; then one feature from each cluster is selected as the representative of all the features in that cluster. This reduces the number of features, since a single feature stands in for the several features of its cluster. The method requires neither parameter tuning nor a user-specified threshold on the number of selected features. Extensive experiments evaluate the efficacy of the reduced feature sets on nine benchmark MLC datasets across twelve performance measures. The results show an impressive improvement in sample-based precision, recall, and F1-score while discarding 23%-93% of the features.
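A minimal sketch of the cluster-then-represent idea, assuming k-means as the clustering step and the feature nearest each centroid as the representative (FS-MLC itself determines the number of clusters without user input, so `n_clusters` below is purely illustrative):

```python
import numpy as np
from sklearn.cluster import KMeans

def representative_features(X, n_clusters=10):
    """X: (n_samples, n_features) array. Returns indices of representative features."""
    F = X.T                                    # treat each feature as an instance
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(F)
    reps = []
    for c in range(n_clusters):
        members = np.where(km.labels_ == c)[0]
        # keep the cluster member closest to the centroid as its representative
        dists = np.linalg.norm(F[members] - km.cluster_centers_[c], axis=1)
        reps.append(int(members[np.argmin(dists)]))
    return sorted(reps)

X = np.random.default_rng(0).normal(size=(100, 40))   # toy data: 40 features
print(representative_features(X, n_clusters=8))       # 8 surviving feature indices
```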

2.
Contextual feature selection for text classification
We present a simple approach for the classification of “noisy” documents using bigrams and named entities. The approach combines conventional feature selection with a contextual step that filters out passages around selected features. Originally designed for call-for-tender documents, the method can be useful for other web collections that also contain non-topical content. Experiments are conducted on our in-house collection as well as on the 4-Universities data set, Reuters-21578, and 20 Newsgroups. We find a significant improvement on our collection and the 4-Universities data set (10.9% and 4.1%, respectively). Although the best results are obtained by combining bigrams and named entities, the impact of the latter is not found to be significant.
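As a hedged illustration of the conventional feature-selection ingredient only (the named-entity features and the contextual passage filter are not reproduced), bigram features can be ranked with a chi-square test; the corpus and labels below are toy placeholders:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import SelectKBest, chi2

docs = ["call for tenders for road maintenance works",
        "the home team won the championship game",
        "tender submission deadline for the works contract",
        "match report from the weekend league game"]
labels = [1, 0, 1, 0]                        # 1 = call-for-tender, 0 = other

vec = CountVectorizer(ngram_range=(2, 2))    # bigram features only
X = vec.fit_transform(docs)
selector = SelectKBest(chi2, k=5).fit(X, labels)
print(vec.get_feature_names_out()[selector.get_support()])
```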

3.
Artificial intelligence (AI) is rapidly becoming the pivotal solution to support critical judgments in many life-changing decisions; a biased AI tool can be particularly harmful because such systems can promote or demote people’s well-being. Consequently, government regulations are introducing rules that prohibit the use of sensitive features (e.g., gender, race, religion) in an algorithm’s decision-making process to avoid unfair outcomes. Unfortunately, such restrictions may not be sufficient to protect people from unfair decisions, as algorithms can still behave in a discriminatory manner: even when sensitive features are omitted (fairness through unawareness), they may be correlated with other features, called proxy features. This study shows how to unveil whether a black-box model that complies with the regulations is still biased. We propose an end-to-end bias detection approach exploiting a counterfactual reasoning module and an external classifier for sensitive features. Specifically, the counterfactual analysis finds the minimum-cost variations that grant a positive outcome, while the classifier detects non-linear patterns of non-sensitive features that proxy sensitive characteristics. The experimental evaluation reveals the proposed method’s efficacy in detecting classifiers that learn from proxy features. We also scrutinize the impact of state-of-the-art debiasing algorithms in alleviating the proxy-feature problem.
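A minimal sketch of the external-classifier idea, under the assumption that a sensitive attribute predictable from the remaining features well above chance signals proxy features; the data is synthetic and the counterfactual module is not reproduced:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
sensitive = rng.integers(0, 2, size=1000)            # a protected attribute
proxy = sensitive + rng.normal(0, 0.3, size=1000)    # feature correlated with it
noise = rng.normal(size=(1000, 5))                   # unrelated features
X = np.column_stack([proxy, noise])                  # "non-sensitive" feature set

auc = cross_val_score(RandomForestClassifier(n_estimators=100, random_state=0),
                      X, sensitive, cv=5, scoring="roc_auc").mean()
print(f"proxy-detection AUC: {auc:.2f}")   # well above 0.5 => proxies present
```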

4.
Due to the particularities of Chinese word formation, the Chinese Named Entity Recognition (NER) task has attracted extensive attention in recent years. Some researchers have tried to solve this problem with a multimodal method combining acoustic and text features. However, the text-speech data pairs these methods require are scarce in real-world scenarios, which limits their applicability. To address this, we propose a multimodal Chinese NER method called USAF, which uses synthesized acoustic features instead of actual human speech. USAF aligns text and acoustic features through unique position embeddings and uses a multi-head attention mechanism to fuse the features of the two modalities, which stably improves the performance of Chinese NER. We evaluated USAF on three Chinese NER datasets. Experimental results show that USAF achieves a stable improvement over text-only methods on each dataset and outperforms the SOTA external-vocabulary-based method on two of them: its F1 score is 1.84 and 1.24 points higher on CNERTA and Aishell3-NER, respectively.
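A hedged sketch of cross-modal fusion with multi-head attention in the spirit of USAF, where text embeddings attend over (synthesized) acoustic features; the dimensions, residual layout, and alignment are illustrative assumptions rather than the paper's exact architecture:

```python
import torch
import torch.nn as nn

class CrossModalFusion(nn.Module):
    def __init__(self, dim=256, heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, text, acoustic):
        # text queries attend over acoustic keys/values; both streams are
        # assumed already position-embedded and aligned to the same length
        fused, _ = self.attn(query=text, key=acoustic, value=acoustic)
        return self.norm(text + fused)        # residual connection

text = torch.randn(2, 20, 256)                # (batch, seq_len, dim)
acoustic = torch.randn(2, 20, 256)
print(CrossModalFusion()(text, acoustic).shape)   # torch.Size([2, 20, 256])
```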

5.
The demand to detect opinionated spam, using opinion-mining applications to prevent its damaging effects on e-commerce reputations, is on the rise in many business sectors globally. Existing spam detection techniques consider only one or two types of spam entities, such as review, reviewer, group of reviewers, and product, and they use a limited number of features covering behaviour, content, and the relations between entities, which reduces detection accuracy. Moreover, these techniques mostly rely on synthetic datasets for evaluation and thus transfer poorly to real-world environments. We therefore propose a novel graph-based model, “Multi-iterative Graph-based opinion Spam Detection” (MGSD), in which all entity types are considered simultaneously within a unified structure. The model reveals both implicit (same-entity) and explicit (cross-entity) relationships. MGSD evaluates the ‘spamicity’ of entities efficiently by applying a novel multi-iterative algorithm that considers different sets of factors to update each entity’s spamicity score. To enhance detection accuracy, a large set of existing weighted features, together with novel features from different categories, was selected using a combination of feature-fusion techniques and machine learning (ML) algorithms. Because it employs domain-independent features, MGSD generalizes to various opinionated documents. Our feature selection and feature fusion yielded a marked improvement in spam detection: MGSD improves the accuracy of state-of-the-art ML and graph-based techniques by around 5.6% and 4.8%, respectively, achieving 93% accuracy on our synthetic crowdsourced dataset and 95.3% on Ott’s crowdsourced dataset.
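A toy sketch of multi-iterative spamicity scoring on a reviewer-review graph, loosely following the idea above; the graph, the 0.5 damping weights, and the update rule are illustrative assumptions:

```python
edges = {               # reviewer -> reviews written (toy bipartite graph)
    "r1": ["v1", "v2"],
    "r2": ["v2", "v3"],
}
spamicity = {"r1": 0.9, "r2": 0.1, "v1": 0.5, "v2": 0.5, "v3": 0.5}

for _ in range(20):     # fixed small number of iterations
    for reviewer, reviews in edges.items():
        # a review inherits part of its reviewer's spamicity, and vice versa
        for v in reviews:
            spamicity[v] = 0.5 * spamicity[v] + 0.5 * spamicity[reviewer]
        spamicity[reviewer] = (0.5 * spamicity[reviewer] +
                               0.5 * sum(spamicity[v] for v in reviews)
                               / len(reviews))
print(spamicity)
```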

6.
Many Chinese NER models focus only on lexical and radical information, ignoring the fact that the pronunciation of Chinese entities also follows certain rules. In this paper, we propose VisPhone, which incorporates Chinese characters’ phonetic features into a Transformer encoder along with lattice and visual features. We present the common rules for the pronunciation of Chinese entities and explore the most appropriate method to encode them. VisPhone uses two identical cross-transformer encoders to fuse the visual and phonetic features of the input characters with the text embedding, and a selective fusion module produces the final features. We conducted experiments on four well-known Chinese NER benchmark datasets (OntoNotes4.0, MSRA, Resume, and Weibo), obtaining F1 scores of 82.63%, 96.07%, 96.26%, and 70.79%, improvements of 0.79%, 0.32%, 0.39%, and 3.47%, respectively. Ablation experiments further demonstrate the effectiveness of VisPhone.
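As a hedged sketch of what a selective fusion module of this kind might look like (the gate design and sizes below are our assumptions, not VisPhone's exact module), a learned per-position gate can weight the three modality streams:

```python
import torch
import torch.nn as nn

class SelectiveFusion(nn.Module):
    def __init__(self, dim=256):
        super().__init__()
        self.gate = nn.Linear(3 * dim, 3)     # one logit per modality

    def forward(self, text, visual, phonetic):
        # per-position softmax weights over the three modalities
        w = torch.softmax(self.gate(torch.cat([text, visual, phonetic], -1)), -1)
        return (w[..., 0:1] * text + w[..., 1:2] * visual
                + w[..., 2:3] * phonetic)

t = v = p = torch.randn(2, 20, 256)
print(SelectiveFusion()(t, v, p).shape)       # torch.Size([2, 20, 256])
```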

7.
Named Entity Recognition (NER) aims to automatically extract specific entities from unstructured text. Compared with English NER, Chinese NER is more challenging for recognizing entity boundaries because there are no explicit delimiters between Chinese characters. Most previous research, however, focused on character-level semantic information and ignored the importance of phonetic characteristics. To address this, we integrate phonetic features of Chinese characters with lexicon information to help disambiguate entity boundaries, fully exploiting the nature of Chinese as a pictophonetic language. In addition, we propose a novel multi-tagging-scheme learning method, based on the multi-task learning paradigm, that alleviates the data-sparsity and error-propagation problems of previous tagging schemes by separately annotating the segmentation information of entities and their corresponding entity types. Extensive experiments on four Chinese NER benchmark datasets (OntoNotes4.0, MSRA, Resume, and Weibo) show that our proposed method consistently outperforms existing state-of-the-art baseline models. Ablation experiments further demonstrate that both the phonetic feature and the multi-tagging scheme contribute significantly to the improvement on the Chinese NER task.
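A minimal illustration of the multi-tagging-scheme idea: a joint tag such as "B-PER" is decomposed into a segmentation tag and an entity-type tag that can be annotated and learned as separate tasks (the BIOES tag convention below is assumed for illustration):

```python
def split_tags(joint_tags):
    """Decompose joint NER tags into segmentation and entity-type streams."""
    seg, typ = [], []
    for t in joint_tags:
        if t == "O":
            seg.append("O"); typ.append("O")
        else:
            boundary, entity_type = t.split("-", 1)   # e.g. "B-PER" -> B, PER
            seg.append(boundary); typ.append(entity_type)
    return seg, typ

seg, typ = split_tags(["B-PER", "E-PER", "O", "S-LOC"])
print(seg)   # ['B', 'E', 'O', 'S']   -- segmentation task labels
print(typ)   # ['PER', 'PER', 'O', 'LOC']   -- type task labels
```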

8.
In this paper, we focus on discovering internally connected communities in event-based social networks (EBSNs) and propose a community detection method that utilizes social influence between users. Unlike traditional social networks, EBSNs contain different types of entities and links, and their users exhibit more complex behaviours, which makes traditional social-influence computation perform poorly in EBSNs. To quantify pairwise social influence accurately, we first compute two types of social influence, structure-based and behaviour-based, from the online social network structure and users’ offline social behaviours. In particular, based on the specific features of EBSNs, the similarity of user preferences along three aspects (topics, regions, and organizers) is used to measure behaviour-based influence. We then obtain a unified pairwise social influence by combining the two types through a weight function. Next, we present a social-influence-based community detection algorithm, SICD. In SICD, inspired by the nonlinear feature-learning ability of the autoencoder, we devise a neighborhood-based deep autoencoder to obtain nonlinear, community-oriented latent representations of users, and then apply the k-means algorithm for community detection. Experimental results on a real-world dataset show the effectiveness of the proposed algorithm.
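A compact, hedged sketch of the final SICD stage as described above: latent user representations from a (here, deliberately tiny) autoencoder followed by k-means; the neighborhood-based variant and the influence features themselves are not reproduced:

```python
import torch
import torch.nn as nn
from sklearn.cluster import KMeans

X = torch.randn(200, 64)                     # per-user influence feature vectors
enc = nn.Sequential(nn.Linear(64, 16), nn.ReLU())
dec = nn.Linear(16, 64)
opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-3)

for _ in range(200):                         # train on reconstruction loss
    opt.zero_grad()
    loss = nn.functional.mse_loss(dec(enc(X)), X)
    loss.backward(); opt.step()

Z = enc(X).detach().numpy()                  # nonlinear latent representations
communities = KMeans(n_clusters=5, n_init=10).fit_predict(Z)
print(communities[:20])
```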

9.
The exploration of legal documents in the Brazilian Judiciary lacks reliable annotated corpora to support the development of new Natural Language Processing (NLP) applications. This paper therefore presents a step toward exploring legal decisions with Named Entity Recognition (NER) in the context of the Brazilian Supreme Court (STF). We present a case study on the fine-grained annotation of legal decisions, performed by law students as annotators, in which two levels of nested legal entities were annotated: a preliminary study mapped four coarse-grained legal entity types and twenty-four fine-grained nested ones. The final result is a corpus of 594 STF decisions annotated by the 76 law students with the highest average inter-annotator agreement scores. We also present two NER baselines based on Conditional Random Fields (CRFs) and Bidirectional Long Short-Term Memory networks (BiLSTMs). This corpus is the first of its kind and the most extensive known Portuguese corpus dedicated to legal named entity recognition, openly available to support further research in similar contexts.
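A hedged sketch of a CRF baseline of the kind reported, using the third-party sklearn-crfsuite package; the token features and the one-sentence corpus are illustrative placeholders:

```python
import sklearn_crfsuite

def feats(sent, i):
    w = sent[i]
    return {"word.lower": w.lower(), "is_upper": w.isupper(),
            "is_title": w.istitle(), "prev": sent[i - 1] if i else "<s>"}

# toy Portuguese sentence with ORG annotations
sents = [["Supremo", "Tribunal", "Federal", "decidiu"]]
X = [[feats(s, i) for i in range(len(s))] for s in sents]
y = [["B-ORG", "I-ORG", "I-ORG", "O"]]

crf = sklearn_crfsuite.CRF(algorithm="lbfgs", max_iterations=50)
crf.fit(X, y)
print(crf.predict(X))
```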

10.
Most existing search engines focus on document retrieval. However, information needs are certainly not limited to finding relevant documents; a user may instead want to find relevant entities such as persons and organizations. In this paper, we study the problem of related entity finding. Our goal is to rank entities based on their relevance to a structured query, which specifies an input entity, the type of related entities, and the relation between the input and related entities. We first discuss a general probabilistic framework, derive six possible retrieval models for ranking the related entities, and then compare these models both analytically and empirically. To further improve performance, we study feedback in the context of related entity finding. Specifically, we propose a mixture-model-based feedback method that uses pseudo-feedback entities to estimate an enriched model of the relation between the input and related entities. Experimental results over two standard TREC collections show that the derived relation generation model combined with the relation feedback method performs better than the other models.
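As a hedged illustration (notation ours, not necessarily the paper's), mixture-model feedback of this kind typically assumes each pseudo-feedback entity's context is generated by interpolating a latent relation model $\theta_R$ with a background collection model $C$, and estimates $\theta_R$ by maximizing

$$\log p(F \mid \theta_R) = \sum_{e \in F} \sum_{w} c(w, e)\, \log\left[(1 - \lambda)\, p(w \mid \theta_R) + \lambda\, p(w \mid C)\right],$$

where $c(w, e)$ is the count of word $w$ in the context of feedback entity $e$ and $\lambda$ is a fixed interpolation weight; the estimated $\theta_R$ (typically via EM) then enriches the relation model used to rank candidate entities.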

11.
丁晟春  方振  王楠 《现代情报》2009,40(3):103-110
[Purpose/Significance] To address the scattered, disordered, and fragmented state of multi-source heterogeneous enterprise data on open web platforms, this paper proposes a Bi-LSTM-CRF deep learning model for named entity recognition in the business domain. [Method/Process] The method recognizes three types of named entities: full enterprise names, abbreviated enterprise names, and person names. [Result/Conclusion] Experimental results show an average F-value of 90.85% across the three entity types, validating the effectiveness of the proposed method and demonstrating that it effectively improves named entity recognition in the business domain.
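A skeletal Bi-LSTM-CRF tagger of the kind the abstract describes, written with PyTorch and the third-party pytorch-crf package; vocabulary, tag-set, and layer sizes are illustrative assumptions:

```python
import torch
import torch.nn as nn
from torchcrf import CRF   # pip install pytorch-crf

class BiLSTMCRF(nn.Module):
    def __init__(self, vocab=5000, tags=7, emb=128, hidden=256):
        super().__init__()
        self.emb = nn.Embedding(vocab, emb)
        self.lstm = nn.LSTM(emb, hidden // 2, bidirectional=True, batch_first=True)
        self.proj = nn.Linear(hidden, tags)      # per-token emission scores
        self.crf = CRF(tags, batch_first=True)   # transition scores + decoding

    def loss(self, x, y):
        h, _ = self.lstm(self.emb(x))
        return -self.crf(self.proj(h), y)        # negative log-likelihood

    def decode(self, x):
        h, _ = self.lstm(self.emb(x))
        return self.crf.decode(self.proj(h))     # best tag sequence per sentence

model = BiLSTMCRF()
x = torch.randint(0, 5000, (2, 10))              # toy batch of character ids
y = torch.randint(0, 7, (2, 10))                 # toy gold tags
print(model.loss(x, y).item(), model.decode(x)[0])
```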

12.
Most previous work on feature selection emphasized only reducing the high dimensionality of the feature space. But when many features are highly redundant with each other, we must employ other means, such as more complex dependence models like Bayesian network classifiers. In this paper, we introduce a new information-gain- and divergence-based feature selection method for statistical machine-learning-based text categorization that does not rely on such complex dependence models. Our method strives to reduce redundancy between features while maintaining information gain when selecting features for text categorization. Empirical results on a number of datasets show that our method is more effective than Koller and Sahami’s method [Koller, D., & Sahami, M. (1996). Toward optimal feature selection. In Proceedings of ICML-96, 13th international conference on machine learning], one of the greedy feature selection methods, and than the conventional information gain commonly used for feature selection in text categorization. Moreover, our method sometimes improves conventional machine learning algorithms enough that they outperform support vector machines, which are known to give the best classification accuracy.
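For reference, the conventional information-gain criterion the method builds on, IG(t) = H(C) - [P(t)H(C|t) + P(¬t)H(C|¬t)] for a binary term indicator t over classes C, can be computed as follows (toy inputs; the proposed divergence-based redundancy term is not reproduced):

```python
import numpy as np

def entropy(p):
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

def info_gain(term_present, labels):
    """term_present: bool array per document; labels: class per document."""
    classes = np.unique(labels)
    def class_dist(mask):
        return np.array([(labels[mask] == c).mean() for c in classes])
    p_t = term_present.mean()
    prior = entropy(class_dist(np.ones_like(term_present, dtype=bool)))
    cond = (p_t * entropy(class_dist(term_present)) +
            (1 - p_t) * entropy(class_dist(~term_present)))
    return prior - cond

labels = np.array([0, 0, 1, 1, 1, 0])
term = np.array([True, True, False, False, False, True])  # perfectly predictive
print(info_gain(term, labels))   # equals H(C) = 1.0, the maximal gain
```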

13.
Knowledge graphs are widely used in retrieval systems, question answering (QA) systems, hypothesis generation systems, etc. Representation learning provides a way to mine knowledge graphs for missing relations, and translation-based embedding models are a popular form of representation model. Shortcomings of translation-based models, however, limit their practicality as knowledge completion algorithms; the proposed model helps address some of these shortcomings. We found that the similarity between the graph-structural features of two entities correlates with the relations of those entities, and this correlation can help solve the problems caused by unbalanced relations and reciprocal relations. We used Node2vec, a graph embedding algorithm, to represent information about an entity’s graph structure, and we introduce a cascade model that incorporates graph embedding and knowledge embedding into a unified framework. The cascade model first refines the feature representation in its first two stages (Local Optimization Stage) and then uses backward propagation to optimize the parameters of all stages (Global Optimization Stage). This enhances the knowledge representation of existing translation-based algorithms by taking both semantic and graph features into account and fusing them to extract more useful information. In addition, different cascade structures are designed to find the optimal solution to the knowledge inference and retrieval problem. The proposed model was verified on three mainstream knowledge graphs: WN18, FB15k, and BioChem. Experimental results were validated with the Hits@10 rate on the entity prediction task. The proposed model performed better than TransE, giving average improvements of 2.7% on WN18, 2.3% on FB15k, and 28% on BioChem; improvements were particularly marked where unbalanced and reciprocal relations were a problem. Furthermore, the stepwise-cascade structure proved more effective and significantly outperformed the other baselines.
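A numpy sketch of the TransE scoring the cascade builds on, with a hypothetical fusion step in which each entity embedding concatenates a trainable knowledge part with a fixed Node2vec structural part; this illustrates the idea, not the paper's exact cascade:

```python
import numpy as np

rng = np.random.default_rng(0)
n_ent, k_kg, k_n2v = 100, 50, 32
kg = rng.normal(size=(n_ent, k_kg))           # trainable knowledge embeddings
n2v = rng.normal(size=(n_ent, k_n2v))         # precomputed Node2vec embeddings
rel = rng.normal(size=(10, k_kg + k_n2v))     # relation vectors in fused space

def embed(e):
    return np.concatenate([kg[e], n2v[e]])    # fused entity representation

def score(h, r, t):
    # TransE: a true triple (h, r, t) should satisfy h + r ≈ t (low distance)
    return np.linalg.norm(embed(h) + rel[r] - embed(t))

print(score(0, 3, 1))   # lower score => more plausible triple
```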

14.
Privacy has raised considerable concerns recently, especially with the advent of information explosion and the numerous data mining techniques that explore the information inside large volumes of data. These data are often collected and stored across different institutions (banks, hospitals, etc.), i.e., cross-silo. In this context, cross-silo federated learning has become prominent for tackling privacy issues: only model updates are transmitted from institutions to servers, without revealing institutions’ private information. In this paper, we propose a cross-silo federated XGBoost approach to the federated anomaly detection problem, which aims to identify abnormalities in extremely unbalanced datasets (e.g., credit card fraud detection) and can be considered a special classification problem. We design two privacy-preserving mechanisms tailored to federated XGBoost: anonymity-based data aggregation and local differential privacy. In the anonymity-based data aggregation scenario, we cluster data into groups and use cluster-level features to train the model. In the local differential privacy scenario, we design a federated XGBoost framework that incorporates differential privacy into parameter transmission. Experimental results over two datasets show the effectiveness of our proposed schemes compared with existing methods.
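A minimal sketch of the local-differential-privacy mechanism described above, assuming the statistics a silo transmits (e.g., per-bin gradient sums used for XGBoost's histogram splits) are perturbed with Laplace noise calibrated to sensitivity/epsilon; values and the sensitivity are illustrative:

```python
import numpy as np

def ldp_histogram(hist, epsilon, sensitivity=1.0, seed=0):
    """Perturb a statistics vector before transmission (Laplace mechanism)."""
    rng = np.random.default_rng(seed)
    scale = sensitivity / epsilon             # smaller epsilon => more noise
    return hist + rng.laplace(0.0, scale, size=hist.shape)

silo_hist = np.array([12.0, 3.5, 8.1, 0.4])   # per-bin gradient sums at a silo
print(ldp_histogram(silo_hist, epsilon=1.0))   # noisy stats sent to the server
```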

15.
Webpages are mainly distinguished by their topic (e.g., politics, sports) and genre (e.g., blogs, homepages, e-shops). Automatic detection of webpage genre could considerably enhance the ability of modern search engines to focus on the requirements of the user’s information need. In this paper, we present an approach to webpage genre detection based on fully automated extraction of a feature set that represents the style of webpages. The features we propose (character n-grams of variable length and HTML tags) are language-independent and easily extracted, and they can adapt to the properties of still-evolving web genres and the noisy environment of the web. Experiments on two publicly available corpora show that the performance of the proposed approach is superior to previously reported results. It is also shown that character n-grams are better features than words as dimensionality increases, and that the binary representation is more effective than the term-frequency representation for both feature types. Moreover, we perform a series of cross-check experiments (e.g., training on one genre palette and testing on a different one, and using features extracted from one corpus to discriminate the genres of the other) to illustrate the robustness of our approach and its ability to capture the general stylistic properties of genre categories even when the feature set is not optimized for the given corpus.
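A short sketch of the binary character n-gram features described above (HTML-tag features omitted); the two toy pages stand in for a genre corpus:

```python
from sklearn.feature_extraction.text import CountVectorizer

pages = ["<html><body>Buy now! Great prices on all items.</body></html>",
         "<html><body>My research interests include NLP.</body></html>"]

# variable-length character n-grams, binary presence/absence representation
vec = CountVectorizer(analyzer="char", ngram_range=(3, 5), binary=True)
X = vec.fit_transform(pages)          # 1 = n-gram present, 0 = absent
print(X.shape)                        # (2 pages, extracted n-gram vocabulary)
```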

16.
In this paper, we address the problem of extracting relations with multiple arguments, where the relation between entities is framed by multiple attributes. Such complex relations are successfully extracted using a syntactic-tree-based pattern matching method. While induced subtree patterns are typically used to model the relations of multiple entities, we argue that hard pattern matching between a pattern database and instance trees does not allow us to examine similar tree structures. Thus, we explore a tree-alignment-based soft pattern matching approach to improve the coverage of induced patterns. Our pattern-learning algorithm iteratively searches for the most influential dependency-tree patterns as well as a control parameter for each pattern. The resulting method outperforms two baselines, a pairwise approach with a tree-kernel support vector machine and a hard pattern matching method, on two standard datasets for a complex relation extraction task.

17.
This study addresses the use of different features to complement synset-based and bag-of-words representations of texts when applying classical ML approaches to spam filtering (Ferrara, 2019). Although a large number of complementary features exist, to improve the applicability of this study we selected only those that can be computed regardless of the communication channel used to distribute content. Feature evaluation was performed on content distributed through different channels (social networks and email) with several classifiers (AdaBoost, Flexible Bayes, Naïve Bayes, Random Forests, and SVMs). The results reveal the usefulness of detecting certain non-textual entities (such as URLs, Uniform Resource Locators) in the studied distribution channels. Moreover, we found that compression properties and/or information on the probability of correctly guessing the language of target texts can successfully improve classification in a wide range of situations. Finally, we also identified features that are influenced by the specific fashions and habits of users of certain Internet services (e.g., words written in capital letters) and that are not useful for spam filtering.
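A hedged sketch of a few channel-independent features of the kinds examined: a compression ratio, a URL count, and, for contrast, the capital-letter habit feature the study found unhelpful; the extractor below is our illustration, not the paper's code (the language-guessing feature would need an external language-identification library and is omitted):

```python
import re
import zlib

def channel_free_features(text):
    raw = text.encode("utf-8")
    words = text.split()
    return {
        # repetitive spam tends to compress well (lower ratio)
        "compression_ratio": len(zlib.compress(raw)) / max(len(raw), 1),
        "url_count": len(re.findall(r"https?://\S+", text)),
        # habit-driven feature the study found NOT useful, shown for contrast
        "caps_word_ratio": sum(w.isupper() for w in words) / max(len(words), 1),
    }

print(channel_free_features("WIN FREE MONEY NOW at http://example.com !!!"))
```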

18.
Unsupervised feature selection is attractive in many practical applications because it needs no semantic labels during learning. However, the absence of labels makes unsupervised feature selection more challenging, as the method can be affected by noise, redundancy, or missing values in the originally extracted features. Most current methods either account for noise in sparse learning or consider the internal structure of the data, but not both, leading to suboptimal results. To relieve these limitations and improve effectiveness, we propose Adaptive Dictionary and Structure Learning (ADSL), which conducts spectral learning and sparse dictionary learning in a unified framework. Specifically, we adaptively update the dictionary via sparse dictionary learning and introduce a spectral learning method with an adaptively updated affinity matrix, so that redundant features are removed while the intrinsic structure of the original data is retained. In addition, we adopt matrix completion in our framework to handle the missing-data problem. We validate the method on several public datasets. Experimental results show that our model not only outperforms some state-of-the-art methods on complete datasets but also achieves satisfying results on incomplete ones.

19.
《Research Policy》2019,48(10):103557
Complex societal or environmental problems require fast and substantial socio-technical transitions. In the case of climate change, for instance, these transitions need to take place in the energy, transport, and several industry sectors. To induce and accelerate such transitions, numerous policy interventions are required, which interact with each other in policy mixes. While several conceptual studies on policy mixes have been published recently, there is very little empirical research apart from single-case or small-n studies, and it has been prominently argued that the debate about policy mixes has reached an impasse partly because of this lack of empirical work. This paper addresses the gap by providing a first analysis of the temporal dynamics of complex policy mixes. To do so, we develop a conceptualization and measurement of policy-mix balance across instrument types as well as of policy-mix design features (intensity as a general design feature and technology specificity as a technology-focused one). This allows us to answer the question of how the temporal dynamics of policy mixes differ between countries in their balance and design features. Our measurement approach is developed bottom-up: policies are assessed individually and then aggregated systematically at the policy-mix level, which helps overcome the ‘dependent variable problem in the study of policy change’, i.e., the problem of measuring policy output. More specifically, we develop a comparative dataset of 522 renewable energy policies in nine OECD countries. Our analysis shows that countries’ policy-mix dynamics vary strongly on some variables (e.g., technology specificity) but less on others (e.g., balance). As a validity check, we also test the effects of these mix dynamics on policy outcomes in the form of renewable energy technology diffusion. We reflect on our findings in light of the theoretical debates around policy mixes and policy design and discuss how our results provoke an agenda for the new generation of research on policy mixes, with a particular focus on the ‘politics of policy mixes’.

20.
Gene Ontology (GO) consists of three structured controlled vocabularies (GO domains) developed to describe attributes of gene products, and its annotations are crucial for providing a common gateway to different model-organism databases. This paper explores an effective application of text categorization methods to this highly practical problem in biology. As a first step, we tackle the automatic GO annotation task posed in the Text Retrieval Conference (TREC) 2004 Genomics Track: given a pair of genes and an article reference where the genes appear, the task simulates assigning GO domain codes. We approach the problem with careful consideration of the specialized terminology and pay special attention to the various forms of gene synonyms, so as to exhaustively locate occurrences of the target gene. We extract the words around the spotted gene occurrences and use them to represent the gene for GO domain code annotation. We treat the task as a text categorization problem and adopt a variant of kNN with supervised term-weighting schemes, which placed our method among the top-performing systems in the official TREC evaluation. Furthermore, we investigate different feature selection policies in conjunction with the treatment of terms associated with negative instances. Our experiments reveal that round-robin feature-space allocation combined with the elimination of negative terms substantially improves performance as GO terms become more specific.
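A hedged sketch of round-robin feature-space allocation: each class contributes its next best-ranked term in turn until the budget is filled, so no class crowds out the others; the per-class rankings are assumed to come from any scoring function (e.g., chi-square), and the helper below is our illustration:

```python
def round_robin_select(ranked_per_class, budget):
    """ranked_per_class: {class: [terms, best first]} -> selected term list."""
    selected = []
    iters = {c: iter(terms) for c, terms in ranked_per_class.items()}
    while len(selected) < budget and iters:
        for c in list(iters):                 # visit each class in turn
            for term in iters[c]:
                if term not in selected:      # skip terms another class took
                    selected.append(term)
                    break
            else:
                del iters[c]                  # this class's list is exhausted
            if len(selected) >= budget:
                break
    return selected

ranked = {"GO:MF": ["binding", "kinase", "receptor"],
          "GO:BP": ["transcription", "binding", "apoptosis"]}
print(round_robin_select(ranked, budget=4))
# ['binding', 'transcription', 'kinase', 'apoptosis']
```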
