Similar Documents (20 results)
1.
We address the problem of finding similar historical questions that are semantically equivalent or relevant to an input query question in community question-answering (CQA) sites. One of the main challenges of this task is that questions are usually long and often contain peripheral information in addition to the main goal of the question. To address this, we propose an end-to-end Hierarchical Compare Aggregate (HCA) model that handles the problem without any task-specific features. We first split the questions into sentences and compare every sentence pair of the two questions with a proposed Word-Level Compare-Aggregate (WLCA) model; the comparison results are then aggregated by a proposed Sentence-Level Compare-Aggregate model to make the final decision. To handle the problem of insufficient training data, we propose a sequential transfer learning approach that pre-trains the WLCA model on a large paraphrase detection dataset. Our experiments on two editions of the SemEval benchmark datasets and the domain-specific AskUbuntu dataset show that our model outperforms state-of-the-art models.

2.
Paraphrase detection is an important task in text analytics with numerous applications such as plagiarism detection, duplicate question identification, and enhanced customer support helpdesks. Deep models have been proposed for representing and classifying paraphrases. These models, however, require large quantities of human-labeled data, which is expensive to obtain. In this work, we present a data augmentation strategy and a multi-cascaded model for improved paraphrase detection in short texts. Our data augmentation strategy treats the notions of paraphrase and non-paraphrase as binary relations over the set of texts. It then uses graph-theoretic concepts to efficiently generate additional paraphrase and non-paraphrase pairs in a sound manner. Our multi-cascaded model employs three supervised feature learners (cascades) based on CNN and LSTM networks, with and without soft attention. The learned features, together with hand-crafted linguistic features, are then forwarded to a discriminator network for final classification. Our model is both wide and deep and provides greater robustness across clean and noisy short texts. We evaluate our approach on three benchmark datasets and show that it produces comparable or state-of-the-art performance on all three.
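As a rough illustration of the augmentation idea (not the paper's exact procedure), the sketch below treats labelled paraphrase pairs as edges of a graph and derives extra pairs from connected components; the toy texts and the networkx usage are illustrative assumptions.

```python
# Hypothetical illustration of graph-based paraphrase-pair augmentation.
import itertools
import networkx as nx

# Labelled pairs: (text_a, text_b, 1 = paraphrase, 0 = non-paraphrase)
pairs = [
    ("how do i reset my password", "password reset steps", 1),
    ("password reset steps", "forgot my password, what now", 1),
    ("how do i reset my password", "how do i delete my account", 0),
]

# Edges of the graph are the labelled paraphrase pairs.
g = nx.Graph()
g.add_edges_from((a, b) for a, b, y in pairs if y == 1)

augmented_pos, augmented_neg = set(), set()

# 1) Within a connected component, every pair of texts is (by transitivity)
#    a paraphrase pair, including pairs that were never labelled explicitly.
for component in nx.connected_components(g):
    for a, b in itertools.combinations(sorted(component), 2):
        augmented_pos.add((a, b))

# 2) If a and b are a non-paraphrase pair, every member of a's paraphrase
#    component is a non-paraphrase of every member of b's component.
for a, b, y in pairs:
    if y == 0:
        comp_a = nx.node_connected_component(g, a) if a in g else {a}
        comp_b = nx.node_connected_component(g, b) if b in g else {b}
        for x, z in itertools.product(comp_a, comp_b):
            augmented_neg.add((x, z))

print(len(augmented_pos), "paraphrase pairs,", len(augmented_neg), "non-paraphrase pairs")
```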

3.
In this era, the proliferating role of social media in our lives has popularized the posting of short texts. Short texts contain limited context and have unique characteristics that make them difficult to handle. Every day, billions of short texts are produced in the form of tags, keywords, tweets, phone messages, messenger conversations, social network posts, and so on. The analysis of these short texts is imperative in the field of text mining and content analysis. The extraction of precise topics from large-scale short text documents is a critical and challenging task. Conventional approaches fail to capture word co-occurrence patterns in topics due to the sparsity of short texts, such as text over the web, social media like Twitter, and news headlines. Therefore, in this paper, the sparsity problem is ameliorated by presenting a novel fuzzy topic modeling (FTM) approach for short texts from a fuzzy perspective. In this research, the local and global term frequencies are computed through a bag-of-words (BOW) model. To remove the negative impact of high dimensionality on the global term weighting, principal component analysis is adopted; thereafter, the fuzzy c-means algorithm is employed to retrieve semantically relevant topics from the documents. The experiments are conducted on three real-world short-text datasets: the snippets dataset is a small dataset, whereas the other two, Twitter and questions, are larger. Experimental results show that the proposed approach discovers topics more precisely and performs better than other state-of-the-art baseline topic models such as GLTM, CSTM, LTM, LDA, Mix-gram, BTM, SATM, and DREx+LDA. The performance of FTM is also demonstrated in classification, clustering, topic coherence, and execution time. FTM's classification accuracy on the snippets dataset is 0.95, 0.94, 0.91, 0.89, and 0.87 with 50, 75, 100, 125, and 200 topics, respectively. Its classification accuracy on the questions dataset is 0.73, 0.74, 0.70, 0.68, and 0.78 with 50, 75, 100, 125, and 200 topics. The classification accuracies of FTM on the snippets and questions datasets are higher than those of the state-of-the-art baseline topic models.
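A minimal sketch of the generic BOW, PCA, and fuzzy c-means pipeline described above; the term-weighting details, datasets, and parameters of FTM are not reproduced, and the fuzzy c-means implementation below is the textbook version with toy documents.

```python
# Rough sketch of a BOW -> PCA -> fuzzy c-means pipeline (not FTM's exact weighting).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.feature_extraction.text import CountVectorizer

docs = ["cheap flights to rome", "rome hotel deals", "python list comprehension",
        "sort a python dict by value", "train tickets to rome"]

X = CountVectorizer().fit_transform(docs).toarray().astype(float)   # bag of words
X = PCA(n_components=3).fit_transform(X)                            # tame dimensionality

def fuzzy_cmeans(X, c=2, m=2.0, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    u = rng.random((len(X), c))
    u /= u.sum(axis=1, keepdims=True)            # fuzzy memberships, rows sum to 1
    for _ in range(iters):
        w = u ** m
        centers = (w.T @ X) / w.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-9
        u = 1.0 / (d ** (2 / (m - 1)))           # standard FCM membership update
        u /= u.sum(axis=1, keepdims=True)
    return centers, u

centers, memberships = fuzzy_cmeans(X, c=2)
print(np.round(memberships, 2))                   # soft topic assignment per document
```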

4.
Stance detection identifies a person's evaluation of a subject and is a crucial component for many downstream applications. In practice, stance detection requires training a machine learning model on an annotated dataset and applying the model to another dataset to predict the stances of text snippets. This cross-dataset model generalization poses three central questions, which we investigate using stance classification models on 7 publicly available English Twitter datasets ranging from 297 to 48,284 instances. (1) Are stance classification models generalizable across datasets? We construct single-dataset models trained and tested dataset-against-dataset, finding that models do not generalize well (avg. F1 = 0.33). (2) Can we improve generalizability by aggregating datasets? We find that a multi-dataset model built on the aggregation of datasets has improved performance (avg. F1 = 0.69). (3) Given a model built on multiple datasets, how much additional data is required to fine-tune it? We find it challenging to ascertain a minimum number of data points due to the lack of a clear pattern in performance. Investigating possible reasons for the choppy model performance, we find that texts are not easily differentiable by stance, nor are annotations consistent within and across datasets. Our observations emphasize the need for an aggregated dataset as well as consistent labels for the generalizability of models.
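A minimal sketch of the dataset-against-dataset protocol described above, with a TF-IDF plus logistic-regression classifier standing in for the paper's stance models; the three toy datasets and their labels are invented for illustration.

```python
# Hypothetical cross-dataset evaluation; the paper's models are replaced by a
# simple TF-IDF + logistic-regression pipeline for illustration.
from itertools import permutations
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.pipeline import make_pipeline

datasets = {  # name -> (texts, stance labels); toy data
    "A": (["climate change is real", "warming is a hoax",
           "we must cut emissions", "the climate scare is fake"],
          ["favor", "against", "favor", "against"]),
    "B": (["vaccines save lives", "i refuse vaccination",
           "everyone should get the shot", "vaccines are dangerous"],
          ["favor", "against", "favor", "against"]),
    "C": (["remote work boosts productivity", "offices are essential for teamwork",
           "working from home is the future", "remote teams fall apart"],
          ["favor", "against", "favor", "against"]),
}

def make_clf():
    return make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))

# (1) Single-dataset models: train on one dataset, test on another.
for train_name, test_name in permutations(datasets, 2):
    (Xtr, ytr), (Xte, yte) = datasets[train_name], datasets[test_name]
    clf = make_clf().fit(Xtr, ytr)
    print(f"train {train_name} -> test {test_name}: "
          f"F1={f1_score(yte, clf.predict(Xte), average='macro'):.2f}")

# (2) Multi-dataset model: aggregate all other datasets, test on the held-out one.
for held_out in datasets:
    Xtr = [t for n, (ts, _) in datasets.items() if n != held_out for t in ts]
    ytr = [y for n, (_, ys) in datasets.items() if n != held_out for y in ys]
    Xte, yte = datasets[held_out]
    clf = make_clf().fit(Xtr, ytr)
    print(f"aggregate -> test {held_out}: "
          f"F1={f1_score(yte, clf.predict(Xte), average='macro'):.2f}")
```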

5.
Incorporating topic information can help response generation models produce informative responses for chatbots. Previous work only considers the individual semantics of each topic, ignoring its specific dialog context, which may result in inaccurate topic representations and hurt response coherence. Moreover, dynamic topic transitions, an important feature of multi-turn conversation, have not been well studied. We propose a Context-Controlled Topic-Aware neural response generation model, CCTA, which lets the dialog context interact with the processes of topic representation and transition to achieve balanced improvements in response informativeness and contextual coherence. CCTA focuses on capturing the semantic relations within topics as well as their corresponding contextual information in the conversation, producing context-dependent topic representations at the word level and turn level. In addition, CCTA introduces a context-controlled topic transition strategy, utilizing contextual topics to yield relevant transition words. Extensive experimental results on two benchmark multi-turn conversation datasets validate the superiority of our proposal in generating coherent and informative responses over state-of-the-art baselines. We also find that topic transition modeling can work as an auxiliary learning task to boost response generation.

6.
Document-level relation extraction (RE) aims to extract relations between entities that may span multiple sentences. Existing methods mainly rely on two types of techniques: pre-trained language models (PLMs) and reasoning skills. Although various reasoning methods have been proposed, how to elicit learned factual knowledge from PLMs for better reasoning ability has not yet been explored. In this paper, we propose a novel Collective Prompt Tuning with Relation Inference (CPT-RI) model for document-level RE, which improves upon existing models in two aspects. First, considering the long input and various templates, we adopt a collective prompt tuning method, which is an update-and-reuse strategy. A generic prompt is first encoded and then updated with the exact entity pairs to form relation-specific prompts. Second, we introduce a relation inference module to conduct global reasoning over all relation prompts via constrained semantic segmentation. Extensive experiments on publicly available benchmark datasets demonstrate the effectiveness of the proposed CPT-RI compared to the baseline model ATLOP (Zhou et al., 2021), with F1 improvements of 0.57% on the DocRED dataset, 2.20% on the CDR dataset, and 2.30% on the GDA dataset. In addition, further ablation studies verify the effects of collective prompt tuning and relation inference.

7.
Stress and depression detection on social media aims to analyse stress and identify depressive tendencies from social media posts, assisting the early detection of mental health conditions. Existing methods mainly model the mental states of the post's speaker implicitly. They also lack the ability to mentalise for complex mental state reasoning. Moreover, they are not designed to explicitly capture class-specific features. To resolve these issues, we propose a mental state Knowledge-aware and Contrastive Network (KC-Net). In detail, we first extract mental state knowledge from the commonsense knowledge base COMET and infuse the knowledge using Gated Recurrent Units (GRUs) to explicitly model the mental states of the speaker. We then propose a knowledge-aware mentalisation module based on dot-product attention that attends to the most relevant knowledge aspects. A supervised contrastive learning module is also utilised to fully leverage label information for capturing class-specific features. We test the proposed methods on the depression detection dataset Depression_Mixed with 3165 Reddit and blog posts, the stress detection dataset Dreaddit with 3553 Reddit posts, and the stress factor recognition dataset SAD with 6850 SMS-like messages. The experimental results show that our method achieves new state-of-the-art results on all datasets: F1 scores of 95.4% on Depression_Mixed, 83.5% on Dreaddit, and 77.8% on SAD, with a 2.07% average improvement. Factor-specific analysis and an ablation study demonstrate the effectiveness of all proposed modules, while UMAP analysis and a case study visualise their mechanisms. We believe our work facilitates the detection and analysis of depression and stress in social media data, and shows potential for application to other mental health conditions.

8.
With the rapid development of mobile computing and Web technologies, online hate speech has spread increasingly on social network platforms, where it is easy to post any opinion. Previous studies confirm that exposure to online hate speech has serious offline consequences for historically deprived communities. Thus, research on automated hate speech detection has attracted much attention. However, the role of social networks in identifying communities vulnerable to hate has not been well investigated. Hate speech can affect all population groups, but some are more vulnerable to its impact than others. For example, for ethnic groups whose languages have few computational resources, it is a challenge to automatically collect and process online texts, not to mention automatic hate speech detection on social media. In this paper, we propose a hate speech detection approach to identify hatred against vulnerable minority groups on social media. Firstly, within the Spark distributed processing framework, posts are automatically collected and pre-processed, and features are extracted using word n-grams and word embedding techniques such as Word2Vec. Secondly, deep learning classifiers such as the Gated Recurrent Unit (GRU), a variant of recurrent neural networks (RNNs), are used for hate speech detection. Finally, hate words are clustered with methods such as Word2Vec to predict the potential target ethnic groups of the hatred. In our experiments, we use the Amharic language of Ethiopia as an example. Since there was no publicly available dataset of Amharic texts, we crawled Facebook pages to prepare the corpus. Since data annotation can be biased by culture, we recruited annotators from different cultural backgrounds and achieved better inter-annotator agreement. In our experimental results, feature extraction using word embedding techniques such as Word2Vec performs better with both classical and deep learning-based classification algorithms for hate speech detection, among which the GRU achieves the best result. Our proposed approach successfully identifies the Tigre ethnic group as the most vulnerable community in terms of hatred, compared with the Amhara and Oromo. Identifying groups vulnerable to hatred is thus vital for protecting them, by applying automatic hate speech detection models to remove content that aggravates psychological harm and physical conflict. It can also pave the way towards the development of policies, strategies, and tools to empower and protect vulnerable communities.
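A simplified, single-machine sketch of the Word2Vec-to-GRU part of the pipeline (the Spark collection step, n-gram features, and the Amharic corpus are out of scope); the toy texts, labels, and hyperparameters below are placeholders.

```python
# Hypothetical single-machine sketch; texts, labels and sizes are placeholders.
import numpy as np
import tensorflow as tf
from gensim.models import Word2Vec

texts = [["they", "should", "all", "leave"], ["have", "a", "nice", "day"],
         ["we", "hate", "them"], ["great", "news", "for", "everyone"]]
labels = np.array([1, 0, 1, 0])            # 1 = hateful, 0 = not hateful

# Learn word vectors on the (tiny) corpus.
w2v = Word2Vec(sentences=texts, vector_size=32, window=3, min_count=1, epochs=50)
vocab = {w: i + 1 for i, w in enumerate(w2v.wv.index_to_key)}   # 0 reserved for padding

# Index and pad the token sequences.
max_len = 6
X = np.zeros((len(texts), max_len), dtype="int32")
for i, toks in enumerate(texts):
    for j, tok in enumerate(toks[:max_len]):
        X[i, j] = vocab[tok]

# Embedding matrix initialised from the Word2Vec vectors.
emb = np.zeros((len(vocab) + 1, 32), dtype="float32")
for w, i in vocab.items():
    emb[i] = w2v.wv[w]

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(len(vocab) + 1, 32, mask_zero=True),
    tf.keras.layers.GRU(16),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.build(input_shape=(None, max_len))
model.layers[0].set_weights([emb])          # inject the pre-trained word vectors
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, labels, epochs=5, verbose=0)
print(model.predict(X, verbose=0).round(2))
```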

9.
10.
Due to the harmful impact of fabricated information on social media, many rumor verification techniques have been introduced in recent years. Advanced techniques such as multi-task learning (MTL) and shared-private models suffer from strategic limitations that restrict their capability for veracity identification on social media. These models often rely on multiple auxiliary tasks to achieve their primary objective. Even the most recent deep neural network (DNN) models, such as VRoC, Hierarchical-PSV, and StA-HiTPLAN (based on VAEs, GCNs, and Transformers, respectively, with improved modifications), perform well on the veracity identification task, but mostly with the help of additional auxiliary information. Even so, their gains are not substantial compared with the proposed model, which uses no additional information. To arrive at an improved DNN architecture, we introduce globally Discrete Attention Representations from Transformers (gDART). The discrete-attention mechanism in gDART is capable of capturing the manifold correlations hidden among a sequence of words, which existing DNN models, including the Transformer, often overlook. Our proposed framework uses a Branch-CoRR Attention Network to extract highly informative features in branches, and employs a Feature Fusion Network component to identify deeply embedded features and use them for enhanced identification of the veracity of an unverified claim. Moreover, to achieve its goal, gDART does not depend on any costly auxiliary resource but on an unsupervised learning process. Extensive experiments reveal that gDART achieves a considerable performance gain in the veracity identification task over state-of-the-art models on two real-world rumor datasets, reporting gains of 36.76% and 40.85% on standard benchmark metrics.

11.
One of the most time-critical challenges for the Natural Language Processing (NLP) community is to combat the spread of fake news and misinformation. Existing approaches for misinformation detection use neural network models, statistical methods, linguistic traits, fact-checking strategies, etc. However, the menace of fake news seems to grow more vigorous with the advent of enormous and unusually creative language models. Relevant literature reveals that one major characteristic of the virality of fake news is the presence of an element of surprise in the story, which attracts immediate attention and invokes a strong emotional stimulus in the reader. In this work, we leverage this idea and propose textual novelty detection and emotion prediction as two tasks supporting automatic misinformation detection. We re-purpose textual entailment for novelty detection and use models trained on large-scale entailment and emotion datasets to classify fake information. Our results support this idea, as we achieve state-of-the-art (SOTA) performance (accuracy improvements of 7.92%, 1.54%, 17.31%, and 8.13%) on four large-scale misinformation datasets. We hope that our current probe will motivate the community to explore further research on misinformation detection along this line. The source code is available on GitHub.
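A minimal sketch of re-purposing textual entailment as a novelty/inconsistency signal, as described above, using an off-the-shelf MNLI model rather than the models trained in the paper; the texts and the scoring heuristic are illustrative.

```python
# Hypothetical sketch; "roberta-large-mnli" is an off-the-shelf NLI model, not
# the model used in the paper, and the texts are made up.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

name = "roberta-large-mnli"
tok = AutoTokenizer.from_pretrained(name)
nli = AutoModelForSequenceClassification.from_pretrained(name)

known_fact = "The vaccine was approved after a multi-stage clinical trial."
claim = "The vaccine was released without any testing at all."

inputs = tok(known_fact, claim, return_tensors="pt", truncation=True)
with torch.no_grad():
    probs = torch.softmax(nli(**inputs).logits, dim=-1)[0]

# Read the label names from the model config instead of hard-coding an order.
scores = {nli.config.id2label[i].lower(): p.item() for i, p in enumerate(probs)}
print(scores)

# Heuristic: a claim that is not entailed by the known facts carries a high
# novelty/inconsistency signal, one cue for misinformation.
novelty_signal = 1.0 - scores.get("entailment", 0.0)
print("novelty signal:", round(novelty_signal, 3))
```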

12.
Controllable response generation is an attractive and valuable task for the success of conversational systems. However, controlling both the pattern and the content of the response has not been well studied in existing models, since they are mainly based on matching mechanisms. To tackle this problem, we first design a pattern model to automatically learn and extract speech patterns from words. The pattern is then integrated into the encoder-decoder model to control the response pattern. Second, a sentence sampling algorithm is built to directly insert or delete words in the generated response, so that the content is controlled. In this two-stage framework, the response can be explicitly controlled through pattern and content, without any human annotation of the post-response dataset. Experiments show that the proposed framework achieves better performance in response controllability than the state of the art.

13.
Detecting suicidal tendencies and preventing suicides is an important social goal. The rise and continuance of emotion, the emotion category, and the intensity of the emotion are important clues about suicidal tendencies. The three determinants of emotion, viz. Valence, Arousal, and Dominance (VAD), can help determine a person's exact emotion(s) and their intensity. This paper introduces an end-to-end VAD-assisted transformer-based multi-task network for detecting emotion (primary task) and its intensity (auxiliary task) in suicide notes. As part of this research, we expand the utility of the emotion-annotated benchmark dataset of suicide notes, CEASE-v2.0, by annotating all its sentences with emotion intensity labels. Empirical results show that our multi-task method performs better than the corresponding single-task systems, with the best attained overall Mean Recall (MR) of 65.25% on the emotion task. On a similar task, we improve MR by 8.78% over the existing state-of-the-art system. We evaluate our approach on three benchmark datasets for three different tasks and observe that the introduced method consistently outperforms existing state-of-the-art approaches on the studied datasets, demonstrating its capacity to generalize to other downstream correlated tasks. We qualitatively examine our model's output by comparing it to the labeling of a psychiatrist.

14.
Argument mining (AM) aims to automatically generate a graph that represents the argument structure of a document. Most previous AM models attend only to a single argument component (AC), to classify its type, or to a pair of ACs, to identify and classify the argumentative relation (AR) between them. These models ignore the global argument structure of the document, which is important, especially in highly structured genres such as scientific papers, where the process of argumentation is relatively fixed. Inspired by this, we propose a novel two-stage model which leverages global structure information to support AM. The first stage uses a multi-turn question-answering model to incrementally generate an initial argumentative graph that identifies relations among ACs. At each turn, all ACs related to the query AC are generated simultaneously, so that the global sibling information among the answer ACs is considered. In addition, the partially constructed graph is used as global structure information to support the extension of the graph with additional ACs. After the whole initial graph structure has been determined, the second stage assigns semantic types to both the ACs and the ARs among them, leveraging information from this initial graph as global structure information. We test the proposed methods on two scientific datasets (the AbstRCT dataset, which includes 659 abstracts about cancer research, and the SciARG dataset, which consists of 225 computational linguistics abstracts and 285 biomedical abstracts) and the student essay dataset PE with 402 essays. Our experiments show that our model improves state-of-the-art performance on the two scientific datasets for different AM subtasks, with average improvements of 1%, 2.41%, and 1.1% for the ACC, ARI, and ARC tasks respectively on the AbstRCT dataset, and 2.36%, 1.84%, and 8.87% for the ACC, ARI, and ARC tasks on the SciARG dataset. Our model also achieves comparable results on the PE dataset: F1 scores of 87.7% for the ACC task, 81.4% for the ARI task, and 78.8% for the ARC task.

15.
We propose a CNN-BiLSTM-Attention classifier to classify online short messages in Chinese posted by users on government web portals, so that a message can be directed to one or more government offices. Our model leverages all available information to carry out multi-label classification, making use of different hierarchical text features and of label information. In particular, our method extracts label meanings, the CNN layer extracts local semantic features of the texts, the BiLSTM layer fuses the contextual features of the texts with the local semantic features, and the attention layer selects the most relevant features for each label. We evaluate our model on two large public corpora and on our high-quality hand-crafted e-government multi-label dataset, which was constructed with the text annotation tool doccano and consists of 29,920 data points. Experimental results show that our proposed method is effective under common multi-label evaluation metrics, achieving micro-F1 of 77.22%, 84.42%, and 87.52%, and macro-F1 of 77.68%, 73.37%, and 83.57% on these three datasets respectively, confirming that our classifier is robust. We conduct an ablation study to evaluate our label embedding method and attention mechanism. Moreover, a case study on our hand-crafted e-government multi-label dataset verifies that our model integrates all types of semantic information in short messages, conditioned on the different labels, to achieve text classification.
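A schematic Keras version of the CNN, BiLSTM, and attention stack described above for multi-label routing; the label-embedding component is omitted, and the vocabulary size, sequence length, label count, and layer widths are placeholder assumptions.

```python
# Schematic architecture only; sizes are placeholder assumptions.
import tensorflow as tf

vocab_size, max_len, num_labels = 5000, 60, 8

inp = tf.keras.Input(shape=(max_len,))
x = tf.keras.layers.Embedding(vocab_size, 64)(inp)
x = tf.keras.layers.Conv1D(64, 3, padding="same", activation="relu")(x)      # local semantics
x = tf.keras.layers.Bidirectional(
    tf.keras.layers.LSTM(32, return_sequences=True))(x)                      # contextual features

# Simple additive attention: score each time step, softmax, weighted sum.
scores = tf.keras.layers.Dense(1)(x)                     # (batch, time, 1)
weights = tf.keras.layers.Softmax(axis=1)(scores)
context = tf.keras.layers.Dot(axes=1)([weights, x])      # weighted sum -> (batch, 1, 64)
context = tf.keras.layers.Flatten()(context)

out = tf.keras.layers.Dense(num_labels, activation="sigmoid")(context)       # multi-label head
model = tf.keras.Model(inp, out)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["binary_accuracy"])
model.summary()
```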

16.
17.
Aesthetic assessment evaluates the quality of a given image using subjective annotations, commonly user ratings, as a knowledge base. Rating complexity is usually relaxed in state-of-the-art works by employing a binary high/low quality label computed from the mean value of the rating votes. Nevertheless, this approach introduces uncertainty for average-quality images, which may affect the performance of machine learning models trained on annotated data. In this work, we present a novel approach to aesthetic assessment based on redefining the rating-based ground truths present in most datasets. Our intent is twofold: to reduce rating uncertainty and to automatically group ratings into clusters reflecting high- and low-quality patterns, thus avoiding an arbitrary threshold such as 5 on a 1–10 rating scale. The experiments use the well-known AVA dataset, which consists of more than 255,000 images, and we train several CNN models to test our new ground truths against the baseline ones. The results show that our approach achieves significant performance gains, between 3% and 9% higher balanced accuracy than the baseline ground truths.
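A small sketch of the re-labelling idea above: cluster the mean ratings into a high and a low group instead of thresholding at a fixed value; the synthetic rating vector stands in for real AVA annotations.

```python
# Hypothetical illustration with synthetic mean ratings (not AVA data).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
mean_ratings = np.concatenate([rng.normal(4.2, 0.6, 500),    # mostly lower-quality images
                               rng.normal(6.3, 0.7, 500)])   # mostly higher-quality images

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(mean_ratings.reshape(-1, 1))
high_cluster = int(np.argmax(km.cluster_centers_.ravel()))
labels = (km.labels_ == high_cluster).astype(int)             # 1 = high quality, 0 = low

threshold_labels = (mean_ratings > 5.0).astype(int)           # the usual fixed threshold
print("cluster boundary ~", round(float(km.cluster_centers_.mean()), 2))
print("disagreement with the 5.0 threshold:", int((labels != threshold_labels).sum()), "images")
```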

18.
Dictionary-based classifiers are an essential group of approaches in the field of time series classification. Their distinctive characteristic is that they transform time series into segments made of symbols (words) and then classify the time series using these words. Dictionary-based approaches are suitable for datasets containing time series of unequal length. The prevalence of dictionary-based methods inspired the research in this paper. We propose a new dictionary-based classifier called SAFE. The new approach transforms the raw numeric data into a symbolic representation using the Simple Symbolic Aggregate approXimation (SAX) method. We then partition the symbolic time series into a sequence of words and employ the word embedding neural model known from Natural Language Processing to train the classification mechanism. The proposed scheme was applied to classify 30 benchmark datasets and compared with a range of state-of-the-art time series classifiers. The name SAFE comes from our observation that this method is safe to use. Empirical experiments have shown that SAFE gives excellent results: it is always in the top 5%–10% when we rank the classification accuracy of state-of-the-art algorithms on various datasets. Our method ranks third among state-of-the-art dictionary-based approaches (after the WEASEL and BOSS methods).
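A compact sketch of the SAX-to-words-to-embedding idea described above; the segment length, alphabet size, window size, and synthetic series are arbitrary assumptions, and the downstream classifier is left out.

```python
# Hypothetical SAX + word-embedding pipeline on synthetic series.
import numpy as np
from gensim.models import Word2Vec
from scipy.stats import norm

def sax(series, n_segments=16, alphabet="abcd"):
    s = (series - series.mean()) / (series.std() + 1e-9)                        # z-normalise
    paa = np.array([seg.mean() for seg in np.array_split(s, n_segments)])       # PAA
    breakpoints = norm.ppf(np.linspace(0, 1, len(alphabet) + 1)[1:-1])          # Gaussian bins
    return "".join(alphabet[i] for i in np.searchsorted(breakpoints, paa))

def to_words(symbolic, w=4):
    return [symbolic[i:i + w] for i in range(len(symbolic) - w + 1)]            # sliding words

series_bank = [np.sin(np.linspace(0, 6, 128)) + 0.1 * np.random.randn(128) for _ in range(20)]
corpus = [to_words(sax(s)) for s in series_bank]

w2v = Word2Vec(sentences=corpus, vector_size=16, window=2, min_count=1, epochs=50)
# One fixed-length vector per series: mean of its word embeddings (input to a classifier).
X = np.vstack([np.mean([w2v.wv[w] for w in doc], axis=0) for doc in corpus])
print(X.shape)
```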

19.
Graph neural networks (GNNs) have emerged as the new state of the art for learning knowledge graph representations. Although they have shown impressive performance in recent studies, how to efficiently and effectively aggregate neighboring features remains an open design question. To tackle this challenge, we propose the Simplifying Heterogeneous Graph Neural Network (SHGNet), a generic framework that discards two standard GNN operations: the transformation matrix and the nonlinear activation. In particular, SHGNet adopts only the essential neighborhood-aggregation component of GNNs and incorporates relation features into feature propagation. Furthermore, to capture complex structures, SHGNet utilizes a hierarchical aggregation architecture, including node aggregation and relation weighting. Thus, the proposed model can treat each relation differently and selectively aggregate informative features. SHGNet has been evaluated on link prediction tasks over three real-world benchmark datasets. The experimental results show that SHGNet significantly improves efficiency while maintaining superior performance, outperforming all existing models in 3 out of 4 metrics on NELL-995 and in 4 out of 4 metrics on the FB15k-237 dataset.
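A toy numpy sketch of the parameter-light aggregation described above (no transformation matrix, no nonlinearity, relation features added to neighbour features, then per-relation weighting); the tiny graph and the fixed relation weights are made-up stand-ins for learned values.

```python
# Hypothetical illustration of relation-aware, parameter-light aggregation.
import numpy as np

dim = 4
node_emb = {n: np.random.randn(dim) for n in ["a", "b", "c", "d"]}
rel_emb = {r: np.random.randn(dim) for r in ["works_at", "lives_in"]}
edges = [("a", "works_at", "b"), ("a", "lives_in", "c"), ("d", "works_at", "b")]
rel_weight = {"works_at": 0.7, "lives_in": 0.3}   # relation weighting (learned in SHGNet)

def aggregate(target):
    # Step 1: node aggregation per relation -- average of (neighbour + relation) features.
    per_rel = {}
    for h, r, t in edges:
        if t == target:
            per_rel.setdefault(r, []).append(node_emb[h] + rel_emb[r])
    # Step 2: relation weighting -- combine the per-relation summaries.
    if not per_rel:
        return node_emb[target]
    return sum(rel_weight[r] * np.mean(msgs, axis=0) for r, msgs in per_rel.items())

print(aggregate("b"))
```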

20.
Efficient topic modeling is needed to support applications that aim at identifying the main themes in a collection of documents. In the present paper, a reduced vector embedding representation and particle swarm optimization (PSO) are combined to develop a topic modeling strategy that is able to identify representative themes from a large collection of documents. Documents are encoded using a reduced, contextual vector embedding from a general-purpose pre-trained language model (sBERT). A modified PSO algorithm (pPSO) that tracks particle fitness on a dimension-by-dimension basis is then applied to these embeddings to create clusters of related documents. The proposed methodology is demonstrated on two datasets. The first dataset consists of posts from the online health forum r/Cancer, and the second is a standard benchmark for topic modeling consisting of messages posted to 20 different newsgroups. When compared to state-of-the-art generative document models (i.e., ETM and NVDM), pPSO is able to produce interpretable clusters. The results indicate that pPSO captures both common topics and emergent topics. Moreover, the topic coherence of pPSO is comparable to that of ETM, and its topic diversity is comparable to that of NVDM. The assignment parity of pPSO on a document completion task exceeded 90% on the 20NewsGroups dataset. This rate drops to approximately 30% when pPSO is applied to a Skip-Gram embedding derived from the limited, corpus-specific vocabulary used by ETM and NVDM.
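A simplified sketch of the pipeline above: sentence-transformer embeddings clustered by a standard global-best PSO over candidate centroids; the paper's dimension-wise fitness tracking (pPSO) is not reproduced, and the documents and parameters are illustrative.

```python
# Hypothetical sketch: sBERT embeddings + a plain global-best PSO (not pPSO).
import numpy as np
from sentence_transformers import SentenceTransformer
from sklearn.decomposition import PCA

docs = ["chemo side effects were rough this week", "scan results come back friday",
        "new gpu drivers fixed my crashes", "which mechanical keyboard do you use"]
emb = SentenceTransformer("all-MiniLM-L6-v2").encode(docs)
emb = PCA(n_components=2).fit_transform(emb)          # shrink for the toy example

k, d, n_particles, iters = 2, emb.shape[1], 12, 60
rng = np.random.default_rng(0)

def fitness(flat):                                    # total distance to nearest centroid
    centroids = flat.reshape(k, d)
    dists = np.linalg.norm(emb[:, None, :] - centroids[None], axis=2)
    return dists.min(axis=1).sum()

pos = rng.normal(scale=emb.std(), size=(n_particles, k * d))   # particles = centroid sets
vel = np.zeros_like(pos)
pbest, pbest_fit = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pbest_fit.argmin()].copy()

for _ in range(iters):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos += vel
    fits = np.array([fitness(p) for p in pos])
    improved = fits < pbest_fit
    pbest[improved], pbest_fit[improved] = pos[improved], fits[improved]
    gbest = pbest[pbest_fit.argmin()].copy()

centroids = gbest.reshape(k, d)
assign = np.linalg.norm(emb[:, None, :] - centroids[None], axis=2).argmin(axis=1)
print(list(zip(docs, assign)))
```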
