Similar Documents
 20 similar documents retrieved
1.
In the context of social media, users usually post relevant information corresponding to the contents of events mentioned in a Web document. This information possesses two important properties: (i) it reflects the content of an event and (ii) it shares hidden topics with sentences in the main document. In this paper, we present a novel model that captures the relationships between document sentences and post information (comments or tweets) in sharing hidden topics, for summarization of Web documents that exploits relevant post information. Unlike previous methods, which are usually based on hand-crafted features, our approach ranks document sentences and user posts based on their importance to the topics. The sentence-user-post relation is formulated in a shared topic matrix, which represents their mutual reinforcement. Our proposed matrix co-factorization algorithm computes the score of each document sentence and user post and extracts the top-ranked document sentences and comments (or tweets) as a summary. We apply the model to three social context summarization datasets in two languages, English and Vietnamese, as well as to DUC 2004 (a standard corpus for the traditional summarization task). According to the experimental results, our model significantly outperforms basic matrix factorization and achieves ROUGE scores competitive with state-of-the-art methods.
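As a loose illustration of the topic-sharing intuition (not the authors' co-factorization algorithm), the sketch below stacks sentences and posts into one term matrix, factorizes it into shared topics with NMF, and ranks sentences by how strongly they load on those topics. The texts and dimensions are toy values.

```python
# Illustrative sketch only: rank document sentences and user posts by the
# strength of the topics they share, via a joint NMF over a stacked term matrix.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import NMF

sentences = ["The storm hit the coast on Monday.",
             "Thousands of homes lost power during the night.",
             "Officials opened three emergency shelters."]
posts = ["no power in my street since midnight",
         "shelter on 5th ave is already full",
         "stay safe everyone"]

vec = TfidfVectorizer(stop_words="english")
X = vec.fit_transform(sentences + posts)          # rows: sentences, then posts

W = NMF(n_components=2, init="nndsvda", random_state=0).fit_transform(X)

# Score each sentence/post by how strongly it loads on the shared topics.
scores = W.sum(axis=1)
sent_scores = scores[:len(sentences)]

summary = [sentences[i] for i in np.argsort(-sent_scores)[:2]]
print(summary)
```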

2.
We deal with the task of authorship attribution, i.e. identifying the author of an unknown document, proposing the use of Part-Of-Speech (POS) tags as features for language modeling. The experiments are carried out on corpora untypical for the task, i.e., documents edited by non-professional writers, such as movie reviews or tweets. The former corpus is homogeneous with respect to topic, making the task more challenging; the latter puts language models into a framework of a continuously and fast-evolving language, a unique and noisy writing style, and the limited length of social media messages. While we find that language models based on POS tags are competitive on only one of the corpora (movie reviews), they generally provide efficiency benefits and robustness against data sparsity. Furthermore, we experiment with model fusion, where language models based on different modalities are combined. By linearly combining three language models, based on characters, words, and POS trigrams, respectively, we achieve the best generalization accuracy of 96% on movie reviews, while the combination of language models based on characters and POS trigrams provides 54% accuracy on the Twitter corpus. In fusion, POS language models prove to be effective components.
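A minimal sketch of the fusion idea follows: per-author scores from several language models are combined by linear interpolation and the highest-scoring candidate wins. The three scoring functions and the weights below are placeholders, not the paper's trained character/word/POS models.

```python
# Minimal sketch: linearly combine scores from several language models
# (e.g. characters, words, POS trigrams) for authorship attribution.
from typing import Callable, List

def fused_score(text: str,
                models: List[Callable[[str, str], float]],  # (text, author) -> score
                weights: List[float],
                author: str) -> float:
    assert abs(sum(weights) - 1.0) < 1e-9
    return sum(w * m(text, author) for w, m in zip(weights, models))

def attribute(text: str, authors: List[str], models, weights) -> str:
    # Pick the candidate author whose interpolated score is highest.
    return max(authors, key=lambda a: fused_score(text, models, weights, a))

if __name__ == "__main__":
    # Dummy scoring functions standing in for trained language models.
    char_lm = lambda t, a: -len(t) * (0.5 if a == "alice" else 0.6)
    word_lm = lambda t, a: -len(t.split()) * (1.0 if a == "alice" else 1.2)
    pos_lm  = lambda t, a: -3.0            # author-agnostic dummy
    print(attribute("an unseen movie review", ["alice", "bob"],
                    [char_lm, word_lm, pos_lm], [0.4, 0.4, 0.2]))
```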

3.
Unstructured tweet feeds are becoming a source of real-time information for various events. However, extracting actionable information in real time from this unstructured text data is a challenging task. Hence, researchers are employing word embedding approaches to classify unstructured text data. We set our study in the context of the 2014 Ebola and 2016 Zika outbreaks and probed the accuracy of domain-specific word vectors for identifying crisis-related actionable tweets. Our findings suggest that relatively small domain-specific input corpora drawn from Twitter are better at extracting meaningful semantic relationships than generic pre-trained Word2Vec (derived from Google News) or GloVe (from the Stanford NLP group) vectors. However, high-quality domain-specific tweet corpora are normally scant during the early stages of outbreaks, and identifying actionable tweets during early stages is crucial to stemming the proliferation of an outbreak. To overcome this challenge, we consider scholarly abstracts related to the Ebola and Zika viruses from PubMed and probe the efficiency of cross-domain resource utilization for word vector generation. Our findings demonstrate the relevance of PubMed abstracts for training when Twitter data (as input corpus) are scant during the early stages of an outbreak. Thus, this approach can be implemented to handle future outbreaks in real time. We also explore the accuracy of our word vectors for various model architectures and hyper-parameter settings. We observe that skip-gram accuracies are better than CBOW's, and that higher dimensions yield better accuracy.
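A short sketch of training domain-specific word vectors with gensim, comparing the two architectures the abstract mentions (skip-gram vs. CBOW); the two-line corpus and the hyper-parameters are stand-ins, not the study's tweet or PubMed corpora.

```python
# Sketch: train skip-gram and CBOW word vectors on a (toy) in-domain corpus.
from gensim.models import Word2Vec

corpus = [
    ["ebola", "outbreak", "reported", "in", "west", "africa"],
    ["zika", "virus", "spread", "by", "aedes", "mosquitoes"],
]

skipgram = Word2Vec(corpus, sg=1, vector_size=100, window=5, min_count=1, epochs=20)
cbow     = Word2Vec(corpus, sg=0, vector_size=100, window=5, min_count=1, epochs=20)

# Inspect nearest neighbours of a domain term under the skip-gram model.
print(skipgram.wv.most_similar("ebola", topn=3))
```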

4.
One main challenge of Named Entity Recognition (NER) for tweets is the insufficient information in a single tweet, owing to the noisy and short nature of tweets. We propose a novel system to tackle this challenge, which leverages redundancy in tweets by conducting two-stage NER over multiple similar tweets. It first pre-labels each tweet using a sequential labeler based on the linear-chain Conditional Random Fields (CRF) model. It then clusters tweets to put tweets with similar content into the same group. Finally, for each cluster it refines the labels of each tweet using an enhanced CRF model that incorporates cluster-level information, i.e., the labels of the current word and its neighboring words across all tweets in the cluster. We evaluate our method on a manually annotated dataset and show that it boosts F1 from 75.4% for the baseline without collective labeling to 82.5%.
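The sketch below illustrates only the grouping step under simple assumptions: near-duplicate tweets are clustered with TF-IDF and k-means so that first-stage pre-labels could be aggregated per cluster. The CRF taggers themselves are not shown, and the tweets and the number of clusters are illustrative.

```python
# Sketch of the clustering step: group similar tweets so that labels assigned
# by a first-stage tagger can later be shared across each cluster.
from collections import defaultdict
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

tweets = ["Obama visits Berlin today",
          "President Obama arrived in Berlin",
          "New iPhone released in California",
          "Apple unveils new iPhone in California"]

X = TfidfVectorizer().fit_transform(tweets)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

clusters = defaultdict(list)
for tweet, c in zip(tweets, labels):
    clusters[c].append(tweet)

# A second-stage model could now use, for each word, the labels that the first
# stage assigned to the same word in the other tweets of its cluster.
print(dict(clusters))
```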

5.
In this paper, we compile and review several experiments measuring cross-lingual information retrieval (CLIR) performance as a function of the following resources: bilingual term lists, parallel corpora, machine translation (MT), and stemmers. Our CLIR system uses a simple probabilistic language model; the studies used TREC test corpora over Chinese, Spanish and Arabic. Our findings include:
  • One can achieve acceptable CLIR performance using only a bilingual term list (70–80% on Chinese and Arabic corpora).
  • However, if a bilingual term list and parallel corpora are available, CLIR performance can rival monolingual performance.
  • If no parallel corpus is available, pseudo-parallel texts produced by an MT system can partially overcome the lack of parallel text.
  • While stemming is normally useful, with a very large Arabic–English parallel corpus stemming hurt performance in our empirical studies of Arabic, a highly inflected language.

6.
The application of natural language processing (NLP) to financial fields is advancing as the number of available financial documents grows. Transformer-based models such as Bidirectional Encoder Representations from Transformers (BERT) have been successful in NLP in recent years. These cutting-edge models have been adapted to the financial domain both by applying financial corpora to existing pre-trained models and by pre-training on financial corpora from scratch. In Japanese, by contrast, financial terminology cannot be covered by a general vocabulary without further processing. In this study, we construct language models suitable for the financial domain. Furthermore, we compare methods for adapting language models to the financial domain, such as pre-training methods and vocabulary adaptation. We confirm that adapting the pre-training corpus and the tokenizer vocabulary to financial text is effective on several downstream financial tasks. No significant difference is observed between pre-training on the financial corpus from scratch and continued pre-training of the general language model on the financial corpus. We have released our source code and pre-trained models.
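A hedged sketch of one of the adaptation strategies described, continued masked-language-model pre-training of a general checkpoint on a financial corpus, using Hugging Face transformers. The base checkpoint name, corpus file and hyper-parameters are placeholders, not the models released by the authors.

```python
# Sketch: continued MLM pre-training of a general-domain Japanese BERT on an
# in-domain (financial) text file. Checkpoint and file path are assumptions.
from transformers import (AutoTokenizer, AutoModelForMaskedLM,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)
from datasets import load_dataset

base = "cl-tohoku/bert-base-japanese"          # assumed general-domain checkpoint
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForMaskedLM.from_pretrained(base)

ds = load_dataset("text", data_files={"train": "financial_corpus.txt"})["train"]
ds = ds.map(lambda x: tokenizer(x["text"], truncation=True, max_length=128),
            batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="bert-financial-ja", num_train_epochs=1,
                           per_device_train_batch_size=16),
    train_dataset=ds,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm_probability=0.15),
)
trainer.train()
```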

7.
This paper proposes a new deep learning approach to better understand how optimistic and pessimistic feelings are conveyed in Twitter conversations about COVID-19. A pre-trained transformer embedding is used to extract semantic features, and several network architectures are compared. Model performance is evaluated on two new, publicly available Twitter corpora of crisis-related posts, comprising a total of 150,503 tweets from 51,319 unique users. The best-performing pessimism and optimism detection models are based on bidirectional long short-term memory (BiLSTM) networks. Experimental results on four periods of the COVID-19 pandemic show how the proposed approach can model optimism and pessimism in the context of a health crisis. Conversations are characterised in terms of emotional signals and shifts to unravel empathy and support mechanisms. Conversations with stronger pessimistic signals showed little emotional shift (i.e. 62.21% of these conversations experienced almost no change in emotion). In turn, only 10.42% of the conversations lying more on the optimistic side maintained the mood. User emotional volatility is further linked with social influence.
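A minimal PyTorch sketch of the model family named in the abstract: a BiLSTM classifier running over a sequence of pre-computed transformer token embeddings. The dimensions and the three-way label set (pessimistic / neutral / optimistic) are illustrative, not the authors' configuration.

```python
# Sketch: BiLSTM classifier over transformer token embeddings (toy dimensions).
import torch
import torch.nn as nn

class BiLSTMClassifier(nn.Module):
    def __init__(self, emb_dim=768, hidden=128, n_classes=3):
        super().__init__()
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, n_classes)

    def forward(self, x):                  # x: (batch, seq_len, emb_dim)
        _, (h, _) = self.lstm(x)           # h: (2, batch, hidden)
        h = torch.cat([h[0], h[1]], dim=1) # concatenate both directions
        return self.out(h)

model = BiLSTMClassifier()
dummy = torch.randn(4, 30, 768)            # 4 tweets, 30 tokens, 768-dim embeddings
print(model(dummy).shape)                  # torch.Size([4, 3])
```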

8.
Five hundred million tweets are posted daily, making Twitter a major social media platform from which topical information on events can be extracted. These events are represented by three main dimensions: time, location and entity-related information. The focus of this paper is location, an essential dimension for geo-spatial applications, whether helping rescue operations during a disaster or providing contextual recommendations. While the first type of application needs high recall, the second is more precision-oriented. This paper studies the recall/precision trade-off, combining different methods to extract locations. In the context of short posts, applying tools developed for well-formed natural language is not sufficient, given that tweets are generally too short to be linguistically correct. Also bearing in mind the high number of posts that need to be handled, we hypothesize that predicting whether or not a post contains a location could make the location extractors more focused and thus more effective. We introduce a model to predict whether a tweet contains a location and show that location prediction is a useful pre-processing step for location extraction. We define a number of new tweet features and conduct an intensive evaluation. Our findings are that (1) combining existing location extraction tools is effective for precision-oriented or recall-oriented results, (2) enriching the tweet representation is effective for predicting whether a tweet contains a location, (3) words appearing in a geography gazetteer and the occurrence of a preposition just before a proper noun are the two most important features for predicting the occurrence of a location in tweets, and (4) the accuracy of location extraction improves when it is possible to predict that there is a location in a tweet.
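To make finding (3) concrete, here is a small sketch that extracts the two features the study found most useful, gazetteer hits and a preposition immediately before a capitalised (proper-noun-like) token, as inputs a location-presence classifier might use. The gazetteer, preposition list and capitalisation heuristic are toy assumptions.

```python
# Toy feature extraction for predicting whether a tweet mentions a location.
GAZETTEER = {"paris", "london", "texas", "nile"}
PREPOSITIONS = {"in", "at", "near", "from", "to"}

def location_features(tweet: str) -> dict:
    tokens = tweet.split()
    gaz_hits = sum(t.lower().strip(".,!?") in GAZETTEER for t in tokens)
    prep_before_propn = any(
        prev.lower() in PREPOSITIONS and cur[:1].isupper()
        for prev, cur in zip(tokens, tokens[1:])
    )
    return {"gazetteer_hits": gaz_hits, "prep_before_proper_noun": prep_before_propn}

print(location_features("Flooding reported in Paris near the river"))
# {'gazetteer_hits': 1, 'prep_before_proper_noun': True}
```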

9.
The exploration of legal documents in the context of the Brazilian Judiciary lacks a reliable annotated corpus to support the development of new Natural Language Processing (NLP) applications. This paper therefore presents a step towards exploring legal decisions with Named Entity Recognition (NER) in the context of the Brazilian Supreme Court (STF). We present a case study on the fine-grained annotation of legal decisions, performed by law students as annotators, in which two levels of nested legal entities were annotated. The nested entities, mapped in a preliminary study, comprise four coarse-grained legal named entities and twenty-four nested (fine-grained) ones. The final result is a corpus of 594 decisions published by the STF, annotated by the 76 law students with the highest average inter-annotator agreement scores. We also present two NER baselines based on Conditional Random Fields (CRFs) and Bidirectional Long Short-Term Memory networks (BiLSTMs). This corpus is the first of its kind and the most extensive corpus in Portuguese dedicated to legal named entity recognition; it is open and available to support further research in similar contexts.

10.
Text simplification and text summarisation are related but different sub-tasks in Natural Language Generation. Whereas summarisation attempts to reduce the length of a document whilst keeping the original meaning, simplification attempts to reduce the complexity of a document. In this work, we combine the two tasks of summarisation and simplification using a novel hybrid architecture of abstractive and extractive summarisation called HTSS. We extend the well-known pointer-generator model for the combined task of summarisation and simplification. We collected our parallel corpus from the simplified summaries written by domain experts published on the science news website EurekaAlert (www.eurekalert.org). Our results show that our proposed HTSS model outperforms neural text simplification (NTS) on the SARI score and abstractive text summarisation (ATS) on the ROUGE score. We further introduce a new metric (CSS1) which combines SARI and ROUGE, and demonstrate that our proposed HTSS model outperforms NTS and ATS on the joint task of simplification and summarisation by 38.94% and 53.40%, respectively. We provide all code, models and corpora to the scientific community for future research at the following URL: https://github.com/slab-itu/HTSS/.
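The abstract does not give the formula for CSS1, so the sketch below is an assumption for illustration only: one plausible way to fold a simplification score (SARI) and a summarisation score (ROUGE) into a single number is their harmonic mean, which rewards systems that do reasonably well on both.

```python
# Illustrative combined score; NOT the CSS1 definition from the paper.
def combined_score(sari: float, rouge: float) -> float:
    """Harmonic mean of SARI and ROUGE (both on a 0-100 scale). Assumed formula."""
    if sari + rouge == 0:
        return 0.0
    return 2 * sari * rouge / (sari + rouge)

print(combined_score(38.0, 42.5))   # ~40.1
```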

11.
Information filtering has long been a major task of study in the field of information retrieval (IR), focusing on filtering well-formed documents such as news articles. Recently, more interest has been directed towards applying filtering tasks to user-generated content such as microblogs. Several earlier studies investigated microblog filtering for focused topics. Another vital filtering scenario in microblogs targets the detection of posts that are relevant to long-standing broad and dynamic topics, i.e., topics spanning several subtopics that change over time. This type of filtering in microblogs is essential for many applications, such as social studies on large events and news tracking of temporal topics. In this paper, we introduce an adaptive microblog filtering task that focuses on tracking topics of a broad and dynamic nature. We propose an entirely unsupervised approach that adapts to new aspects of the topic to retrieve relevant microblogs. We evaluated our filtering approach on 6 broad topics, each tested on 4 different time periods over 4 months. Experimental results showed that, on average, our approach achieved an 84% increase in recall relative to the baseline approach, while maintaining acceptable precision with a drop of about 8%. Our filtering method is currently implemented on TweetMogaz, a news portal generated from tweets. The website compiles the stream of Arabic tweets and detects tweets relevant to different regions in the Middle East, which are presented in the form of comprehensive reports that include the top stories and news in each region.

12.
13.
Cross-genre author profiling aims to build generalized models for predicting the profile traits of authors that are useful across different text genres, for computer forensics, marketing, and other applications. The cross-genre author profiling task becomes challenging when dealing with low-resourced languages due to the lack of standard corpora and methods. The task becomes even more challenging when the data is code-switched, i.e., informal and unstructured. In previous studies, the problem of cross-genre author profiling has mainly been explored for monolingual texts in highly resourced languages (English, Spanish, etc.). However, it has not been thoroughly explored for code-switched text, which is widely used for communication over social media. To fill this gap, we propose a transfer learning-based solution for the cross-genre author profiling task on code-switched (English–RomanUrdu) text using three widely known genres: Facebook comments/posts, tweets, and SMS messages. In this article, we first experimented with traditional machine learning, deep learning and pre-trained transfer learning models (MBERT, XLMRoBERTa, ULMFiT, and XLNET) for the same-genre and cross-genre gender identification tasks. We then propose a novel Trans-Switch approach that focuses on the code-switching nature of the text and trains on specialized language models. In addition, we developed three RomanUrdu-to-English translated corpora to study the impact of translation on author profiling tasks. The results show that the proposed Trans-Switch model outperforms the baseline deep learning and pre-trained transfer learning models for the cross-genre author profiling task on code-switched text. Further, the experimentation also shows that translating the RomanUrdu text does not improve results.

14.
Rumour stance classification, defined as classifying the stance of specific social media posts as supporting, denying, querying or commenting on an earlier post, is of increasing interest to researchers. While most previous work has focused on using individual tweets as classifier inputs, here we report on the performance of sequential classifiers that exploit the discourse features inherent in social media interactions, or 'conversational threads'. Testing the effectiveness of four sequential classifiers – Hawkes Processes, Linear-Chain Conditional Random Fields (Linear CRF), Tree-Structured Conditional Random Fields (Tree CRF) and Long Short-Term Memory networks (LSTM) – on eight datasets associated with breaking news stories, and looking at different types of local and contextual features, our work sheds new light on the development of accurate stance classifiers. We show that sequential classifiers that exploit discourse properties in social media conversations while using only local features outperform non-sequential classifiers. Furthermore, we show that an LSTM using a reduced set of features can outperform the other sequential classifiers; this performance is consistent across datasets and across types of stance. Finally, our work also analyses the different features under study, identifying those that best help characterise and distinguish between stances, such as supporting tweets being more likely to be accompanied by evidence than denying tweets. We also set forth a number of directions for future research.

15.
16.
The emergence of social media and the huge number of opinions posted every day have influenced online reputation management. Reputation experts need to filter and control what is posted online and, more importantly, determine whether an online post is going to have positive or negative implications for the entity of interest. This task is challenging, considering that there are posts that have implications for an entity's reputation but do not express any sentiment. In this paper, we propose two approaches for propagating sentiment signals to estimate the reputation polarity of tweets. The first approach is based on sentiment lexicon augmentation, whereas the second is based on direct propagation of sentiment signals to tweets that discuss the same topic. In addition, we present a polar fact filter that is able to differentiate between reputation-bearing and reputation-neutral tweets. Our experiments indicate that weakly supervised annotation of reputation polarity is feasible and that sentiment signals can be propagated to effectively estimate the reputation polarity of tweets. Finally, we show that learning PMI values from the training data is the most effective approach for reputation polarity analysis.
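The sketch below shows one simple way PMI-style polarity scores can be learned from labelled training tweets: compute pointwise mutual information between a term and each polarity class, then take the difference. The counting scheme uses crude add-one smoothing and toy data; it is an illustration of the general technique, not the paper's exact formulation.

```python
# Sketch: learn PMI-based polarity scores for terms from labelled tweets.
import math
from collections import Counter

train = [("great service from the airline", "positive"),
         ("flight delayed again terrible", "negative"),
         ("the airline lost my luggage terrible", "negative")]

word_class, word_count, class_count = Counter(), Counter(), Counter()
total = 0
for text, label in train:
    for w in text.split():
        word_class[(w, label)] += 1
        word_count[w] += 1
        class_count[label] += 1
        total += 1

def pmi(word, label):
    joint = (word_class[(word, label)] + 1) / (total + 1)
    p_word = (word_count[word] + 1) / (total + 1)
    p_class = (class_count[label] + 1) / (total + 1)
    return math.log(joint / (p_word * p_class))

def polarity(word):
    # Positive values lean towards positive reputation polarity.
    return pmi(word, "positive") - pmi(word, "negative")

print(round(polarity("terrible"), 2), round(polarity("great"), 2))
```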

17.
Stance is defined as the expression of a speaker's standpoint towards a given target or entity. To date, the most reliable method for measuring stance has been the opinion survey. However, people's increased reliance on social media makes these online platforms an essential source of complementary information about public opinion. Our study contributes to the discussion surrounding replicable methods for reliable stance detection by establishing a rule-based model, which we replicated independently for several targets. To test our model, we relied on a widely used dataset of annotated tweets - the SemEval Task 6A dataset, which contains 5 targets with 4,163 manually labelled tweets. We relied on "off-the-shelf" sentiment lexica to expand the scope of our custom dictionaries, while also integrating linguistic markers and using word-pair dependency information to conduct stance classification. While positive and negative evaluative words are the clearest markers of stance, we demonstrate the added value of linguistic markers in identifying the direction of the stance more precisely. Our model achieves an average classification accuracy of 75% (ranging from 67% to 89% across targets). The study concludes by discussing practical implications and outlooks for future research, highlighting that each target poses specific challenges to stance detection.
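As a toy illustration of the general rule-based idea, the sketch below lets evaluative lexicon words set the stance direction and a simple negation marker flip it. The lexica, the negation rule and the label set mirror the SemEval-style FAVOR/AGAINST/NONE scheme but are illustrative, not the authors' dictionaries or rules.

```python
# Toy lexicon-based stance rules with a one-word negation scope.
POSITIVE = {"support", "love", "great", "agree"}
NEGATIVE = {"oppose", "hate", "terrible", "disagree"}
NEGATORS = {"not", "never", "no"}

def stance(tweet: str) -> str:
    score, negate = 0, False
    for tok in tweet.lower().split():
        if tok in NEGATORS:
            negate = True
            continue
        value = 1 if tok in POSITIVE else -1 if tok in NEGATIVE else 0
        score += -value if negate else value
        if value:                 # negation only scopes over the next evaluative word
            negate = False
    return "FAVOR" if score > 0 else "AGAINST" if score < 0 else "NONE"

print(stance("I do not support this policy"))   # AGAINST
```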

18.
Depression is a widespread and intractable problem in modern society, which may lead to suicidal ideation and behavior. Analyzing depression or suicide risk based on posts from social media such as Twitter or Reddit has achieved great progress in recent years. However, most work focuses on English social media, and depression prediction is typically formalized as predicting whether depression is present or absent. In this paper, we construct a human-annotated dataset for depression analysis from Chinese microblog reviews, which includes 6,100 manually annotated posts. Our dataset covers two fine-grained tasks, namely depression degree prediction and depression cause prediction. The objective of the former task is to classify a microblog post into one of 5 categories based on the degree of depression, while the objective of the latter is to select one or multiple reasons causing the depression from 7 predefined categories. To set up a benchmark, we design a neural model for joint depression degree and cause prediction, and compare it with several widely used neural models such as TextCNN, BiLSTM and BERT. Our model outperforms the baselines and achieves F1 of 65+% for depression degree prediction, and 70+% F1 and 90+% AUC for depression cause prediction, which shows that neural models achieve promising results but there is still room for improvement. Our work extends the area of social-media-based depression analysis, and our annotated data and code can also facilitate related research.
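A sketch of what a joint model for the two tasks can look like: a shared encoder with one 5-way softmax head for depression degree and one 7-way multi-label (sigmoid) head for depression causes, trained with a summed loss. The encoder, dimensions and dummy labels are assumptions for illustration, not the authors' architecture.

```python
# Sketch: shared encoder + two task heads for joint degree/cause prediction.
import torch
import torch.nn as nn

class JointDepressionModel(nn.Module):
    def __init__(self, emb_dim=768, hidden=256, n_degrees=5, n_causes=7):
        super().__init__()
        self.encoder = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.degree_head = nn.Linear(2 * hidden, n_degrees)  # 5-way classification
        self.cause_head = nn.Linear(2 * hidden, n_causes)    # 7-way multi-label

    def forward(self, x):                        # x: (batch, seq_len, emb_dim)
        _, (h, _) = self.encoder(x)
        h = torch.cat([h[0], h[1]], dim=1)
        return self.degree_head(h), self.cause_head(h)

model = JointDepressionModel()
degree_logits, cause_logits = model(torch.randn(2, 40, 768))   # 2 dummy posts
loss = (nn.CrossEntropyLoss()(degree_logits, torch.tensor([0, 3])) +
        nn.BCEWithLogitsLoss()(cause_logits, torch.randint(0, 2, (2, 7)).float()))
print(loss.item())
```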

19.
Negation remains a challenging problem in the context of Sentiment Analysis. The study of negation requires the correct identification of negation markers, their scope, and the interpretation of how negation affects the words within that scope, that is, whether it modifies their meaning or not and, if so, whether it reverses, reduces or increases their polarity value. In addition, if we are interested in managing reviews in languages other than English, the issue becomes even more problematic due to the lack of resources. The present work shows the validity of the SFU ReviewSP-NEG corpus, which we annotated at the negation level, for training supervised polarity classification systems in Spanish. The assessment involved the comparison of different supervised models. The results show the validity of the corpus and allow us to state that annotating how negation affects the words within its scope is important. Therefore, we propose to add a new phase for tackling negation in polarity classification systems (phase iii): i) identification of negation cues, ii) determination of the scope of negation, iii) identification of how negation affects the words within its scope, and iv) polarity classification taking negation into account.
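A tiny sketch of phase (iii): once negation cues and their scope are known, decide how each in-scope word's polarity value is modified. The adjustment factors and the example values are illustrative assumptions, not the corpus annotation scheme.

```python
# Sketch of phase (iii): apply the negation effect to an in-scope word's polarity.
def adjust_polarity(polarity: float, effect: str) -> float:
    if effect == "reverse":
        return -polarity
    if effect == "reduce":
        return polarity * 0.5        # illustrative attenuation factor
    if effect == "increment":
        return polarity * 1.5        # illustrative intensification factor
    return polarity                  # negation does not change the meaning

# e.g. "no es bueno": 'bueno' (+1.0) lies inside a reversing negation scope
print(adjust_polarity(1.0, "reverse"))    # -1.0
```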

20.
Social networks like Twitter are a good means for people to express themselves and ask for help in times of crisis. However, to provide help, authorities need to identify informative posts on the network amidst the vast amount of non-informative ones, to better know what is actually happening. Traditional methods for identifying informative posts emphasise the presence or absence of certain words, which limits their ability to classify these posts. In contrast, in this paper we propose to consider the overall distribution of words in a post. To do this, based on the distributional hypothesis in linguistics, we assume that each tweet is a distribution from which we have drawn a sample of words. Building on recent developments in learning methods, namely learning on distributions, we propose an approach which identifies informative tweets using this distributional assumption. Extensive experiments have been performed on Twitter data from more than 20 crisis incidents covering nearly all types of incidents. These experiments show the superiority of the proposed approach in a number of real crisis incidents. This implies that better modelling of a tweet's content, based on recent advances in estimating distributions and on domain-specific knowledge for various types of crisis incidents such as floods or earthquakes, may help achieve higher accuracy in this task.
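The "learning on distributions" machinery is not spelled out in this abstract; a simple related instance, sketched below under that assumption, treats each tweet as a sample of word vectors, represents it by their empirical mean (a mean embedding of the word sample), and trains a standard classifier on that representation. The word vectors, tweets and labels are toy data.

```python
# Sketch: represent each tweet by the mean of its word vectors, then classify.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
word_vectors = {w: rng.normal(size=50) for w in
                "flood water rescue help bridge sunny lunch selfie".split()}

def tweet_embedding(tweet: str) -> np.ndarray:
    vecs = [word_vectors[w] for w in tweet.split() if w in word_vectors]
    return np.mean(vecs, axis=0) if vecs else np.zeros(50)

tweets = ["flood water rising near bridge", "need rescue help flood",
          "sunny lunch today", "great selfie at lunch"]
labels = [1, 1, 0, 0]                      # 1 = informative, 0 = not informative

X = np.vstack([tweet_embedding(t) for t in tweets])
clf = SVC(kernel="linear").fit(X, labels)
print(clf.predict([tweet_embedding("flood help needed")]))
```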
