Similar literature: 20 matching records found (search time: 31 ms).
1.
While geographical metadata referring to the originating locations of tweets provides valuable information for effective spatial analysis in social networks, the scarcity of such geotagged tweets limits their usability. In this work, we propose a content-based location prediction method for tweets that analyzes the geographical distribution of tweet texts using Kernel Density Estimation (KDE). The primary novelty of our work is to determine a different kernel-function setting for every term in a tweet based on the location indicativeness of that term. Our proposed method, which we call locality-adapted KDE, uses information-theoretic metrics and requires no parameter tuning for these settings. As a further enhancement of the term-level distribution model, we describe an analysis of spatial point patterns in tweet texts that identifies bigrams exhibiting significant deviation from the underlying unigram patterns. We present an expansion of the feature space using the selected bigrams and show that it yields a further improvement in the prediction accuracy of our locality-adapted KDE. We demonstrate that this expansion results in only a limited increase in the size of the feature space and does not hinder online localization of tweets. The methods we propose rely purely on statistical approaches and require no language-specific settings. Experiments conducted on three tweet sets from different countries show that our proposed solution outperforms existing state-of-the-art techniques, yielding significantly more accurate predictions.
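A minimal sketch of term-level KDE location scoring in this spirit, not the paper's exact locality-adapted formulation: each term gets its own kernel density over training coordinates, and a tweet's location is the grid point maximizing the summed log-density of its terms. The per-term bandwidth heuristic below is an assumption standing in for the paper's information-theoretic setting, and the toy data is invented.

```python
# Sketch: per-term KDE over (lat, lon) points; bandwidth heuristic is a
# stand-in for the paper's information-theoretic locality setting.
import numpy as np
from scipy.stats import gaussian_kde

def fit_term_kdes(tweets):
    """tweets: list of (set_of_terms, (lat, lon)). Returns term -> KDE."""
    coords_by_term = {}
    for terms, (lat, lon) in tweets:
        for t in terms:
            coords_by_term.setdefault(t, []).append((lat, lon))
    kdes = {}
    for term, coords in coords_by_term.items():
        if len(coords) < 3:           # too few points to estimate a density
            continue
        pts = np.array(coords).T       # shape (2, n), as gaussian_kde expects
        spread = pts.std(axis=1).mean()
        # Heuristic: geographically focused terms get a narrower kernel.
        kdes[term] = gaussian_kde(pts, bw_method=max(0.05, min(1.0, spread)))
    return kdes

def predict_location(terms, kdes, grid):
    """Score each candidate grid point by summed log-density over terms."""
    scores = np.zeros(grid.shape[1])
    for t in terms:
        if t in kdes:
            scores += np.log(kdes[t](grid) + 1e-12)
    return grid[:, scores.argmax()]

# Toy usage: a 2-D grid of candidate (lat, lon) points.
train = [({"bridge", "bay"}, (37.8, -122.4)), ({"bridge"}, (37.7, -122.5)),
         ({"bridge"}, (37.9, -122.2)), ({"snow"}, (61.2, -149.9)),
         ({"snow"}, (61.1, -149.8)), ({"snow"}, (61.3, -150.1))]
kdes = fit_term_kdes(train)
lats, lons = np.meshgrid(np.linspace(30, 65, 50), np.linspace(-155, -120, 50))
grid = np.vstack([lats.ravel(), lons.ravel()])
print(predict_location({"bridge"}, kdes, grid))
```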

2.
3.
Politicians’ tweets can have important political and economic implications. However, limited context makes it hard for readers to instantly and precisely understand them, especially from a causal perspective. The triggers for these tweets may have been reported in news prior to the tweets, but simply finding similar news articles would not serve the purpose, for the following reasons. First, readers may only be interested in finding the reasons and contexts (which we call causal backgrounds) for a certain part of a tweet. Intuitively, such content would be politically relevant and accord with the public's recent attention, which is not usually reflected within the tweet's own context. Moreover, the content should be human-readable, while the noisy and informal nature of tweets hinders regular Open Information Extraction systems. Second, similarity does not capture causality, and the causality between tweet contents and news contents is beyond the scope of existing causality extraction tools. Meanwhile, constructing a high-quality tweet-to-intent dataset is non-trivial. We propose the first end-to-end framework for discovering causal backgrounds of politicians' tweets by: (1) designing an Open IE system with rule-free representations for tweets; (2) introducing sources such as Wikipedia linkage and edit history to identify focal contents; and (3) finding implicit causalities between different contexts using explicit causalities learned elsewhere. We curate a comprehensive dataset of interpretations from political journalists for 533 tweets from 5 US politicians. On average, we obtain the correct answer within the top-2 recommendations. We make our dataset and framework code publicly available.
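A toy approximation of one step of such a pipeline: harvest explicit cause-effect pairs from news sentences with cue-phrase patterns, then rank harvested causes as candidate backgrounds for a tweet segment by lexical overlap with the effect side. The cue list and the overlap score are illustrative assumptions, not the paper's method.

```python
# Toy sketch: explicit causality harvesting + overlap-based ranking.
import re

CUE = re.compile(r"^(?P<effect>.+?)\s+(?:because|due to|after|in response to)\s+(?P<cause>.+)$",
                 re.IGNORECASE)

def harvest_causal_pairs(news_sentences):
    pairs = []
    for s in news_sentences:
        m = CUE.match(s.strip().rstrip("."))
        if m:
            pairs.append((m.group("cause"), m.group("effect")))
    return pairs

def rank_backgrounds(tweet_segment, pairs, top_k=2):
    seg = set(tweet_segment.lower().split())
    scored = []
    for cause, effect in pairs:
        overlap = len(seg & set(effect.lower().split()))
        scored.append((overlap, cause))
    scored.sort(reverse=True)
    return [cause for score, cause in scored[:top_k] if score > 0]

news = ["Markets fell sharply after the new tariffs were announced.",
        "The senator cancelled the rally due to severe weather warnings."]
print(rank_backgrounds("why did markets fall today", harvest_causal_pairs(news)))
```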

4.
Unstructured tweet feeds are becoming a source of real-time information for various events. However, extracting actionable information in real time from this unstructured text data is a challenging task. Hence, researchers are employing word embedding approaches to classify unstructured text data. We set our study in the context of the 2014 Ebola and 2016 Zika outbreaks and probed the accuracy of domain-specific word vectors for identifying crisis-related actionable tweets. Our findings suggest that relatively small domain-specific input corpora drawn from Twitter are better at capturing meaningful semantic relationships than generic pre-trained Word2Vec (trained on Google News) or GloVe (from the Stanford NLP group) vectors. However, quality domain-specific tweet corpora are normally scant during the early stages of an outbreak, and identifying actionable tweets during those early stages is crucial to stemming the proliferation of an outbreak. To overcome this challenge, we consider scholarly abstracts related to the Ebola and Zika viruses from PubMed and probe the efficiency of cross-domain resource utilization for word vector generation. Our findings demonstrate the relevance of PubMed abstracts for training when Twitter data (as an input corpus) are scant during the early stages of an outbreak. Thus, this approach can be implemented to handle future outbreaks in real time. We also explore the accuracy of our word vectors across various model architectures and hyper-parameter settings. We observe that Skip-gram achieves better accuracy than CBOW, and that higher dimensions yield better accuracy.
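A hedged sketch of the comparison described, training domain-specific vectors with gensim and contrasting skip-gram (sg=1) against CBOW (sg=0). The corpus and hyper-parameters here are illustrative, not the study's settings.

```python
# Sketch: domain-specific Word2Vec, skip-gram vs CBOW, with gensim.
from gensim.models import Word2Vec

corpus = [["ebola", "outbreak", "reported", "in", "guinea"],
          ["zika", "virus", "spread", "by", "mosquitoes"],
          ["volunteers", "needed", "for", "ebola", "treatment", "centers"]]

for sg, name in [(1, "skip-gram"), (0, "CBOW")]:
    model = Word2Vec(sentences=corpus, vector_size=100, window=5,
                     min_count=1, sg=sg, epochs=50, seed=42)
    # With a real corpus, most_similar() would surface domain relations.
    print(name, model.wv.most_similar("ebola", topn=2))
```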

5.
Journalists, emergency responders, and the general public use Twitter during disasters as an effective means to disseminate emergency information. However, there is a growing concern about the credibility of disaster tweets. This concern negatively influences Twitter users' decisions about whether to retweet information, which can delay the dissemination of accurate, and sometimes essential, communications during a crisis. Although verifying information credibility is often a time-consuming task requiring considerable cognitive effort, researchers have yet to explore how people manage this task while using Twitter during disaster situations. To address this, we adopt the Heuristic-Systematic Model of information processing to understand how Twitter users make retweet decisions, categorizing tweet content as systematically processed information and a Twitter user's profile as heuristically processed information. We then empirically examine tweet content and Twitter user profiles, as well as how they interact, in verifying the credibility of tweets collected during two disaster events: the 2011 Queensland floods and the 2013 Colorado floods. Our empirical results suggest that using a Twitter profile as source-credibility information makes it easier for Twitter users to assess the credibility of disaster tweets. Our study also reveals that the Twitter user profile is a reliable source of credibility information, and it enhances our understanding of timely communication on Twitter during disasters.
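An illustrative sketch of the kind of analysis described: a logistic regression of retweet decisions on a systematic cue (content quality), a heuristic cue (profile credibility), and their interaction. The feature names and synthetic data are assumptions for illustration, not the study's variables.

```python
# Sketch: retweet decision as content x profile logistic regression.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
content_score = rng.normal(size=n)        # systematic cue (tweet content)
profile_score = rng.normal(size=n)        # heuristic cue (user profile)
interaction = content_score * profile_score
logit = 0.8 * content_score + 0.5 * profile_score + 0.4 * interaction
retweeted = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X = np.column_stack([content_score, profile_score, interaction])
model = LogisticRegression().fit(X, retweeted)
print(dict(zip(["content", "profile", "content x profile"], model.coef_[0])))
```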

6.
Five hundred million tweets are posted daily, making Twitter a major social media platform from which topical information on events can be extracted. These events are represented by three main dimensions: time, location, and entity-related information. The focus of this paper is location, an essential dimension for geo-spatial applications, whether supporting rescue operations during a disaster or providing contextual recommendations. While the first type of application needs high recall, the second is more precision-oriented. This paper studies the recall/precision trade-off, combining different methods to extract locations. In the context of short posts, applying tools developed for well-formed natural language text is not sufficient, since tweets are generally too short and too informal to be linguistically correct. Bearing in mind the high number of posts that need to be handled, we hypothesize that predicting whether a post contains a location or not can make location extractors more focused and thus more effective. We introduce a model to predict whether a tweet contains a location or not and show that location prediction is a useful pre-processing step for location extraction. We define a number of new tweet features and conduct an intensive evaluation. Our findings are that (1) combining existing location extraction tools is effective for precision-oriented or recall-oriented results, (2) enriching the tweet representation is effective for predicting whether a tweet contains a location or not, (3) words appearing in a geography gazetteer and the occurrence of a preposition just before a proper noun are the two most important features for predicting the occurrence of a location in tweets, and (4) the accuracy of location extraction improves when it is possible to predict that there is a location in a tweet.
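A minimal sketch of the pre-processing idea: predict whether a tweet mentions a location before running heavier extractors. The two hand-built features mirror the abstract's findings (gazetteer hits, and a preposition immediately before a capitalised word); the gazetteer and training data are toy assumptions.

```python
# Sketch: "does this tweet contain a location?" binary pre-classifier.
from sklearn.linear_model import LogisticRegression

GAZETTEER = {"paris", "london", "texas", "nile"}
PREPOSITIONS = {"in", "at", "near", "from", "to"}

def features(tweet):
    tokens = tweet.split()
    gaz_hits = sum(t.lower().strip(".,!?") in GAZETTEER for t in tokens)
    prep_propn = sum(1 for a, b in zip(tokens, tokens[1:])
                     if a.lower() in PREPOSITIONS and b[:1].isupper())
    return [gaz_hits, prep_propn]

train = [("huge floods in Paris right now", 1),
         ("traffic jam near London bridge", 1),
         ("i love this song so much", 0),
         ("feeling sleepy today lol", 0)]
X = [features(t) for t, _ in train]
y = [label for _, label in train]
clf = LogisticRegression().fit(X, y)
print(clf.predict([features("wildfire spreading from Texas")]))
```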

7.
Users’ ability to retweet information has made Twitter one of the most prominent social media platforms for disseminating emergency information during disasters. However, few studies have examined how Twitter's features support the different communication patterns that occur during different phases of disaster events. Based on the disaster communication literature and Media Synchronicity Theory, we identify distinct disaster phases and the two communication types, crisis communication and risk communication, that occur during those phases. We investigate how Twitter's representational features, including words, URLs, hashtags, and hashtag importance, influence the average retweet time (the average time it takes for a retweet to occur), as well as how such effects differ depending on the type of disaster communication. Our analysis of tweets from the 2013 Colorado floods found that adding more URLs to tweets increases the average retweet time more in risk-related tweets than in crisis-related tweets. Further, including key disaster-related hashtags contributed to faster retweets in crisis-related tweets than in risk-related tweets. Our findings suggest that the influence of Twitter's media capabilities on rapid tweet propagation during disasters may differ based on the communication processes.
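A sketch of the interaction analysis described above: an OLS model of average retweet time on representational features crossed with communication type. The column names and synthetic data are assumptions, not the study's dataset.

```python
# Sketch: retweet time ~ features x communication type (OLS with interactions).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 500
df = pd.DataFrame({
    "n_urls": rng.integers(0, 3, n),
    "n_hashtags": rng.integers(0, 4, n),
    "comm_type": rng.choice(["crisis", "risk"], n),
})
# Simulate the reported pattern: URLs slow risk tweets, hashtags speed crisis tweets.
base = 10 + 2.0 * df.n_urls * (df.comm_type == "risk") \
          - 1.0 * df.n_hashtags * (df.comm_type == "crisis")
df["retweet_time"] = base + rng.normal(0, 1, n)

model = smf.ols("retweet_time ~ n_urls * comm_type + n_hashtags * comm_type",
                data=df).fit()
print(model.params.filter(like=":"))   # the interaction terms
```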

8.
The widespread popularity and worldwide application of social networks have raised interest in the analysis of content created on these networks. One such analytical application for social networks, including Twitter, is identifying the location of events such as political and social incidents and natural disasters. The present study focuses on the localization of traffic accidents. Outdated and inaccurate information in user profiles, the absence of location data in tweet texts, and the limited number of geotagged posts are among the challenges that location estimation must tackle. Adopting the Dempster–Shafer Evidence Theory, the present study estimates the location of accidents using a combination of user profiles, tweet texts, and the place attachments in tweets. The results indicate improved performance in terms of error distance and average error distance compared to previously developed methods: the proposed method reduces the error distance by 26%.
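A minimal sketch of Dempster's rule of combination over a small frame of candidate accident locations, combining mass functions derived from a user profile, the tweet text, and a place attachment. The mass values are invented for illustration; the paper's actual evidence construction is not shown.

```python
# Sketch: Dempster's rule over frozenset focal elements.
from itertools import product

def combine(m1, m2):
    """Dempster's rule of combination for two mass functions."""
    combined, conflict = {}, 0.0
    for (a, w1), (b, w2) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + w1 * w2
        else:
            conflict += w1 * w2
    if conflict >= 1.0:
        raise ValueError("total conflict: sources are irreconcilable")
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

A, B = frozenset({"districtA"}), frozenset({"districtB"})
AB = A | B
profile = {A: 0.5, AB: 0.5}          # user profile weakly supports A
text = {A: 0.3, B: 0.2, AB: 0.5}     # tweet text is ambiguous
place = {A: 0.6, AB: 0.4}            # place attachment supports A

belief = combine(combine(profile, text), place)
print(max(belief.items(), key=lambda kv: kv[1]))
```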

9.
Stance is defined as the expression of a speaker's standpoint towards a given target or entity. To date, the most reliable method for measuring stance is the opinion survey. However, people's increased reliance on social media makes these online platforms an essential source of complementary information about public opinion. Our study contributes to the discussion surrounding replicable methods for reliable stance detection by establishing a rule-based model, which we replicated for several targets independently. To test our model, we relied on a widely used dataset of annotated tweets, the SemEval Task 6A dataset, which contains 5 targets with 4,163 manually labelled tweets. We relied on "off-the-shelf" sentiment lexica to expand the scope of our custom dictionaries, while also integrating linguistic markers and using word-pair dependency information to conduct stance classification. While positive and negative evaluative words are the clearest markers of stance, we demonstrate the added value of linguistic markers for identifying the direction of the stance more precisely. Our model achieves an average classification accuracy of 75% (ranging from 67% to 89% across targets). We conclude by discussing practical implications and outlooks for future research, highlighting that each target poses specific challenges to stance detection.
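A toy rule-based stance classifier in the spirit of the model described: lexicon polarity plus a simple negation rule as one "linguistic marker". The lexicon and rules are illustrative stand-ins for the paper's dictionaries and dependency-based word pairs.

```python
# Sketch: lexicon + negation rule-based stance classification.
POLARITY = {"great": 1, "support": 1, "love": 1,
            "terrible": -1, "oppose": -1, "hate": -1}
NEGATORS = {"not", "never", "no"}

def stance(tweet):
    tokens = [t.lower().strip(".,!?") for t in tweet.split()]
    score = 0
    for i, tok in enumerate(tokens):
        if tok in POLARITY:
            weight = POLARITY[tok]
            if i > 0 and tokens[i - 1] in NEGATORS:  # negation flips polarity
                weight = -weight
            score += weight
    return "FAVOR" if score > 0 else "AGAINST" if score < 0 else "NONE"

print(stance("I do not support this policy"))   # AGAINST
print(stance("love the new climate plan"))      # FAVOR
```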

10.
Climate change has become one of the most significant crises of our time. Public opinion on climate change is influenced by social media platforms such as Twitter and is often divided into believers and deniers. In this paper, we propose a framework to classify a tweet's stance on climate change (denier/believer). Existing approaches to stance detection and classification of climate change tweets either pay little attention to the characteristics of deniers' tweets or lack an appropriate architecture. The relevant literature reveals that the sentiment and time perspective of climate change conversations on Twitter have a major impact on public attitudes and environmental orientation. Therefore, in our study, we focus on exploring the role of temporal orientation and sentiment analysis (auxiliary tasks) in detecting the stance of tweets on climate change (the main task). Our proposed framework STASY integrates word- and sentence-based feature encoders with intra-task and shared-private attention frameworks to better encode the interactions between task-specific and shared features. We conducted our experiments on our novel curated climate change CLiCS dataset (2,465 denier and 7,235 believer tweets), two publicly available climate change datasets (ClimateICWSM-2022 and ClimateStance-2022), and two benchmark stance detection datasets (SemEval-2016 and COVID-19-Stance). Experiments show that our proposed approach improves stance detection performance over the baseline methods by benefiting from the auxiliary tasks, with average F1 improvements of 12.14% on our climate change dataset, 15.18% on ClimateICWSM-2022, 12.94% on ClimateStance-2022, 19.38% on SemEval-2016, and 35.01% on COVID-19-Stance.
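A compact sketch of the general mechanism, hard-parameter-sharing multi-task learning: a shared text encoder with separate heads for stance (main task) and the sentiment and temporal-orientation auxiliary tasks. This is a generic simplification, not the STASY architecture; all shapes, label counts, and loss weights are assumptions.

```python
# Sketch: shared encoder + task heads, weighted multi-task loss (PyTorch).
import torch
import torch.nn as nn

class MultiTaskStance(nn.Module):
    def __init__(self, vocab_size=5000, emb_dim=64, hidden=64):
        super().__init__()
        self.embed = nn.EmbeddingBag(vocab_size, emb_dim)   # shared encoder
        self.shared = nn.Sequential(nn.Linear(emb_dim, hidden), nn.ReLU())
        self.stance_head = nn.Linear(hidden, 2)      # believer / denier
        self.sentiment_head = nn.Linear(hidden, 3)   # neg / neu / pos
        self.temporal_head = nn.Linear(hidden, 3)    # past / present / future

    def forward(self, token_ids, offsets):
        h = self.shared(self.embed(token_ids, offsets))
        return self.stance_head(h), self.sentiment_head(h), self.temporal_head(h)

model = MultiTaskStance()
loss_fn = nn.CrossEntropyLoss()
tokens = torch.tensor([1, 2, 3, 4, 5, 6])      # two tweets as bags of ids
offsets = torch.tensor([0, 3])
stance_y = torch.tensor([1, 0])
sent_y, temp_y = torch.tensor([2, 0]), torch.tensor([1, 1])

s, se, te = model(tokens, offsets)
# Auxiliary losses are down-weighted relative to the main stance task.
loss = loss_fn(s, stance_y) + 0.3 * loss_fn(se, sent_y) + 0.3 * loss_fn(te, temp_y)
loss.backward()
print(float(loss))
```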

11.
Coronavirus-related discussions have spiraled at an exponential rate since the initial outbreak. By the end of May 2020, more than 6 million people had been diagnosed with the infection. Twitter witnessed an outpouring of anxious tweets associated with the spread of the virus. Government and health officials replied to these troubling tweets, reassuring the public with regular alerts on the virus's progress and information for defending against it. We observe that social media users are worried about the Covid-19 crisis, and we identify three separate conversations: on virus contagion, on prevention, and on the economy. We analyze the tone of officials' tweets as alarming or reassuring and capture the responses of Twitter users to official communications. Such studies can provide insights to health officials and government agencies for crisis management, specifically regarding communicating emergency information to the public via social media to establish reassurance.

12.
We propose a CNN-BiLSTM-Attention classifier for online short messages in Chinese posted by users on government web portals, so that a message can be directed to one or more government offices. Our model carries out multi-label classification by making use of hierarchical text features together with label information. In particular, our method extracts label meanings, the CNN layer extracts local semantic features of the texts, the BiLSTM layer fuses the contextual features with the local semantic features, and the attention layer selects the most relevant features for each label. We evaluate our model on two large public corpora and on our high-quality handcrafted e-government multi-label dataset, which was constructed with the text annotation tool doccano and consists of 29,920 data points. Experimental results show that our proposed method is effective under common multi-label evaluation metrics, achieving micro-F1 scores of 77.22%, 84.42%, and 87.52%, and macro-F1 scores of 77.68%, 73.37%, and 83.57% on these three datasets respectively, confirming that our classifier is robust. We conduct an ablation study to evaluate our label embedding method and attention mechanism. Moreover, a case study on our handcrafted e-government multi-label dataset verifies that our model integrates all types of semantic information in short messages across different labels to achieve accurate text classification.
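A simplified Keras sketch of a CNN-BiLSTM-Attention multi-label classifier: Conv1D for local features, a BiLSTM for context, a small additive attention over time steps, and a sigmoid output layer. The label-embedding component of the paper is omitted, and all sizes are assumptions.

```python
# Sketch: CNN -> BiLSTM -> additive attention -> sigmoid multi-label output.
import tensorflow as tf
from tensorflow.keras import layers, Model

VOCAB, MAXLEN, N_LABELS = 20000, 100, 10

inp = layers.Input(shape=(MAXLEN,), dtype="int32")
x = layers.Embedding(VOCAB, 128)(inp)
x = layers.Conv1D(64, 3, padding="same", activation="relu")(x)       # local features
x = layers.Bidirectional(layers.LSTM(64, return_sequences=True))(x)  # context

# Additive attention: score each time step, softmax, weighted sum.
scores = layers.Dense(1)(x)                       # (batch, time, 1)
weights = layers.Softmax(axis=1)(scores)          # attention over time
context = layers.Lambda(lambda t: tf.reduce_sum(t[0] * t[1], axis=1))([x, weights])

out = layers.Dense(N_LABELS, activation="sigmoid")(context)  # multi-label
model = Model(inp, out)
model.compile(optimizer="adam", loss="binary_crossentropy")
model.summary()
```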

13.
User location data is valuable for diverse social media analytics. In this paper, we address the non-trivial task of estimating worldwide city-level Twitter user locations from historical tweets alone. We propose a purely unsupervised approach based on a synthetic geographic sampling of Google Trends (GT) city-level frequencies of tweet nouns, combined with three clustering algorithms. The approach was validated empirically on a recently collected dataset of 3,268 worldwide city-level locations of Twitter users, obtaining competitive results compared with a state-of-the-art Word Distribution (WD) user location estimation method. The best overall results were achieved by the GT noun DBSCAN (GTN-DB) method, which is computationally fast and correctly predicts the ground-truth locations of 15%, 23%, 39%, and 58% of the users for tolerance distances of 250 km, 500 km, 1,000 km, and 2,000 km, respectively.
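A sketch of the clustering step under stated assumptions: DBSCAN over synthetically sampled (lat, lon) points, taking the centroid of the largest cluster as the predicted home city. The sampled points stand in for the Google Trends-driven geographic sampling, and eps/min_samples are illustrative.

```python
# Sketch: DBSCAN over sampled coordinates, largest-cluster centroid.
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(7)
# Pretend GT frequencies made us sample many points near one city.
points = np.vstack([rng.normal([40.7, -74.0], 0.5, (80, 2)),   # around NYC
                    rng.normal([51.5, -0.1], 0.5, (20, 2))])   # around London

labels = DBSCAN(eps=1.0, min_samples=5).fit_predict(points)
clusters = [points[labels == k] for k in set(labels) if k != -1]
largest = max(clusters, key=len)
print("predicted location:", largest.mean(axis=0))   # cluster centroid
```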

14.
Information filtering has long been a major task of study in information retrieval (IR), focusing on filtering well-formed documents such as news articles. Recently, more interest has been directed towards applying filtering to user-generated content such as microblogs. Several earlier studies investigated microblog filtering for focused topics. Another vital filtering scenario in microblogs targets the detection of posts relevant to long-standing, broad, and dynamic topics, i.e., topics spanning several subtopics that change over time. This type of filtering is essential for many applications, such as social studies on large events and news tracking of temporal topics. In this paper, we introduce an adaptive microblog filtering task that focuses on tracking topics of broad and dynamic nature. We propose an entirely unsupervised approach that adapts to new aspects of the topic to retrieve relevant microblogs. We evaluated our filtering approach using 6 broad topics, each tested on 4 different time periods over 4 months. Experimental results showed that, on average, our approach achieved an 84% increase in recall relative to the baseline approach, while maintaining acceptable precision (a drop of about 8%). Our filtering method is currently implemented on TweetMogaz, a news portal generated from tweets. The website processes the stream of Arabic tweets and detects tweets relevant to different regions in the Middle East, presenting them as comprehensive reports of top stories and news in each region.
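A minimal sketch of the unsupervised adaptive-filtering idea: keep an evolving term profile of the broad topic, accept tweets that overlap with it, and fold accepted tweets back into the profile so new subtopics get picked up. The threshold and overlap score are illustrative assumptions, not the paper's model.

```python
# Sketch: adaptive topic filtering with an evolving term profile.
from collections import Counter

def adaptive_filter(stream, seed_terms, threshold=0.2, top_n=30):
    profile = Counter(seed_terms)
    accepted = []
    for tweet in stream:
        tokens = tweet.lower().split()
        vocab = set(t for t, _ in profile.most_common(top_n))
        score = sum(t in vocab for t in tokens) / max(len(tokens), 1)
        if score >= threshold:
            accepted.append(tweet)
            profile.update(tokens)        # adapt to emerging aspects
    return accepted

stream = ["protests erupt in the capital today",
          "new curfew announced after protests in capital",
          "curfew hours extended again tonight",     # relevant via adaptation
          "check out my new recipe for pancakes"]
print(adaptive_filter(stream, ["protests", "capital"]))
```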

15.
In the context of social media, users usually post information relevant to the contents of events mentioned in a Web document. This information possesses two important properties: (i) it reflects the content of an event, and (ii) it shares hidden topics with sentences in the main document. In this paper, we present a novel model that captures the relationships between document sentences and post information (comments or tweets) through shared hidden topics, and uses relevant post information for the summarization of Web documents. Unlike previous methods, which are usually based on hand-crafted features, our approach ranks document sentences and user posts based on their importance to the topics. The sentence-user-post relation is formulated in a shared-topic matrix, which represents their mutual reinforcement. Our proposed matrix co-factorization algorithm computes a score for each document sentence and user post and extracts the top-ranked document sentences and comments (or tweets) as a summary. We apply the model to the task of social context summarization on three datasets in two languages, English and Vietnamese, and also on DUC 2004 (a standard corpus for the traditional summarization task). According to the experimental results, our model significantly outperforms basic matrix factorization and achieves ROUGE scores competitive with state-of-the-art methods.
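A simplified stand-in for the co-factorization idea: factorize a joint TF-IDF matrix of document sentences and user posts into shared topics with plain NMF, then rank sentences by total topic weight. This replaces the paper's matrix co-factorization with ordinary NMF for brevity; the texts are toy examples.

```python
# Sketch: shared-topic factorization over sentences + posts, then ranking.
from sklearn.decomposition import NMF
from sklearn.feature_extraction.text import TfidfVectorizer

sentences = ["The storm flooded several streets downtown.",
             "City officials opened three emergency shelters.",
             "The mayor will speak at noon."]
posts = ["shelters are open near downtown!",
         "streets totally flooded, stay home"]

texts = sentences + posts
X = TfidfVectorizer().fit_transform(texts)
W = NMF(n_components=2, init="nndsvda", random_state=0).fit_transform(X)

# Rank only the document sentences by topic salience.
ranked = sorted(range(len(sentences)), key=lambda i: -W[i].sum())
print([sentences[i] for i in ranked[:2]])   # 2-sentence summary
```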

16.
As COVID-19 swept over the world, people discussed facts, expressed opinions, and shared sentiments about the pandemic on social media. Since policies such as travel restrictions and lockdowns in reaction to COVID-19 were made at different levels of society (e.g., schools and employers) and government, we build a large geo-tagged Twitter dataset titled UsaGeoCov19 and perform an exploratory analysis by geographic location. Specifically, we collect 650,563 unique geo-tagged tweets across the United States covering the date range from January 25 to May 10, 2020. Tweet locations enable us to conduct region-specific studies of, for example, tweeting volumes and sentiment, sometimes in response to local regulations and reported COVID-19 cases. During this period, many people started working from home. The gap between workdays and weekends in hourly tweet volumes inspires us to propose algorithms for estimating work engagement during the COVID-19 crisis. This paper also summarizes the themes and topics of tweets in our dataset using both social-media-specific signals (i.e., #hashtags and @mentions) and the latent Dirichlet allocation model. We welcome requests for data sharing and conversations for more insights. UsaGeoCov19 link: http://yunhefeng.me/geo-tagged_twitter_datasets/.
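A sketch of the topic-modelling step: latent Dirichlet allocation over tweet texts with scikit-learn. The four-tweet toy corpus stands in for UsaGeoCov19, and the number of topics is an assumption.

```python
# Sketch: LDA topics over tweet texts.
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

tweets = ["working from home again this week",
          "home office setup finally done",
          "new covid cases reported in the county",
          "county reports rise in covid cases"]

vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(tweets)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

terms = vec.get_feature_names_out()
for k, comp in enumerate(lda.components_):
    top = comp.argsort()[-3:][::-1]
    print(f"topic {k}:", [terms[i] for i in top])
```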

17.
The aim in multi-label text classification is to assign a set of labels to a given document. Previous classifier-chain and sequence-to-sequence models have been shown to have a powerful ability to capture label correlations. However, they rely heavily on label order, while labels in multi-label data are essentially an unordered set; the performance of these approaches therefore varies greatly depending on the order in which the labels are arranged. To avoid this dependence on label order, we design a reasoning-based algorithm named Multi-Label Reasoner (ML-Reasoner) for multi-label classification. ML-Reasoner employs a binary classifier to predict all labels simultaneously and applies a novel iterative reasoning mechanism to effectively utilize inter-label information, where each round of reasoning takes the previously predicted likelihoods for all labels as additional input. This approach is able to utilize information between labels while avoiding the issue of label-order sensitivity. Extensive experiments demonstrate that our method outperforms state-of-the-art approaches on the challenging AAPD dataset. We also apply our reasoning module to a variety of strong neural base models and show that it boosts performance significantly in each case.
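A minimal sketch of the iterative reasoning idea: predict all labels at once, then re-train and re-predict with the previous round's label probabilities appended to the features. This is a scikit-learn approximation of the mechanism, not the paper's neural model; the data and round count are assumptions.

```python
# Sketch: iterative reasoning via probability feedback features.
import numpy as np
from sklearn.datasets import make_multilabel_classification
from sklearn.linear_model import LogisticRegression
from sklearn.multioutput import MultiOutputClassifier

X, Y = make_multilabel_classification(n_samples=300, n_labels=3,
                                      n_classes=5, random_state=0)
probs = np.full((X.shape[0], Y.shape[1]), 0.5)   # uninformative round 0

for round_ in range(3):                          # iterative reasoning
    Xa = np.hstack([X, probs])                   # features + prev likelihoods
    clf = MultiOutputClassifier(LogisticRegression(max_iter=1000)).fit(Xa, Y)
    probs = np.column_stack([p[:, 1] for p in clf.predict_proba(Xa)])

pred = (probs > 0.5).astype(int)
print("subset accuracy:", (pred == Y).all(axis=1).mean())
```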

18.
Social networks like Twitter are good means for people to express themselves and ask for help in times of crisis. However, to provide help, authorities need to identify informative posts on the network among the vast amount of non-informative ones, to better understand what is actually happening. Traditional methods for identifying informative posts emphasize the presence or absence of certain words, which limits their ability to classify such posts. In contrast, in this paper we propose to consider the overall distribution of words in a post. Based on the distributional hypothesis in linguistics, we assume that each tweet is a distribution from which a sample of words has been drawn. Building on recent developments in learning methods, namely learning on distributions, we propose an approach that identifies informative tweets using this distributional assumption. Extensive experiments have been performed on Twitter data from more than 20 crisis incidents covering nearly all incident types. These experiments show the superiority of the proposed approach across a number of real crisis incidents. This implies that better modelling of tweet content, based on recent advances in estimating distributions and on domain-specific knowledge for particular incident types such as floods or earthquakes, may help achieve even higher accuracy on the task.
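A sketch of the distributional idea under stated assumptions: treat each tweet as a sample of word vectors, represent it by its mean embedding (a crude kernel-mean-embedding approximation), and classify with an RBF-kernel SVM. The tiny vocabulary, random vectors, and training labels are all invented for illustration.

```python
# Sketch: tweet = distribution of word vectors, mean embedding + RBF SVM.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(3)
vocab = ["flood", "rescue", "help", "lol", "movie", "pizza"]
word_vecs = {w: rng.normal(size=8) for w in vocab}

def mean_embedding(tweet):
    vecs = [word_vecs[t] for t in tweet.split() if t in word_vecs]
    return np.mean(vecs, axis=0) if vecs else np.zeros(8)

train = [("flood rescue help", 1), ("help flood rescue help", 1),
         ("rescue help flood", 1),
         ("lol movie pizza", 0), ("pizza movie lol", 0), ("movie lol", 0)]
X = np.array([mean_embedding(t) for t, _ in train])
y = [label for _, label in train]
clf = SVC(kernel="rbf").fit(X, y)
print(clf.predict([mean_embedding("flood help")]))   # informative -> 1
```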

19.
Modeling discussions on social networks is a challenging task, especially for sensitive topics such as politics or healthcare. However, the knowledge hidden in these debates helps to investigate trends and opinions and to identify the cohesion of users when they deal with a specific topic. To this end, we propose a general multilayer network approach for investigating discussions on a social network. To prove the validity of our model, we apply it to a Twitter dataset containing tweets expressing opinions on COVID-19 vaccines. We extract a set of relevant hashtags (i.e., gold-standard hashtags) for each line of thought (i.e., pro-vaxxer, neutral, and anti-vaxxer). Then, using our multilayer network model, we find that anti-vaxxers tend to have ego networks that are denser (+14.39%) and more cohesive (+64.2%) than those of pro-vaxxers, which leads to a higher number of interactions among anti-vaxxers than among pro-vaxxers (+393.89%). Finally, we report a comparison between our approach and one based on single-network analysis. Our model proves more effective at extracting influencers, finding ones whose ego networks have more nodes (+40.46%), edges (+39.36%), and interactions with their neighbors (+28.56%) than those found by the other approach. As a result, these influential users are much more important to analyze and can provide more valuable information.
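A sketch of the ego-network comparison: represent each interaction type (e.g., retweets, replies) as its own networkx layer, then compare average ego-network density across user groups. The edges and group labels are toy assumptions, not the paper's dataset.

```python
# Sketch: per-layer ego-network density comparison across groups.
import networkx as nx

layers = {"retweet": nx.Graph([("a1", "a2"), ("a2", "a3"), ("a1", "a3"),
                               ("p1", "p2"), ("p1", "p4")]),
          "reply": nx.Graph([("a1", "a3"), ("p1", "p3")])}
groups = {"anti": ["a1"], "pro": ["p1"]}

def avg_ego_density(layer_graph, users):
    vals = [nx.density(nx.ego_graph(layer_graph, u))
            for u in users if u in layer_graph]
    return sum(vals) / len(vals) if vals else 0.0

for name, g in layers.items():
    for group, users in groups.items():
        print(name, group, round(avg_ego_density(g, users), 2))
```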

20.
Recently, the geolocalisation of tweets has become important for a wide range of real-time applications, including real-time event detection, topic detection, and disaster and emergency analysis. However, the number of relevant geotagged tweets available to enable such tasks remains insufficient. To overcome this limitation, predicting the location of non-geotagged tweets, while challenging, can increase the sample of geotagged data, with consequences for a wide range of applications. In this paper, we propose a location inference method that utilises a ranking approach combined with majority voting over tweets, where each vote is weighted based on evidence gathered from the ranking. Using geotagged tweets from two cities, Chicago and New York (USA), our experimental results demonstrate that our method (statistically) significantly outperforms state-of-the-art baselines in terms of accuracy and error distance in both cities, at the cost of decreased coverage. Finally, we investigated the applicability of our method in a real-time scenario by means of a traffic incident detection task. Our analysis shows that our fine-grained geolocalisation method can overcome the limitations of geotagged tweets and precisely map incident-related tweets to the real location of the incident.
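A sketch of the voting scheme described above: retrieve the top-k most similar geotagged tweets by TF-IDF cosine similarity, then let each one vote for its location with a weight taken from its ranking score. The corpus, k, and the weighting are illustrative assumptions, not the paper's exact evidence model.

```python
# Sketch: similarity-ranked, weighted majority voting for tweet location.
from collections import defaultdict
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

geotagged = [("crash on the kennedy expressway", "Chicago"),
             ("cubs game tonight at wrigley", "Chicago"),
             ("delays on the brooklyn bridge", "New York"),
             ("times square is packed today", "New York")]

texts = [t for t, _ in geotagged]
vec = TfidfVectorizer().fit(texts)
index = vec.transform(texts)

def localize(tweet, k=3):
    sims = cosine_similarity(vec.transform([tweet]), index)[0]
    votes = defaultdict(float)
    for i in sims.argsort()[-k:]:
        votes[geotagged[i][1]] += sims[i]     # similarity-weighted vote
    return max(votes, key=votes.get)

print(localize("accident on the kennedy expressway this morning"))
```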
