Similar Documents
20 similar documents found.
1.
There is strong interest among academics and practitioners in studying branding issues in the big data era. In this article, we examine the sentiments toward a brand, via brand authenticity, to identify the reasons for positive or negative sentiments on social media. Moreover, in order to increase precision, we investigate sentiment polarity on a five-point scale. From a database containing 2,282,912 English tweets with the keyword ‘Starbucks’, we use a set of 2204 coded tweets for analyzing both brand authenticity and sentiment polarity. First, we examine the tweets qualitatively to gain insights about brand authenticity sentiments. Then we analyze the data quantitatively to establish a framework in which we predict both the brand authenticity dimensions and their sentiment polarity. Through three qualitative studies, we discuss several tweets from the dataset that can be classified under the quality commitment, heritage, uniqueness, and symbolism categories. Using latent semantic analysis (LSA), we extract the common words in each category. We verify the robustness of the previous findings with an in-lab experiment. Results from the support vector machine (SVM), used as the quantitative research method, illustrate the effectiveness of the proposed procedure for brand authenticity sentiment analysis, showing high accuracy for predicting both the brand authenticity dimensions and their sentiment polarity. We then discuss the theoretical and managerial implications of the studies.
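A minimal, hypothetical sketch of the kind of pipeline described above (TF-IDF features reduced with LSA via truncated SVD, then classified with a linear SVM) is shown below; the tweets, coded labels, and tiny component count are placeholders rather than the authors' data or settings.

```python
# Hypothetical sketch: TF-IDF + LSA (truncated SVD) features feeding a linear
# SVM, loosely mirroring the LSA/SVM pipeline described above. Tweets and
# coded labels are toy placeholders, not the authors' data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

tweets = [
    "their roasting tradition goes back decades",
    "the coffee heritage here is unmatched",
    "quality has really dropped at my local store",
    "they clearly stopped caring about quality",
]
dimensions = ["heritage", "heritage", "quality commitment", "quality commitment"]

lsa_svm = make_pipeline(
    TfidfVectorizer(stop_words="english"),
    TruncatedSVD(n_components=2),  # LSA: project tweets into a small latent space
    LinearSVC(),
)
lsa_svm.fit(tweets, dimensions)
print(lsa_svm.predict(["decades of roasting heritage in every cup"]))
```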

2.
3.
Sentiment analysis on Twitter has attracted much attention recently due to its wide applications in both commercial and public sectors. In this paper we present SentiCircles, a lexicon-based approach for sentiment analysis on Twitter. Unlike typical lexicon-based approaches, which assign fixed and static prior sentiment polarities to words regardless of their context, SentiCircles takes into account the co-occurrence patterns of words in different contexts in tweets to capture their semantics and update their pre-assigned strength and polarity in sentiment lexicons accordingly. Our approach allows for the detection of sentiment at both the entity level and the tweet level. We evaluate our proposed approach on three Twitter datasets using three different sentiment lexicons to derive word prior sentiments. Results show that our approach significantly outperforms the baselines in accuracy and F-measure for entity-level subjectivity (neutral vs. polar) and polarity (positive vs. negative) detection. For tweet-level sentiment detection, our approach performs better than the state-of-the-art SentiStrength by 4–5% in accuracy on two datasets, but falls marginally behind by 1% in F-measure on the third dataset.
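The toy sketch below captures only the general idea of context-updated priors (a word's lexicon score pulled toward the mean prior of its co-occurring words); it is not the actual SentiCircle representation, and the seed lexicon, tweets, and blending weight are invented for illustration.

```python
# Simplified sketch of context-adjusted word polarity (not the actual
# SentiCircle geometry): shift a word's prior lexicon score toward the
# average prior of the words it co-occurs with in tweets.
from collections import defaultdict

prior = {"good": 0.7, "luck": 0.2, "crash": -0.8, "test": 0.0}  # hypothetical lexicon
tweets = [["good", "luck", "test"], ["good", "crash"]]

cooc = defaultdict(list)
for tweet in tweets:
    for w in tweet:
        cooc[w].extend(x for x in tweet if x != w)

def adjusted_polarity(word, alpha=0.5):
    """Blend the prior polarity with the mean prior of co-occurring words."""
    context = cooc.get(word, [])
    if not context:
        return prior.get(word, 0.0)
    context_score = sum(prior.get(w, 0.0) for w in context) / len(context)
    return (1 - alpha) * prior.get(word, 0.0) + alpha * context_score

print(adjusted_polarity("good"))  # pulled down slightly by co-occurring "crash"
```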

4.
In reputation management, knowing what impact a tweet has on the reputation of a brand or company is crucial. The reputation polarity of a tweet is a measure of how the tweet influences the reputation of a brand or company. We consider the task of automatically determining the reputation polarity of a tweet. For this classification task, we propose a feature-based model based on three dimensions: the source of the tweet, the contents of the tweet, and the reception of the tweet, i.e., how the tweet is being perceived. For evaluation purposes, we make use of the RepLab 2012 and 2013 datasets. We study and contrast three training scenarios. The first is independent of the entity whose reputation is being managed; the second depends on the entity at stake but has, on average, over 90% fewer training samples per model; the third depends on the domain of the entities. We find that reputation polarity is different from sentiment and that having less but entity-dependent training data is significantly more effective for predicting the reputation polarity of a tweet than an entity-independent training scenario. Features related to the reception of a tweet perform significantly better than most other features.
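As a rough illustration of such a feature-based setup, the sketch below builds a handful of source, content, and reception features and feeds them to an off-the-shelf classifier; the specific features, field names, and labels are hypothetical and not the paper's feature set.

```python
# Illustrative sketch (not the paper's exact feature set): build
# source / content / reception features for a tweet and feed them
# to a standard classifier for reputation polarity.
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def tweet_features(tweet):
    return {
        "src_followers": tweet["followers"],        # source: who wrote it
        "src_verified": int(tweet["verified"]),
        "content_len": len(tweet["text"].split()),  # contents: what it says
        "content_has_link": int("http" in tweet["text"]),
        "recv_retweets": tweet["retweets"],         # reception: how it is perceived
        "recv_replies": tweet["replies"],
    }

train = [
    {"text": "Great service from @brand http://t.co/x", "followers": 1200,
     "verified": False, "retweets": 15, "replies": 2},
    {"text": "@brand lost my luggage again", "followers": 300,
     "verified": False, "retweets": 80, "replies": 40},
]
labels = ["positive", "negative"]  # hypothetical reputation polarity labels

model = make_pipeline(DictVectorizer(), LogisticRegression())
model.fit([tweet_features(t) for t in train], labels)
```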

5.
Misinformation has captured the interest of academia in recent years, with several studies looking at the topic broadly and reporting inconsistent results. In this research, we attempt to bridge the gap in the literature by examining the impacts of user-, time-, and content-based characteristics that affect the virality of real information versus misinformation during a crisis event. Using a big data-driven approach, we collected over 42 million tweets during Hurricane Harvey and obtained 3589 original verified real or false tweets by cross-checking with fact-checking websites and a relevant federal agency. Our results show that virality is higher for misinformation, novel tweets, and tweets with negative sentiment or lower lexical density. In addition, we reveal the opposite impacts of sentiment on the virality of real news versus misinformation. We also find that tweets on the environment are less likely to go viral than the religious-news baseline, while real social news tweets are more likely to go viral than misinformation on social news.
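A hedged sketch of how content features like these could feed a simple virality model follows; the word lists, retweet counts, and the choice of a Poisson regression are illustrative assumptions, not the authors' modeling approach.

```python
# Toy sketch of content features discussed above (negative sentiment share,
# lexical density) feeding a count-based virality model; lexica and retweet
# counts are hypothetical.
import numpy as np
from sklearn.linear_model import PoissonRegressor

NEGATIVE = {"flood", "danger", "dead", "trapped"}       # hypothetical word list
CONTENT = {"flood", "danger", "dead", "trapped", "rescue", "water", "rising"}

def features(text):
    tokens = text.lower().split()
    lexical_density = sum(t in CONTENT for t in tokens) / max(len(tokens), 1)
    negative_share = sum(t in NEGATIVE for t in tokens) / max(len(tokens), 1)
    return [lexical_density, negative_share]

texts = ["water rising fast danger ahead", "rescue teams are doing great work",
         "dead trapped flood everywhere", "stay safe everyone"]
retweets = [120, 15, 300, 8]  # hypothetical virality counts

model = PoissonRegressor().fit(np.array([features(t) for t in texts]), retweets)
print(model.coef_)  # direction of each feature's association with virality
```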

6.
王洪伟  郑丽娟  尹裴  史伟 《情报科学》2012,(8):1263-1271,1276
This paper surveys the state of the art and recent progress in sentiment polarity classification of online reviews. It first summarizes how sentiment types are categorized and, focusing on the positive and negative sentiments involved in online reviews, reviews the literature from three perspectives: coarse-grained analysis, fine-grained analysis, and empirical studies. To examine the commercial value of sentiment polarity classification, it then organizes and reviews work on how online reviews influence consumers' purchasing behavior and merchants' sales performance. Finally, directions for future research are discussed.

7.
Every day millions of news articles and (micro)blogs that contain financial information are posted online. These documents often include insightful financial aspects with associated sentiments. In this paper, we predict financial aspect classes and their corresponding polarities (sentiment) within sentences. We use data from the Financial Question & Answering (FiQA) challenge, more precisely the aspect-based financial sentiment analysis task. We incorporate the hierarchical structure of the data by using the parent aspect class predictions to improve the child aspect class prediction (two-step model). Furthermore, we incorporate the model output from the child aspect class prediction when predicting the polarity. Using the two-step model, we improve the F1 score for aspect classification on the test set by 7.6% over direct aspect classification. Furthermore, we improve the state-of-the-art test F1 score of the original aspect classification challenge from 0.46 to 0.70. The model that incorporates output from the child aspect classification performs on par with our plain RoBERTa model in polarity classification. In addition, our plain RoBERTa model outperforms all the state-of-the-art models, lowering the MSE score by at least 28% and 33% for the cross-validation set and the test set, respectively.
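The sketch below shows the two-step idea in its simplest form, with plain TF-IDF classifiers standing in for the authors' transformer models: the predicted parent aspect class is prepended to the sentence before predicting the child class. Sentences, aspect labels, and the conditioning trick are illustrative assumptions.

```python
# Minimal sketch of a two-step (parent -> child) aspect classifier, assuming
# a flat text-classification setup rather than the authors' transformer models.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

sentences = ["shares dropped after the earnings call", "the dividend was raised again"]
parents = ["Stock", "Corporate"]                          # hypothetical parent classes
children = ["Stock/Price Action", "Corporate/Dividend Policy"]

parent_clf = make_pipeline(TfidfVectorizer(), LogisticRegression()).fit(sentences, parents)
child_clf = make_pipeline(TfidfVectorizer(), LogisticRegression()).fit(
    [f"{p} {s}" for p, s in zip(parents, sentences)], children)

def predict_aspect(sentence):
    parent = parent_clf.predict([sentence])[0]               # step 1: parent class
    return child_clf.predict([f"{parent} {sentence}"])[0]    # step 2: child, conditioned on parent

print(predict_aspect("the stock price fell sharply"))
```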

8.
In this study, we use a hybrid sentiment analysis approach to identify and assess service quality dimensions (i.e., technical and functional quality) in unstructured textual reviews and employ econometric methods to theorize and test the impact of these service quality signals on online physician selection. By analyzing a data set with 246,294 reviews based on 5,452 physicians in 8 disease markets from an online health consultation platform, this study reports that technical and functional quality cues are important market signals used by patients to select online physicians. More importantly, this study provides strong evidence that the effects of these signals on online physician selection are substantially contingent on disease market competition intensity. Furthermore, this market competition effect is moderated by the level of disease risk. This study contributes insight into patients' physician choice behavior by introducing the role of market-level characteristics. It also guides online health consultation platforms in developing operational strategies to increase patient engagement.

9.
Digital currency has taken the financial markets by storm ever since its inception. Academia and industry are focusing on artificial intelligence (AI) tools and techniques to study and understand how businesses can draw insights from the large-scale data available online. As the market is driven by public opinion, and social media today provides an encouraging platform to share ideas and views, organizations and policy-makers could use the natural language processing (NLP) technology of AI to analyze public sentiment. Recently, a new and moderately unconventional instrument known as the non-fungible token (NFT) has been emerging as an upcoming business market. Unlike the stock market, no precise quantitative parameters exist for the price determination of NFTs. Instead, NFT markets are driven more by public opinion, expectations, the perception of buyers, and the goodwill of creators. This study evaluates the emotions expressed by the public in NFT-related posts on the social media platform Twitter. Additionally, this study conducts a secondary market analysis to determine the reasons for the growing acceptance of NFTs through sentiment and emotion analysis. We segregate tweets using the Pearson product-moment correlation coefficient (PPMCC) and study eight emotion categories (anger, anticipation, disgust, fear, joy, sadness, surprise, and trust) along with positive and negative sentiments. Tweets predominantly contained positive sentiment (~72%), and positive emotions such as anticipation and trust were found to be predominant all over the world. To the best of our knowledge, this is the first financial and emotional analysis of its kind on tweets pertaining to NFTs.
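A small sketch of the two ingredients mentioned above follows: a Pearson product-moment correlation between two series and a toy emotion tally. The daily series and the miniature emotion lexicon are invented stand-ins (e.g., for an NRC-style lexicon), not the study's data.

```python
# Hedged sketch: PPMCC between two hypothetical daily series (tweet volume
# vs. NFT sales), plus a toy emotion count using a tiny stand-in lexicon.
from collections import Counter
from scipy.stats import pearsonr

daily_tweet_volume = [120, 340, 560, 410, 980]   # hypothetical series
daily_nft_sales = [10, 25, 60, 38, 95]

r, p_value = pearsonr(daily_tweet_volume, daily_nft_sales)
print(f"PPMCC = {r:.2f}, p = {p_value:.3f}")

EMOTION_LEXICON = {"excited": "anticipation", "scam": "fear",
                   "love": "joy", "trust": "trust"}  # toy stand-in, not a real lexicon
tweets = ["so excited for this drop", "looks like a scam", "love and trust this artist"]
counts = Counter(EMOTION_LEXICON[w] for t in tweets for w in t.split() if w in EMOTION_LEXICON)
print(counts)
```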

10.
Sentiment lexicons are essential tools for polarity classification and opinion mining. In contrast to machine learning methods that only leverage text features or raw text for sentiment analysis, methods that use sentiment lexicons offer higher interpretability. Although a number of domain-specific sentiment lexicons are available, it is impractical to build an ex ante lexicon that fully reflects the characteristics of language usage in endless domains. In this article, we propose a novel approach to simultaneously train a vanilla sentiment classifier and adapt word polarities to the target domain. Specifically, we sequentially track wrongly predicted sentences and use them as supervision, instead of addressing the gold standard as a whole, to emulate the life-long cognitive process of lexicon learning. An exploration-exploitation mechanism is designed to trade off between searching for new sentiment words and updating the polarity score of a known word. Experimental results on several popular datasets show that our approach significantly improves sentiment classification performance across a variety of domains by improving the quality of the sentiment lexicons. Case studies also illustrate how polarity scores of the same words are discovered for different domains.
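The sketch below is a simplified reading of the exploration-exploitation idea: on a misclassified sentence, either explore by adding an unseen word to the lexicon or exploit by nudging a known word's score. The seed lexicon, epsilon, and step size are invented, and the update rule is not the paper's actual algorithm.

```python
# Simplified exploration-exploitation lexicon adaptation driven by
# wrongly predicted sentences (gold label is +1 or -1).
import random

lexicon = {"great": 1.0, "terrible": -1.0}  # hypothetical seed lexicon
EPSILON, STEP = 0.3, 0.2

def classify(tokens):
    return 1 if sum(lexicon.get(t, 0.0) for t in tokens) >= 0 else -1

def update_on_error(tokens, gold):
    if classify(tokens) == gold:
        return                                      # correct: no supervision signal
    unseen = [t for t in tokens if t not in lexicon]
    if unseen and random.random() < EPSILON:        # explore: add a new sentiment word
        lexicon[random.choice(unseen)] = STEP * gold
    else:                                           # exploit: adjust a known word's score
        known = [t for t in tokens if t in lexicon] or list(lexicon)
        lexicon[random.choice(known)] += STEP * gold

update_on_error("the plot was sluggish and terrible".split(), -1)
update_on_error("sluggish pacing ruined it".split(), -1)
print(lexicon)
```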

11.
Electronic word of mouth (eWOM) is prominent and abundant in consumer domains. Both consumers and product/service providers need help in understanding and navigating the resulting information spaces, which are vast and dynamic. The general tone or polarity of reviews, blogs or tweets provides such help. In this paper, we explore the viability of automatic sentiment analysis (SA) for assessing the polarity of a product or service review. To do so, we examine the potential of the major approaches to sentiment analysis, along with star ratings, in capturing the true sentiment of a review. We further model contextual factors (specifically, product type and review length) as two moderators affecting SA accuracy. The results of our analysis of 900 reviews suggest that different tools representing the main approaches to SA display differing levels of accuracy; yet overall, SA is very effective in detecting the underlying tone of the analyzed content and can be used as a complement or an alternative to star ratings. The results further reveal that contextual factors, such as product type and review length, play a role in affecting the ability of a technique to reflect the true sentiment of a review.
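As a toy illustration of comparing an SA tool against star ratings, the sketch below maps stars to polarity and computes a simple agreement rate; the review records and the tool's outputs are hypothetical.

```python
# Toy check of how well a sentiment tool's polarity agrees with star ratings
# (stars >= 4 -> positive, <= 2 -> negative); "tool_polarity" stands in for
# the output of any off-the-shelf SA tool.
reviews = [
    {"stars": 5, "tool_polarity": "positive"},
    {"stars": 1, "tool_polarity": "negative"},
    {"stars": 2, "tool_polarity": "positive"},   # disagreement
    {"stars": 4, "tool_polarity": "positive"},
]

def stars_to_polarity(stars):
    return "positive" if stars >= 4 else "negative" if stars <= 2 else "neutral"

agreements = [r["tool_polarity"] == stars_to_polarity(r["stars"]) for r in reviews]
print(f"agreement: {sum(agreements) / len(agreements):.0%}")
```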

12.
The polarity shift problem is a major factor that affects the classification performance of machine-learning-based sentiment analysis systems. In this paper, we propose a three-stage cascade model to address the polarity shift problem in the context of document-level sentiment classification. We first split each document into a set of subsentences and build a hybrid model that employs rules and statistical methods to detect explicit and implicit polarity shifts, respectively. Second, we propose a polarity shift elimination method to remove polarity shifts in negations. Finally, we train base classifiers on training subsets divided by the different types of polarity shifts, and use a weighted combination of the component classifiers for sentiment classification. Results from a range of experiments illustrate that our approach significantly outperforms several alternative methods for polarity shift detection and elimination.
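The sketch below illustrates only the rule-based, explicit-shift part of such a cascade: a negation cue before a sentiment word is eliminated by substituting an antonym. The cue list and antonym table are tiny invented examples, and implicit shifts are not handled here.

```python
# Rough sketch of rule-based explicit polarity-shift handling: detect a
# negation cue in a subsentence and "eliminate" it by flipping the polarity
# of the following sentiment word.
NEGATIONS = {"not", "never", "hardly"}
ANTONYMS = {"good": "bad", "bad": "good", "happy": "unhappy", "interesting": "boring"}

def eliminate_negation(subsentence):
    tokens = subsentence.lower().split()
    out, skip = [], False
    for i, tok in enumerate(tokens):
        if skip:
            skip = False
            continue
        if tok in NEGATIONS and i + 1 < len(tokens) and tokens[i + 1] in ANTONYMS:
            out.append(ANTONYMS[tokens[i + 1]])  # "not interesting" -> "boring"
            skip = True
        else:
            out.append(tok)
    return " ".join(out)

print(eliminate_negation("the plot is not interesting"))  # -> "the plot is boring"
```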

13.
Stance is defined as the expression of a speaker's standpoint towards a given target or entity. To date, the most reliable method for measuring stance has been opinion surveys. However, people's increased reliance on social media makes these online platforms an essential source of complementary information about public opinion. Our study contributes to the discussion surrounding replicable methods for reliable stance detection by establishing a rule-based model, which we replicated for several targets independently. To test our model, we relied on a widely used dataset of annotated tweets, the SemEval Task 6A dataset, which contains 5 targets with 4,163 manually labelled tweets. We relied on “off-the-shelf” sentiment lexica to expand the scope of our custom dictionaries, while also integrating linguistic markers and using word-pair dependency information to conduct stance classification. While positive and negative evaluative words are the clearest markers of the expression of stance, we demonstrate the added value of linguistic markers for identifying the direction of the stance more precisely. Our model achieves an average classification accuracy of 75% (ranging from 67% to 89% across targets). The study concludes by discussing practical implications and an outlook for future research, while highlighting that each target poses specific challenges to stance detection.
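A very small rule-based sketch in this spirit follows: evaluative words set the base direction of the stance and a negation marker flips it. The lexica and the flip rule are invented simplifications and do not use dependency information.

```python
# Toy rule-based stance classifier: evaluative words give the base direction,
# a negation marker flips it; lexica are illustrative only.
POSITIVE = {"support", "love", "great"}
NEGATIVE = {"ban", "hate", "dangerous"}
NEGATION = {"don't", "not", "never"}

def stance(tweet):
    tokens = tweet.lower().split()
    score = sum(t in POSITIVE for t in tokens) - sum(t in NEGATIVE for t in tokens)
    if any(t in NEGATION for t in tokens):   # linguistic marker flips the direction
        score = -score
    if score == 0:
        return "NONE"
    return "FAVOR" if score > 0 else "AGAINST"

print(stance("I don't support this policy at all"))  # -> AGAINST
```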

14.
黄立赫  石映昕 《情报杂志》2022,41(2):146-154
[Research purpose] Starting from the perspective of video bullet comments (danmaku), this study mines the topic-drift patterns of online public opinion events and improves the precision of video sentiment retrieval for such events. [Research method] Topic and sentiment analysis of video bullet comments is used to improve the accuracy of online monitoring of public opinion events. On this basis, a bullet-comment migration index is proposed and a sentiment monitoring method built on it is established. The method first extracts topic information from video bullet comments with the BTM topic model, computes topic sentiment categories and sentiment intensity in different time windows using a sentiment lexicon and an emoticon (kaomoji) lexicon, and builds a monitoring model for public opinion events oriented to video bullet comments. A topic migration index is then constructed from two angles, changes in topic content and the popularity of viewer interest in the video, and a sentiment migration index is constructed from changes in topic sentiment intensity. Finally, the topic migration index and the sentiment migration index are weighted and combined into the bullet-comment migration index, enabling online monitoring of public opinion events. [Research conclusion] Real data from a video bullet-comment community verifies the soundness of the model at the logical level. The results show that the method can fairly accurately identify the key time windows in which public opinion events migrate, providing a feasible theoretical exploration toward sentiment visualization on video-sharing platforms.
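The numeric sketch below illustrates only the final weighting step described above, combining a topic migration score and a sentiment migration score per time window; the per-window values and weights are invented, and the paper's actual index construction may differ.

```python
# Toy sketch of a weighted migration index per time window: combine a
# topic-migration score and a sentiment-migration score with hypothetical weights.
topic_migration = [0.10, 0.15, 0.62, 0.20]      # change in topic content per window
sentiment_migration = [0.05, 0.12, 0.71, 0.18]  # change in sentiment intensity per window
W_TOPIC, W_SENT = 0.6, 0.4                      # hypothetical weights

migration_index = [W_TOPIC * t + W_SENT * s
                   for t, s in zip(topic_migration, sentiment_migration)]
key_window = max(range(len(migration_index)), key=migration_index.__getitem__)
print(migration_index, "-> key time window:", key_window)
```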

15.
Climate change has become one of the most significant crises of our time. Public opinion on climate change, often divided into believers and deniers, is influenced by social media platforms such as Twitter. In this paper, we propose a framework to classify a tweet’s stance on climate change (denier/believer). Existing approaches to stance detection and classification of climate change tweets have either paid little attention to the characteristics of deniers’ tweets or lack an appropriate architecture. However, the relevant literature reveals that the sentimental aspects and time perspective of climate change conversations on Twitter have a major impact on public attitudes and environmental orientation. Therefore, in our study, we focus on exploring the role of temporal orientation and sentiment analysis (auxiliary tasks) in detecting the attitude of tweets on climate change (main task). Our proposed framework STASY integrates word- and sentence-based feature encoders with intra-task and shared-private attention frameworks to better encode the interactions between task-specific and shared features. We conducted our experiments on our novel curated climate change CLiCS dataset (2465 denier and 7235 believer tweets), two publicly available climate change datasets (ClimateICWSM-2022 and ClimateStance-2022), and two benchmark stance detection datasets (SemEval-2016 and COVID-19-Stance). Experiments show that our proposed approach improves stance detection performance over the baseline methods by benefiting from the auxiliary tasks (with an average improvement of 12.14% on our climate change dataset, 15.18% on ClimateICWSM-2022, 12.94% on ClimateStance-2022, 19.38% on SemEval-2016, and 35.01% on COVID-19-Stance in terms of average F1 scores).

16.
Health misinformation has become an unfortunate truism of social media platforms, where lies can spread faster than truth. Despite considerable work devoted to suppressing fake news, health misinformation, including low-quality health news, has persisted and even increased in recent years. One promising approach to fighting bad information is studying the temporal and sentiment effects of health news stories and how they are discussed and disseminated on social media platforms like Twitter. As part of the effort to find innovative ways to fight health misinformation, this study analyzes a dataset of more than 1600 objectively and independently reviewed health news stories published over a 10-year span and nearly 50,000 Twitter posts responding to them. Specifically, it examines the source credibility of health news circulated on Twitter and the temporal and sentiment features of the tweets containing or responding to the health news reports. The results show that health news stories rated low by experts are discussed more, persist longer, and produce stronger sentiments than highly rated ones in the tweetosphere. However, the highly rated stories retained fresh interest in the form of new tweets for a longer period. An in-depth understanding of the characteristics of health news distribution and discussion is the first step toward mitigating the surge of health misinformation. The findings provide insights into the mechanism of health information dissemination on social media and practical implications for fighting and mitigating health misinformation on digital media platforms.

17.
[Purpose/Significance] With the application and development of "Internet Plus" in the healthcare service industry, a large volume of medical review information has accumulated; sentiment analysis techniques can mine and exploit it effectively, providing decision support for healthcare management. [Method/Process] A medical sentiment semantic classification lexicon is built on the basis of frame semantics theory; a combined lexicon-and-rule method is then used for sentiment semantic analysis of online medical reviews, annotating information such as sentiment category, sentiment topic, polarity, and intensity. [Result/Conclusion] Tests on online medical review data verify the effectiveness and soundness of the method, an instructive exploration of extending sentiment analysis deeper into the healthcare domain.

18.
To improve multimodal negative sentiment recognition for online public opinion on public health emergencies, we constructed a novel multimodal fine-grained negative sentiment recognition model based on graph convolutional networks (GCN) and ensemble learning. The model comprises BERT- and ViT-based multimodal feature representation, GCN-based feature fusion, multiple classifiers, and ensemble-learning-based decision fusion. First, image-text data about COVID-19 is collected from Sina Weibo, and the text and image features are extracted through BERT and ViT, respectively. Second, the image-text fused features are generated through the GCN on the constructed microblog graph. Finally, AdaBoost is trained to decide the final sentiments from those recognized by the best classifiers on the image, text, and image-text fused features. The results show that the F1-score of this model is 84.13% in sentiment polarity recognition and 82.06% in fine-grained negative sentiment recognition, improvements of 4.13% and 7.55%, respectively, over the best results obtained with image-text feature fusion alone.

19.
In this work, we propose BERT-WMAL, a hybrid model that brings together information coming from data, through the recent transformer deep learning model, and information obtained from a polarized lexicon. The result is a model for sentence polarity whose performance is comparable with the state of the art, but with the advantage of being able to provide the end-user with an explanation of the most important terms involved in the provided prediction. The model has been evaluated on three Italian polarity detection datasets, i.e., SENTIPOLC, AGRITREND and ABSITA. The first contains 7,410 tweets released for training and 2,000 for testing; the second includes 1,000 tweets without a split; and the third includes 2,365 reviews for training and 1,171 for testing. The use of lexicon-based information proves to be effective in terms of the F1 measure, since it yields an improvement of the F1 score on all the observed datasets: from 0.664 to 0.669 (i.e., 0.772%) on AGRITREND, from 0.728 to 0.734 (i.e., 0.854%) on SENTIPOLC and from 0.904 to 0.921 (i.e., 1.873%) on ABSITA. The usefulness of this model depends not only on its effectiveness in terms of the F1 measure, but also on its ability to generate predictions that are more explainable and, especially, convincing for end-users. We evaluated this aspect through a user study involving four native Italian speakers, each evaluating 64 sentences with associated explanations. The results demonstrate the validity of this approach based on a combination of attention weights extracted from the deep learning model and the linguistic knowledge stored in the WMAL lexicon. These considerations allow us to regard the approach provided in this paper as a promising starting point for further work in this research area.

20.
While geographical metadata referring to the originating locations of tweets provides valuable information for effective spatial analysis in social networks, the scarcity of such geotagged tweets imposes limitations on their usability. In this work, we propose a content-based location prediction method for tweets that analyzes the geographical distribution of tweet texts using Kernel Density Estimation (KDE). The primary novelty of our work is to determine different kernel function settings for every term in tweets based on the location indicativeness of these terms. Our proposed method, which we call locality-adapted KDE, uses information-theoretic metrics and does not require any parameter tuning for these settings. As a further enhancement of the term-level distribution model, we describe an analysis of spatial point patterns in tweet texts in order to identify bigrams that exhibit significant deviation from the underlying unigram patterns. We present an expansion of the feature space using the selected bigrams and show that it yields further improvement in the prediction accuracy of our locality-adapted KDE. We demonstrate that this expansion results in a limited increase in the size of the feature space and does not hinder online localization of tweets. The methods we propose rely purely on statistical approaches without requiring any language-specific setting. Experiments conducted on three tweet sets from different countries show that our proposed solution outperforms existing state-of-the-art techniques, yielding significantly more accurate predictions.
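A simplified sketch of content-based localization with per-term KDE follows; it uses a single fixed bandwidth rather than the locality-adapted settings described above, and the term coordinates and candidate locations are invented.

```python
# Simplified content-based location prediction with KDE (not the
# locality-adapted variant): fit one 2-D kernel density per term from
# geotagged training tweets, then score candidate locations for a new
# tweet by summing the log-densities of its terms.
import numpy as np
from sklearn.neighbors import KernelDensity

# hypothetical geotagged training data: term -> (lat, lon) observations
term_coords = {
    "bosphorus": np.array([[41.04, 29.03], [41.07, 29.05], [41.12, 29.07]]),
    "kebab": np.array([[41.01, 28.98], [39.93, 32.86], [38.42, 27.14]]),
}
kdes = {t: KernelDensity(bandwidth=0.5).fit(xy) for t, xy in term_coords.items()}

def best_location(tweet_terms, candidates):
    """Pick the candidate coordinate with the highest summed term log-density."""
    scores = [sum(kdes[t].score_samples([c])[0] for t in tweet_terms if t in kdes)
              for c in candidates]
    return candidates[int(np.argmax(scores))]

candidates = [[41.05, 29.02], [39.93, 32.86]]  # hypothetical candidate coordinates
print(best_location(["bosphorus", "kebab"], candidates))
```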

