Zero-shot stance detection via multi-perspective contrastive learning with unlabeled data
Institution: 1. College of Big Data and Intelligent Engineering, Yangtze Normal University, Chongqing 408100, China; 2. Chongqing Key Laboratory of Computational Intelligence, Chongqing University of Posts and Telecommunications, Chongqing 400065, China; 3. College of Computer and Information Science, Southwest University, Chongqing 400715, China
Abstract: Stance detection aims to determine whether the author of a text supports, opposes, or remains neutral towards a given target. In most real-world scenarios, stance detection must work in a zero-shot manner, i.e., predicting stances for unseen targets without labeled data. One critical challenge of zero-shot stance detection is the absence of contextual information about the targets. Existing work mostly concentrates on introducing external knowledge to supplement target information, but the noisy schema-linking process hinders performance in practice. To address this issue, we argue that previous studies have overlooked the extensive target-related information contained in unlabeled data during the training phase, and we propose a simple yet efficient Multi-Perspective Contrastive Learning Framework for zero-shot stance detection. Our framework leverages information not only from labeled data but also from extensive unlabeled data. To this end, we design target-oriented contrastive learning and label-oriented contrastive learning to capture more comprehensive target representations and more distinguishable stance features. We conduct extensive experiments on three widely adopted datasets (ranging from 4870 to 33,090 instances): SemEval-2016, WT-WT, and VAST. Our framework achieves macro-average F1 scores of 53.6%, 77.1%, and 72.4% on these datasets, improving over state-of-the-art baselines by 2.71% on SemEval-2016 and 0.25% on WT-WT, and yielding comparable results on the more challenging VAST dataset.
Keywords:Stance detection  Contrastive learning  Unlabeled data  Zero-shot
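A minimal sketch, assuming a PyTorch implementation: the abstract's "label-oriented contrastive learning" can be illustrated with a standard supervised-contrastive (InfoNCE-style) objective that pulls together examples sharing a stance label and pushes apart all others. The function name label_contrastive_loss, the temperature value 0.07, and the batching details below are illustrative assumptions and are not taken from the paper.

import torch
import torch.nn.functional as F

def label_contrastive_loss(features: torch.Tensor,
                           labels: torch.Tensor,
                           temperature: float = 0.07) -> torch.Tensor:
    """features: (N, d) text embeddings; labels: (N,) integer stance labels."""
    z = F.normalize(features, dim=1)                 # work in cosine-similarity space
    sim = z @ z.t() / temperature                    # pairwise scaled similarities
    n = z.size(0)
    eye = torch.eye(n, dtype=torch.bool, device=z.device)
    # positives: other examples in the batch with the same stance label
    pos_mask = labels.unsqueeze(0).eq(labels.unsqueeze(1)) & ~eye
    # exclude self-similarity, then take log-softmax over the remaining examples
    sim = sim.masked_fill(eye, float('-inf'))
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    # average log-probability of positives for each anchor that has at least one positive
    pos_counts = pos_mask.sum(dim=1)
    valid = pos_counts > 0
    loss = -(log_prob.masked_fill(~pos_mask, 0.0)).sum(dim=1)[valid] / pos_counts[valid]
    return loss.mean()

# Usage example with a toy batch (labels 0/1/2 standing in for favor/against/neutral):
feats = torch.randn(4, 768)
stances = torch.tensor([0, 1, 0, 2])
print(label_contrastive_loss(feats, stances).item())

A target-oriented variant of the same objective would build pos_mask from shared targets rather than shared stance labels, which is one way the framework's two perspectives could be combined; the paper itself should be consulted for the exact formulation.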
This article has been indexed by ScienceDirect and other databases.