Fake news detection via knowledgeable prompt learning
Institution: 1. College of Computer and Communication Engineering, Zhengzhou University of Light Industry, Zhengzhou 450002, China; 2. School of Cyber Science and Engineering, Wuhan University, Wuhan 430079, China; 3. School of Information Engineering, North China University of Water Resources and Electric Power, Zhengzhou 450045, China; 4. Henan Key Laboratory of Cyberspace Situation Awareness, Zhengzhou 450001, China
Abstract: The spread of fake news has become a significant social problem, drawing great concern for fake news detection (FND). Pretrained language models (PLMs), such as BERT and RoBERTa, can greatly benefit this task, leading to state-of-the-art performance. The common paradigm for utilizing these PLMs is fine-tuning, in which a linear classification layer is built upon the well-initialized PLM network, resulting in an FND model, and then the full model is tuned on a training corpus. Although great successes have been achieved, this paradigm still involves a significant gap between the language model pretraining and target task fine-tuning processes. Fortunately, prompt learning, a new alternative for exploiting PLMs, can handle this issue naturally, showing the potential for further performance improvements. To this end, we propose knowledgeable prompt learning (KPL) for this task. First, we apply prompt learning to FND by carefully designing a sophisticated prompt template and the corresponding verbal words for the task. Second, we incorporate external knowledge into the prompt representation, making the representation more expressive for predicting the verbal words. Experimental results on two benchmark datasets demonstrate that prompt learning is better than the baseline fine-tuning PLM utilization for FND and can outperform all previous representative methods. Our final knowledgeable model (i.e., KPL) provides further improvements. In particular, it achieves an average increase of 3.28% in F1 score under low-resource conditions compared with fine-tuning.
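The contrast the abstract draws between fine-tuning and prompt learning can be sketched in a few lines: prompt learning reformulates detection as cloze-style masked-token prediction, where a verbalizer maps candidate label words back to task classes. The template string, verbalizer words, and function names below are illustrative assumptions only, not the actual design from the KPL paper.

```python
# Minimal sketch of prompt-based fake news detection.
# The template and verbalizer are hypothetical examples; the paper's
# actual prompt design and verbal words may differ.

TEMPLATE = "{text} This piece of news is [MASK]."

# Verbalizer: candidate words the masked LM is asked to predict at the
# [MASK] slot, each mapped to a task label (0 = real, 1 = fake).
VERBALIZER = {"real": 0, "fake": 1}

def build_prompt(text: str) -> str:
    """Wrap a news article into a cloze-style prompt for a masked LM."""
    return TEMPLATE.format(text=text)

def verbalize(predicted_word: str) -> int:
    """Map the word predicted at the [MASK] position to a class label."""
    return VERBALIZER[predicted_word]

prompt = build_prompt("Scientists confirm water on the moon.")
# A PLM such as BERT would then score the verbalizer words at the [MASK]
# slot; the highest-scoring word determines the predicted class. This
# reuses the pretraining objective directly, which is the gap-closing
# property the abstract attributes to prompt learning.
```

Under this framing, fine-tuning trains a new classification head from scratch, while prompt learning only asks the PLM to do what it was pretrained for: fill in a masked token.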
Keywords: Fake news detection; Prompt learning; Pretrained language model; Knowledge utilization
This article has been indexed by ScienceDirect and other databases.