5 search results.
1.
Chinese technical term extraction based on a BERT-embedding BiLSTM-CRF model   (Cited by: 2; self-citations: 0; citations by others: 2)
Recognizing and automatically extracting technical terms plays a fundamental role in improving the precision of domain-specific information retrieval and in building domain knowledge graphs. To further improve the precision and recall of Chinese technical term recognition, we propose an end-to-end Chinese term extraction model that requires no manual feature engineering or domain knowledge: it combines Google's pretrained BERT language model and pretrained Chinese character embeddings with a BiLSTM and a CRF. On a self-built corpus of 1,278 deep-learning sentences, the model achieves an F1 of 92.96% for term extraction, a marked improvement over traditional shallow machine-learning baselines (e.g., left/right entropy with mutual information, and word2vec similar-word methods) and over a plain BiLSTM-CRF deep neural network. We also describe the model's application workflow, offering a practical guide for building Chinese technical term banks.
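The CRF layer on top of the BiLSTM does not score tags independently: it learns transition scores between adjacent BIO tags and decodes the best tag sequence with the Viterbi algorithm. A minimal sketch of that decoding step, with toy emission and transition scores standing in for the learned BiLSTM/CRF parameters (all names and numbers here are illustrative, not from the paper):

```python
def viterbi_decode(emissions, transitions, tags):
    """Find the highest-scoring tag sequence for one sentence.

    emissions  : list of {tag: score} dicts, one per token (BiLSTM output)
    transitions: {(prev_tag, cur_tag): score} learned by the CRF layer
    tags       : list of all tag names, e.g. ["B", "I", "O"]
    """
    # scores[t] = best score of any path ending in tag t at the current token
    scores = {t: emissions[0][t] for t in tags}
    back = []  # back[i][t] = best previous tag for tag t at token i+1
    for emit in emissions[1:]:
        new_scores, pointers = {}, {}
        for cur in tags:
            best_prev = max(
                tags, key=lambda p: scores[p] + transitions.get((p, cur), 0.0)
            )
            new_scores[cur] = (
                scores[best_prev] + transitions.get((best_prev, cur), 0.0) + emit[cur]
            )
            pointers[cur] = best_prev
        scores, back = new_scores, back + [pointers]
    # Trace back from the best final tag.
    last = max(tags, key=lambda t: scores[t])
    path = [last]
    for pointers in reversed(back):
        path.append(pointers[path[-1]])
    return list(reversed(path))
```

In the full model the emission scores come from the BiLSTM's per-token output layer and the transition table is learned jointly with it; constraints such as "O cannot be followed by I" show up as strongly negative transition scores.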
2.
We propose a CNN-BiLSTM-Attention classifier for online short messages in Chinese posted by users on government web portals, so that each message can be routed to one or more government offices. Our model leverages every bit of available information for multi-label classification, exploiting both hierarchical text features and label information. Specifically, our method extracts label meanings, the CNN layer extracts local semantic features of the texts, the BiLSTM layer fuses the contextual features with these local semantic features, and the attention layer selects the most relevant features for each label. We evaluate the model on two large public corpora and on our high-quality handcrafted e-government multi-label dataset, built with the text-annotation tool doccano and consisting of 29,920 data points. Experimental results show that the method is effective under common multi-label evaluation metrics, achieving micro-F1 of 77.22%, 84.42%, and 87.52% and macro-F1 of 77.68%, 73.37%, and 83.57% on the three datasets respectively, confirming that the classifier is robust. We conduct an ablation study to evaluate our label-embedding method and attention mechanism. Moreover, a case study on the handcrafted e-government multi-label dataset verifies that the model integrates all types of semantic information in short messages according to the different labels.
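The micro- and macro-F1 figures above are computed from per-label true/false positive counts: micro-F1 pools the counts over all labels before computing F1, while macro-F1 averages each label's own F1. A small sketch of the two averages (the label-index sets are hypothetical, not the paper's data):

```python
def f1_scores(y_true, y_pred, n_labels):
    """Micro- and macro-averaged F1 for multi-label predictions.

    y_true, y_pred: lists of label-index sets, one set per sample.
    """
    tp, fp, fn = [0] * n_labels, [0] * n_labels, [0] * n_labels
    for true, pred in zip(y_true, y_pred):
        for label in range(n_labels):
            if label in pred and label in true:
                tp[label] += 1
            elif label in pred:
                fp[label] += 1
            elif label in true:
                fn[label] += 1

    def f1(t, p, n):
        return 2 * t / (2 * t + p + n) if (2 * t + p + n) else 0.0

    micro = f1(sum(tp), sum(fp), sum(fn))           # pool counts, then F1
    macro = sum(f1(tp[l], fp[l], fn[l]) for l in range(n_labels)) / n_labels
    return micro, macro
```

Micro-F1 is dominated by frequent labels, while macro-F1 treats every label equally, which is why papers on imbalanced multi-label data usually report both.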
3.
[Objective] Mainstream topic discovery models suffer from data sparsity and high dimensionality. We propose BiLSTM-HBBTM, a microblog hot-topic discovery method that improves on the Bursty Biterm Topic Model (BBTM), to achieve better results in mining hot topics on microblogs. [Methods] First, features are selected at both the document and the word level by introducing a microblog propagation value, a term H-index, and a word-pair burst probability, addressing data sparsity and high dimensionality. Second, a bidirectional LSTM (BiLSTM) is trained to model the relations between words, and the words' inverse document frequency is used as prior knowledge for word pairs, so that inter-word relations are no longer ignored. Third, a density-based method adaptively selects BBTM's optimal number of topics, removing the need to specify the topic count manually as in traditional topic models. Finally, the method is validated on a real microblog dataset in terms of hot-topic discovery accuracy, topic quality, and coherence. [Conclusions] Experiments show that BiLSTM-HBBTM outperforms the baseline models on multiple evaluation metrics, confirming the effectiveness and feasibility of the proposed model.
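BBTM, like the biterm topic model it extends, does not model word occurrences in isolation: each short text contributes all of its unordered word pairs (biterms), and topics are assigned to those pairs rather than to single words, which mitigates sparsity in very short documents. A minimal sketch of the biterm extraction step alone (the burst-probability weighting and IDF prior described above are omitted):

```python
from itertools import combinations

def extract_biterms(docs):
    """Extract unordered word pairs (biterms) from each tokenized short text.

    docs: list of token lists, one per document.
    Returns one flat list of (w1, w2) pairs, sorted within each pair so that
    ("a", "b") and ("b", "a") count as the same biterm.
    """
    biterms = []
    for doc in docs:
        for w1, w2 in combinations(doc, 2):
            biterms.append(tuple(sorted((w1, w2))))
    return biterms
```

The topic model is then fit over this biterm list; in BBTM each biterm additionally carries a burst probability so that suddenly popular word pairs dominate the hot topics.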
4.
Opinion mining in a multilingual, multi-domain environment such as YouTube requires models that are robust across domains as well as languages, and that do not rely on linguistic resources (e.g., syntactic parsers, POS taggers, predefined dictionaries), which are not available for many languages. In this work, we (i) propose a convolutional N-gram BiLSTM (CoNBiLSTM) word embedding that represents a word with semantic and contextual information over both short and long ranges; (ii) apply the CoNBiLSTM word embedding to predict the type of a comment, its sentiment polarity (positive, neutral, or negative), and whether the sentiment is directed toward the product or the video; and (iii) evaluate the efficiency of our model on the SenTube dataset, which contains comments from two domains (automobile, tablet) and two languages (English, Italian). According to the experimental results, CoNBiLSTM generally outperforms the approach using SVM with shallow syntactic structures (STRUCT), the previous state of the art for sentiment analysis on the SenTube dataset. In addition, our model is more robust across domains than STRUCT (e.g., a 7.47% difference in performance between the two domains for our model vs. 18.8% for STRUCT).
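The convolutional n-gram part of CoNBiLSTM looks at a fixed window of neighbouring tokens around each word to capture short-range context before the BiLSTM adds long-range context. A sketch of the windowing alone (the `<pad>` token and the window size are illustrative choices, not taken from the paper):

```python
def ngram_windows(tokens, n):
    """Return one n-token window centred on each token of a sentence.

    This is the input a convolutional n-gram layer sees per position;
    the sentence is padded so boundary tokens also get a full window.
    """
    pad = ["<pad>"] * (n // 2)
    padded = pad + tokens + pad
    return [padded[i:i + n] for i in range(len(tokens))]
```

In the model each window is mapped through an embedding lookup and a convolution filter, producing one context-enriched vector per token for the BiLSTM to consume.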
5.
Agricultural short texts contain few words, which leads to insufficient semantic information and degraded classification performance. We use an attention mechanism to increase the weight of key words during classification and, combined with a BiLSTM, design an LSTM-Attention model. After Chinese word segmentation, syntactic analysis, and text vectorization of 30,000 raw samples, the LSTM-Attention model is trained into an LSTM-Attention classifier, addressing the sensitivity of classifiers to the text data being classified. Tested on 30,000 standard samples and on complex data with 30% injected noise, the LSTM-Attention model achieves a classification accuracy of 98.59%, which is 3.72 percentage points higher than a traditional LSTM model and 1.61 points higher than a BiLSTM model, showing that combining a BiLSTM with an attention mechanism effectively improves agricultural short-text classification. Testing the LSTM-Attention classifier with different datasets shows that it converges well, that its performance does not depend on the characteristics of the classified data, and that its classification results are stable.
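The attention mechanism described above boosts the weight of key words as follows: each hidden state produced by the (Bi)LSTM is scored against a query vector, the scores are normalised with a softmax, and the hidden states are summed with those weights to form the sentence representation. A pure-Python sketch with toy vectors (in the real model the query vector is learned; everything here is illustrative):

```python
import math

def attention_pool(hidden_states, query):
    """Soft attention over a (Bi)LSTM's per-timestep hidden vectors.

    hidden_states: list of equal-length float vectors, one per token.
    query        : float vector used to score each timestep's relevance.
    Returns (pooled vector, attention weights).
    """
    # Dot-product relevance score for each timestep.
    scores = [sum(h_i * q_i for h_i, q_i in zip(h, query)) for h in hidden_states]
    # Numerically stable softmax turns scores into weights summing to 1.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]
    # Weighted sum of hidden states: key tokens dominate the pooled vector.
    dim = len(hidden_states[0])
    pooled = [sum(w * h[d] for w, h in zip(weights, hidden_states))
              for d in range(dim)]
    return pooled, weights
```

The pooled vector then feeds a dense classification layer; tokens whose hidden states align with the query (the "key words") receive most of the weight.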
Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号