Similar Literature
17 similar documents found.
1.
Named entities are the basic information elements of a text and the foundation for understanding it correctly. Named entity recognition determines whether a text string denotes a named entity and assigns its category, i.e., it both detects and labels named entities. This work applies the Hidden Markov Model (HMM) and an improved HMM to English named entity recognition.
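For readers unfamiliar with HMM tagging, the sketch below is a generic illustration of how such a tagger labels tokens; it is not code from the cited work, and all probabilities, tags, and words are hypothetical placeholders. Given transition and emission probabilities, Viterbi decoding recovers the most likely label sequence.

```python
# Minimal Viterbi decoding for an HMM tagger; the probabilities below are
# hypothetical placeholders, not values from the cited paper.
import math

states = ["O", "B-PER", "I-PER"]
start_p = {"O": 0.8, "B-PER": 0.2, "I-PER": 0.0}
trans_p = {
    "O":     {"O": 0.8, "B-PER": 0.2, "I-PER": 0.0},
    "B-PER": {"O": 0.4, "B-PER": 0.1, "I-PER": 0.5},
    "I-PER": {"O": 0.5, "B-PER": 0.1, "I-PER": 0.4},
}
emit_p = {
    "O":     {"said": 0.5, "John": 0.05, "Smith": 0.05},
    "B-PER": {"said": 0.01, "John": 0.6, "Smith": 0.3},
    "I-PER": {"said": 0.01, "John": 0.2, "Smith": 0.7},
}

def viterbi(tokens):
    """Return the most probable label sequence under the toy HMM."""
    def logp(p):
        return math.log(p) if p > 0 else float("-inf")

    # scores[s] = (best log-probability ending in state s, best path so far)
    scores = {s: (logp(start_p[s]) + logp(emit_p[s].get(tokens[0], 1e-6)), [s])
              for s in states}
    for tok in tokens[1:]:
        new_scores = {}
        for s in states:
            best_prev, best_path = max(
                ((scores[p][0] + logp(trans_p[p][s]), scores[p][1]) for p in states),
                key=lambda x: x[0])
            new_scores[s] = (best_prev + logp(emit_p[s].get(tok, 1e-6)), best_path + [s])
        scores = new_scores
    return max(scores.values(), key=lambda x: x[0])[1]

print(viterbi(["John", "Smith", "said"]))  # expected: ['B-PER', 'I-PER', 'O']
```

In a real system the transition and emission probabilities would be estimated from an annotated corpus rather than set by hand.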

2.
丁浩, 孔令圆, 刘清, 胡广伟. 《现代情报》, 2023(11): 135-145.
[Purpose/Significance] This paper proposes an agricultural named entity recognition method based on a word embedding model that fuses multiple features, aiming to improve recognition accuracy in the agricultural domain. [Method/Process] Multiple feature vectors combining character, positional-semantic, and domain-knowledge dictionary features serve as the embedding layer, fully accounting for character position and contextual semantics; the single character-vector embedding is improved according to the characteristics of Chinese entities in agriculture so that more entity features are captured. A bidirectional long short-term memory network (BiLSTM) and a multi-head attention mechanism learn long-distance dependencies in the text, and a conditional random field (CRF) then produces the globally optimal label sequence. [Results/Conclusion] In comparative experiments against 9 baseline methods on a Chinese agricultural entity corpus, the model achieves a Precision of 92.2%, Recall of 92.0%, and F1 of 92.11%, outperforming all baselines and showing that it recognizes Chinese agricultural named entities more accurately.

3.
Named Entity Recognition (NER) identifies entities with specific meaning in text, mainly person names, place names, organization names, and proper nouns. This paper first surveys Chinese research literature on NER and presents the main methods and models. It then statistically analyzes the reported effectiveness of these methods, discussing the performance and applicability of different recognition targets and models. Based on this survey of existing studies, the conclusions are: leaving runtime efficiency aside, cascaded CRF models give the best results for organization names; CRF combined with expert knowledge performs best for place names; and a model combining boundary templates with local statistics performs well for person names.

4.
丁晟春, 方振, 王楠. 《现代情报》, 2009, 40(3): 103-110.
[Purpose/Significance] To address the scattered, disordered, and fragmented nature of multi-source, heterogeneous enterprise data on public web platforms, a Bi-LSTM-CRF deep learning model is proposed for named entity recognition in the business domain. [Method/Process] The method recognizes three entity types: full company names, abbreviated company names, and person names. [Results/Conclusion] Experiments show an average F-score of 90.85% across the three entity types, validating the effectiveness of the proposed method and demonstrating that it improves named entity recognition in the business domain.

5.
王仁武, 孟现茹, 孔琦. 《现代情报》, 2018, 38(10): 57-64.
[Purpose/Significance] This study uses a GRU recurrent neural network combined with a conditional random field (CRF) to predict labels over annotated Chinese text sequences, in order to extract entity-attribute pairs from online review text. [Method/Process] First, following a predefined sequence annotation scheme, the review corpus is word-segmented and annotated with entities and their attributes, yielding word, part-of-speech, and label sequences. The word and part-of-speech sequences are then converted into distributed word-vector representations and fed into the GRU network. Finally, a CRF output layer produces the labels, which correspond to entities or attributes. [Results/Conclusion] Experiments show that reducing entity-attribute extraction to named entity labeling, with the GRU capturing the contextual semantics of the input and the CRF modeling dependencies between output labels, offers clear practical advantages over traditional rule-based or conventional machine learning methods.
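As a rough illustration of the recurrent portion of such an architecture (an assumed minimal setup, not the authors' implementation), the PyTorch sketch below embeds token ids, runs a bidirectional GRU, and maps the hidden states to per-tag emission scores. The hyperparameters and the five-tag BIO label set are invented for the example, and the CRF decoding layer described in the abstract is omitted.

```python
# Sketch of a BiGRU token tagger in PyTorch; hyperparameters and the tag set
# are illustrative assumptions, and the CRF decoding layer is omitted.
import torch
import torch.nn as nn

class BiGRUTagger(nn.Module):
    def __init__(self, vocab_size, num_tags, emb_dim=100, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.gru = nn.GRU(emb_dim, hidden_dim, batch_first=True, bidirectional=True)
        self.emissions = nn.Linear(2 * hidden_dim, num_tags)  # per-token tag scores

    def forward(self, token_ids):
        # token_ids: (batch, seq_len) integer word indices
        h, _ = self.gru(self.embed(token_ids))   # (batch, seq_len, 2*hidden)
        return self.emissions(h)                 # (batch, seq_len, num_tags)

# Toy usage: tags follow a BIO convention (O, B-ENT, I-ENT, B-ATTR, I-ATTR).
model = BiGRUTagger(vocab_size=5000, num_tags=5)
scores = model(torch.randint(1, 5000, (2, 12)))  # 2 sentences, 12 tokens each
print(scores.shape)                              # torch.Size([2, 12, 5])
# A CRF layer on top of `scores` would model label-to-label transitions and
# return the globally best tag sequence, as described in the abstract above.
```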

6.
Based on an analysis of the actual characteristics of named entities in engineering text, this paper proposes an engineering-domain NER method that combines CRF with hand-crafted rules. After refining the user dictionary and segmenting the text, features are selected at phrase-level granularity and the text is processed by a CRF; the CRF output is then analyzed, and rules written from linguistic regularities and the characteristics of engineering text are applied to refine the results. Experiments show that the method reaches an overall F1 of 93.45.
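To make the CRF side of a CRF-plus-rules pipeline concrete, here is a minimal sketch using the sklearn-crfsuite package; the features, entity type, and training sentences are toy examples invented for illustration and do not reproduce the paper's phrase-level feature set or its post-processing rules.

```python
# Hypothetical feature-based CRF training with sklearn-crfsuite
# (pip install sklearn-crfsuite); features and training data are toy examples.
import sklearn_crfsuite

def token_features(tokens, i):
    """Simple per-token features: the word itself plus its neighbours."""
    feats = {"word": tokens[i], "is_digit": tokens[i].isdigit()}
    feats["prev"] = tokens[i - 1] if i > 0 else "<BOS>"
    feats["next"] = tokens[i + 1] if i < len(tokens) - 1 else "<EOS>"
    return feats

# Two toy training sentences with BIO labels for an invented "EQUIP" entity type.
sentences = [
    (["install", "pump", "P-101", "today"], ["O", "B-EQUIP", "I-EQUIP", "O"]),
    (["inspect", "valve", "V-23"],          ["O", "B-EQUIP", "I-EQUIP"]),
]
X = [[token_features(toks, i) for i in range(len(toks))] for toks, _ in sentences]
y = [labels for _, labels in sentences]

crf = sklearn_crfsuite.CRF(algorithm="lbfgs", c1=0.1, c2=0.1, max_iterations=50)
crf.fit(X, y)
print(crf.predict(X)[0])  # predicted BIO tags for the first sentence
```

In the approach described above, the CRF output would then be passed through hand-written rules that correct errors specific to the engineering domain.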

7.
潘国巍, 吉久明, 李楠, 郑荣廷. 《现代情报》, 2011, 31(11): 163-165.
Compared with dictionary-based and rule-based approaches, statistical machine learning methods are better suited to named entity recognition. Focusing on the recognition of Chinese chemical substance names, this paper compares two classes of statistical machine learning models in terms of recognition accuracy and efficiency. The experiments show that, with the same training and test corpora, conditional probability models represented by the CRF deliver better performance.

8.
Research institutions are among the principal scientific entities in massive digital resources, and recognizing institution names is a prerequisite for institutional evaluation. Chinese research institutions have accumulated many naming problems over the course of their history, which complicates evaluation work. By reviewing the state of the art in institution name recognition at home and abroad, considering the characteristics of name recognition under knowledge-evaluation goals, and analyzing institutional naming rules, this paper proposes a method and workflow for recognizing research institution names that combines rules with a vector space model.

9.
This paper first analyzes the distribution characteristics of named entities in web text; it then uses the UIMA SDK to build a text analysis engine that finds named entities in documents and writes the results into an extracted-information database (EIDB); finally, it performs association analysis on strongly associated named entities in the text. Experiments show the framework to be effective.

10.
刘佳, 边俊伊. 《现代情报》, 2023(11): 37-46.
[Purpose/Significance] To address the insufficient organization and exploitation of knowledge in ancient Tibetan medical texts, a hybrid deep learning approach is used to build a named entity recognition model for such texts, providing methodological support for their deeper development and use. [Method/Process] Guided by the characteristics of ancient Tibetan medical knowledge, an ALBERT-BiLSTM-CRF model is constructed. Using the Four Medical Tantras (《四部医典》) as the dataset, NER experiments are conducted on manually annotated and preprocessed text, and the results are compared against three other common models. [Results/Conclusion] The ALBERT-BiLSTM-CRF model performs best on ancient Tibetan medical entities, reaching an F1-score of 96.28%, roughly 7 percentage points higher than the other methods.

11.
One strategy to recognize nested entities is to enumerate overlapped entity spans for classification. However, current models independently verify every entity span, which ignores the semantic dependency between spans. In this paper, we first propose a planarized sentence representation to represent nested named entities. Then, a bi-directional two-dimensional recurrent operation is implemented to learn semantic dependencies between spans. Our method is evaluated on seven public datasets for named entity recognition, where it achieves competitive performance. The experimental results show that our method is effective at resolving nested named entities and learning semantic dependencies between them.

12.
[Purpose/Significance] To help job seekers with an information science background understand the market's specific demand for information science talent, and to guide educators in shaping the discipline's curriculum and talent-training goals. [Method/Process] Job postings for information-science-related positions were collected from major domestic recruitment websites to build a recruitment corpus, from which five types of recruitment entities were extracted and analyzed using a CRF machine learning model and the Bi-LSTM-CRF, BERT, and BERT-Bi-LSTM-CRF deep learning models. [Results/Conclusion] In comparative experiments on automatic entity extraction over a corpus of 2,000 annotated job postings, the best-performing CRF model achieved an overall F-score of 85.07%, with an F-score of 91.67% for the "major requirement" entity; the BERT model reached 92.10% on the same entity type. The CRF model was then used to extract entities from all 5,287 qualifying job postings, a social network of recruitment entities was built, and implicit knowledge was mined through informetric and social network analysis.

13.
Named entity recognition aims to detect pre-determined entity types in unstructured text. There is a limited number of studies on this task for low-resource languages such as Turkish. We provide a comprehensive study for Turkish named entity recognition by comparing the performances of existing state-of-the-art models on the datasets with varying domains to understand their generalization capability and further analyze why such models fail or succeed in this task. Our experimental results, supported by statistical tests, show that the highest weighted F1 scores are obtained by Transformer-based language models, varying from 80.8% in tweets to 96.1% in news articles. We find that Transformer-based language models are more robust to entity types with a small sample size and longer named entities compared to traditional models, yet all models have poor performance for longer named entities in social media. Moreover, when we shuffle 80% of words in a sentence to imitate flexible word order in Turkish, we observe more performance deterioration, 12% in well-written texts, compared to 7% in noisy text.

14.
Applying natural language processing for mining and intelligent information access to tweets (a form of microblog) is a challenging, emerging research area. Unlike carefully authored news text and other longer content, tweets pose a number of new challenges, due to their short, noisy, context-dependent, and dynamic nature. Information extraction from tweets is typically performed in a pipeline, comprising consecutive stages of language identification, tokenisation, part-of-speech tagging, named entity recognition and entity disambiguation (e.g. with respect to DBpedia). In this work, we describe a new Twitter entity disambiguation dataset, and conduct an empirical analysis of named entity recognition and disambiguation, investigating how robust a number of state-of-the-art systems are on such noisy texts, what the main sources of error are, and which problems should be further investigated to improve the state of the art.

15.
Narratives are comprised of stories that provide insight into social processes. To facilitate the analysis of narratives in a more efficient manner, natural language processing (NLP) methods have been employed in order to automatically extract information from textual sources, e.g., newspaper articles. Existing work on automatic narrative extraction, however, has ignored the nested character of narratives. In this work, we argue that a narrative may contain multiple accounts given by different actors. Each individual account provides insight into the beliefs and desires underpinning an actor’s actions. We present a pipeline for automatically extracting accounts, consisting of NLP methods for: (1) named entity recognition, (2) event extraction, and (3) attribution extraction. Machine learning-based models for named entity recognition were trained based on a state-of-the-art neural network architecture for sequence labelling. For event extraction, we developed a hybrid approach combining the use of semantic role labelling tools, the FrameNet repository of semantic frames, and a lexicon of event nouns. Meanwhile, attribution extraction was addressed with the aid of a dependency parser and Levin’s verb classes. To facilitate the development and evaluation of these methods, we constructed a new corpus of news articles, in which named entities, events and attributions have been manually marked up following a novel annotation scheme that covers over 20 event types relating to socio-economic phenomena. Evaluation results show that relative to a baseline method underpinned solely by semantic role labelling tools, our event extraction approach optimises recall by 12.22–14.20 percentage points (reaching as high as 92.60% on one data set). Meanwhile, the use of Levin’s verb classes in attribution extraction obtains optimal performance in terms of F-score, outperforming a baseline method by 7.64–11.96 percentage points. Our proposed approach was applied on news articles focused on industrial regeneration cases. This facilitated the generation of accounts of events that are attributed to specific actors.

16.
Named entity recognition (NER) is mostly formalized as a sequence labeling problem in which segments of named entities are represented by label sequences. Although a considerable effort has been made to investigate sophisticated features that encode textual characteristics of named entities (e.g. PEOPLE, LOCATION, etc.), little attention has been paid to segment representations (SRs) for multi-token named entities (e.g. the IOB2 notation). In this paper, we investigate the effects of different SRs on NER tasks, and propose a feature generation method using multiple SRs. The proposed method allows a model to exploit not only highly discriminative features of complex SRs but also robust features of simple SRs against the data sparseness problem. Since it incorporates different SRs as feature functions of Conditional Random Fields (CRFs), we can use the well-established procedure for training. In addition, the tagging speed of a model integrating multiple SRs can be accelerated equivalent to that of a model using only the most complex SR of the integrated model. Experimental results demonstrate that incorporating multiple SRs into a single model improves the performance and the stability of NER. We also provide the detailed analysis of the results.
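To make the notion of segment representations concrete, the sketch below (a generic illustration, not code from the paper) converts a label sequence from the IOB2 scheme into the richer IOBES scheme, which additionally marks single-token (S-) and entity-final (E-) positions.

```python
# Convert IOB2 labels to IOBES; a generic illustration of two common
# segment representations for multi-token named entities.
def iob2_to_iobes(labels):
    out = []
    for i, lab in enumerate(labels):
        nxt = labels[i + 1] if i + 1 < len(labels) else "O"
        if lab == "O":
            out.append(lab)
        elif lab.startswith("B-"):
            # Beginning becomes S- if the entity ends immediately.
            out.append(lab if nxt == "I-" + lab[2:] else "S-" + lab[2:])
        elif lab.startswith("I-"):
            # Inside becomes E- at the last token of the entity.
            out.append(lab if nxt == lab else "E-" + lab[2:])
        else:
            raise ValueError(f"unexpected label: {lab}")
    return out

print(iob2_to_iobes(["B-PER", "I-PER", "O", "B-LOC", "O"]))
# ['B-PER', 'E-PER', 'O', 'S-LOC', 'O']
```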

17.
Named Entity Recognition (NER) aims to automatically extract specific entities from unstructured text. Compared with performing NER in English, Chinese NER is more challenging in recognizing entity boundaries because there are no explicit delimiters between Chinese characters. However, most previous research focused on the semantic information of the Chinese language on the character level but ignored the importance of the phonetic characteristics. To address these issues, we integrated phonetic features of Chinese characters with the lexicon information to help disambiguate the entity boundary recognition by fully exploring the potential of Chinese as a pictophonetic language. In addition, a novel multi-tagging-scheme learning method was proposed, based on the multi-task learning paradigm, to alleviate the data sparsity and error propagation problems that occurred in the previous tagging schemes, by separately annotating the segmentation information of entities and their corresponding entity types. Extensive experiments performed on four Chinese NER benchmark datasets: OntoNotes4.0, MSRA, Resume, and Weibo, show that our proposed method consistently outperforms the existing state-of-the-art baseline models. The ablation experiments further demonstrated that the introduction of the phonetic feature and the multi-tagging-scheme has a significant positive effect on the improvement of the Chinese NER task.
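As a loose sketch of the idea behind a multi-tagging scheme (the paper's actual scheme may differ), combined BIO labels can be decomposed into a boundary tag sequence and an entity-type tag sequence, which separate heads of a multi-task model could then learn.

```python
# Decompose combined BIO labels into separate boundary and type tag sequences;
# a simplified sketch of a multi-tagging scheme, not the paper's exact design.
def split_tags(bio_labels):
    boundary, etype = [], []
    for lab in bio_labels:
        if lab == "O":
            boundary.append("O")
            etype.append("NONE")
        else:
            prefix, name = lab.split("-", 1)   # e.g. "B-LOC" -> ("B", "LOC")
            boundary.append(prefix)            # segmentation information only
            etype.append(name)                 # entity type information only
    return boundary, etype

bio = ["B-LOC", "I-LOC", "O", "B-PER"]
print(split_tags(bio))
# (['B', 'I', 'O', 'B'], ['LOC', 'LOC', 'NONE', 'PER'])
```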
