Similar Documents
20 similar documents found.
1.
The identification of knowledge graph entity mentions in textual content has already attracted much attention. The major assumption of existing work is that entities are explicitly mentioned in text and only need to be disambiguated and linked. However, this assumption does not necessarily hold for social content, where a significant portion of information is implied. The focus of our work in this paper is to identify whether textual social content includes implicit mentions of knowledge graph entities, hence forming a two-class classification problem. To this end, we adopt the systemic functional linguistics framework, which allows for capturing meaning expressed through language. Based on this theoretical framework, we systematically introduce two classes of features, namely syntagmatic and paradigmatic features, for implicit entity recognition. In our experiments, we show the utility of these features for the task, report on ablation studies, measure the impact of each feature subset on the others, and provide a detailed error analysis of our technique.

2.
Most existing search engines focus on document retrieval. However, information needs are certainly not limited to finding relevant documents; instead, a user may want to find relevant entities such as persons and organizations. In this paper, we study the problem of related entity finding. Our goal is to rank entities based on their relevance to a structured query, which specifies an input entity, the type of related entities, and the relation between the input and related entities. We first discuss a general probabilistic framework, derive six possible retrieval models to rank the related entities, and then compare these models both analytically and empirically. To further improve performance, we study the problem of feedback in the context of related entity finding. Specifically, we propose a mixture-model-based feedback method that uses pseudo-feedback entities to estimate an enriched model of the relation between the input and related entities. Experimental results over two standard TREC collections show that the derived relation generation model combined with the relation feedback method performs better than the other models.
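The mixture-model feedback idea (separating relation-specific terms in the pseudo-feedback entities from background noise) can be sketched as a small EM loop. The fixed background-noise weight, whitespace tokenization, and function name below are illustrative assumptions, not the paper's exact formulation:

```python
from collections import Counter

def estimate_feedback_model(feedback_docs, background, noise=0.8, iters=30):
    """EM for a two-component mixture: each term occurrence in the
    pseudo-feedback texts is generated either by a fixed background model
    (with probability `noise`) or by the relation model we estimate."""
    counts = Counter()
    for doc in feedback_docs:
        counts.update(doc.split())
    vocab = list(counts)
    # uniform initialisation of the relation model P(w | R)
    rel = {w: 1.0 / len(vocab) for w in vocab}
    for _ in range(iters):
        # E-step: probability that each term came from the relation model
        post = {w: (1 - noise) * rel[w] /
                   ((1 - noise) * rel[w] + noise * background.get(w, 1e-9))
                for w in vocab}
        # M-step: re-estimate the relation model from expected counts
        norm = sum(counts[w] * post[w] for w in vocab)
        rel = {w: counts[w] * post[w] / norm for w in vocab}
    return rel
```

Terms that are frequent in the feedback texts but not explained by the background model end up dominating the estimated relation model, which is the intended "enrichment" effect.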

3.
Among existing knowledge graph based question answering (KGQA) methods, relation supervision methods require labeled intermediate relations for stepwise reasoning. To avoid this enormous labeling cost on large-scale knowledge graphs, weak supervision methods have been introduced, which use only the answer entity to compute rewards as supervision. However, the lack of intermediate supervision raises the issue of sparse rewards, which may result in two types of incorrect reasoning paths: (1) incorrectly reasoned relations, even when the final answer entity is correct; and (2) correctly reasoned relations in the wrong order, which leads to an incorrect answer entity. To address these issues, this paper treats the multi-hop KGQA task as a Markov decision process and proposes a model based on Reward Integration and Policy Evaluation (RIPE). In this model, an integrated reward function is designed to evaluate the reasoning process by leveraging both terminal and instant rewards. The intermediate supervision for each reasoning hop is constructed with regard to both the fitness of the taken action and an evaluation of the unreasoned information remaining in the updated question embeddings. In addition, to lead the agent to the answer entity along the correct reasoning path, an evaluation network is designed to assess the action taken at each hop. Extensive ablation studies and comparative experiments are conducted on four KGQA benchmark datasets. The results demonstrate that the proposed model outperforms state-of-the-art approaches in terms of answering accuracy.
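As a minimal illustration of combining a sparse terminal reward with dense instant rewards, one can sum a terminal hit signal with discounted per-hop fitness scores. The discounting scheme and score ranges below are assumptions for illustration, not RIPE's actual reward function:

```python
def integrated_reward(path, answer, instant_scores, gamma=0.9):
    """Combine a sparse terminal reward (did the final entity match the
    answer?) with dense instant rewards scoring each intermediate hop.
    `instant_scores` holds one assumed fitness score per hop in [0, 1]."""
    terminal = 1.0 if path[-1] == answer else 0.0
    total = terminal
    # discounted sum: later hops sit closer to the terminal signal
    for t, s in enumerate(instant_scores):
        total += (gamma ** (len(instant_scores) - 1 - t)) * s
    return total
```

Even when the terminal reward is zero (wrong final entity), the instant terms still provide a gradient signal for the intermediate hops, which is what mitigates the sparse-reward problem.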

4.
Learning semantic representations of documents is essential for various downstream applications, including text classification and information retrieval. Entities, as important sources of information, play a crucial role in shaping latent representations of documents. In this work, we hypothesize that entities are not monolithic concepts; instead, they have multiple aspects, and different documents may discuss different aspects of a given entity. Given that, we argue that from an entity-centric point of view, a document related to multiple entities should be (a) represented differently for different entities (multiple entity-centric representations), and (b) each entity-centric representation should reflect the specific aspects of the entity discussed in the document. In this work, we pose the following research questions: (1) Can we confirm that entities have multiple aspects, with different aspects reflected in different documents? (2) Can we learn a representation of entity aspects from a collection of documents, and a representation of a document based on the multiple entities and their aspects as reflected in the documents? (3) Does this novel representation improve performance in downstream applications? (4) What is a reasonable number of aspects per entity? To answer these questions, we model each entity using multiple aspects (entity facets), where each entity facet is represented as a mixture of latent topics. Then, given a document associated with multiple entities, we assume multiple entity-centric representations, where each entity-centric representation is a mixture of the entity facets of the corresponding entity.
Finally, a novel graphical model, the Entity Facet Topic Model (EFTM), is proposed to learn entity-centric document representations, entity facets, and latent topics. Through experimentation we confirm that (1) entities are multi-faceted concepts which we can model and learn, (2) multi-faceted entity-centric modeling of documents leads to effective representations, which (3) have an impact on downstream applications, and (4) considering a small number of facets is effective enough. In particular, we visualize entity facets within a set of documents and demonstrate that different sets of documents indeed reflect different facets of entities. Further, we demonstrate that the proposed entity facet topic model generates better document representations in terms of perplexity than state-of-the-art document representation methods. Moreover, we show that the proposed model outperforms baseline methods in multi-label classification. Finally, we study the impact of EFTM's parameters and find that a small number of facets better captures entity-specific topics, which confirms the intuition that, on average, an entity has a small number of facets reflected in documents.

5.
Recent developments have shown that entity-based models that rely on information from the knowledge graph can improve document retrieval performance. However, given the non-transitive nature of relatedness between entities on the knowledge graph, the use of semantic relatedness measures can lead to topic drift. To address this issue, we propose a relevance-based model for entity selection based on pseudo-relevance feedback, which is then used to systematically expand the input query, leading to improved retrieval performance. We perform our experiments on the widely used TREC Web corpora and empirically show that our proposed approach to entity selection significantly improves ad hoc document retrieval compared to strong baselines. More concretely, the contributions of this work are as follows: (1) we introduce a graphical probability model that captures dependencies between entities within the query and documents; (2) we propose an unsupervised entity selection method based on the graphical model for query entity expansion and subsequent ad hoc retrieval; and (3) we thoroughly evaluate our method and compare it with state-of-the-art keyword- and entity-based retrieval methods. We demonstrate that the proposed retrieval model improves over all other baselines on ClueWeb09B and ClueWeb12B, two widely used Web corpora, on standard evaluation metrics. We also show that the proposed method is most effective on difficult queries. In addition, we compare our proposed entity selection with a state-of-the-art entity selection technique within the context of ad hoc retrieval using a basic query expansion method, and illustrate that it provides more effective retrieval across all expansion weights and numbers of expansion entities.
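The expansion step can be pictured as interpolating the original query language model with a term distribution drawn from the selected entities. The interpolation weight and the dict-based term-distribution representation are illustrative assumptions rather than the paper's exact model:

```python
def expand_query(query_terms, entity_terms, weight=0.3):
    """Interpolate the original query language model with terms drawn
    from selected expansion entities. Both inputs are term -> probability
    dicts; `weight` controls how much mass the expansion terms receive."""
    vocab = set(query_terms) | set(entity_terms)
    return {w: (1 - weight) * query_terms.get(w, 0.0)
               + weight * entity_terms.get(w, 0.0)
            for w in vocab}
```

Because the result is a convex combination, it remains a valid probability distribution whenever both inputs are, and setting `weight=0` recovers the unexpanded query.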

6.
Ke Jia. Information Science, 2021, 39(10): 165-169
[Purpose/Significance] Entity relation extraction is fundamental to building domain ontologies and knowledge graphs and to developing question answering systems. Distant supervision aligns large-scale unstructured text with entities in existing knowledge bases to automatically label training samples, solving the problem that manual annotation of training corpora for supervised machine learning is time-consuming and labor-intensive, but it also introduces data noise. [Method/Process] This paper systematically reviews recent methods that combine distant supervision with deep learning to reduce training-sample noise and improve entity relation extraction performance. [Result/Conclusion] Convolutional neural networks better capture local, key features of a sentence; long short-term memory networks better handle long-distance dependencies between entity pairs; these models automatically extract lexical and syntactic features; attention mechanisms assign greater weight to key context and words; and incorporating prior knowledge into neural models enriches the semantic information of entity pairs, significantly improving relation extraction performance. [Innovation/Limitation] Future research should consider methods for handling overlapping relations and long-tail semantic relations between entity pairs, so as to address relation noise more comprehensively.

7.
[Purpose/Significance] To help job seekers with an information science background understand the market's specific demand for information science talent, and to guide educators in designing information science curricula and talent-training objectives. [Method/Process] Job postings for information-science-related positions were collected from major Chinese recruitment websites to build a recruitment corpus. Based on a CRF machine learning model and the Bi-LSTM-CRF, BERT, and BERT-Bi-LSTM-CRF deep learning models, five types of recruitment entities were extracted from the corpus for mining and analysis. [Result/Conclusion] In comparative experiments on automatic recruitment-entity extraction over a corpus of 2,000 annotated job postings, the best-performing CRF model achieved an overall F1 of 85.07%, including an F1 of 91.67% on the "major requirement" entity; the BERT model reached an F1 of 92.10% on that same entity type. The CRF model was then used to extract entities from all 5,287 qualifying job postings, a social network of recruitment entities was constructed, and implicit knowledge was mined through informetric analysis and social network analysis.

8.
Overlapping entity relation extraction has received extensive research attention in recent years. However, existing methods suffer from the limitation of long-distance dependencies between entities and fail to extract relations when the overlapping situation is relatively complex, which limits performance on the task. In this paper, we propose an end-to-end neural model for overlapping relation extraction that treats the task as a quintuple prediction problem. The proposed method first constructs entity graphs by enumerating possible candidate spans, then models the relational graphs between entities via a graph attention model. Experimental results on five benchmark datasets show that the proposed model achieves the current best performance, outperforming previous methods and baseline systems by a large margin. Further analysis shows that our model can effectively capture long-distance dependencies between entities in a long sentence.

9.
Knowledge management (KM) in project-based organizations has received substantial attention in recent years, as knowledge processes are insufficiently supported within the organization as a whole. This study focuses specifically on the project actor's role in managing knowledge. From an actor's perspective, the problems raised by knowledge embeddedness are identified as a key issue in linking project knowledge and organizational knowledge. A conceptual framework is developed that addresses three aspects of knowledge embeddedness: a relational dimension, a temporal dimension, and a structural dimension. Three cases are studied, covering various forms of organization in different areas (a consulting firm, an R&D department, and an industrial business unit). The results concerning the relational dimension indicate that project actors rebuild the network of relationships supporting knowledge. Regarding the temporal dimension, and specifically within their professional field, actors frame professional knowledge related to their project experience. However, actors fail to surmount the problems raised by the structural dimension of knowledge embeddedness. The resulting recommendations for KM concern both Human Resource Management practices and organizational design.

10.
Narratives comprise stories that provide insight into social processes. To facilitate the analysis of narratives more efficiently, natural language processing (NLP) methods have been employed to automatically extract information from textual sources, e.g., newspaper articles. Existing work on automatic narrative extraction, however, has ignored the nested character of narratives. In this work, we argue that a narrative may contain multiple accounts given by different actors. Each individual account provides insight into the beliefs and desires underpinning an actor's actions. We present a pipeline for automatically extracting accounts, consisting of NLP methods for: (1) named entity recognition, (2) event extraction, and (3) attribution extraction. Machine-learning models for named entity recognition were trained using a state-of-the-art neural network architecture for sequence labelling. For event extraction, we developed a hybrid approach combining semantic role labelling tools, the FrameNet repository of semantic frames, and a lexicon of event nouns. Attribution extraction was addressed with the aid of a dependency parser and Levin's verb classes. To facilitate the development and evaluation of these methods, we constructed a new corpus of news articles in which named entities, events, and attributions have been manually marked up following a novel annotation scheme covering over 20 event types relating to socio-economic phenomena. Evaluation results show that, relative to a baseline method underpinned solely by semantic role labelling tools, our event extraction approach improves recall by 12.22–14.20 percentage points (reaching as high as 92.60% on one data set). Meanwhile, the use of Levin's verb classes in attribution extraction obtains optimal performance in terms of F-score, outperforming a baseline method by 7.64–11.96 percentage points.
Our proposed approach was applied to news articles focused on industrial regeneration cases, facilitating the generation of accounts of events attributed to specific actors.

11.
An Analysis of Porter's Five Forces Model Based on Structural Hole Theory
Wang Fen. Journal of Modern Information, 2012, 32(1): 168-171
The structural hole theory proposed by Burt is one of the core theories of social network analysis. It holds that structural holes bring information benefits and control benefits to "actors", thereby giving an organization a competitive advantage. Within the framework of Porter's five forces model, this paper examines, from the perspective of structural hole theory, how structural holes affect each of the five competitive forces, and analyzes how enterprises can enhance their competitive advantage through structural holes.

12.
Entity alignment is an important task in Knowledge Graph (KG) completion, aiming to identify the same entities across different KGs. Most previous works utilize only the relation structures of KGs and ignore the heterogeneity of their relations and attributes. However, this information provides additional features and can improve the accuracy of entity alignment. In this paper, we propose a novel Multi-Heterogeneous Neighborhood-Aware model (MHNA) for KG alignment. MHNA aggregates multi-heterogeneous information about aligned entities, including the entity name, relations, attributes, and attribute values. An important contribution is a variant attention mechanism, which adds the feature information of relations and attributes to the calculation of attention coefficients. Extensive experiments on three well-known benchmark datasets show that MHNA significantly outperforms 12 state-of-the-art approaches, demonstrating that our approach has good scalability and superiority in both cross-lingual and monolingual KGs. An ablation study further supports the effectiveness of our variant attention mechanism.
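A minimal sketch of such a variant attention coefficient: each neighbour's raw score mixes entity-embedding similarity with relation- and attribute-level similarity before the softmax. The mixing weights and input format are assumed for illustration, not MHNA's actual formulation:

```python
import math

def variant_attention(entity_sim, rel_sim, attr_sim, alpha=0.5, beta=0.25):
    """Softmax attention over neighbours where each raw score is a
    weighted mix of entity, relation, and attribute similarities.
    alpha/beta are assumed mixing hyper-parameters."""
    raw = [alpha * e + beta * r + (1 - alpha - beta) * a
           for e, r, a in zip(entity_sim, rel_sim, attr_sim)]
    exp = [math.exp(s) for s in raw]
    z = sum(exp)
    return [x / z for x in exp]
```

A neighbour that is similar only in entity embedding but dissimilar in relations and attributes receives less attention than under a purely structural scheme, which is the intuition behind folding the extra features into the coefficients.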

13.
[Purpose/Significance] From the perspective of online public opinion user information and text content, this study constructs topic graphs of online public opinion along different dimensions and uses them for feature-evolution and visualization analysis, providing a reference for public opinion management. [Method/Process] Based on entity extraction and relation construction techniques, a topic graph model of online public opinion was built. Taking the "Typhoon Lekima" event as an example, three topic graphs along different dimensions were established, and the evolution of public opinion features was analyzed with multi-dimensional micro-level data on users and texts. [Result/Conclusion] In this event, influential user nodes were diverse, correlated, and officially dominated; the evolution of online public opinion lagged somewhat behind the development of the typhoon event itself; PC terminals featured few media types, many posts, and concentrated users, whereas mobile terminals featured many media types, fewer posts, and evenly distributed users. [Innovation/Limitation] Using topic graphs, this paper constructs user nodes, text nodes, and their relationships, and systematically and comprehensively reveals the evolution of online public opinion features along three dimensions: user, account, and content.

14.
Coreference resolution of geological entities is an important task in geological information mining. Although existing generic coreference resolution models can handle geological texts, their performance declines dramatically without sufficient domain knowledge. Due to the high diversity of geological terminology, coreference is intricately governed by the semantic and expressive structure of geological terms. In this paper, a framework called CorefRoCNN, based on RoBERTa and a convolutional neural network (CNN), is proposed for end-to-end coreference resolution of geological entities. First, the fine-tuned RoBERTa language model transforms words into dynamic vector representations with contextual semantic information. Second, a CNN-based multi-scale structural feature extraction module for geological terms is designed to capture the invariance of geological terms in length, internal structure, and distribution. Third, we combine the structural features with word embeddings for the final determination of coreference relations. In addition, attention mechanisms are used to improve the model's ability to capture valid information in geological texts with long sentences. To validate the effectiveness of the model, we compared it with several state-of-the-art models on the constructed dataset. The results show that our model achieves the best performance, with an average F1 of 79.78%, a 1.22% improvement over the second-ranked method.

15.
With the development of information extraction, an increasing number of large-scale knowledge bases have become available in different domains. In recent years, many approaches have been proposed for large-scale knowledge base alignment, most of them based on iterative matching: once a pair of entities has been aligned, their compatible neighbors are selected as candidate entity pairs. The limitation of these methods is that they discover candidate entity pairs through aligned relations, which cannot be used for aligning heterogeneous knowledge bases. Only a few existing methods focus on aligning heterogeneous knowledge bases, and they discover candidate entity pairs just once using traditional blocking methods. However, the performance of these methods depends heavily on blocking keys, which are hard to select. In this paper, we present an approach for aligning heterogeneous knowledge bases via iterative blocking (AHAB) that improves the discovery and refinement of candidate entity pairs. AHAB iteratively uses different relations for blocking and then matches block pairs based on matched entity pairs. The Cartesian product of unmatched entities in matched block pairs forms the candidate entity pairs; filtering out dissimilar candidate pairs yields matched entity pairs. The number of matched entity pairs grows with each iteration, which in turn helps match block pairs in subsequent iterations. Experiments on real-world heterogeneous knowledge bases demonstrate that AHAB yields competitive performance.
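A toy sketch of the iterative-blocking loop, assuming entities are flat dicts and using token-Jaccard name similarity as the pair filter; AHAB's actual blocking, block-matching, and similarity criteria are richer than this:

```python
def jaccard(a, b):
    """Token-set Jaccard similarity between two token sequences."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def iterative_blocking(kb1, kb2, relations, threshold=0.5, max_iters=5):
    """In each round, block both KBs on one relation's values, pair the
    entities inside blocks sharing a key, and keep pairs whose name
    similarity clears the threshold. Entities are assumed to be dicts
    of the form {"name": str, relation: value, ...}."""
    matched = set()
    for it in range(max_iters):
        rel = relations[it % len(relations)]
        blocks1, blocks2 = {}, {}
        for i, e in enumerate(kb1):
            blocks1.setdefault(e.get(rel), []).append(i)
        for j, e in enumerate(kb2):
            blocks2.setdefault(e.get(rel), []).append(j)
        new = set()
        for key, idx1 in blocks1.items():
            if key is None:  # skip entities missing this relation
                continue
            for i in idx1:
                for j in blocks2.get(key, []):
                    if (i, j) not in matched and jaccard(
                            kb1[i]["name"].split(),
                            kb2[j]["name"].split()) >= threshold:
                        new.add((i, j))
        if not new:  # no progress this round: stop early
            break
        matched |= new
    return matched
```

Cycling through different relations as blocking keys is what lets the sketch recover pairs that any single fixed blocking key would miss, which is the core idea the abstract describes.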

16.
Fact verification aims to retrieve relevant evidence from a knowledge base, e.g., Wikipedia, to verify given claims. Existing methods consider only sentence-level semantics for evidence representations, which typically neglects the importance of fine-grained features in evidence-related sentences. In addition, the interpretability of the reasoning process has not been well studied in the field of fact verification. To address these issues, we propose an entity-graph based reasoning method for fact verification, abbreviated as RoEG, which generates fine-grained evidence features at the entity level and models human reasoning paths over an entity graph. In detail, to capture the semantic relations of retrieved evidence, RoEG introduces entities as nodes and constructs the edges of the graph based on three linking strategies. Then, RoEG uses a selection gate to constrain information propagation in the sub-graph of relevant entities and applies a graph neural network to propagate entity features for reasoning. Finally, RoEG employs an attention aggregator to gather entity information for label prediction. Experimental results on the large-scale benchmark dataset FEVER demonstrate the effectiveness of our proposal, which beats competitive baselines in terms of label accuracy and FEVER Score. In particular, for the task of multiple-evidence fact verification, RoEG produces 5.48% and 4.35% improvements in label accuracy and FEVER Score over the state-of-the-art baseline. In addition, RoEG performs better when more entities are involved in fact verification.

17.
Deep hashing has been an important research topic for using deep learning to boost the performance of hash learning. Most existing deep supervised hashing methods focus on how to effectively preserve similarity in hash coding based solely on pairwise supervision. However, such a pairwise similarity-preserving strategy cannot fully exploit the semantic information in most cases, which results in information loss. To address this problem, this paper proposes a discriminative dual-stream deep hashing (DDDH) method, which integrates a pairwise similarity loss and a classification loss into a unified framework to take full advantage of label information. Specifically, the pairwise similarity loss preserves the similarity and structural information of the high-dimensional original data, while the classification loss enlarges the margin between different classes, improving the discrimination of the learned binary codes. Moreover, an effective optimization algorithm trains the hash code learning framework in an end-to-end manner. Extensive experiments on three image datasets demonstrate that our method is superior to several state-of-the-art deep and non-deep hashing methods. Ablation studies and analysis further show the effectiveness of introducing the classification loss into the overall hash learning framework.
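The unified objective can be illustrated with a relaxed (real-valued) version: a pairwise term pushing normalized code inner products toward the similarity labels, plus a cross-entropy classification term weighted by an assumed hyper-parameter `lam`. This is a sketch of the loss structure only, not DDDH's exact losses:

```python
import numpy as np

def pairwise_similarity_loss(codes, sim):
    """Mean squared gap between <b_i, b_j>/k for k-bit relaxed codes
    and the pairwise similarity labels s_ij in {-1, +1}."""
    k = codes.shape[1]
    inner = codes @ codes.T / k
    return float(np.mean((inner - sim) ** 2))

def combined_loss(codes, sim, cls_scores, labels, lam=0.5):
    """Unified objective: similarity preservation plus a softmax
    cross-entropy classification term, weighted by `lam`."""
    logits = cls_scores - cls_scores.max(axis=1, keepdims=True)
    probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    ce = -float(np.mean(np.log(probs[np.arange(len(labels)), labels] + 1e-12)))
    return pairwise_similarity_loss(codes, sim) + lam * ce
```

The classification term is what supplies the class-margin pressure the abstract describes; with `lam=0` the objective degenerates to the purely pairwise strategy the paper argues is insufficient.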

18.
[Purpose/Significance] To provide a reference for strengthening the regulation of online public opinion reversal events. [Method/Process] Prospect theory is introduced to construct a three-party evolutionary game model of public-opinion-reversal regulation involving informed parties, the media, and the government. A perceived-payoff matrix is established for each party, and replicator dynamic equations are used to analyze the strategy choices of the informed parties, the media, and the government under various conditions. [Result/Conclusion] The model has no stable equilibrium strategy; news verification costs, regulatory costs, penalty intensity, and the public's media literacy all directly shape the direction of system evolution. Recommendations for strengthening regulation include: intensifying government oversight of the online opinion environment to standardize the behavior of netizens and the media; building media credibility so that the media properly play the role of information "gatekeeper"; and improving netizens' media literacy so that views are expressed rationally.
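The replicator dynamic at the heart of the strategy analysis can be illustrated with a one-line Euler step for a single population share; the payoff numbers in the test are illustrative, and the actual model uses three interacting populations with prospect-theoretic perceived payoffs rather than this single-population sketch:

```python
def replicator_step(x, payoff_adopt, payoff_reject, dt=0.01):
    """One Euler step of the replicator dynamic
        dx/dt = x (1 - x) (U_adopt - U_reject)
    for the share x of a population playing the 'adopt' strategy."""
    return x + dt * x * (1 - x) * (payoff_adopt - payoff_reject)
```

The x(1-x) factor means pure states (x=0 or x=1) are fixed points, and the sign of the payoff gap determines whether the adopting share grows or shrinks, which is how the costs and penalties in the model steer the system's evolution.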

19.
Although the Knowledge Graph (KG) has been successfully applied in various applications, a large amount of knowledge in KGs remains incomplete. This study proposes a Knowledge Graph Completion (KGC) method based on a Graph Attention Faded Mechanism (GAFM) to address this problem. GAFM introduces a graph attention network that incorporates information from multi-hop neighborhood nodes to embed the target entities into a low-dimensional space. To generate a more expressive entity representation, GAFM assigns different weights to the neighborhood nodes of the target entity by adjusting their attention values according to path length. The attention value is adjusted by an attention faded coefficient, which decreases as the distance between a neighborhood node and the target entity increases. Then, exploiting the capsule network's ability to fit features, GAFM uses a capsule network as the decoder to extract feature information from triple representations. To verify the effectiveness of the proposed method, we conduct a series of comparative experiments on public datasets (WN18RR and FB15k-237). Experimental results show that the proposed method outperforms baseline methods; the Hits@10 metric improves by 8% over the second-best method, KBGAT.
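The faded-attention idea can be sketched as a softmax over neighbour scores damped by a coefficient that shrinks with path length; the multiplicative `fade ** (d - 1)` form and the parameter value are assumptions for illustration, not GAFM's published formula:

```python
import math

def faded_attention(scores, distances, fade=0.5):
    """Softmax attention where each neighbour's raw score is damped by
    fade**(d-1), d being the neighbour's path length from the target
    entity, so farther multi-hop neighbours contribute less."""
    damped = [s * fade ** (d - 1) for s, d in zip(scores, distances)]
    exp = [math.exp(v) for v in damped]
    z = sum(exp)
    return [e / z for e in exp]
```

With `fade=1` the mechanism reduces to ordinary graph attention; smaller values progressively discount distant multi-hop neighbours, which is the behaviour the abstract attributes to the faded coefficient.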

20.
Based on an empirical survey of real-estate sales personnel, this study explores how salespeople's "relational" and "structural" social networks affect their job performance. The empirical results show that the relational social network has a stable influence on job performance, but the influencing variables differ significantly across performance dimensions; the structural social network has a weak influence on performance, concentrated mainly in two variables, degree centrality in the affective network and betweenness centrality in the information network; and the structural social network does not mediate the relationship between the relational social network and job performance.

