Similar Documents
 Found 20 similar documents (search time: 31 ms)
1.
Learning semantic representations of documents is essential for various downstream applications, including text classification and information retrieval. Entities, as important sources of information, play a crucial role in shaping latent representations of documents. In this work, we hypothesize that entities are not monolithic concepts; instead, they have multiple aspects, and different documents may discuss different aspects of a given entity. Given that, we argue that from an entity-centric point of view, a document related to multiple entities should be (a) represented differently for different entities (multiple entity-centric representations), and (b) each entity-centric representation should reflect the specific aspects of the entity discussed in the document. In this work, we pose the following research questions: (1) Can we confirm that entities have multiple aspects, with different aspects reflected in different documents? (2) Can we learn a representation of entity aspects from a collection of documents, and a representation of a document based on the multiple entities and their aspects as reflected in those documents? (3) Does this novel representation improve algorithm performance in downstream applications? (4) What is a reasonable number of aspects per entity? To answer these questions we model each entity using multiple aspects (entity facets), where each entity facet is represented as a mixture of latent topics. Then, given a document associated with multiple entities, we assume multiple entity-centric representations, where each entity-centric representation is a mixture of the facets of the corresponding entity. 
Finally, a novel graphical model, the Entity Facet Topic Model (EFTM), is proposed to learn entity-centric document representations, entity facets, and latent topics. Through experimentation we confirm that (1) entities are multi-faceted concepts that we can model and learn, (2) multi-faceted entity-centric modeling of documents leads to effective representations, which (3) can have an impact on downstream applications, and (4) considering a small number of facets is effective enough. In particular, we visualize entity facets within a set of documents and demonstrate that different sets of documents indeed reflect different facets of entities. Further, we demonstrate that the proposed entity facet topic model generates better document representations in terms of perplexity than state-of-the-art document representation methods. Moreover, we show that the proposed model outperforms baseline methods in multi-label classification. Finally, we study the impact of EFTM's parameters and find that a small number of facets better captures entity-specific topics, confirming the intuition that, on average, an entity has a small number of facets reflected in documents.
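The core representational idea (an entity-centric document representation as a mixture of that entity's facets, each facet itself a distribution over topics) can be illustrated with a tiny numpy sketch. The dimensions, the Dirichlet draws, and the facet weights below are illustrative assumptions, not EFTM's learned parameters:

```python
import numpy as np

# Hypothetical sizes: one entity with 3 facets over 4 latent topics.
rng = np.random.default_rng(0)

# Each facet is a distribution over topics (each row sums to 1).
facet_topics = rng.dirichlet(np.ones(4), size=3)   # shape (3, 4)

# For one document, the entity-centric representation mixes that
# entity's facets (assumed facet weights, summing to 1).
facet_weights = np.array([0.7, 0.2, 0.1])

# Entity-centric document representation: itself a topic distribution,
# since it is a convex combination of topic distributions.
doc_repr = facet_weights @ facet_topics            # shape (4,)
print(doc_repr.sum())
```

A document mentioning several entities would get one such vector per entity, each weighting that entity's facets differently.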

2.
Learning a continuous, dense, low-dimensional representation of knowledge graphs (KGs), known as knowledge graph embedding (KGE), has been viewed as key to intelligent reasoning in deep learning (DL) and has gained much attention in recent years. To address the problem that current KGE models are generally ineffective on small-scale sparse datasets, we propose a novel method, RelaGraph, which improves the representation of entities and relations in KGs by introducing neighborhood relations. RelaGraph extends the neighborhood information used during entity encoding and adds neighborhood relations to mine deeper graph structure information, making up for the shortage of information in the generated subgraph. This method represents KG components in a vector space in a way that captures the structure of the graph, avoiding both underlearning and overfitting. KGE based on RelaGraph is evaluated on a small-scale sparse graph, KMHEO, reaching an MRR of 0.49, 34 percentage points higher than the SOTA methods, with similar gains on several other datasets. Additionally, the vectors learned by RelaGraph are used to introduce DL into several KG-related downstream tasks, achieving excellent results and verifying the superiority of KGE-based methods.
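The key move described above, i.e., folding neighborhood *relations* (not just neighbor entities) into an entity's encoding, can be sketched as a one-line message-passing step. The embeddings and the additive aggregation below are assumptions for illustration; RelaGraph's actual encoder is not specified in this abstract:

```python
import numpy as np

rng = np.random.default_rng(1)
dim = 8

# Hypothetical pretrained-style embeddings for entities and relations.
entity_emb = {e: rng.normal(size=dim) for e in ["e0", "e1", "e2"]}
relation_emb = {r: rng.normal(size=dim) for r in ["r1", "r2"]}

# Neighborhood of the target entity as (relation, neighbor) pairs.
neighbors = [("r1", "e1"), ("r2", "e2")]

def encode(target, neighbors):
    """Enrich an entity vector with neighborhood relations as well as
    neighbor entities (a sketch of the RelaGraph idea, not its model)."""
    messages = [relation_emb[r] + entity_emb[e] for r, e in neighbors]
    return entity_emb[target] + np.mean(messages, axis=0)

enc = encode("e0", neighbors)
print(enc.shape)
```

Dropping `relation_emb[r]` from the message recovers a plain neighbor-entity aggregation, which is exactly the information shortage the method aims to fix on sparse subgraphs.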

3.
Knowledge graphs are widely used in retrieval systems, question answering (QA) systems, hypothesis generation systems, etc. Representation learning provides a way to mine knowledge graphs to detect missing relations, and translation-based embedding models are a popular form of representation model. However, shortcomings of translation-based models limit their practicability as knowledge completion algorithms. The proposed model helps to address some of these shortcomings. The similarity between the graph structural features of two entities was found to be correlated with the relations of those entities. This correlation can help to solve the problems caused by unbalanced relations and reciprocal relations. We used Node2vec, a graph embedding algorithm, to represent information related to an entity's graph structure, and we introduce a cascade model to incorporate graph embedding and knowledge embedding into a unified framework. The cascade model first refines feature representations in the first two stages (Local Optimization Stage), and then uses backpropagation to optimize the parameters of all stages (Global Optimization Stage). This enhances the knowledge representation of existing translation-based algorithms by taking into account both semantic features and graph features and fusing them to extract more useful information. In addition, different cascade structures are designed to find the optimal solution to the problem of knowledge inference and retrieval. The proposed model was verified using three mainstream knowledge graphs: WN18, FB15k, and BioChem. Experimental results were validated using the hit@10 entity prediction task. The proposed model performed better than TransE, giving an average improvement of 2.7% on WN18, 2.3% on FB15k, and 28% on BioChem. Improvements were particularly marked where there were problems with unbalanced relations and reciprocal relations. 
Furthermore, the stepwise-cascade structure proves more effective, significantly outperforming the other baselines.
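The TransE baseline and the hit@10 evaluation used above are both simple to state concretely. Below is a minimal sketch of both, assuming synthetic random embeddings (the true tail entity is planted near `h + r`, mimicking a well-learned triple):

```python
import numpy as np

def transe_score(h, r, t):
    # TransE models a triple (h, r, t) as h + r ≈ t; lower distance is better.
    return np.linalg.norm(h + r - t)

def hits_at_k(h, r, candidates, true_idx, k=10):
    # Rank all candidate tails by distance; check the true tail is in the top k.
    dists = [transe_score(h, r, c) for c in candidates]
    ranked = np.argsort(dists)
    return true_idx in ranked[:k]

rng = np.random.default_rng(2)
h, r = rng.normal(size=16), rng.normal(size=16)
true_t = h + r + 0.01 * rng.normal(size=16)          # near-perfect tail
candidates = list(rng.normal(size=(50, 16))) + [true_t]
print(hits_at_k(h, r, candidates, true_idx=50, k=10))  # True
```

Averaging this boolean over all test triples gives the hit@10 rate reported in the abstract.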

4.
Although the Knowledge Graph (KG) has been successfully applied to various applications, a large amount of knowledge in KGs remains incomplete. This study proposes a Knowledge Graph Completion (KGC) method based on a Graph Attention Faded Mechanism (GAFM) to solve the problem of incomplete knowledge in KGs. GAFM introduces a graph attention network that incorporates information from multi-hop neighborhood nodes to embed target entities into a low-dimensional space. To generate a more expressive entity representation, GAFM gives different weights to the neighborhood nodes of the target entity by adjusting their attention values according to path length. The attention value is adjusted by an attention faded coefficient, which decreases as the distance between the neighborhood node and the target entity increases. Then, since the capsule network is well suited to fitting features, GAFM introduces it as the decoder to extract feature information from triple representations. To verify the effectiveness of the proposed method, we conduct a series of comparative experiments on public datasets (WN18RR and FB15k-237). Experimental results show that the proposed method outperforms baseline methods, improving the Hits@10 metric by 8% over the second-place KBGAT method.
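The "faded" attention idea, i.e., damping a neighbor's attention by a coefficient that shrinks with path length before normalizing, can be sketched in a few lines. The multiplicative `fade ** (path_length - 1)` form and the `fade` value are assumptions for illustration; the paper's exact coefficient may differ:

```python
import numpy as np

def faded_attention(scores, path_lengths, fade=0.5):
    """Damp raw attention scores by fade**(path_length - 1) before the
    softmax, so multi-hop neighbors contribute less than direct ones.
    `fade` is a hypothetical faded coefficient in (0, 1)."""
    scores = np.asarray(scores, dtype=float)
    damped = scores * fade ** (np.asarray(path_lengths) - 1)
    e = np.exp(damped - damped.max())   # numerically stable softmax
    return e / e.sum()

# Two neighbors with equal raw scores but different distances:
w = faded_attention([1.0, 1.0], path_lengths=[1, 3])
print(w[0] > w[1])  # the 1-hop neighbor gets more weight → True
```

With `fade = 1.0` this reduces to ordinary graph attention, which is the baseline behavior GAFM modifies.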

5.
6.
The vector space model (VSM) is a textual representation method widely used in document classification; however, it poses a space problem. One attempt to alleviate this is dimensionality reduction, but such techniques have deficiencies, such as losing important information. In this paper, we propose a novel text classification method that uses neither the VSM nor dimensionality reduction. The proposed method is space-efficient and utilizes a first-order Markov model for hierarchical Arabic text classification. For each category and sub-category, a Markov chain model is built from neighboring character sequences. The prepared models are then used to score documents for classification. For evaluation, we used a hierarchical Arabic text collection of 11,191 documents belonging to eight topics distributed over three levels. The experimental results show that the Markov chain based method significantly outperforms a baseline system that employs latent semantic indexing (LSI), enhancing the F1-measure by 3.47%. The novelty of this work lies in the idea of decomposing words into sequences of characters, which was found to be a promising approach in terms of both space and accuracy. To the best of our knowledge, this is the first attempt at hierarchical Arabic text classification with such a relatively large data collection.
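The character-level Markov scoring described above is easy to make concrete: per category, count character-bigram transitions, then classify a document by the higher smoothed log-likelihood. The toy English categories, the add-one smoothing, and the alphabet size below are illustrative assumptions (the paper works on Arabic text over a 3-level hierarchy):

```python
from collections import defaultdict
import math

def train_markov(texts):
    """First-order character Markov model: transition counts P(next | char)."""
    counts = defaultdict(lambda: defaultdict(int))
    for t in texts:
        for a, b in zip(t, t[1:]):
            counts[a][b] += 1
    return counts

def log_score(model, text, alphabet_size=30):
    """Smoothed (add-one) log-likelihood of a document under a category model."""
    s = 0.0
    for a, b in zip(text, text[1:]):
        total = sum(model[a].values())
        s += math.log((model[a][b] + 1) / (total + alphabet_size))
    return s

# Toy two-category example with assumed training snippets.
sports = train_markov(["the match ended", "the team won the match"])
finance = train_markov(["the bank raised rates", "stock market rates"])

doc = "the match"
pred = "sports" if log_score(sports, doc) > log_score(finance, doc) else "finance"
print(pred)  # sports
```

Because only transition counts are stored (roughly alphabet² integers per category), the model's footprint is independent of vocabulary size, which is the space advantage over the VSM.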

7.
As an emerging task in opinion mining, End-to-End Multimodal Aspect-Based Sentiment Analysis (MABSA) aims to extract all the aspect-sentiment pairs mentioned in a sentence-image pair. Most existing MABSA methods do not explicitly incorporate aspect and sentiment information in their textual and visual representations, and fail to consider the different contributions of visual representations to each word or aspect in the text. To tackle these limitations, we propose a multi-task learning framework named Cross-Modal Multitask Transformer (CMMT), which incorporates two auxiliary tasks to learn aspect/sentiment-aware intra-modal representations and introduces a Text-Guided Cross-Modal Interaction Module to dynamically control the contributions of visual information to the representation of each word during inter-modal interaction. Experimental results demonstrate that CMMT consistently outperforms the state-of-the-art approach JML by 3.1, 3.3, and 4.1 absolute percentage points on three Twitter datasets for the End-to-End MABSA task, respectively. Moreover, further analysis shows that CMMT is superior to comparison systems in both aspect extraction (AE) and sentiment classification (SC), which should move the development of multimodal AE and SC algorithms forward.

8.
Frame representation is a knowledge representation method widely used in artificial intelligence systems, while a directory server is a hierarchical database that stores resource information in layers within a directory tree structure. Given the hierarchical nature of directory services, applying them to frame-based knowledge representation can achieve excellent results with little effort, and the development kits provided by LDAP can be used to implement operations on the knowledge base.

9.
Multi-document discourse parsing aims to automatically identify the relations among textual spans from different texts on the same topic. Recently, with the growing amount of information and the emergence of new technologies that deal with many sources of information, more precise and efficient parsing techniques are required. The theory most relevant to multi-document relationships, Cross-document Structure Theory (CST), has been used for parsing before, though the results have not been satisfactory. CST has received much criticism for its subjectivity, which may lead to low annotation agreement and, consequently, to poor parsing performance. In this work, we propose a refinement of the original CST, which consists of (i) formalizing the relation definitions, (ii) pruning and combining some relations based on their meaning, and (iii) organizing the relations in a hierarchical structure. The hypothesis behind this refinement is that it will lead to better annotation agreement and consequently to better parsing results. To this end, an annotated corpus was built according to the refinement, and an improvement in annotation agreement was observed. Based on this corpus, a parser was developed using machine learning techniques and hand-crafted rules. Specifically, hierarchical techniques were used to capture the hierarchical organization of the relations according to the proposed refinement of CST. These two approaches were used to identify the relations among text spans and to generate the multi-document annotation structure. The results outperformed other CST parsers, showing the adequacy of the proposed refinement of the theory.

10.
谭晓  李辉 《现代情报》2019,39(8):29-36
[Purpose/Significance] Facing the intensifying evolution of scientific and technological innovation and the accelerating convergence of disciplines, accurately identifying research fronts with intelligence-studies methods and methods from other disciplines has become an important task in obtaining strategic S&T intelligence. Research fronts not only provide foresight into current priorities and future trends but also supply key indicators for government decision-making. [Method/Process] Through content analysis, this paper summarizes the current state of research-front identification frameworks and methods, as well as the application of multi-relation and content-level analysis, and finds that existing models and methods for research-front identification still have shortcomings. [Result/Conclusion] To address these shortcomings, a preliminary front-identification model combining macro and micro perspectives is designed on the basis of knowledge fusion over multi-source data: multi-entity and multi-relation fusion is applied to topic association; community-structure detection in graph models and the information contained in cliques are used for topic representation; and research-front types are classified and a forward-looking indicator system is constructed to complete the identification of S&T frontiers more accurately, efficiently, and comprehensively.

11.
丁晟春  方振  王楠 《现代情报》2009,40(3):103-110
[Purpose/Significance] To address the scattered, disordered, and fragmented nature of multi-source heterogeneous enterprise data on open web platforms, a Bi-LSTM-CRF deep learning model is proposed for named entity recognition in the business domain. [Method/Process] The method covers three types of named entities: full enterprise names, abbreviated enterprise names, and person names. [Result/Conclusion] Experimental results show an average F1 of 90.85% across the three entity types, verifying the effectiveness of the proposed method and demonstrating that it improves named entity recognition in the business domain.

12.
13.
14.
The evaluation of exploratory search reflects the ongoing paradigm shift from focusing on the search algorithm to focusing on the interactive process. This paper proposes a model-driven formative evaluation approach, in which the goal is not the evaluation of a specific system per se, but the exploration of new design possibilities. The paper gives an example of this approach in which a model of sensemaking was used to inform the evaluation of a basic exploratory search system in the context of a sensemaking task. The model suggested that, rather than just looking at simple search performance measures, we should closely examine the interwoven, interactive processes of representation construction and information seeking. Participants were asked to make sense of an unfamiliar topic using an augmented query-based search system. The processes of representation construction and information seeking were captured and analyzed using data from experiment notes, interviews, and a system log. The data analysis revealed users' sources of ideas for structuring representations and a tightly coupled relationship between search and representation construction in their exploratory searches. For example, users strategically used search to find useful structure ideas instead of just accumulating information facts. Implications for improving current search systems and designing new systems are discussed.

15.
柯佳 《情报科学》2021,39(10):165-169
[Purpose/Significance] Entity relation extraction is fundamental to building domain ontologies and knowledge graphs and to developing question answering systems. Distant supervision aligns large-scale unstructured text with entities in existing knowledge bases to label training samples automatically, avoiding the time- and labor-intensive manual annotation that supervised machine learning requires, but it also introduces noise into the data. [Method/Process] This paper reviews recent methods that combine distant supervision with deep learning to reduce training-sample noise and improve entity relation extraction performance. [Result/Conclusion] Convolutional neural networks better capture local, key features of sentences; long short-term memory networks better handle long-distance dependencies between entity pairs; models automatically extract lexical and syntactic features; attention mechanisms assign larger weights to key context and words; and incorporating prior knowledge into neural models enriches the semantics of entity pairs, significantly improving relation extraction. [Innovation/Limitations] Future research should consider how to handle overlapping relations and long-tail semantic relations between entity pairs, so as to address relation noise more comprehensively.

16.
This paper presents a robust and comprehensive graph-based rank aggregation approach, used to combine the results of isolated ranker models in retrieval tasks. The method follows an unsupervised scheme that is independent of how the isolated ranks are formulated. Our approach is able to combine arbitrary models defined in terms of different ranking criteria, such as those based on textual, image, or hybrid content representations. We reformulate the ad-hoc retrieval problem as document retrieval based on fusion graphs, which we propose as a new unified representation model capable of merging multiple ranks and automatically expressing the inter-relationships of retrieval results. By doing so, we claim that the retrieval system can benefit from learning the manifold structure of datasets, thus leading to more effective results. Another contribution is that our graph-based aggregation formulation, unlike existing approaches, allows for encapsulating contextual information encoded from multiple ranks, which can be used directly for ranking, without further computations and post-processing steps over the graphs. Based on the graphs, a novel similarity retrieval score is formulated using an efficient computation of minimum common subgraphs. A final benefit over existing approaches is the absence of hyperparameters. A comprehensive experimental evaluation was conducted on diverse well-known public datasets composed of textual, image, and multimodal documents. The experiments demonstrate that our method reaches top performance, yielding better effectiveness scores than state-of-the-art baselines and promoting large gains over the rankers being fused, demonstrating the capability of the proposal to represent queries with a unified graph-based model of rank fusions.
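The two building blocks named above, i.e., merging ranks into a weighted graph and scoring similarity through graph overlap, can be sketched in miniature. The edge construction (linking consecutive documents, weighted by reciprocal rank) and the min-weight overlap score are simplified stand-ins for the paper's fusion-graph construction and minimum-common-subgraph computation, not its exact definitions:

```python
from collections import defaultdict

def fusion_graph(ranks):
    """Fuse several ranked lists into one weighted graph: consecutive
    documents in each rank are linked, edges weighted by reciprocal rank
    (a simplified sketch of the fusion-graph idea)."""
    g = defaultdict(float)
    for rank in ranks:
        for pos, (a, b) in enumerate(zip(rank, rank[1:]), start=1):
            g[frozenset((a, b))] += 1.0 / pos
    return g

def similarity(g1, g2):
    # Overlap of two fusion graphs: shared edges with their minimum
    # weights (a stand-in for a minimum-common-subgraph score).
    return sum(min(g1[e], g2[e]) for e in g1.keys() & g2.keys())

q1 = fusion_graph([["d1", "d2", "d3"], ["d2", "d1", "d3"]])
q2 = fusion_graph([["d1", "d2", "d4"]])
print(similarity(q1, q2))  # shared edge {d1, d2} contributes 1.0
```

Note that retrieval then happens directly over these graphs; no learned weights or hyperparameters enter the construction, mirroring the parameter-free claim in the abstract.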

17.
Deep multi-view clustering (MVC) mines and employs the complex relationships among views to learn compact data clusters with deep neural networks in an unsupervised manner. More recent deep contrastive learning (CL) methods have shown promising performance in MVC by learning cluster-oriented deep feature representations, realized by contrasting positive and negative sample pairs. However, most existing deep contrastive MVC methods focus on one-sided contrastive learning, such as feature-level or cluster-level contrast, failing either to integrate the two or to bring in further important aspects of contrast. Additionally, most of them work in a separate two-stage manner, i.e., first feature learning and then data clustering, so that the two stages cannot mutually benefit each other. To address these challenges, in this paper we propose a novel joint contrastive triple-learning framework to learn multi-view discriminative feature representations for deep clustering. The framework is threefold: feature-level alignment-oriented CL, feature-level commonality-oriented CL, and cluster-level consistency-oriented CL. The former two submodules contrast the encoded feature representations of data samples at different feature levels, while the last contrasts the data samples in their cluster-level representations. Benefiting from the triple contrast, more discriminative representations of the views can be obtained. Meanwhile, a view weight learning module is designed to learn and exploit the quantitative complementary information across the learned discriminative features of each view. Thus, the contrastive triple-learning module, the view weight learning module, and the data clustering module with these fused features are performed jointly, so that the modules are mutually beneficial. 
Extensive experiments on several challenging multi-view datasets show the superiority of the proposed method over many state-of-the-art methods, with especially large accuracy improvements of 15.5% on Caltech-4V and 8.1% on CCV. Given its promising performance on visual datasets, the proposed method can be applied to many practical visual applications such as visual recognition and analysis. The source code of the proposed method is provided at https://github.com/ShizheHu/Joint-Contrastive-Triple-learning.
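The basic contrastive term underlying each of the three CL submodules, i.e., pull matching pairs across two views together and push mismatched pairs apart, can be sketched with a generic InfoNCE-style loss. This is an illustration of the kind of contrast involved, with assumed synthetic data and temperature, not the paper's specific objective:

```python
import numpy as np

def info_nce(z1, z2, tau=0.5):
    """Minimal contrastive (InfoNCE-style) loss between two views:
    matching rows of z1/z2 are positives, all other pairs negatives."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / tau                       # pairwise cosine / tau
    logits -= logits.max(axis=1, keepdims=True)    # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    idx = np.arange(len(z1))
    return -np.mean(np.log(probs[idx, idx]))       # positives on diagonal

rng = np.random.default_rng(3)
z = rng.normal(size=(8, 16))
aligned = info_nce(z, z + 0.01 * rng.normal(size=(8, 16)))
shuffled = info_nce(z, rng.permutation(z, axis=0))
print(aligned < shuffled)  # aligned views yield a lower loss
```

Applied to feature-level representations it plays the role of the alignment/commonality contrasts; applied to cluster-assignment vectors it plays the role of the cluster-level consistency contrast.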

18.
Among existing knowledge graph based question answering (KGQA) methods, relation supervision methods require labeled intermediate relations for stepwise reasoning. To avoid the enormous cost of labeling large-scale knowledge graphs, weak supervision methods, which use only the answer entity to compute rewards as supervision, have been introduced. However, the lack of intermediate supervision raises the issue of sparse rewards, which may result in two types of incorrect reasoning paths: (1) incorrectly reasoned relations, even when the final answer entity is correct; and (2) correctly reasoned relations in the wrong order, which leads to an incorrect answer entity. To address these issues, this paper treats the multi-hop KGQA task as a Markov decision process and proposes a model based on Reward Integration and Policy Evaluation (RIPE). In this model, an integrated reward function is designed to evaluate the reasoning process by leveraging both terminal and instant rewards. The intermediate supervision for each reasoning hop is constructed with regard to both the fitness of the taken action and the evaluation of the unreasoned information remaining in the updated question embeddings. In addition, to lead the agent to the answer entity along the correct reasoning path, an evaluation network is designed to evaluate the taken action at each hop. Extensive ablation studies and comparative experiments on four KGQA benchmark datasets demonstrate that the proposed model outperforms state-of-the-art approaches in answering accuracy.
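The reward-integration idea, i.e., giving each hop a signal that blends its own instant reward with the (discounted) terminal reward, can be sketched in a few lines. The linear blend and the `gamma`/`lam` parameters are hypothetical illustrations of how such an integrated reward could be shaped, not RIPE's actual formula:

```python
def integrated_reward(instant_rewards, terminal_reward, gamma=0.9, lam=0.5):
    """Blend per-hop (instant) rewards with the terminal reward,
    discounted back to each hop. `gamma` (discount) and `lam`
    (mixing weight) are assumed parameters for this sketch."""
    n = len(instant_rewards)
    returns = []
    for t, r in enumerate(instant_rewards):
        # terminal reward discounted back to hop t, mixed with instant reward
        returns.append(lam * r + (1 - lam) * gamma ** (n - 1 - t) * terminal_reward)
    return returns

# A 3-hop path that reaches the right answer (terminal reward 1.0)
# but whose middle hop looks weak (instant reward 0.2):
rewards = integrated_reward([0.9, 0.2, 0.8], terminal_reward=1.0)
print(rewards)
```

With only the terminal reward (`lam = 0`) every hop on a successful path looks equally good, which is exactly the sparse-reward ambiguity the instant component is meant to break.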

19.
The knowledge contained in academic literature is worth mining. Inspired by the idea of molecular-marker tracing in biochemistry, three kinds of named entities, namely methods, datasets, and metrics, are extracted and used as artificial intelligence (AI) markers for the AI literature. These entities can be used to trace the research process described in the bodies of papers, opening up new perspectives for seeking and mining more valuable academic information. First, a named entity recognition model is used to extract AI markers from large-scale AI literature, and a multi-stage self-paced learning strategy (MSPL) is proposed to address the negative influence of hard and noisy samples on model training. Second, original papers are traced for AI markers, and statistical and propagation analyses are performed on the tracing results. Finally, the co-occurrences of AI markers are used for clustering, and the evolution within method clusters is explored. This AI-marker-based mining yields many significant findings; for example, the propagation rate of datasets gradually increases, and the methods proposed by China in recent years have had an increasing influence on other countries.

20.
Representation learning has recently been used to remove sensitive information from data and improve the fairness of machine learning algorithms in social applications. However, previous works that used neural networks are opaque and poorly interpretable, as it is difficult to intuitively determine the independence between representations and sensitive information. The internal correlation among data features has not been fully discussed, and it may be the key to improving the interpretability of neural networks. A novel fair representation algorithm, referred to as FRC, is proposed from this conjecture. It shows how representations independent of multiple sensitive attributes can be learned by applying specific correlation constraints on representation dimensions. Specifically, the dimensions of the representation and the sensitive attributes are treated as statistical variables. The representation variables are divided into two parts, related and unrelated to the sensitive variables, by adjusting their absolute correlation coefficients with the sensitive variables. The potential impact of sensitive information on the representation is concentrated in the related part, while the unrelated part can be used in downstream tasks to yield fair results. FRC takes the correlation between dimensions as the key to solving the problem of fair representation. Empirical results show that our representations enhance the ability of neural networks to achieve fairness and attain better fairness-accuracy tradeoffs than state-of-the-art methods.
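The splitting step, i.e., partitioning representation dimensions by their absolute correlation with a sensitive variable, is easy to demonstrate on synthetic data. The toy representation (two dims deliberately leak the sensitive attribute) and the cut-off threshold are assumptions for illustration; FRC learns its representation with constraints rather than filtering a fixed one:

```python
import numpy as np

rng = np.random.default_rng(4)
n, d = 500, 6
sensitive = rng.integers(0, 2, size=n).astype(float)

# Toy representation: dims 0-1 leak the sensitive attribute, dims 2-5 do not.
Z = rng.normal(size=(n, d))
Z[:, 0] += 2 * sensitive
Z[:, 1] -= 2 * sensitive

def unrelated_dims(Z, s, threshold=0.3):
    """Keep only dimensions whose absolute Pearson correlation with the
    sensitive variable is low. `threshold` is a hypothetical cut-off."""
    corr = [abs(np.corrcoef(Z[:, j], s)[0, 1]) for j in range(Z.shape[1])]
    return [j for j, c in enumerate(corr) if c < threshold]

print(unrelated_dims(Z, sensitive))  # the non-leaking dimensions: [2, 3, 4, 5]
```

Downstream models trained only on the returned dimensions cannot linearly recover the sensitive attribute, which is the intuition behind using the unrelated part for fair predictions.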
