An efficient relation-specific graph transformation network for knowledge graph representation learning
Institution:1. Business School, Sichuan University, Chengdu, PR China;2. School of Safety Science and Emergency Management, Wuhan University of Technology, Wuhan 430070, PR China;3. Decision Sciences Department, LeBow College of Business, Drexel University, Philadelphia, PA 19104, USA;4. College of Architecture and Urban-Rural Planning, Sichuan Agricultural University, Dujiangyan, China
Abstract:Knowledge graph representation learning (KGRL) aims to infer missing links between target entities based on existing triples. Graph neural networks (GNNs) have recently been introduced as one of the latest architectures serving the KGRL task by aggregating neighborhood information. However, current GNN-based methods have fundamental limitations in both modelling multi-hop distant neighbors and selecting relation-specific neighborhood information from vast numbers of neighbors. In this study, we propose a new relation-specific graph transformation network (RGTN) for the KGRL task. Specifically, the proposed RGTN is the first model that transforms a relation-based graph into a new path-based graph by generating useful paths that connect heterogeneous relations and multi-hop neighbors. Unlike existing GNN-based methods, our approach is able to adaptively select the most useful paths for each specific relation and to effectively build path-based connections between unconnected distant entities. The transformed graph structure opens a new way to model multi-hop neighbors of arbitrary length, which leads to more effective embedding learning. To verify the effectiveness of the proposed model, we conduct extensive experiments on three standard benchmark datasets, namely WN18RR, FB15k-237 and YAGO3-10-DR. Experimental results show that the proposed RGTN achieves promising results and outperforms other state-of-the-art models on the KGRL task (compared to other state-of-the-art GNN-based methods, our model achieves a 2.5% improvement in H@10 on WN18RR, a 1.2% improvement in H@10 on FB15k-237 and a 6% improvement in H@10 on YAGO3-10-DR).
Keywords:Knowledge graph; Graph transformation; Representation learning; Graph neural networks
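The abstract gives no implementation details, but the described relation-to-path transformation can be illustrated by a soft selection over relation-specific adjacency matrices whose products connect multi-hop neighbors directly. The PyTorch sketch below is only an illustrative assumption of that idea (class names, softmax-based selection and the fixed path length are hypothetical), not the authors' RGTN code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SoftRelationSelect(nn.Module):
    """Softly selects a convex combination of per-relation adjacency matrices.

    Illustrative assumption: relation-specific path selection is modelled here
    as softmax attention weights over relation types.
    """

    def __init__(self, num_relations: int):
        super().__init__()
        # one learnable score per relation type
        self.scores = nn.Parameter(torch.zeros(num_relations))

    def forward(self, rel_adjs: torch.Tensor) -> torch.Tensor:
        # rel_adjs: (num_relations, N, N) stack of relation-specific adjacencies
        weights = F.softmax(self.scores, dim=0)                # (num_relations,)
        return torch.einsum('r,rij->ij', weights, rel_adjs)    # (N, N)


class PathGraphTransform(nn.Module):
    """Builds a path-based adjacency by chaining soft relation selections,
    so entities reachable through several relations become directly connected
    (hypothetical sketch of a relation-to-path graph transformation)."""

    def __init__(self, num_relations: int, path_length: int):
        super().__init__()
        self.selects = nn.ModuleList(
            SoftRelationSelect(num_relations) for _ in range(path_length))

    def forward(self, rel_adjs: torch.Tensor) -> torch.Tensor:
        path_adj = self.selects[0](rel_adjs)
        for select in self.selects[1:]:
            # composing adjacencies by matrix product links multi-hop neighbors
            path_adj = path_adj @ select(rel_adjs)
        return path_adj


# usage example with random relation-specific adjacencies
num_entities, num_relations = 5, 3
rel_adjs = torch.randint(0, 2, (num_relations, num_entities, num_entities)).float()
model = PathGraphTransform(num_relations, path_length=2)
path_adj = model(rel_adjs)  # (N, N) adjacency of the transformed path-based graph
```

Composing the selected adjacencies by matrix multiplication is what turns multi-hop, relation-dependent neighborhoods into direct path-based edges, which is the property the abstract attributes to the transformed graph.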