HyCubE: Efficient knowledge hypergraph 3D circular convolutional embedding

Author Identifier

Jianxin Li: https://orcid.org/0000-0002-9059-330X

Document Type

Journal Article

Publication Title

IEEE Transactions on Knowledge and Data Engineering

Publisher

IEEE

School

School of Business and Law

Funders

National Science and Technology Major Project of China (2020AAA0108504) / Key Research and Development Program of Ningxia Hui Autonomous Region (2023BEG02067) / National Natural Science Foundation of China (62472311) / Ant Group Research Fund (2023061517131) / Australian Research Council

Grant Number

ARC Number: LP180100750

Comments

Li, Z., Wang, X., Zhao, J., Guo, W., & Li, J. (2025). HyCubE: Efficient knowledge hypergraph 3D circular convolutional embedding. IEEE Transactions on Knowledge and Data Engineering, 37(4), 1902-1914. https://doi.org/10.1109/TKDE.2025.3531372

Abstract

Knowledge hypergraph embedding models are usually computationally expensive due to the inherently complex semantic information they must encode. However, existing works mainly focus on improving the effectiveness of knowledge hypergraph embedding, making the model architecture more complex and redundant. It is desirable yet challenging for knowledge hypergraph embedding to reach a trade-off between model effectiveness and efficiency. In this paper, we propose an end-to-end efficient knowledge hypergraph embedding model, HyCubE, which designs a novel 3D circular convolutional neural network and an alternate mask stack strategy to comprehensively enhance the interaction and extraction of feature information. Furthermore, our proposed model achieves a better trade-off between effectiveness and efficiency by adaptively adjusting the 3D circular convolutional layer structure to handle n-ary knowledge tuples of different arities with fewer parameters. In addition, we use a knowledge hypergraph 1-N multilinear scoring scheme to further accelerate model training. Finally, extensive experimental results on all datasets demonstrate that our proposed model consistently outperforms state-of-the-art baselines, with an average improvement of 8.22% and a maximum improvement of 33.82% across all metrics. Meanwhile, HyCubE is 6.12x faster, uses 52.67% less GPU memory, and reduces the number of parameters by 85.21% compared with the average of the latest state-of-the-art baselines.
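The abstract does not specify HyCubE's 3D circular convolution beyond its name, but the general idea of a circular convolution is standard: wrap-pad the input so the kernel treats the feature cube as a torus, then apply an ordinary valid convolution. As a minimal NumPy sketch of that generic idea (function names and the naive loop implementation are my own illustration, not the paper's method):

```python
import numpy as np

def circular_pad_3d(x, pad):
    # Wrap-pad every spatial axis so the kernel sees the cube as a torus.
    return np.pad(x, [(pad, pad)] * 3, mode="wrap")

def conv3d_valid(x, k):
    # Naive 3D valid cross-correlation, for illustration only.
    d, h, w = k.shape
    out = np.empty((x.shape[0] - d + 1, x.shape[1] - h + 1, x.shape[2] - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            for l in range(out.shape[2]):
                out[i, j, l] = np.sum(x[i:i + d, j:j + h, l:l + w] * k)
    return out

def circular_conv3d(x, k):
    # Circular convolution = wrap-padding + valid convolution;
    # the output keeps the input's spatial shape (odd kernel assumed).
    return conv3d_valid(circular_pad_3d(x, k.shape[0] // 2), k)
```

With a 3x3x3 all-ones kernel on a 3x3x3 cube, every circular window covers each input cell exactly once, so every output entry equals the sum of the whole cube, which makes the wrap-around behavior easy to check. In practice a framework convolution with circular padding (e.g. a padded `Conv3d`) would replace the loops.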

DOI

10.1109/TKDE.2025.3531372

Access Rights

Subscription content
