LDCL: Large language model-driven dual-view contrastive learning for temporal knowledge graph completion

Author Identifier (ORCID)

Jianxin Li: https://orcid.org/0000-0002-9059-330X

Abstract

In real-world temporal knowledge graphs (TKGs), most entities are associated with only a limited number of historical snapshots, making it difficult for conventional graph neural network (GNN)-based encoders to capture rich temporal contextual semantics. This limitation often leads to performance bottlenecks in entity and relation prediction tasks. To address this challenge, we propose LDCL (Large Language Model-driven Dual-View Contrastive Learning for Temporal Knowledge Graph Completion), which enhances prediction accuracy by integrating query-relevant semantic information extracted from large language models (LLMs) with structural features learned by GNNs. Specifically, LLMs are used to capture the contextual semantics related to each query, serving as a complementary signal to the structure-aware representations produced by the GNN encoder, thereby compensating for implicit information that traditional methods struggle to capture. In addition, we introduce a dual-view semantic–structural contrastive alignment mechanism: the structural view is obtained from a structure-aware GNN encoder, while the semantic view is derived from the LLMs. This alignment mechanism emphasizes critical temporal signals while suppressing noise. Extensive experiments on four benchmark datasets—ICEWS14, ICEWS05-15, ICEWS18, and GDELT—demonstrate that LDCL achieves notable improvements in mean reciprocal rank (MRR), outperforming the strongest baseline methods by 7.72%, 2.50%, 6.17%, and 5.06%, respectively.
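The dual-view semantic–structural alignment described above can be illustrated with a minimal sketch. This assumes an InfoNCE-style contrastive objective, a common choice for aligning two views of the same entity; the paper's actual loss, temperature, and encoder outputs may differ. Here `structural` and `semantic` are hypothetical per-entity embedding lists, with matching indices treated as positive pairs and all other in-batch pairs as negatives.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def info_nce(structural, semantic, tau=0.1):
    """InfoNCE-style alignment: each structural embedding is pulled toward
    its paired semantic embedding (same index) and pushed away from the
    other semantic embeddings in the batch. Illustrative only; the paper's
    formulation may differ."""
    n = len(structural)
    loss = 0.0
    for i in range(n):
        logits = [cosine(structural[i], semantic[j]) / tau for j in range(n)]
        # log-sum-exp with max subtraction for numerical stability
        m = max(logits)
        log_denom = m + math.log(sum(math.exp(l - m) for l in logits))
        loss += -(logits[i] - log_denom)
    return loss / n
```

Under this toy objective, correctly aligned view pairs yield a near-zero loss, while shuffled (misaligned) pairs are heavily penalized, which is the behavior the contrastive mechanism relies on to emphasize query-relevant signal over noise.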

Keywords

Contrastive learning, large language models, temporal knowledge graph completion

Document Type

Journal Article

Date of Publication

6-15-2026

Volume

741

Publication Title

Information Sciences

Publisher

Elsevier

School

School of Business and Law

Funders

National Natural Science Foundation of China (62476247, 62402447, 62073295, 62072409, "Pioneer", "Leading Goose") / R&D Program of Zhejiang (2024C0-1214) / Zhejiang Provincial Natural Science Foundation (LR21F020003, LQN25F020005) / Supcon Research Fund (KYY-HX-20230833) / Lantai Research Fund (KYY-HX-20240573, KYY-HX-20230365)

Comments

Zhang, Y., Kong, X., Liu, Y., Zheng, S., Shen, G., & Li, J. (2026). LDCL: Large language model-driven dual-view contrastive learning for temporal knowledge graph completion. Information Sciences, 741, 123273. https://doi.org/10.1016/j.ins.2026.123273

Copyright

subscription content


Link to publisher version (DOI)

10.1016/j.ins.2026.123273