Author Identifier (ORCID)

Wei Ni: https://orcid.org/0000-0002-4933-594X

Abstract

Large Language Models (LLMs) are prone to hallucination in specialized domains, and standard Retrieval-Augmented Generation (RAG) often falters because general-purpose embeddings are ill-suited to domain-specific terminology. Domain-specific fine-tuning enhances retrieval, but centralizing data introduces privacy risks. Federated learning (FL) can alleviate this to some extent, yet it faces challenges of data heterogeneity, poor personalization, and expensive training data generation. We propose pFedRAG, a novel Personalized Federated RAG framework that enables efficient collaborative fine-tuning of embedding models to address these challenges. The key contribution is a new Depth-Adaptive Tiered Embedding (DATE) architecture, which comprises a Global Shared Layer, aggregated via FL to capture common knowledge, and a Personalized Layer whose depth is adjusted to each client's local data and training results. The depth is controlled locally based on crafted metrics and scoring criteria. In addition, pFedRAG incorporates a fully client-side pipeline that leverages local small LLMs and vector database filtering to construct high-quality query-document pairs. Experiments on diverse non-IID medical document datasets demonstrate that pFedRAG significantly reduces communication costs, handles data heterogeneity, and improves retrieval performance. Human evaluations confirm the enhanced response quality of pFedRAG.
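The core DATE idea in the abstract — a shared layer averaged across clients via FL plus a personalized stack whose depth each client chooses locally — can be illustrated with a minimal sketch. All class names, dimensions, and the use of plain FedAvg here are assumptions for illustration, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

class DATEClient:
    """Illustrative sketch of a Depth-Adaptive Tiered Embedding client:
    one Global Shared Layer (federated) plus a Personalized Layer stack
    of locally chosen depth (kept on-device)."""

    def __init__(self, dim: int = 8, personal_depth: int = 1):
        # Shared layer: its weights participate in federated averaging.
        self.shared = rng.standard_normal((dim, dim))
        # Personalized layers: depth is set per client and never uploaded.
        self.personal = [rng.standard_normal((dim, dim))
                         for _ in range(personal_depth)]

    def embed(self, x: np.ndarray) -> np.ndarray:
        h = np.maximum(x @ self.shared, 0.0)   # shared layer + ReLU
        for w in self.personal:                # depth-adaptive personal tier
            h = np.maximum(h @ w, 0.0)
        return h

def fedavg_shared(clients):
    """Server step: average only the shared layers across clients.
    Personalized layers stay local, preserving privacy and personalization."""
    avg = np.mean([c.shared for c in clients], axis=0)
    for c in clients:
        c.shared = avg.copy()
```

In this sketch, only `shared` crosses the network each round, which is one way the abstract's reduced communication cost could arise; the depth-selection metrics themselves are not modeled here.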

Keywords

Federated learning, retrieval-augmented generation, domain-specific embeddings, personalization, medical information retrieval, data privacy

Document Type

Conference Proceeding

Date of Publication

1-1-2025

Publication Title

Findings of the Association for Computational Linguistics: EMNLP 2025

Publisher

Association for Computational Linguistics

School

School of Engineering

Creative Commons License

Creative Commons Attribution 4.0 License
This work is licensed under a Creative Commons Attribution 4.0 License.

Comments

He, H., Yuan, X., Wu, K., Liu, R. P., & Ni, W. (2025). pFedRAG: A personalized federated retrieval-augmented generation system with depth-adaptive tiered embedding tuning. In Findings of the Association for Computational Linguistics: EMNLP 2025 (pp. 14255–14268). Association for Computational Linguistics. https://doi.org/10.18653/v1/2025.findings-emnlp.769

First Page

14255

Last Page

14268

Link to publisher version (DOI)

10.18653/v1/2025.findings-emnlp.769