In pursuit of building open, intelligent, and efficient large AI models, we aim to address the challenges posed by the diverse data and resources distributed across edge devices, which can significantly limit the performance and scalability of such models.
This paper introduces a novel method called FedPIN (Personalized Invariant Federated Learning with Shortcut-Averse Information-Theoretic Regularization) to address the out-of-distribution (OOD) generalization problem in personalized federated learning (PFL). By leveraging causal models and information-theoretic constraints, this approach aims to extract personalized invariant features while avoiding the pitfalls of spurious correlations.
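The summary above does not spell out FedPIN's exact objective, but the idea of pairing a task loss with an information-theoretic penalty that discourages shortcut (spurious) features can be illustrated with a minimal sketch. The penalty below, a KL divergence pulling each client's feature distribution toward a shared global reference, and the function and argument names are illustrative assumptions, not the paper's actual formulation.

```python
import torch
import torch.nn.functional as F

def client_loss(features, logits, labels, global_mu, global_logvar, lam=0.1):
    """Illustrative per-client objective: task loss plus an assumed
    information-theoretic regularizer (not FedPIN's exact form).

    The regularizer is the KL divergence between the client's batch feature
    distribution (fit as a diagonal Gaussian) and a shared global reference,
    limiting how much client-specific, possibly spurious, information the
    personalized features can carry.
    """
    task_loss = F.cross_entropy(logits, labels)

    # Fit a diagonal Gaussian to the client's batch features.
    mu = features.mean(dim=0)
    logvar = features.var(dim=0, unbiased=False).clamp_min(1e-6).log()

    # KL( N(mu, var) || N(global_mu, global_var) ), summed over feature dims.
    kl = 0.5 * (
        (global_logvar - logvar)
        + (logvar.exp() + (mu - global_mu) ** 2) / global_logvar.exp()
        - 1.0
    ).sum()

    return task_loss + lam * kl
```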
With the rapid advancement of giant models, the paradigm of pre-training a model and then fine-tuning it for specific downstream tasks has become increasingly popular. To address the data-scarcity challenges faced by adapter-based fine-tuning, as well as the scalability and inflexibility of existing federated fine-tuning solutions, we introduce Tomtit.
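As a rough illustration of what federated adapter-based fine-tuning involves (a generic sketch, not Tomtit's actual design, whose details are not given here), each client can freeze the pre-trained backbone, train only small adapter modules, and upload just those adapter parameters for server-side averaging; the class and function names below are hypothetical.

```python
import torch
import torch.nn as nn

class Adapter(nn.Module):
    """A small bottleneck adapter inserted after a frozen backbone layer."""
    def __init__(self, dim, bottleneck=16):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.up = nn.Linear(bottleneck, dim)

    def forward(self, x):
        # Residual adapter: only self.down / self.up are trained.
        return x + self.up(torch.relu(self.down(x)))

def trainable_adapter_params(model):
    """Freeze everything except adapter weights; return only those
    parameters for local optimization and upload to the server."""
    for name, p in model.named_parameters():
        p.requires_grad = "adapter" in name
    return {n: p for n, p in model.named_parameters() if p.requires_grad}

def server_average(client_states):
    """FedAvg-style aggregation over the adapter parameters only."""
    keys = client_states[0].keys()
    return {k: torch.stack([s[k] for s in client_states]).mean(dim=0)
            for k in keys}
```

Because only the adapter tensors travel between clients and server, per-round communication stays small even when the frozen backbone is a giant model.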
In this paper, we propose SwapPrompt, a novel framework that effectively leverages self-supervised contrastive learning to facilitate test-time prompt adaptation. SwapPrompt employs a dual-prompt paradigm, i.e., an online prompt and a target prompt that is averaged from the online prompt to retain historical information.
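The summary does not give the exact update rule, but "a target prompt averaged from the online prompt" suggests an exponential-moving-average scheme such as the assumed sketch below; the tensor shapes, momentum value, and function name are illustrative, not necessarily SwapPrompt's precise formulation.

```python
import torch

@torch.no_grad()
def update_target_prompt(target_prompt, online_prompt, momentum=0.99):
    """EMA-style update: the target prompt is a running average of the
    online prompt, retaining historical information across test batches.
    (Assumed form; the paper's exact rule may differ.)
    """
    target_prompt.mul_(momentum).add_(online_prompt, alpha=1.0 - momentum)
    return target_prompt

# Example usage: prompts as learnable context-token embeddings [n_ctx, dim].
online_prompt = torch.randn(16, 512, requires_grad=True)
target_prompt = online_prompt.detach().clone()
update_target_prompt(target_prompt, online_prompt.detach())
```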