Provably Robust Adaptation for Language-Empowered Foundation Models
arXiv:2510.08659v1 Announce Type: new

Abstract: Language-empowered foundation models (LeFMs), such as CLIP and GraphCLIP, have transformed multimodal learning by aligning visual (or graph) features with textual representations, enabling powerful downstream capabilities like few-shot learning. However, the reliance on small, task-specific…
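The feature–text alignment the abstract describes can be illustrated with a minimal numpy sketch of CLIP-style classification: image and class-prompt embeddings are L2-normalized, cosine similarities are computed, and a temperature-scaled softmax turns them into class probabilities. All embedding values and prompt names here are hypothetical toy stand-ins for real encoder outputs, not part of the paper.

```python
import numpy as np

def normalize(x):
    # Project embeddings onto the unit sphere (L2 normalization).
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

def clip_style_probs(image_emb, text_embs, temperature=0.07):
    # Cosine similarity between one image embedding and each class-prompt
    # embedding, scaled by a temperature and softmaxed into probabilities.
    sims = normalize(text_embs) @ normalize(image_emb)
    logits = sims / temperature
    exp = np.exp(logits - logits.max())  # subtract max for numerical stability
    return exp / exp.sum()

# Toy 4-d embeddings standing in for real encoder outputs (hypothetical).
image_emb = np.array([0.9, 0.1, 0.0, 0.1])
text_embs = np.array([
    [1.0, 0.0, 0.0, 0.0],  # e.g. prompt "a photo of a dog"
    [0.0, 1.0, 0.0, 0.0],  # e.g. prompt "a photo of a cat"
])
probs = clip_style_probs(image_emb, text_embs)
print(probs.argmax())  # image embedding aligns most with the first prompt
```

In the few-shot setting the abstract mentions, the same similarity scoring is typically applied after adapting the prompts or a lightweight head on a handful of labeled examples.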
