Exploring Visual Prompts for Adapting Large-Scale Models

Hyojin Bahng
Ali Jahanian*
Swami Sankaranarayanan*
Phillip Isola
MIT CSAIL
[Paper]
[Code]


Abstract


We investigate the efficacy of visual prompting for adapting large-scale models in vision. Following recent approaches in prompt tuning and adversarial reprogramming, we learn a single image perturbation such that a frozen model prompted with this perturbation performs a new task. Through comprehensive experiments, we demonstrate that visual prompting is particularly effective for CLIP and robust to distribution shift, achieving performance competitive with standard linear probes. We further analyze how properties of the downstream dataset, prompt design, and output transformation affect adaptation performance. The surprising effectiveness of visual prompting provides a new perspective on adapting pre-trained models in vision.



Method


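As summarized in the abstract, visual prompting adapts a frozen pre-trained model by learning a single perturbation in pixel space: the same learned pattern is added to every input image, and only this perturbation is optimized for the downstream task. The sketch below illustrates the idea with a frozen CLIP model in PyTorch; the prompt design (a full-image additive perturbation), the class names, and the hyperparameters are illustrative assumptions, not the paper's exact configuration.

import torch
import torch.nn.functional as F
import clip  # OpenAI CLIP: pip install git+https://github.com/openai/CLIP.git

device = "cuda" if torch.cuda.is_available() else "cpu"

# Load a pre-trained CLIP model and freeze it (fp32 for simplicity).
model, preprocess = clip.load("ViT-B/32", device=device)
model = model.float().eval()
for p in model.parameters():
    p.requires_grad_(False)

# A single learnable perturbation shared by all images; 224x224 matches
# the ViT-B/32 input resolution. This additive full-image prompt is one
# of several possible designs (the paper also studies prompt design).
prompt = torch.zeros(1, 3, 224, 224, device=device, requires_grad=True)

# Text features for the downstream classes (hypothetical class names).
class_names = ["cat", "dog"]
text = clip.tokenize([f"a photo of a {c}" for c in class_names]).to(device)
with torch.no_grad():
    text_features = model.encode_text(text)
    text_features = text_features / text_features.norm(dim=-1, keepdim=True)

optimizer = torch.optim.SGD([prompt], lr=40.0)  # illustrative hyperparameters

def train_step(images, labels):
    # `images` are assumed to already be preprocessed with `preprocess`.
    prompted = images + prompt  # prompt every image with the same perturbation
    image_features = model.encode_image(prompted)
    image_features = image_features / image_features.norm(dim=-1, keepdim=True)
    logits = 100.0 * image_features @ text_features.t()  # ~CLIP's logit scale
    loss = F.cross_entropy(logits, labels)
    optimizer.zero_grad()
    loss.backward()  # gradients reach only the prompt; CLIP stays frozen
    optimizer.step()
    return loss.item()

At test time the same learned prompt is added to every input, so adapting to a new task requires storing only this single perturbation rather than a fine-tuned copy of the model.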

Acknowledgements

We would like to thank Lucy Chai, Caroline Chan, Joanna Materzynska, Xavier Puig Fernandez, Minyoung Huh, Tongzhou Wang, and Yen-Chen Lin for proofreading the paper. We thank Judy Hoffman for helpful discussion and advice. This work was partially supported by funding from MIT STL and an MIT RSC award from the NEC fund.