Nikolaos Tsagkas1,2, Andreas Sochopoulos1, Duolikun Danier1, Chris Xiaoxuan Lu3, Oisin Mac Aodha1
1University of Edinburgh, 2Edinburgh Centre for Robotics, 3UCL
Code will be released in early June. Until then, please read the paper and browse our project page.
The integration of pre-trained visual representations (PVRs) into visuo-motor robot learning has emerged as a promising alternative to training visual encoders from scratch. However, PVRs face critical challenges in the context of policy learning, including temporal entanglement and an inability to generalise even in the presence of minor scene perturbations. These limitations hinder performance in tasks requiring temporal awareness and robustness to scene changes. This work identifies these shortcomings and proposes solutions to address them. First, we augment PVR features with temporal perception and a sense of task completion, effectively disentangling them in time. Second, we introduce a module that learns to selectively attend to task-relevant local features, enhancing robustness when evaluated on out-of-distribution scenes. Our experiments demonstrate significant performance improvements, particularly in PVRs trained with masking objectives, and validate the effectiveness of our enhancements in addressing PVR-specific limitations.
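As a rough illustration only (not the authors' released implementation), the sketch below shows one plausible way to realise the two ideas described above: (i) augmenting frozen PVR features with a learned temporal-progress embedding, and (ii) attending over local patch features with a learned task query. All module names, dimensions, and design choices here are assumptions for exposition.

```python
# Minimal sketch (assumptions, not the official implementation):
# (i) augment frozen PVR features with a temporal-progress embedding,
# (ii) selectively attend over local (patch) features with a learned task query.
import torch
import torch.nn as nn

class TemporalPVRPolicy(nn.Module):
    def __init__(self, feat_dim=768, action_dim=7, max_steps=200):
        super().__init__()
        # (i) learned embedding indexed by the step within the episode,
        #     giving the policy a sense of time / task progress
        self.progress_emb = nn.Embedding(max_steps, feat_dim)
        # (ii) single-query cross-attention over local patch features
        self.task_query = nn.Parameter(torch.randn(1, 1, feat_dim))
        self.attn = nn.MultiheadAttention(feat_dim, num_heads=8, batch_first=True)
        self.policy_head = nn.Sequential(
            nn.Linear(2 * feat_dim, 256), nn.ReLU(), nn.Linear(256, action_dim)
        )

    def forward(self, patch_feats, timestep):
        # patch_feats: (B, N, D) frozen PVR patch tokens for the current frame
        # timestep:    (B,) integer step index within the episode
        B = patch_feats.shape[0]
        query = self.task_query.expand(B, -1, -1)               # (B, 1, D)
        local, _ = self.attn(query, patch_feats, patch_feats)   # task-relevant local feature
        temporal = self.progress_emb(timestep)                  # (B, D) progress signal
        fused = torch.cat([local.squeeze(1), temporal], dim=-1)
        return self.policy_head(fused)                          # predicted action
```

In this sketch the PVR encoder stays frozen; only the attention module, progress embedding, and policy head would be trained with the downstream imitation objective.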
Consider giving us a ⭐ to receive a notification when the code becomes available in June. If you found the paper useful for your research, please consider citing it. Finally, consider citing the following works that made ours possible: For Pre-Trained Vision Models in Motor Control, Not All Policy Learning Methods are Created Equal; R3M: A Universal Visual Representation for Robot Manipulation; and The Unsurprising Effectiveness of Pre-Trained Vision Models for Control.
@article{tsagkas2025pretrainedvisualrepresentationsfall,
title={When Pre-trained Visual Representations Fall Short: Limitations in Visuo-Motor Robot Learning},
author={Nikolaos Tsagkas and Andreas Sochopoulos and Duolikun Danier and Chris Xiaoxuan Lu and Oisin Mac Aodha},
journal={arXiv preprint arXiv:2502.03270},
year={2025},
}