This repository contains the official implementation of the paper "EvRepSL: Event-Stream Representation via Self-Supervised Learning for Event-Based Vision" (IEEE TIP; also available on arXiv). EvRepSL introduces a self-supervised approach for generating event-stream representations, which significantly improves performance on event-based vision tasks.
EvRepSL uses a two-stage framework for self-supervised learning on event streams. The representation generator, RepGen, learns high-quality representations without requiring labeled data, making it versatile for downstream tasks such as classification and object detection in event-based vision. This repository includes the implementation of the core event representation methods, EvRep and EvRepSL, along with trained model weights for RepGen.
- event_representations.py: Contains the implementation of the proposed event representation methods, EvRep and EvRepSL, along with some common representations such as voxel grid, two-channel, four-channel, and TORE.
- models.py: Defines the architecture for RepGen, the representation generator trained using self-supervised learning.
- RepGen.pth: Pretrained weights for RepGen that can be directly used for high-quality feature generation. You can download it from Google Drive.
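To illustrate what an event representation looks like, here is a minimal sketch of the classic two-channel representation (one of the baselines shipped in event_representations.py): per-pixel counts of positive and negative events. This is an illustrative stand-alone version; the repository's actual implementation may differ in tensor layout, dtype, and normalization.

```python
import numpy as np

def two_channel_representation(events, height, width):
    """Accumulate events into a 2-channel count image.

    events : (N, 4) array of (x, y, t, p) with polarity p in {+1, -1}.
    Channel 0 counts positive-polarity events, channel 1 negative.
    Sketch only -- see event_representations.py for the repo's version.
    """
    rep = np.zeros((2, height, width), dtype=np.float32)
    x = events[:, 0].astype(int)
    y = events[:, 1].astype(int)
    p = events[:, 3]
    # np.add.at performs unbuffered accumulation, so repeated
    # events at the same pixel are all counted.
    np.add.at(rep[0], (y[p > 0], x[p > 0]), 1.0)
    np.add.at(rep[1], (y[p < 0], x[p < 0]), 1.0)
    return rep

# Three synthetic events on a 4x4 sensor: two positive at (x=1, y=2),
# one negative at (x=3, y=0).
events = np.array([[1, 2, 0.0, +1],
                   [1, 2, 0.1, +1],
                   [3, 0, 0.2, -1]])
rep = two_channel_representation(events, height=4, width=4)
```

EvRep and EvRepSL follow the same events-in, tensor-out pattern but produce richer multi-channel statistics than raw counts.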
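Typical inference with the pretrained generator: build the model from models.py, load RepGen.pth, and pass an EvRep tensor through it. The sketch below uses a hypothetical stand-in module, since the real constructor arguments, channel counts, and state-dict layout are defined in models.py.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for the real RepGen defined in models.py;
# the actual architecture and channel counts come from the repository.
class RepGenStub(nn.Module):
    def __init__(self, in_channels=4, out_channels=4):
        super().__init__()
        # 3x3 conv with padding=1 preserves spatial dimensions.
        self.net = nn.Conv2d(in_channels, out_channels, 3, padding=1)

    def forward(self, x):
        return self.net(x)

model = RepGenStub()
# With the real model from models.py you would load the pretrained
# weights first, e.g.:
# model.load_state_dict(torch.load("RepGen.pth", map_location="cpu"))
model.eval()

# Placeholder EvRep input: (batch, channels, height, width).
ev_rep = torch.randn(1, 4, 64, 64)
with torch.no_grad():
    ev_rep_sl = model(ev_rep)  # EvRepSL-style output, same spatial size
```

The output tensor can then be fed to any downstream classifier or detector in place of a hand-crafted representation.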
Make sure you have the following dependencies installed:

```bash
pip3 install torch numpy
```
```bash
python3 event_representations.py
```
```bibtex
@article{qu2024evrepsl,
  title={EvRepSL: Event-Stream Representation via Self-Supervised Learning for Event-Based Vision},
  author={Qu, Qiang and Chen, Xiaoming and Chung, Yuk Ying and Shen, Yiran},
  journal={IEEE Transactions on Image Processing},
  year={2024},
  publisher={IEEE}
}
```