A unified and simple codebase for weakly-supervised temporal action localization, currently containing implementations of ASL (CVPR 2021), AICL (AAAI 2023), and CASE (ICCV 2023).
- Download the THUMOS14 features from `dataset.zip` and place them inside the `./data` folder.
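The setup step above can be sketched as follows. The archive name `dataset.zip` and the `./data` path come from this README; whether the features must sit directly under `./data` or in a subfolder is an assumption:

```shell
# Create the data folder expected by the training scripts
mkdir -p ./data

# Extract the THUMOS14 features if the archive has been downloaded
# (skipped silently otherwise)
if [ -f dataset.zip ]; then
  unzip -q dataset.zip -d ./data
fi
```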
- Train the CASE model by running:

  ```bash
  python main_case.py --exp_name CASE
  ```

- Train the ASL model by running:

  ```bash
  python main_asl.py --exp_name ASL
  ```

- Train the AICL model by running:

  ```bash
  python main_aicl.py --exp_name AICL
  ```
- The pre-trained models will be saved in the `./outputs` folder. You can evaluate a model by running the corresponding command below:

  ```bash
  python main_case.py --exp_name CASE --inference_only
  python main_asl.py --exp_name ASL --inference_only
  python main_aicl.py --exp_name AICL --inference_only
  ```

- We provide our pre-trained checkpoints in `checkpoints.zip`.
TODO:
- Code for ActivityNet
- Code for more methods, e.g., C3BN and BAS-Net