Please visit our video tutorial for tracking on YouTube.
Before tracking, you need to change the parameters in Tracking/AlphaTracker/setting.py (blue block in Figure 2). The meaning of the parameters can be found in the comments.
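As a quick orientation, the parameters in setting.py are plain Python assignments. The following is a minimal, hypothetical sketch of what the tracking block might look like; apart from exp_name (which appears later in this document), the names and values are illustrative, so rely on the comments in your own setting.py as the reference:

# A hypothetical sketch of the tracking block in Tracking/AlphaTracker/setting.py.
# Apart from exp_name, every name below is illustrative; check your setting.py.
gpu_id = 0                            # GPU to run inference on (illustrative)
exp_name = 'demo'                     # experiment whose trained weights are loaded
video_full_path = './data/demo.mp4'   # video to track (illustrative path)
result_folder = './track_result'      # where tracking results are written (illustrative)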
By default, we use provided trained weights to track a demo video.
Change directory to the alphatracker folder and run the following commands to do tracking:
# if your current virtual environment is not alphatracker
# run this command first: conda activate alphatracker
python track.py
- Remember not to include any spaces or parentheses in your file names. Also, file names are case-sensitive.
- For training, the parameter num_mouse must include the same number of items as the number of json files that contain annotated data. For example, if you have one json file with annotated data for 3 animals, then num_mouse=[3]; if you have two json files with annotated data for 3 animals each, then num_mouse=[3,3].
- sppe_lr is the learning rate for the SPPE network. If your network is not performing well, you can lower this number and try retraining.
- sppe_epoch is the number of training epochs that the SPPE network runs. More epochs take longer but can potentially lead to better performance. A sketch of how these parameters might appear in setting.py follows this list.
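Putting the parameters above together, the training block of setting.py might look like the sketch below. num_mouse, sppe_lr, and sppe_epoch come from this document; the other names and all values are illustrative placeholders, so defer to the comments in your own setting.py:

# A hypothetical sketch of the training block in Tracking/AlphaTracker/setting.py.
# num_mouse, sppe_lr and sppe_epoch come from this document; other names and
# all values are illustrative.
image_root_list = ['./data/my_frames']       # folder(s) of annotated RGB frames
json_file_list = ['./data/my_labels.json']   # one annotation json per image folder
num_mouse = [3]              # one entry per json file: animals annotated in that file
exp_name = 'my_experiment'   # name under which the trained weights are saved
sppe_lr = 1e-4               # SPPE learning rate; lower it if results are poor
sppe_epoch = 10              # SPPE training epochs; more can improve performance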
We have provided pretrained models. However, if you want to train your own models on your custom dataset, you can refer to the following steps.
Please visit our video tutorial for training on YouTube.
Labeled data is required to train the model. The code reads RGB images and json files of annotations to train the model. Our code is compatible with data annotated by the open-source tool Sloth. Figure 1 shows an example annotation json file. In this example, there are only two images; each image has two mice, and each mouse has two keypoints annotated.
Note that point order matters. You must annotate all body parts in the same order for all frames. For example, all the first points represent the nose, all the second points represent the tail, and so on. If a keypoint is not visible in a frame, set the x and y of that keypoint to -1.
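To make the expected layout concrete, the short Python sketch below writes a two-image, Sloth-style annotation file for one mouse with two keypoints per frame. The exact class labels and field names here are an assumption, so match your files against Figure 1 rather than this sketch:

# Hypothetical sketch of a Sloth-style annotation file (compare against Figure 1).
# Keypoint order is fixed across frames: here point 1 = nose, point 2 = tail.
import json

annotations = [
    {
        "class": "image",
        "filename": "frame_000001.png",
        "annotations": [
            {"class": "point", "x": 120.0, "y": 85.0},   # nose of mouse 1
            {"class": "point", "x": 160.0, "y": 140.0},  # tail of mouse 1
        ],
    },
    {
        "class": "image",
        "filename": "frame_000002.png",
        "annotations": [
            {"class": "point", "x": 125.0, "y": 90.0},   # nose of mouse 1
            {"class": "point", "x": -1, "y": -1},        # tail not visible: x, y set to -1
        ],
    },
]

with open("my_labels.json", "w") as f:
    json.dump(annotations, f, indent=2)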
Before training, you need to change the parameters in Tracking/AlphaTracker/setting.py (red block in Figure 2). The meaning of the parameters can be found in the comments.
Change directory to the alphatracker folder and run the following commands to train the model:
# if your current virtual environment is not alphatracker
# run this command first: conda activate alphatracker
python train.py
If you want to test AlphaTracker's training without annotating your own data, we provide 600 annotated frames of two unmarked mice interacting in a home cage:
https://drive.google.com/file/d/1TYIXYYIkDDQQ6KRPqforrup_rtS0YetR/view?usp=sharing
There is a demo video in Tracking/AlphaTracker/data that you can use for tracking. If you want to use the trained network we provide to track this video, set exp_name=demo in Tracking/AlphaTracker/setting.py.
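Since setting.py is a Python file, that setting would presumably be a string assignment, along the lines of:

exp_name = 'demo'   # use the provided trained weights for the demo video (string value is an assumption)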