Releases: spoonsso/dannce
dannce v1.2.0
This is a major update introducing new features and codebase improvements.
Contributors: Diego Aldarondo, Timothy Dunn
New Features
- Support for setups that use mirrors to provide multiple views with just a single camera.
- Multi-animal COM detection
- Support for training over very large motion capture datasets via `.npy` volume caching
- Subsets of training frames can be sampled by setting `num_train_per_exp`
- Support for video file chunks of variable length
- Multi-GPU training, set via the `multi_gpu_train` config parameter (see the config sketch after this list)
- Slurm scripts for automation, parallelization, and hyperparameter grid searches
- Train/validation splits can now be set reproducibly using the `data_split_seed` config parameter
- Ability to ignore specific landmarks during training via the `drop_landmark` config parameter
- Support for validation over a dedicated recording via the `valid_exp` config parameter
- Random view sampling augmentation (on by default)
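Several of the new features above are controlled by config parameters. The following is a minimal, hypothetical excerpt of a base config illustrating them; the parameter names come from this release, but the example values and their types are assumptions, so check `dannce-train --help` for the authoritative list.

```yaml
# Hypothetical excerpt of a dannce base config (.yaml) showing the new
# v1.2.0 parameters. Values are illustrative placeholders only.
multi_gpu_train: True     # train across multiple GPUs
num_train_per_exp: 500    # sample a subset of training frames per experiment
data_split_seed: 42       # make the train/validation split reproducible
drop_landmark: [3, 7]     # ignore these landmarks during training (assumed list form)
valid_exp: [2]            # dedicate this recording to validation (assumed list form)
```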
Codebase Improvements
- Additional documentation and typing hints
- Faster loading of training frames
- Improvements to metadata saving
- Refactoring of `interface.py`, `inference.py`, and `generator.py`
- Reduced the repository download size by removing large files from the git history.
dannce v1.1.0
This release fixes bugs with metadata saving and implements a new mirror configuration mode, which instructs dannce to expect a single video with multiple views (with the aid of mirrors in the behavioral arena).
Mirror mode is selected by setting `mirror` to `True` in a config .yaml file, or with the CLI. When using mirror mode, the `params` camera parameters variable inside `*dannce.mat` must indicate which views must be y-flipped, if any, using the `m` field.
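As a minimal sketch (assuming the parameter is set in the base config, like other dannce options), enabling mirror mode looks like this; note that the per-view y-flip itself is not set in the .yaml, but in the `m` field of the `params` variable stored in the project's `*dannce.mat` file.

```yaml
# Hypothetical config excerpt: one physical camera providing multiple
# mirrored views. Which views are y-flipped is read from the `m` field
# of the `params` camera parameters inside *dannce.mat, not from here.
mirror: True
```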
dannce v1.0.0
This is a major update changing the way the networks are configured, including a major change to the expected organization of required files:
- `sync`, `calibration`, and `labeling` folder content is now combined into a single master `*dannce.mat` file that lives in a particular project directory.
- COM predictions are now placed into this master `*dannce.mat` file after running COM prediction. The older types of COM prediction files are still generated in the `com_predict_dir` for legacy support, although they will likely be removed in a future release.
- The `config.yaml` file structure has been reorganized to depend on a relatively static base config (see `/configs/`) containing parameters used across many recordings. This base config then points to a project-specific `io.yaml` file that coordinates paths to required files and can contain any other project-specific config params you want to set.
- `exp.yaml` files are no longer supported. Instead, use the `exp` block inside `io.yaml` (see the sketch after this list).
- `train_DANNCE.py`, `predict_DANNCE.py`, `train_COMfinder.py`, and `predict_COMfinder.py` have been replaced by entry points that are created when running `setup.py`: `dannce-train`, `dannce-predict`, `com-train`, `com-predict`.
- These entry points can now be passed command line arguments to override or add to the parameters in the `config.yaml` files. To see a list of command line parameters, pass `--help` to any of the above entry points, e.g. `dannce-train --help` at the command line.
- All of the underlying train/predict code can now be called as functions.
- Many parameters (e.g. `extension`) are now inferred automatically.
- Many parameters now have default values and thus are no longer included in the config files. These defaults can be changed by adding the parameters to the config files or by using the command line interface. Use `--help` to see a full list of parameters and their default values.
- Some parameters have new names or have been combined. See `--help` or the DANNCE Wiki for more info. DANNCE will let you know if you are using an invalid parameter.
- Added basic 2D (COM) and 3D (DANNCE) image augmentations, which are off by default.
- Added support for true monochrome networks.
- Other general refactoring.
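As a sketch of the new layout, a project-specific `io.yaml` might look like the example below. The `exp` block and `com_predict_dir` are named in this release; the other field names (`label3d_file` and the remaining `*_dir` paths) are illustrative assumptions, so confirm exact names with `--help` or the DANNCE Wiki.

```yaml
# Hypothetical project-specific io.yaml pointed to by the base config.
# Field names other than `exp` and `com_predict_dir` are assumptions.
com_train_dir: ./COM/train_results/
com_predict_dir: ./COM/predict_results/
dannce_train_dir: ./DANNCE/train_results/
dannce_predict_dir: ./DANNCE/predict_results/

# The exp block replaces the old exp.yaml files: one entry per recording,
# each pointing to that recording's master *dannce.mat file.
exp:
  - label3d_file: ./experiment1/label3d_dannce.mat
  - label3d_file: ./experiment2/label3d_dannce.mat
```

With this in place, the relatively static base config points to the `io.yaml`, and training is launched through the entry points (e.g. `dannce-train`), with remaining parameters overridden on the command line or listed via `--help`.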
Thanks @diegoaldarondo for co-authoring this release and creating the CLI.
dannce v0.2.0
- dannce prediction now accelerated significantly via PyTorch 3D volume construction
- Bug fixes
- Refactoring
A big thanks to Dr. Kyle Severson for authoring the new torch code.
dannce v0.1.0
This is the first working version of dannce, without volume generation optimization on the GPU. Major improvements and structural changes follow in the next release.