Track the anterior limb of the internal capsule using ensemble tractography (i.e. via a parameter sweep)
This application produces a streamline-based model of the anterior limb of the internal capsule. It further divides its output into superior (canonical) and inferior (non-canonical) components.
The resultant streamline-based model of the anterior limb of the internal capsule can be used to assess the morphology, trajectory, spatial occupancy, and connectivity of this structure.
There are three primary steps in this methodology:
- Identification of the anterior limb white matter volume. This is achieved by identifying anatomical landmarks within the subject's brain (using the FreeSurfer Desikan-Killiany parcellation). This is performed by the app-track-between-multiple-regions/produce_aLIC_ROIs.py script.
- Performance of targeted, ensemble tractography. This algorithm iterates across parameter settings to create a broadly sampled tractogram. The current implementation is essentially a copy of an existing app/resource (currently entitled "RACE-Track") developed and maintained by Brent McPherson. This is performed by the app-track-between-multiple-regions/mrtrix3_tracking.sh script.
- Segmentation of the resultant tractogram, to produce a curated model of the anterior limb of the internal capsule. Although MRtrix3 and RACE-Track produce quality tractography models, further curation is needed to ensure adherence to constraints of biological plausibility and contemporary understanding of the structure's morphology. This is achieved via a White Matter Query Language (WMQL)-like method that has been used in previous publications and has been comprehensively described in the White Matter Segmentation Education (WIMSE) resource (website here). It is performed by the segViaDocker/seg_aLIC_connections.py script.
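At its core, the ROI-identification step above amounts to selecting voxels whose parcellation label falls within a set of target Desikan-Killiany label IDs. A minimal sketch of that idea (the label values and toy array below are illustrative, not the labels produce_aLIC_ROIs.py actually uses):

```python
import numpy as np

def binary_roi_mask(parcellation, label_ids):
    """Boolean mask of voxels whose parcellation label is in label_ids.

    A sketch of the ROI-extraction idea; the label IDs passed in
    real usage would come from the Desikan-Killiany lookup table.
    """
    return np.isin(parcellation, label_ids)

# Toy 2x2 "parcellation": pretend labels 10 and 12 belong to our ROI
parc = np.array([[10, 11], [12, 0]])
mask = binary_roi_mask(parc, [10, 12])
```

In practice the parcellation array would be loaded from the FreeSurfer output (e.g. with nibabel), and the resulting mask saved back out as a NIfTI ROI.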
This application requires the following input files/datatypes:
- A preprocessed DWI image (config.json key: "diff")
- Associated bvec and bval files (config.json keys: "bvec" and "bval")
- A preprocessed T1 anatomical image (config.json key: "anat")
- A freesurfer output directory (config.json key for "output" dir: "freesurfer")
NOTE: All of these should be in the same "reference space" and aligned to one another
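A quick way to sanity-check the "same reference space" requirement is to compare image shapes and affines (as exposed, for example, by nibabel's img.shape and img.affine). A rough sketch of such a check, using toy affines rather than real images:

```python
import numpy as np

def same_reference_space(affine_a, shape_a, affine_b, shape_b, tol=1e-4):
    """True when two images share a voxel grid and (numerically)
    identical affines -- a rough alignment check, not a substitute
    for proper registration."""
    return tuple(shape_a) == tuple(shape_b) and np.allclose(affine_a, affine_b, atol=tol)

identity = np.eye(4)
shifted = identity.copy()
shifted[0, 3] = 2.0  # translated 2 mm along x

aligned = same_reference_space(identity, (96, 96, 60), identity, (96, 96, 60))
misaligned = same_reference_space(identity, (96, 96, 60), shifted, (96, 96, 60))
```

If this check fails for your data, the inputs likely need to be resampled/registered to a common space before running the app.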
This application has the following options for input parameters (listed names = config.json key):
- tensor_fit (type: number): If multi-shell data is passed, this selects the bval shell that will be extracted for application of a tensor fit. Otherwise, if single-shell data is passed, this is ignored.
- norm (type: boolean): Perform log-domain normalization of CSD data before tracking (multi-shell data only).
- min_length (type: number): The minimum length a streamline may be (in mm).
- max_length (type: number): The maximum length a streamline may be (in mm).
- ens_lmax (type: boolean): Whether to perform ensemble tracking on every lmax up to the maximum value passed.
- curvs (type: multiple numbers): The maximum curvature angle a streamline can take during tracking. Passing multiple values results in iteration across these parameters.
- num_fibers (type: int): The number of streamlines to produce per parameter combination (thus the total number returned will be some substantial multiple of this).
- do_dtdt (type: boolean): Whether to perform tensor-based deterministic tractography.
- do_dtpb (type: boolean): Whether to perform tensor-based probabilistic tractography.
- do_detr (type: boolean): Whether to perform deterministic tractography.
- do_prb1 (type: boolean): Whether to perform mrtrix2 probabilistic tractography.
- do_prb2 (type: boolean): Whether to perform mrtrix3 probabilistic tractography.
- do_fact (type: boolean): Whether to perform FACT tracking.
- fact_dirs (type: int): The number of directions to perform FACT tracking on (if requested).
- fact_fibs (type: number): The number of FACT fibers to track per lmax (if requested).
- premask (type: boolean): If the input anatomical T1s have already been skull stripped, check this to prevent 5ttgen from cutting off a portion of the brain. (This sets the -premasked option for 5ttgen.) (WARNING: the current implementation does not handle this well; it is recommended to leave this as FALSE.)
- step (type: number): Streamline internode distance (i.e. step size), in mm.
- imaxs (type: int(s)): The lmax value(s), or (alternatively) the maximum lmax value, to fit and create tractography data for. If not provided, the app will find the maximum possible lmax within the data and use that.
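Because ensemble tractography iterates over every combination of these parameters, the total streamline count grows multiplicatively with the size of the sweep. A sketch of that arithmetic, assuming (hypothetically) five curvature values, four lmax values with ens_lmax enabled, and a single enabled algorithm:

```python
from itertools import product

# Hypothetical sweep: each (curvature, lmax, algorithm) combination
# yields num_fibers streamlines.
curvs = [5, 10, 20, 40, 80]
lmaxs = [2, 4, 6, 8]          # e.g. ens_lmax=true with imaxs=8
algorithms = ["prb2"]         # e.g. only do_prb2 enabled
num_fibers = 2000

combos = list(product(curvs, lmaxs, algorithms))
total_streamlines = len(combos) * num_fibers  # 5 * 4 * 1 * 2000
```

This is why the total number of streamlines returned is "some substantial multiple" of num_fibers, and why runtime scales with the breadth of the sweep.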
Check out the brainlife datatypes webpage for a cataloguing of relevant datatypes.
This application outputs a .tck file corresponding to the output of the mrtrix3_tracking.sh script.
This application also outputs a "White Matter Classification" (WMC). It contains two fields: names, which corresponds to the names of the white matter structures identified, and index, which indicates, for each streamline in the associated tck file, that streamline's identity with respect to the names vector. To illustrate its meaning: a 1 at index location 4 indicates that streamline 4 in the associated tractogram is associated with the first structure listed in the names vector. More can be found on this in the earlier "WMC" link. This file is stored as a .mat (MATLAB) file. However, it can also be straightforwardly converted (with the inclusion of its associated tck file) into a set of tck files (e.g. using this app or this code) or a Python dictionary object.
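As an illustration of the names/index convention described above, here is a sketch of grouping streamline indices by structure in Python. The structure names are hypothetical, and treating a value of 0 as "unassigned" is an assumption about this WMC file, not a guarantee:

```python
def streamlines_by_structure(names, index):
    """Group streamline indices by structure.

    index[i] gives the 1-based position (within `names`) of the
    structure that streamline i belongs to; 0 is assumed to mean
    'unassigned'.
    """
    groups = {name: [] for name in names}
    for stream_idx, name_pos in enumerate(index):
        if name_pos > 0:
            groups[names[name_pos - 1]].append(stream_idx)
    return groups

# Hypothetical names and a toy index vector for a 4-streamline tractogram
names = ["left_aLIC_superior", "left_aLIC_inferior"]
index = [1, 0, 2, 1]
groups = streamlines_by_structure(names, index)
```

In real usage the names and index fields would be read from the .mat file (e.g. via scipy.io.loadmat), and each group of indices used to extract the corresponding streamlines from the source tck file.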
NOTE: Runtime for this application will vary in accordance with the number of streamlines requested. As more streamlines are requested, it may be necessary to change the TCKGEN__TIMEOUT parameter in the mrtrix3_tracking.sh script.
- Dan Bullock's work is supported by the following sources:
- The University of Minnesota’s Neuroimaging Postdoctoral Fellowship Program, the College of Science and Engineering, and the Medical School's NIH-funded T32 Neuroimaging Training Grant. NOGA: 1T32EB031512-01
- Sarah Heilbronner's work is supported by the following sources:
- The University of Minnesota’s Neurosciences' and Medical School's NIMH grant for the investigation of "Neural Basis of Psychopathology, Addictions and Sleep Disorders Study Section [NPAS]". NOGA: 5P50MH119569-02-04
- The University of Minnesota’s Neurosciences' Translational Neurophysiology grant. NOGA: 5R01MH118257-04
- The University of Minnesota’s Neurosciences' Addiction Connectome grant. NOGA: 1P30DA048742-01A1
(Provide citations that are directly relevant to this code implementation here)
(Provide citations that are indirectly relevant to this code implementation here)
GNU License
Below is a description of how to use this code repository on the Brainlife platform, with docker/singularity, or simply in your local compute environment.
One characteristic that can apply across all of these use contexts is the config.json file.
The config.json file is a standard component of application functionality on the brainlife.io platform. However, even outside of the brainlife.io context (running this code in your local python environment), the config.json file is a clean and effective way of managing file and parameter inputs for this code. Typically, the "main" python file/script (main.py, in the default case) reads the .json file (using the json module) into a config dictionary object as follows:
    import json

    with open('config.json') as config_json:
        config = json.load(config_json)
Variables and parameters can then be read in from the dictionary using the relevant keys. An example config.json setup might look something like this:
    {
        "inputFile_1": "local/path/to/inputFile_1",
        "inputFile_2": "local/path/to/inputFile_2",
        "inputFile_3": "local/path/to/inputFile_3",
        "parameter_1": parameter_1_value,
        "parameter_2": parameter_2_value,
        "parameter_3": parameter_3_value
    }
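Once loaded, parameters can be pulled from the config dictionary with fallback defaults via dict.get. A small sketch (using an inline JSON string rather than a file, and hypothetical default values):

```python
import json

# Stand-in for the contents of config.json
config_text = """
{
  "min_length": 10,
  "max_length": 250
}
"""
config = json.loads(config_text)

# dict.get supplies a fallback when a key is absent from the file
min_length = config.get("min_length", 10)
step = config.get("step", 0.5)  # "step" is not in the file, so 0.5 is used
```

This pattern keeps the script runnable even when a config.json omits optional parameters.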
Consider reviewing the json standard overview for help formatting this object.
The config.json file can provide a standard interface for controlling execution of the code, whether using brainlife.io, docker/singularity, or a local python environment.
Below you will find an example config.json for this app.
    {
        "tensor_fit": 1,
        "norm": false,
        "min_length": 10,
        "max_length": 250,
        "ens_lmax": true,
        "curvs": "5 10 20 40 80",
        "num_fibers": 2000,
        "do_dtdt": false,
        "do_dtpb": false,
        "do_detr": false,
        "do_prb1": false,
        "do_prb2": true,
        "do_fact": false,
        "fact_dirs": 3,
        "fact_fibs": 0,
        "premask": false,
        "step": 0.5,
        "imaxs": 8,
        "diff": "testdata/dwi.nii.gz",
        "bvec": "testdata/dwi.bvecs",
        "bval": "testdata/dwi.bvals",
        "freesurfer": "testdata/output",
        "anat": "testdata/t1.nii.gz"
    }
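Note that multi-valued parameters such as "curvs" are passed as a single space-separated string; presumably the tracking script splits this into individual numeric values, roughly as follows:

```python
# The "curvs" value as it appears in the example config.json above
curvs_raw = "5 10 20 40 80"

# Split on whitespace and convert each entry to a number
curv_values = [float(v) for v in curvs_raw.split()]
```

The same pattern would apply to any other parameter documented as "multiple numbers".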
Using this app on Brainlife.io
NOTE: for any given app on Brainlife.io, a link to the corresponding github repository (containing the code used to run the app) can be found just below the app name (in gray text) on the Apps "homepage".
This application requires the following input files/datatypes from the Brainlife.io platform:
- A preprocessed DWI image (config.json key: "diff")
- Associated bvec and bval files (config.json keys: "bvec" and "bval")
- A preprocessed T1 anatomical image (config.json key: "anat")
- A freesurfer output directory (config.json key for "output" dir: "freesurfer")
NOTE: All of these should be in the same "reference space" and aligned to one another
(Parameter settings, including default/typical inputs/values, should be documented and provided on the Brainlife app user interface page. However, use this section to also duplicate that information.)
SEE "Necessary inputs and outputs" above.
(notes pertinent/specific to usage via the Execute interface)
(notes pertinent/specific to usage via the Pipeline/Rule interface)
LOCAL usage via docker/singularity
Although execution of python code/apps (under the current developmental framework) is typically controlled by the main.py file, for the purposes of portability and standardization, the execution of this file is achieved via a bash script that sets up some environment variables for local / HPC usage and then runs the python script in the relevant docker environment (using "singularity exec") in (roughly) the following fashion:
    #!/bin/bash
    #PBS -l nodes=1:ppn=16
    #PBS -l walltime=02:00:00

    # run the actual python code
    singularity exec docker://organization/container:version python3 main.py
Local use of docker images/containers using singularity requires local installation of singularity (a non-trivial matter). The Sylabs Singularity documentation page provides an overview of installation processes; however, this may not cover all installation cases. For example, a Debian package has been provided by the NeuroDebian group. Additionally, the Brainlife documentation provides a guide to singularity installation/usage as well.
There are two major singularity properties to keep in mind when considering local singularity configuration: the cache directory and the bind path.
Docker images that are pulled from repositories like Dockerhub are stored and "built" locally, in the singularity cache directory (controlled by the SINGULARITY_CACHEDIR environment variable). Usage of a wide range of docker images can quite easily lead to a significant buildup of files in this directory, which can quickly occupy a great deal of hard drive space. As such, it is important to be mindful of where (e.g. which disk resource) this environment variable points.
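One way to redirect the cache is to set the SINGULARITY_CACHEDIR environment variable before images are pulled; the scratch path below is hypothetical. From Python (e.g. at the top of a launcher script), this might look like:

```python
import os

# Point the singularity cache at a roomy disk before any images
# are pulled (the path here is a hypothetical example).
os.environ["SINGULARITY_CACHEDIR"] = "/scratch/username/singularity_cache"

cache_dir = os.environ["SINGULARITY_CACHEDIR"]
```

Equivalently, the variable can be exported in the bash script that wraps the singularity call.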
The virtual environments instantiated when a docker image is run (via singularity) do not have full access to your local file system. Instead, only a narrow set of paths are made available by default. As such, any functions or files called by the code that ARE NOT on these paths will not be found, resulting in an error. There are two approaches to dealing with this:
- Modify the bind path (e.g. via the SINGULARITY_BIND environment variable or the --bind flag of "singularity exec") so that the directories containing the requisite files are made available inside the container.
- Ensure that all requisite files & functions are accessible on these paths. For requisite files (e.g. input data) in particular, this may mean storing them in a subdirectory of the directory from which the "main" bash script is run (i.e. where the singularity call is executed).
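As a rough pre-flight check, one can test whether a given input file would resolve beneath a directory that will be bound into the container (the paths below are hypothetical examples):

```python
import os

def visible_in_container(path, bound_root):
    """Rough check: does `path` resolve beneath `bound_root`?

    If bound_root is bound into the container (or is the working
    directory, which singularity typically binds by default), a
    path beneath it should be visible inside the container.
    """
    path = os.path.realpath(path)
    bound_root = os.path.realpath(bound_root)
    return os.path.commonpath([path, bound_root]) == bound_root

ok = visible_in_container("/data/project/testdata/dwi.nii.gz", "/data/project")
missing = visible_in_container("/home/user/elsewhere/dwi.nii.gz", "/data/project")
```

Running such a check over every file path in config.json before launching the container can surface "file not found" errors early.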
With the appropriate modules installed in your local OS and python environment, it is also possible to run this code directly. Ensure that the required modules (see below) are installed and that the config.json file is pointing to the correct files, and you should be ready to go.
- MRtrix3
- Dipy
- wmaPyTools (included as submodule)
- numpy
- nibabel
- nilearn (?)
(consider https://docs.python.org/3/library/modulefinder.html run on main)
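Following the modulefinder suggestion above, a small sketch of enumerating the modules a script imports (applied here to a stand-in script rather than main.py itself, so the example is self-contained):

```python
import os
import tempfile
from modulefinder import ModuleFinder

# Write a tiny stand-in script; in practice you would point
# run_script at main.py instead.
with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
    f.write("import json\n")
    script = f.name

finder = ModuleFinder()
finder.run_script(script)          # statically follows the script's imports
imported = sorted(finder.modules)  # module names the script pulls in
os.unlink(script)
```

The resulting list can be compared against the dependency list above to spot missing or undocumented requirements.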
As noted earlier, the output of this application/code (a WMC/classification.mat file) can be converted to a collection of .tck files using this app or this code (with the additional provision of the source .tck file -- "track.tck" in this case).
The following app can be used to obtain a density NIFTI for each of the white matter structures identified:
These apps can be used to provide visualizations for this output:
- Generate tract figures (wma_pyTools)
- Generate figures of white matter tracts overlaid on anatomical image
- WMC Figures (AFQ or WMA) (deprecated)
These apps can be used to provide quantitative analyses:
Work is ongoing to better structure and document this code/application. Please feel free to create issues (using the "Issues" tab above) to help foster clarity in the documentation, or to suggest alterations to the documentation/code more directly. Furthermore, it is acutely understood that many of the functionalities in this package may be redundant implementations of (likely more robust and reliable) functionalities from other packages. The identification of such instances (and their other-package correspondences) would be greatly appreciated. Feel free to create branches which implement these alternatives. Be sure to update the documentation as well, to describe your changes and to ensure that your contributions are appropriately credited.