
Commit

[DOC] Improve t1-linear documentation (#1429)
* Modify T1_Linear doc

* Fix snippet

* Try options in toggle-down

* From sections to list

* Fix new snippet

* tsv option

* Test1

* Test2

* Test3

* Test4

* Change working directory explanation

* Put back space
AliceJoubert authored Feb 7, 2025
1 parent a89fcff commit 0321fd4
Showing 4 changed files with 37 additions and 44 deletions.
27 changes: 11 additions & 16 deletions docs/Pipelines/T1_Linear.md
@@ -14,7 +14,7 @@ This pipeline was designed as a prerequisite for the [`extract](https://clinicad

If you only installed the core of Clinica, this pipeline needs the installation of [ANTs](../Software/Third-party.md#ants) on your computer.

!!! tip "Using ANTsPy instead of ANTs"
    Since Clinica `0.9.0`, you can rely on [ANTsPy](https://antspyx.readthedocs.io/en/latest/index.html)
    instead of [ANTs](../Software/Third-party.md#ants) to run this pipeline, in which case installing ANTs is not
    required. The ANTsPy package is installed with the other Python dependencies of Clinica.
@@ -30,36 +30,31 @@ The pipeline can be run with the following command line:
clinica run t1-linear [OPTIONS] BIDS_DIRECTORY CAPS_DIRECTORY
```

where:

- `BIDS_DIRECTORY` is the input folder containing the dataset in a [BIDS](../BIDS.md) hierarchy.
- `CAPS_DIRECTORY` is the output folder containing the results in a [CAPS](../CAPS/Introduction.md) hierarchy.

with specific options:

- `-ui` / `--uncropped_image`: By default, output images are cropped to a **fixed** matrix size of 169×208×179 with 1 mm isotropic voxels, which reduces the computing power required when later training deep learning models. Use this option if you do not want cropped images.
- `--random_seed`: By default, results are not deterministic. Use this option to obtain a deterministic output; the value you provide is used as the random seed by ANTs. This option requires [ANTs](../Software/Third-party.md#ants) version `2.3.0` or later and is also compatible with [ANTsPy](https://antspyx.readthedocs.io/en/latest/index.html).
- `--use-antspy`: By default, the pipeline runs with [ANTs](../Software/Third-party.md#ants). Use this flag to run with [ANTsPy](https://antspyx.readthedocs.io/en/latest/index.html) instead.
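As an illustrative aid (not part of the documentation's own examples), here is a hedged Python sketch of how these options combine into a command line; the helper name and paths are hypothetical placeholders:

```python
# Sketch: assemble a `clinica run t1-linear` command line with optional flags.
# The function name, paths, and seed value below are hypothetical examples.

def build_t1_linear_command(bids_dir, caps_dir, uncropped=False,
                            random_seed=None, use_antspy=False):
    """Return the argument list for `clinica run t1-linear`."""
    cmd = ["clinica", "run", "t1-linear", bids_dir, caps_dir]
    if uncropped:
        cmd.append("--uncropped_image")  # skip the default 169x208x179 crop
    if random_seed is not None:
        cmd += ["--random_seed", str(random_seed)]  # deterministic output
    if use_antspy:
        cmd.append("--use-antspy")  # run with ANTsPy instead of ANTs
    return cmd

print(build_t1_linear_command("bids/", "caps/", random_seed=42, use_antspy=True))
```

The resulting list could be passed to `subprocess.run` once Clinica is installed.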

??? info "Optional parameters common to all pipelines"
--8<-- "snippets/pipelines_options.md"

!!! tip
Do not hesitate to type `clinica run t1-linear --help` to see the full list of parameters.

## Outputs

Results are stored in the folder `subjects/<participant_id>/<session_id>/t1_linear` following the [CAPS hierarchy](../CAPS/Specifications.md#t1-linear---affine-registration-of-t1w-images-to-the-mni-standard-space) and include the following outputs:

- `<source_file>_space-MNI152NLin2009cSym_res-1x1x1_T1w.nii.gz`: T1w image affinely registered to the [`MNI152NLin2009cSym` template](https://bids-specification.readthedocs.io/en/stable/99-appendices/08-coordinate-systems.html).
- `<source_file>_space-MNI152NLin2009cSym_res-1x1x1_affine.mat`: affine transformation estimated with [ANTs](https://stnava.github.io/ANTs/).
- (optional) `<source_file>_space-MNI152NLin2009cSym_desc-Crop_res-1x1x1_T1w.nii.gz`: T1w image registered to the [`MNI152NLin2009cSym` template](https://bids-specification.readthedocs.io/en/stable/99-appendices/08-coordinate-systems.html) and cropped. By default, this file is present; use the `--uncropped_image` flag to skip computing it.
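To illustrate the layout above, here is a hedged Python sketch that builds a mock CAPS tree and globs for the cropped output; the subject and session labels are placeholders:

```python
# Sketch: locate t1-linear outputs for one subject/session in a CAPS tree.
# The directory layout follows the documented
# `subjects/<participant_id>/<session_id>/t1_linear` convention; the file
# names mirror the outputs listed above, written here as empty stand-ins.
from pathlib import Path
import tempfile

caps = Path(tempfile.mkdtemp())  # stand-in for a real CAPS directory
t1_linear_dir = caps / "subjects" / "sub-CLNC0001" / "ses-M000" / "t1_linear"
t1_linear_dir.mkdir(parents=True)
# Simulate the default (cropped) pipeline outputs:
for name in [
    "sub-CLNC0001_ses-M000_space-MNI152NLin2009cSym_res-1x1x1_T1w.nii.gz",
    "sub-CLNC0001_ses-M000_space-MNI152NLin2009cSym_desc-Crop_res-1x1x1_T1w.nii.gz",
    "sub-CLNC0001_ses-M000_space-MNI152NLin2009cSym_res-1x1x1_affine.mat",
]:
    (t1_linear_dir / name).touch()

# Select only the cropped image via its `desc-Crop` entity:
cropped = sorted(p.name for p in t1_linear_dir.glob("*desc-Crop*_T1w.nii.gz"))
print(cropped)
```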

## Going further

1 change: 1 addition & 0 deletions docs/Software/InteractingWithClinica.md
@@ -128,6 +128,7 @@ For example, an `AD` group ID label could be used when creating a template for a
Any time you would like to use this `AD` template, you will need to provide the group ID identifying the pipeline output obtained from this group.
You might also use `CNvsAD`, for instance, as group ID for a statistical group comparison between patients with Alzheimer's disease (`AD`) and cognitively normal (`CN`) subjects.

## Common options for pipelines
--8<-- "snippets/pipelines_options.md"

--8<-- "snippets/converters_options.md"
2 changes: 1 addition & 1 deletion docs/snippets/converters_options.md
@@ -3,7 +3,7 @@
- `--subjects_list` / `-sl`: path to a text file containing a list of specific subjects to extract. The expected format is one subject per line:

=== "Example with the ADNI dataset"
    ```{ .text .copy }
001_S_0001
002_S_0002
```
51 changes: 24 additions & 27 deletions docs/snippets/pipelines_options.md
@@ -1,43 +1,40 @@
- `-tsv` / `--subjects_sessions_tsv`

This flag lets you restrict the pipeline to the subset of participants listed in a TSV file.
For instance, running the [FreeSurfer pipeline](/Pipelines/T1_FreeSurfer.md) on T1w MRI can be done using:

```shell
clinica run t1-freesurfer BIDS_PATH OUTPUT_PATH -tsv my_subjects.tsv
```

<div class="grid" markdown>

=== "TSV example"
```{ .text .copy }
participant_id session_id
sub-CLNC0001 ses-M000
sub-CLNC0001 ses-M018
sub-CLNC0002 ses-M000
```

!!! warning "Creating the TSV"
    To make the display clearer, the rows here contain successive tabs, which should not happen in an actual TSV file (use a single tab between columns).
</div>
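For scripting, the subjects/sessions file can also be generated programmatically. Here is a hedged Python sketch (the file name and rows are examples) that guarantees single-tab separators, avoiding the pitfall mentioned in the warning above:

```python
# Sketch: write a subjects/sessions TSV with single tab separators.
# The file name and participant/session rows are hypothetical examples.
import csv
import os
import tempfile

rows = [
    ("sub-CLNC0001", "ses-M000"),
    ("sub-CLNC0001", "ses-M018"),
    ("sub-CLNC0002", "ses-M000"),
]
tsv_path = os.path.join(tempfile.mkdtemp(), "my_subjects.tsv")
with open(tsv_path, "w", newline="") as f:
    writer = csv.writer(f, delimiter="\t")  # exactly one tab between columns
    writer.writerow(["participant_id", "session_id"])
    writer.writerows(rows)
```

The resulting file can be passed directly to the `-tsv` option.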

- `-wd` / `--working_directory`

By default, when running a pipeline, a temporary working directory is created. It stores all the intermediate inputs and outputs of the pipeline's steps. If everything goes well, the output directory is eventually created and the working directory is deleted.

With this option, you can specify a working directory of your choice. This is very useful for debugging or if your pipeline crashes: relaunch it with the exact same parameters and execution resumes from the last successfully completed node.
For pipelines that generate many files, such as `dwi-preprocessing` (especially when run on multiple subjects), consider placing the working directory on a drive or partition with enough free space.
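As a hedged illustration of the advice above, here is a small Python sketch that picks, among candidate directories, the one on the partition with the most free space; the candidate list is hypothetical:

```python
# Sketch: choose a working directory on whichever candidate partition has
# the most free space, as suggested for file-heavy pipelines.
# The candidate paths here are illustrative, not Clinica defaults.
import os
import shutil
import tempfile

def pick_working_dir(candidates):
    """Return the existing candidate directory with the most free bytes."""
    existing = [c for c in candidates if os.path.isdir(c)]
    return max(existing, key=lambda c: shutil.disk_usage(c).free)

wd = pick_working_dir([tempfile.gettempdir(), os.getcwd()])
print(wd)
```

The chosen path could then be passed to a pipeline via `-wd`.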

- `-np` / `--n_procs`

This flag allows you to exploit several cores of your machine to run pipelines in parallel, which is very useful when dealing with numerous subjects and multiple sessions.
Thanks to [Nipype](https://nipype.readthedocs.io/en/latest/), even a single-subject pipeline can run in parallel, processing independent sub-parts simultaneously on the available cores.

If you do not specify the `-np` / `--n_procs` flag, Clinica will detect the number of available threads and propose an adequate level of parallelism to the user.
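As an illustration, here is a minimal Python sketch for choosing a sensible `-np` value from the machine's core count; the leave-one-core heuristic is our own assumption, not Clinica's documented behavior:

```python
# Sketch: suggest a value for `-np` from the machine's core count.
# Leaving one core free for the system is an illustrative heuristic only.
import os

def suggest_n_procs():
    cores = os.cpu_count() or 1  # os.cpu_count() may return None
    return max(1, cores - 1)

print(suggest_n_procs())
```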

- `-cn` / `--caps-name`

Use this option to specify the name of the CAPS dataset that will be written to the `dataset_description.json` file at the root of the CAPS folder (see [CAPS Specifications](../CAPS/Specifications.md#the-dataset_descriptionjson-file) for more details). This only takes effect if the CAPS dataset does not exist yet; otherwise, the existing name is kept.
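To illustrate where that name ends up, here is a hedged Python sketch that writes and reads back a minimal `dataset_description.json`; the `Name` field and file content are assumptions for illustration, so check the CAPS specifications for the exact schema:

```python
# Sketch: inspect the name recorded in a CAPS `dataset_description.json`.
# We assume a top-level "Name" field, as in BIDS-style dataset descriptions;
# the real CAPS schema may contain additional required fields.
import json
import tempfile
from pathlib import Path

caps = Path(tempfile.mkdtemp())  # stand-in for a real CAPS directory
(caps / "dataset_description.json").write_text(
    json.dumps({"Name": "my-caps-dataset"})
)

caps_name = json.loads((caps / "dataset_description.json").read_text())["Name"]
print(caps_name)
```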
