Official GitHub repository for the "Auto-Segmentation Clinical Acceptability & Reproducibility Framework" (SCARF), currently archived.
- SCARF is a research, development, and clinical assessment framework for auto-segmentation of organs-at-risk (OARs) in head and neck cancer.
- SCARF facilitates benchmarking and expert assessment of AI-driven auto-segmentation tools, addressing the need for transparency and reproducibility in this domain.
- New models can be benchmarked against 11 pre-trained open-source deep learning models, and their clinical acceptability can be estimated with a regularized logistic regression model.
- The SCARF framework code base is openly available for OAR auto-segmentation benchmarking.
To run inference using the trained models, follow the instructions found here:
git clone https://github.com/bhklab/SCARF.git
cd SCARF
pip install -r inference/requirements.txt
If needed, use med-imagetools to process your raw DICOM images. Preferably use the nnUNet flag to combine ROIs (regions of interest) into one label image; instructions can be found here.
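Conceptually, the merge described above takes several binary ROI masks and produces a single integer label image. A minimal NumPy sketch of that idea (toy arrays stand in for masks loaded from NIfTI/NRRD files; med-imagetools handles this for you):

```python
import numpy as np

# Hypothetical binary masks for two ROIs on a tiny 4x4 grid
# (stand-ins for masks loaded from NIfTI/NRRD files).
parotid_l = np.array([[1, 1, 0, 0],
                      [1, 1, 0, 0],
                      [0, 0, 0, 0],
                      [0, 0, 0, 0]], dtype=np.uint8)
spinal_cord = np.array([[0, 0, 0, 0],
                        [0, 0, 0, 0],
                        [0, 0, 1, 1],
                        [0, 0, 1, 1]], dtype=np.uint8)

# Merge into a single label image: background = 0,
# each ROI gets its own integer label (1, 2, ...).
label = np.zeros_like(parotid_l)
for idx, mask in enumerate([parotid_l, spinal_cord], start=1):
    label[mask > 0] = idx

print(np.unique(label).tolist())  # -> [0, 1, 2]
```

Note that later ROIs overwrite earlier ones where masks overlap, so the order of the list defines the label priority.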
The ROI mapping used for RADCURE can be found at configs/radcure_oar_mapping.yaml.
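The mapping file reconciles the varied structure-set names found in clinical DICOM with canonical OAR labels. The exact contents are in the file above; a mapping of the same general shape might look like this (the OAR and variant names here are illustrative, not the RADCURE list):

```yaml
# Hypothetical example: canonical OAR label -> clinical ROI name variants
Brainstem:
  - BRAINSTEM
  - Brain_Stem
SpinalCord:
  - CORD
  - Spinal_Cord
```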
Organize your dataset according to the structure described in configs/example_config.json. Each entry in the configuration file should include paths to:
NIfTI (.nii.gz) or NRRD (.nrrd) images as inputs.
Corresponding segmentation masks for training.
Required format for the data config (also found in configs/example_config.json):
{
"train": [
{
"image": "data/train/image_001.nii.gz",
"label": "data/train/label_001.nii.gz"
},
...
],
"val": [
{
"image": "data/val/image_001.nii.gz",
"label": "data/val/label_001.nii.gz"
},
...
]
}
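Before launching training, it can help to sanity-check that a config follows this shape. A small sketch (the key names match the example above; the inlined JSON is just a stand-in for reading your own file):

```python
import json

# Minimal validation of the data config shape shown above.
config = json.loads("""
{
  "train": [
    {"image": "data/train/image_001.nii.gz",
     "label": "data/train/label_001.nii.gz"}
  ],
  "val": [
    {"image": "data/val/image_001.nii.gz",
     "label": "data/val/label_001.nii.gz"}
  ]
}
""")

for split in ("train", "val"):
    for entry in config[split]:
        # Every entry needs an image/label pair pointing at a volume file.
        assert {"image", "label"} <= entry.keys()
        assert entry["image"].endswith((".nii.gz", ".nrrd"))

print(len(config["train"]), len(config["val"]))  # samples per split
```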
Edit the train.sh file to set your config_path and data_path.
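Assuming the two values are plain shell assignments (the variable names come from the text above; check train.sh itself for the exact names and how they are passed to the trainer), the edit might look like:

```shell
#!/bin/bash
# Point these at your own configuration file and dataset root
# (paths here are illustrative).
config_path="configs/example_config.json"
data_path="data/"

# Dry-run echo of the intended invocation; the real script launches training.
echo "training with config=${config_path} data=${data_path}"
```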
Make the training script executable and run it:
chmod +x train.sh
./train.sh
The script will:
- Load training and validation data as specified in the configuration file.
- Initialize the chosen model architecture, loss function, and optimizer.
- Start the training loop using PyTorch Lightning.
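Conceptually, the loop Lightning runs for you is equivalent to this plain-PyTorch sketch (toy tensors replace the NIfTI data, and a single conv layer with BCE loss stands in for the architecture and loss chosen in the config):

```python
import torch
from torch import nn

torch.manual_seed(0)

# Toy stand-ins for one batch of images and binary masks.
x = torch.randn(8, 1, 16, 16)
y = torch.randint(0, 2, (8, 1, 16, 16)).float()

# Model, loss, and optimizer, mirroring the initialization step above.
model = nn.Conv2d(1, 1, kernel_size=3, padding=1)
loss_fn = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)

losses = []
for step in range(20):  # the training loop
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
    losses.append(loss.item())

print(f"loss: {losses[0]:.4f} -> {losses[-1]:.4f}")
```

Lightning adds the validation pass, checkpointing, and device handling on top of this core loop.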
Follow the instructions found here to run inference. Load your saved weights and pass CUSTOM as the model argument to the run_inference function.
You can also evaluate the model with our selected metrics using the calc_metric function, as detailed in the notebook.
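For context, the Dice similarity coefficient is the kind of overlap metric typically reported for segmentation. The snippet below is only an illustration of the concept, not the repo's calc_metric implementation (see the notebook for the actual metric set):

```python
import numpy as np

def dice(pred: np.ndarray, gt: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    intersection = np.logical_and(pred, gt).sum()
    total = pred.sum() + gt.sum()
    return 2.0 * intersection / total if total else 1.0

pred = np.array([[1, 1, 0],
                 [0, 1, 0],
                 [0, 0, 0]])
gt = np.array([[1, 1, 0],
               [0, 0, 0],
               [0, 0, 0]])

print(dice(pred, gt))  # 2*2 / (3+2) = 0.8
```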
If you find our work or any of our materials useful, please cite our paper:
...