This project is the official implementation of our paper:
Bridging the Gap: Protocol Towards Fair and Consistent Affect Analysis (IEEE FG 2024)
Guanyu Hu, Eleni Papadopoulou, Dimitrios Kollias, Paraskevi Tzouveli, Jie Wei and Xinyu Yang
The increasing integration of machine learning algorithms in daily life underscores the critical need for fairness and equity in their deployment. As these technologies play a pivotal role in decision-making, addressing biases across diverse subpopulation groups, including age, gender, and race, becomes paramount. Automatic affect analysis, at the intersection of physiology, psychology, and machine learning, has seen significant development. However, existing databases and methodologies lack uniformity, leading to biased evaluations. This work addresses these issues by analyzing six affective databases, annotating demographic attributes, and proposing a common protocol for database partitioning. Emphasis is placed on fairness in evaluations. Extensive experiments with baseline and state-of-the-art methods demonstrate the impact of these changes, revealing the inadequacy of prior assessments. The findings underscore the importance of considering demographic attributes in affect analysis research and provide a foundation for more equitable methodologies.
If you are an academic (i.e., a person with a permanent position at a university, e.g. a professor, but not a Post-Doc or a PhD/PG/UG student), to request the developments of our work (i.e., annotations, partitions, and pre-trained models-checkpoints), please:
i) send an email to [email protected] with subject: Fair and Consistent Affect Analysis request by academic;
ii) use your official academic email (as data cannot be released to personal emails);
iii) include in the email the AGREEMENT FORM, the reason why you require access to these developments, and your official academic website.
If you are from industry and you want to acquire the developments of our work (i.e., annotations, partitions, and pre-trained models-checkpoints), please send an email from your official industrial email to [email protected] with subject: Fair and Consistent Affect Analysis request from industry; explain why you require access to these developments and specify whether it is for research or commercial purposes.
You can download the AffectNet dataset from http://mohammadmahoor.com/affectnet/
EXPR-7/8
- New Partition and Annotations (5 columns):
- AffectNet-7 and AffectNet-8 share exactly the same split: AffectNet-7 can be derived from AffectNet-8 simply by deleting the images and corresponding labels for 'Contempt'.
- 8-class CSV Path:
AffectNet/annotations/EXPR
- Expression Labels: ['Anger', 'Contempt', 'Disgust', 'Fear', 'Happiness', 'Neutral', 'Sadness', 'Surprise']
- Header: [name, expression, age, gender, race]
- Demographic Labels:
- Gender: [Female, Male]
- Age: [00-02, 03-09, 10-19, 20-29, 30-39, 40-49, 50-59, 60-69, 70+]
- Race: [White, Black, Asian, Indian]
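As a quick sanity check, the annotation CSVs can be inspected with pandas. The sketch below assumes the header above; whether the `expression` column stores an integer index into the label list or the label string itself depends on the released files:

```python
# Minimal sketch: inspect the AffectNet EXPR annotations
# (header: [name, expression, age, gender, race]) with pandas.
import pandas as pd

df = pd.read_csv('AffectNet/annotations/EXPR/train.csv')

# Per-class counts and a per-gender breakdown, e.g. to check balance.
print(df['expression'].value_counts())
print(df.groupby(['gender', 'expression']).size())
```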
VA
- New Partition and Annotations:
- CSV Path:
AffectNet/annotations/VA
- Header: [name, valence, arousal, age, gender, race]
- Demographic Labels:
- Gender: [Female, Male]
- Age: [00-02, 03-09, 10-19, 20-29, 30-39, 40-49, 50-59, 60-69, 70+]
- Race: [White, Black, Asian, Indian]
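For valence/arousal, results in affect analysis are commonly reported with the Concordance Correlation Coefficient (CCC). A minimal NumPy sketch of the metric (illustrative only; the repo's own evaluation code may differ):

```python
import numpy as np

def ccc(x: np.ndarray, y: np.ndarray) -> float:
    """Concordance Correlation Coefficient between predictions x and labels y."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return float(2 * cov / (vx + vy + (mx - my) ** 2))

# Usage: valence and arousal are scored separately, e.g. ccc(pred_v, true_v).
```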
You can download the DISFA dataset from http://mohammadmahoor.com/disfa/
- New Partition and Annotations:
- 12 AUs
- CSV Path:
DISFA/annotations
- Header: [name, AU1, AU2, AU4, AU5, AU6, AU9, AU12, AU15, AU17, AU20, AU25, AU26, age, gender, race]
- AU Labels: [1: present, 0: absent]
- Demographic Labels:
- Gender: [Female, Male]
- Age: [10-19, 20-29, 30-39, 40-49]
- Race: [White, Black, Asian, Indian]
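For AU detection, results are typically reported as macro-F1 over binarized predictions (the checkpoint names in the Output section suggest a probability threshold is applied). A hedged sketch with scikit-learn, not the repo's metric code:

```python
import numpy as np
from sklearn.metrics import f1_score

def macro_f1_at_threshold(probs: np.ndarray, labels: np.ndarray,
                          threshold: float = 0.8) -> float:
    """Macro-F1 for multi-label AU detection after thresholding probabilities."""
    preds = (probs >= threshold).astype(int)  # shape [n_samples, n_AUs]
    return f1_score(labels, preds, average='macro', zero_division=0)
```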
You can download the EmotioNet dataset from https://cbcsl.ece.ohio-state.edu/dbform_emotionet.html
- New Partition and Annotations:
- 11 AUs
- CSV Path:
EmotioNet/annotations
- Header: [name, AU1, AU2, AU4, AU5, AU6, AU9, AU12, AU17, AU20, AU25, AU26, age, gender, race]
- AU Labels: [1: present, 0: absent, -1: unknown]
- Demographic Labels:
- Gender: [Female, Male]
- Age: [00-02, 03-09, 10-19, 20-29, 30-39, 40-49, 50-59, 60-69, 70+]
- Race: [White, Black, Asian, Indian]
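Because EmotioNet AU labels can be -1 (unknown), a training loss should ignore those entries. A minimal sketch of a masked BCE loss, assuming logits and labels of shape [batch, 11] (illustrative, not the repo's loss):

```python
import torch
import torch.nn.functional as F

def masked_bce_with_logits(logits: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    """BCE over annotated AUs only; entries labeled -1 (unknown) are skipped."""
    mask = labels != -1
    return F.binary_cross_entropy_with_logits(logits[mask], labels[mask].float())
```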
You can download the GFT dataset from https://osf.io/7wcyz/
- New Partition and Annotations:
- 14 AUs
- CSV Path:
GFT/annotations
- Header: [name, AU1, AU2, AU4, AU5, AU6, AU7, AU9, AU10, AU11, AU12, AU15, AU17, AU23, AU24, age, gender, race]
- Demographic Labels:
- Gender: [Female, Male]
- Age: [21, 22, 23, 24, 25, 26, 27, 28]
- Race: [White, Black, Asian, Other]
You can download the RAF-DB dataset from http://www.whdeng.cn/raf/model1.html
- New Partition and Annotations:
- 7 Expressions
- CSV Path:
RAF-DB/annotations
- Header: [name, gender, race, age, expression]
- Expression Labels: ['Anger', 'Disgust', 'Fear', 'Happiness', 'Neutral', 'Sadness', 'Surprise']
- Demographic Labels:
- Gender: [Female, Male, Unsure]
- Age: [00-03, 04-19, 20-39, 40-69, 70+]
- Race: [White, Black, Asian, Indian]
You can download the RAF-AU dataset from http://whdeng.cn/RAF/model3.html
- New Partition and Annotations:
- 13 AUs
- CSV Path:
RAF-AU/annotations
- Header: [name, AU1, AU2, AU4, AU5, AU6, AU9, AU10, AU12, AU16, AU17, AU25, AU26, AU27, age, gender, race]
- Demographic Labels:
- Gender: [Female, Male]
- Age: [00-02, 03-09, 10-19, 20-29, 30-39, 40-49, 50-59, 60-69, 70+]
- Race: [White, Black, Asian, Indian]
The following models are supported:
| Model Type | Models |
|---|---|
| ResNet | resnet18, resnet34, resnet50, resnet101, resnet152 |
| ResNeXt | resnext50_32x4d, resnext101_32x8d, resnext101_64x4d |
| Swin | swin_t, swin_s, swin_b, swin_v2_t, swin_v2_s, swin_v2_b |
| VGG | vgg11, vgg16, vgg19 |
| ViT | vit_b_16, vit_b_32, vit_l_16, vit_l_32, vit_h_14 |
| iResNet | iresnet101 |
| EfficientNet | efficientnet_b0, efficientnet_b1, efficientnet_b2, efficientnet_b6, efficientnet_b7, efficientnet_v2_s, efficientnet_v2_m, efficientnet_v2_l |
| DenseNet | densenet121, densenet161, densenet201 |
| ConvNeXt | convnext_tiny, convnext_small, convnext_base, convnext_large |
| POSTER++ | POSTER++ |
| DAN | DAN |
| MT-EffNet | MT-EffNet |
| MA-Net | MA-Net |
| EAC | EAC |
| DACL | DACL |
| EmoGCN | EmoGCN |
| CTC | abaw5_ctc |
| SITU | abaw5_situ |
| FUXI | FUXI |
| ME-GraphAU | ME-GraphAU |
| AUNets | AUNets |
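For the torchvision backbones in the table, the classification head must be resized to the dataset's class count. A minimal sketch (the repo's model_utils may construct models differently):

```python
import torch.nn as nn
from torchvision import models

def build_resnet18(num_class: int) -> nn.Module:
    net = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
    net.fc = nn.Linear(net.fc.in_features, num_class)  # replace the ImageNet head
    return net

model = build_resnet18(num_class=8)  # e.g. AffectNet-8
```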
Our experiments are conducted in the following environments:
- Python 3.11
- PyTorch 2.3
- CUDA 11.8
- Torchvision 0.18
You can set up your environment by following the instructions in ENVIRONMENT.md.
```
Fair-Consistent-Affect-Analysis   # Project code root
├── config                        # Configuration and parameters
├── fair                          # Fairness metrics
├── model_utils                   # Model utilities
├── ENVIRONMENT.md                # Requirements
├── run.py                        # Main
├── output                        # Created automatically
└── README.md                     # Project readme
```
Your data needs to be structured in the following format:
```
Dataset_root                      # dataset root
├── AffectNet                     # AffectNet
│   ├── images
│   │   ├── train_set
│   │   │   ├── 0.jpg
│   │   │   └── ...
│   │   └── val_set
│   │       ├── 0.jpg
│   │       └── ...
│   └── annotations
│       ├── EXPR
│       │   ├── train.csv
│       │   ├── valid.csv
│       │   └── test.csv
│       └── VA
│           ├── train.csv
│           ├── valid.csv
│           └── test.csv
├── DISFA                         # DISFA
│   ├── images
│   │   ├── LeftVideoSN001
│   │   │   ├── 00001.jpg
│   │   │   └── ...
│   │   └── RightVideoSN001
│   │       ├── 00001.jpg
│   │       └── ...
│   └── annotations
│       ├── train.csv
│       ├── valid.csv
│       └── test.csv
├── EmotioNet                     # EmotioNet
│   ├── images
│   │   ├── test
│   │   │   ├── 000001.jpg
│   │   │   └── ...
│   │   └── validation
│   │       ├── 000001.jpg
│   │       └── ...
│   └── annotations
│       ├── train.csv
│       ├── valid.csv
│       └── test.csv
├── GFT                           # GFT
│   ├── images
│   │   ├── 001A
│   │   │   ├── 1.jpg
│   │   │   └── ...
│   │   └── 001B
│   │       ├── 1.jpg
│   │       └── ...
│   └── annotations
│       ├── train.csv
│       ├── valid.csv
│       └── test.csv
├── RAF-AU                        # RAF-AU
│   ├── images
│   │   ├── 0001_aligned.jpg
│   │   └── ...
│   └── annotations
│       ├── train.csv
│       ├── valid.csv
│       └── test.csv
└── RAF-DB                        # RAF-DB
    ├── images
    │   ├── test_0001.jpg
    │   ├── ...
    │   ├── train_00001.jpg
    │   └── ...
    └── annotations
        ├── train.csv
        ├── valid.csv
        └── test.csv
```
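A quick way to verify this layout before training is a small script like the one below (a hypothetical helper, not part of the repo):

```python
from pathlib import Path

DATASETS = ['AffectNet', 'DISFA', 'EmotioNet', 'GFT', 'RAF-AU', 'RAF-DB']

def check_layout(root: str) -> None:
    """Print whether each expected images/annotations folder exists."""
    for name in DATASETS:
        for sub in ('images', 'annotations'):
            p = Path(root) / name / sub
            print(f"{p}: {'ok' if p.is_dir() else 'MISSING'}")

check_layout('/path/to/Dataset_root')
```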
To train or test, you need to modify the root paths in ./config/config.yaml to match your own setup.
Root Path
dataset_root: &dataset_root /path/to/Dataset_root
restructured_csv_root: &restructured_csv_root /path/to/Dataset_root
output_root: /path/to/Fair-Consistent-Affect-Analysis
code_root: /path/to/Fair-Consistent-Affect-Analysis
Dataset Path
If you structured your dataset repository following the Data Preparation guidelines, you do not need to change any content below. Otherwise, you need to change the paths to match your own directory structure.
- `[Dataset Name]_csv_root`: Dataset CSV folder, where the CSV files must follow the `annotations` folder structure described in the Data Preparation section.
- `[Dataset Name]_num_class`: For expression datasets, the number of expressions; for AU datasets, the number of AUs; for VA datasets, 2.
The following is an example for the dataset AffectNet-7:
AffectNet-7_dataset_root: !join_path [ *dataset_root, AffectNet/images ] # Dataset folder
AffectNet-7_csv_root: !join_path [ *restructured_csv_root, AffectNet/annotations/EXPR/expr7 ] # Dataset CSV folder
AffectNet-7_num_class: 7 # Number of classes
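The `!join_path` tag in the example is a custom YAML constructor that joins the anchored root with a relative path. A minimal PyYAML sketch of the mechanism (the repo's actual loader may differ):

```python
import os
import yaml

def join_path(loader: yaml.SafeLoader, node: yaml.Node) -> str:
    """Resolve `!join_path [a, b, ...]` to os.path.join(a, b, ...)."""
    return os.path.join(*loader.construct_sequence(node))

yaml.SafeLoader.add_constructor('!join_path', join_path)

with open('config/config.yaml') as f:
    cfg = yaml.load(f, Loader=yaml.SafeLoader)
print(cfg['AffectNet-7_dataset_root'])
```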
Training Parameters
- `dataset`: Controls the training dataset. The following datasets are available:
  - EXPR: AffectNet-7, AffectNet-8, RAF-DB
  - AU: EmotioNet, GFT, RAF-AU, DISFA
  - VA: AffectNet-VA
- `model`: Controls the training model. The available models are listed in the Models section.
- `fair`: Runs fair validation (see the sketch after this list).
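As an illustration of what per-group ("fair") validation means, the sketch below computes accuracy per demographic group and the worst-case gap; it is a hypothetical helper, not the repo's fairness metrics:

```python
import numpy as np

def per_group_accuracy(y_true, y_pred, groups):
    """Accuracy per demographic group plus the max-min gap across groups."""
    y_true, y_pred, groups = map(np.asarray, (y_true, y_pred, groups))
    accs = {g: float((y_pred[groups == g] == y_true[groups == g]).mean())
            for g in np.unique(groups)}
    return accs, max(accs.values()) - min(accs.values())

accs, gap = per_group_accuracy([0, 1, 1, 0], [0, 1, 0, 0], ['F', 'F', 'M', 'M'])
print(accs, gap)  # {'F': 1.0, 'M': 0.5} 0.5
```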
Training Examples
- To run training on the dataset `AffectNet-7` using the model `resnet18`:

  `python run.py --dataset AffectNet-7 --model resnet18 -bs 128 --fair`

- To run training on the dataset `EmotioNet` using the model `convnext_base`:

  `python run.py --dataset EmotioNet --model convnext_base -bs 128 --fair`
Output
The output will be saved to Fair-Consistent-Affect-Analysis/output.
- checkpoint
  - `*.pth`: Checkpoint file
  - `*_fair_results.pkl`: Detailed fairness results
  - `*_inference_results.pkl`: Inference results on the test set
- tensorboard: TensorBoard file
- best_result.txt: Best result of global validation and its fairness result
- fair_result: Fairness results of each epoch
- result: Global validation results of each epoch
Output example:
```
DISFA-convnext_base-[0524-1725]-lr0.0001-bs128
├── checkpoint
│   ├── convnext_base_DISFA_best_f1_macro_threshold0.8.pth
│   ├── convnext_base_DISFA_best_f1_macro_threshold0.8_fair_results.pkl
│   └── convnext_base_DISFA_best_f1_macro_threshold0.8_inference_results.pkl
├── tensorboard
│   └── events.out.tfevents.abc.43074.0
├── best_result.txt
├── fair_result.txt
├── loss.txt
└── result.txt
```
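The saved .pkl files can be inspected with pickle; their exact internal structure is not documented here, so the sketch below only prints what it finds:

```python
import pickle

path = ('output/DISFA-convnext_base-[0524-1725]-lr0.0001-bs128/checkpoint/'
        'convnext_base_DISFA_best_f1_macro_threshold0.8_fair_results.pkl')
with open(path, 'rb') as f:
    results = pickle.load(f)

print(type(results))
if isinstance(results, dict):
    print(list(results.keys()))
```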
Evaluation Parameters
- `eval`: Activates evaluation mode.
- `checkpoint_path`: Path to the checkpoint `.pth` file.
Evaluation Examples
- To test the `vit_b_16` checkpoint pre-trained on the dataset `AffectNet-VA`:

  `python run.py --dataset AffectNet-VA -bs 128 --fair --eval --checkpoint_path /path/to/vit_b_16_AffectNet-VA.pth`

- To test the `resnext50_32x4d` checkpoint pre-trained on the dataset `RAF-DB`:

  `python run.py --dataset AffectNet-7 -bs 128 --fair --eval --checkpoint_path /path/to/resnext50_32x4d_RAF-DB.pth`
This repo is based on the following projects; we thank the authors for their valuable efforts.
POSTER++, DAN, HSEmotion, MA-Net, EAC, DACL, Emotion-GCN, ME-GraphAU, AUNets
Please cite the paper below in your publications if you use our developments (annotations, partitions, and/or trained models). The BibTeX reference is as follows:
@article{hu2024bridging,
title={Bridging the Gap: Protocol Towards Fair and Consistent Affect Analysis},
author={Hu, Guanyu and Papadopoulou, Eleni and Kollias, Dimitrios and Tzouveli, Paraskevi and Wei, Jie and Yang, Xinyu},
journal={arXiv preprint arXiv:2405.06841},
year={2024}
}