TensorFlow implementation of our AAAI 2020 paper. The paper addresses the robustness of Deep Neural Networks, not through the lens of pixel-level perturbations, but through a semantic lens in which the perturbations happen in the latent parameters that generate the image. This type of robustness matters for safety-critical applications like self-driving cars, where the tolerance for error is very low and the risk of failure is high.
SADA: Semantic Adversarial Diagnostic Attacks for Autonomous Applications
Abdullah Hamdi, Matthias Muller, Bernard Ghanem
If you find this useful for your research, please cite the following:
```bibtex
@inproceedings{hamdi2020sada,
  title     = {{SADA:} Semantic Adversarial Diagnostic Attacks for Autonomous Applications},
  author    = {Abdullah Hamdi and Matthias Muller and Bernard Ghanem},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2020}
}
```
- Linux
- Python 2 or 3
- NVIDIA GPU (11 GB memory or larger) + CUDA/cuDNN
- Blender 2.79
- Install Blender version `blender-2.79b-linux-glibc219-x86_64` and add it to your `PATH` by adding the line `export PATH="${PATH}:/home/PATH/TO/blender-2.79b-linux-glibc219-x86_64"` to your `~/.bashrc` file. Make sure that you can then run the `blender` command from your shell (see the quick check after this list).
- Clone this repo:

```bash
git clone https://github.com/ajhamdi/SADA
cd SADA
```
- Install and activate the `conda` environment:

```bash
conda env create -f environment.yaml
conda activate sada
```
- Download the dataset that contains the 3D shapes and the environments from this link and place the folder in the project directory under the name `3d/training_pascal`.
- Download the weights for YOLOv3 from this link and place them in the `detectos` dir.
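To quickly verify that Blender is visible on your `PATH` after the steps above, here is a minimal Python 3 check (an illustrative sketch only, not part of the repo):

```python
import shutil

# shutil.which returns the full path of the executable, or None if not found.
path = shutil.which("blender")
print(path if path else "blender not found on PATH -- check your ~/.bashrc export")
```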
- We collect 100 3D shapes from 12 classes from ShapeNet and Pascal3D. All the shapes are available inside the Blender environment file `3d/training_pascal/training.blend`. The classes are the following:
- aeroplane
- bench
- bicycle
- boat
- bottle
- bus
- car
- chair
- dining table
- motorbike
- train
- truck
- The parameters that control the environment are the following 8 (see the sampling sketch after this list):
- camera distance to the object
- camera azimuth angle
- camera pitch angle
- light source azimuth angle
- light source pitch angle
- color of the object (R-channel)
- color of the object (G-channel)
- color of the object (B-channel)
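For illustration, here is a minimal sketch of sampling one random 8-dimensional parameter vector for this environment. The ranges below are assumptions made for the sketch only and may differ from the bounds actually used by `main.py`:

```python
import numpy as np

# Assumed illustrative ranges for the 8 semantic parameters;
# the real bounds used by the code may differ.
PARAM_RANGES = {
    "camera_distance": (3.0, 10.0),   # distance to the object
    "camera_azimuth":  (0.0, 360.0),  # degrees
    "camera_pitch":    (0.0, 60.0),   # degrees
    "light_azimuth":   (0.0, 360.0),  # degrees
    "light_pitch":     (0.0, 60.0),   # degrees
    "object_color_r":  (0.0, 1.0),    # R-channel
    "object_color_g":  (0.0, 1.0),    # G-channel
    "object_color_b":  (0.0, 1.0),    # B-channel
}

def sample_parameters(rng=np.random):
    """Draw one random parameter vector, uniformly within each range."""
    return {name: rng.uniform(lo, hi) for name, (lo, hi) in PARAM_RANGES.items()}

print(sample_parameters())
```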
Generating images from the 3D environment for a specific class with random parameters, and storing the 2D dataset in the `generated` folder:
```bash
python main.py --is_gendist=True --class_nb=0 --dataset_nb=0 --gendist_size=10000
```
- `is_gendist`: the option to generate a distribution of parameters and images
- `class_nb`: which of the 12 classes above to generate
- `dataset_nb`: the number assigned to the generated dataset
- `gendist_size`: the number of images to generate
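To generate datasets for all 12 classes, a simple driver loop like the following sketch could wrap the command above (illustrative only; it assumes `main.py` accepts the flags exactly as documented):

```python
import subprocess

# Generate a dataset of 10,000 images for each of the 12 classes.
for class_nb in range(12):
    subprocess.run(
        ["python", "main.py",
         "--is_gendist=True",
         f"--class_nb={class_nb}",
         f"--dataset_nb={class_nb}",  # illustrative choice: one dataset id per class
         "--gendist_size=10000"],
        check=True,
    )
```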
Training the BBGAN on the generated dataset:

```bash
python main.py --is_train=True --valid_size=50 --log_frq=10 --batch_size=32 --induced_size=50 --nb_steps=600 --learning_rate_t=0.0001 --learning_rate_g=0.0001
```
- `class_nb`: which of the 12 classes above to train on
- `dataset_nb`: the number assigned to the generated dataset
- `nb_steps`: the number of training steps of the GAN
- `log_frq`: how often the weights of the network are saved
- `induced_size`: the number of best samples that will be picked out of the total number of generated images
- `learning_rate_g`: the learning rate for the generator
- `learning_rate_t`: the learning rate for the discriminator
- `valid_size`: the number of parameters that will eventually be generated for evaluation of the BBGAN
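Putting both stages together, here is an end-to-end sketch for a single class (illustrative only; the flag values mirror the commands above, and passing `--class_nb`/`--dataset_nb` to the training stage is an assumption based on the flag list):

```python
import subprocess

def run(args):
    """Run main.py with the given flags, raising on failure."""
    subprocess.run(["python", "main.py"] + args, check=True)

# 1. Generate a 2D dataset for class 0 (aeroplane) with random parameters.
run(["--is_gendist=True", "--class_nb=0", "--dataset_nb=0", "--gendist_size=10000"])

# 2. Train the BBGAN on that dataset (class/dataset flags assumed, see above).
run(["--is_train=True", "--class_nb=0", "--dataset_nb=0",
     "--valid_size=50", "--log_frq=10", "--batch_size=32",
     "--induced_size=50", "--nb_steps=600",
     "--learning_rate_t=0.0001", "--learning_rate_g=0.0001"])
```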
Self-Driving with CARLA
- coming soon
UAV racing with Sim4CV
- coming soon