
[Bug]: efficientad training its own dataset reports an error #1177

Closed · laogonggong847 opened this issue Jul 13, 2023 · 21 comments

Comments

@laogonggong847 commented Jul 13, 2023

Describe the bug

I encountered an error while training EfficientAD on my own dataset. I only modified the dataset section of the configuration file provided for EfficientAD. With the same modifications I was able to train models such as CFA and PatchCore successfully, but EfficientAD specifically fails with an error.

My EfficientAD YAML file is as follows (I only changed the dataset section; the rest is unchanged):

dataset:
  name: mydata
  format: folder
  path: ./MyDataset/HC_ZT_ROI
  normal_dir: normal #  name of the folder containing normal images.
  abnormal_dir: abnormal #  name of the folder containing abnormal images.
  normal_test_dir: null #  name of the folder containing normal test images.
  task: classification
  mask: null
  extensions: null
  train_batch_size: 32
  test_batch_size: 32
  num_workers: 0
  image_size: 500 # dimensions to which images are resized (mandatory)
  center_crop: null # dimensions to which images are center-cropped after resizing (optional)
  normalization: null # data distribution to which the images will be normalized: [none, imagenet]
  transform_config:
    train: null
    eval: null
  test_split_mode: from_dir # options: [from_dir, synthetic]
  test_split_ratio: 0.2 # fraction of train images held out for testing (usage depends on test_split_mode)
  val_split_mode: same_as_test # options: [same_as_test, from_test, synthetic]
  val_split_ratio: 0.5 # fraction of train/test images held out for validation (usage depends on val_split_mode)
  tiling:
    apply: false
    tile_size: null
    stride: null
    remove_border_count: 0
    use_random_tiling: False
    random_tile_count: 16


I have tentatively determined that the cause of my error is the normalization parameter.

When I set normalization: imagenet or normalization: none, the exact error message is:

2023-07-13 13:56:58,938 - anomalib.models.efficientad.lightning_model - INFO - Load pretrained teacher model from pre_trained\efficientad_pretrained_weights\pretrained_teacher_small.pth
Traceback (most recent call last):
  File "E:/Code/Anomalib/0.6.0/anomalib-0-6-0/tools/train.py", line 82, in <module>
    train(args)
  File "E:/Code/Anomalib/0.6.0/anomalib-0-6-0/tools/train.py", line 59, in train
    model = get_model(config)
  File "E:\Code\Anomalib\0.6.0\anomalib-0-6-0\src\anomalib\models\__init__.py", line 106, in get_model
    model = getattr(module, f"{_snake_to_pascal_case(config.model.name)}Lightning")(config)
  File "E:\Code\Anomalib\0.6.0\anomalib-0-6-0\src\anomalib\models\efficientad\lightning_model.py", line 289, in __init__
    super().__init__(
  File "E:\Code\Anomalib\0.6.0\anomalib-0-6-0\src\anomalib\models\efficientad\lightning_model.py", line 95, in __init__
    self.prepare_imagenette_data()
  File "E:\Code\Anomalib\0.6.0\anomalib-0-6-0\src\anomalib\models\efficientad\lightning_model.py", line 121, in prepare_imagenette_data
    imagenet_dataset = ImageFolder(imagenet_dir, transform=TransformsWrapper(t=self.data_transforms_imagenet))
  File "C:\ProgramData\anaconda3\envs\HC_Anomalib\lib\site-packages\torchvision\datasets\folder.py", line 310, in __init__
    super().__init__(
  File "C:\ProgramData\anaconda3\envs\HC_Anomalib\lib\site-packages\torchvision\datasets\folder.py", line 145, in __init__
    classes, class_to_idx = self.find_classes(self.root)
  File "C:\ProgramData\anaconda3\envs\HC_Anomalib\lib\site-packages\torchvision\datasets\folder.py", line 219, in find_classes
    return find_classes(directory)
  File "C:\ProgramData\anaconda3\envs\HC_Anomalib\lib\site-packages\torchvision\datasets\folder.py", line 43, in find_classes
    raise FileNotFoundError(f"Couldn't find any class folder in {directory}.")
FileNotFoundError: Couldn't find any class folder in datasets\imagenette.

I then referred to the related answer in #1148, where @alexriedel1 explains that it should be set to normalization: null.

When I set normalization: null, the exact error message is:

C:\ProgramData\anaconda3\envs\HC_Anomalib\python.exe E:/Code/Anomalib/0.6.0/anomalib-0-6-0/tools/train.py
E:\Code\Anomalib\0.6.0\anomalib-0-6-0\src\anomalib\config\config.py:275: UserWarning: config.project.unique_dir is set to False. This does not ensure that your results will be written in an empty directory and you may overwrite files.
  warn(
Global seed set to 42
2023-07-13 14:08:23,153 - anomalib.data - INFO - Loading the datamodule
Traceback (most recent call last):
  File "E:/Code/Anomalib/0.6.0/anomalib-0-6-0/tools/train.py", line 82, in <module>
    train(args)
  File "E:/Code/Anomalib/0.6.0/anomalib-0-6-0/tools/train.py", line 57, in train
    datamodule = get_datamodule(config)
  File "E:\Code\Anomalib\0.6.0\anomalib-0-6-0\src\anomalib\data\__init__.py", line 116, in get_datamodule
    datamodule = Folder(
  File "E:\Code\Anomalib\0.6.0\anomalib-0-6-0\src\anomalib\data\folder.py", line 270, in __init__
    normalization=InputNormalizationMethod(normalization),
  File "C:\ProgramData\anaconda3\envs\HC_Anomalib\lib\enum.py", line 339, in __call__
    return cls.__new__(cls, value)
  File "C:\ProgramData\anaconda3\envs\HC_Anomalib\lib\enum.py", line 663, in __new__
    raise ve_exc
ValueError: None is not a valid InputNormalizationMethod

Dataset

Folder

Model

Other (please specify in the field below)

Steps to reproduce the behavior

efficientad training its own dataset reports an error

OS information

Anomalib: 0.6.0
torch: 1.12.1+cu113
OS: windows

Expected behavior

Hello @alexriedel1 @nelson1425, as the people most familiar with EfficientAD, could you answer the following questions?

1: What is the reason for this error in EfficientAD, and how should I fix it?

2: Is the performance of EfficientAD really as good as in the paper? In fact, I am more concerned about its speed: the paper mentions that the FPS reaches 269 with EfficientAD-M and 614 with EfficientAD-S. Is this really achievable in real tests? If not, what FPS does your implementation reach for different image sizes? (Although I realize this may be affected and limited by the specific hardware.)

3: What are the advantages of EfficientAD over other models in Anomalib, and in what situations is it more suitable?

Looking forward to your answer, thanks!

Screenshots

No response

Pip/GitHub

pip

What version/branch did you use?

No response

Configuration YAML

-

Logs

-

Code of Conduct

  • I agree to follow this project's Code of Conduct
@blaz-r (Contributor) commented Jul 13, 2023

Hello. I can't answer all the questions, but regarding normalization: as the config says, only none and imagenet are valid options; null will cause an error.
Your problem is not really related to ImageNet normalization, though, and I do see where the confusion might come from. There seems to be a problem with the preparation of a dataset that is downloaded as part of EfficientAD, called imagenette in these lines of code. I think something went wrong while downloading, so I recommend removing imagenette from the datasets folder and running the procedure again so that it is re-downloaded. Also make sure you are running the train script from the root folder, as the path is relative (./datasets/imagenette), so the working directory needs to be the anomalib root.
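
For anyone hitting the same ValueError: here is a minimal sketch of why null fails, assuming the enum has the same two string values listed in the config comment (this is an illustration, not the anomalib source). YAML null becomes Python None, and None is not a member of the string-valued enum, so the enum lookup raises exactly the error shown in the traceback.

from enum import Enum

# Illustrative stand-in for anomalib's InputNormalizationMethod: a string-valued
# enum whose members match the options listed in the config comment.
class InputNormalizationMethod(str, Enum):
    NONE = "none"
    IMAGENET = "imagenet"

print(InputNormalizationMethod("imagenet"))  # valid member
print(InputNormalizationMethod("none"))      # valid member
InputNormalizationMethod(None)               # raises ValueError: None is not a valid InputNormalizationMethod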

@laogonggong847 (Author)

Hello @blaz-r, thank you very much for your reply; I wasn't able to follow some of what you said. As I said at the beginning of the question, when I make exactly the same changes to the dataset section of the corresponding YAML files for other models such as PatchCore and CFA (using exactly the same dataset as in this question), all of those models train correctly and produce results. Why is there still a download problem when my dataset is already local? Also, the fact that other models can be trained with configs using the same dataset should show that there is no error in my file. Looking forward to another reply from you, thanks a lot!

@blaz-r (Contributor) commented Jul 13, 2023

It's not about your dataset; it's the imagenet(te) dataset downloaded by the EfficientAD model, which uses ImageNet as part of its functionality. It should be located inside datasets/imagenette. So the first thing I would recommend is simply deleting it and rerunning, which will download the entire dataset again and potentially fix the issue.

@laogonggong847 (Author) commented Jul 13, 2023

Hello @blaz-r Thank you very much, I'll give it a try and get back to you!

@laogonggong847 (Author)

Hello @blaz-r, this does work, thank you very much for your help. I would also like to ask two more questions:
1: What specifically is the role of ImageNet in EfficientAD? What exactly do you mean by EfficientAD using ImageNet as part of its functionality?

2: If I skip ImageNet, i.e. do not use it at all, will EfficientAD still work?

Looking forward to another reply from you, thanks a lot!

@blaz-r (Contributor) commented Jul 13, 2023

If you want to know exactly how EfficientAD works, I recommend reading the paper.

To cite the authors:

In the standard S–T framework, the teacher is pretrained on an image classification dataset, or it is a distilled version of such a pretrained network. The student is not trained on that pretraining dataset but only on the application’s normal images. We propose to also use the images from the teacher’s pretraining during the training of the student. Specifically, we sample a random image P from the pretraining dataset, in our case ImageNet, in each training step. We compute the loss of the student as
[image of the loss formula: L_ST = L_hard + (CWH)^(-1) · Σ_c ‖S(P)_c‖²_F]
This penalty hinders the student from generalizing its imitation of the teacher to out-of-distribution images.

So imagenet is a key component of training, which can't really be skipped.
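
To make the quoted idea concrete, here is a rough PyTorch sketch of the student loss described above. This is my own illustration under the paper's description, not the anomalib implementation; the 0.999 quantile is taken from the traceback earlier in this thread.

import torch

def student_loss(teacher_feats, student_feats, student_feats_on_imagenet):
    # Squared distance between teacher and student features on a normal training image.
    distance_st = (teacher_feats - student_feats) ** 2
    # Hard-feature loss: only the largest 0.1% of distances contribute (L_hard).
    d_hard = torch.quantile(distance_st, 0.999)
    loss_hard = torch.mean(distance_st[distance_st >= d_hard])
    # Penalty on the student's response to a random ImageNet image P: this is what
    # keeps the student from imitating the teacher on out-of-distribution images.
    loss_penalty = torch.mean(student_feats_on_imagenet ** 2)
    return loss_hard + loss_penalty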

@laogonggong847 (Author)

Thank you very much @blaz-r

@blaz-r (Contributor) commented Jul 13, 2023

Glad to help 😄. Regarding the other questions you had, I can't answer all of them properly, but I'm sure the EfficientAD contributors can help.

@laogonggong847 (Author)

Looking forward to hearing from them, and thank you very much for your patience, again! @blaz-r

@laogonggong847 (Author)

Hello @blaz-r, I'm sorry to bother you again, but I have a new error. Once ImageNet was downloaded, training started, but it reported an error shortly afterwards.

2023-07-13 19:04:06,034 - anomalib.models.efficientad.lightning_model - INFO - Calculate teacher channel mean and std
Calculate teacher channel mean: 100%|██████████| 11/11 [00:02<00:00,  4.49it/s]
Calculate teacher channel std: 100%|██████████| 11/11 [00:00<?, ?it/s]
Epoch 0:   0%|          | 0/20 [00:00<?, ?it/s] Traceback (most recent call last):

......

  File "E:\Code\Anomalib\0.6.0\anomalib-0-6-0\src\anomalib\models\efficientad\torch_model.py", line 282, in forward
    d_hard = torch.quantile(distance_st, 0.999)
RuntimeError: quantile() input tensor is too large
Epoch 0:   0%|          | 0/20 [00:00<?, ?it/s]

Why am I getting RuntimeError: quantile() input tensor is too large?

@blaz-r (Contributor) commented Jul 13, 2023

This indeed seems like a bug that was already addressed in one PR, but it seems it was only fixed in the lightning model.
I believe the quantile should also be computed differently when calculating d_hard. Maybe @nelson1425 can confirm. The problem is that quantile only works with inputs of up to 2**24 elements.

I think this will need to be fixed the same way it was done in the lightning model. If you are able to fix this, a PR would be very welcome.
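
For reference, here is a sketch of one possible workaround, under the assumption that estimating the quantile from a random subsample of the distances is acceptable (which I believe is roughly how the lightning-model fix works); it is not necessarily how anomalib ultimately fixed it.

import torch

def safe_quantile(x: torch.Tensor, q: float, max_elems: int = 2**24) -> torch.Tensor:
    # torch.quantile only supports tensors with up to 2**24 elements, so when the
    # input is larger, estimate the quantile from a random subsample of the values.
    flat = x.flatten()
    if flat.numel() > max_elems:
        idx = torch.randperm(flat.numel(), device=flat.device)[:max_elems]
        flat = flat[idx]
    return torch.quantile(flat, q)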

@alexriedel1 (Contributor)

(quoting the error report from the previous comment)

You could start by using a train and test batch size of 1, as recommended for training EfficientAD.
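
Concretely, that would mean changing only these two keys in the dataset section of the config posted above:

dataset:
  train_batch_size: 1
  test_batch_size: 1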

@samet-akcay (Contributor)

@alexriedel1, would it be an idea to hardcode the batch size and remove it from the config file for now?

@laogonggong847 (Author)

Hello @alexriedel1, @blaz-r, thank you very much for your help and patient answers; I trained successfully once I set the batch size to 1. But I have a question: if the maximum is 2**24 elements, then when I started training with batch_size set to 32 and image_size set to 500, logically 500*500*32 < 2**24 should fit, so why the error?

@alexriedel1 (Contributor)

(quoting @samet-akcay's suggestion above)

The best I can think of right now is raising an error if the batch size is different from one. Otherwise it would also need to be implemented in the datamodule generator, I guess.

@alexriedel1 (Contributor)

(quoting the batch-size question above)

The quantile calculation is not based on the input image but on the feature maps from the teacher model. The tensor shape for 500x500 images and batch size 32 is [32, 384, 117, 117] -> 168,210,432 > 2**24.
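
A quick check of those numbers (feature-map shape taken from the comment above):

# Teacher feature maps for a batch of 32 images at 500x500: [32, 384, 117, 117].
feature_elems = 32 * 384 * 117 * 117
print(feature_elems)            # 168210432
print(feature_elems > 2 ** 24)  # True: 168,210,432 > 16,777,216, so torch.quantile fails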

@laogonggong847 (Author)

Hello @alexriedel1, thank you very much for your help and patient answers. I see; but there doesn't seem to be an early-stopping mechanism in EfficientAD. If I set max_epochs very high, EfficientAD may get good results, but will this cause overfitting? How should I choose max_epochs reasonably when there is no early-stopping mechanism?

@alexriedel1 mentioned this issue Jul 14, 2023
@alexriedel1 (Contributor)

(quoting the max_epochs question above)

You can use early stopping just like in other models:

early_stopping:
  patience: 2
  metric: pixel_AUROC
  mode: max
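
For context, in the anomalib 0.x configs I have seen, this block sits under the model section; that placement is my assumption for EfficientAD, since (as noted above) its default config does not ship with one:

model:
  early_stopping:
    patience: 2
    metric: pixel_AUROC
    mode: max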

@laogonggong847 (Author)

@alexriedel1 OK! Thank you very much. At the very beginning of this issue I listed my three questions about EfficientAD (see the issue description above). I now have a clearer picture of the first one. Regarding the second and third questions (the achievable FPS, and the advantages of EfficientAD over other models in Anomalib), can you give the appropriate answers? I think you are one of the most knowledgeable people about EfficientAD. Very much looking forward to your answers, and thanks again for your patience and help!

@alexriedel1 (Contributor)

(quoting the questions above)

I'm getting around 30 FPS on a GTX 1650, but that GPU is nowhere near the one used in the paper.

@laogonggong847 (Author)

@alexriedel1 OK, Thank you very much.

@openvinotoolkit openvinotoolkit locked and limited conversation to collaborators Jul 21, 2023
@samet-akcay samet-akcay converted this issue into discussion #1200 Jul 21, 2023

