
Dear author, could you share the code and script for the M4 dataset? I'd esteem it a favor #20

Open
2ySong opened this issue Dec 1, 2024 · 7 comments

Comments


2ySong commented Dec 1, 2024

No description provided.


2ySong commented Dec 1, 2024

Also, about your F1 score on the anomaly detection task: after I run it, it displays
Accuracy : 0.8234, Precision : 0.9677, Recall : 0.2300, F-score : 0.3716, instead of the 78.12 reported in the paper. I am a beginner and therefore have doubts. Thank you for your reply.


VEWOXIC commented Dec 7, 2024

I will check the M4 script. If my memory is correct, I directly put the FITS model into a repo that had already implemented the M4/M5 dataloaders for running.


VEWOXIC commented Dec 7, 2024

As for the AD task, there should be a separate threshold-tuning process after training. Maybe you can check that.
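[Editorial note: the tuning step mentioned above is the usual source of the gap between a raw test run and the reported F1. A minimal sketch of what such a tuning pass could look like, assuming a percentile-based threshold driven by an assumed anomaly-ratio hyperparameter; this is an illustration, not the repository's actual code.]

```python
import numpy as np

def tune_threshold(train_scores, test_scores, anomaly_ratio):
    """Pick a threshold so that roughly `anomaly_ratio` percent of
    the combined reconstruction-error scores fall above it."""
    combined = np.concatenate([train_scores, test_scores])
    return np.percentile(combined, 100 - anomaly_ratio)

def f1_from_threshold(scores, labels, threshold):
    """Binarize scores at `threshold` and compute precision/recall/F1
    against ground-truth anomaly labels (1 = anomaly)."""
    pred = (scores > threshold).astype(int)
    tp = np.sum((pred == 1) & (labels == 1))
    precision = tp / max(pred.sum(), 1)
    recall = tp / max(labels.sum(), 1)
    f1 = 2 * precision * recall / max(precision + recall, 1e-12)
    return precision, recall, f1
```

In practice one sweeps `anomaly_ratio` over a small grid and keeps the value giving the best F1, which is why a single fixed threshold (like the 0.1066 printed in the log below) can land far from the paper's number.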


2ySong commented Dec 7, 2024

Thank you very much for your reply. I have one more question, about calculating the number of parameters in FITS. I saw someone on OpenReview mention that complex-valued weights should be counted as double the parameter count, but when I use Lightning or other scripts to count the parameters, each complex weight is counted only once.
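[Editorial note: the discrepancy is easy to reproduce. Standard parameter counters report the number of tensor *elements*, and one complex element stores two real values (real + imaginary parts), so the real-valued count is exactly double. A small numpy illustration, using the freq_upsampler shape (50 → 200) printed in the log below; the complex dtype is an assumption for illustration.]

```python
import numpy as np

# A complex linear layer of the freq_upsampler's shape,
# stored as complex64 (two float32 values per element).
weight = np.zeros((200, 50), dtype=np.complex64)
bias = np.zeros(200, dtype=np.complex64)

# What Lightning-style counters report: element count.
n_params = weight.size + bias.size   # 200*50 + 200 = 10200 elements

# Each complex64 element holds a real and an imaginary float32,
# so the real-valued parameter count is doubled.
n_real = 2 * n_params                # 20400 real parameters
bytes_per_complex = weight.itemsize  # 8 bytes = 2 x float32
```

So both numbers are "correct": the counter reports complex elements, while the OpenReview comment counts the underlying real parameters.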


2ySong commented Dec 7, 2024

> As for the AD task, there should be a separate threshold adjustment process after training. Maybe you can check it.

individual: False
input_c: 55
k: 3
lr: 0.0001
mode: test
model_save_path: checkpoints
num_epochs: 10
output_c: 55
plot: False
pretrained_model: 20
win_size: 400
-------------- End ----------------
test: (73729, 1)
train: (58317, 1)
test: (73729, 1)
train: (58317, 1)
test: (73729, 1)
train: (58317, 1)
test: (73729, 1)
train: (58317, 1)
Model(
  (freq_upsampler): Linear(in_features=50, out_features=200, bias=True)
)
======================TEST MODE======================
Threshold : 0.10661790966987611
(73330,) (73330,)
pred:    (73330,)
gt:      (73330,)
pred:  (73330,)
gt:    (73330,)
Accuracy : 0.7846, Precision : 0.6892, Recall : 0.6438, F-score : 0.6657 

These are my run results. I can't understand the relation between "Accuracy : 0.7846, Precision : 0.6892, Recall : 0.6438, F-score : 0.6657" and Tab. 22 [screenshot of Tab. 22 omitted]. Thanks.


VEWOXIC commented Dec 8, 2024

I am not sure which dataset you are experimenting on. And there should be an anomaly-rate tuning process like this one to get the correct threshold.


2ySong commented Dec 8, 2024

> I am not sure which dataset you are experimenting on. There should be an anomaly-rate adjustment process to get the correct threshold.

Yeah, I followed your script on the MSL dataset. I got 20 logs after the test. [screenshot omitted]
