RuntimeError: Expected to have finished reduction in the prior iteration before starting a new one #331
Comments
I found that an odd-numbered dataset reproduces the above error; an even-numbered dataset is OK.
@feitiandemiaomi the error doesn't look familiar. But you do need Python 3.7 now; I see you are on 3.6.
I also ran into this problem. Just as @feitiandemiaomi said, adjusting the dataset to an even total number is a temporary workaround. There may be a bug?
@remindchobits It may be related to 'Accumulate gradient for x batches before optimizing'.
Hi @feitiandemiaomi I met the same issue when training a custom model. |
@feitiandemiaomi @gwestner94 the accumulate variable is the number of batches to accumulate before an optimizer step. So, for example, batch_size 16 with accumulate 4 produces an effective batch_size of 64 (one optimizer step every 64 images).
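A minimal sketch of that accumulation pattern in PyTorch (the model, optimizer, and data below are placeholders for illustration, not the repo's actual training loop):

```python
import torch

accumulate = 4   # accumulate gradients over this many batches
batch_size = 16  # effective batch size = 16 * 4 = 64

# placeholder model, optimizer, and random data
model = torch.nn.Linear(10, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
data = [(torch.randn(batch_size, 10), torch.randn(batch_size, 1)) for _ in range(8)]

optimizer.zero_grad()
for i, (imgs, targets) in enumerate(data):
    loss = torch.nn.functional.mse_loss(model(imgs), targets)
    loss.backward()                # gradients keep accumulating in .grad
    if (i + 1) % accumulate == 0:  # one optimizer step every `accumulate` batches
        optimizer.step()
        optimizer.zero_grad()
```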
@gwestner94 I cannot find a solution; it's strange. If you find one, please share it with us, thanks a lot.
@glenn-jocher I see, could you reproduce the above error by using odd number dataset? |
@feitiandemiaomi @remindchobits this repository is verified working as intended on COCO2014, which is by default an odd-numbered dataset with 117263 images. Please note that most technical problems are due to:
sudo rm -rf yolov3 # remove existing repo
git clone https://github.com/ultralytics/yolov3 && cd yolov3 # git clone latest
python3 detect.py # verify detection
python3 train.py # verify training (a few batches only)
# CODE TO REPRODUCE YOUR ISSUE HERE
If none of these apply to you, we suggest you close this issue and raise a new one using the Bug Report template, providing screenshots and minimum viable code to reproduce your issue. Thank you!
@feitiandemiaomi
Hi, my friend, when I train on my data, a strange error happens, as follows:
Traceback (most recent call last):
File "train.py", line 342, in <module>
accumulate=opt.accumulate,
File "train.py", line 224, in train
pred = model(imgs)
File "/home/data/anaconda3/envs/caffe2_py36/lib/python3.6/site-packages/torch/nn/modules/module.py", line 493, in call
result = self.forward(*input, kwargs)
File "/home/data/anaconda3/envs/caffe2_py36/lib/python3.6/site-packages/torch/nn/parallel/distributed.py", line 392, in forward
self.reducer.prepare_for_backward([])
RuntimeError: Expected to have finished reduction in the prior iteration before starting a new one. This error indicates that your module has parameters that were not used in producing its output (the return value of forward). You can enable unused parameter detection by passing the keyword argument find_unused_parameters=True to torch.nn.parallel.DistributedDataParallel. If you already have this argument set, then the distributed data parallel module wasn't able to locate the output tensors in the return value of your module's forward function. Please include the structure of the return value of forward of your module when reporting this issue (e.g. list, dict, iterable). (prepare_for_backward at /pytorch/torch/csrc/distributed/c10d/reducer.cpp:408)
frame #0: std::function<std::string ()>::operator()() const + 0x11 (0x7f6a5435c441 in /home/data/anaconda3/envs/caffe2_py36/lib/python3.6/site-packages/torch/lib/libc10.so)
frame #1: c10::Error::Error(c10::SourceLocation, std::string const&) + 0x2a (0x7f6a5435bd7a in /home/data/anaconda3/envs/caffe2_py36/lib/python3.6/site-packages/torch/lib/libc10.so)
frame #2: c10d::Reducer::prepare_for_backward(std::vector<torch::autograd::Variable, std::allocator<torch::autograd::Variable> > const&) + 0x5ec (0x7f6a54e8283c in /home/data/anaconda3/envs/caffe2_py36/lib/python3.6/site-packages/torch/lib/libtorch_python.so)
frame #3: + 0x6c52bd (0x7f6a54e782bd in /home/data/anaconda3/envs/caffe2_py36/lib/python3.6/site-packages/torch/lib/libtorch_python.so)
frame #4: + 0x130cfc (0x7f6a548e3cfc in /home/data/anaconda3/envs/caffe2_py36/lib/python3.6/site-packages/torch/lib/libtorch_python.so)
frame #33: __libc_start_main + 0xf0 (0x7f6a594a8830 in /lib/x86_64-linux-gnu/libc.so.6)
frame #34: python() [0x4009e9]
This is a puzzling problem, because when I use my second dataset it works fine. I have tried making a dataset that includes only one image, but it still fails. Is it related to the size of the dataset or the batch size?
I don't know why it happens or how to solve it; could you give me some advice? Thanks a lot.
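As the error message itself suggests, unused parameter detection can be enabled when wrapping the model in DistributedDataParallel. A minimal sketch under that assumption (single-process gloo setup and a placeholder model for illustration, not the repo's actual training code):

```python
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

# single-process setup purely for illustration; real training would
# launch multiple processes (e.g. via torch.distributed.launch)
os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
os.environ.setdefault("MASTER_PORT", "29500")
dist.init_process_group(backend="gloo", rank=0, world_size=1)

model = torch.nn.Linear(10, 1)  # placeholder model, not the YOLOv3 Darknet model
# find_unused_parameters=True lets the reducer tolerate parameters that did not
# contribute to the forward output, which is what the error message points to
ddp_model = DDP(model, find_unused_parameters=True)

out = ddp_model(torch.randn(4, 10))
out.sum().backward()
dist.destroy_process_group()
```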