
Update Log #2

ecr23xx opened this issue Oct 10, 2018 · 4 comments
Labels: good first issue (Good for newcomers)

ecr23xx commented Oct 10, 2018

This is the update log of this repo. Before using this repo, please check this log first to make sure you have the right version, because the repo is under construction and might not work at every commit.

ecr23xx added the good first issue label Oct 10, 2018

ecr23xx commented Oct 10, 2018

2018/10/07

⚠️ mAP computation does not seem very accurate

Rolled back and fixed an evaluation error. The loss function is still not working, but evaluation now works.
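
For reference, one common source of mAP discrepancies is which interpolation scheme the evaluation uses. A minimal sketch of VOC2007-style 11-point interpolated AP (my own illustration, not this repo's evaluation code):

```python
def voc_ap_11point(recalls, precisions):
    """11-point interpolated average precision (VOC2007 style).

    recalls/precisions: parallel lists of operating points, typically
    produced by sweeping the detection confidence threshold.
    """
    ap = 0.0
    for t in [i / 10 for i in range(11)]:  # thresholds 0.0, 0.1, ..., 1.0
        # max precision over all points with recall >= t (0 if none exists)
        p = max((p for r, p in zip(recalls, precisions) if r >= t), default=0.0)
        ap += p / 11
    return ap
```

Newer benchmarks (and VOC after 2010) integrate over all recall points instead, so two otherwise-correct evaluators can disagree on mAP.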


ecr23xx commented Oct 10, 2018

2018/10/08

The loss function seems to work; further testing is needed. Details about training can be found in issue #1


ecr23xx commented Oct 10, 2018

2018/10/10

Updates in current commit

  1. Support training on the VOC dataset. Training results can't be guaranteed yet, as the loss function still doesn't seem to work very well. Note that multi-dataset training is not supported; if you want to train on COCO, you may have to modify train.py and dataset.py.
  2. Create a transfer.py file that converts a Darknet .weights file into a PyTorch-readable checkpoint file
  3. There's a small change in the load_weights function: I changed the header length from 5 to 4, though I don't know whether it will affect the results.
  4. Fix learning rate warm-up. In the previous commit, the learning rate stayed constant.
    # learning rate warm-up: ramp lr up over the first 1000 batches of epoch 0
    if epoch == 0 and batch_idx <= 1000:
      lr = args.lr * (batch_idx / 1000) ** 4
      for g in optimizer.param_groups:
        g['lr'] = lr

Plan for next commit

Training on VOC. Because the VOC dataset is much smaller than COCO (VOC has only 20 classes while COCO has 80), it might be a good starting point for this YOLOv3 implementation. I'm still not sure whether the loss function works.


ecr23xx commented Oct 23, 2018

2018/10/23

It's been a while since the last log, so I'll sum up the recent commits in one entry.

Updates in recent commits

  1. Support multi-GPU training, which greatly accelerates the training process. Taking VOC as an example, with 3 GTX 1080 Ti cards I can finish one epoch in a few minutes. You can now specify the GPU ids you want to use, like --gpu=2,1,0, to train the model on multiple GPUs.
  2. Add a learning rate scheduler and train the model for 90 epochs. The loss converges nicely, which suggests the optimizer works well.

    However, while the generated results look similar to those in issue Training from scratch #1, the training problems are not fixed yet.
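
A minimal sketch of how a --gpu flag like the one above is typically wired up (the helper name is my own illustration, not necessarily this repo's code):

```python
def parse_gpu_ids(gpu_arg):
    """Turn a flag value like '2,1,0' into a list of device ids."""
    return [int(i) for i in gpu_arg.split(',') if i.strip() != '']

# With PyTorch, the ids would then typically be handed to DataParallel:
#   ids = parse_gpu_ids(args.gpu)
#   model = torch.nn.DataParallel(model, device_ids=ids).to(f'cuda:{ids[0]}')
```

DataParallel splits each batch across the listed devices and gathers the outputs on the first id, which is why the order in --gpu=2,1,0 matters.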

Plan for next commit

Modify the loss function. In the original implementation, it seems that predictions with high objectness confidence (i.e. those overlapping a ground-truth box well) are ignored by the no-object loss, so the objectness classifier isn't suppressed on good predictions.
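
The ignore rule described above can be sketched like this (plain Python for illustration; the 0.5 threshold matches the YOLOv3 paper, not necessarily this repo, and boxes are assumed to be (x1, y1, x2, y2)):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / float(area_a + area_b - inter)

def noobj_mask(predictions, ground_truths, ignore_thresh=0.5):
    """True where a prediction should contribute to the no-object loss.

    Predictions overlapping any ground truth above ignore_thresh are
    excluded (False) instead of being penalized as background.
    """
    return [all(iou(p, gt) <= ignore_thresh for gt in ground_truths)
            for p in predictions]
```

Without this mask, confident predictions near (but not assigned to) a ground-truth box get pushed toward zero objectness, which fights the positive loss term.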
