Feature/remove ultralytics #98
Conversation
Codecov Report
@@            Coverage Diff             @@
##           develop      #98      +/-   ##
===========================================
+ Coverage    62.29%   65.60%   +3.30%
===========================================
  Files            5        7       +2
  Lines          183      250      +67
===========================================
+ Hits           114      164      +50
- Misses          69       86      +17
Flags with carried forward coverage won't be shown.
Thanks a lot!
I think we could make a few modifications to get something really neat :)
y = self.session.run(["output"], {"images": x.numpy()})[0]
output = non_max_suppression(torch.tensor(y), self.conf_thres)[0].numpy()
Do we need to convert it back to a PyTorch tensor? Can't we do it with NumPy? I'd suggest implementing the utils in pure NumPy to get rid of PyTorch & torchvision (ONNX works with NumPy arrays either way, not PyTorch tensors). Something along the lines of the sketch below.
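Just a rough sketch of what a pure-NumPy NMS could look like — the function name `nms_numpy`, the `(x1, y1, x2, y2)` box format, and the default threshold are assumptions for illustration, not code from this PR:

```python
import numpy as np

def nms_numpy(boxes, scores, iou_thres=0.45):
    """Greedy non-maximum suppression on (x1, y1, x2, y2) boxes, NumPy only."""
    x1, y1, x2, y2 = boxes[:, 0], boxes[:, 1], boxes[:, 2], boxes[:, 3]
    areas = (x2 - x1) * (y2 - y1)
    order = scores.argsort()[::-1]  # box indices sorted by descending score
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        # Intersection of the highest-scoring box with all remaining boxes
        xx1 = np.maximum(x1[i], x1[order[1:]])
        yy1 = np.maximum(y1[i], y1[order[1:]])
        xx2 = np.minimum(x2[i], x2[order[1:]])
        yy2 = np.minimum(y2[i], y2[order[1:]])
        inter = np.clip(xx2 - xx1, 0, None) * np.clip(yy2 - yy1, 0, None)
        iou = inter / (areas[i] + areas[order[1:]] - inter)
        # Drop boxes that overlap the kept box more than the threshold
        order = order[1:][iou < iou_thres]
    return np.array(keep)
```

Confidence filtering would still have to happen before calling this, as ultralytics' non_max_suppression does internally.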
This is not relevant anymore after #101
The goal of this PR is to get rid of the ultralytics repo and all its dependencies, replacing them with a class that performs inference directly from the ONNX model.
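For reference, a minimal sketch of what such a wrapper class might look like with onnxruntime — the class name `OnnxDetector`, the single-input handling, and the default thresholds are illustrative assumptions, not necessarily the actual implementation in this PR:

```python
import numpy as np
import onnxruntime as ort

class OnnxDetector:
    """Runs a detection model exported to ONNX, without the ultralytics repo."""

    def __init__(self, model_path, conf_thres=0.25, iou_thres=0.45):
        self.session = ort.InferenceSession(model_path)
        self.input_name = self.session.get_inputs()[0].name
        self.conf_thres = conf_thres
        self.iou_thres = iou_thres

    def __call__(self, image):
        # image: float32 array of shape (1, 3, H, W), already preprocessed
        raw = self.session.run(None, {self.input_name: image})[0]
        # Post-processing (confidence filtering + NMS) would go here,
        # e.g. a pure-NumPy non-max suppression as discussed above.
        return raw
```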