ValueError: All bounding boxes should have positive height and width. Found invalid box [500.728515625, 533.3333129882812, 231.10546875, 255.2083282470703] for target at index 0. #2740
I guess you have a degenerate box case. The boxes should be in the format (xmin, ymin, xmax, ymax) for FRCNN to work.
Hi, the answer from @oke-aditya is correct. You are probably passing bounding boxes to the model in a format other than (xmin, ymin, xmax, ymax), for example (x, y, width, height); changing this should fix the issue. By the way, we have recently added box conversion utilities to torchvision (thanks to @oke-aditya); they can be found in vision/torchvision/ops/boxes.py, lines 137 to 156 at commit a98e17e.
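For illustration, a minimal sketch of using one of those utilities, torchvision.ops.box_convert, to change box formats (the box values below are made up, not from this issue):

```python
import torch
from torchvision.ops import box_convert

# Hypothetical boxes stored as (x, y, width, height)
boxes_xywh = torch.tensor([[100.0, 200.0, 50.0, 80.0]])

# Convert to the (xmin, ymin, xmax, ymax) format Faster R-CNN expects
boxes_xyxy = box_convert(boxes_xywh, in_fmt="xywh", out_fmt="xyxy")
print(boxes_xyxy)  # tensor([[100., 200., 150., 280.]])
```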
So should I change my XML file format?
@kashf99 this question is better suited to the detecto repo, since this is part of their API: https://github.com/alankbi/detecto
Ok, thank you.
Yeah, thank you. It worked, but it's very slow, and I get a warning that the overload of nonzero is deprecated.
This has been fixed in torchvision master since #2705.
Hi @fmassa. I am also getting the same error, but I passed [xmin, ymin, xmax, ymax] to the model. Can someone help me out?
Can you post details so that we can reproduce the issue?
@oke-aditya what should I share, code or abstract details?
Any code sample that can help people reproduce the error you get.
boxes.append([xmin, ymin, xmax, ymax])
@MALLI7622 make sure that xmax > xmin and ymax > ymin for every box you pass to the model.
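A small sanity check along those lines (a sketch, not from this thread) that mirrors the validation performed inside generalized_rcnn.py:

```python
import torch

def find_degenerate_boxes(boxes: torch.Tensor) -> torch.Tensor:
    """Return indices of boxes whose width or height is not positive.

    `boxes` is an (N, 4) tensor in (xmin, ymin, xmax, ymax) format.
    """
    widths = boxes[:, 2] - boxes[:, 0]
    heights = boxes[:, 3] - boxes[:, 1]
    bad = (widths <= 0) | (heights <= 0)
    return torch.nonzero(bad).squeeze(1)

boxes = torch.tensor([[10.0, 10.0, 50.0, 60.0],
                      [500.7, 533.3, 231.1, 255.2]])  # second box is degenerate
print(find_degenerate_boxes(boxes))  # tensor([1])
```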
@fmassa I had resolved the issue 4 days back, thanks for your help. I was getting another error with Faster R-CNN: my model was producing these loss values, and I don't know how to resolve it. I changed the class indices to start from 1 instead of 0 and increased the number of output classes by 1 because the labels start at 1. Can you help me resolve this issue? When I predict with this model I don't get anything; it predicts this.
@MALLI7622 this might be due to many things. I would encourage you to start with the finetuning tutorial at https://pytorch.org/tutorials/intermediate/torchvision_tutorial.html, as maybe you are not training for long enough.
@MALLI7622 how did you resolve the issue? I'm having a similar issue with a custom dataset with 39 classes (including background). Any help will do. Thanks.
@clothme-io can you share your sample dataset file and also your custom dataset class? I'll try to help you with it.
@MALLI7622 sure, I can share it here as well as email it to you. And thank you for the help. How I generated the dataset:
Here is my custom dataset class:
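The class itself did not come through above; purely for reference, a minimal torchvision-style detection dataset usually looks roughly like this (the paths, field names, and structure here are illustrative assumptions, not the poster's code):

```python
import torch
from torch.utils.data import Dataset
from PIL import Image

class DetectionDataset(Dataset):
    def __init__(self, image_paths, annotations, transforms=None):
        self.image_paths = image_paths    # list of image file paths
        self.annotations = annotations    # list of dicts with "boxes" and "labels"
        self.transforms = transforms

    def __len__(self):
        return len(self.image_paths)

    def __getitem__(self, idx):
        img = Image.open(self.image_paths[idx]).convert("RGB")
        ann = self.annotations[idx]
        # Boxes must be (xmin, ymin, xmax, ymax); labels start at 1 (0 is background)
        target = {
            "boxes": torch.as_tensor(ann["boxes"], dtype=torch.float32),
            "labels": torch.as_tensor(ann["labels"], dtype=torch.int64),
            "image_id": torch.tensor([idx]),
        }
        if self.transforms is not None:
            img, target = self.transforms(img, target)
        return img, target
```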
Hi, the example in torchvision (from https://pytorch.org/vision/master/models.html#torchvision.models.detection.fasterrcnn_resnet50_fpn) is:

```python
model22 = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)
# For training
images, boxes = torch.rand(4, 3, 600, 1200), torch.rand(4, 11, 4)
# For inference
model22.eval()
# optionally, if you want to export the model to ONNX:
torch.onnx.export(model22, x, "faster_rcnn.onnx", opset_version=11)
```

and I get the same error: ValueError: All bounding boxes should have positive height and width. Found invalid box [0.5358670949935913, 0.6406093239784241, 0.873319149017334, 0.33925700187683105] for target at index 0.
@OrielBanne one of your bounding boxes has a negative height; I would recommend checking your training data.
@OrielBanne yes, I found the same error while using this; maybe producing random bboxes (torch.rand(4, 11, 4)) is creating the problem.
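That is plausible: torch.rand(4, 11, 4) draws all four coordinates independently, so roughly half of the dummy boxes end up with xmax < xmin or ymax < ymin. A sketch (my own suggestion, not from the thread) of one way to build valid random boxes for smoke-testing:

```python
import torch

def random_valid_boxes(num_images=4, num_boxes=11, height=600, width=1200):
    # Draw two x values and two y values per box, then order them so that
    # xmin <= xmax and ymin <= ymax by construction.
    xs = torch.rand(num_images, num_boxes, 2) * width
    ys = torch.rand(num_images, num_boxes, 2) * height
    xmin, xmax = xs.min(dim=-1).values, xs.max(dim=-1).values
    ymin, ymax = ys.min(dim=-1).values, ys.max(dim=-1).values
    return torch.stack([xmin, ymin, xmax, ymax], dim=-1)  # (num_images, num_boxes, 4)

boxes = random_valid_boxes()
assert ((boxes[..., 2] > boxes[..., 0]) & (boxes[..., 3] > boxes[..., 1])).all()
```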
I have a similar issue. I am following this tutorial, Building Your Own Object Detector Pytorch Vs Tensorflow And How To Even Get Started, to use transfer learning to train a custom data set.
Running on:
Clone the github repo of:
Model I am using:
I did manually check the
Gave a print statement inside the
Print statement output of:
I get this error:
I am sure the problem has been addressed long back, looking at the responses given here, but I came across this post on Stack Overflow suffering from the same error: ValueError: All bounding boxes should have positive height and width. Could any of you guide me on what exactly should be changed, and where it has to be changed? I will surely write a Medium blog on PyTorch object detection from custom data using transfer learning after I have sorted out these few minor hiccups. @fmassa I guess you could help me sort this issue out.
Hey @santhoshnumberone, in your bounding box data there are a few datapoints which do not fit the above format; some of them are:
So first you need to check the format of the bounding boxes that you have, and convert them to the (xmin, ymin, xmax, ymax) format; see the conversion utilities in vision/torchvision/ops/boxes.py, lines 137 to 189 at commit a98e17e.
I hope this helps.
Also note that if you are trying to train an object detection model you should use a Faster R-CNN model (for example fasterrcnn_resnet50_fpn) rather than Mask R-CNN, since mask_rcnn is an instance segmentation model which will expect segmentation masks during training.
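For illustration, the difference in the expected training targets looks roughly like this (a sketch with made-up values):

```python
import torch

# Faster R-CNN training targets: boxes and labels only
frcnn_target = {
    "boxes": torch.tensor([[10.0, 20.0, 110.0, 220.0]]),    # (N, 4) in xyxy format
    "labels": torch.tensor([1]),                             # (N,)
}

# Mask R-CNN additionally expects per-instance binary masks during training
maskrcnn_target = {
    **frcnn_target,
    "masks": torch.zeros((1, 600, 800), dtype=torch.uint8),  # (N, H, W)
}
```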
Thank you for highlighting the issue, will look into it.
Can't I? PS: the mask is required to calculate the loss, I guess. I got this error:
I had the same problem. All the images and the masks were fine. For the image augmentation I was using these transforms:
from torchvision.transforms import v2 as T
when "transforms.append(T.RandomRotation(10))" was uncommented, i had an error when i start the training, but when I commented that line the training step was successfully done. |
I am training detecto for custom object detection. Can anyone help me as soon as possible? I will be very grateful to you.
Here is the code:
```python
from detecto import core, utils, visualize

dataset = core.Dataset('content/sample_data/newdataset/car/images/')
model = core.Model(['car'])
model.fit(dataset)
```
Here is the output:
```
ValueError                                Traceback (most recent call last)
in ()
      4 model = core.Model(['car'])
      5
----> 6 model.fit(dataset)

2 frames
/usr/local/lib/python3.6/dist-packages/torchvision/models/detection/generalized_rcnn.py in forward(self, images, targets)
     91                 raise ValueError("All bounding boxes should have positive height and width."
     92                                  " Found invalid box {} for target at index {}."
---> 93                                  .format(degen_bb, target_idx))
     94
     95         features = self.backbone(images.tensors)

ValueError: All bounding boxes should have positive height and width. Found invalid box [500.728515625, 533.3333129882812, 231.10546875, 255.2083282470703] for target at index 0.
```
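The invalid box in that traceback has xmax < xmin and ymax < ymin, which usually means the annotation values are swapped or stored in a different format. If your labels are Pascal VOC-style XML files (the format detecto typically reads from the dataset folder), a small check like the following sketch (not part of detecto; the directory is taken from the snippet above) can locate the offending annotations:

```python
import glob
import xml.etree.ElementTree as ET

# Scan every XML annotation in the dataset folder and report degenerate boxes
for xml_file in glob.glob("content/sample_data/newdataset/car/images/*.xml"):
    root = ET.parse(xml_file).getroot()
    for obj in root.iter("object"):
        bb = obj.find("bndbox")
        xmin, ymin = float(bb.find("xmin").text), float(bb.find("ymin").text)
        xmax, ymax = float(bb.find("xmax").text), float(bb.find("ymax").text)
        if xmax <= xmin or ymax <= ymin:
            print(f"{xml_file}: invalid box ({xmin}, {ymin}, {xmax}, {ymax})")
```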