Viewpoint difference between event and RGB camera on the same side #12
Comments
Update: pixel-to-pixel alignment should be possible; we will only need the calibration pattern to do this. I wonder if you could provide the calibration images (after rectification) for both the event camera (cam0) and the RGB camera (cam1). Thanks again!
Hi @RunqiuBao First, you need to make use of the intrinsic and extrinsic parameters that are provided in the calibration file of each sequence. In addition, you need depth information to map from images to event cameras, which you can compute using known stereo matching approaches, for example. I answered a related question in another issue: #11 (comment). Let me know if you still have questions.
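The depth-based mapping described above can be sketched as follows. This is an illustrative sketch, not code from the dataset's tooling: `K_img`, `K_ev`, and `T_ev_img` are hypothetical names for the image intrinsics, event-camera intrinsics, and the 4x4 image-to-event transformation, respectively.

```python
import numpy as np

def reproject_with_depth(u, v, depth, K_img, K_ev, T_ev_img):
    """Back-project an image pixel using its depth, transform the 3D
    point into the event-camera frame, and project it again.
    T_ev_img is a 4x4 matrix mapping points from the image-camera
    frame to the event-camera frame (illustrative name)."""
    ray = np.linalg.inv(K_img) @ np.array([u, v, 1.0])  # viewing ray
    P_img = np.append(depth * ray, 1.0)   # 3D point, homogeneous coords
    P_ev = T_ev_img @ P_img               # point in event-camera frame
    p = K_ev @ P_ev[:3]                   # project into event image
    return p[0] / p[2], p[1] / p[2]
```

Without a per-pixel depth map (e.g. from stereo matching), this mapping is underdetermined, which is why the pure-rotation shortcut discussed below is tempting despite its limitations.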
Thanks for the reply, @magehrig! I totally understand your point. However, existing stereo matching approaches usually give noisy results, and I am afraid such a mapping from image to event camera would not turn out to be useful. My point is: since the left event camera and the left Blackfly are very near each other (4 cm), we can assume there is only a pure rotation between their poses. Therefore, we should be able to align them directly with a simple homography. Sorry, I did not notice that #11 was a similar issue.
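The pure-rotation idea above corresponds to the infinite homography H = K_ev · R · K_rgb⁻¹, which is exact only when the baseline is zero (or the scene is at infinity); with a 4 cm baseline it leaves a depth-dependent parallax error. A minimal sketch, with illustrative variable names:

```python
import numpy as np

def rotation_homography(K_ev, K_rgb, R_ev_rgb):
    """Infinite homography mapping RGB pixels to event-camera pixels.
    Valid only under the pure-rotation assumption, i.e. ignoring the
    translation between the two cameras."""
    return K_ev @ R_ev_rgb @ np.linalg.inv(K_rgb)

def warp_pixel(H, u, v):
    """Apply a 3x3 homography to a pixel coordinate."""
    p = H @ np.array([u, v, 1.0])
    return p[0] / p[2], p[1] / p[2]
```

For nearby scene content the neglected translation produces misalignment proportional to baseline/depth, which may explain why a homography alone cannot achieve pixel-to-pixel alignment here.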
The calibration files already contain all the transformations: in `extrinsics`, `T_10` transforms points from the left distorted event camera coordinate frame to the left distorted standard camera frame. That's all you need.
Let me know if this answers your question or maybe I should also improve the documentation if there is something missing or ambiguous/unclear. |
Hi @magehrig, about the sensors' FoV alignment: I know the extrinsics are available, but I am afraid they are not accurate enough (sorry, but I have spent two days on this and am quite sure it is not working). For example, the following figures are from interlaken_00_c, both left and right side. The RGB cameras are clearly looking at a lower angle than the event cameras, and after reprojection with the extrinsics you provided they are still looking at a lower angle, with little improvement. I can provide my test script for validation if necessary. By the way, I am using 'interlaken/interlaken_00_c_images_rectified_left/000000.png' and 'interlaken/interlaken_00_c_images_rectified_right/000000.png', as well as the corresponding events of the first 25 ms stacked into frames, for this test. I suppose they are at the same time point, aren't they?
@RunqiuBao I am trying to solve the same issue. Could you please share your code (if possible), so that I can also try to find out the actual issue?
I need a bit more information about how you compute these results. Can you show me how you get the transformations for this warping? I will give you an example of warping a point from the left image coordinate system to the left event coordinate system: i.e., you want to compute the transformation from the rectified image frame to the rectified event frame. Next, we relate that transformation to the transformations in the calibration file, and then we can transform a 3D point in the rectified image coordinate system to the rectified event coordinate system. [The equation images from the original comment are not reproduced here.] Is this how you did it? Here is the rosbag that was used for calibration with kalibr for the interlaken_00 sequence: https://download.ifi.uzh.ch/rpg/tmp/interlaken_00_kalibr.bag. I will leave the file up for a few days and then delete it, so download it as soon as possible if you want to use it.
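One plausible composition of these transformations, consistent with the convention that `T_10` maps distorted event-camera (cam0) points to the distorted standard-camera (cam1) frame, is sketched below. The names `R_rect0` and `R_rect1` for the per-camera rectification rotations are assumptions; check them against the actual calibration-file keys.

```python
import numpy as np

def as_4x4(R):
    """Embed a 3x3 rotation into a 4x4 homogeneous transformation."""
    T = np.eye(4)
    T[:3, :3] = R
    return T

def T_rect0_rect1(T_10, R_rect0, R_rect1):
    """Rectified-image (cam1) -> rectified-event (cam0) transformation.
    T_10: 4x4, distorted cam0 -> distorted cam1.
    R_rect0, R_rect1: 3x3 rectification rotations (assumed names)."""
    T_01 = np.linalg.inv(T_10)  # distorted cam1 -> distorted cam0
    return as_4x4(R_rect0) @ T_01 @ np.linalg.inv(as_4x4(R_rect1))
```

A 3D point expressed in the rectified cam1 frame can then be multiplied by this matrix and projected with the rectified event-camera intrinsics.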
Hi @magehrig, thanks a lot for the calibration patterns! About computing the transformations for the warping: yes, I used exactly the same equations as you kindly showed above. Regards.
@RunqiuBao Thanks for sharing. I will also try, and will share if I find any breakthrough.
a) Did you rectify the events using the provided rectification map? This is not visible in the code. I will have a look at your code in more detail as soon as I have time, but these are two sources of error I identified from a quick look.
Hi @magehrig, a) I did rectify the events: I used the "Sequence" class you provided in the tool scripts to load the stacked event frame, which rectifies the events by default. This part is not included in the code I uploaded. Thanks.
Regarding b): this is an error; that won't work. Use my previously posted approach instead to avoid resizing. The reason is that by resizing the image you are essentially changing the intrinsics, so the pixel warping is wrong.
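The point about resizing can be made concrete: scaling an image by factors (sx, sy) scales the focal lengths and principal point of its intrinsic matrix, so warping resized pixels with the original K is inconsistent. A minimal sketch:

```python
import numpy as np

def scale_intrinsics(K, sx, sy):
    """Intrinsics of an image resized by (sx, sy): focal lengths and
    principal point scale; warping resized pixels with the original
    K would be wrong."""
    S = np.diag([sx, sy, 1.0])
    return S @ K
```

This is why avoiding the resize altogether, as suggested above, sidesteps the problem entirely.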
Hi, @magehrig |
No worries, happy to help. |
Hello, I noticed that there is quite a large viewpoint difference between the event camera (rectified) and the RGB image (rectified) on the same side; see, for example, the following alpha-blended image of Cam0_rect and Cam1_rect.
This can be a problem if somebody wants to compare disparity maps between the event camera and the RGB camera.
Personally, I think it could be solved by reprojecting the two cameras' views to the same attitude (so that the views of cam0_rect and cam1_rect are completely aligned, pixel to pixel). But with the extrinsics you provide I could not achieve that. I wonder if you have tried this? Are the extrinsics between the event and RGB cameras accurate enough?
Thanks a lot!
