Hi, thanks for sharing your work! I am currently developing my own data capture system and would like to ask some specific questions about your implementation:
(1) Camera and Optical Marker Selection
The paper mentions the use of 54 Vicon MoCap cameras. Does this imply that the MoCap cameras were used exclusively for obtaining ground truth (GT) data, while additional RGB cameras were employed to build the actual dataset? If so, why were additional cameras necessary for RGB video capture? Were the Vicon camera frames insufficient for this purpose?
Could you share the specific RGB camera and optical marker products you used in your data collection system? The paper mentions the 54 Vicon MoCap cameras, but it does not name the specific marker product (it only states the markers are 1.5 mm) or the exact RGB camera model (e.g., FLIR).
Is it necessary to pair markers and cameras from the same brand (e.g., OptiTrack), or can we mix different products while still ensuring accurate 6-DoF pose estimation?
(2) Marker Placement Rules for Objects
How many markers did you typically attach to each object?
Are there any specific rules or guidelines (e.g., marker constellation design) for placing markers to achieve good tracking accuracy?
(3) Defining the Object's 6-DoF Pose
In your project, since the objects consist of two rigid parts, does your approach to object pose tracking involve fitting the marker points separately for each part? In other words, do you track the object by estimating the pose of each rigid part individually rather than as a single entity?
If relevant, could you point to specific functions or sections of your codebase where this computation is implemented?
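For concreteness, here is a minimal sketch of what I mean by fitting each part separately: a standard Kabsch/Umeyama alignment of the observed marker positions to a per-part reference constellation, run once per rigid part per frame. The function names, variable names, and the 4-marker constellation below are just my own illustration and assumptions, not taken from your paper or codebase:

```python
import numpy as np

def fit_rigid_transform(ref_markers, obs_markers):
    """Kabsch/Umeyama fit: find R, t minimizing ||R @ ref_i + t - obs_i||.

    ref_markers, obs_markers: (N, 3) arrays of corresponding marker positions
    (N >= 3, non-collinear). Returns rotation R (3x3) and translation t (3,).
    """
    ref_c = ref_markers.mean(axis=0)
    obs_c = obs_markers.mean(axis=0)
    # Cross-covariance of centered point sets.
    H = (ref_markers - ref_c).T @ (obs_markers - obs_c)
    U, _, Vt = np.linalg.svd(H)
    # Reflection correction so R is a proper rotation (det = +1).
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = obs_c - R @ ref_c
    return R, t

if __name__ == "__main__":
    # Hypothetical reference constellation for one rigid part of the object,
    # defined in that part's local frame (units: meters).
    ref_part = np.array([[0.00, 0.00, 0.00],
                         [0.05, 0.00, 0.00],
                         [0.00, 0.04, 0.01],
                         [0.02, 0.02, 0.03]])

    # Simulated MoCap observation of that part for one frame.
    angle = np.deg2rad(30.0)
    R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                       [np.sin(angle),  np.cos(angle), 0.0],
                       [0.0,            0.0,           1.0]])
    t_true = np.array([0.10, -0.05, 0.30])
    obs_part = (R_true @ ref_part.T).T + t_true

    R_est, t_est = fit_rigid_transform(ref_part, obs_part)
    print(np.allclose(R_est, R_true), np.allclose(t_est, t_true))
```

In this reading, each rigid part would get its own 6-DoF pose per frame, and the articulation angle could then be recovered from the relative transform between the two parts. I would like to confirm whether this matches what you actually do.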
Thank you in advance for your guidance! Your insights will be highly valuable for my development process.