Support for multiple views? #196
Comments
Hi @vigji, thanks for bringing this up! What you are asking for is different: you want to be able to load multiple 2D camera views of the same animal and keypoints, right? I can see why this is helpful. For example, it would enable us (or the users) to apply custom triangulation functionality, or to employ clever outlier-detection tools that rely on multi-view consistency (see #145). It would be useful to hear what sort of functionality you have in mind for such data in movement. On the technical level, it would be feasible. Probably the "cleanest" option would be to add an optional dimension for the camera views.
A question regarding the practicalities: are such multi-view predictions typically stored in a single file or as separate files? In the second case, the loader would have to accept a list of files as input. An alternative would be to load one view the "usual" way, and then have a way of adding further views to the existing dataset.
My idea would be to stick to …
Yes, they are stored as separate files! A loader accepting an iterable of files would be my favourite option, although a …
Hmm, I like the dict-of-files idea. The signature could look something like `def from_multi_view(files_dict: dict[str, Path], source_software: str, fps: float):`. Under the hood, this would call the existing single-file loader for each view and concatenate the resulting datasets along a new dimension. I like this option, because it means the multi-view case stays a thin wrapper around the loaders we already have.
Do you want to take a shot at implementing it, @vigji? You already seem to have relevant test data and some concatenation code that works. I'm also fine with you opening a draft PR with a quick-and-dirty implementation, and we can help you refine it (or we can do the refinement for you if you wish).
With regards to the triangulation, I guess you'd also need to load camera calibration parameters for that to work. If you end up doing something like that with movement, do let us know.
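For reference, a minimal sketch of what such a wrapper could look like, assuming movement's existing `load_poses.from_file` loader and an xarray concatenation along a new view dimension; the function name, the `"view"` dimension label, and the usage values are illustrative, not the project's settled API:

```python
from pathlib import Path

import xarray as xr
from movement.io import load_poses


def from_multi_view(
    files_dict: dict[str, Path], source_software: str, fps: float
) -> xr.Dataset:
    """Load one poses dataset per camera view and stack them along a 'view' dimension."""
    per_view = []
    for view_name, file_path in files_dict.items():
        ds = load_poses.from_file(file_path, source_software, fps=fps)
        # Tag each dataset with its view name before concatenating
        per_view.append(ds.expand_dims({"view": [view_name]}))
    return xr.concat(per_view, dim="view")


# Hypothetical usage:
# ds = from_multi_view(
#     {"cam_top": Path("top.h5"), "cam_side": Path("side.h5")},
#     source_software="DeepLabCut",
#     fps=30.0,
# )
```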
To clarify, if this can be achieved by …
Yes, happy to try that! :)
I am currently using some triangulation code that people have kindly shared with me. I'll bring the topic up with them and get back to you on this point, as I do not want to appropriate other people's pipeline! Will let you know.
Of course! There's also no absolute need or rush on this.
Is it worth reopening this? Others may want to chime in about this general concept.
Also, I know that this is not the right place for the question, so feel free to redirect me: would you consider a native way of dropping the …
There are some xarray-native methods to do that.
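For illustration, a couple of xarray-native ways to drop things from a dataset, shown on a toy dataset mimicking movement's layout; which dimension or labels the question refers to is an assumption here, since the original message is truncated:

```python
import numpy as np
import xarray as xr

# Toy dataset with movement-like dimensions (names are illustrative assumptions)
ds = xr.Dataset(
    {
        "position": (
            ("time", "individuals", "keypoints", "space"),
            np.zeros((10, 1, 2, 2)),
        )
    },
    coords={
        "individuals": ["animal_0"],
        "keypoints": ["snout", "tail_base"],
        "space": ["x", "y"],
    },
)

# Remove a size-1 dimension (e.g. a single individual)
ds_single = ds.squeeze("individuals", drop=True)

# Drop specific labels along a dimension (e.g. an unwanted keypoint)
ds_no_tail = ds.drop_sel(keypoints="tail_base")
```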
Also, FYI we have a Zulip channel where you are welcome to ask all such questions, or to discuss anything movement-related.
Hi people! Thanks for getting this going; a tool like this is truly needed!
Is your feature request related to a problem? Please describe.
I have a dataset with single animals detected from multiple cameras. As far as I can see, there is no option for a "views" dimension in the default data format. (I tried to look for discussions of this issue; sorry if I missed something.)
Describe the solution you'd like
A syntax within the movement framework to load together multiple files referring to different cameras in the same experiment (I don't know what the long-term plans are in terms of file structures to load, so I am not necessarily committed to my current organisation).
Describe alternatives you've considered
I am currently loading them and concatenating the xarrays after loading (which is totally reasonable, so no rush for this!)
Additional context
This kind of data structure would be needed by anyone in the process of extracting 3D poses. It is only needed as long as you do not yet have 3D coordinates, which in my case (and I assume in anyone's) is the end goal of having multiple views. So, if you want to support only the final, "clean" data where some triangulation of sorts has already happened, I would understand!
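As context for what that triangulation step involves, here is a minimal linear-triangulation (DLT) sketch, assuming calibrated 3x4 projection matrices are available for each view; this is generic multi-view geometry, not code from this project or from the pipeline mentioned above:

```python
import numpy as np


def triangulate_point(proj_matrices, points_2d):
    """Linear (DLT) triangulation of a single 3D point from two or more views.

    proj_matrices: list of 3x4 camera projection matrices (from calibration)
    points_2d: list of (x, y) pixel coordinates of the same keypoint, one per view
    """
    rows = []
    for P, (x, y) in zip(proj_matrices, points_2d):
        # Each view contributes two linear constraints on the homogeneous 3D point
        rows.append(x * P[2] - P[0])
        rows.append(y * P[2] - P[1])
    A = np.stack(rows)
    # The solution is the right singular vector with the smallest singular value
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]  # dehomogenise to (x, y, z)
```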