sam2 real time inference? #388
The existing code doesn't support live/streaming video. Instead, you'd have to first record a video from your webcam and then process it with the existing video predictor and/or web demo. It is possible to handle streaming video by modifying the code: it's mostly a matter of re-implementing the existing frame loop outside of the sam2_video_predictor.py script and handling frame reading (e.g. from a webcam) inside that new loop. If you don't want to implement this yourself, there is an existing repo that has already done it:
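To make the "lift the frame loop out of the predictor" idea concrete, here is a minimal sketch of the streaming pattern. It does not use the real SAM 2 API; `StreamingTracker` and its `step` method are hypothetical stand-ins for a per-frame propagation call, and the rolling memory size of 7 mirrors SAM 2's memory-bank idea only loosely.

```python
from collections import deque


class StreamingTracker:
    """Hypothetical stand-in for a SAM2-style predictor whose per-frame
    step has been lifted out of the offline frame loop."""

    def __init__(self, memory_size=7):
        # Rolling memory of recent frames (stands in for SAM2's memory bank).
        self.memory = deque(maxlen=memory_size)
        self.frame_idx = 0

    def step(self, frame):
        # In a real adaptation this would run the model's single-frame
        # propagation; here we only record the frame and index.
        self.memory.append(frame)
        result = {"frame_idx": self.frame_idx, "mask": None}
        self.frame_idx += 1
        return result


def run_stream(frames):
    """Drive the tracker from any iterable frame source: a webcam capture
    loop, a video file reader, or (as in tests) a plain generator."""
    tracker = StreamingTracker()
    return [tracker.step(frame) for frame in frames]
```

With a webcam you would replace the iterable with a capture loop (e.g. OpenCV's `cv2.VideoCapture(0)`, reading frames until the stream ends) and call `tracker.step(frame)` on each frame as it arrives.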
We also adapted the code in video_predictor.py for real-time tracking. This implementation can easily be paired with YOLO for real-time tracking of detected objects; however, the number of objects to track must be specified at initialization. The code can be found here: the core tracking logic is implemented in object_tracker.py. See the following notebook for a usage example:
Can someone explain how to use SAM 2 for real-time inference, so that the model can predict objects live while my webcam is open?