- We have released a webui version using the Segment Anything Model! Please check it out here: https://github.com/derekray311511/SAM-webui
The code requires python>=3.8, as well as pytorch>=1.7 and torchvision>=0.8. Please follow the instructions here to install both PyTorch and TorchVision dependencies. Installing both PyTorch and TorchVision with CUDA support is strongly recommended.
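For reference, a CUDA-enabled install from the PyTorch wheel index might look like the sketch below (the cu117 tag matches the tested setup in the next section; adjust it to your driver and confirm the exact command on pytorch.org):

```bash
# Sketch only: PyTorch / TorchVision wheels built against CUDA 11.7
# (pick the index URL that matches your local CUDA driver)
pip install torch torchvision --index-url https://download.pytorch.org/whl/cu117
```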
We have tested the settings below on an RTX 4090, RTX 3060 Ti, and GTX 1060 6GB:

- Python 3.8
- pytorch 2.0.0 (py3.8_cuda11.7_cudnn8.5.0_0)
- torchvision 0.15.0
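One way to reproduce this environment (a sketch, assuming a conda setup; other package managers work as well):

```bash
# Sketch: recreate the tested environment with conda
conda create -n sam python=3.8 -y
conda activate sam
conda install pytorch==2.0.0 torchvision==0.15.0 pytorch-cuda=11.7 -c pytorch -c nvidia -y
```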
Install Segment Anything:

git clone https://github.com/derekray311511/segment-anything.git
cd segment-anything; pip install -e .
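A quick way to confirm the editable install worked (a minimal sketch, assuming this fork keeps the upstream segment_anything Python API):

```python
# Sanity check: the package should import and expose the model registry
from segment_anything import sam_model_registry, SamPredictor, SamAutomaticMaskGenerator

# Expect the registry to list vit_h, vit_l and vit_b (plus a default entry)
print(list(sam_model_registry.keys()))
```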
The following optional dependencies are necessary for mask post-processing, saving masks in COCO format, the example notebooks, and exporting the model in ONNX format. jupyter is also required to run the example notebooks.
pip install opencv-python pycocotools matplotlib onnxruntime onnx
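As an illustration of the COCO-format use case, pycocotools can encode a binary mask as run-length encoding; the mask below is a made-up placeholder standing in for real SAM output:

```python
import numpy as np
from pycocotools import mask as mask_utils

# Placeholder binary mask; in practice this comes from the model's output
binary_mask = np.zeros((480, 640), dtype=np.uint8)
binary_mask[100:200, 150:300] = 1

# pycocotools expects a Fortran-ordered uint8 array
rle = mask_utils.encode(np.asfortranarray(binary_mask))
rle["counts"] = rle["counts"].decode("utf-8")  # make the RLE JSON-serializable
print(rle["size"])
```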
You can download the model checkpoints here.
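For example, the ViT-H checkpoint can be fetched directly; the filename below is the one published with the upstream Segment Anything release, so verify it against the checkpoints link above:

```bash
# Download the default ViT-H SAM checkpoint (upstream release filename)
wget https://dl.fbaipublicfiles.com/segment_anything/sam_vit_h_4b8939.pth
```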
python scripts/select_obj.py --img /PATH/TO/YOUR/IMG.file_type --output /OUTPUT/FILE/NAME --model_type MODEL_TYPE --checkpoint /PATH/TO/MODEL
MODEL_TYPE: vit_h, vit_l, or vit_b (see the example invocation below)
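A concrete invocation might look like the following (the image and output paths are placeholders; the checkpoint file is the ViT-H checkpoint downloaded above):

```bash
python scripts/select_obj.py --img images/truck.jpg --output outputs/truck \
    --model_type vit_h --checkpoint sam_vit_h_4b8939.pth
```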
- Auto: Segment all objects in the image
- Custom: Select object(s) with points or boxes using mouse clicks
- View: View the masks you just created and disable manipulation
- Press SPACE to run inference on all objects in the image
- Point select: Press p to switch to the point select function
  - a: Positive prompt
  - d: Negative prompt
- Box select: Press b to switch to the box select function
- Press v to switch between view / previous mode
| Function | Key |
| --- | --- |
| Switch to auto mode | enter |
| Switch to view mode | v |
| Point select mode | p |
| Box select mode | b |
| Positive prompt | a |
| Negative prompt | d |
| Save image | s |
| Inference | SPACE |
| Exit | ESC |