CenterArt: Joint Shape Reconstruction and 6-DoF Grasp Estimation of Articulated Objects

This repository provides the source code for the paper "CenterArt: Joint Shape Reconstruction and 6-DoF Grasp Estimation of Articulated Objects".


Installation

conda create --name centerart_env python=3.8
conda activate centerart_env
pip install torch==1.13.1+cu117 torchvision==0.14.1+cu117 torchaudio==0.13.1 --extra-index-url https://download.pytorch.org/whl/cu117
git clone git@github.com:PRBonn/manifold_python.git
cd manifold_python
git submodule update --init
make install
cd CenterArt
pip install -r requirements.txt
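
A quick, optional sanity check that the PyTorch install picked up CUDA (not part of the repository scripts):

import torch
print(torch.__version__)           # expected: 1.13.1+cu117
print(torch.cuda.is_available())   # should be True on a CUDA 11.7 machine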

Data Generation

Download the articulated objects from the PartNet-Mobility dataset and collect the desired urdf objects in the datasets/urdfs directory. For grasp label generation, check out this repo. Run the following script to generate sdf values:

python scripts/sdf_generator.py

The datasets directory should be structured as follows:

datasets
|-- urdfs
|-- grasps
|-- sdfs_value
|-- sdfs_point
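
A minimal sketch to confirm this layout is in place before running the later scripts (directory names are taken from the tree above; the check itself is not part of the repository):

from pathlib import Path

root = Path("datasets")
missing = [d for d in ("urdfs", "grasps", "sdfs_value", "sdfs_point") if not (root / d).is_dir()]
print("missing:", missing or "none")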

First, generate a json file for the collected objects by running:

python scripts/json_file_creator.py

Split the data into train and validation sets:

python scripts/sgdf_data_split.py

Generate the sgdf decoder dataset by running:

python scripts/make_sgdf_dataset.py

Generate the rgb encoder dataset by running:

python scripts/make_rgb_single_dataset.py # for scenes with a single articulated object
python scripts/make_rgb_multiple_dataset.py # for scenes with multiple articulated objects

(Figure: example scenes with multiple articulated objects)

Pretrained Weights Download

Please download the pretrained weights:
sgdf decoder: Extract and place in the ckpt_sgdf folder, at the root of the repository.
rgb encoder (single objects): Extract and place in the ckpt_rgb folder, at the root of the repository.
rgb encoder (multiple objects): Extract and place in the ckpt_rgb folder, at the root of the repository.
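
For example, the expected folders can be created at the repository root before extracting the downloaded weights into them (folder names from the list above; the extraction step depends on the archive format):

mkdir -p ckpt_sgdf ckpt_rgb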

Evaluation

To reproduce the results from the paper (Table I), run the commands below. If you want to evaluate a different checkpoint, change the --rgb-model argument accordingly.

python scripts/evaluate.py          # success rate
python scripts/evaluate_relaxed.py  # relaxed success rate
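
For example, to evaluate a different checkpoint (the checkpoint id below is a placeholder for your own rgb model):

python scripts/evaluate.py --rgb-model <your_rgb_checkpoint_id>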


Training

To train your own models instead of using the pretrained checkpoints, do the following:

python scripts/train_sgdf.py

Set EmbeddingCkptPath in configs/rgb_train_specs.json to the id of the sgdf checkpoint you just trained (a small sketch of this edit follows the command below). Now you can use those embeddings to train the rgbd model:

python scripts/train_rgbd.py
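
For reference, the config change described above could be scripted as follows (a minimal sketch; the checkpoint id is a placeholder, and only the EmbeddingCkptPath key is taken from this README):

import json

cfg_path = "configs/rgb_train_specs.json"
with open(cfg_path) as f:
    cfg = json.load(f)

# Point the rgbd training at the sgdf checkpoint trained in the previous step
cfg["EmbeddingCkptPath"] = "<your_sgdf_checkpoint_id>"

with open(cfg_path, "w") as f:
    json.dump(cfg, f, indent=2)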

Citation

@inproceedings{mokhtar2024centerart,
  title={CenterArt: Joint Shape Reconstruction and 6-DoF Grasp Estimation of Articulated Objects},
  author={Mokhtar, Sassan and Chisari, Eugenio and Heppert, Nick and Valada, Abhinav},
  booktitle={ICRA 2024 Workshop on 3D Visual Representations for Robot Manipulation},
  year={2024}
}

Feedback

For any feedback or inquiries, please contact [email protected]
