MetaUrban is a cutting-edge simulation platform designed for Embodied AI research in urban spaces. It offers:
- Infinite Urban Scene Generation: Create diverse, interactive city environments.
- High-Quality Urban Objects: Includes real-world infrastructure and clutter.
- Rigged Human Models: SMPL-compatible models with diverse motions.
- Urban Agents: Including delivery robots, cyclists, skateboarders, and more.
- Flexible User Interfaces: Compatible with mouse, keyboard, joystick, and racing wheel.
- Configurable Sensors: Supports RGB, depth, semantic map, and LiDAR.
- Rigid-Body Physics: Realistic mechanics for agents and environments.
- OpenAI Gym Interface: Seamless integration for AI and reinforcement learning tasks (see the minimal interaction-loop sketch below).
- Framework Compatibility: Seamlessly integrates with Ray, Stable Baselines, Torch, Imitation, etc.
Check out the MetaUrban Documentation to learn more!
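As a rough illustration of the Gym-style workflow, the sketch below runs a random policy in a static sidewalk scene. The environment class name `SidewalkStaticMetaUrbanEnv`, its import path, and the config keys are assumptions based on the bundled examples, not a definitive API reference; check the documentation for the exact interface.

```python
# Minimal interaction-loop sketch; class name, import path, and config keys are assumptions.
from metaurban.envs import SidewalkStaticMetaUrbanEnv  # assumed import path

env = SidewalkStaticMetaUrbanEnv(dict(object_density=0.4, use_render=False))
obs, info = env.reset()                                  # Gymnasium-style reset
for _ in range(300):
    action = env.action_space.sample()                   # random placeholder policy
    obs, reward, terminated, truncated, info = env.step(action)
    if terminated or truncated:
        obs, info = env.reset()
env.close()
```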
- [24/01/25] v0.1.0: The first official release of MetaUrban [release notes]
If you find MetaUrban helpful for your research, please cite the following BibTeX entry.
@article{wu2025metaurban,
title={MetaUrban: An Embodied AI Simulation Platform for Urban Micromobility},
author={Wu, Wayne and He, Honglin and He, Jack and Wang, Yiran and Duan, Chenda and Liu, Zhizheng and Li, Quanyi and Zhou, Bolei},
journal={ICLR},
year={2025}
}
To ensure the best experience with MetaUrban, please review the following hardware guidelines:
- Tested Platforms:
  - Linux: Supported and recommended (preferably Ubuntu).
  - Windows: Works with WSL2.
  - macOS: Supported.
- Recommended Hardware:
  - GPU: Nvidia GPU with at least 8GB RAM and 3GB VRAM.
  - Storage: Minimum of 10GB free space.
- Performance Benchmarks:
  - Tested GPUs: Nvidia RTX-3090, RTX-4080, RTX-4090, RTX-A5000, Tesla V100.
  - Example benchmark: running metaurban/examples/drive_in_static_env.py achieves ~60 FPS with ~2GB GPU memory usage.
To install MetaUrban from source, run:
git clone -b main --depth 1 https://github.com/metadriverse/metaurban.git
cd metaurban
bash install.sh
conda activate metaurban
If installation via install.sh does not succeed, try the step-by-step installation below.
Create a conda environment and install MetaUrban:
conda create -n metaurban python=3.9
conda activate metaurban
pip install -e .
Install the ORCA algorithm for trajectory generation:
conda install pybind11 -c conda-forge
cd metaurban/orca_algo && rm -rf build
bash compile.sh && cd ../..
Note that cmake, make, and gcc must be installed on your system before building ORCA; more details can be found in the FAQs.
Then install the following libraries to use MetaUrban for RL training and testing:
pip install stable_baselines3 imitation tensorboard wandb scikit-image pyyaml gdown
We provide a script to quickly run our simulator with a tiny subset of 3D assets. The assets (~500mb) will be downloaded automatically the first time you run the script:
python metaurban/examples/tiny_example.py
To access the entire dataset and use the complete version, please register by filling out the form at the registration link. This process will be triggered automatically when you attempt to pull the full set of assets.
The assets are compressed and password-protected. You'll be prompted to fill out the registration form the first time you run the script to download all assets, and you'll receive the password once the form is completed.
Assets will be downloaded automatically when first running the script
python metaurban/examples/drive_in_static_env.py
Or use the script
python metaurban/pull_asset.py --update
If you cannot download the assets with the Python scripts, please download them via the link in the Python file and organize the folders as:
-metaurban
  -metaurban
    -assets
    -assets_pedestrian
    -base_class
    -...
We provide a Dockerfile for MetaUrban. This works on machines with an NVIDIA GPU. To set up MetaUrban using Docker, follow the steps below:
[sudo] docker -D build -t metaurban .
[sudo] docker run -it metaurban
cd metaurban/orca_algo && rm -rf build
bash compile.sh && cd ../..
Then you can run the simulator inside the Docker container.
We provide examples to demonstrate the features and basic usage of MetaUrban after local installation.
In a point navigation environment, only static objects are present in the scenario besides the ego agent.
Run the following command to launch a simple scenario with manual control. Press W, S, A, D to control the delivery robot.
python -m metaurban.examples.drive_in_static_env --density_obj 0.4
Press R to load a new scenario. If there is no response when you press W, S, A, D, press T to enable manual control.
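If you want to enable manual control from your own script rather than through the bundled example, a configuration along the lines of the sketch below should work. The environment class and the config keys `use_render`, `manual_control`, and `object_density` are assumptions mirroring the example scripts, not a confirmed API.

```python
# Sketch: enabling keyboard control programmatically (class and config names are assumptions).
from metaurban.envs import SidewalkStaticMetaUrbanEnv

env = SidewalkStaticMetaUrbanEnv(dict(
    use_render=True,       # open the interactive window
    manual_control=True,   # W/S/A/D drive the agent; T toggles manual takeover
    object_density=0.4,    # static-obstacle density, analogous to --density_obj
))
obs, info = env.reset()
while True:
    # With manual control enabled, keyboard input overrides the passed action.
    obs, reward, terminated, truncated, info = env.step([0.0, 0.0])
    if terminated or truncated:
        obs, info = env.reset()
```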
In a social navigation environment, there will be vehicles, pedestrians, and some other agents in the scenario.
Run the following command to launch a simple scenario with manual control. Press W, S, A, D to control the delivery robot.
python -m metaurban.examples.drive_in_dynamic_env --density_obj 0.4 --density_ped 1.0
We provide RL models trained on the task of navigation, which can be used to preview the performance of RL agents.
python -m metaurban.examples.drive_with_pretrained_policy
python RL/PointNav/train_ppo.py
For PPO training in the PointNav environment. You can change the parameters in the file.
python RL/SocialNav/train_ppo.py
For PPO training in the SocialNav environment. You can change the parameters in the file.
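If you prefer to wire up training yourself instead of using the provided scripts, a minimal Stable-Baselines3 PPO setup might look like the sketch below. The environment class and config keys are assumptions, and "MlpPolicy" assumes a flat vector observation; the provided train_ppo.py scripts remain the reference implementation.

```python
# Minimal PPO training sketch with Stable-Baselines3 (env class/config are assumptions).
from stable_baselines3 import PPO
from metaurban.envs import SidewalkStaticMetaUrbanEnv  # assumed import path

env = SidewalkStaticMetaUrbanEnv(dict(object_density=0.4, use_render=False))

# "MlpPolicy" assumes a flat Box observation; switch policies if your sensor config differs.
model = PPO("MlpPolicy", env, verbose=1, tensorboard_log="./ppo_pointnav")
model.learn(total_timesteps=100_000)
model.save("ppo_pointnav_policy")
```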
We provide a script to evaluate the quantitative performance of the RL agent:
python RL/PointNav/eval_ppo.py --policy ./pretrained_policy_576k.zip
This evaluates the provided pretrained policy as an example.
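For orientation, loading and rolling out a saved Stable-Baselines3 policy generally follows the pattern below. The environment class, config keys, and the "arrive_dest" info key are assumptions; eval_ppo.py is the authoritative evaluation script.

```python
# Sketch: rolling out a saved SB3 PPO policy (env class/config and info keys are assumptions).
from stable_baselines3 import PPO
from metaurban.envs import SidewalkStaticMetaUrbanEnv

env = SidewalkStaticMetaUrbanEnv(dict(object_density=0.4, use_render=False))
model = PPO.load("./pretrained_policy_576k.zip")

obs, info = env.reset()
successes, episodes = 0, 0
while episodes < 10:
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, info = env.step(action)
    if terminated or truncated:
        episodes += 1
        successes += int(info.get("arrive_dest", False))  # info key name is an assumption
        obs, info = env.reset()
print(f"Success rate over {episodes} episodes: {successes / episodes:.2f}")
```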
python scripts/collect_data_in_custom_env.py
For expert data collection used in imitation learning (IL). You can change the parameters in custom_metaurban_env.yaml to modify the environment.
python IL/PointNav/train_BC.py
For behavior cloning. You should set expert_data_path to the path of the collected expert data.
python IL/PointNav/train_GAIL.py
For GAIL. You should set expert_data_path to the path of the collected expert data.
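The BC and GAIL scripts are the reference implementations; for orientation only, the core of a behavior-cloning run with the imitation library typically looks like the sketch below. The expert-data loading is a placeholder, since the on-disk format produced by the collection script is not described here, and the environment class is again an assumption.

```python
# Sketch of a behavior-cloning run with the `imitation` library (not the provided script).
import numpy as np
from imitation.algorithms import bc
from metaurban.envs import SidewalkStaticMetaUrbanEnv  # assumed import path

def load_transitions(path):
    # Placeholder: convert your collected expert data into imitation.data.types.Transitions.
    # The actual on-disk format depends on scripts/collect_data_in_custom_env.py.
    raise NotImplementedError

env = SidewalkStaticMetaUrbanEnv(dict(object_density=0.4, use_render=False))
transitions = load_transitions("path/to/expert_data")  # placeholder helper, see above

bc_trainer = bc.BC(
    observation_space=env.observation_space,
    action_space=env.action_space,
    demonstrations=transitions,
    rng=np.random.default_rng(0),
)
bc_trainer.train(n_epochs=10)
bc_trainer.policy.save("bc_policy.pt")
```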
For frequently asked questions about installation, RL training, and other modules, please refer to the FAQs.
Can't find the answer to your question? Try asking the developers and community on our Discussions forum.
The simulator could not have been built without the help of the Panda3D community and the following open-source projects:
- panda3d-simplepbr: https://github.com/Moguri/panda3d-simplepbr
- panda3d-gltf: https://github.com/Moguri/panda3d-gltf
- RenderPipeline (RP): https://github.com/tobspr/RenderPipeline
- Water effect for RP: https://github.com/kergalym/RenderPipeline
- procedural_panda3d_model_primitives: https://github.com/Epihaius/procedural_panda3d_model_primitives
- DiamondSquare for terrain generation: https://github.com/buckinha/DiamondSquare
- KITSUNETSUKI-Asset-Tools: https://github.com/kitsune-ONE-team/KITSUNETSUKI-Asset-Tools
- Objaverse: https://github.com/allenai/objaverse-xl
- OmniObject3D: https://github.com/omniobject3d/OmniObject3D
- Synbody: https://github.com/SynBody/SynBody
- BEDLAM: https://github.com/pixelite1201/BEDLAM
- ORCA: https://gamma.cs.unc.edu/ORCA/