# Getting Started
A comprehensive video tutorial was created to familiarize new users with the user interface and the features available in VT&R3. We recommend watching this video before attempting to run any version of VT&R3. The demo shows how to:
- Initialize the user interface
- Enter teach mode and manually drive a new path
- Perform loop closures
- Align the pose graph with the satellite imagery
- Teach additional branches to a graph
- Repeat along a path to user-defined waypoints
- Utilize multi-experience localization
- Load and initialize pose graphs from previous driving sessions
We also have supplementary tutorials specific to the offline versions of Stereo SURF-Feature-Based T&R and LiDAR Point-Cloud-Based T&R. Please note that the offline data was recorded manually, so path tracking may not function correctly.
It is assumed that VT&R3 is installed; if not, follow the Docker installation instructions.
**ASRL Lab Use Only:** See the internal page for details on installing on the Grizzly or Warthog.
Launching VT&R3 to use LiDAR point clouds as input:

```bash
# launch command
tmuxp load ${VTRSRC}/launch/offline_honeycomb_grizzly.launch.yaml
```

Open the UI in a browser at `localhost:5200`.
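Before launching, it can help to confirm that the environment variables used throughout this guide are set. The following is a minimal sketch, not part of the official workflow; `VTRSRC` and `VTRDATA` are the variables referenced by the launch and playback commands above, while the check itself is an assumption added for convenience:

```bash
#!/usr/bin/env bash
# Sanity-check the environment before launching VT&R3.
# VTRSRC and VTRDATA are the variables used by the launch and
# playback commands in this guide.
check_vtr_env() {
  local status=0
  for var in VTRSRC VTRDATA; do
    # ${!var} is bash indirect expansion: the value of the named variable.
    if [ -z "${!var}" ]; then
      echo "error: $var is not set" >&2
      status=1
    fi
  done
  return $status
}

if check_vtr_env; then
  echo "environment looks good"
else
  echo "set the variables above before running tmuxp load" >&2
fi
```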
Data playback for teaching a path:
- Run these lines in a separate terminal after initializing the UI into teach mode.
```bash
# replay the first ros2 bag
source ${VTRSRC}/main/install/setup.bash
ros2 bag play ${VTRDATA}/utias_20210921_ros2bag/rosbag2_2021_09_21-16_25_56
```
Data playback for repeating a path:
- Run these lines in a separate terminal after teaching a path, entering repeat mode, and specifying repeat waypoints.
```bash
# replay the second ros2 bag
source ${VTRSRC}/main/install/setup.bash
ros2 bag play ${VTRDATA}/utias_20210921_ros2bag/rosbag2_2021_09_21-16_28_53
```
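The teach and repeat playback steps differ only in the bag directory. A small helper like the one below (an assumption added for convenience, not part of VT&R3) can build the playback command; `--rate` is `ros2 bag play`'s playback-speed option, which can be useful for slowing down the offline data:

```bash
#!/usr/bin/env bash
# Build the ros2 bag playback command used in this guide.
# Argument 1: rosbag2 directory; argument 2 (optional): playback rate,
# defaulting to real time (1.0).
play_bag_cmd() {
  local bag_dir="$1"
  local rate="${2:-1.0}"
  echo "ros2 bag play --rate $rate $bag_dir"
}

# Usage, after `source ${VTRSRC}/main/install/setup.bash`:
#   eval "$(play_bag_cmd ${VTRDATA}/utias_20210921_ros2bag/rosbag2_2021_09_21-16_25_56)"
```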
- Terminate VT&R3 (`Ctrl-C` once in the terminal).