
[ROS] Add Official Docker image for ROS2 #1381

Merged · 48 commits · Jun 6, 2020
8e1766c
Start docs for ROS 2 by duplicating those from ROS 1
ruffsl Nov 28, 2018
5fb5991
Update lichening information to reflect ROS 2
ruffsl Nov 28, 2018
d70ba62
Update intro
ruffsl Nov 29, 2018
f7bd0f2
Update example dockerfile and compose file
ruffsl Nov 29, 2018
66a3f10
Remove redundant walk through and skip to compose
ruffsl Nov 29, 2018
725cd24
Cleanup dockerfile example
ruffsl Nov 30, 2018
36501f9
Make dockerfile example a multistage build
ruffsl Nov 30, 2018
92406af
Create an install and build example
ruffsl Dec 13, 2018
bdd6d7d
Update body and links
ruffsl Dec 13, 2018
a7ed044
Update logo
ruffsl Dec 13, 2018
702f14d
Add security example
ruffsl Dec 14, 2018
3a66e1a
Add warnings about networking with ROS2/DDS
ruffsl Dec 14, 2018
db9791a
Fix compose file
ruffsl Dec 14, 2018
c1c457d
Fix Spelling
ruffsl Dec 14, 2018
b03d7d9
Remove duplicate files before merge
ruffsl Dec 15, 2018
40f09e8
Remove duplicate files before merge
ruffsl Dec 15, 2018
eb9d43c
Move changes for merge
ruffsl Dec 15, 2018
f4edbfb
Rewording to just reference latest ros release
ruffsl Dec 15, 2018
199e0a5
Fix markdown for CI
ruffsl Dec 15, 2018
347e32d
Provide sros2 cli in example and fix keystore permissions
ruffsl Dec 15, 2018
bbeff28
Fix Spelling
ruffsl Jun 27, 2019
332a6e8
Update example to use latest LTS tag
ruffsl Jun 27, 2019
3d1f775
Hold off on sros example
ruffsl Jun 27, 2019
8398e71
Update docs link to point to index.ros.org
ruffsl Jun 27, 2019
edb64bb
Merge branch 'master' into ros2
ruffsl Nov 27, 2019
a62f77d
Update Dockerfile build example
ruffsl Nov 27, 2019
e71b10d
Add links to target support reps
ruffsl Nov 27, 2019
1a14579
Update metrics
ruffsl Nov 27, 2019
a1c20bd
Add minimal ros1_bridge example
ruffsl Nov 27, 2019
32e1ffe
Fix Formatting
ruffsl Nov 27, 2019
6d09179
Update More Resources
ruffsl Apr 28, 2020
2cd312f
Dont split up ros version tokens for SEO
ruffsl Apr 28, 2020
966f6ae
Keep ros org landing page as main link
ruffsl Apr 28, 2020
f685e28
Update size given no-install-recommends changes
ruffsl Apr 28, 2020
379df84
Spell check
ruffsl Apr 28, 2020
66047eb
Update examples
ruffsl Apr 28, 2020
b4f8a13
Fix logging for python node
ruffsl Apr 28, 2020
23bf362
Link external license resources like other images
ruffsl Apr 28, 2020
cc17087
Stage changes to build example
ruffsl Apr 30, 2020
2316f47
Update example
ruffsl Apr 30, 2020
522d608
Add links to tools
ruffsl May 21, 2020
50a2c27
Link to relevent reps on variants
ruffsl May 21, 2020
dd187b9
Simplify example
ruffsl May 22, 2020
957be64
Update explanation of example
ruffsl May 22, 2020
2f8fd67
Fix markdown linter
ruffsl May 22, 2020
38e4286
Update tags to foxy and noetic
ruffsl May 22, 2020
c305016
Correct vcstool link
ruffsl May 26, 2020
cf84c67
Nit fix grammar
ruffsl May 26, 2020
296 changes: 158 additions & 138 deletions ros/content.md
# What is [ROS](https://index.ros.org/doc/ros2)?

The Robot Operating System (ROS) is a set of software libraries and tools that help you build robot applications. From drivers to state-of-the-art algorithms, and with powerful developer tools, ROS has what you need for your next robotics project. And it's all open source.

> [wikipedia.org/wiki/Robot_Operating_System](https://en.wikipedia.org/wiki/Robot_Operating_System)

[%%LOGO%%](https://index.ros.org/doc/ros2)

# How to use this image

## Creating a `Dockerfile` to install ROS packages

To create your own ROS docker images and install custom packages, here's a simple example of installing the C++ and Python client library demos and the security CLI using the officially released Debian packages via apt-get.

```dockerfile
FROM %%IMAGE%%:crystal

# install ros packages for the installed release
RUN apt-get update && apt-get install -y \
    ros-${ROS_DISTRO}-demo-nodes-cpp \
    ros-${ROS_DISTRO}-demo-nodes-py \
    ros-${ROS_DISTRO}-sros2 && \
    rm -rf /var/lib/apt/lists/*

# run ros package launch file
CMD ["ros2", "launch", "demo_nodes_cpp", "talker_listener.launch.py"]
```

Note: all ROS images include a default entrypoint that sources the ROS environment setup before executing the configured command, in this case the demo packages' launch file. You can then build and run the Docker image like so:

```console
$ docker build -t my/ros:app .
$ docker run -it --rm my/ros:app
[INFO] [launch]: process[talker-1]: started with pid [813]
[INFO] [launch]: process[listener-2]: started with pid [814]
[INFO] [talker]: Publishing: 'Hello World: 1'
[INFO] [listener]: I heard: [Hello World: 1]
[INFO] [talker]: Publishing: 'Hello World: 2'
[INFO] [listener]: I heard: [Hello World: 2]
...
```
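Under the hood, the default entrypoint mentioned above amounts to a small wrapper script along these lines (a sketch for illustration; the exact `/ros_entrypoint.sh` shipped in a given tag may differ):

```shell
#!/usr/bin/env bash
set -e

# default to the distro baked into the image; `crystal` here is illustrative
ROS_DISTRO="${ROS_DISTRO:-crystal}"
ROS_SETUP="/opt/ros/$ROS_DISTRO/setup.bash"

# source the ROS environment if present (the setup file only exists inside
# the image, so this sketch guards the call to stay runnable anywhere)
if [ -f "$ROS_SETUP" ]; then
    source "$ROS_SETUP"
fi

# hand off to whatever command the container was asked to run
exec "$@"
```

Because the entrypoint only sources the environment and then `exec`s, any command passed to `docker run` (or configured via `CMD`) runs with the ROS environment already set up.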

## Creating a `Dockerfile` to build ROS packages

To create your own ROS docker images and build custom packages, here's a simple example of installing a package's build dependencies, compiling it from source, and installing the resulting build artifacts into a final multi-stage image layer.

```dockerfile
FROM %%IMAGE%%:crystal-ros-base

# install ros build tools
RUN apt-get update && apt-get install -y \
    python3-colcon-common-extensions && \
    rm -rf /var/lib/apt/lists/*

# clone ros package repo
ENV ROS_WS /opt/ros_ws
RUN mkdir -p $ROS_WS/src
WORKDIR $ROS_WS
RUN git -C src clone \
    -b $ROS_DISTRO \
    https://github.com/ros2/demos.git

# install ros package dependencies
RUN apt-get update && \
    rosdep update && \
    rosdep install -y \
      --from-paths \
        src/demos/demo_nodes_cpp \
      --ignore-src && \
    rm -rf /var/lib/apt/lists/*

# build ros package source
RUN . /opt/ros/$ROS_DISTRO/setup.sh && \
    colcon build \
      --packages-select \
        demo_nodes_cpp \
      --cmake-args \
        -DCMAKE_BUILD_TYPE=Release

# copy ros package install via multi-stage
FROM %%IMAGE%%:crystal-ros-core
ENV ROS_WS /opt/ros_ws
COPY --from=0 $ROS_WS/install $ROS_WS/install

# source ros package from entrypoint
RUN sed --in-place --expression \
    '$isource "$ROS_WS/install/setup.bash"' \
    /ros_entrypoint.sh

# run ros package launch file
CMD ["ros2", "launch", "demo_nodes_cpp", "talker_listener.launch.py"]
```
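The `sed` step above injects a `source` line for the overlay workspace just before the entrypoint's final line. Its effect can be sketched on a throwaway copy of a minimal entrypoint (paths here are illustrative):

```shell
#!/usr/bin/env bash
set -e

# a minimal stand-in for /ros_entrypoint.sh (illustrative)
cat > /tmp/ros_entrypoint.sh <<'EOF'
#!/bin/bash
set -e
source "/opt/ros/$ROS_DISTRO/setup.bash"
exec "$@"
EOF

# insert a source line for the overlay workspace before the last line;
# single quotes keep $ROS_WS literal so it expands at container run time
sed --in-place --expression \
    '$isource "$ROS_WS/install/setup.bash"' \
    /tmp/ros_entrypoint.sh

cat /tmp/ros_entrypoint.sh
```

The resulting script sources the underlay, then the overlay, then `exec`s the container command, so overlay packages shadow the base install.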

Note: `--from-paths` and `--packages-select` are set here so as to only install the dependencies for, and build, the `demo_nodes_cpp` package, one among many in the demo git repo that was cloned. To install the dependencies for and build all the packages in the source workspace, simply broaden the scope by setting `--from-paths src/` and dropping the `--packages-select` argument.
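For instance, the dependency and build steps from the Dockerfile above could be broadened like so (a sketch, assuming the same workspace layout):

```dockerfile
# install dependencies for every package in the source workspace
RUN apt-get update && \
    rosdep update && \
    rosdep install -y \
      --from-paths src/ \
      --ignore-src && \
    rm -rf /var/lib/apt/lists/*

# build every package in the source workspace
RUN . /opt/ros/$ROS_DISTRO/setup.sh && \
    colcon build \
      --cmake-args -DCMAKE_BUILD_TYPE=Release
```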

```console
REPOSITORY          TAG                IMAGE ID            CREATED             SIZE
my/ros              app-multi-stage    66c8112b2fb6        4 seconds ago       775MB
my/ros              app-single-stage   6b500239d0d6        2 minutes ago       797MB
```

For this particular package, using a multi-stage build didn't shrink the final image by much, but for more complex applications, segmenting build setup from the runtime can help keep image sizes down. Additionally, doing so can also prepare you for releasing your package to the community, helping to reconcile dependency discrepancies you may have otherwise forgotten to declare in your `package.xml` manifest.

## Deployment use cases

This dockerized image of ROS is intended to provide a simplified and consistent platform to build and deploy distributed robotic applications. Built from the [official Ubuntu image](https://hub.docker.com/_/ubuntu/) and ROS's official Debian packages, it includes recent supported releases for quick access and download. This provides roboticists in research and industry with an easy way to develop, reuse and ship software for autonomous actions and task planning, control dynamics, localization and mapping, swarm behavior, as well as general system integration.
With the advancements and standardization of software containers, roboticists are …

The available tags include supported distros along with a hierarchy of tags based on the most common meta-package dependencies, designed to have a small footprint and simple configuration:

- `ros-core`: barebone ROS 2 install
- `ros-base`: basic tools and libraries (also tagged with the distro name, with the latest LTS distro tagged as `latest`)
- `robot`: basic install for robots
- `perception`: basic install for perception tasks
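For example, a specific variant for a given distro can be pulled by combining the distro and variant names (tag names here follow the pattern above and are illustrative):

```console
$ docker pull %%IMAGE%%:crystal-ros-core
$ docker pull %%IMAGE%%:crystal-ros-base
```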

The rest of the common meta-packages such as `desktop` and `ros1-bridge` are hosted on automatic build repos under OSRF's Docker Hub profile [here](https://hub.docker.com/r/osrf/ros/). These meta-packages include graphical dependencies and hook in a host of other large packages such as X11, X server, etc. So in the interest of keeping the official images lean and secure, the desktop packages are hosted only under OSRF's profile.

### Volumes

Some applications may require device access for acquiring images from connected cameras …

### Networks

ROS allows for peer-to-peer networking of processes (potentially distributed across machines) that are loosely coupled using the ROS communication infrastructure. ROS implements several different styles of communication, including synchronous RPC-style communication over services, asynchronous streaming of typed data over topics, combinations of both via request/reply and status/feedback over actions, and run-time settings via configuration over parameters. To abide by the best practice of [one process per container](https://docs.docker.com/articles/dockerfile_best-practices/), Docker networks can be used to string together several running ROS processes. For further details, see the Deployment example below.

Alternatively, a more permissive network setting can be used to share all host network interfaces with the container, such as the [`host` network driver](https://docs.docker.com/network/host/), simplifying connectivity with external network participants. Be aware however that this removes the networking namespace separation between containers, and can affect the ability of DDS participants to communicate between containers, as documented [here](https://community.rti.com/kb/how-use-rti-connext-dds-communicate-across-docker-containers-using-host-driver).
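If that trade-off is acceptable, host networking can be requested per service in compose form (a sketch; the service name and command are illustrative):

```yaml
version: '3'

services:
  talker:
    build: .
    # share the host's network namespace; no per-container DNS or isolation
    network_mode: host
    command: ros2 run demo_nodes_cpp talker
```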

## Deployment example

### Docker Compose

In this example we'll demonstrate using [`docker-compose`](https://docs.docker.com/compose/) to spawn a pair of message publisher and subscriber nodes in separate containers connected through a shared software-defined network.

> Create the directory `~/ros_demos` and add the first `Dockerfile` example from above. In the same directory, also create a file named `docker-compose.yml` with the following content, which runs a C++ publisher with a Python subscriber:

```yaml
version: '3'

services:
  talker:
    build: .
    command: ros2 run demo_nodes_cpp talker

  listener:
    build: .
    command: ros2 run demo_nodes_py listener
```

> Use docker-compose inside the same directory to launch our ROS nodes. Given that the containers derive from the same docker-compose project, they will coexist on a shared project network:

```console
$ docker-compose up -d
```

> Notice that a new network named `ros_demos_default` has been created, which can be inspected further with:

```console
$ docker network inspect ros_demos_default
```

> We can monitor the logged output of each container, such as the listener node, like so:

```console
$ docker-compose logs listener
```

> Finally, we can stop and remove all the relevant containers using docker-compose from the same directory:

```console
$ docker-compose stop
$ docker-compose rm
```

> Note: the auto-generated network, `ros_demos_default`, will persist until you explicitly remove it using `docker-compose down`.

### Securing ROS

Let's build upon the example above by adding authenticated encryption to the message transport. This is done by leveraging [Secure DDS](https://www.omg.org/spec/DDS-SECURITY). We'll use the same ROS docker image to bootstrap the PKI, CAs, and digitally signed files.

> Create a script at `~/ros_demos/keystore/bootstrap_keystore.bash` to bootstrap a keystore and add entries for each node:

```shell
#!/usr/bin/env bash
# Bootstrap ROS keystore
ros2 security create_keystore ./
ros2 security create_key ./ talker
ros2 security create_key ./ listener
# match generated file ownership to that of the mounted keystore directory
chown -R $(stat -c '%u:%g' ./) ./
```

> Create an enforcement file at `~/ros_demos/config.env` to configure ROS Security:

```console
$ rostopic list
/chatter
/rosout
/rosout_agg
```shell
# Configure ROS Security
ROS_SECURITY_NODE_DIRECTORY=/keystore
ROS_SECURITY_STRATEGY=Enforce
ROS_SECURITY_ENABLE=true
ROS_DOMAIN_ID=0
```
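Docker reads such a file line by line as `KEY=VALUE` pairs. For local testing outside of Docker, roughly the same effect can be had by exporting the file into a shell session (a sketch; Docker's own `--env-file` parser similarly ignores blank lines and `#` comments):

```shell
#!/usr/bin/env bash
set -e

# recreate the env file in a scratch location for this sketch
cat > /tmp/config.env <<'EOF'
ROS_SECURITY_NODE_DIRECTORY=/keystore
ROS_SECURITY_STRATEGY=Enforce
ROS_SECURITY_ENABLE=true
ROS_DOMAIN_ID=0
EOF

# export every assignment made while sourcing the file
set -a
source /tmp/config.env
set +a

echo "security enabled: $ROS_SECURITY_ENABLE"
```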

> Use a temporary container to run the keystore bootstrapping script in the keystore directory:

```console
$ docker run -it --rm \
    --env-file ./config.env \
    --volume "$(pwd)/keystore":/keystore:rw \
    --workdir /keystore \
    %%IMAGE%%:crystal bash bootstrap_keystore.bash
```

> Now modify the original `docker-compose.yml` to use the configured environment and respective keystore entries:

```yaml
version: '3'

services:
  talker:
    build: .
    env_file:
      - ./config.env
    volumes:
      - ./keystore/talker:/keystore:ro
    command: ros2 run demo_nodes_cpp talker

  listener:
    build: .
    env_file:
      - ./config.env
    volumes:
      - ./keystore/listener:/keystore:ro
    command: ros2 run demo_nodes_py listener
```

> Now simply start up docker-compose as before:

```console
$ docker-compose up
```

> Note: the auto-generated network, `rostutorials_default`, will persist over the life of the docker engine or until you explicitly remove it using [`docker network rm`](https://docs.docker.com/engine/reference/commandline/network_rm/)\.
Note: so far this has only added authenticated encryption, i.e. only participants with certificates signed by a trusted CA may join the domain. To also enable access control within the secure domain, i.e. to restrict which topics a participant may use and how, see the further details [here](https://github.com/ros2/sros2/).

# More Resources

[Docs](https://docs.ros2.org/): Core Documentation
[Index](https://index.ros.org/doc/ros2/): Package Index
[Design](https://design.ros2.org/): Design Articles
[ROS Answers](https://answers.ros.org/questions/): Ask questions. Get answers
[Forums](https://discourse.ros.org/): Hear the latest discussions
[Blog](http://www.ros.org/news/): Stay up-to-date
[OSRF](https://www.osrfoundation.org/): Open Source Robotics Foundation