[Doc] Add instructions on using Podman when SELinux is active (vllm-project#12136)

Signed-off-by: Yuan Tang <[email protected]>
terrytangyuan authored and GWS0428 committed Feb 12, 2025
1 parent b8c99cc commit 524477f
Showing 1 changed file with 3 additions and 0 deletions.
3 changes: 3 additions & 0 deletions docs/source/deployment/docker.md
Expand Up @@ -42,6 +42,9 @@ DOCKER_BUILDKIT=1 docker build . --target vllm-openai --tag vllm/vllm-openai
By default vLLM will build for all GPU types for widest distribution. If you are just building for the
current GPU type the machine is running on, you can add the argument `--build-arg torch_cuda_arch_list=""`
for vLLM to find the current GPU type and build for that.
If you are using Podman instead of Docker, you might need to disable SELinux labeling by
adding `--security-opt label=disable` to the `podman build` command in order to avoid certain [known issues](https://github.com/containers/buildah/discussions/4184).
```
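Concretely, the Podman equivalent of the build command above could look like the following. This is a sketch only: the `--target` and `--tag` values mirror the Docker invocation shown earlier in this file, and whether the SELinux workaround is needed depends on your host's SELinux configuration.

```shell
# Build the vLLM OpenAI-compatible image with Podman on an
# SELinux-enforcing host. label=disable turns off SELinux labeling
# for the build, working around the buildah issue linked above.
podman build . \
  --security-opt label=disable \
  --target vllm-openai \
  --tag vllm/vllm-openai
```

On hosts where SELinux is permissive or disabled, the `--security-opt label=disable` flag can simply be omitted.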

## Building for Arm64/aarch64
