This guide will walk you through the process of building and testing all Mayastor components using Nix and Docker.
Mayastor is a multi-component Rust project that makes heavy use of Nix for our development and build process.
If you're coming from a non-Rust (or non-Nix) background, building Mayastor may be a bit different than what you're used to: there is no Makefile, you won't need a build toolchain, you won't need to worry about cross-compiler toolchains, and all builds are reproducible.
Mayastor is a sub-project of OpenEBS, so don't forget to check out the umbrella contributor guide.
Mayastor only builds on modern Linuxes. We'd adore contributions to add support for Windows, FreeBSD, OpenWRT, or other server platforms.
If you do not have a Linux system:
- Windows: We recommend using WSL2 if you only need to build Mayastor. You'll need a Hyper-V VM if you want to run it.
- Mac: We recommend you use Docker for Mac and follow the Docker process described. Please let us know if you find a way to run it!
- FreeBSD: We think this might actually work, SPDK is compatible! But, we haven't tried it yet.
- Others: This is kind of a "Do-it-yourself" situation. Sorry, we can't be more help!
The only thing your system needs to build Mayastor is Nix.
Usually Nix can be installed via (do not use sudo!):

```
curl -L https://nixos.org/nix/install | sh
```
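As a quick sanity check (these are standard Nix commands, not Mayastor-specific), you can confirm the install works before moving on; you may need to open a fresh shell so the Nix profile script is sourced:

```
# Confirm the Nix tools are on your PATH
nix-env --version

# Spawn a throwaway shell with a package available and run it
nix-shell -p hello --run hello
```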
Mayastor is split across different GitHub repositories under the OpenEBS organization.
Here's a breakdown of the required repos for the task at hand:
- data-plane: https://github.com/openebs/mayastor
  - The data-plane components:
    - io-engine (the only one which we need for this)
    - io-engine-client
    - casperf
- control-plane: https://github.com/openebs/mayastor-control-plane
  - Various control-plane components:
    - agent-core
    - agent-ha-cluster
    - agent-ha-node
    - operator-diskpool
    - csi-controller
    - csi-node
    - api-rest
- extensions: https://github.com/openebs/mayastor-extensions
  - Mostly K8s specific components:
    - kubectl-mayastor
    - metrics-exporter-io-engine
    - call-home
    - stats-aggregator
    - upgrade-job
  - Also contains the helm-chart
NOTE: There are also a few other repositories which are pulled or submoduled by the repositories above
If you want to tinker with all repos, here's how you can check them all out:
```
mkdir ~/mayastor && cd ~/mayastor
git clone --recurse-submodules https://github.com/openebs/mayastor.git -- io-engine
git clone --recurse-submodules https://github.com/openebs/mayastor-control-plane.git -- controller
git clone --recurse-submodules https://github.com/openebs/mayastor-extensions.git -- extensions
```
Each code repository contains its own nix-shell environment, which provides all prerequisite build dependencies.
NOTE: To run the tests, you might need additional OS configuration, for example a running docker service.
```
cd ~/mayastor/controller
nix-shell
```
Once entered, you can start any tooling (e.g. `code .`) to ensure the correct resources are available.
The project can then be interacted with like any other Rust project.
Building:

```
cargo build --bins
```
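If you'd rather not keep an interactive shell open, `nix-shell` can also run a single command and exit; as a small sketch, the same build can be driven non-interactively like this:

```
cd ~/mayastor/controller
nix-shell --run "cargo build --bins"
```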
There are a few different types of tests used in Mayastor:
- Unit Tests
- Component Tests
- BDD Tests
- E2E Tests
- Load Tests
- Performance Tests
Each repo may have a subset of the types defined above.
Refer to each repository's own documentation for its testing guide.
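As a rough starting point, the unit tests are ordinary cargo tests and can be run from inside the repo's nix-shell; the exact test targets vary per repo, and the component/BDD tests need extra setup (e.g. a running docker service, as noted above), so treat this only as a sketch:

```
cd ~/mayastor/controller
# Run only the library unit tests; component/BDD tests need more setup
nix-shell --run "cargo test --workspace --lib"
```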
Each repo has its own CI system which is based on bors and GitHub Actions. At its core, each pipeline runs the Unit/Integration tests, the BDD tests and image-build tests, ensuring that a set of images can be built once a PR is merged to the target branch.
For the Jenkins pipeline you can refer to the `./Jenkinsfile` on each branch.
The Jenkins systems are currently set up on the DataCore-sponsored hardware and need to be reinstalled on CNCF-sponsored hardware, or perhaps even completely moved to GitHub Actions.
Deprecated: CI has now fully migrated to GitHub Actions; Jenkins CI is deprecated and only set up for older release branches (up to release/2.7).
For the GitHub Actions you can refer to `.github/workflows` on each repo.
Some actions run when a PR is created/updated, whilst others run as part of bors.
Here are some examples of how to interact with bors:
| Syntax | Description |
|---|---|
| bors r+ | Run the test suite and push to master if it passes. Short for "reviewed: looks good" |
| bors merge | Equivalent to bors r+ |
| bors r=[list] | Same as r+, but the "reviewer" in the commit log will be recorded as the user(s) given as the argument |
| bors merge=[list] | Equivalent to bors r=[list] |
| bors r- | Cancel an r+, r=, merge, or merge= |
| bors merge- | Equivalent to bors r- |
| bors try | Run the test suite without pushing to master |
| bors try- | Cancel a try |
| bors delegate+ / bors d+ | Allow the pull request author to r+ their changes |
| bors delegate=[list] / bors d=[list] | Allow the listed users to r+ this pull request's changes |
| bors ping | Check if bors is up. If it is, it will comment with pong |
| bors retry | Run the previous command a second time |
| bors p=[priority] | Set the priority of the current pull request. Pull requests with different priority are never batched together. The pull request with the bigger priority number goes first |
| bors r+ p=[priority] | Set the priority, run the test suite, and push to master (shorthand for doing p= and r+ one after the other) |
| bors merge p=[priority] | Equivalent to bors r+ p=[priority] |
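For illustration, these commands are simply posted as comments on the pull request; a typical flow might look like this (each line would be its own PR comment, without the trailing notes):

```
bors try      # run the test suite against this PR without merging anything
bors merge    # once reviewed, run the suite and merge to the target branch
```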
When you're mostly done with a set of changes, you'll want to test them in a K8s cluster, and for this you need to build docker images.
Each of the repos contains a script for building and pushing all their respective container images.
Usually this is located at `./scripts/release.sh`.
The API for this script is generally the same across repos, as it leverages a common base script.
```
> ./scripts/release.sh --help
Usage: release.sh [OPTIONS]

  -d, --dry-run              Output actions that would be taken, but don't run them.
  -h, --help                 Display this text.
  --registry <host[:port]>   Push the built images to the provided registry.
                             To also replace the image org provide the full repository path, example: docker.io/org
  --debug                    Build debug version of images where possible.
  --skip-build               Don't perform nix-build.
  --skip-publish             Don't publish built images.
  --image <image>            Specify what image to build and/or upload.
  --tar                      Decompress and load images as tar rather than tar.gz.
  --skip-images              Don't build nor upload any images.
  --alias-tag <tag>          Explicit alias for short commit hash tag.
  --tag <tag>                Explicit tag (overrides the git tag).
  --incremental              Builds components in two stages allowing for faster rebuilds during development.
  --build-bins               Builds all the static binaries.
  --no-static-linking        Don't build the binaries with static linking.
  --build-bin                Specify which binary to build.
  --skip-bins                Don't build the static binaries.
  --build-binary-out <path>  Specify the outlink path for the binaries (otherwise it's the current directory).
  --skopeo-copy              Don't load containers into host, simply copy them to registry with skopeo.
  --skip-cargo-deps          Don't prefetch the cargo build dependencies.

Environment Variables:
  RUSTFLAGS                  Set Rust compiler options when building binaries.

Examples:
  release.sh --registry 127.0.0.1:5000
```
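As another hedged example combining the flags documented above, you could build a single image in debug mode and push it to a local registry; `csi.controller` is the image name used later in this guide for the controller repo, so adjust it to the image you actually need:

```
./scripts/release.sh --registry 127.0.0.1:5000 --debug --image csi.controller
```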
If you want to see what happens under the hood, without building, you can use the `--dry-run` flag:
```
cd ~/mayastor/controller
./scripts/release.sh --dry-run --alias-tag my-tag
```
Here's a snippet of what you'd actually see:
```
~/mayastor/controller ~/mayastor
nix-build --argstr img_tag my-tag --no-out-link -A control-plane.project-builder.cargoDeps
Cargo vendored dependencies pre-fetched after 1 attempt(s)
Building openebs/mayastor-agent-core:my-tag ...
nix-build --argstr img_tag my-tag --out-link agents.core-image -A images.release.agents.core --arg allInOne true --arg incremental false --argstr product_prefix --argstr rustFlags
docker load -i agents.core-image
rm agents.core-image
Building openebs/mayastor-agent-ha-node:my-tag ...
nix-build --argstr img_tag my-tag --out-link agents.ha.node-image -A images.release.agents.ha.node --arg allInOne true --arg incremental false --argstr product_prefix --argstr rustFlags
docker load -i agents.ha.node-image
rm agents.ha.node-image
Building openebs/mayastor-agent-ha-cluster:my-tag ...
nix-build --argstr img_tag my-tag --out-link agents.ha.cluster-image -A images.release.agents.ha.cluster --arg allInOne true --arg incremental false --argstr product_prefix --argstr rustFlags
docker load -i agents.ha.cluster-image
```
If you want to build, but not push it anywhere, you can skip the publishing with `--skip-publish`.
NOTE: For repos with static binaries, you can avoid building them with `--skip-bins`.
```
cd ~/mayastor/controller
./scripts/release.sh --skip-publish --alias-tag my-tag
```
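Assuming the images were loaded into your local docker daemon (i.e. you didn't pass `--skopeo-copy`), you can check what got built with a plain docker command:

```
docker images | grep my-tag
```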
NOTE: Take a look here for the guide on building and pushing all images.
You can push the images to your required registry/namespace using the `--registry` argument.
For the purposes of this guide, we'll push to my docker.io namespace: `docker.io/tiagolobocastro`.
```
cd ~/mayastor/controller
./scripts/release.sh --registry docker.io/tiagolobocastro --alias-tag my-tag
```
NOTE: If you don't specify the namespace, the default openebs namespace is kept.
The default image build process attempts to build all images that are part of a single repo in one shot, thus reducing the build time. If you're iterating over code changes on a single image, you may wish to enable the incremental build flag (`--incremental`), which will not rebuild the dependencies over and over again.
```
cd ~/mayastor/controller
./scripts/release.sh --registry docker.io/tiagolobocastro --alias-tag my-tag --image csi.controller --incremental
```
Installing the full helm chart with the custom images is quite simple.
NOTE: One last step is required, mostly due to a bug or unexpected behaviour with the helm chart.
We'll need to manually push this container image:

```
docker pull docker.io/openebs/alpine-sh:4.1.0
docker tag docker.io/openebs/alpine-sh:4.1.0 docker.io/tiagolobocastro/alpine-sh:4.1.0
docker push docker.io/tiagolobocastro/alpine-sh:4.1.0
```
```
> helm install mayastor mayastor/mayastor -n mayastor --create-namespace --set="image.repo=tiagolobocastro,image.tag=my-tag" --wait
NAME: mayastor
LAST DEPLOYED: Fri Dec 6 15:42:16 2024
NAMESPACE: mayastor
STATUS: deployed
REVISION: 1
NOTES:
OpenEBS Mayastor has been installed. Check its status by running:
$ kubectl get pods -n mayastor

For more information or to view the documentation, visit our website at https://openebs.io/docs/
```
If you're only building certain components, you may want to modify the images of an existing deployment, or configure per-repo tags, for example:
```
helm install mayastor mayastor/mayastor -n mayastor --create-namespace --set="image.repo=tiagolobocastro,image.repoTags.control-plane=my-tag" --wait
```
NOTE: We are currently missing overrides for registry/namespace/image:tag on specific Mayastor components
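Until such overrides exist, one workaround is to patch a single deployment after install; this is only a sketch, and the deployment/container names below are assumptions that may not match your chart version:

```
# Hypothetical names; check `kubectl get deploy -n mayastor` for the real ones
kubectl -n mayastor set image deployment/mayastor-csi-controller \
  csi-controller=docker.io/tiagolobocastro/mayastor-csi-controller:my-tag
```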