diff --git a/.ci/ci b/.ci/ci
index 47678769f..e982e8e74 100755
--- a/.ci/ci
+++ b/.ci/ci
@@ -59,6 +59,7 @@ make -j8 bootloader-btc-production
make -j8 firmware
make -j8 firmware-btc
make -j8 factory-setup
+make -j8 firmware-debug
# Disallow some symbols in the final binary that we don't want.
if arm-none-eabi-nm build/bin/firmware.elf | grep -q "float_to_decimal_common_shortest"; then
diff --git a/BUILD.md b/BUILD.md
index 67f6f8508..0c246e9e5 100644
--- a/BUILD.md
+++ b/BUILD.md
@@ -2,21 +2,6 @@
# Build BitBox02 firmware and bootloader
-## Dependencies
-
-- [HIDAPI](https://github.com/signal11/hidapi)
-- [GNU ARM Embedded Toolchain](https://developer.arm.com/open-source/gnu-toolchain/gnu-rm/downloads)
-- SEGGER J-Link software
- - [All packages and versions](https://www.segger.com/downloads/jlink/#J-LinkSoftwareAndDocumentationPack)
- - Newer versions should work, but if not, go to "Older versions" and get version 6.34g
- - [OSX package](https://www.segger.com/downloads/jlink/JLink_MacOSX_V630d.pkg)
- - [Linux 64bit](https://www.segger.com/downloads/jlink/JLink_Linux_x86_64.tgz)
- - [others](https://www.segger.com/downloads/jlink/)
-- cmake
-- git
-- Install the pre-built [protobuf python binary](https://github.com/protocolbuffers/protobuf/releases)
- - Then install the included [Python Protocol Buffers](https://github.com/protocolbuffers/protobuf/tree/master/python#installation) runtime library
-
## Reporting issues
@@ -25,129 +10,218 @@ For security related issues please see [SECURITY.md](SECURITY.md).
## Development environment
-### Install development environment as a Docker container
+There is a container image with all the build dependencies and there are some
+`make` shortcuts to use it.
-The container will contain all tools needed to build the project but it is still necessary to get
-the J-Link software to flash the bootloader. Run the commands below to build the container and
-execute a persistent one.
+> [!TIP]
+> It is highly recommended to use the container for development.
-```sh
-make dockerinit
-make dockerdev
-```
+Accessing USB devices, such as the J-Link probe and the BitBox02, is easier
+outside of the container. So it is recommended to install the J-Link Software
+on your development machine to follow the instructions below.
+
+### Development Dependencies*
+
+| Dependency | Version** |
+| ---------- | -------- |
+| [Arm GNU Toolchain](https://developer.arm.com/downloads/-/gnu-rm) | 8-2018-q4 |
+| [HIDAPI](https://github.com/signal11/hidapi) | 0.11.2 |
+| [cmake](https://cmake.org/download/) | 3.10 |
+| [git](https://git-scm.com/downloads) | 2.34 |
+| [Protobuf Compiler](https://github.com/protocolbuffers/protobuf/releases) | 21.2 |
+| [Python Protobuf Runtime](https://github.com/protocolbuffers/protobuf/tree/master/python#installation) | 5.27.3 |
+| [SEGGER J-Link Software and Documentation Pack](https://www.segger.com/downloads/jlink) | 6.34g |
+| Graphviz | 2.42.2 |
+| Doxygen | 1.9.1 |
+| [cmocka](https://cmocka.org/files/1.1/) | 1.1.5 |
-If you do not want to build the docker image locally, or are not working on it, it may be more straightforward to
-pull the image from docker instead of building it. This should be a bit faster and there should not be any issues with
-`make dockerdev` expecting specific version of the image.
+* See the complete list of dependencies in the [Dockerfile](Dockerfile).
+
+** The versions here are known to work. Newer versions should work as
+well.
+
+### Setup containerized environment
+
+Run the following commands to fetch the container image and run it:
```sh
make dockerpull
make dockerdev
```
+`dockerpull` will use `docker pull` to fetch the current container image.
+`dockerdev` will use `docker run` and `docker exec` to run a container in the
+background and enter it. `dockerdev` will mount the project root using the same
+path inside the container, which lets you use your preferred editor/IDE outside
+the container.
-The docker container will not allow you to access the hosts USB devices by default which means that
-it is necessary to flash the device in a terminal not running in docker.
+> [!NOTE]
+> The current development container is defined in
+> [.containerversion](.containerversion). This is the version that is pulled
+> with `dockerpull` and built with `dockerinit`.
> [!NOTE]
-> Current development container is defined in the file `.containerversion`
+> `make dockerdev` will enter an already running container if it exists.
-The docker container mounts the repo it was launched from, so you can freely edit your fork in your preferred IDE and
-the changes will be reflected in the docker container.
+Run the following command to build the container:
-**It is highly recommended you develop using this docker container as not all of local setup is completely up to date
-with every Operating system.**
+```sh
+make dockerinit
+```
+
+`dockerinit` is a shortcut to run `docker build`. Use this if you need to
+permanently update the container image ([Dockerfile](Dockerfile)). Don't forget
+to update the [container version file](.containerversion).
+
+> [!TIP]
+> For temporary changes, enter the container as root by running `docker exec`
+> with user id 0.
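+>
+> For example, assuming the container name used by the Makefile targets:
+>
+> ```sh
+> docker exec -u 0 -it bitbox02-firmware-dev bash
+> ```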
-### Install development environment on macOS
+### Setup development environment on macOS with brew
-Make sure you have [Homebrew](https://brew.sh) installed.
-Install the dependencies with:
+> [!CAUTION]
+> Brew usually only supports the latest versions of software packages. It is
+> not easy to get a working development environment using brew. Any
+> discrepancies between your environment and the containerized environment may
+> lead to CI build failures, since CI uses the container.
+
+> [!IMPORTANT]
+> If you use compiler versions different from CI you will not be able to
+> reproducibly build the firmware. Different compilers typically lead to
+> slightly different binary outputs.
+
+Make sure you have [Homebrew](https://brew.sh) installed. Install the
+dependencies with:
```sh
-brew install hidapi cmake protobuf
-brew install automake libtool # for building some code in the external/ folder
+brew install hidapi cmake protobuf@21
+brew install automake libtool
brew tap osx-cross/arm
brew install arm-gcc-bin
```
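+
+To verify that the cross-toolchain is on your `PATH` (the reported version
+will vary with your install):
+
+```sh
+arm-none-eabi-gcc --version
+```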
-## Simulator
-
-The Multi edition firmware can be built as a simulator for linux-amd64. To build it, run:
-
- make -j simulator
+## Contributor instructions
-Run it with:
+### Check out the repository
- ./build-build/bin/simulator
+#### 1. Fork the repository on GitHub
-This launches a server simulating the firmware. The send_message tool can connect to it with:
+Go to [bitbox02-firmware](https://github.com/bitboxswiss/bitbox02-firmware) and fork the repository.
- ./py/send_message.py --simulator
+#### 2. Check out your fork
-If you choose to create a wallet by restoring a mnemonic, the simulator will automatically use this
-mnemonic:
+Run the following commands to check out your fork:
- boring mistake dish oyster truth pigeon viable emerge sort crash wire portion cannon couple enact box walk height pull today solid off enable tide
+```sh
+git clone --recurse-submodules git@github.com:<username>/bitbox02-firmware.git
+cd bitbox02-firmware
+```
-## Instructions
+> [!TIP]
+> If you have already cloned the repository without the `--recurse-submodules`
+> argument, run:
+>
+> ```sh
+> git submodule update --init --recursive
+> ```
-Connect the J-Link to the debug pins on the BitBox02 prototype board.
+> [!TIP]
+> Add the original repo as a second remote so that you can sync the `master` branch.
+> ```sh
+> git remote add upstream https://github.com/bitboxswiss/bitbox02-firmware
+> ```
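+>
+> Later, to sync your local `master` with upstream (assumes the `upstream`
+> remote added above):
+>
+> ```sh
+> git fetch upstream
+> git checkout master
+> git merge --ff-only upstream/master
+> ```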
-Plug in both the J-Link hardware and the BitBox02 device into USB ports on your computer or a hub connected to your computer.
+### Build the firmware
-Build the firmware:
+Run the following commands to enter the container and build the firmware:
```sh
-git clone --recurse-submodules https://github.com/BitBoxSwiss/bitbox02-firmware && cd bitbox02-firmware
-# or via ssh
-git clone --recurse-submodules git@github.com:BitBoxSwiss/bitbox02-firmware.git && cd bitbox02-firmware
-make firmware # requires a GNU ARM toolchain for cross-compiling
+make dockerdev
+make firmware
```
-If you have already cloned the repository without the `--recurse-submodules` argument, run:
+> [!TIP]
+> If you have multiple cores you can speed up compilation by passing `-j`, for example `-j8`.
-```sh
-git submodule update --init --recursive
-```
+### Build the bootloader
-Build the bootloader:
+Run the following commands to enter the container and build the bootloader:
```sh
+make dockerdev
make bootloader
```
-(to create a bootloader for a devdevice or a production device, use `make bootloader-devdevice` or
-`make bootloader-production` respectively).
+> [!NOTE]
+> To create a bootloader for a development or a production device, use
+> `make bootloader-devdevice` or `make bootloader-production` respectively.
+
+> [!NOTE]
+> To run unsigned firmware you need a development bootloader.
+
+### Build the simulator
-Load the bootloader by JLink (requires JLinkExe in PATH).
+The Multi edition firmware can be built as a simulator for linux-amd64. To build it, run:
```sh
-make jlink-flash-bootloader
+make simulator
```
-You need to install the [BitBox02 Python Library](#BitBox02-Python-library) before you can flash the built firmware.
+### Flash instructions
+
+#### Connect J-Link probe
+
+Connect the J-Link probe to the debug pins on the BitBox02 prototype board. The
+pinout of the board and the Arm JTAG/SWD 10-pin connector can be seen in the
+table below.
+
+| Signal | BitBox02 # | Arm JTAG/SWD # |
+| ------ | ---------- | -------------- |
+| VCC | 1 | 1 |
+| CLK | 2 | 4 |
+| GND | 3 | 3, 5 |
+| DIO | 4 | 2 |
+
+See the [BitBox02 schematics](doc/bb02_v2.10_schematics.pdf) and the [Arm JTAG/SWD
+interface](https://developer.arm.com/documentation/101636/0100/Debug-and-Trace/JTAG-SWD-Interface).
+
+Plug **both** the J-Link probe and the BitBox02 into the computer using USB. A
+USB hub can be used.
-Load the firmware by the bootloader (requires loading bootloader.bin by JLink, if not already loaded on the device):
+#### Flash bootloader using J-Link
+
+Load the bootloader by JLink (requires `JLinkExe` in `$PATH`).
```sh
-make flash-dev-firmware
+make jlink-flash-bootloader
```
+> [!NOTE]
+> To flash a bootloader for a development device, run
+> `make jlink-flash-bootloader-development`.
+
+#### Flash firmware using J-Link
+
Load the firmware by JLink:
```sh
make jlink-flash-firmware
```
-### Build reference documentation (Doxygen)
+#### Flash firmware using bootloader and python cli client
+
+> [!TIP]
+> This method does not require a J-Link probe while developing.
-Dependencies:
+Install the [BitBox02 Python CLI client](#bitbox02-python-cli-client).
+
+Load the firmware through the bootloader:
```sh
-brew install graphviz doxygen
+make flash-dev-firmware
```
-Build:
+### Build reference documentation (Doxygen)
```sh
make docs
@@ -155,48 +229,106 @@ make docs
To view the results, open `build/docs/html/index.html` in a web browser.
-### BitBox02 Python library
+### Debugging
-There is a Python api library in `py/bitbox02`.
+#### Debugging using the simulator
-Run `pip install -r py/requirements.txt` to install the deps (virtualenv recommended).
-
-`make -C py/bitbox02` to generate the protobuf files.
-
-To kick off some api calls:
+Run the simulator with:
```sh
-./py/send_message.py
+./build-build/bin/simulator
```
-### Unit tests
+This launches a server simulating the firmware. The send_message tool can
+connect to it with:
+
+```sh
+./py/send_message.py --simulator
+```
+
+If you choose to create a wallet by restoring a mnemonic, the simulator will automatically use this
+mnemonic:
+
+ boring mistake dish oyster truth pigeon viable emerge sort crash wire portion cannon couple enact box walk height pull today solid off enable tide
+
+
+#### Debugging using the J-Link probe and GDB
+
+The *debug firmware* enables pretty printing of panics over [RTT](https://www.segger.com/products/debug-probes/j-link/technology/about-real-time-transfer/).
-We are using CMocka [https://cmocka.org/](https://cmocka.org/) for unit tests. To run the tests, the CMocka library
-needs to be installed on your system.
+Run the following commands to build the debug firmware.
-If you're on a Mac, you can use the brew command to install it:
+```sh
+make dockerdev
+make firmware-debug
+```
+
+Run the following command to run the J-Link GDB Server.
```sh
-brew install cmocka
+make jlink-gdb-server
```
-Alternatively, you can get CMocka by cloning the git repository and following these instructions:
+> [!IMPORTANT]
+> The J-Link GDB Server must be left running in the background.
+
+Run the following command to connect with telnet to the J-Link GDB Server to
+see the RTT output.
```sh
-git clone git://git.cryptomilk.org/projects/cmocka.git
-cd cmocka
-mkdir build && cd build
-cmake ..
-make && sudo make install
+make rtt-client
```
-By default, the library will be installed into /usr/local/lib64 directory under Linux x86\_64.
-If the library is not on the library path by default, you might need to export the following environment variable:
+Run the following command to run GDB. GDB will connect to the J-Link GDB
+server, flash the debug firmware and then start execution from the bootloader
+(as if the device was just plugged in).
```sh
-export LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/usr/local/lib64/
+make run-debug
```
+> [!TIP]
+> After rebuilding the firmware, exit GDB and rerun `run-debug` to flash and reset the device.
+
+> [!TIP]
+> The initial set of GDB commands that are run are specified in the [gdb init
+> script](./scripts/jlink.gdb). You may want to modify it if you are debugging
+> something specific.
+
+> [!TIP]
+> In debug builds you can use the following functions to log:
+> ```c
+> util_log(fmt, args...)
+> ```
+> ```rust
+> ::util::log::log!(fmt, args...)
+> ```
+> in C you can also format with hex using `util_dbg_hex`:
+> ```c
+> uint8_t arr[] = {1,2};
+> util_log("%s", util_dbg_hex(arr, sizeof(arr)));
+> ```
+> in rust you can format with hex using the built in hex formatter or the hex
+> crate:
+> ```rust
+> let arr = [1, 2];
+> log!("{:02x?}", arr)
+> log!("{}", hex::encode(arr))
+> ```
+
+### Unit tests
+
+[CMocka](https://cmocka.org/) is used for mocking in the
+unit tests. To compile the tests, the CMocka library needs to be installed on
+your system. CMocka is available through most package managers, like *brew* and
+*apt*.
+
+> [!NOTE]
+> If you compiled it yourself from source, the library will, by default, be
+> installed into the **/usr/local/** directory instead of **/usr/**.
+> If the library is not on the library path by default, you might need to export
+> the following environment variable:
+> ```sh
+> export LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/usr/local/lib64/
+> ```
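+>
+> On Linux you can check whether the dynamic linker finds CMocka with:
+>
+> ```sh
+> ldconfig -p | grep -i cmocka
+> ```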
+
Then you can run the tests by executing
```sh
@@ -220,9 +352,26 @@ make -C build-build coverage-lcovr
### SCCache / CCache
-The build systems supports sccache/ccache, you just need to have it available in your path. You can
-install it into your dev container with the following commands:
+The build system supports sccache/ccache; you just need to have it available
+in your path. You can install it into your dev container with the following
+commands:
```
docker exec -u 0 -it bitbox02-firmware-dev bash -c 'apt update && apt install -y libssl-dev && CARGO_HOME=/opt/cargo cargo install --locked sccache'
```
+
+## BitBox02 Python Library
+
+There is a Python API library in `py/bitbox02`.
+
+### BitBox02 CLI client
+
+Run `pip install -r py/requirements.txt` to install the dependencies
+(a virtualenv is recommended).
+
+Run `make -C py/bitbox02` to generate the protobuf files.
+
+To kick off some api calls:
+
+```sh
+./py/send_message.py
+```
diff --git a/CMakeLists.txt b/CMakeLists.txt
index 757c84469..88662b040 100644
--- a/CMakeLists.txt
+++ b/CMakeLists.txt
@@ -233,8 +233,9 @@ endif()
# Optimize for size by default
set(CMAKE_C_FLAGS_RELEASE "-Os -DNDEBUG")
-# Allow gdb extensions if available
-set(CMAKE_C_FLAGS_DEBUG "-Og -ggdb")
+# (-ggdb) Allow gdb extensions if available
+# Optimize the debug build for size; optimizing for debuggability takes too much space.
+set(CMAKE_C_FLAGS_DEBUG "-Os -ggdb")
set(CMAKE_C_FLAGS_RELWITHDEBINFO "-Os -ggdb -DNDEBUG")
#-----------------------------------------------------------------------------
@@ -321,13 +322,15 @@ string(APPEND CMAKE_C_FLAGS " -Wundef -Wmissing-include-dirs")
# Disable builtin warning
string(APPEND CMAKE_C_FLAGS " -Wno-cast-function-type")
-# Hardening
-string(APPEND CMAKE_C_FLAGS " -fstack-protector-all")
-if(CMAKE_CROSSCOMPILING)
- # Path to empty dummy libssp and libssp_shared. '-llibssp -llibssp_shared' is automatically added
- # with '-fstack-protector-all', but we don't need them as we have our own custom
- # `__stack_chk_fail`. See https://wiki.osdev.org/Stack_Smashing_Protector.
- set(CMAKE_C_LINK_FLAGS "${CMAKE_C_LINK_FLAGS} -L${CMAKE_CURRENT_SOURCE_DIR}/external/lib/ssp")
+# Enable stack protection on release builds
+if(NOT CMAKE_BUILD_TYPE STREQUAL "DEBUG")
+ string(APPEND CMAKE_C_FLAGS " -fstack-protector-all")
+ if(CMAKE_CROSSCOMPILING)
+ # Path to empty dummy libssp and libssp_shared. '-llibssp -llibssp_shared' is automatically added
+ # with '-fstack-protector-all', but we don't need them as we have our own custom
+ # `__stack_chk_fail`. See https://wiki.osdev.org/Stack_Smashing_Protector.
+ set(CMAKE_C_LINK_FLAGS "${CMAKE_C_LINK_FLAGS} -L${CMAKE_CURRENT_SOURCE_DIR}/external/lib/ssp")
+ endif()
endif()
# Disallow duplicate definitions, which is the default since GCC
diff --git a/Makefile b/Makefile
index 071a902fb..90abcd608 100644
--- a/Makefile
+++ b/Makefile
@@ -26,6 +26,11 @@ build/Makefile:
cd build && cmake -DCMAKE_TOOLCHAIN_FILE=arm.cmake ..
$(MAKE) -C py/bitbox02
+build-debug/Makefile:
+ mkdir -p build-debug
+ cd build-debug && cmake -DCMAKE_TOOLCHAIN_FILE=arm.cmake -DCMAKE_BUILD_TYPE=DEBUG ..
+ $(MAKE) -C py/bitbox02
+
build-build/Makefile:
mkdir -p build-build
cd build-build && cmake .. -DCOVERAGE=ON -DSANITIZE_ADDRESS=$(SANITIZE) -DSANITIZE_UNDEFINED=$(SANITIZE)
@@ -41,6 +46,9 @@ build-build-rust-unit-tests/Makefile:
# Directory for building for "host" machine according to gcc convention
build: build/Makefile
+# Directory for building debug build for "host" machine according to gcc convention
+build-debug: build-debug/Makefile
+
# Directory for building for "build" machine according to gcc convention
build-build: build-build/Makefile
@@ -50,10 +58,11 @@ build-build: build-build/Makefile
build-build-rust-unit-tests: build-build-rust-unit-tests/Makefile
firmware: | build
-# Generate python bindings for protobuf for test scripts
$(MAKE) -C build firmware.elf
firmware-btc: | build
$(MAKE) -C build firmware-btc.elf
+firmware-debug: | build-debug
+ $(MAKE) -C build-debug firmware.elf
bootloader: | build
$(MAKE) -C build bootloader.elf
bootloader-development: | build
@@ -112,6 +121,14 @@ jlink-flash-firmware-btc: | build
JLinkExe -if SWD -device ATSAMD51J20 -speed 4000 -autoconnect 1 -CommanderScript ./build/scripts/firmware-btc.jlink
jlink-flash-factory-setup: | build
JLinkExe -if SWD -device ATSAMD51J20 -speed 4000 -autoconnect 1 -CommanderScript ./build/scripts/factory-setup.jlink
+jlink-flash-firmware-debug: | build
+ JLinkExe -if SWD -device ATSAMD51J20 -speed 4000 -autoconnect 1 -CommanderScript ./build-debug/scripts/firmware.jlink
+jlink-gdb-server:
+ JLinkGDBServer -nogui -if SWD -device ATSAMD51J20 -speed 4000
+rtt-client:
+ telnet localhost 19021
+run-debug:
+ arm-none-eabi-gdb -x scripts/jlink.gdb build-debug/bin/firmware.elf
dockerinit:
./scripts/container.sh build --pull --force-rm --no-cache -t shiftcrypto/firmware_v2:$(shell cat .containerversion) .
dockerpull:
@@ -128,4 +145,4 @@ prepare-tidy: | build build-build
make -C build rust-cbindgen
make -C build-build rust-cbindgen
clean:
- rm -rf build build-build build-build-rust-unit-tests
+ rm -rf build build-build build-debug build-build-rust-unit-tests
diff --git a/scripts/jlink.gdb b/scripts/jlink.gdb
new file mode 100644
index 000000000..574b64d5c
--- /dev/null
+++ b/scripts/jlink.gdb
@@ -0,0 +1,16 @@
+# Connect to jlink gdb server
+target extended-remote :2331
+
+# load the firmware into ROM
+load
+
+# Reset the CPU
+monitor reset
+
+#break Reset_Handler
+#break HardFault_Handler
+#break NMI_Handler
+#break MemManage_Handler
+
+# start running
+stepi
diff --git a/src/CMakeLists.txt b/src/CMakeLists.txt
index 2fee7912b..dcd459c67 100644
--- a/src/CMakeLists.txt
+++ b/src/CMakeLists.txt
@@ -224,7 +224,11 @@ if(CMAKE_CROSSCOMPILING)
set(RUST_TARGET_ARCH thumbv7em-none-eabi)
set(RUST_TARGET_ARCH_DIR ${RUST_TARGET_ARCH})
set(RUST_TARGET_ARCH_ARG --target ${RUST_TARGET_ARCH})
- set(RUST_CARGO_FLAGS ${RUST_CARGO_FLAGS} -Zbuild-std=core,alloc -Zbuild-std-features=panic_immediate_abort,optimize_for_size)
+ if(CMAKE_BUILD_TYPE STREQUAL "DEBUG")
+ set(RUST_CARGO_FLAGS ${RUST_CARGO_FLAGS} -Zbuild-std=core,alloc -Zbuild-std-features=optimize_for_size)
+ else()
+ set(RUST_CARGO_FLAGS ${RUST_CARGO_FLAGS} -Zbuild-std=core,alloc -Zbuild-std-features=panic_immediate_abort,optimize_for_size)
+ endif()
else()
set(RUST_TARGET_ARCH_DIR .)
endif()
@@ -366,7 +370,7 @@ foreach(type ${RUST_LIBS})
FIRMWARE_VERSION_SHORT=${FIRMWARE_VERSION}
$<$:RUSTC_WRAPPER=${SCCACHE_PROGRAM}>
RUSTC_BOOTSTRAP=1
- ${CARGO} build $<$:-vv> --offline --features target-${type} --target-dir ${RUST_BINARY_DIR}/feature-${type} ${RUST_CARGO_FLAGS} ${RUST_TARGET_ARCH_ARG}
+ ${CARGO} build $<$:-vv> --offline --features target-${type}$<$:,rtt> --target-dir ${RUST_BINARY_DIR}/feature-${type} ${RUST_CARGO_FLAGS} ${RUST_TARGET_ARCH_ARG}
COMMAND
${CMAKE_COMMAND} -E copy_if_different ${lib} ${CMAKE_ARCHIVE_OUTPUT_DIRECTORY}/lib${type}_rust_c.a
# DEPFILES are only supported with the Ninja build tool
diff --git a/src/platform/platform_init.c b/src/platform/platform_init.c
index df6fcaee5..e7be810ee 100644
--- a/src/platform/platform_init.c
+++ b/src/platform/platform_init.c
@@ -18,11 +18,18 @@
#if !defined(BOOTLOADER)
#include "sd_mmc/sd_mmc_start.h"
#endif
+#include "rust/rust.h"
void platform_init(void)
{
oled_init();
#if !defined(BOOTLOADER)
+// The factory setup image already has a C implementation of RTT.
+#if FACTORYSETUP != 1
+    // These calls are no-ops if the "rtt" feature isn't enabled in Rust.
+ rust_rtt_init();
+ util_log("platform_init");
+#endif
sd_mmc_start();
#endif
}
diff --git a/src/rust/Cargo.lock b/src/rust/Cargo.lock
index 040bdbc1d..797e08ddb 100644
--- a/src/rust/Cargo.lock
+++ b/src/rust/Cargo.lock
@@ -30,6 +30,15 @@ version = "1.0.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "cdb031dd78e28731d87d56cc8ffef4a8f36ca26c38fe2de700543e627f8a464a"
+[[package]]
+name = "bare-metal"
+version = "0.2.5"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "5deb64efa5bd81e31fcd1938615a6d98c82eafcbcd787162b6f63b91d6bac5b3"
+dependencies = [
+ "rustc_version 0.2.3",
+]
+
[[package]]
name = "base58ck"
version = "0.1.0"
@@ -182,6 +191,12 @@ dependencies = [
"hex-conservative",
]
+[[package]]
+name = "bitfield"
+version = "0.13.2"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "46afbd2983a5d5a7bd740ccb198caf5b82f45c40c09c0eed36052d91cb92e719"
+
[[package]]
name = "blake2"
version = "0.10.6"
@@ -253,6 +268,19 @@ dependencies = [
"zeroize",
]
+[[package]]
+name = "cortex-m"
+version = "0.7.7"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "8ec610d8f49840a5b376c69663b6369e71f4b34484b9b2eb29fb918d92516cb9"
+dependencies = [
+ "bare-metal",
+ "bitfield",
+ "critical-section",
+ "embedded-hal",
+ "volatile-register",
+]
+
[[package]]
name = "cpufeatures"
version = "0.2.9"
@@ -277,6 +305,12 @@ version = "2.2.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "9cace84e55f07e7301bae1c519df89cdad8cc3cd868413d3fdbdeca9ff3db484"
+[[package]]
+name = "critical-section"
+version = "1.2.0"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "790eea4361631c5e7d22598ecd5723ff611904e3344ce8720784c93e3d83d40b"
+
[[package]]
name = "crypto-common"
version = "0.1.6"
@@ -298,7 +332,7 @@ dependencies = [
"curve25519-dalek-derive",
"digest",
"fiat-crypto",
- "rustc_version",
+ "rustc_version 0.4.0",
"subtle",
]
@@ -352,6 +386,16 @@ version = "1.5.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "bb1f6b1ce1c140482ea30ddd3335fc0024ac7ee112895426e0a629a6c20adfe3"
+[[package]]
+name = "embedded-hal"
+version = "0.2.7"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "35949884794ad573cf46071e41c9b60efb0cb311e3ca01f7af807af1debc66ff"
+dependencies = [
+ "nb 0.1.3",
+ "void",
+]
+
[[package]]
name = "erc20_params"
version = "0.1.0"
@@ -474,6 +518,21 @@ dependencies = [
"bitcoin",
]
+[[package]]
+name = "nb"
+version = "0.1.3"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "801d31da0513b6ec5214e9bf433a77966320625a37860f910be265be6e18d06f"
+dependencies = [
+ "nb 1.1.0",
+]
+
+[[package]]
+name = "nb"
+version = "1.1.0"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "8d5439c4ad607c3c23abf66de8c8bf57ba8adcd1f129e699851a6e43935d339d"
+
[[package]]
name = "noise-protocol"
version = "0.2.0"
@@ -589,13 +648,32 @@ version = "0.6.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "ec0be4795e2f6a28069bec0b5ff3e2ac9bafc99e6a9a7dc3547996c5c816922c"
+[[package]]
+name = "rtt-target"
+version = "0.5.0"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "10b34c9e6832388e45f3c01f1bb60a016384a0a4ad80cdd7d34913bed25037f0"
+dependencies = [
+ "critical-section",
+ "ufmt-write",
+]
+
+[[package]]
+name = "rustc_version"
+version = "0.2.3"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "138e3e0acb6c9fb258b19b67cb8abd63c00679d2851805ea151465464fe9030a"
+dependencies = [
+ "semver 0.9.0",
+]
+
[[package]]
name = "rustc_version"
version = "0.4.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "bfa0f585226d2e68097d4f95d113b15b83a82e819ab25717ec0590d9584ef366"
dependencies = [
- "semver",
+ "semver 1.0.20",
]
[[package]]
@@ -623,12 +701,27 @@ dependencies = [
"cc",
]
+[[package]]
+name = "semver"
+version = "0.9.0"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "1d7eb9ef2c18661902cc47e535f9bc51b78acd254da71d375c2f6720d9a40403"
+dependencies = [
+ "semver-parser",
+]
+
[[package]]
name = "semver"
version = "1.0.20"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "836fa6a3e1e547f9a2c4040802ec865b5d85f4014efe00555d7090a3dcaa1090"
+[[package]]
+name = "semver-parser"
+version = "0.7.0"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "388a1df253eca08550bef6c72392cfe7c30914bf41df5269b68cbd6ff8f570a3"
+
[[package]]
name = "serde"
version = "1.0.204"
@@ -725,6 +818,12 @@ version = "1.16.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "497961ef93d974e23eb6f433eb5fe1b7930b659f06d12dec6fc44a8f554c0bba"
+[[package]]
+name = "ufmt-write"
+version = "0.1.0"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "e87a2ed6b42ec5e28cc3b94c09982969e9227600b2e3dcbc1db927a84c06bd69"
+
[[package]]
name = "unicode-ident"
version = "1.0.5"
@@ -745,15 +844,38 @@ dependencies = [
name = "util"
version = "0.1.0"
dependencies = [
+ "cortex-m",
"num-bigint",
+ "rtt-target",
]
+[[package]]
+name = "vcell"
+version = "0.1.3"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "77439c1b53d2303b20d9459b1ade71a83c716e3f9c34f3228c00e6f185d6c002"
+
[[package]]
name = "version_check"
version = "0.9.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "0b928f33d975fc6ad9f86c8f283853ad26bdd5b10b7f1542aa2fa15e2289105a"
+[[package]]
+name = "void"
+version = "1.0.2"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "6a02e4885ed3bc0f2de90ea6dd45ebcbb66dacffe03547fadbb0eeae2770887d"
+
+[[package]]
+name = "volatile-register"
+version = "0.2.2"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "de437e2a6208b014ab52972a27e59b33fa2920d3e00fe05026167a1c509d19cc"
+dependencies = [
+ "vcell",
+]
+
[[package]]
name = "x25519-dalek"
version = "2.0.0"
diff --git a/src/rust/Cargo.toml b/src/rust/Cargo.toml
index 13ff0a54b..684ace324 100644
--- a/src/rust/Cargo.toml
+++ b/src/rust/Cargo.toml
@@ -40,11 +40,27 @@ zeroize = "1.7.0"
# This only affects the .elf output. Debug info is stripped from the final .bin.
# Paths to source code can still appear in the final bin, as they are part of the panic!() output.
debug = true
-
+# Optimize maximally for size; 'z' should produce even less code than 's'
opt-level = 'z'
+# 1 gives smaller binaries (16 is default in release mode)
codegen-units = 1
+# Abort on panics in release builds.
panic = 'abort'
+# LTO gives smaller binaries due to cross-crate optimizations
lto = true
+# Mimic the release profile to save as much space as possible
[profile.dev]
opt-level = 'z'
+# Set lto="thin" to get faster builds
+lto = true
+# Enabling debug assertions will increase binary size
+debug-assertions = false
+# Enabling overflow checks will increase binary size
+overflow-checks = false
+# Set to maximally 256 to compile more in parallel
+codegen-units = 1
+# Set to 'abort' to save space
+panic = 'unwind'
+# Set to false to potentially reduce binary size
+incremental = true
diff --git a/src/rust/bitbox02-rust-c/Cargo.toml b/src/rust/bitbox02-rust-c/Cargo.toml
index 0f8f8d71d..1603c1c77 100644
--- a/src/rust/bitbox02-rust-c/Cargo.toml
+++ b/src/rust/bitbox02-rust-c/Cargo.toml
@@ -96,4 +96,6 @@ app-u2f = [
app-cardano = [
# enable this feature in the deps
"bitbox02-rust/app-cardano",
-]
\ No newline at end of file
+]
+
+rtt = [ "util/rtt" ]
diff --git a/src/rust/bitbox02-rust-c/src/lib.rs b/src/rust/bitbox02-rust-c/src/lib.rs
index ec83f91e9..6fe78df75 100644
--- a/src/rust/bitbox02-rust-c/src/lib.rs
+++ b/src/rust/bitbox02-rust-c/src/lib.rs
@@ -36,14 +36,37 @@ mod sha2;
mod workflow;
// Whenever execution reaches somewhere it isn't supposed to rust code will "panic". Our panic
-// handler will print the available information on the screen. If we compile with `panic=abort`
-// this code will never get executed.
+// handler will print the available information on the screen and over RTT. If we compile with
+// `panic=abort` this code will never get executed.
#[cfg(not(test))]
#[cfg(not(feature = "testing"))]
#[cfg_attr(feature = "bootloader", allow(unused_variables))]
#[panic_handler]
fn panic(info: &core::panic::PanicInfo) -> ! {
+ ::util::log::log!("{}", info);
#[cfg(feature = "firmware")]
bitbox02_rust::print_debug!(0, "Error: {}", info);
loop {}
}
+
+#[no_mangle]
+pub extern "C" fn rust_rtt_init() {
+ ::util::log::rtt_init();
+}
+
+/// # Safety
+///
+/// The pointer `ptr` must point to a null-terminated string.
+#[no_mangle]
+#[cfg_attr(not(feature = "rtt"), allow(unused))]
+pub unsafe extern "C" fn rust_log(ptr: *const ::util::c_types::c_char) {
+ #[cfg(feature = "rtt")]
+ {
+ if ptr.is_null() {
+ panic!("`ptr` must be a valid pointer");
+ }
+ let s = unsafe { core::ffi::CStr::from_ptr(ptr as _) };
+ let s = unsafe { core::str::from_utf8_unchecked(s.to_bytes()) };
+ ::util::log::rtt_target::rprintln!("{}", s);
+ }
+}
diff --git a/src/rust/util/Cargo.toml b/src/rust/util/Cargo.toml
index c1c66bfd6..9b92659eb 100644
--- a/src/rust/util/Cargo.toml
+++ b/src/rust/util/Cargo.toml
@@ -22,3 +22,8 @@ license = "Apache-2.0"
[dependencies]
num-bigint = { workspace = true, default-features = false }
+rtt-target = { version = "0.5.0", optional = true }
+cortex-m = { version = "0.7.7", features = ["critical-section-single-core"], optional = true }
+
+[features]
+rtt = ["dep:rtt-target", "dep:cortex-m"]
diff --git a/src/rust/util/src/lib.rs b/src/rust/util/src/lib.rs
index 304fa535c..ea6654fb4 100644
--- a/src/rust/util/src/lib.rs
+++ b/src/rust/util/src/lib.rs
@@ -17,12 +17,17 @@ pub mod ascii;
pub mod bip32;
pub mod c_types;
pub mod decimal;
+pub mod log;
pub mod name;
// for `format!`
#[macro_use]
extern crate alloc;
+// include critical section implementation, needed by rtt-target
+#[cfg(feature = "rtt")]
+extern crate cortex_m;
+
/// Guaranteed to wipe the provided buffer
pub fn zero(dst: &mut [u8]) {
for p in dst {
diff --git a/src/rust/util/src/log.rs b/src/rust/util/src/log.rs
new file mode 100644
index 000000000..58092bf62
--- /dev/null
+++ b/src/rust/util/src/log.rs
@@ -0,0 +1,18 @@
+// Re-export rtt_target so that it is available to the macro user
+#[cfg(feature = "rtt")]
+pub use ::rtt_target;
+
+/// Macro to log over RTT if `rtt` feature is set, otherwise noop
+#[macro_export]
+macro_rules! log {
+ ($($arg:tt)*) => { #[cfg(feature="rtt")] {$crate::log::rtt_target::rprintln!($($arg)*) }};
+}
+
+// Make log macro usable in crate
+pub use log;
+
+pub fn rtt_init() {
+    #[cfg(feature = "rtt")]
+    rtt_target::rtt_init_print!();
+    log!("RTT Initialized");
+}
diff --git a/src/rust/vendor/bare-metal/.cargo-checksum.json b/src/rust/vendor/bare-metal/.cargo-checksum.json
new file mode 100644
index 000000000..07d855e26
--- /dev/null
+++ b/src/rust/vendor/bare-metal/.cargo-checksum.json
@@ -0,0 +1 @@
+{"files":{"CHANGELOG.md":"cbe525fd84e5a7141bcee4fe5ae0c7ff9e1e50da7996277f902859a90600b8e4","Cargo.toml":"fb997fae9de7404a3b148f83ac3f03e84a19d16870843167dadd80e988ac098f","LICENSE-APACHE":"a60eea817514531668d7e00765731449fe14d059d3249e0bc93b36de45f759f2","LICENSE-MIT":"035e70219855119df4273b3c5b97543ae82e0dd60c520416e759107c602f651b","README.md":"afa5b1c70325ec18dbfcea11faa65a3d83368c2c907c158f7acb39736b3e0d94","bors.toml":"b96eaac6b3dc8487a2bcc6cb415e745a28d9a61937090df48186c62d2b614aeb","build.rs":"9485deb6c0ab46ed05b1fabfb62518fa6f9fbdcf7a207d4cb6844dc6df70f4d7","ci/install.sh":"e295d97db9e12ac6ee3e523e4597ad58fedcca2b8aa3a21302951ad2327b88a9","ci/script.sh":"e2c28462deea39c9ea792fa7069b9afdb6f561901aa1878ea27046c8ad058e43","src/lib.rs":"9197c65b0daec25ebb5e8c5587f82affaa35757b785a539ebb35ce91dae98b7d"},"package":"5deb64efa5bd81e31fcd1938615a6d98c82eafcbcd787162b6f63b91d6bac5b3"}
\ No newline at end of file
diff --git a/src/rust/vendor/bare-metal/CHANGELOG.md b/src/rust/vendor/bare-metal/CHANGELOG.md
new file mode 100644
index 000000000..a06a869fc
--- /dev/null
+++ b/src/rust/vendor/bare-metal/CHANGELOG.md
@@ -0,0 +1,77 @@
+# Change Log
+
+All notable changes to this project will be documented in this file.
+This project adheres to [Semantic Versioning](http://semver.org/).
+
+## [Unreleased]
+
+## [v0.2.5] - 2019-08-29
+
+### Changed
+
+- The `const-fn` feature is now stable
+
+## [v0.2.4] - 2018-10-30
+
+### Added
+
+- Note in the documentation that `Mutex` is not memory safe in multi-core systems.
+
+### Changed
+
+- The `const-fn` feature can now be used on 1.31-beta and will also work on stable 1.31.
+
+## [v0.2.3] - 2018-08-17
+
+### Fixed
+
+- A compilation error when using a recent nightly while the "const-fn" feature was enabled.
+
+## [v0.2.2] - 2018-08-17 - YANKED
+
+### Fixed
+
+- A compilation error when using a recent nightly while the "const-fn" feature was enabled.
+
+## [v0.2.1] - 2018-08-03
+
+### Fixed
+
+- Soundness issue where it was possible to borrow the contents of a Mutex for longer than the
+ lifetime of the Mutex.
+
+## [v0.2.0] - 2018-05-10 - YANKED
+
+YANKED due to a soundness issue: see v0.2.1 for details
+
+### Changed
+
+- [breaking-change] `const-fn` is no longer a default feature (i.e. a feature that's enabled by
+ default). The consequence is that this crate now compiles on 1.27 (beta) by default, and opting
+ into `const-fn` requires nightly.
+
+## [v0.1.2] - 2018-04-24
+
+### Added
+
+- An opt-out "const-fn" Cargo feature. When this feature is disabled this crate compiles on stable.
+
+## [v0.1.1] - 2017-09-19
+
+### Fixed
+
+- Added feature gate to make this work on recent nightlies
+
+## v0.1.0 - 2017-07-06
+
+- Initial release
+
+[Unreleased]: https://github.com/japaric/bare-metal/compare/v0.2.5...HEAD
+[v0.2.5]: https://github.com/japaric/bare-metal/compare/v0.2.4...v0.2.5
+[v0.2.4]: https://github.com/japaric/bare-metal/compare/v0.2.3...v0.2.4
+[v0.2.3]: https://github.com/japaric/bare-metal/compare/v0.2.2...v0.2.3
+[v0.2.2]: https://github.com/japaric/bare-metal/compare/v0.2.1...v0.2.2
+[v0.2.1]: https://github.com/japaric/bare-metal/compare/v0.2.0...v0.2.1
+[v0.2.0]: https://github.com/japaric/bare-metal/compare/v0.1.2...v0.2.0
+[v0.1.2]: https://github.com/japaric/bare-metal/compare/v0.1.1...v0.1.2
+[v0.1.1]: https://github.com/japaric/bare-metal/compare/v0.1.0...v0.1.1
diff --git a/src/rust/vendor/bare-metal/Cargo.toml b/src/rust/vendor/bare-metal/Cargo.toml
new file mode 100644
index 000000000..0c57fd970
--- /dev/null
+++ b/src/rust/vendor/bare-metal/Cargo.toml
@@ -0,0 +1,27 @@
+# THIS FILE IS AUTOMATICALLY GENERATED BY CARGO
+#
+# When uploading crates to the registry Cargo will automatically
+# "normalize" Cargo.toml files for maximal compatibility
+# with all versions of Cargo and also rewrite `path` dependencies
+# to registry (e.g., crates.io) dependencies
+#
+# If you believe there's an error in this file please file an
+# issue against the rust-lang/cargo repository. If you're
+# editing this file be aware that the upstream Cargo.toml
+# will likely look very different (and much more reasonable)
+
+[package]
+name = "bare-metal"
+version = "0.2.5"
+authors = ["Jorge Aparicio "]
+description = "Abstractions common to bare metal systems"
+documentation = "https://docs.rs/bare-metal"
+keywords = ["bare-metal", "register", "peripheral", "interrupt"]
+categories = ["embedded", "hardware-support", "no-std"]
+license = "MIT OR Apache-2.0"
+repository = "https://github.com/japaric/bare-metal"
+[build-dependencies.rustc_version]
+version = "0.2.3"
+
+[features]
+const-fn = []
diff --git a/src/rust/vendor/bare-metal/LICENSE-APACHE b/src/rust/vendor/bare-metal/LICENSE-APACHE
new file mode 100644
index 000000000..16fe87b06
--- /dev/null
+++ b/src/rust/vendor/bare-metal/LICENSE-APACHE
@@ -0,0 +1,201 @@
+ Apache License
+ Version 2.0, January 2004
+ http://www.apache.org/licenses/
+
+TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+1. Definitions.
+
+ "License" shall mean the terms and conditions for use, reproduction,
+ and distribution as defined by Sections 1 through 9 of this document.
+
+ "Licensor" shall mean the copyright owner or entity authorized by
+ the copyright owner that is granting the License.
+
+ "Legal Entity" shall mean the union of the acting entity and all
+ other entities that control, are controlled by, or are under common
+ control with that entity. For the purposes of this definition,
+ "control" means (i) the power, direct or indirect, to cause the
+ direction or management of such entity, whether by contract or
+ otherwise, or (ii) ownership of fifty percent (50%) or more of the
+ outstanding shares, or (iii) beneficial ownership of such entity.
+
+ "You" (or "Your") shall mean an individual or Legal Entity
+ exercising permissions granted by this License.
+
+ "Source" form shall mean the preferred form for making modifications,
+ including but not limited to software source code, documentation
+ source, and configuration files.
+
+ "Object" form shall mean any form resulting from mechanical
+ transformation or translation of a Source form, including but
+ not limited to compiled object code, generated documentation,
+ and conversions to other media types.
+
+ "Work" shall mean the work of authorship, whether in Source or
+ Object form, made available under the License, as indicated by a
+ copyright notice that is included in or attached to the work
+ (an example is provided in the Appendix below).
+
+ "Derivative Works" shall mean any work, whether in Source or Object
+ form, that is based on (or derived from) the Work and for which the
+ editorial revisions, annotations, elaborations, or other modifications
+ represent, as a whole, an original work of authorship. For the purposes
+ of this License, Derivative Works shall not include works that remain
+ separable from, or merely link (or bind by name) to the interfaces of,
+ the Work and Derivative Works thereof.
+
+ "Contribution" shall mean any work of authorship, including
+ the original version of the Work and any modifications or additions
+ to that Work or Derivative Works thereof, that is intentionally
+ submitted to Licensor for inclusion in the Work by the copyright owner
+ or by an individual or Legal Entity authorized to submit on behalf of
+ the copyright owner. For the purposes of this definition, "submitted"
+ means any form of electronic, verbal, or written communication sent
+ to the Licensor or its representatives, including but not limited to
+ communication on electronic mailing lists, source code control systems,
+ and issue tracking systems that are managed by, or on behalf of, the
+ Licensor for the purpose of discussing and improving the Work, but
+ excluding communication that is conspicuously marked or otherwise
+ designated in writing by the copyright owner as "Not a Contribution."
+
+ "Contributor" shall mean Licensor and any individual or Legal Entity
+ on behalf of whom a Contribution has been received by Licensor and
+ subsequently incorporated within the Work.
+
+2. Grant of Copyright License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ copyright license to reproduce, prepare Derivative Works of,
+ publicly display, publicly perform, sublicense, and distribute the
+ Work and such Derivative Works in Source or Object form.
+
+3. Grant of Patent License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ (except as stated in this section) patent license to make, have made,
+ use, offer to sell, sell, import, and otherwise transfer the Work,
+ where such license applies only to those patent claims licensable
+ by such Contributor that are necessarily infringed by their
+ Contribution(s) alone or by combination of their Contribution(s)
+ with the Work to which such Contribution(s) was submitted. If You
+ institute patent litigation against any entity (including a
+ cross-claim or counterclaim in a lawsuit) alleging that the Work
+ or a Contribution incorporated within the Work constitutes direct
+ or contributory patent infringement, then any patent licenses
+ granted to You under this License for that Work shall terminate
+ as of the date such litigation is filed.
+
+4. Redistribution. You may reproduce and distribute copies of the
+ Work or Derivative Works thereof in any medium, with or without
+ modifications, and in Source or Object form, provided that You
+ meet the following conditions:
+
+ (a) You must give any other recipients of the Work or
+ Derivative Works a copy of this License; and
+
+ (b) You must cause any modified files to carry prominent notices
+ stating that You changed the files; and
+
+ (c) You must retain, in the Source form of any Derivative Works
+ that You distribute, all copyright, patent, trademark, and
+ attribution notices from the Source form of the Work,
+ excluding those notices that do not pertain to any part of
+ the Derivative Works; and
+
+ (d) If the Work includes a "NOTICE" text file as part of its
+ distribution, then any Derivative Works that You distribute must
+ include a readable copy of the attribution notices contained
+ within such NOTICE file, excluding those notices that do not
+ pertain to any part of the Derivative Works, in at least one
+ of the following places: within a NOTICE text file distributed
+ as part of the Derivative Works; within the Source form or
+ documentation, if provided along with the Derivative Works; or,
+ within a display generated by the Derivative Works, if and
+ wherever such third-party notices normally appear. The contents
+ of the NOTICE file are for informational purposes only and
+ do not modify the License. You may add Your own attribution
+ notices within Derivative Works that You distribute, alongside
+ or as an addendum to the NOTICE text from the Work, provided
+ that such additional attribution notices cannot be construed
+ as modifying the License.
+
+ You may add Your own copyright statement to Your modifications and
+ may provide additional or different license terms and conditions
+ for use, reproduction, or distribution of Your modifications, or
+ for any such Derivative Works as a whole, provided Your use,
+ reproduction, and distribution of the Work otherwise complies with
+ the conditions stated in this License.
+
+5. Submission of Contributions. Unless You explicitly state otherwise,
+ any Contribution intentionally submitted for inclusion in the Work
+ by You to the Licensor shall be under the terms and conditions of
+ this License, without any additional terms or conditions.
+ Notwithstanding the above, nothing herein shall supersede or modify
+ the terms of any separate license agreement you may have executed
+ with Licensor regarding such Contributions.
+
+6. Trademarks. This License does not grant permission to use the trade
+ names, trademarks, service marks, or product names of the Licensor,
+ except as required for reasonable and customary use in describing the
+ origin of the Work and reproducing the content of the NOTICE file.
+
+7. Disclaimer of Warranty. Unless required by applicable law or
+ agreed to in writing, Licensor provides the Work (and each
+ Contributor provides its Contributions) on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+ implied, including, without limitation, any warranties or conditions
+ of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
+ PARTICULAR PURPOSE. You are solely responsible for determining the
+ appropriateness of using or redistributing the Work and assume any
+ risks associated with Your exercise of permissions under this License.
+
+8. Limitation of Liability. In no event and under no legal theory,
+ whether in tort (including negligence), contract, or otherwise,
+ unless required by applicable law (such as deliberate and grossly
+ negligent acts) or agreed to in writing, shall any Contributor be
+ liable to You for damages, including any direct, indirect, special,
+ incidental, or consequential damages of any character arising as a
+ result of this License or out of the use or inability to use the
+ Work (including but not limited to damages for loss of goodwill,
+ work stoppage, computer failure or malfunction, or any and all
+ other commercial damages or losses), even if such Contributor
+ has been advised of the possibility of such damages.
+
+9. Accepting Warranty or Additional Liability. While redistributing
+ the Work or Derivative Works thereof, You may choose to offer,
+ and charge a fee for, acceptance of support, warranty, indemnity,
+ or other liability obligations and/or rights consistent with this
+ License. However, in accepting such obligations, You may act only
+ on Your own behalf and on Your sole responsibility, not on behalf
+ of any other Contributor, and only if You agree to indemnify,
+ defend, and hold each Contributor harmless for any liability
+ incurred by, or claims asserted against, such Contributor by reason
+ of your accepting any such warranty or additional liability.
+
+END OF TERMS AND CONDITIONS
+
+APPENDIX: How to apply the Apache License to your work.
+
+ To apply the Apache License to your work, attach the following
+ boilerplate notice, with the fields enclosed by brackets "[]"
+ replaced with your own identifying information. (Don't include
+ the brackets!) The text should be enclosed in the appropriate
+ comment syntax for the file format. We also recommend that a
+ file or class name and description of purpose be included on the
+ same "printed page" as the copyright notice for easier
+ identification within third-party archives.
+
+Copyright [yyyy] [name of copyright owner]
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
diff --git a/src/rust/vendor/bare-metal/LICENSE-MIT b/src/rust/vendor/bare-metal/LICENSE-MIT
new file mode 100644
index 000000000..a128ba402
--- /dev/null
+++ b/src/rust/vendor/bare-metal/LICENSE-MIT
@@ -0,0 +1,25 @@
+Copyright (c) 2017 Jorge Aparicio
+
+Permission is hereby granted, free of charge, to any
+person obtaining a copy of this software and associated
+documentation files (the "Software"), to deal in the
+Software without restriction, including without
+limitation the rights to use, copy, modify, merge,
+publish, distribute, sublicense, and/or sell copies of
+the Software, and to permit persons to whom the Software
+is furnished to do so, subject to the following
+conditions:
+
+The above copyright notice and this permission notice
+shall be included in all copies or substantial portions
+of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF
+ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED
+TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A
+PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
+SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY
+CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
+OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR
+IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
+DEALINGS IN THE SOFTWARE.
diff --git a/src/rust/vendor/bare-metal/README.md b/src/rust/vendor/bare-metal/README.md
new file mode 100644
index 000000000..b540f8859
--- /dev/null
+++ b/src/rust/vendor/bare-metal/README.md
@@ -0,0 +1,21 @@
+# `bare-metal`
+
+> Abstractions common to bare metal systems
+
+## [Change log](CHANGELOG.md)
+
+## License
+
+Licensed under either of
+
+- Apache License, Version 2.0 ([LICENSE-APACHE](LICENSE-APACHE) or
+ http://www.apache.org/licenses/LICENSE-2.0)
+- MIT license ([LICENSE-MIT](LICENSE-MIT) or http://opensource.org/licenses/MIT)
+
+at your option.
+
+### Contribution
+
+Unless you explicitly state otherwise, any contribution intentionally submitted
+for inclusion in the work by you, as defined in the Apache-2.0 license, shall be
+dual licensed as above, without any additional terms or conditions.
diff --git a/src/rust/vendor/bare-metal/bors.toml b/src/rust/vendor/bare-metal/bors.toml
new file mode 100644
index 000000000..5ccee21e0
--- /dev/null
+++ b/src/rust/vendor/bare-metal/bors.toml
@@ -0,0 +1,3 @@
+status = [
+ "continuous-integration/travis-ci/push",
+]
\ No newline at end of file
diff --git a/src/rust/vendor/bare-metal/build.rs b/src/rust/vendor/bare-metal/build.rs
new file mode 100644
index 000000000..f197f20f5
--- /dev/null
+++ b/src/rust/vendor/bare-metal/build.rs
@@ -0,0 +1,9 @@
+extern crate rustc_version;
+
+fn main() {
+    let vers = rustc_version::version().unwrap();
+
+    if vers.major == 1 && vers.minor < 31 {
+        println!("cargo:rustc-cfg=unstable_const_fn")
+    }
+}
diff --git a/src/rust/vendor/bare-metal/ci/install.sh b/src/rust/vendor/bare-metal/ci/install.sh
new file mode 100644
index 000000000..3c4192119
--- /dev/null
+++ b/src/rust/vendor/bare-metal/ci/install.sh
@@ -0,0 +1,9 @@
+set -euxo pipefail
+
+main() {
+    if [ $TARGET != x86_64-unknown-linux-gnu ]; then
+        rustup target add $TARGET
+    fi
+}
+
+main
diff --git a/src/rust/vendor/bare-metal/ci/script.sh b/src/rust/vendor/bare-metal/ci/script.sh
new file mode 100644
index 000000000..b0aec2299
--- /dev/null
+++ b/src/rust/vendor/bare-metal/ci/script.sh
@@ -0,0 +1,11 @@
+set -euxo pipefail
+
+main() {
+    cargo check --target $TARGET
+
+    if [ $TARGET = x86_64-unknown-linux-gnu ]; then
+        cargo test
+    fi
+}
+
+main
diff --git a/src/rust/vendor/bare-metal/src/lib.rs b/src/rust/vendor/bare-metal/src/lib.rs
new file mode 100644
index 000000000..47a6b8edc
--- /dev/null
+++ b/src/rust/vendor/bare-metal/src/lib.rs
@@ -0,0 +1,101 @@
+//! Abstractions common to bare metal systems
+
+#![deny(missing_docs)]
+#![deny(warnings)]
+#![no_std]
+
+use core::cell::UnsafeCell;
+
+/// A peripheral
+#[derive(Debug)]
+pub struct Peripheral<T>
+where
+    T: 'static,
+{
+    address: *mut T,
+}
+
+impl<T> Peripheral<T> {
+    /// Creates a new peripheral
+    ///
+    /// `address` is the base address of the register block
+    pub const unsafe fn new(address: usize) -> Self {
+        Peripheral {
+            address: address as *mut T,
+        }
+    }
+
+    /// Borrows the peripheral for the duration of a critical section
+    pub fn borrow<'cs>(&self, _ctxt: &'cs CriticalSection) -> &'cs T {
+        unsafe { &*self.get() }
+    }
+
+    /// Returns a pointer to the register block
+    pub fn get(&self) -> *mut T {
+        self.address as *mut T
+    }
+}
+
+/// Critical section token
+///
+/// Indicates that you are executing code within a critical section
+pub struct CriticalSection {
+    _0: (),
+}
+
+impl CriticalSection {
+    /// Creates a critical section token
+    ///
+    /// This method is meant to be used to create safe abstractions rather than
+    /// meant to be directly used in applications.
+    pub unsafe fn new() -> Self {
+        CriticalSection { _0: () }
+    }
+}
+
+/// A "mutex" based on critical sections
+///
+/// # Safety
+///
+/// **This Mutex is only safe on single-core systems.**
+///
+/// On multi-core systems, a `CriticalSection` **is not sufficient** to ensure exclusive access.
+pub struct Mutex<T> {
+    inner: UnsafeCell<T>,
+}
+
+impl<T> Mutex<T> {
+    /// Creates a new mutex
+    pub const fn new(value: T) -> Self {
+        Mutex {
+            inner: UnsafeCell::new(value),
+        }
+    }
+}
+
+impl<T> Mutex<T> {
+    /// Borrows the data for the duration of the critical section
+    pub fn borrow<'cs>(&'cs self, _cs: &'cs CriticalSection) -> &'cs T {
+        unsafe { &*self.inner.get() }
+    }
+}
+
+/// ``` compile_fail
+/// fn bad(cs: &bare_metal::CriticalSection) -> &u32 {
+/// let x = bare_metal::Mutex::new(42u32);
+/// x.borrow(cs)
+/// }
+/// ```
+#[allow(dead_code)]
+const GH_6: () = ();
+
+/// Interrupt number
+pub unsafe trait Nr {
+ /// Returns the number associated with an interrupt
+ fn nr(&self) -> u8;
+}
+
+// NOTE A `Mutex` can be used as a channel so the protected data must be `Send`
+// to prevent sending non-Sendable stuff (e.g. access tokens) across different
+// execution contexts (e.g. interrupts)
+unsafe impl<T> Sync for Mutex<T> where T: Send {}
diff --git a/src/rust/vendor/bitfield/.cargo-checksum.json b/src/rust/vendor/bitfield/.cargo-checksum.json
new file mode 100644
index 000000000..d0313d909
--- /dev/null
+++ b/src/rust/vendor/bitfield/.cargo-checksum.json
@@ -0,0 +1 @@
+{"files":{"CHANGELOG.md":"c4945aec76bc2731a0497b605f863a7a11390cadd1d0ff59ae2b2fa2d03f0dda","Cargo.toml":"f36c4d7ba9d81f7105f178eaee447b032ba2f8710e05c6e43fca4fd85e50b549","LICENSE-APACHE":"c6596eb7be8581c18be736c846fb9173b69eccf6ef94c5135893ec56bd92ba08","LICENSE-MIT":"af6b8d2c7ab89b819e3c2db77b572f145d14c8578dbd25015d739b30d4cc92f7","README.md":"0508f6529346eb36ac57497cc72c68e8e64e4f2aac7df2e9395582edfeead850","examples/bits_position.rs":"a00a3c79cb1d87e34e94372bb673a8a468e397ffca437acfbeda1228b2aa99e1","examples/ipv4.rs":"153d81430b512d2277c134c3b29e23924a23fb416c83a3eb010267e98ff0c30c","multitest.toml":"0ad084611444cc582d5421dfac4ef9e9893fd76a4a87d7132e80668f8531eafa","src/lib.rs":"4178acd8676440ac01dc406d23511ff77657d34231343528294d5c12dfa76290","tests/lib.rs":"8a1723a1e34287109cb807d3af318ab22869952d16cde633d6462faaa9e00886"},"package":"46afbd2983a5d5a7bd740ccb198caf5b82f45c40c09c0eed36052d91cb92e719"}
\ No newline at end of file
diff --git a/src/rust/vendor/bitfield/CHANGELOG.md b/src/rust/vendor/bitfield/CHANGELOG.md
new file mode 100644
index 000000000..837f22459
--- /dev/null
+++ b/src/rust/vendor/bitfield/CHANGELOG.md
@@ -0,0 +1,16 @@
+# Changelog
+All notable changes to this project will be documented in this file.
+
+The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
+and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
+
+## [Unreleased]
+
+## [0.13.2] - 2019-05-28
+
+### Added
+- `from into` can be used in place of `from` to change the input type of the setter. Thanks to @roblabla
+
+[Unreleased]: https://github.com/dzamlo/rust-bitfield/compare/v0.13.1...HEAD
+[0.13.2]: https://github.com/dzamlo/rust-bitfield/compare/v0.13.1...v0.13.2
+
diff --git a/src/rust/vendor/bitfield/Cargo.toml b/src/rust/vendor/bitfield/Cargo.toml
new file mode 100644
index 000000000..e2f4752d1
--- /dev/null
+++ b/src/rust/vendor/bitfield/Cargo.toml
@@ -0,0 +1,22 @@
+# THIS FILE IS AUTOMATICALLY GENERATED BY CARGO
+#
+# When uploading crates to the registry Cargo will automatically
+# "normalize" Cargo.toml files for maximal compatibility
+# with all versions of Cargo and also rewrite `path` dependencies
+# to registry (e.g., crates.io) dependencies
+#
+# If you believe there's an error in this file please file an
+# issue against the rust-lang/cargo repository. If you're
+# editing this file be aware that the upstream Cargo.toml
+# will likely look very different (and much more reasonable)
+
+[package]
+name = "bitfield"
+version = "0.13.2"
+authors = ["Loïc Damien "]
+description = "This crate provides macros to generate bitfield-like struct."
+documentation = "https://docs.rs/bitfield"
+license = "MIT OR Apache-2.0"
+repository = "https://github.com/dzamlo/rust-bitfield"
+
+[dependencies]
diff --git a/src/rust/vendor/bitfield/LICENSE-APACHE b/src/rust/vendor/bitfield/LICENSE-APACHE
new file mode 100644
index 000000000..8f71f43fe
--- /dev/null
+++ b/src/rust/vendor/bitfield/LICENSE-APACHE
@@ -0,0 +1,202 @@
+ Apache License
+ Version 2.0, January 2004
+ http://www.apache.org/licenses/
+
+ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+ 1. Definitions.
+
+ "License" shall mean the terms and conditions for use, reproduction,
+ and distribution as defined by Sections 1 through 9 of this document.
+
+ "Licensor" shall mean the copyright owner or entity authorized by
+ the copyright owner that is granting the License.
+
+ "Legal Entity" shall mean the union of the acting entity and all
+ other entities that control, are controlled by, or are under common
+ control with that entity. For the purposes of this definition,
+ "control" means (i) the power, direct or indirect, to cause the
+ direction or management of such entity, whether by contract or
+ otherwise, or (ii) ownership of fifty percent (50%) or more of the
+ outstanding shares, or (iii) beneficial ownership of such entity.
+
+ "You" (or "Your") shall mean an individual or Legal Entity
+ exercising permissions granted by this License.
+
+ "Source" form shall mean the preferred form for making modifications,
+ including but not limited to software source code, documentation
+ source, and configuration files.
+
+ "Object" form shall mean any form resulting from mechanical
+ transformation or translation of a Source form, including but
+ not limited to compiled object code, generated documentation,
+ and conversions to other media types.
+
+ "Work" shall mean the work of authorship, whether in Source or
+ Object form, made available under the License, as indicated by a
+ copyright notice that is included in or attached to the work
+ (an example is provided in the Appendix below).
+
+ "Derivative Works" shall mean any work, whether in Source or Object
+ form, that is based on (or derived from) the Work and for which the
+ editorial revisions, annotations, elaborations, or other modifications
+ represent, as a whole, an original work of authorship. For the purposes
+ of this License, Derivative Works shall not include works that remain
+ separable from, or merely link (or bind by name) to the interfaces of,
+ the Work and Derivative Works thereof.
+
+ "Contribution" shall mean any work of authorship, including
+ the original version of the Work and any modifications or additions
+ to that Work or Derivative Works thereof, that is intentionally
+ submitted to Licensor for inclusion in the Work by the copyright owner
+ or by an individual or Legal Entity authorized to submit on behalf of
+ the copyright owner. For the purposes of this definition, "submitted"
+ means any form of electronic, verbal, or written communication sent
+ to the Licensor or its representatives, including but not limited to
+ communication on electronic mailing lists, source code control systems,
+ and issue tracking systems that are managed by, or on behalf of, the
+ Licensor for the purpose of discussing and improving the Work, but
+ excluding communication that is conspicuously marked or otherwise
+ designated in writing by the copyright owner as "Not a Contribution."
+
+ "Contributor" shall mean Licensor and any individual or Legal Entity
+ on behalf of whom a Contribution has been received by Licensor and
+ subsequently incorporated within the Work.
+
+ 2. Grant of Copyright License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ copyright license to reproduce, prepare Derivative Works of,
+ publicly display, publicly perform, sublicense, and distribute the
+ Work and such Derivative Works in Source or Object form.
+
+ 3. Grant of Patent License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ (except as stated in this section) patent license to make, have made,
+ use, offer to sell, sell, import, and otherwise transfer the Work,
+ where such license applies only to those patent claims licensable
+ by such Contributor that are necessarily infringed by their
+ Contribution(s) alone or by combination of their Contribution(s)
+ with the Work to which such Contribution(s) was submitted. If You
+ institute patent litigation against any entity (including a
+ cross-claim or counterclaim in a lawsuit) alleging that the Work
+ or a Contribution incorporated within the Work constitutes direct
+ or contributory patent infringement, then any patent licenses
+ granted to You under this License for that Work shall terminate
+ as of the date such litigation is filed.
+
+ 4. Redistribution. You may reproduce and distribute copies of the
+ Work or Derivative Works thereof in any medium, with or without
+ modifications, and in Source or Object form, provided that You
+ meet the following conditions:
+
+ (a) You must give any other recipients of the Work or
+ Derivative Works a copy of this License; and
+
+ (b) You must cause any modified files to carry prominent notices
+ stating that You changed the files; and
+
+ (c) You must retain, in the Source form of any Derivative Works
+ that You distribute, all copyright, patent, trademark, and
+ attribution notices from the Source form of the Work,
+ excluding those notices that do not pertain to any part of
+ the Derivative Works; and
+
+ (d) If the Work includes a "NOTICE" text file as part of its
+ distribution, then any Derivative Works that You distribute must
+ include a readable copy of the attribution notices contained
+ within such NOTICE file, excluding those notices that do not
+ pertain to any part of the Derivative Works, in at least one
+ of the following places: within a NOTICE text file distributed
+ as part of the Derivative Works; within the Source form or
+ documentation, if provided along with the Derivative Works; or,
+ within a display generated by the Derivative Works, if and
+ wherever such third-party notices normally appear. The contents
+ of the NOTICE file are for informational purposes only and
+ do not modify the License. You may add Your own attribution
+ notices within Derivative Works that You distribute, alongside
+ or as an addendum to the NOTICE text from the Work, provided
+ that such additional attribution notices cannot be construed
+ as modifying the License.
+
+ You may add Your own copyright statement to Your modifications and
+ may provide additional or different license terms and conditions
+ for use, reproduction, or distribution of Your modifications, or
+ for any such Derivative Works as a whole, provided Your use,
+ reproduction, and distribution of the Work otherwise complies with
+ the conditions stated in this License.
+
+ 5. Submission of Contributions. Unless You explicitly state otherwise,
+ any Contribution intentionally submitted for inclusion in the Work
+ by You to the Licensor shall be under the terms and conditions of
+ this License, without any additional terms or conditions.
+ Notwithstanding the above, nothing herein shall supersede or modify
+ the terms of any separate license agreement you may have executed
+ with Licensor regarding such Contributions.
+
+ 6. Trademarks. This License does not grant permission to use the trade
+ names, trademarks, service marks, or product names of the Licensor,
+ except as required for reasonable and customary use in describing the
+ origin of the Work and reproducing the content of the NOTICE file.
+
+ 7. Disclaimer of Warranty. Unless required by applicable law or
+ agreed to in writing, Licensor provides the Work (and each
+ Contributor provides its Contributions) on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+ implied, including, without limitation, any warranties or conditions
+ of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
+ PARTICULAR PURPOSE. You are solely responsible for determining the
+ appropriateness of using or redistributing the Work and assume any
+ risks associated with Your exercise of permissions under this License.
+
+ 8. Limitation of Liability. In no event and under no legal theory,
+ whether in tort (including negligence), contract, or otherwise,
+ unless required by applicable law (such as deliberate and grossly
+ negligent acts) or agreed to in writing, shall any Contributor be
+ liable to You for damages, including any direct, indirect, special,
+ incidental, or consequential damages of any character arising as a
+ result of this License or out of the use or inability to use the
+ Work (including but not limited to damages for loss of goodwill,
+ work stoppage, computer failure or malfunction, or any and all
+ other commercial damages or losses), even if such Contributor
+ has been advised of the possibility of such damages.
+
+ 9. Accepting Warranty or Additional Liability. While redistributing
+ the Work or Derivative Works thereof, You may choose to offer,
+ and charge a fee for, acceptance of support, warranty, indemnity,
+ or other liability obligations and/or rights consistent with this
+ License. However, in accepting such obligations, You may act only
+ on Your own behalf and on Your sole responsibility, not on behalf
+ of any other Contributor, and only if You agree to indemnify,
+ defend, and hold each Contributor harmless for any liability
+ incurred by, or claims asserted against, such Contributor by reason
+ of your accepting any such warranty or additional liability.
+
+ END OF TERMS AND CONDITIONS
+
+ APPENDIX: How to apply the Apache License to your work.
+
+ To apply the Apache License to your work, attach the following
+ boilerplate notice, with the fields enclosed by brackets "{}"
+ replaced with your own identifying information. (Don't include
+ the brackets!) The text should be enclosed in the appropriate
+ comment syntax for the file format. We also recommend that a
+ file or class name and description of purpose be included on the
+ same "printed page" as the copyright notice for easier
+ identification within third-party archives.
+
+ Copyright {yyyy} {name of copyright owner}
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
diff --git a/src/rust/vendor/bitfield/LICENSE-MIT b/src/rust/vendor/bitfield/LICENSE-MIT
new file mode 100644
index 000000000..b964553c0
--- /dev/null
+++ b/src/rust/vendor/bitfield/LICENSE-MIT
@@ -0,0 +1,19 @@
+Copyright (c) 2017 Loïc Damien
+
+Permission is hereby granted, free of charge, to any person obtaining a copy
+of this software and associated documentation files (the "Software"), to deal
+in the Software without restriction, including without limitation the rights
+to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+copies of the Software, and to permit persons to whom the Software is
+furnished to do so, subject to the following conditions:
+
+The above copyright notice and this permission notice shall be included in all
+copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+SOFTWARE.
diff --git a/src/rust/vendor/bitfield/README.md b/src/rust/vendor/bitfield/README.md
new file mode 100644
index 000000000..ac327053a
--- /dev/null
+++ b/src/rust/vendor/bitfield/README.md
@@ -0,0 +1,57 @@
+rust-bitfield
+=============
+
+This crate provides macros to generate bitfield-like structs.
+
+This is a complete rewrite of the `bitfield` crate.
+You can find the previous version in the [rust-bitfield-legacy](https://github.com/dzamlo/rust-bitfield-legacy) repository. This version works on the stable version of rustc and uses a different syntax with different possibilities.
+
+
+## Example
+
+An IPv4 header can be described like this:
+
+```rust
+bitfield!{
+ struct IpV4Header(MSB0 [u8]);
+ u32;
+ get_version, _: 3, 0;
+ get_ihl, _: 7, 4;
+ get_dscp, _: 13, 8;
+ get_ecn, _: 15, 14;
+ get_total_length, _: 31, 16;
+ get_identification, _: 47, 32;
+ get_df, _: 49;
+ get_mf, _: 50;
+ get_fragment_offset, _: 63, 51;
+ get_time_to_live, _: 71, 64;
+ get_protocol, _: 79, 72;
+ get_header_checksum, _: 95, 80;
+ get_source_address, _: 127, 96;
+ get_destination_address, _: 159, 128;
+}
+```
+
+In this example, all the fields are read-only: the `_` in place of a setter name tells the macro to skip generating a setter method.
+The bit range at the end (e.g. `3, 0`) gives the most and least significant bit positions where the field is encoded.
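As a sketch of what such an MSB0 getter does under the hood (illustrative plain Rust, not the macro's actual expansion; the helper name `bit_range_msb0` is hypothetical): bit 0 is the most significant bit of the first byte, and the bits from `lsb` to `msb` are shifted into an integer in order.

```rust
// Hypothetical helper mirroring the crate's MSB0 bit-range semantics:
// bit index 0 is the MSB of byte 0; bits lsb..=msb accumulate MSB-first.
fn bit_range_msb0(bytes: &[u8], msb: usize, lsb: usize) -> u32 {
    let mut value = 0u32;
    for i in lsb..=msb {
        value <<= 1;
        value |= ((bytes[i / 8] >> (7 - i % 8)) & 1) as u32;
    }
    value
}

fn main() {
    // 0x45 is the first byte of a typical IPv4 header: version = 4, IHL = 5.
    let header = [0x45u8];
    assert_eq!(bit_range_msb0(&header, 3, 0), 4); // like get_version
    assert_eq!(bit_range_msb0(&header, 7, 4), 5); // like get_ihl
    println!("ok");
}
```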
+
+## Documentation
+
+The documentation of the released version is available on [doc.rs](https://docs.rs/bitfield).
+
+
+## License
+
+Licensed under either of
+
+ * Apache License, Version 2.0, ([LICENSE-APACHE](LICENSE-APACHE) or http://www.apache.org/licenses/LICENSE-2.0)
+ * MIT license ([LICENSE-MIT](LICENSE-MIT) or http://opensource.org/licenses/MIT)
+
+at your option.
+
+### Contribution
+
+Unless you explicitly state otherwise, any contribution intentionally
+submitted for inclusion in the work by you, as defined in the Apache-2.0
+license, shall be dual licensed as above, without any additional terms or
+conditions.
diff --git a/src/rust/vendor/bitfield/examples/bits_position.rs b/src/rust/vendor/bitfield/examples/bits_position.rs
new file mode 100644
index 000000000..0bcf360e8
--- /dev/null
+++ b/src/rust/vendor/bitfield/examples/bits_position.rs
@@ -0,0 +1,75 @@
+#[macro_use]
+extern crate bitfield;
+
+use bitfield::Bit;
+use bitfield::BitRange;
+
+bitfield! {
+ struct BitsLocations([u8]);
+}
+
+bitfield! {
+ struct BitsLocationsMsb0(MSB0 [u8]);
+}
+
+fn println_slice_bits(slice: &[u8]) {
+ if slice.is_empty() {
+ println!("[]");
+ } else {
+ print!("[{:08b}", slice[0]);
+
+ for byte in &slice[1..] {
+ print!(", {:08b}", byte);
+ }
+
+ println!("]");
+ }
+}
+
+fn main() {
+ let mut bits_locations = BitsLocations([0; 3]);
+ let mut bits_locations_msb0 = BitsLocationsMsb0([0; 3]);
+
+ println!("Default version:");
+ for i in 0..(3 * 8) {
+ bits_locations.set_bit(i, true);
+ print!("{:2}: ", i);
+ println_slice_bits(&bits_locations.0);
+ bits_locations.set_bit(i, false);
+ }
+
+ for i in 0..(3 * 8 - 3) {
+ let msb = i + 3;
+ let lsb = i;
+ for value in &[0b1111u8, 0b0001, 0b1000] {
+ bits_locations.set_bit_range(msb, lsb, *value);
+ print!("{:2} - {:2} ({:04b}): ", msb, lsb, value);
+ println_slice_bits(&bits_locations.0);
+ }
+ println!();
+ bits_locations.set_bit_range(msb, lsb, 0u8);
+ }
+
+ println!("MSB0 version:");
+
+ for i in 0..(3 * 8) {
+ bits_locations_msb0.set_bit(i, true);
+ print!("{:2}: ", i);
+ println_slice_bits(&bits_locations_msb0.0);
+
+ bits_locations_msb0.set_bit(i, false);
+ }
+
+ for i in 0..(3 * 8 - 3) {
+ let msb = i + 3;
+ let lsb = i;
+ for value in &[0b1111u8, 0b0001, 0b1000] {
+ bits_locations_msb0.set_bit_range(msb, lsb, *value);
+ print!("{:2} - {:2} ({:04b}): ", msb, lsb, value);
+ println_slice_bits(&bits_locations_msb0.0);
+ }
+ println!();
+
+ bits_locations_msb0.set_bit_range(msb, lsb, 0u8);
+ }
+}
diff --git a/src/rust/vendor/bitfield/examples/ipv4.rs b/src/rust/vendor/bitfield/examples/ipv4.rs
new file mode 100644
index 000000000..93cedcdbf
--- /dev/null
+++ b/src/rust/vendor/bitfield/examples/ipv4.rs
@@ -0,0 +1,60 @@
+#![allow(dead_code)]
+
+#[macro_use]
+extern crate bitfield;
+
+use std::net::Ipv4Addr;
+
+bitfield! {
+ struct IpV4Header(MSB0 [u8]);
+ impl Debug;
+ u32;
+ get_version, _: 3, 0;
+ get_ihl, _: 7, 4;
+ get_dscp, _: 13, 8;
+ get_ecn, _: 15, 14;
+ get_total_length, _: 31, 16;
+ get_identification, _: 47, 32;
+ get_df, _: 49;
+ get_mf, _: 50;
+ get_fragment_offset, _: 63, 51;
+ get_time_to_live, _: 71, 64;
+ get_protocol, _: 79, 72;
+ get_header_checksum, _: 95, 80;
+ u8, get_source_address, _: 103, 96, 4;
+ u32, into Ipv4Addr, get_destination_address, _: 159, 128;
+}
+
+impl<T: AsRef<[u8]> + AsMut<[u8]>> IpV4Header<T> {
+ fn get_source_as_ip_addr(&self) -> Ipv4Addr {
+ let mut src = [0; 4];
+ for (i, src) in src.iter_mut().enumerate() {
+ *src = self.get_source_address(i);
+ }
+ src.into()
+ }
+}
+
+fn main() {
+ let data = [
+ 0x45, 0x00, 0x00, 0x40, 0x69, 0x27, 0x40, 0x00, 0x40, 0x11, 0x4d, 0x0d, 0xc0, 0xa8, 0x01,
+ 0x2a, 0xc0, 0xa8, 0x01, 0xfe,
+ ];
+
+ let header = IpV4Header(data);
+
+ assert_eq!(header.get_version(), 4);
+ assert_eq!(header.get_total_length(), 64);
+ assert_eq!(header.get_identification(), 0x6927);
+ assert!(header.get_df());
+ assert!(!header.get_mf());
+ assert_eq!(header.get_fragment_offset(), 0);
+ assert_eq!(header.get_protocol(), 0x11);
+ println!(
+ "from {} to {}",
+ header.get_source_as_ip_addr(),
+ header.get_destination_address()
+ );
+
+ println!("{:#?}", header);
+}
diff --git a/src/rust/vendor/bitfield/multitest.toml b/src/rust/vendor/bitfield/multitest.toml
new file mode 100644
index 000000000..bb62f253a
--- /dev/null
+++ b/src/rust/vendor/bitfield/multitest.toml
@@ -0,0 +1,22 @@
+[[tests]]
+name = "cargo-test-{{toolchain}}"
+command = ["cargo", "+{{toolchain}}", "test", "--all", "--frozen"]
+
+[[tests.env]]
+name = "CARGO_TARGET_DIR"
+value = "target/{{name}}"
+
+[tests.variables]
+toolchain = ["stable", "beta", "nightly", "1.26.0"]
+
+[[tests]]
+name = "cargo-clippy"
+command = ["cargo", "+nightly", "clippy", "--all", "--frozen", "--all-targets", "--", "-D", "warnings"]
+
+[[tests.env]]
+name = "CARGO_TARGET_DIR"
+value = "target/cargo-test-nightly"
+
+[[tests]]
+name = "cargo-fmt"
+command = ["cargo", "fmt", "--all", "--", "--check"]
diff --git a/src/rust/vendor/bitfield/src/lib.rs b/src/rust/vendor/bitfield/src/lib.rs
new file mode 100644
index 000000000..749df7762
--- /dev/null
+++ b/src/rust/vendor/bitfield/src/lib.rs
@@ -0,0 +1,668 @@
+#![no_std]
+#![deny(
+ missing_docs,
+ unused_extern_crates,
+ unused_import_braces,
+ unused_qualifications
+)]
+
+//! This crate provides macros to generate bitfield-like structs.
+//!
+//! See the documentation of the macros for how to use them.
+//!
+//! Examples and tests are also a great way to understand how to use these macros.
+
+/// Declares the fields of a struct.
+///
+/// This macro will generate the methods to access the fields of a bitfield. It must be called
+/// from an `impl` block for a type that implements the `BitRange` and/or the `Bit` traits
+/// (which traits are required depends on what types of fields are used).
+///
+/// The syntax of this macro is composed of declarations ended by semicolons. There are two types
+/// of declarations: default type, and fields.
+///
+/// A default type is just a type followed by a semicolon. This will affect all the following field
+/// declarations.
+///
+/// A field declaration is composed of the following:
+///
+/// * Optional attributes (`#[...]`), documentation comments (`///`) are attributes;
+/// * An optional pub keyword to make the methods public
+/// * An optional type followed by a comma
+/// * Optionally, the word `into` followed by a type, followed by a comma
+/// * The getter and setter idents, separated by a comma
+/// * A colon
+/// * One to three expressions of type `usize`
+///
+/// The attributes and pub will be applied to the two methods generated.
+///
+/// If the `into` part is used, the getter will convert the field after reading it.
+///
+/// The getter and setter idents can be `_` to not generate one of the two. For example, if the
+/// setter is `_`, the field will be read-only.
+///
+/// The expressions at the end are the bit positions. Their meaning depends on the number of
+/// expressions:
+///
+/// * One expression: the field is a single bit. The type is ignored and `bool` is used. The trait
+/// `Bit` is used.
+/// * Two expressions: `msb, lsb`, the field is composed of the bits from `msb` to `lsb`, included.
+/// * Three expressions: `msb, lsb, count`, the field is an array. The first element is composed of
+/// the bits from `msb` to `lsb`. The following elements are consecutive bit ranges of the same
+/// size.
+///
+/// # Example
+///
+/// ```rust
+/// # #[macro_use] extern crate bitfield;
+/// # fn main() {}
+/// # struct FooBar(u64);
+/// # bitfield_bitrange!{struct FooBar(u64)}
+/// # impl From<u32> for FooBar{ fn from(_: u32) -> FooBar {unimplemented!()}}
+/// # impl From<FooBar> for u32{ fn from(_: FooBar) -> u32 {unimplemented!()}}
+/// # impl FooBar {
+/// bitfield_fields!{
+/// // The default type will be `u64`
+/// u64;
+/// // `field1` is read-write, public; the methods are inline
+/// #[inline]
+/// pub field1, set_field1: 10, 0;
+/// // `field2` is read-only, private, and of type bool.
+/// field2, _ : 0;
+/// // `field3` will be read as a `u32` and then converted to `FooBar`.
+/// // The setter is not affected; it still needs a `u32` value.
+/// u32, into FooBar, field3, set_field3: 10, 0;
+/// // `field4` will be read as a `u32` and then converted to `FooBar`.
+/// // The setter will take a `FooBar` and convert it back to a `u32`.
+/// u32, from into FooBar, field4, set_field4: 10, 0;
+/// }
+/// # }
+/// ```
+#[macro_export(local_inner_macros)]
+macro_rules! bitfield_fields {
+ (@field $(#[$attribute:meta])* ($($vis:tt)*) $t:ty, $from:ty, $into:ty, _, $setter:ident: $msb:expr,
+ $lsb:expr, $count:expr) => {
+ $(#[$attribute])*
+ #[allow(unknown_lints)]
+ #[allow(eq_op)]
+ $($vis)* fn $setter(&mut self, index: usize, value: $from) {
+ use $crate::BitRange;
+ __bitfield_debug_assert!(index < $count);
+ let width = $msb - $lsb + 1;
+ let lsb = $lsb + index*width;
+ let msb = lsb + width - 1;
+ self.set_bit_range(msb, lsb, $crate::Into::<$t>::into(value));
+ }
+ };
+ (@field $(#[$attribute:meta])* ($($vis:tt)*) $t:ty, $from:ty, $into:ty, _, $setter:ident: $msb:expr,
+ $lsb:expr) => {
+ $(#[$attribute])*
+ $($vis)* fn $setter(&mut self, value: $from) {
+ use $crate::BitRange;
+ self.set_bit_range($msb, $lsb, $crate::Into::<$t>::into(value));
+ }
+ };
+ (@field $(#[$attribute:meta])* ($($vis:tt)*) $t:ty, $from:ty, $into:ty, _, $setter:ident: $bit:expr) => {
+ $(#[$attribute])*
+ $($vis)* fn $setter(&mut self, value: bool) {
+ use $crate::Bit;
+ self.set_bit($bit, value);
+ }
+ };
+ (@field $(#[$attribute:meta])* ($($vis:tt)*) $t:ty, $from:ty, $into:ty, $getter:ident, _: $msb:expr,
+ $lsb:expr, $count:expr) => {
+ $(#[$attribute])*
+ #[allow(unknown_lints)]
+ #[allow(eq_op)]
+ $($vis)* fn $getter(&self, index: usize) -> $into {
+ use $crate::BitRange;
+ __bitfield_debug_assert!(index < $count);
+ let width = $msb - $lsb + 1;
+ let lsb = $lsb + index*width;
+ let msb = lsb + width - 1;
+ let raw_value: $t = self.bit_range(msb, lsb);
+ $crate::Into::into(raw_value)
+ }
+ };
+ (@field $(#[$attribute:meta])* ($($vis:tt)*) $t:ty, $from:ty, $into:ty, $getter:ident, _: $msb:expr,
+ $lsb:expr) => {
+ $(#[$attribute])*
+ $($vis)* fn $getter(&self) -> $into {
+ use $crate::BitRange;
+ let raw_value: $t = self.bit_range($msb, $lsb);
+ $crate::Into::into(raw_value)
+ }
+ };
+ (@field $(#[$attribute:meta])* ($($vis:tt)*) $t:ty, $from:ty, $into:ty, $getter:ident, _: $bit:expr) => {
+ $(#[$attribute])*
+ $($vis)* fn $getter(&self) -> bool {
+ use $crate::Bit;
+ self.bit($bit)
+ }
+ };
+ (@field $(#[$attribute:meta])* ($($vis:tt)*) $t:ty, $from:ty, $into:ty, $getter:ident, $setter:ident:
+ $($exprs:expr),*) => {
+ bitfield_fields!(@field $(#[$attribute])* ($($vis)*) $t, $from, $into, $getter, _: $($exprs),*);
+ bitfield_fields!(@field $(#[$attribute])* ($($vis)*) $t, $from, $into, _, $setter: $($exprs),*);
+ };
+
+ ($t:ty;) => {};
+ ($default_ty:ty; pub $($rest:tt)*) => {
+ bitfield_fields!{$default_ty; () pub $($rest)*}
+ };
+ ($default_ty:ty; #[$attribute:meta] $($rest:tt)*) => {
+ bitfield_fields!{$default_ty; (#[$attribute]) $($rest)*}
+ };
+ ($default_ty:ty; ($(#[$attributes:meta])*) #[$attribute:meta] $($rest:tt)*) => {
+ bitfield_fields!{$default_ty; ($(#[$attributes])* #[$attribute]) $($rest)*}
+ };
+ ($default_ty:ty; ($(#[$attribute:meta])*) pub $t:ty, from into $into:ty, $getter:tt, $setter:tt:
+ $($exprs:expr),*; $($rest:tt)*) => {
+ bitfield_fields!{@field $(#[$attribute])* (pub) $t, $into, $into, $getter, $setter: $($exprs),*}
+ bitfield_fields!{$default_ty; $($rest)*}
+ };
+ ($default_ty:ty; ($(#[$attribute:meta])*) pub $t:ty, into $into:ty, $getter:tt, $setter:tt:
+ $($exprs:expr),*; $($rest:tt)*) => {
+ bitfield_fields!{@field $(#[$attribute])* (pub) $t, $t, $into, $getter, $setter: $($exprs),*}
+ bitfield_fields!{$default_ty; $($rest)*}
+ };
+ ($default_ty:ty; ($(#[$attribute:meta])*) pub $t:ty, $getter:tt, $setter:tt: $($exprs:expr),*;
+ $($rest:tt)*) => {
+ bitfield_fields!{@field $(#[$attribute])* (pub) $t, $t, $t, $getter, $setter: $($exprs),*}
+ bitfield_fields!{$default_ty; $($rest)*}
+ };
+ ($default_ty:ty; ($(#[$attribute:meta])*) pub from into $into:ty, $getter:tt, $setter:tt:
+ $($exprs:expr),*; $($rest:tt)*) => {
+ bitfield_fields!{@field $(#[$attribute])* (pub) $default_ty, $into, $into, $getter, $setter:
+ $($exprs),*}
+ bitfield_fields!{$default_ty; $($rest)*}
+ };
+ ($default_ty:ty; ($(#[$attribute:meta])*) pub into $into:ty, $getter:tt, $setter:tt:
+ $($exprs:expr),*; $($rest:tt)*) => {
+ bitfield_fields!{@field $(#[$attribute])* (pub) $default_ty, $default_ty, $into, $getter, $setter:
+ $($exprs),*}
+ bitfield_fields!{$default_ty; $($rest)*}
+ };
+ ($default_ty:ty; ($(#[$attribute:meta])*) pub $getter:tt, $setter:tt: $($exprs:expr),*;
+ $($rest:tt)*) => {
+ bitfield_fields!{@field $(#[$attribute])* (pub) $default_ty, $default_ty, $default_ty, $getter, $setter:
+ $($exprs),*}
+ bitfield_fields!{$default_ty; $($rest)*}
+ };
+
+ ($default_ty:ty; ($(#[$attribute:meta])*) $t:ty, from into $into:ty, $getter:tt, $setter:tt:
+ $($exprs:expr),*; $($rest:tt)*) => {
+ bitfield_fields!{@field $(#[$attribute])* () $t, $into, $into, $getter, $setter: $($exprs),*}
+ bitfield_fields!{$default_ty; $($rest)*}
+ };
+
+ ($default_ty:ty; ($(#[$attribute:meta])*) $t:ty, into $into:ty, $getter:tt, $setter:tt:
+ $($exprs:expr),*; $($rest:tt)*) => {
+ bitfield_fields!{@field $(#[$attribute])* () $t, $t, $into, $getter, $setter: $($exprs),*}
+ bitfield_fields!{$default_ty; $($rest)*}
+ };
+
+ ($default_ty:ty; ($(#[$attribute:meta])*) $t:ty, $getter:tt, $setter:tt: $($exprs:expr),*;
+ $($rest:tt)*) => {
+ bitfield_fields!{@field $(#[$attribute])* () $t, $t, $t, $getter, $setter: $($exprs),*}
+ bitfield_fields!{$default_ty; $($rest)*}
+ };
+ ($default_ty:ty; ($(#[$attribute:meta])*) from into $into:ty, $getter:tt, $setter:tt:
+ $($exprs:expr),*; $($rest:tt)*) => {
+ bitfield_fields!{@field $(#[$attribute])* () $default_ty, $into, $into, $getter, $setter:
+ $($exprs),*}
+ bitfield_fields!{$default_ty; $($rest)*}
+ };
+ ($default_ty:ty; ($(#[$attribute:meta])*) into $into:ty, $getter:tt, $setter:tt:
+ $($exprs:expr),*; $($rest:tt)*) => {
+ bitfield_fields!{@field $(#[$attribute])* () $default_ty, $default_ty, $into, $getter, $setter:
+ $($exprs),*}
+ bitfield_fields!{$default_ty; $($rest)*}
+ };
+ ($default_ty:ty; ($(#[$attribute:meta])*) $getter:tt, $setter:tt: $($exprs:expr),*;
+ $($rest:tt)*) => {
+ bitfield_fields!{@field $(#[$attribute])* () $default_ty, $default_ty, $default_ty, $getter, $setter:
+ $($exprs),*}
+ bitfield_fields!{$default_ty; $($rest)*}
+ };
+ ($previous_default_ty:ty; $default_ty:ty; $($rest:tt)*) => {
+ bitfield_fields!{$default_ty; $($rest)*}
+ };
+ ($default_ty:ty; $($rest:tt)*) => {
+ bitfield_fields!{$default_ty; () $($rest)*}
+ };
+ ($($rest:tt)*) => {
+ bitfield_fields!{SET_A_DEFAULT_TYPE_OR_SPECIFY_THE_TYPE_FOR_EACH_FIELDS; $($rest)*}
+ }
+}
+
+/// Generates a `fmt::Debug` implementation.
+///
+/// This macro must be called from an `impl Debug for ...` block. It will generate the `fmt` method.
+///
+/// In most cases, you will not call this macro directly, but use `bitfield` instead.
+///
+/// The syntax is `struct TheNameOfTheStruct` followed by the syntax of `bitfield_fields`.
+///
+/// The write-only fields are ignored.
+///
+/// # Example
+///
+/// ```rust
+/// # #[macro_use] extern crate bitfield;
+/// struct FooBar(u32);
+/// bitfield_bitrange!{struct FooBar(u32)}
+/// impl FooBar{
+/// bitfield_fields!{
+/// u32;
+/// field1, _: 7, 0;
+/// field2, _: 31, 24;
+/// }
+/// }
+///
+/// impl std::fmt::Debug for FooBar {
+/// bitfield_debug!{
+/// struct FooBar;
+/// field1, _: 7, 0;
+/// field2, _: 31, 24;
+/// }
+/// }
+///
+/// fn main() {
+/// let foobar = FooBar(0x11223344);
+/// println!("{:?}", foobar);
+///
+/// }
+/// ```
+#[macro_export(local_inner_macros)]
+macro_rules! bitfield_debug {
+ (struct $name:ident; $($rest:tt)*) => {
+ fn fmt(&self, f: &mut $crate::fmt::Formatter) -> $crate::fmt::Result {
+ let mut debug_struct = f.debug_struct(__bitfield_stringify!($name));
+ debug_struct.field(".0", &self.0);
+ bitfield_debug!{debug_struct, self, $($rest)*}
+ debug_struct.finish()
+ }
+ };
+ ($debug_struct:ident, $self:ident, #[$attribute:meta] $($rest:tt)*) => {
+ bitfield_debug!{$debug_struct, $self, $($rest)*}
+ };
+ ($debug_struct:ident, $self:ident, pub $($rest:tt)*) => {
+ bitfield_debug!{$debug_struct, $self, $($rest)*}
+ };
+ ($debug_struct:ident, $self:ident, _, $setter:tt: $($exprs:expr),*; $($rest:tt)*) => {
+ bitfield_debug!{$debug_struct, $self, $($rest)*}
+ };
+ ($debug_struct:ident, $self:ident, $type:ty; $($rest:tt)*) => {
+ bitfield_debug!{$debug_struct, $self, $($rest)*}
+ };
+ ($debug_struct:ident, $self:ident, $getter:ident, $setter:tt: $msb:expr, $lsb:expr, $count:expr;
+ $($rest:tt)*) => {
+ let mut array = [$self.$getter(0); $count];
+ for (i, e) in (&mut array).into_iter().enumerate() {
+ *e = $self.$getter(i);
+ }
+ $debug_struct.field(__bitfield_stringify!($getter), &array);
+ bitfield_debug!{$debug_struct, $self, $($rest)*}
+ };
+ ($debug_struct:ident, $self:ident, $getter:ident, $setter:tt: $($exprs:expr),*; $($rest:tt)*)
+ => {
+ $debug_struct.field(__bitfield_stringify!($getter), &$self.$getter());
+ bitfield_debug!{$debug_struct, $self, $($rest)*}
+ };
+ ($debug_struct:ident, $self:ident, from into $into:ty, $($rest:tt)*) => {
+ bitfield_debug!{$debug_struct, $self, $($rest)*}
+ };
+ ($debug_struct:ident, $self:ident, into $into:ty, $($rest:tt)*) => {
+ bitfield_debug!{$debug_struct, $self, $($rest)*}
+ };
+ ($debug_struct:ident, $self:ident, $type:ty, $($rest:tt)*) => {
+ bitfield_debug!{$debug_struct, $self, $($rest)*}
+ };
+ ($debug_struct:ident, $self:ident, ) => {};
+}
+
+/// Implements `BitRange` for a tuple struct (or "newtype").
+///
+/// This macro will generate an implementation of the `BitRange` trait for an existing single
+/// element tuple struct.
+///
+/// The syntax is more or less the same as declaring a "newtype", **without** the attributes,
+/// documentation comments and pub keyword.
+///
+/// The difference from a normal "newtype" is the type in parentheses. If the type is `[t]` (where
+/// `t` is any of the unsigned integer types), the "newtype" will be generic and implement
+/// `BitRange` for `T: AsMut<[t]> + AsRef<[t]>` (for example a slice, an array or a `Vec`). You can
+/// also use `MSB0 [t]`; the difference will be the positions of the bits. You can use the
+/// `bits_position` example to see where each bit is. If the type is neither of these two, the
+/// "newtype" will wrap a value of the specified type and implement `BitRange` the same way as
+/// the wrapped type.
+///
+/// # Examples
+///
+/// ```rust
+/// # #[macro_use] extern crate bitfield;
+/// # fn main() {}
+/// struct BitField1(u32);
+/// bitfield_bitrange!{struct BitField1(u32)}
+///
+/// struct BitField2<T>(T);
+/// bitfield_bitrange!{struct BitField2([u8])}
+///
+/// struct BitField3<T>(T);
+/// bitfield_bitrange!{struct BitField3(MSB0 [u8])}
+/// ```
+///
+#[macro_export(local_inner_macros)]
+macro_rules! bitfield_bitrange {
+ (@impl_bitrange_slice $name:ident, $slice_ty:ty, $bitrange_ty:ty) => {
+ impl<T: AsMut<[$slice_ty]> + AsRef<[$slice_ty]>> $crate::BitRange<$bitrange_ty>
+ for $name<T> {
+ fn bit_range(&self, msb: usize, lsb: usize) -> $bitrange_ty {
+ let bit_len = $crate::size_of::<$slice_ty>()*8;
+ let value_bit_len = $crate::size_of::<$bitrange_ty>()*8;
+ let mut value = 0;
+ for i in (lsb..=msb).rev() {
+ value <<= 1;
+ value |= ((self.0.as_ref()[i/bit_len] >> (i%bit_len)) & 1) as $bitrange_ty;
+ }
+ value << (value_bit_len - (msb - lsb + 1)) >> (value_bit_len - (msb - lsb + 1))
+ }
+
+ fn set_bit_range(&mut self, msb: usize, lsb: usize, value: $bitrange_ty) {
+ let bit_len = $crate::size_of::<$slice_ty>()*8;
+ let mut value = value;
+ for i in lsb..=msb {
+ self.0.as_mut()[i/bit_len] &= !(1 << (i%bit_len));
+ self.0.as_mut()[i/bit_len] |= (value & 1) as $slice_ty << (i%bit_len);
+ value >>= 1;
+ }
+ }
+ }
+ };
+ (@impl_bitrange_slice_msb0 $name:ident, $slice_ty:ty, $bitrange_ty:ty) => {
+ impl<T: AsMut<[$slice_ty]> + AsRef<[$slice_ty]>> $crate::BitRange<$bitrange_ty>
+ for $name<T> {
+ fn bit_range(&self, msb: usize, lsb: usize) -> $bitrange_ty {
+ let bit_len = $crate::size_of::<$slice_ty>()*8;
+ let value_bit_len = $crate::size_of::<$bitrange_ty>()*8;
+ let mut value = 0;
+ for i in lsb..=msb {
+ value <<= 1;
+ value |= ((self.0.as_ref()[i/bit_len] >> (bit_len - i%bit_len - 1)) & 1)
+ as $bitrange_ty;
+ }
+ value << (value_bit_len - (msb - lsb + 1)) >> (value_bit_len - (msb - lsb + 1))
+ }
+
+ fn set_bit_range(&mut self, msb: usize, lsb: usize, value: $bitrange_ty) {
+ let bit_len = $crate::size_of::<$slice_ty>()*8;
+ let mut value = value;
+ for i in (lsb..=msb).rev() {
+ self.0.as_mut()[i/bit_len] &= !(1 << (bit_len - i%bit_len - 1));
+ self.0.as_mut()[i/bit_len] |= (value & 1) as $slice_ty
+ << (bit_len - i%bit_len - 1);
+ value >>= 1;
+ }
+ }
+ }
+ };
+ (struct $name:ident([$t:ty])) => {
+ bitfield_bitrange!(@impl_bitrange_slice $name, $t, u8);
+ bitfield_bitrange!(@impl_bitrange_slice $name, $t, u16);
+ bitfield_bitrange!(@impl_bitrange_slice $name, $t, u32);
+ bitfield_bitrange!(@impl_bitrange_slice $name, $t, u64);
+ bitfield_bitrange!(@impl_bitrange_slice $name, $t, u128);
+ bitfield_bitrange!(@impl_bitrange_slice $name, $t, i8);
+ bitfield_bitrange!(@impl_bitrange_slice $name, $t, i16);
+ bitfield_bitrange!(@impl_bitrange_slice $name, $t, i32);
+ bitfield_bitrange!(@impl_bitrange_slice $name, $t, i64);
+ bitfield_bitrange!(@impl_bitrange_slice $name, $t, i128);
+ };
+ (struct $name:ident(MSB0 [$t:ty])) => {
+ bitfield_bitrange!(@impl_bitrange_slice_msb0 $name, $t, u8);
+ bitfield_bitrange!(@impl_bitrange_slice_msb0 $name, $t, u16);
+ bitfield_bitrange!(@impl_bitrange_slice_msb0 $name, $t, u32);
+ bitfield_bitrange!(@impl_bitrange_slice_msb0 $name, $t, u64);
+ bitfield_bitrange!(@impl_bitrange_slice_msb0 $name, $t, u128);
+ bitfield_bitrange!(@impl_bitrange_slice_msb0 $name, $t, i8);
+ bitfield_bitrange!(@impl_bitrange_slice_msb0 $name, $t, i16);
+ bitfield_bitrange!(@impl_bitrange_slice_msb0 $name, $t, i32);
+ bitfield_bitrange!(@impl_bitrange_slice_msb0 $name, $t, i64);
+ bitfield_bitrange!(@impl_bitrange_slice_msb0 $name, $t, i128);
+ };
+ (struct $name:ident($t:ty)) => {
+ impl<T> $crate::BitRange<T> for $name where $t: $crate::BitRange<T> {
+ fn bit_range(&self, msb: usize, lsb: usize) -> T {
+ self.0.bit_range(msb, lsb)
+ }
+ fn set_bit_range(&mut self, msb: usize, lsb: usize, value: T) {
+ self.0.set_bit_range(msb, lsb, value);
+ }
+ }
+ };
+}
+
+/// Combines `bitfield_bitrange` and `bitfield_fields`.
+///
+/// The syntax of this macro is the syntax of a tuple struct, including attributes and
+/// documentation comments, followed by a semicolon, some optional elements, and finally the fields
+/// as described in the `bitfield_fields` documentation.
+///
+/// The first optional element is `no default BitRange;`. With that, no implementation of
+/// `BitRange` will be generated.
+///
+/// The second optional element is `impl Debug;`. This will generate an implementation of
+/// `fmt::Debug` with the `bitfield_debug` macro.
+///
+/// The difference with calling those macros separately is that `bitfield_fields` is called
+/// from an appropriate `impl` block. If you use the non-slice form of `bitfield_bitrange`, the
+/// default type for `bitfield_fields` will be set to the wrapped type.
+///
+/// See the documentation of these macros for more information on their respective syntax.
+///
+/// # Example
+///
+/// ```rust
+/// # #[macro_use] extern crate bitfield;
+/// # fn main() {}
+/// bitfield!{
+/// pub struct BitField1(u16);
+/// impl Debug;
+/// // The fields default to u16
+/// field1, set_field1: 10, 0;
+/// pub field2, _ : 12, 3;
+/// }
+/// ```
+///
+/// or with a custom `BitRange` implementation:
+/// ```rust
+/// # #[macro_use] extern crate bitfield;
+/// # use bitfield::BitRange;
+/// # fn main() {}
+/// bitfield!{
+/// pub struct BitField1(u16);
+/// no default BitRange;
+/// impl Debug;
+/// u8;
+/// field1, set_field1: 10, 0;
+/// pub field2, _ : 12, 3;
+/// }
+/// impl BitRange<u8> for BitField1 {
+/// fn bit_range(&self, msb: usize, lsb: usize) -> u8 {
+/// let width = msb - lsb + 1;
+/// let mask = (1 << width) - 1;
+/// ((self.0 >> lsb) & mask) as u8
+/// }
+/// fn set_bit_range(&mut self, msb: usize, lsb: usize, value: u8) {
+/// self.0 = (self.0 & !(((1u16 << (msb - lsb + 1)) - 1) << lsb)) | ((value as u16) << lsb);
+/// }
+/// }
+/// ```
+#[macro_export(local_inner_macros)]
+macro_rules! bitfield {
+ ($(#[$attribute:meta])* pub struct $($rest:tt)*) => {
+ bitfield!($(#[$attribute])* (pub) struct $($rest)*);
+ };
+ ($(#[$attribute:meta])* struct $($rest:tt)*) => {
+ bitfield!($(#[$attribute])* () struct $($rest)*);
+ };
+ // Force `impl Debug` to always be after `no default BitRange` if the two are present.
+ // This simplifies the rest of the macro.
+ ($(#[$attribute:meta])* ($($vis:tt)*) struct $name:ident($($type:tt)*); impl Debug; no default BitRange; $($rest:tt)*) => {
+ bitfield!{$(#[$attribute])* ($($vis)*) struct $name($($type)*); no default BitRange; impl Debug; $($rest)*}
+ };
+
+ // If we have `impl Debug` without `no default BitRange`, we will still match, because when
+ // we call `bitfield_bitrange`, we add `no default BitRange`.
+ ($(#[$attribute:meta])* ($($vis:tt)*) struct $name:ident([$t:ty]); no default BitRange; impl Debug; $($rest:tt)*) => {
+ impl<T: AsMut<[$t]> + AsRef<[$t]> + $crate::fmt::Debug> $crate::fmt::Debug for $name<T> {
+ bitfield_debug!{struct $name; $($rest)*}
+ }
+
+ bitfield!{$(#[$attribute])* ($($vis)*) struct $name([$t]); no default BitRange; $($rest)*}
+ };
+ ($(#[$attribute:meta])* ($($vis:tt)*) struct $name:ident([$t:ty]); no default BitRange; $($rest:tt)*) => {
+ $(#[$attribute])*
+ $($vis)* struct $name<T>(pub T);
+
+ impl<T: AsMut<[$t]> + AsRef<[$t]>> $name<T> {
+ bitfield_fields!{$($rest)*}
+ }
+ };
+ ($(#[$attribute:meta])* ($($vis:tt)*) struct $name:ident([$t:ty]); $($rest:tt)*) => {
+ bitfield_bitrange!(struct $name([$t]));
+ bitfield!{$(#[$attribute])* ($($vis)*) struct $name([$t]); no default BitRange; $($rest)*}
+ };
+
+ // The only difference between the MSB0 version and the non-MSB0 version is the BitRange
+ // implementation. We delegate everything else to the non-MSB0 version of the macro.
+ ($(#[$attribute:meta])* ($($vis:tt)*) struct $name:ident(MSB0 [$t:ty]); no default BitRange; $($rest:tt)*) => {
+ bitfield!{$(#[$attribute])* ($($vis)*) struct $name([$t]); no default BitRange; $($rest)*}
+ };
+ ($(#[$attribute:meta])* ($($vis:tt)*) struct $name:ident(MSB0 [$t:ty]); $($rest:tt)*) => {
+ bitfield_bitrange!(struct $name(MSB0 [$t]));
+ bitfield!{$(#[$attribute])* ($($vis)*) struct $name([$t]); no default BitRange; $($rest)*}
+ };
+
+ ($(#[$attribute:meta])* ($($vis:tt)*) struct $name:ident($t:ty); no default BitRange; impl Debug; $($rest:tt)*) => {
+ impl $crate::fmt::Debug for $name {
+ bitfield_debug!{struct $name; $($rest)*}
+ }
+
+ bitfield!{$(#[$attribute])* ($($vis)*) struct $name($t); no default BitRange; $($rest)*}
+ };
+ ($(#[$attribute:meta])* ($($vis:tt)*) struct $name:ident($t:ty); no default BitRange; $($rest:tt)*) => {
+ $(#[$attribute])*
+ $($vis)* struct $name(pub $t);
+
+ impl $name {
+ bitfield_fields!{$t; $($rest)*}
+ }
+ };
+ ($(#[$attribute:meta])* ($($vis:tt)*) struct $name:ident($t:ty); $($rest:tt)*) => {
+ bitfield_bitrange!(struct $name($t));
+ bitfield!{$(#[$attribute])* ($($vis)*) struct $name($t); no default BitRange; $($rest)*}
+ };
+}
+
+#[doc(hidden)]
+pub use core::convert::Into;
+#[doc(hidden)]
+pub use core::fmt;
+#[doc(hidden)]
+pub use core::mem::size_of;
+
+/// A trait to get or set ranges of bits.
+pub trait BitRange<T> {
+ /// Get a range of bits.
+ fn bit_range(&self, msb: usize, lsb: usize) -> T;
+ /// Set a range of bits.
+ fn set_bit_range(&mut self, msb: usize, lsb: usize, value: T);
+}
+
+/// A trait to get or set a single bit.
+///
+/// This trait is implemented for all types that implement `BitRange<u8>`.
+pub trait Bit {
+ /// Get a single bit.
+ fn bit(&self, bit: usize) -> bool;
+
+ /// Set a single bit.
+ fn set_bit(&mut self, bit: usize, value: bool);
+}
+
+impl<T: BitRange<u8>> Bit for T {
+ fn bit(&self, bit: usize) -> bool {
+ self.bit_range(bit, bit) != 0
+ }
+ fn set_bit(&mut self, bit: usize, value: bool) {
+ self.set_bit_range(bit, bit, value as u8);
+ }
+}
+
+macro_rules! impl_bitrange_for_u {
+ ($t:ty, $bitrange_ty:ty) => {
+ impl BitRange<$bitrange_ty> for $t {
+ #[inline]
+ #[allow(unknown_lints)]
+ #[allow(cast_lossless)]
+ fn bit_range(&self, msb: usize, lsb: usize) -> $bitrange_ty {
+ let bit_len = size_of::<$t>()*8;
+ let result_bit_len = size_of::<$bitrange_ty>()*8;
+ let result = ((*self << (bit_len - msb - 1)) >> (bit_len - msb - 1 + lsb))
+ as $bitrange_ty;
+ result << (result_bit_len - (msb - lsb + 1)) >> (result_bit_len - (msb - lsb + 1))
+ }
+
+ #[inline]
+ #[allow(unknown_lints)]
+ #[allow(cast_lossless)]
+ fn set_bit_range(&mut self, msb: usize, lsb: usize, value: $bitrange_ty) {
+ let bit_len = size_of::<$t>()*8;
+ let mask: $t = !(0 as $t)
+ << (bit_len - msb - 1)
+ >> (bit_len - msb - 1 + lsb)
+ << (lsb);
+ *self &= !mask;
+ *self |= (value as $t << lsb) & mask;
+ }
+ }
+ }
+}
+
+macro_rules! impl_bitrange_for_u_combinations {
+ ((),($($bitrange_ty:ty),*)) => {
+
+ };
+ (($t:ty),($($bitrange_ty:ty),*)) => {
+ $(impl_bitrange_for_u!{$t, $bitrange_ty})*
+ };
+ (($t_head:ty, $($t_rest:ty),*),($($bitrange_ty:ty),*)) => {
+ impl_bitrange_for_u_combinations!{($t_head), ($($bitrange_ty),*)}
+ impl_bitrange_for_u_combinations!{($($t_rest),*), ($($bitrange_ty),*)}
+ };
+}
+
+impl_bitrange_for_u_combinations! {(u8, u16, u32, u64, u128), (u8, u16, u32, u64, u128)}
+impl_bitrange_for_u_combinations! {(u8, u16, u32, u64, u128), (i8, i16, i32, i64, i128)}
+
+// Same as std::stringify but callable from local_inner_macros macros defined inside
+// this crate.
+#[macro_export]
+#[doc(hidden)]
+macro_rules! __bitfield_stringify {
+ ($s:ident) => {
+ stringify!($s)
+ };
+}
+
+// Same as std::debug_assert but callable from local_inner_macros macros defined inside
+// this crate.
+#[macro_export]
+#[doc(hidden)]
+macro_rules! __bitfield_debug_assert {
+ ($e:expr) => {
+ debug_assert!($e)
+ };
+}
diff --git a/src/rust/vendor/bitfield/tests/lib.rs b/src/rust/vendor/bitfield/tests/lib.rs
new file mode 100644
index 000000000..f762367f3
--- /dev/null
+++ b/src/rust/vendor/bitfield/tests/lib.rs
@@ -0,0 +1,1105 @@
+#![recursion_limit = "128"]
+
+#[macro_use]
+extern crate bitfield;
+
+// We use a constant to make sure bit positions don't need to be literals but
+// can also be constants or expressions.
+const THREE: usize = 3;
+
+#[derive(Copy, Clone, Debug)]
+pub struct Foo(u16);
+impl From<u8> for Foo {
+ fn from(value: u8) -> Foo {
+ Foo(u16::from(value))
+ }
+}
+
+impl From<Foo> for u8 {
+ fn from(value: Foo) -> u8 {
+ value.0 as u8
+ }
+}
+
+bitfield! {
+ #[derive(Copy, Clone)]
+ /// documentation comments also work!
+ struct FooBar(u32);
+ impl Debug;
+ foo1, set_foo1: 0, 0;
+ u8;
+ foo2, set_foo2: 31, 31;
+ foo3, set_foo3: THREE, 0;
+ // We make sure attributes are applied to fields. If attributes were not
+ // applied, the compilation would fail with a `duplicate definition`
+ // error.
+ #[cfg(not(test))]
+ foo3, set_foo3: 3, 0;
+ u16, foo4, set_foo4: 31, 28;
+ foo5, set_foo5: 0, 0, 32;
+ u32;
+ foo6, set_foo6: 5, THREE, THREE;
+ getter_only, _: 3, 1;
+ _, setter_only: 2*2, 2;
+ getter_only_array, _: 5, 3, 3;
+ _, setter_only_array: 2*THREE, 4, 3;
+ all_bits, set_all_bits: 31, 0;
+ single_bit, set_single_bit: 3;
+ u8, into Foo, into_foo1, set_into_foo1: 31, 31;
+ pub u8, into Foo, into_foo2, set_into_foo2: 31, 31;
+ u8, from into Foo, from_foo1, set_from_foo1: 31, 31;
+ u8, from into Foo, _, set_from_foo2: 31, 31;
+ u8;
+ into Foo, into_foo3, set_into_foo3: 31, 31;
+ pub into Foo, into_foo4, set_into_foo4: 31, 31;
+ into Foo, _, set_into_foo5: 31, 31;
+ into Foo, into_foo6, _: 29, 29, 3;
+ from into Foo, from_foo3, set_from_foo3: 31, 31;
+ from into Foo, _, set_from_foo4: 31, 31;
+ from into Foo, from_foo5, set_from_foo5: 29, 29, 3;
+ from into Foo, from_foo6, _: 31, 31;
+ i8;
+ signed_single_bit, set_signed_single_bit: 0, 0;
+ signed_two_bits, set_signed_two_bits: 1, 0;
+ signed_eight_bits, set_signed_eight_bits: 7, 0;
+ signed_eight_bits_unaligned, set_signed_eight_bits_unaligned: 8, 1;
+ u128, u128_getter, set_u128: 8, 1;
+ i128, i128_getter, set_i128: 8, 1;
+}
+
+impl FooBar {
+ bitfield_fields! {
+ // Boolean fields don't need a type
+ foo7, _: 1;
+ }
+
+ bitfield_fields! {
+ // If all fields have a type, we don't need to specify a default type
+ u8, foo8,_: 1, 0;
+ u32, foo9, _: 2, 0;
+ }
+
+ bitfield_fields! {
+ // We can still set a default type
+ u16;
+ foo10, _: 2, 0;
+ u32, foo11, _: 2, 0;
+ foo12, _: 2, 0;
+ }
+
+ // Check if an empty bitfield_fields compiles without errors.
+ bitfield_fields! {}
+}
+
+#[test]
+fn test_single_bit() {
+ let mut fb = FooBar(0);
+
+ fb.set_foo1(1);
+ assert_eq!(0x1, fb.0);
+ assert_eq!(0x1, fb.foo1());
+ assert_eq!(0x0, fb.foo2());
+ assert_eq!(false, fb.single_bit());
+ assert_eq!(-1, fb.signed_single_bit());
+
+ fb.set_foo2(1);
+ assert_eq!(0x8000_0001, fb.0);
+ assert_eq!(0x1, fb.foo1());
+ assert_eq!(0x1, fb.foo2());
+ assert_eq!(false, fb.single_bit());
+ assert_eq!(-1, fb.signed_single_bit());
+
+ fb.set_foo1(0);
+ assert_eq!(0x8000_0000, fb.0);
+ assert_eq!(0x0, fb.foo1());
+ assert_eq!(0x1, fb.foo2());
+ assert_eq!(false, fb.single_bit());
+ assert_eq!(0, fb.signed_single_bit());
+
+ fb.set_single_bit(true);
+ assert_eq!(0x8000_0008, fb.0);
+ assert_eq!(0x0, fb.foo1());
+ assert_eq!(0x1, fb.foo2());
+ assert_eq!(true, fb.single_bit());
+ assert_eq!(0, fb.signed_single_bit());
+
+ fb.set_signed_single_bit(-1);
+ assert_eq!(0x8000_0009, fb.0);
+ assert_eq!(0x1, fb.foo1());
+ assert_eq!(0x1, fb.foo2());
+ assert_eq!(true, fb.single_bit());
+ assert_eq!(-1, fb.signed_single_bit());
+}
+
+#[test]
+fn test_single_bit_plus_garbage() {
+ let mut fb = FooBar(0);
+
+ fb.set_foo1(0b10);
+ assert_eq!(0x0, fb.0);
+ assert_eq!(0x0, fb.foo1());
+ assert_eq!(0x0, fb.foo2());
+
+ fb.set_foo1(0b11);
+ assert_eq!(0x1, fb.0);
+ assert_eq!(0x1, fb.foo1());
+ assert_eq!(0x0, fb.foo2());
+}
+
+#[test]
+fn test_multiple_bit() {
+ let mut fb = FooBar(0);
+
+ fb.set_foo3(0x0F);
+ assert_eq!(0xF, fb.0);
+ assert_eq!(0xF, fb.foo3());
+ assert_eq!(0x0, fb.foo4());
+
+ fb.set_foo4(0x0F);
+ assert_eq!(0xF000_000F, fb.0);
+ assert_eq!(0xF, fb.foo3());
+ assert_eq!(0xF, fb.foo4());
+
+ fb.set_foo3(0);
+ assert_eq!(0xF000_0000, fb.0);
+ assert_eq!(0x0, fb.foo3());
+ assert_eq!(0xF, fb.foo4());
+
+ fb.set_foo3(0xA);
+ assert_eq!(0xF000_000A, fb.0);
+ assert_eq!(0xA, fb.foo3());
+ assert_eq!(0xF, fb.foo4());
+}
+
+#[test]
+fn test_getter_setter_only() {
+ let mut fb = FooBar(0);
+ fb.setter_only(0x7);
+ assert_eq!(0x1C, fb.0);
+ assert_eq!(0x6, fb.getter_only());
+}
+
+#[test]
+fn test_array_field1() {
+ let mut fb = FooBar(0);
+
+ fb.set_foo5(0, 1);
+ assert_eq!(0x1, fb.0);
+ assert_eq!(1, fb.foo5(0));
+
+ fb.set_foo5(0, 0);
+ assert_eq!(0x0, fb.0);
+ assert_eq!(0, fb.foo5(0));
+
+ fb.set_foo5(0, 1);
+ fb.set_foo5(6, 1);
+ fb.set_foo5(31, 1);
+ assert_eq!(0x8000_0041, fb.0);
+ assert_eq!(1, fb.foo5(0));
+ assert_eq!(1, fb.foo5(6));
+ assert_eq!(1, fb.foo5(31));
+ assert_eq!(0, fb.foo5(1));
+ assert_eq!(0, fb.foo5(5));
+ assert_eq!(0, fb.foo5(7));
+ assert_eq!(0, fb.foo5(30));
+}
+
+#[test]
+fn test_array_field2() {
+ let mut fb = FooBar(0);
+
+ fb.set_foo6(0, 1);
+ assert_eq!(0x8, fb.0);
+ assert_eq!(1, fb.foo6(0));
+ assert_eq!(0, fb.foo6(1));
+ assert_eq!(0, fb.foo6(2));
+
+ fb.set_foo6(0, 7);
+ assert_eq!(0x38, fb.0);
+ assert_eq!(7, fb.foo6(0));
+ assert_eq!(0, fb.foo6(1));
+ assert_eq!(0, fb.foo6(2));
+
+ fb.set_foo6(2, 7);
+ assert_eq!(0xE38, fb.0);
+ assert_eq!(7, fb.foo6(0));
+ assert_eq!(0, fb.foo6(1));
+ assert_eq!(7, fb.foo6(2));
+
+ fb.set_foo6(0, 0);
+ assert_eq!(0xE00, fb.0);
+ assert_eq!(0, fb.foo6(0));
+ assert_eq!(0, fb.foo6(1));
+ assert_eq!(7, fb.foo6(2));
+}
+
+#[allow(unknown_lints)]
+#[allow(identity_op)]
+#[allow(erasing_op)]
+#[test]
+fn test_setter_only_array() {
+ let mut fb = FooBar(0);
+
+ fb.setter_only_array(0, 0);
+ assert_eq!(0x0, fb.0);
+
+ fb.setter_only_array(0, 0b111);
+ assert_eq!(0b111 << (4 + 0 * 2), fb.0);
+
+ fb.setter_only_array(0, 0);
+ fb.setter_only_array(1, 0b111);
+ assert_eq!(0b111 << (4 + 1 * 3), fb.0);
+
+ fb.setter_only_array(1, 0);
+ fb.setter_only_array(2, 0b111);
+ assert_eq!(0b111 << (4 + 2 * 3), fb.0);
+}
+
+#[test]
+fn test_getter_only_array() {
+ let mut fb = FooBar(0);
+
+ assert_eq!(0, fb.getter_only_array(0));
+ assert_eq!(0, fb.getter_only_array(1));
+ assert_eq!(0, fb.getter_only_array(2));
+
+ fb.0 = !(0x1FF << 3);
+ assert_eq!(0, fb.getter_only_array(0));
+ assert_eq!(0, fb.getter_only_array(1));
+ assert_eq!(0, fb.getter_only_array(2));
+
+ fb.0 = 0xF << 3;
+ assert_eq!(0b111, fb.getter_only_array(0));
+ assert_eq!(0b001, fb.getter_only_array(1));
+ assert_eq!(0, fb.getter_only_array(2));
+
+ fb.0 = 0xF << 6;
+ assert_eq!(0, fb.getter_only_array(0));
+ assert_eq!(0b111, fb.getter_only_array(1));
+ assert_eq!(0b001, fb.getter_only_array(2));
+
+ fb.0 = 0xF << 8;
+ assert_eq!(0, fb.getter_only_array(0));
+ assert_eq!(0b100, fb.getter_only_array(1));
+ assert_eq!(0b111, fb.getter_only_array(2));
+
+ fb.0 = 0b101_010_110 << 3;
+ assert_eq!(0b110, fb.getter_only_array(0));
+ assert_eq!(0b010, fb.getter_only_array(1));
+ assert_eq!(0b101, fb.getter_only_array(2));
+}
+
+#[test]
+fn test_signed() {
+ let mut fb = FooBar(0);
+
+ assert_eq!(0, fb.signed_two_bits());
+ assert_eq!(0, fb.signed_eight_bits());
+ assert_eq!(0, fb.signed_eight_bits_unaligned());
+
+ fb.set_signed_two_bits(-2);
+ assert_eq!(0b10, fb.0);
+ assert_eq!(-2, fb.signed_two_bits());
+ assert_eq!(2, fb.signed_eight_bits());
+ assert_eq!(1, fb.signed_eight_bits_unaligned());
+
+ fb.set_signed_two_bits(-1);
+ assert_eq!(0b11, fb.0);
+ assert_eq!(-1, fb.signed_two_bits());
+ assert_eq!(3, fb.signed_eight_bits());
+ assert_eq!(1, fb.signed_eight_bits_unaligned());
+
+ fb.set_signed_two_bits(0);
+ assert_eq!(0, fb.0);
+ assert_eq!(0, fb.signed_two_bits());
+ assert_eq!(0, fb.signed_eight_bits());
+ assert_eq!(0, fb.signed_eight_bits_unaligned());
+
+ fb.set_signed_two_bits(1);
+ assert_eq!(1, fb.0);
+ assert_eq!(1, fb.signed_two_bits());
+ assert_eq!(1, fb.signed_eight_bits());
+ assert_eq!(0, fb.signed_eight_bits_unaligned());
+
+ fb.set_signed_eight_bits(0);
+ assert_eq!(0, fb.0);
+ assert_eq!(0, fb.signed_two_bits());
+ assert_eq!(0, fb.signed_eight_bits());
+ assert_eq!(0, fb.signed_eight_bits_unaligned());
+
+ fb.set_signed_eight_bits(-1);
+ assert_eq!(0xFF, fb.0);
+ assert_eq!(-1, fb.signed_two_bits());
+ assert_eq!(-1, fb.signed_eight_bits());
+ assert_eq!(127, fb.signed_eight_bits_unaligned());
+
+ fb.set_signed_eight_bits(-128);
+ assert_eq!(0x80, fb.0);
+ assert_eq!(0, fb.signed_two_bits());
+ assert_eq!(-128, fb.signed_eight_bits());
+ assert_eq!(64, fb.signed_eight_bits_unaligned());
+
+ fb.set_signed_eight_bits(127);
+ assert_eq!(0x7F, fb.0);
+ assert_eq!(-1, fb.signed_two_bits());
+ assert_eq!(127, fb.signed_eight_bits());
+ assert_eq!(63, fb.signed_eight_bits_unaligned());
+
+ fb.set_signed_eight_bits_unaligned(0);
+ assert_eq!(1, fb.0);
+ assert_eq!(1, fb.signed_two_bits());
+ assert_eq!(1, fb.signed_eight_bits());
+ assert_eq!(0, fb.signed_eight_bits_unaligned());
+
+ fb.set_signed_eight_bits(0);
+ fb.set_signed_eight_bits_unaligned(-1);
+ assert_eq!(0x1FE, fb.0);
+ assert_eq!(-2, fb.signed_two_bits());
+ assert_eq!(-2, fb.signed_eight_bits());
+ assert_eq!(-1, fb.signed_eight_bits_unaligned());
+
+ fb.set_signed_eight_bits_unaligned(-128);
+ assert_eq!(0x100, fb.0);
+ assert_eq!(0, fb.signed_two_bits());
+ assert_eq!(0, fb.signed_eight_bits());
+ assert_eq!(-128, fb.signed_eight_bits_unaligned());
+ fb.set_signed_eight_bits_unaligned(127);
+ assert_eq!(0xFE, fb.0);
+ assert_eq!(-2, fb.signed_two_bits());
+ assert_eq!(-2, fb.signed_eight_bits());
+ assert_eq!(127, fb.signed_eight_bits_unaligned());
+}
+
+#[test]
+fn test_field_type() {
+ let fb = FooBar(0);
+ let _: u32 = fb.foo1();
+ let _: u8 = fb.foo2();
+ let _: u8 = fb.foo3();
+ let _: u16 = fb.foo4();
+ let _: u8 = fb.foo5(0);
+ let _: u32 = fb.foo6(0);
+
+ let _: bool = fb.foo7();
+ let _: u8 = fb.foo8();
+ let _: u32 = fb.foo9();
+ let _: u16 = fb.foo10();
+ let _: u32 = fb.foo11();
+ let _: u16 = fb.foo12();
+
+ let _: Foo = fb.into_foo1();
+ let _: Foo = fb.into_foo2();
+ let _: Foo = fb.into_foo3();
+ let _: Foo = fb.into_foo4();
+ let _: Foo = fb.into_foo6(0);
+
+ let _: Foo = fb.from_foo1();
+ let _: Foo = fb.from_foo3();
+ let _: Foo = fb.from_foo5(0);
+
+ let _: i8 = fb.signed_single_bit();
+ let _: i8 = fb.signed_two_bits();
+ let _: i8 = fb.signed_eight_bits();
+ let _: i8 = fb.signed_eight_bits_unaligned();
+
+ let _: u128 = fb.u128_getter();
+ let _: i128 = fb.i128_getter();
+}
+
+#[test]
+fn test_into_setter() {
+ let mut fb = FooBar(0);
+
+ // We just check that the parameter type is correct
+ fb.set_into_foo1(0u8);
+ fb.set_into_foo2(0u8);
+ fb.set_into_foo3(0u8);
+ fb.set_into_foo4(0u8);
+}
+
+#[test]
+fn test_from_setter() {
+ let mut fb = FooBar(0);
+ assert_eq!(0, fb.0);
+
+ fb.set_from_foo1(Foo(1));
+ assert_eq!(1 << 31, fb.0);
+ fb.set_from_foo1(Foo(0));
+ assert_eq!(0, fb.0);
+
+ fb.set_from_foo2(Foo(1));
+ assert_eq!(1 << 31, fb.0);
+ fb.set_from_foo2(Foo(0));
+ assert_eq!(0, fb.0);
+
+ fb.set_from_foo3(Foo(1));
+ assert_eq!(1 << 31, fb.0);
+ fb.set_from_foo3(Foo(0));
+ assert_eq!(0, fb.0);
+
+ fb.set_from_foo4(Foo(1));
+ assert_eq!(1 << 31, fb.0);
+ fb.set_from_foo4(Foo(0));
+ assert_eq!(0, fb.0);
+
+ fb.set_from_foo5(1, Foo(1));
+ assert_eq!(1 << 30, fb.0);
+}
+
+#[test]
+fn test_all_bits() {
+ let mut fb = FooBar(0);
+
+ assert_eq!(0, fb.all_bits());
+
+ fb.set_all_bits(!0u32);
+ assert_eq!(!0u32, fb.0);
+ assert_eq!(!0u32, fb.all_bits());
+
+ fb.0 = 0x8000_0001;
+ assert_eq!(0x8000_0001, fb.all_bits());
+}
+
+#[test]
+fn test_is_copy() {
+ let a = FooBar(0);
+ let _b = a;
+ let _c = a;
+}
+
+#[test]
+fn test_debug() {
+ let fb = FooBar(1_234_567_890);
+ let expected = "FooBar { .0: 1234567890, foo1: 0, foo2: 0, foo3: 2, foo3: 2, foo4: 4, foo5: [0, 1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0, 1, 0, 0, 1, 0], foo6: [2, 3, 1], getter_only: 1, getter_only_array: [2, 3, 1], all_bits: 1234567890, single_bit: false, into_foo1: Foo(0), into_foo2: Foo(0), from_foo1: Foo(0), into_foo3: Foo(0), into_foo4: Foo(0), into_foo6: [Foo(0), Foo(1), Foo(0)], from_foo3: Foo(0), from_foo5: [Foo(0), Foo(1), Foo(0)], from_foo6: Foo(0), signed_single_bit: 0, signed_two_bits: -2, signed_eight_bits: -46, signed_eight_bits_unaligned: 105, u128_getter: 105, i128_getter: 105 }";
+ assert_eq!(expected, format!("{:?}", fb))
+}
+
+bitfield! {
+ struct ArrayBitfield([u8]);
+ u32;
+ foo1, set_foo1: 0, 0;
+ foo2, set_foo2: 7, 0;
+ foo3, set_foo3: 8, 1;
+ foo4, set_foo4: 19, 4;
+ i32;
+ signed_foo1, set_signed_foo1: 0, 0;
+ signed_foo2, set_signed_foo2: 7, 0;
+ signed_foo3, set_signed_foo3: 8, 1;
+ signed_foo4, set_signed_foo4: 19, 4;
+ u128, u128_getter, set_u128: 19, 4;
+}
+
+#[test]
+fn test_arraybitfield() {
+ let mut ab = ArrayBitfield([0; 3]);
+
+ assert_eq!(0u32, ab.foo1());
+ assert_eq!(0u32, ab.foo2());
+ assert_eq!(0u32, ab.foo3());
+ assert_eq!(0u32, ab.foo4());
+ assert_eq!(0i32, ab.signed_foo1());
+ assert_eq!(0i32, ab.signed_foo2());
+ assert_eq!(0i32, ab.signed_foo3());
+ assert_eq!(0i32, ab.signed_foo4());
+ assert_eq!(0u128, ab.u128_getter());
+
+ ab.set_foo1(1);
+ assert_eq!([1, 0, 0], ab.0);
+ assert_eq!(1, ab.foo1());
+ assert_eq!(1, ab.foo2());
+ assert_eq!(0, ab.foo3());
+ assert_eq!(0, ab.foo4());
+ assert_eq!(-1, ab.signed_foo1());
+ assert_eq!(1, ab.signed_foo2());
+ assert_eq!(0, ab.signed_foo3());
+ assert_eq!(0, ab.signed_foo4());
+ assert_eq!(0, ab.u128_getter());
+
+ ab.set_foo1(0);
+ ab.set_foo2(0xFF);
+ assert_eq!([0xFF, 0, 0], ab.0);
+ assert_eq!(1, ab.foo1());
+ assert_eq!(0xFF, ab.foo2());
+ assert_eq!(0x7F, ab.foo3());
+ assert_eq!(0x0F, ab.foo4());
+ assert_eq!(-1, ab.signed_foo1());
+ assert_eq!(-1, ab.signed_foo2());
+ assert_eq!(127, ab.signed_foo3());
+ assert_eq!(0x0F, ab.signed_foo4());
+ assert_eq!(0x0F, ab.u128_getter());
+
+ ab.set_foo2(0);
+ ab.set_foo3(0xFF);
+ assert_eq!([0xFE, 0x01, 0], ab.0);
+ assert_eq!(0, ab.foo1());
+ assert_eq!(0xFE, ab.foo2());
+ assert_eq!(0xFF, ab.foo3());
+ assert_eq!(0x1F, ab.foo4());
+ assert_eq!(0, ab.signed_foo1());
+ assert_eq!(-2, ab.signed_foo2());
+ assert_eq!(-1, ab.signed_foo3());
+ assert_eq!(0x1F, ab.signed_foo4());
+ assert_eq!(0x1F, ab.u128_getter());
+
+ ab.set_foo3(0);
+ ab.set_foo4(0xFFFF);
+ assert_eq!([0xF0, 0xFF, 0x0F], ab.0);
+ assert_eq!(0, ab.foo1());
+ assert_eq!(0xF0, ab.foo2());
+ assert_eq!(0xF8, ab.foo3());
+ assert_eq!(0xFFFF, ab.foo4());
+ assert_eq!(0, ab.signed_foo1());
+ assert_eq!(-16, ab.signed_foo2());
+ assert_eq!(-8, ab.signed_foo3());
+ assert_eq!(-1, ab.signed_foo4());
+ assert_eq!(0xFFFF, ab.u128_getter());
+
+ ab.set_foo4(0x0);
+ ab.set_signed_foo1(0);
+ assert_eq!([0x00, 0x00, 0x00], ab.0);
+
+ ab.set_signed_foo1(-1);
+ assert_eq!([0x01, 0x00, 0x00], ab.0);
+
+ ab.set_signed_foo1(0);
+ ab.set_signed_foo2(127);
+ assert_eq!([0x7F, 0x00, 0x00], ab.0);
+
+ ab.set_signed_foo2(-128);
+ assert_eq!([0x80, 0x00, 0x00], ab.0);
+
+ ab.set_signed_foo2(1);
+ assert_eq!([0x01, 0x00, 0x00], ab.0);
+
+ ab.set_signed_foo2(-1);
+ assert_eq!([0xFF, 0x00, 0x00], ab.0);
+
+ ab.set_signed_foo2(0);
+ ab.set_signed_foo3(127);
+ assert_eq!([0xFE, 0x00, 0x00], ab.0);
+
+ ab.set_signed_foo3(-1);
+ assert_eq!([0xFE, 0x01, 0x00], ab.0);
+
+ ab.set_signed_foo3(0);
+ ab.set_signed_foo4(-1);
+ assert_eq!([0xF0, 0xFF, 0x0F], ab.0);
+
+ ab.set_signed_foo4(0);
+ ab.set_u128(0xFFFF);
+ assert_eq!([0xF0, 0xFF, 0x0F], ab.0);
+}
+
+#[test]
+fn test_arraybitfield2() {
+ // Check that the macro can be called from a function.
+ bitfield! {
+ struct ArrayBitfield2([u16]);
+ impl Debug;
+ u32;
+ foo1, set_foo1: 0, 0;
+ foo2, set_foo2: 7, 0;
+ foo3, set_foo3: 8, 1;
+ foo4, set_foo4: 20, 4;
+ }
+ let mut ab = ArrayBitfield2([0; 2]);
+
+ assert_eq!(0, ab.foo1());
+ assert_eq!(0, ab.foo2());
+ assert_eq!(0, ab.foo3());
+ assert_eq!(0, ab.foo4());
+
+ ab.set_foo1(1);
+ assert_eq!([1, 0], ab.0);
+ assert_eq!(1, ab.foo1());
+ assert_eq!(1, ab.foo2());
+ assert_eq!(0, ab.foo3());
+ assert_eq!(0, ab.foo4());
+
+ ab.set_foo1(0);
+ ab.set_foo2(0xFF);
+ assert_eq!([0xFF, 0], ab.0);
+ assert_eq!(1, ab.foo1());
+ assert_eq!(0xFF, ab.foo2());
+ assert_eq!(0x7F, ab.foo3());
+ assert_eq!(0x0F, ab.foo4());
+
+ ab.set_foo2(0);
+ ab.set_foo3(0xFF);
+ assert_eq!([0x1FE, 0x0], ab.0);
+ assert_eq!(0, ab.foo1());
+ assert_eq!(0xFE, ab.foo2());
+ assert_eq!(0xFF, ab.foo3());
+ assert_eq!(0x1F, ab.foo4());
+
+ ab.set_foo3(0);
+ ab.set_foo4(0xFFFF);
+ assert_eq!([0xFFF0, 0xF], ab.0);
+ assert_eq!(0, ab.foo1());
+ assert_eq!(0xF0, ab.foo2());
+ assert_eq!(0xF8, ab.foo3());
+ assert_eq!(0xFFFF, ab.foo4());
+}
+
+bitfield! {
+ struct ArrayBitfieldMsb0(MSB0 [u8]);
+ impl Debug;
+ u32;
+ foo1, set_foo1: 0, 0;
+ foo2, set_foo2: 7, 0;
+ foo3, set_foo3: 8, 1;
+ foo4, set_foo4: 19, 4;
+ i32;
+ signed_foo1, set_signed_foo1: 0, 0;
+ signed_foo2, set_signed_foo2: 7, 0;
+ signed_foo3, set_signed_foo3: 8, 1;
+ signed_foo4, set_signed_foo4: 19, 4;
+}
+
+#[test]
+fn test_arraybitfield_msb0() {
+ let mut ab = ArrayBitfieldMsb0([0; 3]);
+
+ assert_eq!(0, ab.foo1());
+ assert_eq!(0, ab.foo2());
+ assert_eq!(0, ab.foo3());
+ assert_eq!(0, ab.foo4());
+ assert_eq!(0, ab.signed_foo1());
+ assert_eq!(0, ab.signed_foo2());
+ assert_eq!(0, ab.signed_foo3());
+ assert_eq!(0, ab.signed_foo4());
+
+ ab.set_foo1(1);
+ assert_eq!([0b1000_0000, 0, 0], ab.0);
+ assert_eq!(1, ab.foo1());
+ assert_eq!(0b1000_0000, ab.foo2());
+ assert_eq!(0, ab.foo3());
+ assert_eq!(0, ab.foo4());
+ assert_eq!(-1, ab.signed_foo1());
+ assert_eq!(-128, ab.signed_foo2());
+ assert_eq!(0, ab.signed_foo3());
+ assert_eq!(0, ab.signed_foo4());
+
+ ab.set_foo1(0);
+ ab.set_foo2(0xFF);
+ assert_eq!([0b1111_1111, 0, 0], ab.0);
+ assert_eq!(1, ab.foo1());
+ assert_eq!(0b1111_1111, ab.foo2());
+ assert_eq!(0b1111_1110, ab.foo3());
+ assert_eq!(0b1111_0000_0000_0000, ab.foo4());
+ assert_eq!(-1, ab.signed_foo1());
+ assert_eq!(-1, ab.signed_foo2());
+ assert_eq!(-2, ab.signed_foo3());
+ assert_eq!(-4096, ab.signed_foo4());
+
+ ab.set_foo2(0);
+ ab.set_foo3(0xFF);
+ assert_eq!([0b0111_1111, 0b1000_0000, 0], ab.0);
+ assert_eq!(0, ab.foo1());
+ assert_eq!(0b0111_1111, ab.foo2());
+ assert_eq!(0xFF, ab.foo3());
+ assert_eq!(0b1111_1000_0000_0000, ab.foo4());
+ assert_eq!(0, ab.signed_foo1());
+ assert_eq!(127, ab.signed_foo2());
+ assert_eq!(-1, ab.signed_foo3());
+ assert_eq!(-2048, ab.signed_foo4());
+
+ ab.set_foo3(0);
+ ab.set_foo4(0xFFFF);
+ assert_eq!([0x0F, 0xFF, 0xF0], ab.0);
+ assert_eq!(0, ab.foo1());
+ assert_eq!(0x0F, ab.foo2());
+ assert_eq!(0b0001_1111, ab.foo3());
+ assert_eq!(0xFFFF, ab.foo4());
+ assert_eq!(0, ab.signed_foo1());
+ assert_eq!(0x0F, ab.signed_foo2());
+ assert_eq!(0b0001_1111, ab.signed_foo3());
+ assert_eq!(-1, ab.signed_foo4());
+
+ ab.set_foo4(0x0);
+ ab.set_signed_foo1(0);
+ assert_eq!([0x00, 0x00, 0x00], ab.0);
+
+ ab.set_signed_foo1(-1);
+ assert_eq!([0b1000_0000, 0x00, 0x00], ab.0);
+
+ ab.set_signed_foo1(0);
+ ab.set_signed_foo2(127);
+ assert_eq!([0x7F, 0x00, 0x00], ab.0);
+
+ ab.set_signed_foo2(-128);
+ assert_eq!([0x80, 0x00, 0x00], ab.0);
+
+ ab.set_signed_foo2(1);
+ assert_eq!([0x01, 0x00, 0x00], ab.0);
+
+ ab.set_signed_foo2(-1);
+ assert_eq!([0xFF, 0x00, 0x00], ab.0);
+
+ ab.set_signed_foo2(0);
+ ab.set_signed_foo3(127);
+ assert_eq!([0b0011_1111, 0b1000_0000, 0], ab.0);
+
+ ab.set_signed_foo3(-1);
+ assert_eq!([0b0111_1111, 0b1000_0000, 0], ab.0);
+
+ ab.set_signed_foo3(0);
+ ab.set_signed_foo4(-1);
+ assert_eq!([0x0F, 0xFF, 0xF0], ab.0);
+}
+
+mod some_module {
+ bitfield! {
+ pub struct PubBitFieldInAModule(u32);
+ impl Debug;
+ /// Attribute works on pub fields
+ pub field1, set_field1: 1;
+ pub field2, _: 1;
+ pub _, set_field3: 1;
+ pub u16, field4, set_field4: 1;
+ /// Check if multiple attributes are applied
+ #[cfg(not(test))]
+ pub u16, field4, set_field4: 1;
+ pub u16, _, set_field5: 1;
+ pub u16, field6, _: 1;
+ pub field7, set_field7: 1;
+ pub field8, set_field8: 1, 1;
+ #[cfg(not(test))]
+ /// And make sure not only the last attribute is applied
+ pub field8, set_field8: 1, 1;
+ pub field9, set_field9: 1, 1, 1;
+ pub u32, field10, set_field10: 1;
+ pub u32, field11, set_field11: 1, 1;
+ pub u32, field12, set_field12: 1, 1, 1;
+ }
+
+}
+
+#[test]
+fn struct_can_be_public() {
+ let _ = some_module::PubBitFieldInAModule(0);
+}
+#[test]
+fn field_can_be_public() {
+ let mut a = some_module::PubBitFieldInAModule(0);
+ let _ = a.field1();
+ a.set_field1(true);
+ let _ = a.field2();
+ a.set_field3(true);
+ let _ = a.field4();
+ a.set_field4(true);
+ a.set_field5(true);
+ let _ = a.field6();
+ let _ = a.field7();
+ a.set_field7(true);
+ let _ = a.field8();
+ a.set_field8(0);
+ let _ = a.field9(0);
+ a.set_field9(0, 0);
+ let _ = a.field10();
+ a.set_field10(true);
+ let _ = a.field11();
+ a.set_field11(0);
+ let _ = a.field12(0);
+ a.set_field12(0, 0);
+}
+
+// Everything in this module is to make sure that it's possible to specify types
+// in most of the possible ways.
+#[allow(dead_code)]
+mod test_types {
+ use bitfield::BitRange;
+ use std;
+ use std::sync::atomic::{self, AtomicUsize};
+
+ struct Foo;
+
+ impl Foo {
+ bitfield_fields! {
+ std::sync::atomic::AtomicUsize, field1, set_field1: 0, 0;
+ std::sync::atomic::AtomicUsize;
+ field2, set_field2: 0, 0;
+ ::std::sync::atomic::AtomicUsize, field3, set_field3: 0, 0;
+ ::std::sync::atomic::AtomicUsize;
+ field4, set_field4: 0, 0;
+ atomic::AtomicUsize, field5, set_field5: 0, 0;
+ atomic::AtomicUsize;
+ field6, set_field6: 0, 0;
+ AtomicUsize, field7, set_field7: 0, 0;
+ AtomicUsize;
+ field8, set_field8: 0, 0;
+ Vec<std::sync::atomic::AtomicUsize>, field9, set_field9: 0, 0;
+ Vec<std::sync::atomic::AtomicUsize>;
+ field10, set_field10: 0, 0;
+ Vec<::std::sync::atomic::AtomicUsize>, field11, set_field11: 0, 0;
+ Vec<::std::sync::atomic::AtomicUsize>;
+ field12, set_field12: 0, 0;
+ Vec<atomic::AtomicUsize>, field13, set_field13: 0, 0;
+ Vec<atomic::AtomicUsize>;
+ field14, set_field14: 0, 0;
+ Vec<AtomicUsize>, field15, set_field15: 0, 0;
+ Vec<AtomicUsize>;
+ field16, set_field16: 0, 0;
+ &str, field17, set_field17: 0, 0;
+ &str;
+ field18, set_field18: 0, 0;
+ &'static str, field19, set_field19: 0, 0;
+ &'static str;
+ field20, set_field20: 0, 0;
+ }
+ }
+
+ impl BitRange<AtomicUsize> for Foo {
+ fn bit_range(&self, _msb: usize, _lsb: usize) -> AtomicUsize {
+ AtomicUsize::new(0)
+ }
+ fn set_bit_range(&mut self, _msb: usize, _lsb: usize, _value: AtomicUsize) {}
+ }
+
+ impl BitRange<Vec<AtomicUsize>> for Foo {
+ fn bit_range(&self, _msb: usize, _lsb: usize) -> Vec<AtomicUsize> {
+ vec![AtomicUsize::new(0)]
+ }
+ fn set_bit_range(&mut self, _msb: usize, _lsb: usize, _value: Vec<AtomicUsize>) {}
+ }
+
+ impl<'a> BitRange<&'a str> for Foo {
+ fn bit_range(&self, _msb: usize, _lsb: usize) -> &'a str {
+ ""
+ }
+ fn set_bit_range(&mut self, _msb: usize, _lsb: usize, _value: &'a str) {}
+ }
+
+ #[test]
+ fn test_field_type() {
+ let test = Foo;
+ let _: AtomicUsize = test.field1();
+ let _: AtomicUsize = test.field2();
+ let _: AtomicUsize = test.field3();
+ let _: AtomicUsize = test.field4();
+ let _: AtomicUsize = test.field5();
+ let _: AtomicUsize = test.field6();
+ let _: AtomicUsize = test.field7();
+ let _: AtomicUsize = test.field8();
+ let _: Vec<AtomicUsize> = test.field9();
+ let _: Vec<AtomicUsize> = test.field10();
+ let _: Vec<AtomicUsize> = test.field11();
+ let _: Vec<AtomicUsize> = test.field12();
+ let _: Vec<AtomicUsize> = test.field13();
+ let _: Vec<AtomicUsize> = test.field14();
+ let _: Vec<AtomicUsize> = test.field15();
+ let _: Vec<AtomicUsize> = test.field16();
+ let _: &str = test.field17();
+ let _: &str = test.field18();
+ let _: &'static str = test.field19();
+ let _: &'static str = test.field20();
+ }
+}
+
+#[allow(dead_code)]
+mod test_no_default_bitrange {
+ use bitfield::BitRange;
+ use std::fmt::Debug;
+ use std::fmt::Error;
+ use std::fmt::Formatter;
+ bitfield! {
+ #[derive(Eq, PartialEq)]
+ pub struct BitField1(u16);
+ no default BitRange;
+ impl Debug;
+ u8;
+ field1, set_field1: 10, 0;
+ pub field2, _ : 12, 3;
+ field3, set_field3: 2;
+ }
+
+ impl BitRange<u8> for BitField1 {
+ fn bit_range(&self, msb: usize, lsb: usize) -> u8 {
+ (msb + lsb) as u8
+ }
+ fn set_bit_range(&mut self, msb: usize, lsb: usize, value: u8) {
+ self.0 = msb as u16 + lsb as u16 + u16::from(value)
+ }
+ }
+
+ #[allow(unknown_lints)]
+ #[allow(identity_op)]
+ #[test]
+ fn custom_bitrange_implementation_is_used() {
+ let mut bf = BitField1(0);
+ assert_eq!(bf.field1(), 10 + 0);
+ assert_eq!(bf.field2(), 12 + 3);
+ assert_eq!(bf.field3(), true);
+ bf.set_field1(42);
+ assert_eq!(bf, BitField1(10 + 0 + 42));
+ }
+
+ bitfield! {
+ pub struct BitField2(u16);
+ no default BitRange;
+ u8;
+ field1, set_field1: 10, 0;
+ pub field2, _ : 12, 3;
+ field3, set_field3: 0;
+ }
+
+ impl BitRange<u8> for BitField2 {
+ fn bit_range(&self, _msb: usize, _lsb: usize) -> u8 {
+ 0
+ }
+ fn set_bit_range(&mut self, _msb: usize, _lsb: usize, _value: u8) {}
+ }
+
+ // Make sure Debug wasn't implemented by implementing it.
+ impl Debug for BitField2 {
+ fn fmt(&self, _: &mut Formatter) -> Result<(), Error> {
+ unimplemented!()
+ }
+ }
+
+ // Check that we can put `impl Debug` before `no default BitRange`
+ bitfield! {
+ pub struct BitField3(u16);
+ impl Debug;
+ no default BitRange;
+ u8;
+ field1, set_field1: 10, 0;
+ pub field2, _ : 12, 3;
+ field3, set_field3: 0;
+ }
+
+ impl BitRange<u8> for BitField3 {
+ fn bit_range(&self, _msb: usize, _lsb: usize) -> u8 {
+ 0
+ }
+ fn set_bit_range(&mut self, _msb: usize, _lsb: usize, _value: u8) {}
+ }
+
+ bitfield! {
+ #[derive(Eq, PartialEq)]
+ pub struct BitField4([u16]);
+ no default BitRange;
+ impl Debug;
+ u8;
+ field1, set_field1: 10, 0;
+ pub field2, _ : 12, 3;
+ field3, set_field3: 2;
+ }
+
+ impl<T> BitRange<u8> for BitField4<T> {
+ fn bit_range(&self, _msb: usize, _lsb: usize) -> u8 {
+ 0
+ }
+ fn set_bit_range(&mut self, _msb: usize, _lsb: usize, _value: u8) {}
+ }
+
+ bitfield! {
+ pub struct BitField5([u16]);
+ no default BitRange;
+ u8;
+ field1, set_field1: 10, 0;
+ pub field2, _ : 12, 3;
+ field3, set_field3: 0;
+ }
+
+ impl<T> BitRange<u8> for BitField5<T> {
+ fn bit_range(&self, _msb: usize, _lsb: usize) -> u8 {
+ 0
+ }
+ fn set_bit_range(&mut self, _msb: usize, _lsb: usize, _value: u8) {}
+ }
+
+ // Make sure Debug wasn't implemented by implementing it.
+ impl<T> Debug for BitField5<T> {
+ fn fmt(&self, _: &mut Formatter) -> Result<(), Error> {
+ unimplemented!()
+ }
+ }
+
+ // Check that we can put `impl Debug` before `no default BitRange`
+ bitfield! {
+ pub struct BitField6([u16]);
+ impl Debug;
+ no default BitRange;
+ u8;
+ field1, set_field1: 10, 0;
+ pub field2, _ : 12, 3;
+ field3, set_field3: 0;
+ }
+
+ impl<T> BitRange<u8> for BitField6<T> {
+ fn bit_range(&self, _msb: usize, _lsb: usize) -> u8 {
+ 0
+ }
+ fn set_bit_range(&mut self, _msb: usize, _lsb: usize, _value: u8) {}
+ }
+
+ bitfield! {
+ #[derive(Eq, PartialEq)]
+ pub struct BitField7(MSB0 [u16]);
+ no default BitRange;
+ impl Debug;
+ u8;
+ field1, set_field1: 10, 0;
+ pub field2, _ : 12, 3;
+ field3, set_field3: 2;
+ }
+
+ impl<T> BitRange<u8> for BitField7<T> {
+ fn bit_range(&self, _msb: usize, _lsb: usize) -> u8 {
+ 0
+ }
+ fn set_bit_range(&mut self, _msb: usize, _lsb: usize, _value: u8) {}
+ }
+
+ bitfield! {
+ pub struct BitField8(MSB0 [u16]);
+ no default BitRange;
+ u8;
+ field1, set_field1: 10, 0;
+ pub field2, _ : 12, 3;
+ field3, set_field3: 0;
+ }
+
+ impl<T> BitRange<u8> for BitField8<T> {
+ fn bit_range(&self, _msb: usize, _lsb: usize) -> u8 {
+ 0
+ }
+ fn set_bit_range(&mut self, _msb: usize, _lsb: usize, _value: u8) {}
+ }
+
+ // Make sure Debug wasn't implemented by implementing it.
+ impl<T> Debug for BitField8<T> {
+ fn fmt(&self, _: &mut Formatter) -> Result<(), Error> {
+ unimplemented!()
+ }
+ }
+
+ // Check that we can put `impl Debug` before `no default BitRange`
+ bitfield! {
+ pub struct BitField9([u16]);
+ impl Debug;
+ no default BitRange;
+ u8;
+ field1, set_field1: 10, 0;
+ pub field2, _ : 12, 3;
+ field3, set_field3: 0;
+ }
+
+ impl<T> BitRange<u8> for BitField9<T> {
+ fn bit_range(&self, _msb: usize, _lsb: usize) -> u8 {
+ 0
+ }
+ fn set_bit_range(&mut self, _msb: usize, _lsb: usize, _value: u8) {}
+ }
+
+ #[test]
+ fn test_debug_is_implemented_with_no_default_bitrange() {
+ format!("{:?}", BitField1(0));
+ format!("{:?}", BitField3(0));
+ format!("{:?}", BitField4([0; 1]));
+ format!("{:?}", BitField6([0; 1]));
+ format!("{:?}", BitField7([0; 1]));
+ format!("{:?}", BitField9([0; 1]));
+ }
+}
diff --git a/src/rust/vendor/cortex-m/.cargo-checksum.json b/src/rust/vendor/cortex-m/.cargo-checksum.json
new file mode 100644
index 000000000..373e91981
--- /dev/null
+++ b/src/rust/vendor/cortex-m/.cargo-checksum.json
@@ -0,0 +1 @@
+{"files":{"CHANGELOG.md":"0746746ed1d76c49bcb397844b176e7bb14511d56df37a66dc38cea98bcff9e6","CODE_OF_CONDUCT.md":"3746a267d008534cec6d7242f09ca6284c5d37e48306531b878461e6c4d33fee","Cargo.toml":"e50cf067b659be75f8f927b342d3446243bdcd86a4b10ea4daff819f6d9ddebd","LICENSE-APACHE":"a60eea817514531668d7e00765731449fe14d059d3249e0bc93b36de45f759f2","LICENSE-MIT":"0a633406531735659d570e7903394ec52f7707195a3c4af10279f06eec7acc76","README.md":"2e2e489b55b8506d174782d8587db6708162f82f1fb4c3272746d079e5972837","asm-toolchain":"5479b28f90ce42bbefc9fed083182e0de8eafc1b69f92fbfc5221d766862682f","asm/inline.rs":"6e9605c595d6c8deddcd2bcb1f9f4d270a10492823e82955888be8ce3b553f3b","asm/lib.rs":"8d7eb1aee66e9d8ff8b5ff5d0a8687d6d255d3403b9c238d7b5620a01602d9b5","bin/thumbv6m-none-eabi-lto.a":"4d13ae583b33613b476b8d2dbf435d363ad25a2135388eb5257b132868461535","bin/thumbv6m-none-eabi.a":"b5002ce424d6baf97639ee188f12a9c7152b1dbc90a1b4393556049f8475be54","bin/thumbv7em-none-eabi-lto.a":"a3dd53e4e7a874f9d7d27acb273eaa17ae72222205e939a7dccc4c02ebb7a3ba","bin/thumbv7em-none-eabi.a":"2ae0edab580e3d1c95bedb4bc9a8a646e2b3ea9fa5bf2768d859fb5a30b8741a","bin/thumbv7em-none-eabihf-lto.a":"71d4bd5af31d3933969aee825400ac105f3d3b826abfb968bfa2c8f3658ecd50","bin/thumbv7em-none-eabihf.a":"0ebff8ad140fe35494fb51ed4184ce19ffab41e9ac62af6a5c5c7b0ea26b9e53","bin/thumbv7m-none-eabi-lto.a":"da3ed6ec4b9fc8803336e280cd71524911fee459e2dac88e29d0e82d043b90d7","bin/thumbv7m-none-eabi.a":"3de841b242e33066555fa06e127f01008711ec08728a2f75d525731d04ae442e","bin/thumbv8m.base-none-eabi-lto.a":"73189749f0c42de0990e76ce531621e17370c0dd00940297e6044a622302df38","bin/thumbv8m.base-none-eabi.a":"b2ff0d88db4aa539ead58c1fb37ef3d171d08462b979a8b7c897ef4148119572","bin/thumbv8m.main-none-eabi-lto.a":"12b43c063fcdb33fb57335c5fc2a0c62e2a427a3c6e8727fb2a4437615e2c07d","bin/thumbv8m.main-none-eabi.a":"9217dcbbd3ef909caa39a5ee75684127b4cf4f51447e41027164462e42a33018","bin/thumbv8m.main-none-eabihf-lto.a":"18aae9047b017ad4003e2311d74b08
88701b84333ccf4379d09e6890b1799174","bin/thumbv8m.main-none-eabihf.a":"39fdf845895f31e473fc863fbb21f3590237a571dbfb97c66c7190475a615efb","build.rs":"fe2c0bd4cd56adabf5d3912a9810546e6a32708f165d9ad2dcfe0407df092516","src/asm.rs":"079aaf47d696aa1995ddcf5b38f59ed22e8a48967c261d14676d1757308b534a","src/call_asm.rs":"024486575710dc864613c3c2917887ec9527ae2b66cde183b3233f663a9e790b","src/cmse.rs":"9e868ec9ac0a6296b1e9d943478f6343a3f43979c0f068780adf114f52643707","src/critical_section.rs":"2013adbc5289d71c9f5ccb8353fbc47a2df7f7003180be65b6b07b8c948ca684","src/delay.rs":"27f2adeaf9dc924ac16f1fea9da510469e5c4961d9223aab253d95ee3e0af191","src/interrupt.rs":"41d6338784b86d4c4837e3673c5eff26dd35c851962e741c00f7c933a7e8d218","src/itm.rs":"b50d9d6cad2ec4c841d0f3008df93edb378f8038696a9f096763a488d14df53c","src/lib.rs":"079c614ec9e3ba81bb5ab9638f7f465551805555ab41aab04e62f9015d85fe4a","src/macros.rs":"22d8af93ebe59cd9f6e468cf78c8a342c4444d83f263043d3044358811ffec41","src/peripheral/ac.rs":"b6b228637b09f2fa3ac5344a0759cd4fb852782050f252c2c9bd670660b9a768","src/peripheral/cbp.rs":"9a42efc512ac43278a5efd2fb650043a5672be031e7121c748b41ead97ed4e39","src/peripheral/cpuid.rs":"94285da95a2e6a00f09b049f59c9aede6683dab4be4b218d8bd0cd4183e7a746","src/peripheral/dcb.rs":"3c630cc575c93791f1553607565fdc76d91bb990351deedf9de3efa9184eceb0","src/peripheral/dwt.rs":"780a1094958abc2c8c7d8d196c07637eed619c72d010fcf29235b95c0b1abce3","src/peripheral/fpb.rs":"00eba6fae97614a503e9545f95158083f9e02b6b400fab647be1c0e4747293d7","src/peripheral/fpu.rs":"3bb8d5af0cdce5b25a4f9826fa6bddd75c58f4d61745afd9ed49a5d0e849c2f9","src/peripheral/icb.rs":"fcc13f2652ae39035c4b0f04468d19b3e94bfe3b05832eabc60638dcca9842ae","src/peripheral/itm.rs":"f8c8587ffa60f76e4f4fa293322386727ecf491f1b36672258b7e743b6e37a26","src/peripheral/mod.rs":"900f099bbcae804cde74bf8b5bcb9f84e9dde3018f5efe1098768c5b64021b0b","src/peripheral/mpu.rs":"20d36443ae9928cbf9a7d546298b5ce784c3cae3f2f236a8935b227f9fdbdb06","src/peripheral/nvic.rs":"ae98894
95edb21d6a4f09ced401b4b21cdfb2c1ac86e593156e462cc42b5794d","src/peripheral/sau.rs":"32fb6816a3f9c81203b23b1803c8f499059a0460e2fa082e44bff3be9936f467","src/peripheral/scb.rs":"fd56eba0edace3b9278f5b26495eedd42926ed7a7dde2dad41273d26f743c8ff","src/peripheral/syst.rs":"267ac1f7e83cfa6fec1c0d651fa6babcfac90ea5254331b49f305ad0b2034711","src/peripheral/test.rs":"0f0a44af38bc78a4d3bf8995d6c79b1ee4fcb3ffb797db1c41a09d361a075828","src/peripheral/tpiu.rs":"88a86ba957ab94a38b08a5486b9a2c6c20e1b116a92f17666f4ab9b2207e63d1","src/prelude.rs":"cb4e3b91223b64a0a1c1301e6b38318f3ba031ba776ad11983444607735a8175","src/register/apsr.rs":"27ff0eda2861162b6103bb9705c37a877811bc2320785a9ab6d1ea566fa0b2cb","src/register/basepri.rs":"283b98d9befce7a1b623599cf5cfcd889425c69af2d136eef3ca7775c6809e9e","src/register/basepri_max.rs":"849f9bb522d6ea348f646930943481506c80be57613abb64f34319f0f58b6f21","src/register/control.rs":"5a666c93a82c1dea29094c7021ebe0e2a5b7dc67dd7aa7f7c923a25c6dabf992","src/register/faultmask.rs":"6a0d06a48790a25836e2c770ddc136ae5c32ce5290e3865526c8a749bf576d5a","src/register/fpscr.rs":"5d09ecb377a0231876570e3a9d95e3e013cc9ad58a338062a7000b66030f0de4","src/register/lr.rs":"02769270d9b096ccf2d9cf9101e51db142c747536e676fc04b64d4494b0a41b5","src/register/mod.rs":"8d452276b5f0751de0be7fa93ae809f4c08ab2b3db5b03210c7a9541716f1c72","src/register/msp.rs":"59a8294f63272ef1bc8b2691de2d7be51171728602733500b9e28bc285df6148","src/register/msplim.rs":"43c7648180cda21da11df04c55ccd3c5d21ec3ae08b5fa5174a0e9239ee31f61","src/register/pc.rs":"2797c309ee2a5780e8849a955dc3abd0063b442d92c878a51ca83543026a678b","src/register/primask.rs":"cfeb772e1bc6fc1ffe8192a201e12d491bdb570881d0575b5a6727876d11a766","src/register/psp.rs":"2898e262cf821b6f0d59633012e06cef0de0ee4aef0dc603000ea78f877b8de0","src/register/psplim.rs":"d4cabb7d7cfca9338e82c51658ae409e501e2fd4ba15e0162eb57e2f55649d67","triagebot.toml":"a135e10c777cd13459559bdf74fb704c1379af7c9b0f70bc49fa6f5a837daa81"},"package":"8ec610d8f49840a5b376c696
63b6369e71f4b34484b9b2eb29fb918d92516cb9"}
\ No newline at end of file
diff --git a/src/rust/vendor/cortex-m/CHANGELOG.md b/src/rust/vendor/cortex-m/CHANGELOG.md
new file mode 100644
index 000000000..ca9609ce9
--- /dev/null
+++ b/src/rust/vendor/cortex-m/CHANGELOG.md
@@ -0,0 +1,807 @@
+# Change Log
+
+All notable changes to this project will be documented in this file.
+
+The format is based on [Keep a Changelog](http://keepachangelog.com/)
+and this project adheres to [Semantic Versioning](http://semver.org/).
+
+## [Unreleased]
+
+## [v0.7.7] - 2023-01-03
+
+- Add missing documentation for `critical-section-single-core` feature added
+ in v0.7.6.
+
+## [v0.7.6] - 2022-08-12
+
+- Added `critical-section-single-core` feature which provides an implementation for the `critical-section` crate for single-core systems, based on disabling all interrupts. (#448)
+
+## [v0.7.5] - 2022-05-15
+
+### Deprecated
+- the `ptr()` function on all peripheral register blocks in favor of
+ the associated constant `PTR` (#386).
+
+### Changed
+
+- The `inline-asm` feature no longer requires a nightly Rust compiler, but
+ does require Rust 1.59 or above.
+
+### Fixed
+- Fixed `singleton!()` statics sometimes ending up in `.data` instead of `.bss` (#364, #380).
+ (Backported from upcoming 0.8 release).
+
+## [v0.7.4] - 2021-12-31
+
+### Added
+
+- Added support for additional DWT counters (#349)
+ - CPI counter
+ - Exception overhead counter
+ - LSU counter
+ - Folded-instruction counter
+- Added `DWT.set_cycle_count` (#347).
+- Added support for the Cortex-M7 TCM and cache access control registers.
+ There is a feature `cm7` to enable access to these (#352).
+- Add derives for serde, Hash, and PartialOrd to VectActive behind feature
+ gates for host-platform use (#363).
+- Support host platforms besides x86_64 (#369).
+- Added `delay::Delay::with_source`, a constructor that lets you specify
+ the SysTick clock source (#374).
+
+### Fixed
+
+- Fix incorrect AIRCR PRIGROUP mask (#338, #339).
+- Fix nightly users of inline-asm breaking now that the asm macro is removed
+ from the prelude (#372).
+
+### Deprecated
+
+- `DWT::get_cycle_count` has been deprecated in favor of `DWT::cycle_count`.
+ This change was made for consistency with the [C-GETTER] convention. (#349)
+
+[C-GETTER]: https://rust-lang.github.io/api-guidelines/naming.html#c-getter
+
+## [v0.7.3] - 2021-07-03
+
+### Fixed
+
+- Fixed compilation for native targets on non-x86 host systems (#336, #337).
+
+### Added
+
+- The `Delay` struct now offers direct `delay_us()` and `delay_ms()` methods
+ without having to go through the embedded-hal traits (#344).
+
+## [v0.7.2] - 2021-03-07
+
+### Fixed
+
+- Fixed a bug where calling `asm::delay()` with an argument of 0 or 1 would
+ underflow, leading to a very long delay.
+
+## [v0.7.1] - 2021-01-25
+
+### Added
+
+- New assembly methods `asm::semihosting_syscall`, `asm::bootstrap`, and
+ `asm::bootload`.
+
+### Deprecated
+
+- `msp::write` has been deprecated in favor of `asm::bootstrap`. It was not
+ possible to use `msp::write` without causing Undefined Behavior, so all
+ existing users are encouraged to migrate.
+
+### Fixed
+
+- Fixed a bug in `asm::delay` which could lead to incorrect codegen and
+ infinite loops.
+- Improved timing guarantees of `asm::delay` on multiple-issue CPU cores.
+- Additional compiler fences added to inline assembly where necessary.
+- Fixed DWARF debug information in pre-built assembly binaries.
+
+## [v0.7.0] - 2020-11-09
+
+### Added
+
+- New `InterruptNumber` trait is now required on interrupt arguments to the
+ various NVIC functions, replacing the previous use of `Nr` from bare-metal.
+ For backwards compatibility, `InterruptNumber` is implemented for types
+ which are `Nr + Copy`, but this will be removed in a future version.
+- Associated const `PTR` is introduced to Core Peripherals to
+ eventually replace the existing `ptr()` API.
+- A delay driver based on SysTick.
+- You can now use LTO to inline assembly calls, even on stable Rust.
+ See the `asm/lib.rs` documentation for more details.
+- Initial ARMv8-M MPU support
+- ICTR and ACTLR registers added
+- Support for the Security Attribution Unit on ARMv8-M
+
+### Changed
+
+- Previously, asm calls without the `inline-asm` feature enabled used pre-built
+ objects which were built by a GCC compiler, while `inline-asm` enabled the
+ use of `llvm_asm!` calls. The asm system has been replaced with a new
+ technique which generates Rust static libs for stable calling, and uses the
+ new `asm!` macro with `inline-asm`. See the `asm/lib.rs` documentation for
+ more details.
+- Cache enabling now uses an assembly sequence to ensure correctness.
+- `ptr()` methods are now `const`.
+
+### Breaking Changes
+- `SCB::invalidate_dcache` and related methods are now unsafe, see #188
+- `Peripherals` struct is now non-exhaustive, so fields may be added in future
+ non-breaking changes
+- Removed `aligned` dependency
+- Removed const-fn feature
+- Removed previously deprecated APIs
+ - `NVIC::clear_pending`
+ - `NVIC::disable`
+ - `NVIC::enable`
+ - `NVIC::set_pending`
+ - `SCB::system_reset`
+- Removed `basepri`, `basepri_max`, and `faultmask` registers from thumbv8m.base
+
+## [v0.6.7] - 2021-01-26
+
+### Fixed
+
+- Fixed missing `peripheral::itm` reexport.
+
+## [v0.6.6] - 2021-01-26
+
+### Fixed
+
+- Fixed missing ITM reexport on `thumbv8m.base` targets.
+
+## [v0.6.5] - 2021-01-24
+
+### Changed
+
+- This release is forwards-compatible with cortex-m 0.7, and depends on and
+ re-exports many types from that version. Both 0.6.5 and 0.7 may co-exist
+ in a build.
+
+## [v0.6.4] - 2020-10-26
+
+### Changed
+
+- MSRV bumped to 1.36.0 due to `aligned` dependency.
+
+### Fixed
+
+- Drop AT&T syntax from inline asm, which was causing miscompilations with newer versions of the compiler.
+
+## [v0.6.3] - 2020-07-20
+
+### Added
+
+- Initial Cortex-M Security Extension support for armv8m
+- `UDF` intrinsic
+- Methods to enable/disable exceptions in SCB
+
+### Fixed
+
+- Fix bug in `asm::delay` not updating status clobber flags
+- Swapped to `llvm_asm!` to support inline assembly on new nightlies
+- Our precompiled assembly routines have additional debug information
+- ITM `is_fifo_ready` improved to support armv8
+- Cache enabling moved to pre-built assembly routines to prevent possible
+ undefined behaviour
+
+## [v0.6.2] - 2020-01-12
+
+### Added
+
+- Allow writing to the `CONTROL` register via `register::control::write`
+- Add `DWT::unlock()` for a safe way to unlock the DWT
+
+### Deprecation
+
+- Deprecated incorrectly included registers (`BASEPRI`, `BASEPRI_MAX`, `FAULTMASK`) on `thumbv8m.base`
+
+## [v0.6.1] - 2019-08-21
+
+### Fixed
+
+- Better `Debug`, `PartialEq` and `Eq` for more types
+- The `delay` function is fixed for Cortex-M0 MCUs
+
+### Added
+
+- Static version of `system_reset` as `system_reset2`
+- Now uses `links = "cortex-m"` to not link multiple versions of the crate
+- Masking of the NVIC is added `NVIC::{mask,unmask}`
+- Now Rust 2018 edition
+- `{M,P}SPLIM` access is now possible on ARMv8-M
+
+### Deprecation
+
+- `system_reset` is deprecated in favor of `sys_reset`
+
+## [v0.6.0] - 2019-03-12
+
+### Fixed
+
+- Fix numerous registers which were incorrectly included for thumbv6
+- `SHCRS` renamed to `SHCSR` in `SCB`
+
+### Added
+
+- Support for ARMv8-M (`thumbv8m.base` and `thumbv8m.main`)
+
+- `SCB` gained methods to set and clear `SLEEPONEXIT` bit
+
+- `NVIC` gained `STIR` register and methods to request an interrupt
+
+- `DCB` gained methods to check if debugger is attached
+
+## [v0.5.8] - 2018-10-27
+
+### Added
+
+- `SCB` gained methods to set, clear and check the pending state of the PendSV
+ exception.
+
+- `SCB` gained methods to set, clear and check the pending state of the SysTick
+ exception.
+
+- `SCB` gained methods to set and get the priority of system handlers like
+ SVCall and SysTick.
+
+- `NVIC` gained *static* methods, `pend` and `unpend`, to set and clear the
+ pending state of interrupts.
+
+### Changed
+
+- The `NVIC.{clear,set}_pending` methods have been deprecated in favor of
+ `NVIC::{unpend,pend}`.
+
+## [v0.5.7] - 2018-09-06
+
+### Added
+
+- `DCB::enable_trace()` and `DCB::disable_trace()`
+
+### Changed
+
+- `iprintln!` no longer depends on `iprint!`. `cortex_m::iprintln!` will work
+ even if `cortex_m::iprint` has not been imported.
+
+## [v0.5.6] - 2018-08-27
+
+### Fixed
+
+- Removed duplicated symbols from binary blobs
+
+- The check-blobs.sh script
+
+## [v0.5.5] - 2018-08-27 - YANKED
+
+### Changed
+
+- This crate no longer depends on `arm-none-eabi-gcc`.
+
+## [v0.5.4] - 2018-08-11
+
+### Added
+
+- A method to trigger a system reset. See `SCB.system_reset`.
+
+### Fixed
+
+- Made the VTOR register (see peripheral::SCB) available on `thumbv6m-none-eabi`. This register is
+ present on Cortex-M0+, but not on Cortex-M0.
+
+- Linking with LLD by marking all external assembly functions as `.thumb_func`. See
+ https://bugs.llvm.org/show_bug.cgi?id=38435 for details.
+
+## [v0.5.3] - 2018-08-02
+
+### Fixed
+
+- Don't assemble basepri*.s and faultmask.s for ARMv6-M. This fixes the build when using `clang` as
+ the assembler.
+
+## [v0.5.2] - 2018-05-18
+
+### Added
+
+- `SCB` gained a pair of safe methods to set / clear the DEEPSLEEP bit.
+
+- `asm::delay`, delay loops whose execution time doesn't depend on the optimization level.
+
+## [v0.5.1] - 2018-05-13
+
+### Added
+
+- An opt-in `"const-fn"` feature that makes `Mutex.new` constructor into a `const fn`. This feature
+ requires a nightly toolchain.
+
+## [v0.5.0] - 2018-05-11
+
+### Added
+
+- `DebugMonitor` and `SecureFault` variants to the `Exception` enumeration.
+
+- An optional `"inline-asm"` feature
+
+### Changed
+
+- [breaking-change] This crate now requires `arm-none-eabi-gcc` to be installed and available in
+ `$PATH` when built with the `"inline-asm"` feature disabled (which is disabled by default).
+
+- [breaking-change] The `register::{apsr,lr,pc}` modules are now behind the `"inline-asm"` feature.
+
+- [breaking-change] Some variants of the `Exception` enumeration are no longer available on
+ `thumbv6m-none-eabi`. See API docs for details.
+
+- [breaking-change] Several of the variants of the `Exception` enumeration have been renamed to
+ match the CMSIS specification.
+
+- [breaking-change] fixed typo in `shcrs` field of `scb::RegisterBlock`; it was previously named
+ `shpcrs`.
+
+- [breaking-change] removed several fields from `scb::RegisterBlock` on ARMv6-M. These registers are
+ not available on that sub-architecture.
+
+- [breaking-change] changed the type of `scb::RegisterBlock.shpr` from `RW<u8>` to `RW<u32>` on
+ ARMv6-M. These registers are word accessible only on that sub-architecture.
+
+- [breaking-change] renamed the `mmar` field of `scb::RegisterBlock` to `mmfar` to match the CMSIS
+ name.
+
+- [breaking-change] removed the `iabr` field from `scb::RegisterBlock` on ARMv6-M. This register is
+ not available on that sub-architecture.
+
+- [breaking-change] removed several fields from `cpuid::RegisterBlock` on ARMv6-M. These registers
+ are not available on that sub-architecture.
+
+- [breaking-change] The `Mutex.new` constructor is not a `const fn` by default. To make it a `const
+ fn` you have to opt into the `"const-fn"` feature, which was added in v0.5.1, and switch to a
+ nightly compiler.
+
+### Removed
+
+- [breaking-change] The `exception` module has been removed. A replacement for `Exception::active`
+ can be found in `SCB::vect_active`. A modified version `exception::Exception` can be found in the
+ `peripheral::scb` module.
+
+## [v0.4.3] - 2018-01-25
+
+### Changed
+
+- The initial value of a `singleton!` no longer needs to be evaluable in const context; it can now
+ be a value computed at runtime, or even a capture of some other local variable.
+
+## [v0.4.2] - 2018-01-17
+
+### Fixed
+
+- Added a missing `Send` implementation to all the peripherals.
+
+## [v0.4.1] - 2018-01-16
+
+### Changed
+
+- `peripheral::Peripherals` is now re-exported at the root of the crate.
+
+## [v0.4.0] - 2018-01-15
+
+### Added
+
+- Formatter and Flush Control register (FFCR) accessor to the TPIU register block.
+
+- A `singleton!` macro that creates mutable reference to a statically allocated variable.
+
+- A Cargo feature, `cm7-r0p1`, to work around a silicon erratum that affects writes to BASEPRI on
+ Cortex-M7 r0p1 devices.
+
+### Changed
+
+- [breaking-change] All peripherals are now exposed as scoped singletons and they need to be `take`n
+ into scope to become accessible.
+
+- [breaking-change] The signatures of methods exposed by peripheral proxies have changed to
+ better match the new scoped singletons semantics.
+
+- All the thin wrappers around assembly instructions now panic when executed on non-ARM devices.
+
+### Removed
+
+- [breaking-change] APIs specific to ARMv7-M (`peripheral::{cbp, fpb, fpu, itm, tpiu}`, `itm`) when
+ compiling for `thumbv6m-none-eabi`.
+
+## [v0.3.1] - 2017-07-20
+
+### Changed
+
+- `{basepri,basepri_max}::write` are now compiler barriers for the same reason
+ that `interrupt::{disable,enable}` are: they are used to create critical
+ sections.
+
+## [v0.3.0] - 2017-07-07
+
+### Changed
+
+- [breaking-change] Renamed `StackedRegisters` to `ExceptionFrame` to better
+ reflect the ARM documentation.
+
+- [breaking-change] Renamed the variants of `Exception` to better match the
+ ARM documentation.
+
+- [breaking-change] Renamed `Exception::current` to `Exception::active` and
+ changed the signature to return `None` when no exception is being serviced.
+
+- Moved bits not specific to the Cortex-M architecture into the [`bare-metal`]
+ crate with the goal of sharing code between this crate and crates tailored for
+ other (microcontroller) architectures.
+
+[`bare-metal`]: https://crates.io/crates/bare-metal
+
+### Removed
+
+- [breaking-change] The `ctxt` module along with the exception "tokens" in the
+ `exception` module. The `cortex-m-rt` crate v0.3.0 provides a more ergonomic
+ mechanism to add state to interrupts / exceptions; replace your uses of
+ `Local` with that.
+
+- [breaking-change] `default_handler`, `DEFAULT_HANDLERS` and `Handlers` from
+ the `exception` module as well as `Reserved` from the root of the crate.
+ `cortex-m-rt` v0.3.0 provides a mechanism to override exceptions and the
+ default exception handler. Change your use of these `Handlers` and others to
+ that.
+
+### Fixed
+
+- `interrupt::{enable,disable}` are now compiler barriers. The compiler should
+ not reorder code around these function calls for memory safety; that is the
+ case now.
+
+## [v0.2.11] - 2017-06-16
+
+### Added
+
+- An API to maintain the different caches (DCache, ICache) on Cortex M7 devices.
+
+### Fixed
+
+- the definition of the `ehprint!` macro.
+- the implementation of the FPU API.
+
+## [v0.2.10] - 2017-06-05
+
+### Added
+
+- Functions for the instructions DMB, ISB and DSB
+
+### Changed
+
+- All the functions in the `asm` module are now `inline(always)`
+
+## [v0.2.9] - 2017-05-30
+
+### Fixed
+
+- A bug in `itm::write_all` where it would ignore the length of the buffer and
+ serialize contents that come after the buffer.
+
+## [v0.2.8] - 2017-05-30 - YANKED
+
+### Added
+
+- An `itm::write_aligned` function to write 4 byte aligned buffers to an ITM
+ port. This function is faster than `itm::write_all` for small buffers but
+ requires the buffer to be aligned.
+
+## [v0.2.7] - 2017-05-23
+
+### Added
+
+- `Dwt.enable_cycle_counter`
+
+## [v0.2.6] - 2017-05-08
+
+### Fixed
+
+- [breaking-change]. MEMORY UNSAFETY. `Mutex` could be used as a channel to send
+ interrupt tokens from one interrupt to another, thus breaking the context `Local`
+ abstraction. See reproduction case below. This has been fixed by making
+ `Mutex` `Sync` only if the protected data is `Send`.
+
+``` rust
+#![feature(const_fn)]
+#![feature(used)]
+#![no_std]
+
+use core::cell::RefCell;
+
+use cortex_m::ctxt::Local;
+use cortex_m::interrupt::Mutex;
+use stm32f30x::interrupt::{self, Exti0, Exti1};
+
+fn main() {
+ // ..
+
+ // trigger exti0
+ // then trigger exti0 again
+}
+
+static CHANNEL: Mutex<RefCell<Option<Exti0>>> = Mutex::new(RefCell::new(None));
+// Supposedly task *local* data
+static LOCAL: Local<i32, Exti0> = Local::new(0);
+
+extern "C" fn exti0(mut ctxt: Exti0) {
+ static FIRST: Local<bool, Exti0> = Local::new(true);
+
+ let first = *FIRST.borrow(&ctxt);
+
+ // toggle
+ if first {
+ *FIRST.borrow_mut(&mut ctxt) = false;
+ }
+
+ if first {
+ cortex_m::interrupt::free(
+ |cs| {
+ let channel = CHANNEL.borrow(cs);
+
+ // BAD: transfer interrupt token to another interrupt
+ *channel.borrow_mut() = Some(ctxt);
+ },
+ );
+
+ return;
+ }
+ let _local = LOCAL.borrow_mut(&mut ctxt);
+
+ // ..
+
+ // trigger exti1 here
+
+ // ..
+
+ // `LOCAL` mutably borrowed up to this point
+}
+
+extern "C" fn exti1(_ctxt: Exti1) {
+ cortex_m::interrupt::free(|cs| {
+ let channel = CHANNEL.borrow(cs);
+ let mut channel = channel.borrow_mut();
+
+ if let Some(mut other_task) = channel.take() {
+ // BAD: `exti1` has access to `exti0`'s interrupt token
+ // so it can now mutably access local while `exti0` is also using it
+ let _local = LOCAL.borrow_mut(&mut other_task);
+ }
+ });
+}
+
+#[allow(dead_code)]
+#[used]
+#[link_section = ".rodata.interrupts"]
+static INTERRUPTS: interrupt::Handlers = interrupt::Handlers {
+ Exti0: exti0,
+ Exti1: exti1,
+ ..interrupt::DEFAULT_HANDLERS
+};
+```
+
+## [v0.2.5] - 2017-05-07 - YANKED
+
+### Added
+
+- Higher level API for the SysTick and FPU peripherals
+
+### Fixed
+
+- [breaking-change]. MEMORY UNSAFETY. `interrupt::enable` was safe to call
+ inside an `interrupt::free` critical section thus breaking the preemption
+ protection. The `interrupt::enable` method is now `unsafe`.
+
+## [v0.2.4] - 2017-04-20 - YANKED
+
+### Fixed
+
+- [breaking-change]. MEMORY UNSAFETY. `interrupt::free` leaked the critical
+ section making it possible to access a `Mutex` when interrupts are enabled
+ (see below). This has been fixed by changing the signature of
+ `interrupt::free`.
+
+``` rust
+static FOO: Mutex<bool> = Mutex::new(false);
+
+fn main() {
+ let cs = cortex_m::interrupt::free(|cs| cs);
+ // interrupts are enabled at this point
+ let foo = FOO.borrow(&cs);
+}
+```
+
+## [v0.2.3] - 2017-04-11 - YANKED
+
+### Fixed
+
+- [breaking-change]. MEMORY UNSAFETY. Some concurrency models that use "partial"
+ critical sections (cf. BASEPRI) can be broken by changing the priority of
+ interrupts or by changing BASEPRI in some scenarios. For this reason
+ `NVIC.set_priority` and `register::basepri::write` are now `unsafe`.
+
+## [v0.2.2] - 2017-04-08 - YANKED
+
+### Fixed
+
+- [breaking-change]. MEMORY UNSAFETY. The `Mutex.borrow_mut` method has been
+ removed as it can be used to bypass Rust's borrow checker and get, for
+ example, two mutable references to the same data.
+
+``` rust
+static FOO: Mutex<bool> = Mutex::new(false);
+
+fn main() {
+ cortex_m::interrupt::free(|mut cs1| {
+ cortex_m::interrupt::free(|mut cs2| {
+ let foo: &mut bool = FOO.borrow_mut(&mut cs1);
+ let and_foo: &mut bool = FOO.borrow_mut(&mut cs2);
+ });
+ });
+}
+```
+
+## [v0.2.1] - 2017-03-12 - YANKED
+
+### Changed
+
+- The default exception handler now identifies the exception that's being
+ serviced.
+
+## [v0.2.0] - 2017-03-11 - YANKED
+
+### Added
+
+- Semihosting functionality in the `semihosting` module.
+
+- `exception::Handlers` struct that represent the section of the vector table
+ that contains the exception handlers.
+
+- A default exception handler
+
+- A high level API for the NVIC peripheral.
+
+- Context local data.
+
+- `borrow`/`borrow_mut` methods to `Mutex` that replace `lock`.
+
+- API and macros to send bytes / (formatted) strings through ITM
+
+### Changed
+
+- [breaking-change] `StackFrame` has been renamed to `StackedRegisters` and
+ moved into the `exceptions` module.
+
+- [breaking-change] Core peripherals can now be modified via a `&-` reference
+ and are no longer `Sync`.
+
+- [breaking-change] `interrupt::free`'s closure now includes a critical section
+ token, `CriticalSection`.
+
+- [breaking-change] the core register API has been revamped for type safety.
+
+- The safety of assembly wrappers like `wfi` and `interrupt::free` has been
+ reviewed. In many cases, the functions are no longer unsafe.
+
+- [breaking-change] `bkpt!` has been turned into a function. It no longer
+ accepts an immediate value.
+
+### Removed
+
+- `vector_table` and its associated `struct`, `VectorTable`. It's not a good
+ idea to give people a simple way to call the exception handlers.
+
+- `Mutex`'s `lock` method as it's unsound. You could use it to get multiple
+ `&mut -` references to the wrapped data.
+
+## [v0.1.6] - 2017-01-22
+
+### Added
+
+- `Exception`, an enumeration of the kinds of exceptions the processor can service.
+ There's also an `Exception::current` constructor that returns the `Exception`
+ that's currently being serviced.
+
+## [v0.1.5]
+
+### Added
+
+- `interrupt::Mutex`, a "mutex" based on critical sections.
+
+### Changed
+
+- The closure that `interrupt::free` takes can now return a value.
+
+## [v0.1.4]
+
+### Added
+
+- `asm::nop`, a wrapper over the NOP instruction
+
+## [v0.1.3]
+
+### Added
+
+- a StackFrame data structure
+
+## [v0.1.2] - 2016-10-04
+
+### Fixed
+
+- Read/write operations on registers (lr, cr, msp, etc.), which were reversed.
+
+## [v0.1.1] - 2016-10-03 - YANKED
+
+### Changed
+
+- Small, non user visible change to make this crate compile further for $HOST (e.g. x86_64) with the
+ goal of making it possible to test, on the HOST, downstream crates that depend on this one.
+
+## v0.1.0 - 2016-09-27 - YANKED
+
+### Added
+
+- Functions to access core peripherals like NVIC, SCB and SysTick.
+- Functions to access core registers like CONTROL, MSP and PSR.
+- Functions to enable/disable interrupts
+- Functions to get the vector table
+- Wrappers over miscellaneous instructions like `bkpt`
+
+[Unreleased]: https://github.com/rust-embedded/cortex-m/compare/v0.7.7...HEAD
+[v0.7.7]: https://github.com/rust-embedded/cortex-m/compare/v0.7.6...v0.7.7
+[v0.7.6]: https://github.com/rust-embedded/cortex-m/compare/v0.7.5...v0.7.6
+[v0.7.5]: https://github.com/rust-embedded/cortex-m/compare/v0.7.4...v0.7.5
+[v0.7.4]: https://github.com/rust-embedded/cortex-m/compare/v0.7.3...v0.7.4
+[v0.7.3]: https://github.com/rust-embedded/cortex-m/compare/v0.7.2...v0.7.3
+[v0.7.2]: https://github.com/rust-embedded/cortex-m/compare/v0.7.1...v0.7.2
+[v0.7.1]: https://github.com/rust-embedded/cortex-m/compare/v0.7.0...v0.7.1
+[v0.7.0]: https://github.com/rust-embedded/cortex-m/compare/v0.6.4...v0.7.0
+[v0.6.7]: https://github.com/rust-embedded/cortex-m/compare/v0.6.6...v0.6.7
+[v0.6.6]: https://github.com/rust-embedded/cortex-m/compare/v0.6.5...v0.6.6
+[v0.6.5]: https://github.com/rust-embedded/cortex-m/compare/v0.6.4...v0.6.5
+[v0.6.4]: https://github.com/rust-embedded/cortex-m/compare/v0.6.3...v0.6.4
+[v0.6.3]: https://github.com/rust-embedded/cortex-m/compare/v0.6.2...v0.6.3
+[v0.6.2]: https://github.com/rust-embedded/cortex-m/compare/v0.6.1...v0.6.2
+[v0.6.1]: https://github.com/rust-embedded/cortex-m/compare/v0.6.0...v0.6.1
+[v0.6.0]: https://github.com/rust-embedded/cortex-m/compare/v0.5.8...v0.6.0
+[v0.5.8]: https://github.com/rust-embedded/cortex-m/compare/v0.5.7...v0.5.8
+[v0.5.7]: https://github.com/rust-embedded/cortex-m/compare/v0.5.6...v0.5.7
+[v0.5.6]: https://github.com/rust-embedded/cortex-m/compare/v0.5.5...v0.5.6
+[v0.5.5]: https://github.com/rust-embedded/cortex-m/compare/v0.5.4...v0.5.5
+[v0.5.4]: https://github.com/rust-embedded/cortex-m/compare/v0.5.3...v0.5.4
+[v0.5.3]: https://github.com/rust-embedded/cortex-m/compare/v0.5.2...v0.5.3
+[v0.5.2]: https://github.com/rust-embedded/cortex-m/compare/v0.5.1...v0.5.2
+[v0.5.1]: https://github.com/rust-embedded/cortex-m/compare/v0.5.0...v0.5.1
+[v0.5.0]: https://github.com/rust-embedded/cortex-m/compare/v0.4.3...v0.5.0
+[v0.4.3]: https://github.com/rust-embedded/cortex-m/compare/v0.4.2...v0.4.3
+[v0.4.2]: https://github.com/rust-embedded/cortex-m/compare/v0.4.1...v0.4.2
+[v0.4.1]: https://github.com/rust-embedded/cortex-m/compare/v0.4.0...v0.4.1
+[v0.4.0]: https://github.com/rust-embedded/cortex-m/compare/v0.3.1...v0.4.0
+[v0.3.1]: https://github.com/rust-embedded/cortex-m/compare/v0.3.0...v0.3.1
+[v0.3.0]: https://github.com/rust-embedded/cortex-m/compare/v0.2.11...v0.3.0
+[v0.2.11]: https://github.com/rust-embedded/cortex-m/compare/v0.2.10...v0.2.11
+[v0.2.10]: https://github.com/rust-embedded/cortex-m/compare/v0.2.9...v0.2.10
+[v0.2.9]: https://github.com/rust-embedded/cortex-m/compare/v0.2.8...v0.2.9
+[v0.2.8]: https://github.com/rust-embedded/cortex-m/compare/v0.2.7...v0.2.8
+[v0.2.7]: https://github.com/rust-embedded/cortex-m/compare/v0.2.6...v0.2.7
+[v0.2.6]: https://github.com/rust-embedded/cortex-m/compare/v0.2.5...v0.2.6
+[v0.2.5]: https://github.com/rust-embedded/cortex-m/compare/v0.2.4...v0.2.5
+[v0.2.4]: https://github.com/rust-embedded/cortex-m/compare/v0.2.3...v0.2.4
+[v0.2.3]: https://github.com/rust-embedded/cortex-m/compare/v0.2.2...v0.2.3
+[v0.2.2]: https://github.com/rust-embedded/cortex-m/compare/v0.2.1...v0.2.2
+[v0.2.1]: https://github.com/rust-embedded/cortex-m/compare/v0.2.0...v0.2.1
+[v0.2.0]: https://github.com/rust-embedded/cortex-m/compare/v0.1.6...v0.2.0
+[v0.1.6]: https://github.com/rust-embedded/cortex-m/compare/v0.1.5...v0.1.6
+[v0.1.5]: https://github.com/rust-embedded/cortex-m/compare/v0.1.4...v0.1.5
+[v0.1.4]: https://github.com/rust-embedded/cortex-m/compare/v0.1.3...v0.1.4
+[v0.1.3]: https://github.com/rust-embedded/cortex-m/compare/v0.1.2...v0.1.3
+[v0.1.2]: https://github.com/rust-embedded/cortex-m/compare/v0.1.1...v0.1.2
+[v0.1.1]: https://github.com/rust-embedded/cortex-m/compare/v0.1.0...v0.1.1
diff --git a/src/rust/vendor/cortex-m/CODE_OF_CONDUCT.md b/src/rust/vendor/cortex-m/CODE_OF_CONDUCT.md
new file mode 100644
index 000000000..3ab76c639
--- /dev/null
+++ b/src/rust/vendor/cortex-m/CODE_OF_CONDUCT.md
@@ -0,0 +1,37 @@
+# The Rust Code of Conduct
+
+## Conduct
+
+**Contact**: [Cortex-M team](https://github.com/rust-embedded/wg#the-cortex-m-team)
+
+* We are committed to providing a friendly, safe and welcoming environment for all, regardless of level of experience, gender identity and expression, sexual orientation, disability, personal appearance, body size, race, ethnicity, age, religion, nationality, or other similar characteristic.
+* On IRC, please avoid using overtly sexual nicknames or other nicknames that might detract from a friendly, safe and welcoming environment for all.
+* Please be kind and courteous. There's no need to be mean or rude.
+* Respect that people have differences of opinion and that every design or implementation choice carries a trade-off and numerous costs. There is seldom a right answer.
+* Please keep unstructured critique to a minimum. If you have solid ideas you want to experiment with, make a fork and see how it works.
+* We will exclude you from interaction if you insult, demean or harass anyone. That is not welcome behavior. We interpret the term "harassment" as including the definition in the [Citizen Code of Conduct](http://citizencodeofconduct.org/); if you have any lack of clarity about what might be included in that concept, please read their definition. In particular, we don't tolerate behavior that excludes people in socially marginalized groups.
+* Private harassment is also unacceptable. No matter who you are, if you feel you have been or are being harassed or made uncomfortable by a community member, please contact one of the channel ops or any of the [Cortex-M team][team] immediately. Whether you're a regular contributor or a newcomer, we care about making this community a safe place for you and we've got your back.
+* Likewise any spamming, trolling, flaming, baiting or other attention-stealing behavior is not welcome.
+
+## Moderation
+
+These are the policies for upholding our community's standards of conduct.
+
+1. Remarks that violate the Rust standards of conduct, including hateful, hurtful, oppressive, or exclusionary remarks, are not allowed. (Cursing is allowed, but never targeting another user, and never in a hateful manner.)
+2. Remarks that moderators find inappropriate, whether listed in the code of conduct or not, are also not allowed.
+3. Moderators will first respond to such remarks with a warning.
+4. If the warning is unheeded, the user will be "kicked," i.e., kicked out of the communication channel to cool off.
+5. If the user comes back and continues to make trouble, they will be banned, i.e., indefinitely excluded.
+6. Moderators may choose at their discretion to un-ban the user if it was a first offense and they offer the offended party a genuine apology.
+7. If a moderator bans someone and you think it was unjustified, please take it up with that moderator, or with a different moderator, **in private**. Complaints about bans in-channel are not allowed.
+8. Moderators are held to a higher standard than other community members. If a moderator creates an inappropriate situation, they should expect less leeway than others.
+
+In the Rust community we strive to go the extra step to look out for each other. Don't just aim to be technically unimpeachable, try to be your best self. In particular, avoid flirting with offensive or sensitive issues, particularly if they're off-topic; this all too often leads to unnecessary fights, hurt feelings, and damaged trust; worse, it can drive people away from the community entirely.
+
+And if someone takes issue with something you said or did, resist the urge to be defensive. Just stop doing what it was they complained about and apologize. Even if you feel you were misinterpreted or unfairly accused, chances are good there was something you could've communicated better — remember that it's your responsibility to make your fellow Rustaceans comfortable. Everyone wants to get along and we are all here first and foremost because we want to talk about cool technology. You will find that people will be eager to assume good intent and forgive as long as you earn their trust.
+
+The enforcement policies listed above apply to all official embedded WG venues; including official IRC channels (#rust-embedded); GitHub repositories under rust-embedded; and all forums under rust-embedded.org (forum.rust-embedded.org).
+
+*Adapted from the [Node.js Policy on Trolling](http://blog.izs.me/post/30036893703/policy-on-trolling) as well as the [Contributor Covenant v1.3.0](https://www.contributor-covenant.org/version/1/3/0/).*
+
+[team]: https://github.com/rust-embedded/wg#the-cortex-m-team
diff --git a/src/rust/vendor/cortex-m/Cargo.toml b/src/rust/vendor/cortex-m/Cargo.toml
new file mode 100644
index 000000000..a86829c68
--- /dev/null
+++ b/src/rust/vendor/cortex-m/Cargo.toml
@@ -0,0 +1,77 @@
+# THIS FILE IS AUTOMATICALLY GENERATED BY CARGO
+#
+# When uploading crates to the registry Cargo will automatically
+# "normalize" Cargo.toml files for maximal compatibility
+# with all versions of Cargo and also rewrite `path` dependencies
+# to registry (e.g., crates.io) dependencies.
+#
+# If you are reading this file be aware that the original Cargo.toml
+# will likely look very different (and much more reasonable).
+# See Cargo.toml.orig for the original contents.
+
+[package]
+edition = "2018"
+name = "cortex-m"
+version = "0.7.7"
+authors = [
+ "The Cortex-M Team <cortex-m@teams.rust-embedded.org>",
+ "Jorge Aparicio <jorge@japaric.io>",
+]
+links = "cortex-m"
+description = "Low level access to Cortex-M processors"
+documentation = "https://docs.rs/cortex-m"
+readme = "README.md"
+keywords = [
+ "arm",
+ "cortex-m",
+ "register",
+ "peripheral",
+]
+categories = [
+ "embedded",
+ "hardware-support",
+ "no-std",
+]
+license = "MIT OR Apache-2.0"
+repository = "https://github.com/rust-embedded/cortex-m"
+
+[package.metadata.docs.rs]
+targets = [
+ "thumbv8m.main-none-eabihf",
+ "thumbv6m-none-eabi",
+ "thumbv7em-none-eabi",
+ "thumbv7em-none-eabihf",
+ "thumbv7m-none-eabi",
+ "thumbv8m.base-none-eabi",
+ "thumbv8m.main-none-eabi",
+]
+
+[dependencies.bare-metal]
+version = "0.2.4"
+features = ["const-fn"]
+
+[dependencies.bitfield]
+version = "0.13.2"
+
+[dependencies.critical-section]
+version = "1.0.0"
+optional = true
+
+[dependencies.embedded-hal]
+version = "0.2.4"
+
+[dependencies.serde]
+version = "1"
+features = ["derive"]
+optional = true
+
+[dependencies.volatile-register]
+version = "0.2.0"
+
+[features]
+cm7 = []
+cm7-r0p1 = ["cm7"]
+critical-section-single-core = ["critical-section/restore-state-bool"]
+inline-asm = []
+linker-plugin-lto = []
+std = []
diff --git a/src/rust/vendor/cortex-m/LICENSE-APACHE b/src/rust/vendor/cortex-m/LICENSE-APACHE
new file mode 100644
index 000000000..16fe87b06
--- /dev/null
+++ b/src/rust/vendor/cortex-m/LICENSE-APACHE
@@ -0,0 +1,201 @@
+ Apache License
+ Version 2.0, January 2004
+ http://www.apache.org/licenses/
+
+TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+1. Definitions.
+
+ "License" shall mean the terms and conditions for use, reproduction,
+ and distribution as defined by Sections 1 through 9 of this document.
+
+ "Licensor" shall mean the copyright owner or entity authorized by
+ the copyright owner that is granting the License.
+
+ "Legal Entity" shall mean the union of the acting entity and all
+ other entities that control, are controlled by, or are under common
+ control with that entity. For the purposes of this definition,
+ "control" means (i) the power, direct or indirect, to cause the
+ direction or management of such entity, whether by contract or
+ otherwise, or (ii) ownership of fifty percent (50%) or more of the
+ outstanding shares, or (iii) beneficial ownership of such entity.
+
+ "You" (or "Your") shall mean an individual or Legal Entity
+ exercising permissions granted by this License.
+
+ "Source" form shall mean the preferred form for making modifications,
+ including but not limited to software source code, documentation
+ source, and configuration files.
+
+ "Object" form shall mean any form resulting from mechanical
+ transformation or translation of a Source form, including but
+ not limited to compiled object code, generated documentation,
+ and conversions to other media types.
+
+ "Work" shall mean the work of authorship, whether in Source or
+ Object form, made available under the License, as indicated by a
+ copyright notice that is included in or attached to the work
+ (an example is provided in the Appendix below).
+
+ "Derivative Works" shall mean any work, whether in Source or Object
+ form, that is based on (or derived from) the Work and for which the
+ editorial revisions, annotations, elaborations, or other modifications
+ represent, as a whole, an original work of authorship. For the purposes
+ of this License, Derivative Works shall not include works that remain
+ separable from, or merely link (or bind by name) to the interfaces of,
+ the Work and Derivative Works thereof.
+
+ "Contribution" shall mean any work of authorship, including
+ the original version of the Work and any modifications or additions
+ to that Work or Derivative Works thereof, that is intentionally
+ submitted to Licensor for inclusion in the Work by the copyright owner
+ or by an individual or Legal Entity authorized to submit on behalf of
+ the copyright owner. For the purposes of this definition, "submitted"
+ means any form of electronic, verbal, or written communication sent
+ to the Licensor or its representatives, including but not limited to
+ communication on electronic mailing lists, source code control systems,
+ and issue tracking systems that are managed by, or on behalf of, the
+ Licensor for the purpose of discussing and improving the Work, but
+ excluding communication that is conspicuously marked or otherwise
+ designated in writing by the copyright owner as "Not a Contribution."
+
+ "Contributor" shall mean Licensor and any individual or Legal Entity
+ on behalf of whom a Contribution has been received by Licensor and
+ subsequently incorporated within the Work.
+
+2. Grant of Copyright License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ copyright license to reproduce, prepare Derivative Works of,
+ publicly display, publicly perform, sublicense, and distribute the
+ Work and such Derivative Works in Source or Object form.
+
+3. Grant of Patent License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ (except as stated in this section) patent license to make, have made,
+ use, offer to sell, sell, import, and otherwise transfer the Work,
+ where such license applies only to those patent claims licensable
+ by such Contributor that are necessarily infringed by their
+ Contribution(s) alone or by combination of their Contribution(s)
+ with the Work to which such Contribution(s) was submitted. If You
+ institute patent litigation against any entity (including a
+ cross-claim or counterclaim in a lawsuit) alleging that the Work
+ or a Contribution incorporated within the Work constitutes direct
+ or contributory patent infringement, then any patent licenses
+ granted to You under this License for that Work shall terminate
+ as of the date such litigation is filed.
+
+4. Redistribution. You may reproduce and distribute copies of the
+ Work or Derivative Works thereof in any medium, with or without
+ modifications, and in Source or Object form, provided that You
+ meet the following conditions:
+
+ (a) You must give any other recipients of the Work or
+ Derivative Works a copy of this License; and
+
+ (b) You must cause any modified files to carry prominent notices
+ stating that You changed the files; and
+
+ (c) You must retain, in the Source form of any Derivative Works
+ that You distribute, all copyright, patent, trademark, and
+ attribution notices from the Source form of the Work,
+ excluding those notices that do not pertain to any part of
+ the Derivative Works; and
+
+ (d) If the Work includes a "NOTICE" text file as part of its
+ distribution, then any Derivative Works that You distribute must
+ include a readable copy of the attribution notices contained
+ within such NOTICE file, excluding those notices that do not
+ pertain to any part of the Derivative Works, in at least one
+ of the following places: within a NOTICE text file distributed
+ as part of the Derivative Works; within the Source form or
+ documentation, if provided along with the Derivative Works; or,
+ within a display generated by the Derivative Works, if and
+ wherever such third-party notices normally appear. The contents
+ of the NOTICE file are for informational purposes only and
+ do not modify the License. You may add Your own attribution
+ notices within Derivative Works that You distribute, alongside
+ or as an addendum to the NOTICE text from the Work, provided
+ that such additional attribution notices cannot be construed
+ as modifying the License.
+
+ You may add Your own copyright statement to Your modifications and
+ may provide additional or different license terms and conditions
+ for use, reproduction, or distribution of Your modifications, or
+ for any such Derivative Works as a whole, provided Your use,
+ reproduction, and distribution of the Work otherwise complies with
+ the conditions stated in this License.
+
+5. Submission of Contributions. Unless You explicitly state otherwise,
+ any Contribution intentionally submitted for inclusion in the Work
+ by You to the Licensor shall be under the terms and conditions of
+ this License, without any additional terms or conditions.
+ Notwithstanding the above, nothing herein shall supersede or modify
+ the terms of any separate license agreement you may have executed
+ with Licensor regarding such Contributions.
+
+6. Trademarks. This License does not grant permission to use the trade
+ names, trademarks, service marks, or product names of the Licensor,
+ except as required for reasonable and customary use in describing the
+ origin of the Work and reproducing the content of the NOTICE file.
+
+7. Disclaimer of Warranty. Unless required by applicable law or
+ agreed to in writing, Licensor provides the Work (and each
+ Contributor provides its Contributions) on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+ implied, including, without limitation, any warranties or conditions
+ of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
+ PARTICULAR PURPOSE. You are solely responsible for determining the
+ appropriateness of using or redistributing the Work and assume any
+ risks associated with Your exercise of permissions under this License.
+
+8. Limitation of Liability. In no event and under no legal theory,
+ whether in tort (including negligence), contract, or otherwise,
+ unless required by applicable law (such as deliberate and grossly
+ negligent acts) or agreed to in writing, shall any Contributor be
+ liable to You for damages, including any direct, indirect, special,
+ incidental, or consequential damages of any character arising as a
+ result of this License or out of the use or inability to use the
+ Work (including but not limited to damages for loss of goodwill,
+ work stoppage, computer failure or malfunction, or any and all
+ other commercial damages or losses), even if such Contributor
+ has been advised of the possibility of such damages.
+
+9. Accepting Warranty or Additional Liability. While redistributing
+ the Work or Derivative Works thereof, You may choose to offer,
+ and charge a fee for, acceptance of support, warranty, indemnity,
+ or other liability obligations and/or rights consistent with this
+ License. However, in accepting such obligations, You may act only
+ on Your own behalf and on Your sole responsibility, not on behalf
+ of any other Contributor, and only if You agree to indemnify,
+ defend, and hold each Contributor harmless for any liability
+ incurred by, or claims asserted against, such Contributor by reason
+ of your accepting any such warranty or additional liability.
+
+END OF TERMS AND CONDITIONS
+
+APPENDIX: How to apply the Apache License to your work.
+
+ To apply the Apache License to your work, attach the following
+ boilerplate notice, with the fields enclosed by brackets "[]"
+ replaced with your own identifying information. (Don't include
+ the brackets!) The text should be enclosed in the appropriate
+ comment syntax for the file format. We also recommend that a
+ file or class name and description of purpose be included on the
+ same "printed page" as the copyright notice for easier
+ identification within third-party archives.
+
+Copyright [yyyy] [name of copyright owner]
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
diff --git a/src/rust/vendor/cortex-m/LICENSE-MIT b/src/rust/vendor/cortex-m/LICENSE-MIT
new file mode 100644
index 000000000..a43445e6c
--- /dev/null
+++ b/src/rust/vendor/cortex-m/LICENSE-MIT
@@ -0,0 +1,25 @@
+Copyright (c) 2016 Jorge Aparicio
+
+Permission is hereby granted, free of charge, to any
+person obtaining a copy of this software and associated
+documentation files (the "Software"), to deal in the
+Software without restriction, including without
+limitation the rights to use, copy, modify, merge,
+publish, distribute, sublicense, and/or sell copies of
+the Software, and to permit persons to whom the Software
+is furnished to do so, subject to the following
+conditions:
+
+The above copyright notice and this permission notice
+shall be included in all copies or substantial portions
+of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF
+ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED
+TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A
+PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
+SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY
+CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
+OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR
+IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
+DEALINGS IN THE SOFTWARE.
diff --git a/src/rust/vendor/cortex-m/README.md b/src/rust/vendor/cortex-m/README.md
new file mode 100644
index 000000000..6bd8aeddc
--- /dev/null
+++ b/src/rust/vendor/cortex-m/README.md
@@ -0,0 +1,39 @@
+[![crates.io](https://img.shields.io/crates/d/cortex-m.svg)](https://crates.io/crates/cortex-m)
+[![crates.io](https://img.shields.io/crates/v/cortex-m.svg)](https://crates.io/crates/cortex-m)
+
+# `cortex-m`
+
+> Low level access to Cortex-M processors
+
+This project is developed and maintained by the [Cortex-M team][team].
+
+## [Documentation](https://docs.rs/crate/cortex-m)
+
+## Minimum Supported Rust Version (MSRV)
+
+This crate is guaranteed to compile on stable Rust 1.38 and up. It might compile with older versions but that may change in any new patch release.
+
+## License
+
+Licensed under either of
+
+- Apache License, Version 2.0 ([LICENSE-APACHE](LICENSE-APACHE) or
+ http://www.apache.org/licenses/LICENSE-2.0)
+- MIT license ([LICENSE-MIT](LICENSE-MIT) or http://opensource.org/licenses/MIT)
+
+at your option.
+
+### Contribution
+
+Unless you explicitly state otherwise, any contribution intentionally submitted for inclusion in the
+work by you, as defined in the Apache-2.0 license, shall be dual licensed as above, without any
+additional terms or conditions.
+
+## Code of Conduct
+
+Contribution to this crate is organized under the terms of the [Rust Code of
+Conduct][CoC]; the maintainer of this crate, the [Cortex-M team][team], promises
+to intervene to uphold that code of conduct.
+
+[CoC]: CODE_OF_CONDUCT.md
+[team]: https://github.com/rust-embedded/wg#the-cortex-m-team
diff --git a/src/rust/vendor/cortex-m/asm-toolchain b/src/rust/vendor/cortex-m/asm-toolchain
new file mode 100644
index 000000000..cc5dbb24a
--- /dev/null
+++ b/src/rust/vendor/cortex-m/asm-toolchain
@@ -0,0 +1 @@
+nightly-2021-12-16
diff --git a/src/rust/vendor/cortex-m/asm/inline.rs b/src/rust/vendor/cortex-m/asm/inline.rs
new file mode 100644
index 000000000..bbc04d2ba
--- /dev/null
+++ b/src/rust/vendor/cortex-m/asm/inline.rs
@@ -0,0 +1,448 @@
+//! Inline assembly implementing the routines exposed in `cortex_m::asm`.
+//!
+//! If the `inline-asm` feature is enabled, these functions will be directly called by the
+//! `cortex-m` wrappers. Otherwise, `cortex-m` links against them via prebuilt archives.
+//!
+//! All of these functions should be blanket-`unsafe`. `cortex-m` provides safe wrappers where
+//! applicable.
+
+use core::arch::asm;
+use core::sync::atomic::{compiler_fence, Ordering};
+
+#[inline(always)]
+pub unsafe fn __bkpt() {
+ asm!("bkpt", options(nomem, nostack, preserves_flags));
+}
+
+#[inline(always)]
+pub unsafe fn __control_r() -> u32 {
+ let r;
+ asm!("mrs {}, CONTROL", out(reg) r, options(nomem, nostack, preserves_flags));
+ r
+}
+
+#[inline(always)]
+pub unsafe fn __control_w(w: u32) {
+ // ISB is required after writing to CONTROL,
+ // per ARM architectural requirements (see Application Note 321).
+ asm!(
+ "msr CONTROL, {}",
+ "isb",
+ in(reg) w,
+ options(nomem, nostack, preserves_flags),
+ );
+
+ // Ensure memory accesses are not reordered around the CONTROL update.
+ compiler_fence(Ordering::SeqCst);
+}
+
+#[inline(always)]
+pub unsafe fn __cpsid() {
+ asm!("cpsid i", options(nomem, nostack, preserves_flags));
+
+ // Ensure no subsequent memory accesses are reordered to before interrupts are disabled.
+ compiler_fence(Ordering::SeqCst);
+}
+
+#[inline(always)]
+pub unsafe fn __cpsie() {
+    // Ensure no preceding memory accesses are reordered to after interrupts are enabled.
+ compiler_fence(Ordering::SeqCst);
+
+ asm!("cpsie i", options(nomem, nostack, preserves_flags));
+}
+
+#[inline(always)]
+pub unsafe fn __delay(cyc: u32) {
+    // The loop will normally take 3 to 4 CPU cycles per iteration, but superscalar cores
+    // (e.g. Cortex-M7) can potentially do it in 2, so we use that as the lower bound, since
+    // delaying for more cycles is okay.
+ // Add 1 to prevent an integer underflow which would cause a long freeze
+ let real_cyc = 1 + cyc / 2;
+ asm!(
+ // Use local labels to avoid R_ARM_THM_JUMP8 relocations which fail on thumbv6m.
+ "1:",
+ "subs {}, #1",
+ "bne 1b",
+ inout(reg) real_cyc => _,
+ options(nomem, nostack),
+ );
+}
+
+#[inline(always)]
+pub unsafe fn __dmb() {
+ compiler_fence(Ordering::SeqCst);
+ asm!("dmb", options(nomem, nostack, preserves_flags));
+ compiler_fence(Ordering::SeqCst);
+}
+
+#[inline(always)]
+pub unsafe fn __dsb() {
+ compiler_fence(Ordering::SeqCst);
+ asm!("dsb", options(nomem, nostack, preserves_flags));
+ compiler_fence(Ordering::SeqCst);
+}
+
+#[inline(always)]
+pub unsafe fn __isb() {
+ compiler_fence(Ordering::SeqCst);
+ asm!("isb", options(nomem, nostack, preserves_flags));
+ compiler_fence(Ordering::SeqCst);
+}
+
+#[inline(always)]
+pub unsafe fn __msp_r() -> u32 {
+ let r;
+ asm!("mrs {}, MSP", out(reg) r, options(nomem, nostack, preserves_flags));
+ r
+}
+
+#[inline(always)]
+pub unsafe fn __msp_w(val: u32) {
+ // Technically is writing to the stack pointer "not pushing any data to the stack"?
+ // In any event, if we don't set `nostack` here, this method is useless as the new
+ // stack value is immediately mutated by returning. Really this is just not a good
+ // method and its higher-level use is marked as deprecated in cortex-m.
+ asm!("msr MSP, {}", in(reg) val, options(nomem, nostack, preserves_flags));
+}
+
+// NOTE: No FFI shim, this requires inline asm.
+#[inline(always)]
+pub unsafe fn __apsr_r() -> u32 {
+ let r;
+ asm!("mrs {}, APSR", out(reg) r, options(nomem, nostack, preserves_flags));
+ r
+}
+
+#[inline(always)]
+pub unsafe fn __nop() {
+ // NOTE: This is a `pure` asm block, but applying that option allows the compiler to eliminate
+ // the nop entirely (or to collapse multiple subsequent ones). Since the user probably wants N
+ // nops when they call `nop` N times, let's not add that option.
+ asm!("nop", options(nomem, nostack, preserves_flags));
+}
+
+// NOTE: No FFI shim, this requires inline asm.
+#[inline(always)]
+pub unsafe fn __pc_r() -> u32 {
+ let r;
+ asm!("mov {}, pc", out(reg) r, options(nomem, nostack, preserves_flags));
+ r
+}
+
+// NOTE: No FFI shim, this requires inline asm.
+#[inline(always)]
+pub unsafe fn __pc_w(val: u32) {
+ asm!("mov pc, {}", in(reg) val, options(nomem, nostack, preserves_flags));
+}
+
+// NOTE: No FFI shim, this requires inline asm.
+#[inline(always)]
+pub unsafe fn __lr_r() -> u32 {
+ let r;
+ asm!("mov {}, lr", out(reg) r, options(nomem, nostack, preserves_flags));
+ r
+}
+
+// NOTE: No FFI shim, this requires inline asm.
+#[inline(always)]
+pub unsafe fn __lr_w(val: u32) {
+ asm!("mov lr, {}", in(reg) val, options(nomem, nostack, preserves_flags));
+}
+
+#[inline(always)]
+pub unsafe fn __primask_r() -> u32 {
+ let r;
+ asm!("mrs {}, PRIMASK", out(reg) r, options(nomem, nostack, preserves_flags));
+ r
+}
+
+#[inline(always)]
+pub unsafe fn __psp_r() -> u32 {
+ let r;
+ asm!("mrs {}, PSP", out(reg) r, options(nomem, nostack, preserves_flags));
+ r
+}
+
+#[inline(always)]
+pub unsafe fn __psp_w(val: u32) {
+ // See comment on __msp_w. Unlike MSP, there are legitimate use-cases for modifying PSP
+ // if MSP is currently being used as the stack pointer.
+ asm!("msr PSP, {}", in(reg) val, options(nomem, nostack, preserves_flags));
+}
+
+#[inline(always)]
+pub unsafe fn __sev() {
+ asm!("sev", options(nomem, nostack, preserves_flags));
+}
+
+#[inline(always)]
+pub unsafe fn __udf() -> ! {
+ asm!("udf #0", options(noreturn, nomem, nostack, preserves_flags));
+}
+
+#[inline(always)]
+pub unsafe fn __wfe() {
+ asm!("wfe", options(nomem, nostack, preserves_flags));
+}
+
+#[inline(always)]
+pub unsafe fn __wfi() {
+ asm!("wfi", options(nomem, nostack, preserves_flags));
+}
+
+/// Semihosting syscall.
+#[inline(always)]
+pub unsafe fn __sh_syscall(mut nr: u32, arg: u32) -> u32 {
+ asm!("bkpt #0xab", inout("r0") nr, in("r1") arg, options(nomem, nostack, preserves_flags));
+ nr
+}
+
+/// Set CONTROL.SPSEL to 0, write `msp` to MSP, branch to `rv`.
+#[inline(always)]
+pub unsafe fn __bootstrap(msp: u32, rv: u32) -> ! {
+ asm!(
+ "mrs {tmp}, CONTROL",
+ "bics {tmp}, {spsel}",
+ "msr CONTROL, {tmp}",
+ "isb",
+ "msr MSP, {msp}",
+ "bx {rv}",
+ // `out(reg) _` is not permitted in a `noreturn` asm! call,
+ // so instead use `in(reg) 0` and don't restore it afterwards.
+ tmp = in(reg) 0,
+ spsel = in(reg) 2,
+ msp = in(reg) msp,
+ rv = in(reg) rv,
+ options(noreturn, nomem, nostack),
+ );
+}
+
+// v7m *AND* v8m.main, but *NOT* v8m.base
+#[cfg(any(armv7m, armv8m_main))]
+pub use self::v7m::*;
+#[cfg(any(armv7m, armv8m_main))]
+mod v7m {
+ use core::arch::asm;
+ use core::sync::atomic::{compiler_fence, Ordering};
+
+ #[inline(always)]
+ pub unsafe fn __basepri_max(val: u8) {
+ asm!("msr BASEPRI_MAX, {}", in(reg) val, options(nomem, nostack, preserves_flags));
+ }
+
+ #[inline(always)]
+ pub unsafe fn __basepri_r() -> u8 {
+ let r;
+ asm!("mrs {}, BASEPRI", out(reg) r, options(nomem, nostack, preserves_flags));
+ r
+ }
+
+ #[inline(always)]
+ pub unsafe fn __basepri_w(val: u8) {
+ asm!("msr BASEPRI, {}", in(reg) val, options(nomem, nostack, preserves_flags));
+ }
+
+ #[inline(always)]
+ pub unsafe fn __faultmask_r() -> u32 {
+ let r;
+ asm!("mrs {}, FAULTMASK", out(reg) r, options(nomem, nostack, preserves_flags));
+ r
+ }
+
+ #[inline(always)]
+ pub unsafe fn __enable_icache() {
+ asm!(
+ "ldr {0}, =0xE000ED14", // CCR
+ "mrs {2}, PRIMASK", // save critical nesting info
+ "cpsid i", // mask interrupts
+ "ldr {1}, [{0}]", // read CCR
+ "orr.w {1}, {1}, #(1 << 17)", // Set bit 17, IC
+ "str {1}, [{0}]", // write it back
+ "dsb", // ensure store completes
+ "isb", // synchronize pipeline
+ "msr PRIMASK, {2}", // unnest critical section
+ out(reg) _,
+ out(reg) _,
+ out(reg) _,
+ options(nostack),
+ );
+ compiler_fence(Ordering::SeqCst);
+ }
+
+ #[inline(always)]
+ pub unsafe fn __enable_dcache() {
+ asm!(
+ "ldr {0}, =0xE000ED14", // CCR
+ "mrs {2}, PRIMASK", // save critical nesting info
+ "cpsid i", // mask interrupts
+ "ldr {1}, [{0}]", // read CCR
+ "orr.w {1}, {1}, #(1 << 16)", // Set bit 16, DC
+ "str {1}, [{0}]", // write it back
+ "dsb", // ensure store completes
+ "isb", // synchronize pipeline
+ "msr PRIMASK, {2}", // unnest critical section
+ out(reg) _,
+ out(reg) _,
+ out(reg) _,
+ options(nostack),
+ );
+ compiler_fence(Ordering::SeqCst);
+ }
+}
+
+#[cfg(armv7em)]
+pub use self::v7em::*;
+#[cfg(armv7em)]
+mod v7em {
+ use core::arch::asm;
+
+ #[inline(always)]
+ pub unsafe fn __basepri_max_cm7_r0p1(val: u8) {
+ asm!(
+ "mrs {1}, PRIMASK",
+ "cpsid i",
+ "tst.w {1}, #1",
+ "msr BASEPRI_MAX, {0}",
+ "it ne",
+ "bxne lr",
+ "cpsie i",
+ in(reg) val,
+ out(reg) _,
+ options(nomem, nostack, preserves_flags),
+ );
+ }
+
+ #[inline(always)]
+ pub unsafe fn __basepri_w_cm7_r0p1(val: u8) {
+ asm!(
+ "mrs {1}, PRIMASK",
+ "cpsid i",
+ "tst.w {1}, #1",
+ "msr BASEPRI, {0}",
+ "it ne",
+ "bxne lr",
+ "cpsie i",
+ in(reg) val,
+ out(reg) _,
+ options(nomem, nostack, preserves_flags),
+ );
+ }
+}
+
+#[cfg(armv8m)]
+pub use self::v8m::*;
+/// Baseline and Mainline.
+#[cfg(armv8m)]
+mod v8m {
+ use core::arch::asm;
+
+ #[inline(always)]
+ pub unsafe fn __tt(mut target: u32) -> u32 {
+ asm!(
+ "tt {target}, {target}",
+ target = inout(reg) target,
+ options(nomem, nostack, preserves_flags),
+ );
+ target
+ }
+
+ #[inline(always)]
+ pub unsafe fn __ttt(mut target: u32) -> u32 {
+ asm!(
+ "ttt {target}, {target}",
+ target = inout(reg) target,
+ options(nomem, nostack, preserves_flags),
+ );
+ target
+ }
+
+ #[inline(always)]
+ pub unsafe fn __tta(mut target: u32) -> u32 {
+ asm!(
+ "tta {target}, {target}",
+ target = inout(reg) target,
+ options(nomem, nostack, preserves_flags),
+ );
+ target
+ }
+
+ #[inline(always)]
+ pub unsafe fn __ttat(mut target: u32) -> u32 {
+ asm!(
+ "ttat {target}, {target}",
+ target = inout(reg) target,
+ options(nomem, nostack, preserves_flags),
+ );
+ target
+ }
+
+ #[inline(always)]
+ pub unsafe fn __msp_ns_r() -> u32 {
+ let r;
+ asm!("mrs {}, MSP_NS", out(reg) r, options(nomem, nostack, preserves_flags));
+ r
+ }
+
+ #[inline(always)]
+ pub unsafe fn __msp_ns_w(val: u32) {
+ asm!("msr MSP_NS, {}", in(reg) val, options(nomem, nostack, preserves_flags));
+ }
+
+ #[inline(always)]
+ pub unsafe fn __bxns(val: u32) {
+ asm!("BXNS {}", in(reg) val, options(nomem, nostack, preserves_flags));
+ }
+}
+
+#[cfg(armv8m_main)]
+pub use self::v8m_main::*;
+/// Mainline only.
+#[cfg(armv8m_main)]
+mod v8m_main {
+ use core::arch::asm;
+
+ #[inline(always)]
+ pub unsafe fn __msplim_r() -> u32 {
+ let r;
+ asm!("mrs {}, MSPLIM", out(reg) r, options(nomem, nostack, preserves_flags));
+ r
+ }
+
+ #[inline(always)]
+ pub unsafe fn __msplim_w(val: u32) {
+ asm!("msr MSPLIM, {}", in(reg) val, options(nomem, nostack, preserves_flags));
+ }
+
+ #[inline(always)]
+ pub unsafe fn __psplim_r() -> u32 {
+ let r;
+ asm!("mrs {}, PSPLIM", out(reg) r, options(nomem, nostack, preserves_flags));
+ r
+ }
+
+ #[inline(always)]
+ pub unsafe fn __psplim_w(val: u32) {
+ asm!("msr PSPLIM, {}", in(reg) val, options(nomem, nostack, preserves_flags));
+ }
+}
+
+#[cfg(has_fpu)]
+pub use self::fpu::*;
+/// All targets with FPU.
+#[cfg(has_fpu)]
+mod fpu {
+ use core::arch::asm;
+
+ #[inline(always)]
+ pub unsafe fn __fpscr_r() -> u32 {
+ let r;
+ asm!("vmrs {}, fpscr", out(reg) r, options(nomem, nostack, preserves_flags));
+ r
+ }
+
+ #[inline(always)]
+ pub unsafe fn __fpscr_w(val: u32) {
+ asm!("vmsr fpscr, {}", in(reg) val, options(nomem, nostack));
+ }
+}
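The cache-enable stubs above both perform the same read-modify-write of the SCB CCR register at `0xE000ED14`, differing only in which bit they set. Stripped of the interrupt masking and barriers, the arithmetic is just a bit-OR; a host-runnable sketch (the function and constant names here are illustrative, not crate API):

```rust
/// CCR bit positions used by `__enable_icache` / `__enable_dcache`.
const CCR_IC: u32 = 1 << 17; // instruction cache enable
const CCR_DC: u32 = 1 << 16; // data cache enable

/// The read-modify-write `__enable_icache` performs on the CCR value.
fn set_icache_bit(ccr: u32) -> u32 {
    ccr | CCR_IC
}

/// The read-modify-write `__enable_dcache` performs on the CCR value.
fn set_dcache_bit(ccr: u32) -> u32 {
    ccr | CCR_DC
}
```

On hardware the stubs additionally mask interrupts around the sequence and issue `dsb`/`isb` so the store to CCR takes effect before the next instruction fetch.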
diff --git a/src/rust/vendor/cortex-m/asm/lib.rs b/src/rust/vendor/cortex-m/asm/lib.rs
new file mode 100644
index 000000000..48f3dc211
--- /dev/null
+++ b/src/rust/vendor/cortex-m/asm/lib.rs
@@ -0,0 +1,143 @@
+//! FFI shim around the inline assembly in `inline.rs`.
+//!
+//! We use this file to precompile some assembly stubs into the static libraries you can find in
+//! `bin`. Apps using the `cortex-m` crate then link against those static libraries and don't need
+//! to build this file themselves.
+//!
+//! Nowadays the assembly stubs are no longer separate assembly files, but just this small Rust
+//! crate that uses unstable inline assembly, coupled with the `xtask` tool to invoke rustc
+//! and build the files.
+//!
+//! Precompiling this to a static lib allows users to call assembly routines from stable Rust, but
+//! also perform [linker plugin LTO] with the precompiled artifacts to completely inline the
+//! assembly routines into their code, which brings the "outline assembly" on par with "real" inline
+//! assembly.
+//!
+//! For developers and contributors to `cortex-m`, this setup means that they don't have to install
+//! any binutils, assembler, or C compiler to hack on the crate. All they need is to run `cargo
+//! xtask assemble` to rebuild the archives from this file.
+//!
+//! Cool, right?
+//!
+//! # Rust version management
+//!
+//! Since inline assembly is still unstable, and we want to ensure that the created blobs are
+//! up-to-date in CI, we have to pin the nightly version we use for this. The nightly toolchain is
+//! stored in `asm-toolchain`.
+//!
+//! The `cargo xtask` automation will automatically install the `asm-toolchain` as well as all
+//! Cortex-M targets needed to generate the blobs.
+//!
+//! [linker plugin LTO]: https://doc.rust-lang.org/stable/rustc/linker-plugin-lto.html
+
+#![feature(asm)]
+#![no_std]
+#![crate_type = "staticlib"]
+#![deny(warnings)]
+// Don't warn about feature(asm) being stable on Rust >= 1.59.0
+#![allow(stable_features)]
+
+mod inline;
+
+macro_rules! shims {
+ (
+ $( fn $name:ident( $($arg:ident: $argty:ty),* ) $(-> $ret:ty)?; )+
+ ) => {
+ $(
+ #[no_mangle]
+ pub unsafe extern "C" fn $name(
+ $($arg: $argty),*
+ ) $(-> $ret)? {
+ crate::inline::$name($($arg),*)
+ }
+ )+
+ };
+}
+
+shims! {
+ fn __bkpt();
+ fn __control_r() -> u32;
+ fn __control_w(w: u32);
+ fn __cpsid();
+ fn __cpsie();
+ fn __delay(cyc: u32);
+ fn __dmb();
+ fn __dsb();
+ fn __isb();
+ fn __msp_r() -> u32;
+ fn __msp_w(val: u32);
+ fn __nop();
+ fn __primask_r() -> u32;
+ fn __psp_r() -> u32;
+ fn __psp_w(val: u32);
+ fn __sev();
+ fn __udf() -> !;
+ fn __wfe();
+ fn __wfi();
+ fn __sh_syscall(nr: u32, arg: u32) -> u32;
+ fn __bootstrap(msp: u32, rv: u32) -> !;
+}
+
+// v7m *AND* v8m.main, but *NOT* v8m.base
+#[cfg(any(armv7m, armv8m_main))]
+shims! {
+ fn __basepri_max(val: u8);
+ fn __basepri_r() -> u8;
+ fn __basepri_w(val: u8);
+ fn __faultmask_r() -> u32;
+ fn __enable_icache();
+ fn __enable_dcache();
+}
+
+#[cfg(armv7em)]
+shims! {
+ fn __basepri_max_cm7_r0p1(val: u8);
+ fn __basepri_w_cm7_r0p1(val: u8);
+}
+
+// Baseline and Mainline.
+#[cfg(armv8m)]
+shims! {
+ fn __tt(target: u32) -> u32;
+ fn __ttt(target: u32) -> u32;
+ fn __tta(target: u32) -> u32;
+ fn __ttat(target: u32) -> u32;
+ fn __msp_ns_r() -> u32;
+ fn __msp_ns_w(val: u32);
+ fn __bxns(val: u32);
+}
+
+// Mainline only.
+#[cfg(armv8m_main)]
+shims! {
+ fn __msplim_r() -> u32;
+ fn __msplim_w(val: u32);
+ fn __psplim_r() -> u32;
+ fn __psplim_w(val: u32);
+}
+
+// All targets with FPU.
+#[cfg(has_fpu)]
+shims! {
+ fn __fpscr_r() -> u32;
+ fn __fpscr_w(val: u32);
+}
+
+/// We *must* define a panic handler here, even though nothing here should ever be able to panic.
+///
+/// We prove that nothing will ever panic by calling a function that doesn't exist. If the panic
+/// handler gets linked in, this causes a linker error. We always build this file with optimizations
+/// enabled, but even without them the panic handler should never be linked in.
+#[panic_handler]
+#[link_section = ".text.asm_panic_handler"]
+fn panic(_: &core::panic::PanicInfo) -> ! {
+ extern "C" {
+ #[link_name = "cortex-m internal error: panic handler not optimized out, please file an \
+ issue at https://github.com/rust-embedded/cortex-m"]
+ fn __cortex_m_should_not_panic() -> !;
+ }
+
+ unsafe {
+ __cortex_m_should_not_panic();
+ }
+}
diff --git a/src/rust/vendor/cortex-m/bin/thumbv6m-none-eabi-lto.a b/src/rust/vendor/cortex-m/bin/thumbv6m-none-eabi-lto.a
new file mode 100644
index 000000000..a203d7ae8
Binary files /dev/null and b/src/rust/vendor/cortex-m/bin/thumbv6m-none-eabi-lto.a differ
diff --git a/src/rust/vendor/cortex-m/bin/thumbv6m-none-eabi.a b/src/rust/vendor/cortex-m/bin/thumbv6m-none-eabi.a
new file mode 100644
index 000000000..9640a6994
Binary files /dev/null and b/src/rust/vendor/cortex-m/bin/thumbv6m-none-eabi.a differ
diff --git a/src/rust/vendor/cortex-m/bin/thumbv7em-none-eabi-lto.a b/src/rust/vendor/cortex-m/bin/thumbv7em-none-eabi-lto.a
new file mode 100644
index 000000000..b34ac64f1
Binary files /dev/null and b/src/rust/vendor/cortex-m/bin/thumbv7em-none-eabi-lto.a differ
diff --git a/src/rust/vendor/cortex-m/bin/thumbv7em-none-eabi.a b/src/rust/vendor/cortex-m/bin/thumbv7em-none-eabi.a
new file mode 100644
index 000000000..88acbddf6
Binary files /dev/null and b/src/rust/vendor/cortex-m/bin/thumbv7em-none-eabi.a differ
diff --git a/src/rust/vendor/cortex-m/bin/thumbv7em-none-eabihf-lto.a b/src/rust/vendor/cortex-m/bin/thumbv7em-none-eabihf-lto.a
new file mode 100644
index 000000000..6de94bbf2
Binary files /dev/null and b/src/rust/vendor/cortex-m/bin/thumbv7em-none-eabihf-lto.a differ
diff --git a/src/rust/vendor/cortex-m/bin/thumbv7em-none-eabihf.a b/src/rust/vendor/cortex-m/bin/thumbv7em-none-eabihf.a
new file mode 100644
index 000000000..cf91a7a59
Binary files /dev/null and b/src/rust/vendor/cortex-m/bin/thumbv7em-none-eabihf.a differ
diff --git a/src/rust/vendor/cortex-m/bin/thumbv7m-none-eabi-lto.a b/src/rust/vendor/cortex-m/bin/thumbv7m-none-eabi-lto.a
new file mode 100644
index 000000000..7f677a931
Binary files /dev/null and b/src/rust/vendor/cortex-m/bin/thumbv7m-none-eabi-lto.a differ
diff --git a/src/rust/vendor/cortex-m/bin/thumbv7m-none-eabi.a b/src/rust/vendor/cortex-m/bin/thumbv7m-none-eabi.a
new file mode 100644
index 000000000..ff4bf211c
Binary files /dev/null and b/src/rust/vendor/cortex-m/bin/thumbv7m-none-eabi.a differ
diff --git a/src/rust/vendor/cortex-m/bin/thumbv8m.base-none-eabi-lto.a b/src/rust/vendor/cortex-m/bin/thumbv8m.base-none-eabi-lto.a
new file mode 100644
index 000000000..f62acafd3
Binary files /dev/null and b/src/rust/vendor/cortex-m/bin/thumbv8m.base-none-eabi-lto.a differ
diff --git a/src/rust/vendor/cortex-m/bin/thumbv8m.base-none-eabi.a b/src/rust/vendor/cortex-m/bin/thumbv8m.base-none-eabi.a
new file mode 100644
index 000000000..c0cc96c4a
Binary files /dev/null and b/src/rust/vendor/cortex-m/bin/thumbv8m.base-none-eabi.a differ
diff --git a/src/rust/vendor/cortex-m/bin/thumbv8m.main-none-eabi-lto.a b/src/rust/vendor/cortex-m/bin/thumbv8m.main-none-eabi-lto.a
new file mode 100644
index 000000000..1a5151522
Binary files /dev/null and b/src/rust/vendor/cortex-m/bin/thumbv8m.main-none-eabi-lto.a differ
diff --git a/src/rust/vendor/cortex-m/bin/thumbv8m.main-none-eabi.a b/src/rust/vendor/cortex-m/bin/thumbv8m.main-none-eabi.a
new file mode 100644
index 000000000..d017a15b7
Binary files /dev/null and b/src/rust/vendor/cortex-m/bin/thumbv8m.main-none-eabi.a differ
diff --git a/src/rust/vendor/cortex-m/bin/thumbv8m.main-none-eabihf-lto.a b/src/rust/vendor/cortex-m/bin/thumbv8m.main-none-eabihf-lto.a
new file mode 100644
index 000000000..fd3dc9283
Binary files /dev/null and b/src/rust/vendor/cortex-m/bin/thumbv8m.main-none-eabihf-lto.a differ
diff --git a/src/rust/vendor/cortex-m/bin/thumbv8m.main-none-eabihf.a b/src/rust/vendor/cortex-m/bin/thumbv8m.main-none-eabihf.a
new file mode 100644
index 000000000..223ff1df3
Binary files /dev/null and b/src/rust/vendor/cortex-m/bin/thumbv8m.main-none-eabihf.a differ
diff --git a/src/rust/vendor/cortex-m/build.rs b/src/rust/vendor/cortex-m/build.rs
new file mode 100644
index 000000000..23ceebad4
--- /dev/null
+++ b/src/rust/vendor/cortex-m/build.rs
@@ -0,0 +1,54 @@
+use std::path::PathBuf;
+use std::{env, fs};
+
+fn main() {
+ let target = env::var("TARGET").unwrap();
+ let host_triple = env::var("HOST").unwrap();
+ let out_dir = PathBuf::from(env::var("OUT_DIR").unwrap());
+ let name = env::var("CARGO_PKG_NAME").unwrap();
+
+ if host_triple == target {
+ println!("cargo:rustc-cfg=native");
+ }
+
+ if target.starts_with("thumb") {
+ let suffix = if env::var_os("CARGO_FEATURE_LINKER_PLUGIN_LTO").is_some() {
+ "-lto"
+ } else {
+ ""
+ };
+
+ fs::copy(
+ format!("bin/{}{}.a", target, suffix),
+ out_dir.join(format!("lib{}.a", name)),
+ )
+ .unwrap();
+
+ println!("cargo:rustc-link-lib=static={}", name);
+ println!("cargo:rustc-link-search={}", out_dir.display());
+ }
+
+ if target.starts_with("thumbv6m-") {
+ println!("cargo:rustc-cfg=cortex_m");
+ println!("cargo:rustc-cfg=armv6m");
+ } else if target.starts_with("thumbv7m-") {
+ println!("cargo:rustc-cfg=cortex_m");
+ println!("cargo:rustc-cfg=armv7m");
+ } else if target.starts_with("thumbv7em-") {
+ println!("cargo:rustc-cfg=cortex_m");
+ println!("cargo:rustc-cfg=armv7m");
+        println!("cargo:rustc-cfg=armv7em"); // used by the Cortex-M7 r0p1 BASEPRI shims
+ } else if target.starts_with("thumbv8m.base") {
+ println!("cargo:rustc-cfg=cortex_m");
+ println!("cargo:rustc-cfg=armv8m");
+ println!("cargo:rustc-cfg=armv8m_base");
+ } else if target.starts_with("thumbv8m.main") {
+ println!("cargo:rustc-cfg=cortex_m");
+ println!("cargo:rustc-cfg=armv8m");
+ println!("cargo:rustc-cfg=armv8m_main");
+ }
+
+ if target.ends_with("-eabihf") {
+ println!("cargo:rustc-cfg=has_fpu");
+ }
+}
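The cascade of `starts_with` checks in `build.rs` amounts to a pure mapping from target triple to cfg flags. A host-runnable sketch of that mapping (the function name is illustrative; the real script emits `cargo:rustc-cfg=` lines instead of returning a list):

```rust
/// Illustrative pure version of build.rs's target-triple -> cfg mapping.
fn target_cfgs(target: &str) -> Vec<&'static str> {
    let mut cfgs: Vec<&'static str> = Vec::new();
    if target.starts_with("thumbv6m-") {
        cfgs.extend_from_slice(&["cortex_m", "armv6m"]);
    } else if target.starts_with("thumbv7m-") {
        cfgs.extend_from_slice(&["cortex_m", "armv7m"]);
    } else if target.starts_with("thumbv7em-") {
        cfgs.extend_from_slice(&["cortex_m", "armv7m", "armv7em"]);
    } else if target.starts_with("thumbv8m.base") {
        cfgs.extend_from_slice(&["cortex_m", "armv8m", "armv8m_base"]);
    } else if target.starts_with("thumbv8m.main") {
        cfgs.extend_from_slice(&["cortex_m", "armv8m", "armv8m_main"]);
    }
    if target.ends_with("-eabihf") {
        cfgs.push("has_fpu"); // hard-float targets additionally get the FPU cfg
    }
    cfgs
}
```

Note that non-`thumb*` targets (e.g. the host triple during `cargo test`) get no cfgs at all, which is what gates out all the `#[cfg(cortex_m)]` code on the host.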
diff --git a/src/rust/vendor/cortex-m/src/asm.rs b/src/rust/vendor/cortex-m/src/asm.rs
new file mode 100644
index 000000000..4dc1ab07c
--- /dev/null
+++ b/src/rust/vendor/cortex-m/src/asm.rs
@@ -0,0 +1,209 @@
+//! Miscellaneous assembly instructions
+
+// When inline assembly is enabled, pull in the assembly routines here. `call_asm!` will invoke
+// these routines.
+#[cfg(feature = "inline-asm")]
+#[path = "../asm/inline.rs"]
+pub(crate) mod inline;
+
+/// Puts the processor in Debug state. Debuggers can pick this up as a "breakpoint".
+///
+/// **NOTE** calling `bkpt` when the processor is not connected to a debugger will cause an
+/// exception.
+#[inline(always)]
+pub fn bkpt() {
+ call_asm!(__bkpt());
+}
+
+/// Blocks the program for *at least* `cycles` CPU cycles.
+///
+/// This is implemented in assembly so its execution time is independent of the optimization
+/// level, however it is dependent on the specific architecture and core configuration.
+///
+/// NOTE that the delay can take much longer if interrupts are serviced during its execution
+/// and the execution time may vary with other factors. This delay is mainly useful for simple
+/// timer-less initialization of peripherals if and only if accurate timing is not essential. In
+/// any other case please use a more accurate method to produce a delay.
+#[inline]
+pub fn delay(cycles: u32) {
+ call_asm!(__delay(cycles: u32));
+}
+
+/// A no-operation. Useful to prevent delay loops from being optimized away.
+#[inline]
+pub fn nop() {
+ call_asm!(__nop());
+}
+
+/// Generate an Undefined Instruction exception.
+///
+/// Can be used as a stable alternative to `core::intrinsics::abort`.
+#[inline]
+pub fn udf() -> ! {
+ call_asm!(__udf() -> !)
+}
+
+/// Wait For Event
+#[inline]
+pub fn wfe() {
+ call_asm!(__wfe())
+}
+
+/// Wait For Interrupt
+#[inline]
+pub fn wfi() {
+ call_asm!(__wfi())
+}
+
+/// Send Event
+#[inline]
+pub fn sev() {
+ call_asm!(__sev())
+}
+
+/// Instruction Synchronization Barrier
+///
+/// Flushes the pipeline in the processor, so that all instructions following the `ISB` are fetched
+/// from cache or memory, after the instruction has been completed.
+#[inline]
+pub fn isb() {
+ call_asm!(__isb())
+}
+
+/// Data Synchronization Barrier
+///
+/// Acts as a special kind of memory barrier. No instruction in program order after this instruction
+/// can execute until this instruction completes. This instruction completes only when both:
+///
+/// * any explicit memory access made before this instruction is complete
+/// * all cache and branch predictor maintenance operations before this instruction complete
+#[inline]
+pub fn dsb() {
+ call_asm!(__dsb())
+}
+
+/// Data Memory Barrier
+///
+/// Ensures that all explicit memory accesses that appear in program order before the `DMB`
+/// instruction are observed before any explicit memory accesses that appear in program order
+/// after the `DMB` instruction.
+#[inline]
+pub fn dmb() {
+ call_asm!(__dmb())
+}
+
+/// Test Target
+///
+/// Queries the Security state and access permissions of a memory location.
+/// Returns a Test Target Response Payload (cf section D1.2.215 of
+/// Armv8-M Architecture Reference Manual).
+#[inline]
+#[cfg(armv8m)]
+// The __tt function does not dereference the pointer received.
+#[allow(clippy::not_unsafe_ptr_arg_deref)]
+pub fn tt(addr: *mut u32) -> u32 {
+ let addr = addr as u32;
+ call_asm!(__tt(addr: u32) -> u32)
+}
+
+/// Test Target Unprivileged
+///
+/// Queries the Security state and access permissions of a memory location for an unprivileged
+/// access to that location.
+/// Returns a Test Target Response Payload (cf section D1.2.215 of
+/// Armv8-M Architecture Reference Manual).
+#[inline]
+#[cfg(armv8m)]
+// The __ttt function does not dereference the pointer received.
+#[allow(clippy::not_unsafe_ptr_arg_deref)]
+pub fn ttt(addr: *mut u32) -> u32 {
+ let addr = addr as u32;
+ call_asm!(__ttt(addr: u32) -> u32)
+}
+
+/// Test Target Alternate Domain
+///
+/// Queries the Security state and access permissions of a memory location for a Non-Secure access
+/// to that location. This instruction is only valid when executing in Secure state and is
+/// undefined if used from Non-Secure state.
+/// Returns a Test Target Response Payload (cf section D1.2.215 of
+/// Armv8-M Architecture Reference Manual).
+#[inline]
+#[cfg(armv8m)]
+// The __tta function does not dereference the pointer received.
+#[allow(clippy::not_unsafe_ptr_arg_deref)]
+pub fn tta(addr: *mut u32) -> u32 {
+ let addr = addr as u32;
+ call_asm!(__tta(addr: u32) -> u32)
+}
+
+/// Test Target Alternate Domain Unprivileged
+///
+/// Queries the Security state and access permissions of a memory location for a Non-Secure and
+/// unprivileged access to that location. This instruction is only valid when executing in Secure
+/// state and is undefined if used from Non-Secure state.
+/// Returns a Test Target Response Payload (cf section D1.2.215 of
+/// Armv8-M Architecture Reference Manual).
+#[inline]
+#[cfg(armv8m)]
+// The __ttat function does not dereference the pointer received.
+#[allow(clippy::not_unsafe_ptr_arg_deref)]
+pub fn ttat(addr: *mut u32) -> u32 {
+ let addr = addr as u32;
+ call_asm!(__ttat(addr: u32) -> u32)
+}
+
+/// Branch and Exchange Non-secure
+///
+/// See section C2.4.26 of Armv8-M Architecture Reference Manual for details.
+/// Undefined if executed in Non-Secure state.
+#[inline]
+#[cfg(armv8m)]
+pub unsafe fn bx_ns(addr: u32) {
+ call_asm!(__bxns(addr: u32));
+}
+
+/// Semihosting syscall.
+///
+/// This method is used by cortex-m-semihosting to provide semihosting syscalls.
+#[inline]
+pub unsafe fn semihosting_syscall(nr: u32, arg: u32) -> u32 {
+ call_asm!(__sh_syscall(nr: u32, arg: u32) -> u32)
+}
+
+/// Bootstrap.
+///
+/// Clears CONTROL.SPSEL (setting the main stack to be the active stack),
+/// updates the main stack pointer to the address in `msp`, then jumps
+/// to the address in `rv`.
+///
+/// # Safety
+///
+/// `msp` and `rv` must point to valid stack memory and executable code,
+/// respectively.
+#[inline]
+pub unsafe fn bootstrap(msp: *const u32, rv: *const u32) -> ! {
+ // Ensure thumb mode is set.
+ let rv = (rv as u32) | 1;
+ let msp = msp as u32;
+ call_asm!(__bootstrap(msp: u32, rv: u32) -> !);
+}
+
+/// Bootload.
+///
+/// Reads the initial stack pointer value and reset vector from
+/// the provided vector table address, sets the active stack to
+/// the main stack, sets the main stack pointer to the new initial
+/// stack pointer, then jumps to the reset vector.
+///
+/// # Safety
+///
+/// The provided `vector_table` must point to a valid vector
+/// table, with a valid stack pointer as the first word and
+/// a valid reset vector as the second word.
+#[inline]
+pub unsafe fn bootload(vector_table: *const u32) -> ! {
+ let msp = core::ptr::read_volatile(vector_table);
+ let rv = core::ptr::read_volatile(vector_table.offset(1));
+ bootstrap(msp as *const u32, rv as *const u32);
+}
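`bootload` reads only two words from the vector table, and `bootstrap` fixes up the reset vector before branching; the word derivation reduces to this arithmetic (host-runnable sketch, function name illustrative):

```rust
/// How `bootload` derives its arguments from the first two words of a
/// vector table: word 0 is the initial main stack pointer, word 1 is the
/// reset vector, which `bootstrap` ORs with 1 so the branch stays in
/// Thumb mode (Cortex-M cores only execute Thumb code).
fn bootload_words(vector_table: [u32; 2]) -> (u32, u32) {
    let msp = vector_table[0];
    let reset_vector = vector_table[1] | 1; // force the Thumb bit
    (msp, reset_vector)
}
```

This is the mechanism a bootloader uses to chain into an application image: point `bootload` at the image's vector table and it never returns.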
diff --git a/src/rust/vendor/cortex-m/src/call_asm.rs b/src/rust/vendor/cortex-m/src/call_asm.rs
new file mode 100644
index 000000000..295277f38
--- /dev/null
+++ b/src/rust/vendor/cortex-m/src/call_asm.rs
@@ -0,0 +1,24 @@
+/// An internal macro to invoke an assembly routine.
+///
+/// Depending on whether the unstable `inline-asm` feature is enabled, this will either call into
+/// the inline assembly implementation directly, or through the FFI shim (see `asm/lib.rs`).
+macro_rules! call_asm {
+ ( $func:ident ( $($args:ident: $tys:ty),* ) $(-> $ret:ty)? ) => {{
+ #[allow(unused_unsafe)]
+ unsafe {
+ match () {
+ #[cfg(feature = "inline-asm")]
+ () => crate::asm::inline::$func($($args),*),
+
+ #[cfg(not(feature = "inline-asm"))]
+ () => {
+ extern "C" {
+ fn $func($($args: $tys),*) $(-> $ret)?;
+ }
+
+ $func($($args),*)
+ },
+ }
+ }
+ }};
+}
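A host-runnable toy showing the dispatch shape of `call_asm!` (the `inline` module and `__delay` stand-in below are illustrative; the real macro selects between `crate::asm::inline::$func` and an `extern "C"` symbol provided by the prebuilt `bin/*.a` archives):

```rust
// Stand-in for the inline-asm implementation module.
mod inline {
    pub unsafe fn __delay(cyc: u32) -> u32 {
        cyc // the real routine burns cycles; this just echoes its argument
    }
}

// Same argument/return grammar as `call_asm!`, minus the cfg dispatch:
// callers pass `name: type` pairs so the macro can also spell out an
// `extern "C"` declaration in the FFI branch.
macro_rules! call_asm_demo {
    ( $func:ident ( $($args:ident: $tys:ty),* ) $(-> $ret:ty)? ) => {{
        #[allow(unused_unsafe)]
        unsafe { inline::$func($($args),*) }
    }};
}
```

The point of the `name: type` syntax is that the non-inline branch can re-declare the function signature for FFI; the inline branch simply ignores the types.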
diff --git a/src/rust/vendor/cortex-m/src/cmse.rs b/src/rust/vendor/cortex-m/src/cmse.rs
new file mode 100644
index 000000000..36d744754
--- /dev/null
+++ b/src/rust/vendor/cortex-m/src/cmse.rs
@@ -0,0 +1,238 @@
+//! Cortex-M Security Extensions
+//!
+//! This module provides several helper functions to support Armv8-M and Armv8.1-M Security
+//! Extensions.
+//! Most of this implementation is directly inspired by the "Armv8-M Security Extensions:
+//! Requirements on Development Tools" document available here:
+//! https://developer.arm.com/docs/ecm0359818/latest
+//!
+//! Note that the TT instruction support described in part 4 of the document linked above is not
+//! part of CMSE proper, but it is still provided by this module. The TT instructions return the
+//! configuration of the Memory Protection Unit at an address.
+//!
+//! # Notes
+//!
+//! * Non-Secure Unprivileged code will always read zeroes from TestTarget and should not use it.
+//! * Non-Secure Privileged code can check current (AccessType::Current) and Non-Secure Unprivileged
+//! accesses (AccessType::Unprivileged).
+//! * Secure Unprivileged code can check Non-Secure Unprivileged accesses (AccessType::NonSecure).
+//! * Secure Privileged code can check all access types.
+//!
+//! # Example
+//!
+//! ```
+//! use cortex_m::cmse::{TestTarget, AccessType};
+//!
+//! // suspect_address was given by Non-Secure to a Secure function to write at it.
+//! // But is it allowed to?
+//! let suspect_address_test = TestTarget::check(0xDEADBEEF as *mut u32,
+//! AccessType::NonSecureUnprivileged);
+//! if suspect_address_test.ns_read_and_writable() {
+//! // Non-Secure is allowed to read and write this address!
+//! }
+//! ```
+
+use crate::asm::{tt, tta, ttat, ttt};
+use bitfield::bitfield;
+
+/// Memory access behaviour: determine which privilege execution mode is used and which Memory
+/// Protection Unit (MPU) is used.
+#[derive(PartialEq, Copy, Clone, Debug)]
+pub enum AccessType {
+ /// Access using current privilege level and reading from current security state MPU.
+ /// Uses the TT instruction.
+ Current,
+ /// Unprivileged access reading from current security state MPU. Uses the TTT instruction.
+ Unprivileged,
+ /// Access using current privilege level reading from Non-Secure MPU. Uses the TTA instruction.
+ /// Undefined if used from Non-Secure state.
+ NonSecure,
+    /// Unprivileged access reading from Non-Secure MPU. Uses the TTAT instruction.
+ /// Undefined if used from Non-Secure state.
+ NonSecureUnprivileged,
+}
+
+/// Abstraction of TT instructions and helper functions to determine the security and privilege
+/// attribute of a target address, accessed in different ways.
+#[derive(PartialEq, Copy, Clone, Debug)]
+pub struct TestTarget {
+ tt_resp: TtResp,
+ access_type: AccessType,
+}
+
+bitfield! {
+ /// Test Target Response Payload
+ ///
+ /// Provides the response payload from a TT, TTA, TTT or TTAT instruction.
+ #[derive(PartialEq, Copy, Clone)]
+ struct TtResp(u32);
+ impl Debug;
+ mregion, _: 7, 0;
+ sregion, _: 15, 8;
+ mrvalid, _: 16;
+ srvalid, _: 17;
+ r, _: 18;
+ rw, _: 19;
+ nsr, _: 20;
+ nsrw, _: 21;
+ s, _: 22;
+ irvalid, _: 23;
+ iregion, _: 31, 24;
+}
+
+impl TestTarget {
+ /// Creates a Test Target Response Payload by testing addr using access_type.
+ #[inline]
+ pub fn check(addr: *mut u32, access_type: AccessType) -> Self {
+ let tt_resp = match access_type {
+ AccessType::Current => TtResp(tt(addr)),
+ AccessType::Unprivileged => TtResp(ttt(addr)),
+ AccessType::NonSecure => TtResp(tta(addr)),
+ AccessType::NonSecureUnprivileged => TtResp(ttat(addr)),
+ };
+
+ TestTarget {
+ tt_resp,
+ access_type,
+ }
+ }
+
+ /// Creates a Test Target Response Payload by testing the zone from addr to addr + size - 1
+ /// using access_type.
+ /// Returns None if:
+ /// * the address zone overlaps SAU, IDAU or MPU region boundaries
+ /// * size is 0
+ /// * addr + size - 1 overflows
+ #[inline]
+    pub fn check_range(addr: *mut u32, size: usize, access_type: AccessType) -> Option<Self> {
+ let begin: usize = addr as usize;
+ // Last address of the range (addr + size - 1). This also checks if size is 0.
+ let end: usize = begin.checked_add(size.checked_sub(1)?)?;
+
+ // Regions are aligned at 32-byte boundaries. If the address range fits in one 32-byte
+ // address line, a single TT instruction suffices. This is the case when the following
+ // constraint holds.
+ let single_check: bool = (begin % 32).checked_add(size)? <= 32usize;
+
+ let test_start = TestTarget::check(addr, access_type);
+
+ if single_check {
+ Some(test_start)
+ } else {
+ let test_end = TestTarget::check(end as *mut u32, access_type);
+ // Check that the range does not cross SAU, IDAU or MPU region boundaries.
+ if test_start != test_end {
+ None
+ } else {
+ Some(test_start)
+ }
+ }
+ }
+
+ /// Access type that was used for this test target.
+ #[inline]
+ pub fn access_type(self) -> AccessType {
+ self.access_type
+ }
+
+ /// Get the raw u32 value returned by the TT instruction used.
+ #[inline]
+ pub fn as_u32(self) -> u32 {
+ self.tt_resp.0
+ }
+
+ /// Read accessibility of the target address. Only returns the MPU settings without checking
+ /// the Security state of the target.
+ /// For Unprivileged and NonSecureUnprivileged access types, returns the permissions for
+ /// unprivileged access, regardless of whether the current mode is privileged or unprivileged.
+ /// Returns false if the TT instruction was executed from an unprivileged mode
+ /// and the NonSecure access type was not specified.
+ /// Returns false if the address matches multiple MPU regions.
+ #[inline]
+ pub fn readable(self) -> bool {
+ self.tt_resp.r()
+ }
+
+ /// Read and write accessibility of the target address. Only returns the MPU settings without
+ /// checking the Security state of the target.
+ /// For Unprivileged and NonSecureUnprivileged access types, returns the permissions for
+ /// unprivileged access, regardless of whether the current mode is privileged or unprivileged.
+ /// Returns false if the TT instruction was executed from an unprivileged mode
+ /// and the NonSecure access type was not specified.
+ /// Returns false if the address matches multiple MPU regions.
+ #[inline]
+ pub fn read_and_writable(self) -> bool {
+ self.tt_resp.rw()
+ }
+
+ /// Indicate the MPU region number containing the target address.
+ /// Returns None if the value is not valid:
+ /// * the MPU is not implemented or MPU_CTRL.ENABLE is set to zero
+ /// * the register argument specified by the MREGION field does not match any enabled MPU regions
+ /// * the address matched multiple MPU regions
+ /// * the address specified by the SREGION field is exempt from the secure memory attribution
+ /// * the TT instruction was executed from an unprivileged mode and the A flag was not specified.
+ #[inline]
+    pub fn mpu_region(self) -> Option<u8> {
+        if self.tt_resp.mrvalid() {
+            // Cast is safe as the MREGION field is defined on 8 bits.
+            Some(self.tt_resp.mregion() as u8)
+        } else {
+            None
+ }
+ }
+
+ /// Indicates the Security attribute of the target address. Independent of AccessType.
+    /// Always false when the test is executed from the Non-Secure state.
+ #[inline]
+ pub fn secure(self) -> bool {
+ self.tt_resp.s()
+ }
+
+ /// Non-Secure Read accessibility of the target address.
+ /// Same as readable() && !secure()
+ #[inline]
+ pub fn ns_readable(self) -> bool {
+ self.tt_resp.nsr()
+ }
+
+ /// Non-Secure Read and Write accessibility of the target address.
+ /// Same as read_and_writable() && !secure()
+ #[inline]
+ pub fn ns_read_and_writable(self) -> bool {
+ self.tt_resp.nsrw()
+ }
+
+ /// Indicate the IDAU region number containing the target address. Independent of AccessType.
+ /// Returns None if the value is not valid:
+ /// * the IDAU cannot provide a region number
+ /// * the address is exempt from security attribution
+ /// * the test target is done from Non-Secure state
+ #[inline]
+    pub fn idau_region(self) -> Option<u8> {
+ if self.tt_resp.irvalid() {
+ // Cast is safe as IREGION field is defined on 8 bits.
+ Some(self.tt_resp.iregion() as u8)
+ } else {
+ None
+ }
+ }
+
+ /// Indicate the SAU region number containing the target address. Independent of AccessType.
+ /// Returns None if the value is not valid:
+ /// * SAU_CTRL.ENABLE is set to zero
+ /// * the register argument specified in the SREGION field does not match any enabled SAU regions
+ /// * the address specified matches multiple enabled SAU regions
+ /// * the address specified by the SREGION field is exempt from the secure memory attribution
+ /// * the TT instruction was executed from the Non-secure state or the Security Extension is not
+ /// implemented
+ #[inline]
+    pub fn sau_region(self) -> Option<u8> {
+ if self.tt_resp.srvalid() {
+ // Cast is safe as SREGION field is defined on 8 bits.
+ Some(self.tt_resp.sregion() as u8)
+ } else {
+ None
+ }
+ }
+}
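The `TtResp` bitfield can also be decoded by hand, which makes the accessor semantics above concrete; a host-runnable sketch of a few of the fields (function name illustrative, bit positions taken from the `bitfield!` definition):

```rust
/// Manual decode of part of a TT response word, mirroring `TtResp`:
/// bits 7:0 = MREGION, bit 18 = R, bit 19 = RW, bit 22 = S.
fn decode_tt(resp: u32) -> (u8, bool, bool, bool) {
    let mregion = (resp & 0xff) as u8;      // MPU region number
    let readable = resp & (1 << 18) != 0;   // readable()
    let read_write = resp & (1 << 19) != 0; // read_and_writable()
    let secure = resp & (1 << 22) != 0;     // secure()
    (mregion, readable, read_write, secure)
}
```

`check_range` relies on these whole-word payloads comparing equal at both ends of the range, which is why `TtResp` derives `PartialEq`.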
diff --git a/src/rust/vendor/cortex-m/src/critical_section.rs b/src/rust/vendor/cortex-m/src/critical_section.rs
new file mode 100644
index 000000000..d33e90ff6
--- /dev/null
+++ b/src/rust/vendor/cortex-m/src/critical_section.rs
@@ -0,0 +1,25 @@
+#[cfg(all(cortex_m, feature = "critical-section-single-core"))]
+mod single_core_critical_section {
+ use critical_section::{set_impl, Impl, RawRestoreState};
+
+ use crate::interrupt;
+ use crate::register::primask;
+
+ struct SingleCoreCriticalSection;
+ set_impl!(SingleCoreCriticalSection);
+
+ unsafe impl Impl for SingleCoreCriticalSection {
+ unsafe fn acquire() -> RawRestoreState {
+ let was_active = primask::read().is_active();
+ interrupt::disable();
+ was_active
+ }
+
+ unsafe fn release(was_active: RawRestoreState) {
+ // Only re-enable interrupts if they were enabled before the critical section.
+ if was_active {
+ interrupt::enable()
+ }
+ }
+ }
+}
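The acquire/release pairing above only re-enables interrupts at the outermost release, which is what makes nested critical sections safe on a single core. A host model of that bookkeeping (no hardware involved; the boolean stands in for PRIMASK and the function names are illustrative):

```rust
/// Host model of the PRIMASK save/restore done by acquire/release.
fn acquire(interrupts_enabled: &mut bool) -> bool {
    let was_active = *interrupts_enabled; // primask::read().is_active()
    *interrupts_enabled = false;          // interrupt::disable()
    was_active
}

fn release(interrupts_enabled: &mut bool, was_active: bool) {
    if was_active {
        *interrupts_enabled = true; // only the outermost release re-enables
    }
}
```

An inner critical section observes interrupts already disabled, saves `false`, and therefore leaves them disabled on its release; only the section that found them enabled turns them back on.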
diff --git a/src/rust/vendor/cortex-m/src/delay.rs b/src/rust/vendor/cortex-m/src/delay.rs
new file mode 100644
index 000000000..66a63bf67
--- /dev/null
+++ b/src/rust/vendor/cortex-m/src/delay.rs
@@ -0,0 +1,136 @@
+//! A delay driver based on SysTick.
+
+use crate::peripheral::{syst::SystClkSource, SYST};
+use embedded_hal::blocking::delay::{DelayMs, DelayUs};
+
+/// System timer (SysTick) as a delay provider.
+pub struct Delay {
+ syst: SYST,
+ frequency: u32,
+}
+
+impl Delay {
+ /// Configures the system timer (SysTick) as a delay provider.
+ ///
+ /// `ahb_frequency` is a frequency of the AHB bus in Hz.
+ #[inline]
+ pub fn new(syst: SYST, ahb_frequency: u32) -> Self {
+ Self::with_source(syst, ahb_frequency, SystClkSource::Core)
+ }
+
+ /// Configures the system timer (SysTick) as a delay provider
+ /// with a clock source.
+ ///
+ /// `frequency` is the frequency of your `clock_source` in Hz.
+ #[inline]
+ pub fn with_source(mut syst: SYST, frequency: u32, clock_source: SystClkSource) -> Self {
+ syst.set_clock_source(clock_source);
+
+ Delay { syst, frequency }
+ }
+
+ /// Releases the system timer (SysTick) resource.
+ #[inline]
+ pub fn free(self) -> SYST {
+ self.syst
+ }
+
+ /// Delay using the Cortex-M systick for a certain duration, in µs.
+ #[allow(clippy::missing_inline_in_public_items)]
+ pub fn delay_us(&mut self, us: u32) {
+ let ticks = (u64::from(us)) * (u64::from(self.frequency)) / 1_000_000;
+
+ let full_cycles = ticks >> 24;
+ if full_cycles > 0 {
+ self.syst.set_reload(0xffffff);
+ self.syst.clear_current();
+ self.syst.enable_counter();
+
+ for _ in 0..full_cycles {
+ while !self.syst.has_wrapped() {}
+ }
+ }
+
+ let ticks = (ticks & 0xffffff) as u32;
+ if ticks > 1 {
+ self.syst.set_reload(ticks - 1);
+ self.syst.clear_current();
+ self.syst.enable_counter();
+
+ while !self.syst.has_wrapped() {}
+ }
+
+ self.syst.disable_counter();
+ }
+
+ /// Delay using the Cortex-M systick for a certain duration, in ms.
+ #[inline]
+ pub fn delay_ms(&mut self, mut ms: u32) {
+ // 4294967 is the highest u32 value which you can multiply by 1000 without overflow
+ while ms > 4294967 {
+ self.delay_us(4294967000u32);
+ ms -= 4294967;
+ }
+ self.delay_us(ms * 1_000);
+ }
+}
+
+impl DelayMs<u32> for Delay {
+ #[inline]
+ fn delay_ms(&mut self, ms: u32) {
+ Delay::delay_ms(self, ms);
+ }
+}
+
+// This is a workaround to allow `delay_ms(42)` construction without specifying a type.
+impl DelayMs<i32> for Delay {
+ #[inline(always)]
+ fn delay_ms(&mut self, ms: i32) {
+ assert!(ms >= 0);
+ Delay::delay_ms(self, ms as u32);
+ }
+}
+
+impl DelayMs<u16> for Delay {
+ #[inline(always)]
+ fn delay_ms(&mut self, ms: u16) {
+ Delay::delay_ms(self, u32::from(ms));
+ }
+}
+
+impl DelayMs<u8> for Delay {
+ #[inline(always)]
+ fn delay_ms(&mut self, ms: u8) {
+ Delay::delay_ms(self, u32::from(ms));
+ }
+}
+
+impl DelayUs<u32> for Delay {
+ #[inline]
+ fn delay_us(&mut self, us: u32) {
+ Delay::delay_us(self, us);
+ }
+}
+
+// This is a workaround to allow `delay_us(42)` construction without specifying a type.
+impl DelayUs<i32> for Delay {
+ #[inline(always)]
+ fn delay_us(&mut self, us: i32) {
+ assert!(us >= 0);
+ Delay::delay_us(self, us as u32);
+ }
+}
+
+impl DelayUs<u16> for Delay {
+ #[inline(always)]
+ fn delay_us(&mut self, us: u16) {
+ Delay::delay_us(self, u32::from(us))
+ }
+}
+
+impl DelayUs<u8> for Delay {
+ #[inline(always)]
+ fn delay_us(&mut self, us: u8) {
+ Delay::delay_us(self, u32::from(us))
+ }
+}
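The 24-bit split in `delay_us` above is worth seeing in isolation: the requested delay is scaled to timer ticks at 64-bit precision, then spent as full wraps of the 24-bit SysTick counter plus a remainder. Below is a host-side sketch of just that arithmetic; the `systick_split` helper is illustrative, not part of the crate, and `SYST` itself is omitted.

```rust
/// Mirror of the tick math in `Delay::delay_us`: convert a microsecond
/// delay into (number of full 24-bit SysTick wraps, remaining ticks).
fn systick_split(us: u32, frequency: u32) -> (u64, u32) {
    // 64-bit intermediate avoids overflow in `us * frequency`.
    let ticks = u64::from(us) * u64::from(frequency) / 1_000_000;
    // SysTick's reload register is only 24 bits wide, so long delays
    // are spent as `full_cycles` wraps of 0xFF_FFFF ticks each.
    let full_cycles = ticks >> 24;
    let remainder = (ticks & 0xff_ffff) as u32;
    (full_cycles, remainder)
}

fn main() {
    // 1 s at 120 MHz = 120_000_000 ticks: 7 full wraps plus a remainder.
    let (cycles, rem) = systick_split(1_000_000, 120_000_000);
    println!("{} full cycles, {} remaining ticks", cycles, rem);
}
```

This also makes it clear why `ticks > 1` (not `> 0`) guards the final countdown: the reload value is `ticks - 1`, and a reload of zero would never wrap.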
diff --git a/src/rust/vendor/cortex-m/src/interrupt.rs b/src/rust/vendor/cortex-m/src/interrupt.rs
new file mode 100644
index 000000000..0fd1284b3
--- /dev/null
+++ b/src/rust/vendor/cortex-m/src/interrupt.rs
@@ -0,0 +1,73 @@
+//! Interrupts
+
+pub use bare_metal::{CriticalSection, Mutex, Nr};
+
+/// Trait for enums of external interrupt numbers.
+///
+/// This trait should be implemented by a peripheral access crate (PAC)
+/// on its enum of available external interrupts for a specific device.
+/// Each variant must convert to a u16 of its interrupt number,
+/// which is its exception number - 16.
+///
+/// # Safety
+///
+/// This trait must only be implemented on enums of device interrupts. Each
+/// enum variant must represent a distinct value (no duplicates are permitted),
+/// and must always return the same value (do not change at runtime).
+///
+/// These requirements ensure safe nesting of critical sections.
+pub unsafe trait InterruptNumber: Copy {
+ /// Return the interrupt number associated with this variant.
+ ///
+ /// See trait documentation for safety requirements.
+ fn number(self) -> u16;
+}
+
+/// Implement InterruptNumber for the old bare_metal::Nr trait.
+/// This implementation is for backwards compatibility only and will be removed in cortex-m 0.8.
+unsafe impl<T: Nr + Copy> InterruptNumber for T {
+ #[inline]
+ fn number(self) -> u16 {
+ self.nr() as u16
+ }
+}
+
+/// Disables all interrupts
+#[inline]
+pub fn disable() {
+ call_asm!(__cpsid());
+}
+
+/// Enables all the interrupts
+///
+/// # Safety
+///
+/// - Do not call this function inside an `interrupt::free` critical section
+#[inline]
+pub unsafe fn enable() {
+ call_asm!(__cpsie());
+}
+
+/// Execute closure `f` in an interrupt-free context.
+///
+/// This is also known as a "critical section".
+#[inline]
+pub fn free<F, R>(f: F) -> R
+where
+ F: FnOnce(&CriticalSection) -> R,
+{
+ let primask = crate::register::primask::read();
+
+ // disable interrupts
+ disable();
+
+ let r = f(unsafe { &CriticalSection::new() });
+
+ // If the interrupts were active before our `disable` call, then re-enable
+ // them. Otherwise, keep them disabled
+ if primask.is_active() {
+ unsafe { enable() }
+ }
+
+ r
+}
diff --git a/src/rust/vendor/cortex-m/src/itm.rs b/src/rust/vendor/cortex-m/src/itm.rs
new file mode 100644
index 000000000..72cb0d9a8
--- /dev/null
+++ b/src/rust/vendor/cortex-m/src/itm.rs
@@ -0,0 +1,158 @@
+//! Instrumentation Trace Macrocell
+//!
+//! **NOTE** This module is only available on ARMv7-M and newer.
+
+use core::{fmt, ptr, slice};
+
+use crate::peripheral::itm::Stim;
+
+// NOTE assumes that `bytes` is 32-bit aligned
+unsafe fn write_words(stim: &mut Stim, bytes: &[u32]) {
+ let mut p = bytes.as_ptr();
+ for _ in 0..bytes.len() {
+ while !stim.is_fifo_ready() {}
+ stim.write_u32(ptr::read(p));
+ p = p.offset(1);
+ }
+}
+
+/// Writes an aligned byte slice to the ITM.
+///
+/// `buffer` must be 4-byte aligned.
+unsafe fn write_aligned_impl(port: &mut Stim, buffer: &[u8]) {
+ let len = buffer.len();
+
+ if len == 0 {
+ return;
+ }
+
+ let split = len & !0b11;
+ #[allow(clippy::cast_ptr_alignment)]
+ write_words(
+ port,
+ slice::from_raw_parts(buffer.as_ptr() as *const u32, split >> 2),
+ );
+
+ // 3 bytes or less left
+ let mut left = len & 0b11;
+ let mut ptr = buffer.as_ptr().add(split);
+
+ // at least 2 bytes left
+ if left > 1 {
+ while !port.is_fifo_ready() {}
+
+ #[allow(clippy::cast_ptr_alignment)]
+ port.write_u16(ptr::read(ptr as *const u16));
+
+ ptr = ptr.offset(2);
+ left -= 2;
+ }
+
+ // final byte
+ if left == 1 {
+ while !port.is_fifo_ready() {}
+ port.write_u8(*ptr);
+ }
+}
+
+struct Port<'p>(&'p mut Stim);
+
+impl<'p> fmt::Write for Port<'p> {
+ #[inline]
+ fn write_str(&mut self, s: &str) -> fmt::Result {
+ write_all(self.0, s.as_bytes());
+ Ok(())
+ }
+}
+
+/// A wrapper type that aligns its contents on a 4-Byte boundary.
+///
+/// ITM transfers are most efficient when the data is 4-Byte-aligned. This type provides an easy
+/// way to accomplish and enforce such an alignment.
+#[repr(align(4))]
+pub struct Aligned<T: ?Sized>(pub T);
+
+/// Writes `buffer` to an ITM port.
+#[allow(clippy::missing_inline_in_public_items)]
+pub fn write_all(port: &mut Stim, buffer: &[u8]) {
+ unsafe {
+ let mut len = buffer.len();
+ let mut ptr = buffer.as_ptr();
+
+ if len == 0 {
+ return;
+ }
+
+ // 0x01 OR 0x03
+ if ptr as usize % 2 == 1 {
+ while !port.is_fifo_ready() {}
+ port.write_u8(*ptr);
+
+ // 0x02 OR 0x04
+ ptr = ptr.offset(1);
+ len -= 1;
+ }
+
+ // 0x02
+ if ptr as usize % 4 == 2 {
+ if len > 1 {
+ // at least 2 bytes
+ while !port.is_fifo_ready() {}
+
+ // We checked the alignment above, so this is safe
+ #[allow(clippy::cast_ptr_alignment)]
+ port.write_u16(ptr::read(ptr as *const u16));
+
+ // 0x04
+ ptr = ptr.offset(2);
+ len -= 2;
+ } else {
+ if len == 1 {
+ // last byte
+ while !port.is_fifo_ready() {}
+ port.write_u8(*ptr);
+ }
+
+ return;
+ }
+ }
+
+ // The remaining data is 4-byte aligned, but might not be a multiple of 4 bytes
+ write_aligned_impl(port, slice::from_raw_parts(ptr, len));
+ }
+}
+
+/// Writes a 4-byte aligned `buffer` to an ITM port.
+///
+/// # Examples
+///
+/// ```no_run
+/// # use cortex_m::{itm::{self, Aligned}, peripheral::ITM};
+/// # let port = unsafe { &mut (*ITM::PTR).stim[0] };
+/// let mut buffer = Aligned([0; 14]);
+///
+/// buffer.0.copy_from_slice(b"Hello, world!\n");
+///
+/// itm::write_aligned(port, &buffer);
+///
+/// // Or equivalently
+/// itm::write_aligned(port, &Aligned(*b"Hello, world!\n"));
+/// ```
+#[allow(clippy::missing_inline_in_public_items)]
+pub fn write_aligned(port: &mut Stim, buffer: &Aligned<[u8]>) {
+ unsafe { write_aligned_impl(port, &buffer.0) }
+}
+
+/// Writes `fmt::Arguments` to the ITM `port`
+#[inline]
+pub fn write_fmt(port: &mut Stim, args: fmt::Arguments) {
+ use core::fmt::Write;
+
+ Port(port).write_fmt(args).ok();
+}
+
+/// Writes a string to the ITM `port`
+#[inline]
+pub fn write_str(port: &mut Stim, string: &str) {
+ write_all(port, string.as_bytes())
+}
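The pointer-alignment dance in `write_all` reduces to simple arithmetic: peel off leading bytes until the address is 4-byte aligned, push full 32-bit words, then finish with the trailing half-word/byte. A host-side sketch of that split follows; the `itm_split` helper is illustrative, and the real code additionally interleaves FIFO polling with the 8/16/32-bit stimulus writes.

```rust
/// Mirror of the splitting logic in `itm::write_all` / `write_aligned_impl`:
/// given a buffer address and length, compute how many leading byte-wise
/// writes are needed to reach 4-byte alignment, how many full 32-bit words
/// follow, and how many trailing bytes remain.
fn itm_split(addr: usize, len: usize) -> (usize, usize, usize) {
    let lead = (4 - (addr % 4)) % 4; // bytes until 4-byte alignment
    let lead = lead.min(len);        // buffer may end before alignment
    let rest = len - lead;
    let words = rest / 4;            // 32-bit stimulus writes
    let tail = rest % 4;             // trailing u16/u8 writes
    (lead, words, tail)
}

fn main() {
    // 14 bytes starting at an odd address: 3 lead bytes, 2 words, 3 tail bytes.
    println!("{:?}", itm_split(0x2000_0001, 14));
}
```

Aligning first is worthwhile because each 32-bit stimulus write moves four bytes per FIFO wait, so an aligned bulk transfer needs roughly a quarter of the polling loops of a byte-wise one.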
diff --git a/src/rust/vendor/cortex-m/src/lib.rs b/src/rust/vendor/cortex-m/src/lib.rs
new file mode 100644
index 000000000..044085ed0
--- /dev/null
+++ b/src/rust/vendor/cortex-m/src/lib.rs
@@ -0,0 +1,112 @@
+//! Low level access to Cortex-M processors
+//!
+//! This crate provides:
+//!
+//! - Access to core peripherals like NVIC, SCB and SysTick.
+//! - Access to core registers like CONTROL, MSP and PSR.
+//! - Interrupt manipulation mechanisms
+//! - Safe wrappers around Cortex-M specific instructions like `bkpt`
+//!
+//! # Optional features
+//!
+//! ## `inline-asm`
+//!
+//! When this feature is enabled the implementation of all the functions inside the `asm` and
+//! `register` modules use inline assembly (`asm!`) instead of external assembly (FFI into separate
+//! assembly files pre-compiled using `arm-none-eabi-gcc`). The advantages of enabling `inline-asm`
+//! are:
+//!
+//! - Reduced overhead. FFI eliminates the possibility of inlining so all operations include a
+//! function call overhead when `inline-asm` is not enabled.
+//!
+//! - Some of the `register` API becomes available only when `inline-asm` is enabled. Check the
+//! API docs for details.
+//!
+//! The disadvantage is that `inline-asm` requires a Rust version of at least 1.59 to use the `asm!()`
+//! macro. In the future 0.8 and above versions of `cortex-m`, this feature will always be enabled.
+//!
+//! ## `critical-section-single-core`
+//!
+//! This feature enables a [`critical-section`](https://github.com/rust-embedded/critical-section)
+//! implementation suitable for single-core targets, based on disabling interrupts globally.
+//!
+//! It is **unsound** to enable it on multi-core targets or for code running in unprivileged mode,
+//! and may cause functional problems in systems where some interrupts must not be disabled
+//! or critical sections are managed as part of an RTOS. In these cases, you should use
+//! a target-specific implementation instead, typically provided by a HAL or RTOS crate.
+//!
+//! ## `cm7-r0p1`
+//!
+//! This feature enables workarounds for errata found on Cortex-M7 chips with revision r0p1. Some
+//! functions in this crate only work correctly on those chips if this Cargo feature is enabled
+//! (the functions are documented accordingly).
+//!
+//! ## `linker-plugin-lto`
+//!
+//! This feature links against prebuilt assembly blobs that are compatible with [Linker-Plugin LTO].
+//! This allows inlining assembly routines into the caller, even without the `inline-asm` feature,
+//! and works on stable Rust (but note the drawbacks below!).
+//!
+//! If you want to use this feature, you need to be aware of a few things:
+//!
+//! - You need to make sure that `-Clinker-plugin-lto` is passed to rustc. Please refer to the
+//! [Linker-Plugin LTO] documentation for details.
+//!
+//! - You have to use a Rust version whose LLVM version is compatible with the toolchain in
+//! `asm-toolchain`.
+//!
+//! - Due to a [Rust bug][rust-lang/rust#75940] in compiler versions **before 1.49**, this option
+//! does not work with optimization levels `s` and `z`.
+//!
+//! [Linker-Plugin LTO]: https://doc.rust-lang.org/stable/rustc/linker-plugin-lto.html
+//! [rust-lang/rust#75940]: https://github.com/rust-lang/rust/issues/75940
+//!
+//! # Minimum Supported Rust Version (MSRV)
+//!
+//! This crate is guaranteed to compile on stable Rust 1.38 and up. It *might*
+//! compile with older versions but that may change in any new patch release.
+
+#![deny(missing_docs)]
+#![no_std]
+#![allow(clippy::identity_op)]
+#![allow(clippy::missing_safety_doc)]
+// Prevent clippy from complaining about empty match expression that are used for cfg gating.
+#![allow(clippy::match_single_binding)]
+// This makes clippy warn about public functions which are not #[inline].
+//
+// Almost all functions in this crate result in trivial or even no assembly.
+// These functions should be #[inline].
+//
+// If you do add a function that's not supposed to be #[inline], you can add
+// #[allow(clippy::missing_inline_in_public_items)] in front of it to add an
+// exception to clippy's rules.
+//
+// This should be done in case of:
+// - A function containing non-trivial logic (such as itm::write_all); or
+// - A generated #[derive(Debug)] function (in which case the attribute needs
+// to be applied to the struct).
+#![deny(clippy::missing_inline_in_public_items)]
+// Don't warn about feature(asm) being stable on Rust >= 1.59.0
+#![allow(stable_features)]
+
+extern crate bare_metal;
+extern crate volatile_register;
+
+#[macro_use]
+mod call_asm;
+#[macro_use]
+mod macros;
+
+pub mod asm;
+#[cfg(armv8m)]
+pub mod cmse;
+mod critical_section;
+pub mod delay;
+pub mod interrupt;
+#[cfg(all(not(armv6m), not(armv8m_base)))]
+pub mod itm;
+pub mod peripheral;
+pub mod prelude;
+pub mod register;
+
+pub use crate::peripheral::Peripherals;
diff --git a/src/rust/vendor/cortex-m/src/macros.rs b/src/rust/vendor/cortex-m/src/macros.rs
new file mode 100644
index 000000000..512c93234
--- /dev/null
+++ b/src/rust/vendor/cortex-m/src/macros.rs
@@ -0,0 +1,114 @@
+/// Macro for sending a formatted string through an ITM channel
+#[macro_export]
+macro_rules! iprint {
+ ($channel:expr, $s:expr) => {
+ $crate::itm::write_str($channel, $s);
+ };
+ ($channel:expr, $($arg:tt)*) => {
+ $crate::itm::write_fmt($channel, format_args!($($arg)*));
+ };
+}
+
+/// Macro for sending a formatted string through an ITM channel, with a newline.
+#[macro_export]
+macro_rules! iprintln {
+ ($channel:expr) => {
+ $crate::itm::write_str($channel, "\n");
+ };
+ ($channel:expr, $fmt:expr) => {
+ $crate::itm::write_str($channel, concat!($fmt, "\n"));
+ };
+ ($channel:expr, $fmt:expr, $($arg:tt)*) => {
+ $crate::itm::write_fmt($channel, format_args!(concat!($fmt, "\n"), $($arg)*));
+ };
+}
+
+/// Macro to create a mutable reference to a statically allocated value
+///
+/// This macro returns a value with type `Option<&'static mut $ty>`. `Some($expr)` will be returned
+/// the first time the macro is executed; further calls will return `None`. To avoid `unwrap`ping a
+/// `None` variant the caller must ensure that the macro is called from a function that's executed
+/// at most once in the whole lifetime of the program.
+///
+/// # Notes
+/// This macro is unsound on multi core systems.
+///
+/// For debuggability, you can set an explicit name for a singleton. This name only shows up in
+/// the debugger and is not referenceable from other code. See example below.
+///
+/// # Example
+///
+/// ``` no_run
+/// use cortex_m::singleton;
+///
+/// fn main() {
+/// // OK if `main` is executed only once
+/// let x: &'static mut bool = singleton!(: bool = false).unwrap();
+///
+/// let y = alias();
+/// // BAD this second call to `alias` will definitely `panic!`
+/// let y_alias = alias();
+/// }
+///
+/// fn alias() -> &'static mut bool {
+/// singleton!(: bool = false).unwrap()
+/// }
+///
+/// fn singleton_with_name() {
+/// // A name only for debugging purposes
+/// singleton!(FOO_BUFFER: [u8; 1024] = [0u8; 1024]);
+/// }
+/// ```
+#[macro_export]
+macro_rules! singleton {
+ ($name:ident: $ty:ty = $expr:expr) => {
+ $crate::interrupt::free(|_| {
+ // this is a tuple of a MaybeUninit and a bool because using an Option here is
+ // problematic: Due to niche-optimization, an Option could end up producing a non-zero
+ // initializer value which would move the entire static from `.bss` into `.data`...
+ static mut $name: (::core::mem::MaybeUninit<$ty>, bool) =
+ (::core::mem::MaybeUninit::uninit(), false);
+
+ #[allow(unsafe_code)]
+ let used = unsafe { $name.1 };
+ if used {
+ None
+ } else {
+ let expr = $expr;
+
+ #[allow(unsafe_code)]
+ unsafe {
+ $name.1 = true;
+ $name.0 = ::core::mem::MaybeUninit::new(expr);
+ Some(&mut *$name.0.as_mut_ptr())
+ }
+ }
+ })
+ };
+ (: $ty:ty = $expr:expr) => {
+ $crate::singleton!(VAR: $ty = $expr)
+ };
+}
+
+/// ``` compile_fail
+/// use cortex_m::singleton;
+///
+/// fn foo() {
+/// // check that the call to `uninitialized` requires unsafe
+/// singleton!(: u8 = std::mem::uninitialized());
+/// }
+/// ```
+#[allow(dead_code)]
+const CFAIL: () = ();
+
+/// ```
+/// #![deny(unsafe_code)]
+/// use cortex_m::singleton;
+///
+/// fn foo() {
+/// // check that calls to `singleton!` don't trip the `unsafe_code` lint
+/// singleton!(: u8 = 0);
+/// }
+/// ```
+#[allow(dead_code)]
+const CPASS: () = ();
diff --git a/src/rust/vendor/cortex-m/src/peripheral/ac.rs b/src/rust/vendor/cortex-m/src/peripheral/ac.rs
new file mode 100644
index 000000000..1ac5be108
--- /dev/null
+++ b/src/rust/vendor/cortex-m/src/peripheral/ac.rs
@@ -0,0 +1,93 @@
+//! Cortex-M7 TCM and Cache access control.
+
+use volatile_register::RW;
+
+/// Register block
+#[repr(C)]
+pub struct RegisterBlock {
+ /// Instruction Tightly-Coupled Memory Control Register
+ pub itcmcr: RW<u32>,
+ /// Data Tightly-Coupled Memory Control Register
+ pub dtcmcr: RW<u32>,
+ /// AHBP Control Register
+ pub ahbpcr: RW<u32>,
+ /// L1 Cache Control Register
+ pub cacr: RW<u32>,
+ /// AHB Slave Control Register
+ pub ahbscr: RW<u32>,
+ reserved0: u32,
+ /// Auxiliary Bus Fault Status Register
+ pub abfsr: RW<u32>,
+}
+
+/// ITCMCR and DTCMCR TCM enable bit.
+pub const TCM_EN: u32 = 1;
+
+/// ITCMCR and DTCMCR TCM read-modify-write bit.
+pub const TCM_RMW: u32 = 2;
+
+/// ITCMCR and DTCMCR TCM retry phase enable bit.
+pub const TCM_RETEN: u32 = 4;
+
+/// ITCMCR and DTCMCR TCM size mask.
+pub const TCM_SZ_MASK: u32 = 0x78;
+
+/// ITCMCR and DTCMCR TCM shift.
+pub const TCM_SZ_SHIFT: usize = 3;
+
+/// AHBPCR AHBP enable bit.
+pub const AHBPCR_EN: u32 = 1;
+
+/// AHBPCR AHBP size mask.
+pub const AHBPCR_SZ_MASK: u32 = 0x0e;
+
+/// AHBPCR AHBP size shift.
+pub const AHBPCR_SZ_SHIFT: usize = 1;
+
+/// CACR Shared cacheable-is-WT for data cache.
+pub const CACR_SIWT: u32 = 1;
+
+/// CACR ECC in the instruction and data cache (disable).
+pub const CACR_ECCDIS: u32 = 2;
+
+/// CACR Force Write-Through in the data cache.
+pub const CACR_FORCEWT: u32 = 4;
+
+/// AHBSCR AHBS prioritization control mask.
+pub const AHBSCR_CTL_MASK: u32 = 0x03;
+
+/// AHBSCR AHBS prioritization control shift.
+pub const AHBSCR_CTL_SHIFT: usize = 0;
+
+/// AHBSCR Threshold execution priority for AHBS traffic demotion, mask.
+pub const AHBSCR_TPRI_MASK: u32 = 0x7fc;
+
+/// AHBSCR Threshold execution priority for AHBS traffic demotion, shift.
+pub const AHBSCR_TPRI_SHIFT: usize = 2;
+
+/// AHBSCR Fairness counter initialization value, mask.
+pub const AHBSCR_INITCOUNT_MASK: u32 = 0xf800;
+
+/// AHBSCR Fairness counter initialization value, shift.
+pub const AHBSCR_INITCOUNT_SHIFT: usize = 11;
+
+/// ABFSR Async fault on ITCM interface.
+pub const ABFSR_ITCM: u32 = 1;
+
+/// ABFSR Async fault on DTCM interface.
+pub const ABFSR_DTCM: u32 = 2;
+
+/// ABFSR Async fault on AHBP interface.
+pub const ABFSR_AHBP: u32 = 4;
+
+/// ABFSR Async fault on AXIM interface.
+pub const ABFSR_AXIM: u32 = 8;
+
+/// ABFSR Async fault on EPPB interface.
+pub const ABFSR_EPPB: u32 = 16;
+
+/// ABFSR Indicates the type of fault on the AXIM interface, mask.
+pub const ABFSR_AXIMTYPE_MASK: u32 = 0x300;
+
+/// ABFSR Indicates the type of fault on the AXIM interface, shift.
+pub const ABFSR_AXIMTYPE_SHIFT: usize = 8;
diff --git a/src/rust/vendor/cortex-m/src/peripheral/cbp.rs b/src/rust/vendor/cortex-m/src/peripheral/cbp.rs
new file mode 100644
index 000000000..5aee5444b
--- /dev/null
+++ b/src/rust/vendor/cortex-m/src/peripheral/cbp.rs
@@ -0,0 +1,138 @@
+//! Cache and branch predictor maintenance operations
+//!
+//! *NOTE* Not available on Armv6-M.
+
+use volatile_register::WO;
+
+use crate::peripheral::CBP;
+
+/// Register block
+#[repr(C)]
+pub struct RegisterBlock {
+ /// I-cache invalidate all to PoU
+ pub iciallu: WO<u32>,
+ reserved0: u32,
+ /// I-cache invalidate by MVA to PoU
+ pub icimvau: WO<u32>,
+ /// D-cache invalidate by MVA to PoC
+ pub dcimvac: WO<u32>,
+ /// D-cache invalidate by set-way
+ pub dcisw: WO<u32>,
+ /// D-cache clean by MVA to PoU
+ pub dccmvau: WO<u32>,
+ /// D-cache clean by MVA to PoC
+ pub dccmvac: WO<u32>,
+ /// D-cache clean by set-way
+ pub dccsw: WO<u32>,
+ /// D-cache clean and invalidate by MVA to PoC
+ pub dccimvac: WO<u32>,
+ /// D-cache clean and invalidate by set-way
+ pub dccisw: WO<u32>,
+ /// Branch predictor invalidate all
+ pub bpiall: WO<u32>,
+}
+
+const CBP_SW_WAY_POS: u32 = 30;
+const CBP_SW_WAY_MASK: u32 = 0x3 << CBP_SW_WAY_POS;
+const CBP_SW_SET_POS: u32 = 5;
+const CBP_SW_SET_MASK: u32 = 0x1FF << CBP_SW_SET_POS;
+
+impl CBP {
+ /// I-cache invalidate all to PoU
+ #[inline(always)]
+ pub fn iciallu(&mut self) {
+ unsafe { self.iciallu.write(0) };
+ }
+
+ /// I-cache invalidate by MVA to PoU
+ #[inline(always)]
+ pub fn icimvau(&mut self, mva: u32) {
+ unsafe { self.icimvau.write(mva) };
+ }
+
+ /// D-cache invalidate by MVA to PoC
+ #[inline(always)]
+ pub unsafe fn dcimvac(&mut self, mva: u32) {
+ self.dcimvac.write(mva);
+ }
+
+ /// D-cache invalidate by set-way
+ ///
+ /// `set` is masked to be between 0 and 511, and `way` between 0 and 3.
+ #[inline(always)]
+ pub unsafe fn dcisw(&mut self, set: u16, way: u16) {
+ // The ARMv7-M Architecture Reference Manual, as of Revision E.b, says these set/way
+ // operations have a register data format which depends on the implementation's
+ // associativity and number of sets. Specifically the 'way' and 'set' fields have
+ // offsets 32-log2(ASSOCIATIVITY) and log2(LINELEN) respectively.
+ //
+ // However, in Cortex-M7 devices, these offsets are fixed at 30 and 5, as per the Cortex-M7
+ // Generic User Guide section 4.8.3. Since no other ARMv7-M implementations except the
+ // Cortex-M7 have a DCACHE or ICACHE at all, it seems safe to do the same thing as the
+ // CMSIS-Core implementation and use fixed values.
+ self.dcisw.write(
+ ((u32::from(way) & (CBP_SW_WAY_MASK >> CBP_SW_WAY_POS)) << CBP_SW_WAY_POS)
+ | ((u32::from(set) & (CBP_SW_SET_MASK >> CBP_SW_SET_POS)) << CBP_SW_SET_POS),
+ );
+ }
+
+ /// D-cache clean by MVA to PoU
+ #[inline(always)]
+ pub fn dccmvau(&mut self, mva: u32) {
+ unsafe {
+ self.dccmvau.write(mva);
+ }
+ }
+
+ /// D-cache clean by MVA to PoC
+ #[inline(always)]
+ pub fn dccmvac(&mut self, mva: u32) {
+ unsafe {
+ self.dccmvac.write(mva);
+ }
+ }
+
+ /// D-cache clean by set-way
+ ///
+ /// `set` is masked to be between 0 and 511, and `way` between 0 and 3.
+ #[inline(always)]
+ pub fn dccsw(&mut self, set: u16, way: u16) {
+ // See comment for dcisw() about the format here
+ unsafe {
+ self.dccsw.write(
+ ((u32::from(way) & (CBP_SW_WAY_MASK >> CBP_SW_WAY_POS)) << CBP_SW_WAY_POS)
+ | ((u32::from(set) & (CBP_SW_SET_MASK >> CBP_SW_SET_POS)) << CBP_SW_SET_POS),
+ );
+ }
+ }
+
+ /// D-cache clean and invalidate by MVA to PoC
+ #[inline(always)]
+ pub fn dccimvac(&mut self, mva: u32) {
+ unsafe {
+ self.dccimvac.write(mva);
+ }
+ }
+
+ /// D-cache clean and invalidate by set-way
+ ///
+ /// `set` is masked to be between 0 and 511, and `way` between 0 and 3.
+ #[inline(always)]
+ pub fn dccisw(&mut self, set: u16, way: u16) {
+ // See comment for dcisw() about the format here
+ unsafe {
+ self.dccisw.write(
+ ((u32::from(way) & (CBP_SW_WAY_MASK >> CBP_SW_WAY_POS)) << CBP_SW_WAY_POS)
+ | ((u32::from(set) & (CBP_SW_SET_MASK >> CBP_SW_SET_POS)) << CBP_SW_SET_POS),
+ );
+ }
+ }
+
+ /// Branch predictor invalidate all
+ #[inline(always)]
+ pub fn bpiall(&mut self) {
+ unsafe {
+ self.bpiall.write(0);
+ }
+ }
+}
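The set/way encoding shared by `dcisw`, `dccsw` and `dccisw` packs the way index into bits [31:30] and the set index into bits [13:5], masking each value to its field width (2 bits for the way, 9 bits for the set on Cortex-M7). A host-side sketch of just the encoding; the `set_way` helper is illustrative, not part of the crate:

```rust
// Fixed Cortex-M7 field positions, as in the CBP register code above.
const WAY_POS: u32 = 30;
const WAY_MASK: u32 = 0x3 << WAY_POS;
const SET_POS: u32 = 5;
const SET_MASK: u32 = 0x1FF << SET_POS;

/// Pack a (set, way) pair into the DCISW/DCCSW/DCCISW register format,
/// masking each index to its field width.
fn set_way(set: u16, way: u16) -> u32 {
    ((u32::from(way) & (WAY_MASK >> WAY_POS)) << WAY_POS)
        | ((u32::from(set) & (SET_MASK >> SET_POS)) << SET_POS)
}

fn main() {
    // Way 3, set 511: both fields saturated.
    println!("{:#010x}", set_way(511, 3));
}
```

Masking with `MASK >> POS` before shifting (rather than shifting first and masking after) guarantees out-of-range indices cannot spill into neighboring fields.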
diff --git a/src/rust/vendor/cortex-m/src/peripheral/cpuid.rs b/src/rust/vendor/cortex-m/src/peripheral/cpuid.rs
new file mode 100644
index 000000000..db85566ea
--- /dev/null
+++ b/src/rust/vendor/cortex-m/src/peripheral/cpuid.rs
@@ -0,0 +1,140 @@
+//! CPUID
+
+use volatile_register::RO;
+#[cfg(not(armv6m))]
+use volatile_register::RW;
+
+#[cfg(not(armv6m))]
+use crate::peripheral::CPUID;
+
+/// Register block
+#[repr(C)]
+pub struct RegisterBlock {
+ /// CPUID base
+ pub base: RO<u32>,
+
+ _reserved0: [u32; 15],
+
+ /// Processor Feature (not present on Cortex-M0 variants)
+ #[cfg(not(armv6m))]
+ pub pfr: [RO<u32>; 2],
+ #[cfg(armv6m)]
+ _reserved1: [u32; 2],
+
+ /// Debug Feature (not present on Cortex-M0 variants)
+ #[cfg(not(armv6m))]
+ pub dfr: RO<u32>,
+ #[cfg(armv6m)]
+ _reserved2: u32,
+
+ /// Auxiliary Feature (not present on Cortex-M0 variants)
+ #[cfg(not(armv6m))]
+ pub afr: RO<u32>,
+ #[cfg(armv6m)]
+ _reserved3: u32,
+
+ /// Memory Model Feature (not present on Cortex-M0 variants)
+ #[cfg(not(armv6m))]
+ pub mmfr: [RO<u32>; 4],
+ #[cfg(armv6m)]
+ _reserved4: [u32; 4],
+
+ /// Instruction Set Attribute (not present on Cortex-M0 variants)
+ #[cfg(not(armv6m))]
+ pub isar: [RO<u32>; 5],
+ #[cfg(armv6m)]
+ _reserved5: [u32; 5],
+
+ _reserved6: u32,
+
+ /// Cache Level ID (only present on Cortex-M7)
+ #[cfg(not(armv6m))]
+ pub clidr: RO<u32>,
+
+ /// Cache Type (only present on Cortex-M7)
+ #[cfg(not(armv6m))]
+ pub ctr: RO<u32>,
+
+ /// Cache Size ID (only present on Cortex-M7)
+ #[cfg(not(armv6m))]
+ pub ccsidr: RO<u32>,
+
+ /// Cache Size Selection (only present on Cortex-M7)
+ #[cfg(not(armv6m))]
+ pub csselr: RW<u32>,
+}
+
+/// Type of cache to select on CSSELR writes.
+#[cfg(not(armv6m))]
+#[derive(Copy, Clone, Debug, PartialEq, Eq)]
+pub enum CsselrCacheType {
+ /// Select DCache or unified cache
+ DataOrUnified = 0,
+ /// Select ICache
+ Instruction = 1,
+}
+
+#[cfg(not(armv6m))]
+impl CPUID {
+ /// Selects the current CCSIDR
+ ///
+ /// * `level`: the required cache level minus 1, e.g. 0 for L1, 1 for L2
+ /// * `ind`: select instruction cache or data/unified cache
+ ///
+ /// `level` is masked to be between 0 and 7.
+ #[inline]
+ pub fn select_cache(&mut self, level: u8, ind: CsselrCacheType) {
+ const CSSELR_IND_POS: u32 = 0;
+ const CSSELR_IND_MASK: u32 = 1 << CSSELR_IND_POS;
+ const CSSELR_LEVEL_POS: u32 = 1;
+ const CSSELR_LEVEL_MASK: u32 = 0x7 << CSSELR_LEVEL_POS;
+
+ unsafe {
+ self.csselr.write(
+ ((u32::from(level) << CSSELR_LEVEL_POS) & CSSELR_LEVEL_MASK)
+ | (((ind as u32) << CSSELR_IND_POS) & CSSELR_IND_MASK),
+ )
+ }
+ }
+
+ /// Returns the number of sets and ways in the selected cache
+ #[inline]
+ pub fn cache_num_sets_ways(&mut self, level: u8, ind: CsselrCacheType) -> (u16, u16) {
+ const CCSIDR_NUMSETS_POS: u32 = 13;
+ const CCSIDR_NUMSETS_MASK: u32 = 0x7FFF << CCSIDR_NUMSETS_POS;
+ const CCSIDR_ASSOCIATIVITY_POS: u32 = 3;
+ const CCSIDR_ASSOCIATIVITY_MASK: u32 = 0x3FF << CCSIDR_ASSOCIATIVITY_POS;
+
+ self.select_cache(level, ind);
+ crate::asm::dsb();
+ let ccsidr = self.ccsidr.read();
+ (
+ (1 + ((ccsidr & CCSIDR_NUMSETS_MASK) >> CCSIDR_NUMSETS_POS)) as u16,
+ (1 + ((ccsidr & CCSIDR_ASSOCIATIVITY_MASK) >> CCSIDR_ASSOCIATIVITY_POS)) as u16,
+ )
+ }
+
+ /// Returns log2 of the number of words in the smallest cache line of all the data cache and
+ /// unified caches that are controlled by the processor.
+ ///
+ /// This is the `DminLine` field of the CTR register.
+ #[inline(always)]
+ pub fn cache_dminline() -> u32 {
+ const CTR_DMINLINE_POS: u32 = 16;
+ const CTR_DMINLINE_MASK: u32 = 0xF << CTR_DMINLINE_POS;
+ let ctr = unsafe { (*Self::PTR).ctr.read() };
+ (ctr & CTR_DMINLINE_MASK) >> CTR_DMINLINE_POS
+ }
+
+ /// Returns log2 of the number of words in the smallest cache line of all the instruction
+ /// caches that are controlled by the processor.
+ ///
+ /// This is the `IminLine` field of the CTR register.
+ #[inline(always)]
+ pub fn cache_iminline() -> u32 {
+ const CTR_IMINLINE_POS: u32 = 0;
+ const CTR_IMINLINE_MASK: u32 = 0xF << CTR_IMINLINE_POS;
+ let ctr = unsafe { (*Self::PTR).ctr.read() };
+ (ctr & CTR_IMINLINE_MASK) >> CTR_IMINLINE_POS
+ }
+}
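`cache_num_sets_ways` decodes the selected CCSIDR by extracting the NumSets and Associativity fields, each of which stores one less than the actual count, hence the `1 +` in the code above. A host-side sketch of the decode on a raw register value; the `decode_ccsidr` helper is illustrative, not part of the crate:

```rust
/// Mirror of `CPUID::cache_num_sets_ways`: decode (sets, ways) from a raw
/// CCSIDR value. Both fields are stored as (count - 1).
fn decode_ccsidr(ccsidr: u32) -> (u16, u16) {
    const NUMSETS_POS: u32 = 13;
    const NUMSETS_MASK: u32 = 0x7FFF << NUMSETS_POS;
    const ASSOC_POS: u32 = 3;
    const ASSOC_MASK: u32 = 0x3FF << ASSOC_POS;
    (
        (1 + ((ccsidr & NUMSETS_MASK) >> NUMSETS_POS)) as u16,
        (1 + ((ccsidr & ASSOC_MASK) >> ASSOC_POS)) as u16,
    )
}

fn main() {
    // e.g. a 4 KiB, 4-way cache with 32-byte lines: 32 sets, 4 ways,
    // so the fields hold 31 and 3 respectively.
    let raw = (31u32 << 13) | (3u32 << 3);
    println!("{:?}", decode_ccsidr(raw));
}
```

Note the `dsb()` between `select_cache` and the CCSIDR read in the real code: the barrier ensures the CSSELR write has taken effect before the dependent read.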
diff --git a/src/rust/vendor/cortex-m/src/peripheral/dcb.rs b/src/rust/vendor/cortex-m/src/peripheral/dcb.rs
new file mode 100644
index 000000000..4a63c8895
--- /dev/null
+++ b/src/rust/vendor/cortex-m/src/peripheral/dcb.rs
@@ -0,0 +1,60 @@
+//! Debug Control Block
+
+use volatile_register::{RW, WO};
+
+use crate::peripheral::DCB;
+use core::ptr;
+
+const DCB_DEMCR_TRCENA: u32 = 1 << 24;
+
+/// Register block
+#[repr(C)]
+pub struct RegisterBlock {
+ /// Debug Halting Control and Status
+ pub dhcsr: RW<u32>,
+ /// Debug Core Register Selector
+ pub dcrsr: WO<u32>,
+ /// Debug Core Register Data
+ pub dcrdr: RW<u32>,
+ /// Debug Exception and Monitor Control
+ pub demcr: RW<u32>,
+}
+
+impl DCB {
+ /// Enables TRACE. This is, for example, required by the
+ /// `peripheral::DWT` cycle counter to work properly.
+ /// Per ST's documentation, this flag is not reset on
+ /// soft-reset, only on power reset.
+ #[inline]
+ pub fn enable_trace(&mut self) {
+ // set bit 24 / TRCENA
+ unsafe {
+ self.demcr.modify(|w| w | DCB_DEMCR_TRCENA);
+ }
+ }
+
+ /// Disables TRACE. See `DCB::enable_trace()` for more details
+ #[inline]
+ pub fn disable_trace(&mut self) {
+ // unset bit 24 / TRCENA
+ unsafe {
+ self.demcr.modify(|w| w & !DCB_DEMCR_TRCENA);
+ }
+ }
+
+ /// Is there a debugger attached? (see note)
+ ///
+ /// Note: This function is [reported not to
+ /// work](http://web.archive.org/web/20180821191012/https://community.nxp.com/thread/424925#comment-782843)
+ /// on Cortex-M0 devices. Per the ARM v6-M Architecture Reference Manual, "Access to the DHCSR
+ /// from software running on the processor is IMPLEMENTATION DEFINED". Indeed, from the
+ /// [Cortex-M0+ r0p1 Technical Reference Manual](http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc.ddi0484c/BABJHEIG.html), "Note Software cannot access the debug registers."
+ #[inline]
+ pub fn is_debugger_attached() -> bool {
+ unsafe {
+ // do an 8-bit read of the 32-bit DHCSR register, and get the LSB
+ let value = ptr::read_volatile(Self::PTR as *const u8);
+ value & 0x1 == 1
+ }
+ }
+}
diff --git a/src/rust/vendor/cortex-m/src/peripheral/dwt.rs b/src/rust/vendor/cortex-m/src/peripheral/dwt.rs
new file mode 100644
index 000000000..58d91fd3b
--- /dev/null
+++ b/src/rust/vendor/cortex-m/src/peripheral/dwt.rs
@@ -0,0 +1,268 @@
+//! Data Watchpoint and Trace unit
+
+#[cfg(not(armv6m))]
+use volatile_register::WO;
+use volatile_register::{RO, RW};
+
+use crate::peripheral::DWT;
+
+/// Register block
+#[repr(C)]
+pub struct RegisterBlock {
+ /// Control
+ pub ctrl: RW<u32>,
+ /// Cycle Count
+ #[cfg(not(armv6m))]
+ pub cyccnt: RW<u32>,
+ /// CPI Count
+ #[cfg(not(armv6m))]
+ pub cpicnt: RW<u32>,
+ /// Exception Overhead Count
+ #[cfg(not(armv6m))]
+ pub exccnt: RW<u32>,
+ /// Sleep Count
+ #[cfg(not(armv6m))]
+ pub sleepcnt: RW<u32>,
+ /// LSU Count
+ #[cfg(not(armv6m))]
+ pub lsucnt: RW<u32>,
+ /// Folded-instruction Count
+ #[cfg(not(armv6m))]
+ pub foldcnt: RW<u32>,
+ /// Cortex-M0(+) does not have these parts
+ #[cfg(armv6m)]
+ reserved: [u32; 6],
+ /// Program Counter Sample
+ pub pcsr: RO<u32>,
+ /// Comparators
+ #[cfg(armv6m)]
+ pub c: [Comparator; 2],
+ #[cfg(not(armv6m))]
+ /// Comparators
+ pub c: [Comparator; 16],
+ #[cfg(not(armv6m))]
+ reserved: [u32; 932],
+ /// Lock Access
+ #[cfg(not(armv6m))]
+ pub lar: WO<u32>,
+ /// Lock Status
+ #[cfg(not(armv6m))]
+ pub lsr: RO<u32>,
+}
+
+/// Comparator
+#[repr(C)]
+pub struct Comparator {
+ /// Comparator
+ pub comp: RW<u32>,
+ /// Comparator Mask
+ pub mask: RW<u32>,
+ /// Comparator Function
+ pub function: RW<u32>,
+ reserved: u32,
+}
+
+// DWT CTRL register fields
+const NUMCOMP_OFFSET: u32 = 28;
+const NOTRCPKT: u32 = 1 << 27;
+const NOEXTTRIG: u32 = 1 << 26;
+const NOCYCCNT: u32 = 1 << 25;
+const NOPRFCNT: u32 = 1 << 24;
+const CYCCNTENA: u32 = 1 << 0;
+
+impl DWT {
+ /// Number of comparators implemented
+ ///
+ /// A value of zero indicates no comparator support.
+ #[inline]
+ pub fn num_comp() -> u8 {
+ // NOTE(unsafe) atomic read with no side effects
+ unsafe { ((*Self::PTR).ctrl.read() >> NUMCOMP_OFFSET) as u8 }
+ }
+
+ /// Returns `true` if the implementation supports sampling and exception tracing
+ #[cfg(not(armv6m))]
+ #[inline]
+ pub fn has_exception_trace() -> bool {
+ // NOTE(unsafe) atomic read with no side effects
+ unsafe { (*Self::PTR).ctrl.read() & NOTRCPKT == 0 }
+ }
+
+ /// Returns `true` if the implementation includes external match signals
+ #[cfg(not(armv6m))]
+ #[inline]
+ pub fn has_external_match() -> bool {
+ // NOTE(unsafe) atomic read with no side effects
+ unsafe { (*Self::PTR).ctrl.read() & NOEXTTRIG == 0 }
+ }
+
+ /// Returns `true` if the implementation supports a cycle counter
+ #[cfg(not(armv6m))]
+ #[inline]
+ pub fn has_cycle_counter() -> bool {
+ // NOTE(unsafe) atomic read with no side effects
+ unsafe { (*Self::PTR).ctrl.read() & NOCYCCNT == 0 }
+ }
+
+ /// Returns `true` if the implementation supports the profiling counters
+ #[cfg(not(armv6m))]
+ #[inline]
+ pub fn has_profiling_counter() -> bool {
+ // NOTE(unsafe) atomic read with no side effects
+ unsafe { (*Self::PTR).ctrl.read() & NOPRFCNT == 0 }
+ }
+
+ /// Enables the cycle counter
+ ///
+ /// The global trace enable ([`DCB::enable_trace`]) should be set before
+ /// enabling the cycle counter, the processor may ignore writes to the
+ /// cycle counter enable if the global trace is disabled
+ /// (implementation defined behaviour).
+ ///
+ /// [`DCB::enable_trace`]: crate::peripheral::DCB::enable_trace
+ #[cfg(not(armv6m))]
+ #[inline]
+ pub fn enable_cycle_counter(&mut self) {
+ unsafe { self.ctrl.modify(|r| r | CYCCNTENA) }
+ }
+
+ /// Disables the cycle counter
+ #[cfg(not(armv6m))]
+ #[inline]
+ pub fn disable_cycle_counter(&mut self) {
+ unsafe { self.ctrl.modify(|r| r & !CYCCNTENA) }
+ }
+
+ /// Returns `true` if the cycle counter is enabled
+ #[cfg(not(armv6m))]
+ #[inline]
+ pub fn cycle_counter_enabled() -> bool {
+ // NOTE(unsafe) atomic read with no side effects
+ unsafe { (*Self::PTR).ctrl.read() & CYCCNTENA != 0 }
+ }
+
+ /// Returns the current clock cycle count
+ #[cfg(not(armv6m))]
+ #[inline]
+ #[deprecated(
+ since = "0.7.4",
+ note = "Use `cycle_count` which follows the C-GETTER convention"
+ )]
+ pub fn get_cycle_count() -> u32 {
+ Self::cycle_count()
+ }
+
+ /// Returns the current clock cycle count
+ #[cfg(not(armv6m))]
+ #[inline]
+ pub fn cycle_count() -> u32 {
+ // NOTE(unsafe) atomic read with no side effects
+ unsafe { (*Self::PTR).cyccnt.read() }
+ }
+
+ /// Set the cycle count
+ #[cfg(not(armv6m))]
+ #[inline]
+ pub fn set_cycle_count(&mut self, count: u32) {
+ unsafe { self.cyccnt.write(count) }
+ }
+
+ /// Removes the software lock on the DWT
+ ///
+ /// Some devices, like the STM32F7, software lock the DWT after a power cycle.
+ #[cfg(not(armv6m))]
+ #[inline]
+ pub fn unlock() {
+ // NOTE(unsafe) atomic write to a stateless, write-only register
+ unsafe { (*Self::PTR).lar.write(0xC5AC_CE55) }
+ }
+
+ /// Get the CPI count
+ ///
+ /// Counts additional cycles required to execute multi-cycle instructions,
+ /// except those recorded by [`lsu_count`], and counts any instruction fetch
+ /// stalls.
+ ///
+ /// [`lsu_count`]: DWT::lsu_count
+ #[cfg(not(armv6m))]
+ #[inline]
+ pub fn cpi_count() -> u8 {
+ // NOTE(unsafe) atomic read with no side effects
+ unsafe { (*Self::PTR).cpicnt.read() as u8 }
+ }
+
+ /// Set the CPI count
+ #[cfg(not(armv6m))]
+ #[inline]
+ pub fn set_cpi_count(&mut self, count: u8) {
+ unsafe { self.cpicnt.write(count as u32) }
+ }
+
+ /// Get the total cycles spent in exception processing
+ #[cfg(not(armv6m))]
+ #[inline]
+ pub fn exception_count() -> u8 {
+ // NOTE(unsafe) atomic read with no side effects
+ unsafe { (*Self::PTR).exccnt.read() as u8 }
+ }
+
+ /// Set the exception count
+ #[cfg(not(armv6m))]
+ #[inline]
+ pub fn set_exception_count(&mut self, count: u8) {
+ unsafe { self.exccnt.write(count as u32) }
+ }
+
+ /// Get the total number of cycles that the processor is sleeping
+ ///
+ /// ARM recommends that this counter counts all cycles when the processor is sleeping,
+ /// regardless of whether a WFI or WFE instruction, or the sleep-on-exit functionality,
+ /// caused the entry to sleep mode.
+ /// However, all sleep features are implementation defined and therefore when
+ /// this counter counts is implementation defined.
+ #[cfg(not(armv6m))]
+ #[inline]
+ pub fn sleep_count() -> u8 {
+ // NOTE(unsafe) atomic read with no side effects
+ unsafe { (*Self::PTR).sleepcnt.read() as u8 }
+ }
+
+ /// Set the sleep count
+ #[cfg(not(armv6m))]
+ #[inline]
+ pub fn set_sleep_count(&mut self, count: u8) {
+ unsafe { self.sleepcnt.write(count as u32) }
+ }
+
+ /// Get the additional cycles required to execute all load or store instructions
+ #[cfg(not(armv6m))]
+ #[inline]
+ pub fn lsu_count() -> u8 {
+ // NOTE(unsafe) atomic read with no side effects
+ unsafe { (*Self::PTR).lsucnt.read() as u8 }
+ }
+
+ /// Set the lsu count
+ #[cfg(not(armv6m))]
+ #[inline]
+ pub fn set_lsu_count(&mut self, count: u8) {
+ unsafe { self.lsucnt.write(count as u32) }
+ }
+
+ /// Get the folded instruction count
+ ///
+ /// Increments on each instruction that takes 0 cycles.
+ #[cfg(not(armv6m))]
+ #[inline]
+ pub fn fold_count() -> u8 {
+ // NOTE(unsafe) atomic read with no side effects
+ unsafe { (*Self::PTR).foldcnt.read() as u8 }
+ }
+
+ /// Set the folded instruction count
+ #[cfg(not(armv6m))]
+ #[inline]
+ pub fn set_fold_count(&mut self, count: u8) {
+ unsafe { self.foldcnt.write(count as u32) }
+ }
+}
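The CTRL decoding and the cycle counter lend themselves to a quick host-side check. This sketch mirrors `num_comp`/`has_cycle_counter` on a raw CTRL word, and shows the wrap-safe delta you would take between two `cycle_count` reads on target (the sample values are made up for illustration):

```rust
const NUMCOMP_OFFSET: u32 = 28;
const NOCYCCNT: u32 = 1 << 25;

// Mirror of DWT::num_comp applied to a plain value.
fn num_comp(ctrl: u32) -> u8 {
    (ctrl >> NUMCOMP_OFFSET) as u8
}

// Mirror of DWT::has_cycle_counter: NOCYCCNT clear means a counter exists.
fn has_cycle_counter(ctrl: u32) -> bool {
    ctrl & NOCYCCNT == 0
}

// CYCCNT is a free-running 32-bit counter, so elapsed cycles between two
// reads must use wrapping subtraction.
fn cycle_delta(start: u32, end: u32) -> u32 {
    end.wrapping_sub(start)
}

fn main() {
    let ctrl = 4 << NUMCOMP_OFFSET; // 4 comparators, NOCYCCNT clear
    assert_eq!(num_comp(ctrl), 4);
    assert!(has_cycle_counter(ctrl));
    assert_eq!(cycle_delta(u32::MAX - 10, 5), 16); // survives counter wrap
}
```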
diff --git a/src/rust/vendor/cortex-m/src/peripheral/fpb.rs b/src/rust/vendor/cortex-m/src/peripheral/fpb.rs
new file mode 100644
index 000000000..b86b8b2bd
--- /dev/null
+++ b/src/rust/vendor/cortex-m/src/peripheral/fpb.rs
@@ -0,0 +1,21 @@
+//! Flash Patch and Breakpoint unit
+//!
+//! *NOTE* Not available on Armv6-M.
+
+use volatile_register::{RO, RW, WO};
+
+/// Register block
+#[repr(C)]
+pub struct RegisterBlock {
+ /// Control
+ pub ctrl: RW<u32>,
+ /// Remap
+ pub remap: RW<u32>,
+ /// Comparator
+ pub comp: [RW<u32>; 127],
+ reserved: [u32; 875],
+ /// Lock Access
+ pub lar: WO<u32>,
+ /// Lock Status
+ pub lsr: RO<u32>,
+}
diff --git a/src/rust/vendor/cortex-m/src/peripheral/fpu.rs b/src/rust/vendor/cortex-m/src/peripheral/fpu.rs
new file mode 100644
index 000000000..9a047d86c
--- /dev/null
+++ b/src/rust/vendor/cortex-m/src/peripheral/fpu.rs
@@ -0,0 +1,19 @@
+//! Floating Point Unit
+//!
+//! *NOTE* Available only on targets with a Floating Point Unit (FPU) extension.
+
+use volatile_register::{RO, RW};
+
+/// Register block
+#[repr(C)]
+pub struct RegisterBlock {
+ reserved: u32,
+ /// Floating Point Context Control
+ pub fpccr: RW<u32>,
+ /// Floating Point Context Address
+ pub fpcar: RW<u32>,
+ /// Floating Point Default Status Control
+ pub fpdscr: RW<u32>,
+ /// Media and FP Feature
+ pub mvfr: [RO<u32>; 3],
+}
diff --git a/src/rust/vendor/cortex-m/src/peripheral/icb.rs b/src/rust/vendor/cortex-m/src/peripheral/icb.rs
new file mode 100644
index 000000000..e1de33b38
--- /dev/null
+++ b/src/rust/vendor/cortex-m/src/peripheral/icb.rs
@@ -0,0 +1,32 @@
+//! Implementation Control Block
+
+#[cfg(any(armv7m, armv8m, native))]
+use volatile_register::RO;
+use volatile_register::RW;
+
+/// Register block
+#[repr(C)]
+pub struct RegisterBlock {
+ /// Interrupt Controller Type Register
+ ///
+ /// The bottom four bits of this register give the number of implemented
+ /// interrupt lines, divided by 32. So a value of `0b0010` indicates 64
+ /// interrupts.
+ #[cfg(any(armv7m, armv8m, native))]
+ pub ictr: RO<u32>,
+
+ /// The ICTR is not defined in the ARMv6-M Architecture Reference manual, so
+ /// we replace it with this.
+ #[cfg(not(any(armv7m, armv8m, native)))]
+ _reserved: u32,
+
+ /// Auxiliary Control Register
+ ///
+ /// This register is entirely implementation defined -- the standard gives
+ /// it an address, but does not define its role or contents.
+ pub actlr: RW<u32>,
+
+ /// Coprocessor Power Control Register
+ #[cfg(armv8m)]
+ pub cppwr: RW<u32>,
+}
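The ICTR encoding described in the doc comment above is easy to check on the host: the bottom four bits give the number of implemented interrupt lines divided by 32. A sketch following that doc comment (the register itself is read-only on target):

```rust
// Interrupt line count from an ICTR value, per the doc comment above:
// the bottom four bits hold (number of interrupt lines) / 32.
fn interrupt_lines(ictr: u32) -> u32 {
    32 * (ictr & 0b1111)
}

fn main() {
    assert_eq!(interrupt_lines(0b0010), 64); // the example from the docs
    assert_eq!(interrupt_lines(0b0001), 32);
}
```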
diff --git a/src/rust/vendor/cortex-m/src/peripheral/itm.rs b/src/rust/vendor/cortex-m/src/peripheral/itm.rs
new file mode 100644
index 000000000..c0d560f5c
--- /dev/null
+++ b/src/rust/vendor/cortex-m/src/peripheral/itm.rs
@@ -0,0 +1,71 @@
+//! Instrumentation Trace Macrocell
+//!
+//! *NOTE* Not available on Armv6-M and Armv8-M Baseline.
+
+use core::cell::UnsafeCell;
+use core::ptr;
+
+use volatile_register::{RO, RW, WO};
+
+/// Register block
+#[repr(C)]
+pub struct RegisterBlock {
+ /// Stimulus Port
+ pub stim: [Stim; 256],
+ reserved0: [u32; 640],
+ /// Trace Enable
+ pub ter: [RW<u32>; 8],
+ reserved1: [u32; 8],
+ /// Trace Privilege
+ pub tpr: RW<u32>,
+ reserved2: [u32; 15],
+ /// Trace Control
+ pub tcr: RW<u32>,
+ reserved3: [u32; 75],
+ /// Lock Access
+ pub lar: WO<u32>,
+ /// Lock Status
+ pub lsr: RO<u32>,
+}
+
+/// Stimulus Port
+pub struct Stim {
+ register: UnsafeCell<u32>,
+}
+
+impl Stim {
+ /// Writes a `u8` payload into the stimulus port
+ #[inline]
+ pub fn write_u8(&mut self, value: u8) {
+ unsafe { ptr::write_volatile(self.register.get() as *mut u8, value) }
+ }
+
+ /// Writes a `u16` payload into the stimulus port
+ #[inline]
+ pub fn write_u16(&mut self, value: u16) {
+ unsafe { ptr::write_volatile(self.register.get() as *mut u16, value) }
+ }
+
+ /// Writes a `u32` payload into the stimulus port
+ #[inline]
+ pub fn write_u32(&mut self, value: u32) {
+ unsafe { ptr::write_volatile(self.register.get(), value) }
+ }
+
+ /// Returns `true` if the stimulus port is ready to accept more data
+ #[cfg(not(armv8m))]
+ #[inline]
+ pub fn is_fifo_ready(&self) -> bool {
+ unsafe { ptr::read_volatile(self.register.get()) & 0b1 == 1 }
+ }
+
+ /// Returns `true` if the stimulus port is ready to accept more data
+ #[cfg(armv8m)]
+ #[inline]
+ pub fn is_fifo_ready(&self) -> bool {
+ // ARMv8-M adds a disabled bit; we indicate that we are ready to
+ // proceed with a stimulus write if the port is either ready (bit 0) or
+ // disabled (bit 1).
+ unsafe { ptr::read_volatile(self.register.get()) & 0b11 != 0 }
+ }
+}
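The difference between the two `is_fifo_ready` variants is just which low bits are consulted; this host-side sketch applies the two predicates to raw stimulus-port read values:

```rust
// Pre-v8-M: only bit 0 (FIFO ready) matters.
fn fifo_ready_v7(stim: u32) -> bool {
    stim & 0b1 == 1
}

// v8-M: proceed if the port is ready (bit 0) or disabled (bit 1).
fn fifo_ready_v8(stim: u32) -> bool {
    stim & 0b11 != 0
}

fn main() {
    assert!(fifo_ready_v7(0b01));
    assert!(!fifo_ready_v7(0b10)); // a disabled port blocks on v7 semantics
    assert!(fifo_ready_v8(0b10)); // ...but not on v8-M
    assert!(!fifo_ready_v8(0b00));
}
```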
diff --git a/src/rust/vendor/cortex-m/src/peripheral/mod.rs b/src/rust/vendor/cortex-m/src/peripheral/mod.rs
new file mode 100644
index 000000000..d8fd2d465
--- /dev/null
+++ b/src/rust/vendor/cortex-m/src/peripheral/mod.rs
@@ -0,0 +1,685 @@
+//! Core peripherals.
+//!
+//! # API
+//!
+//! To use (most of) the peripheral API first you must get an *instance* of the peripheral. All the
+//! core peripherals are modeled as singletons (there can only ever be, at most, one instance of any
+//! one of them at any given point in time) and the only way to get an instance of them is through
+//! the [`Peripherals::take`](struct.Peripherals.html#method.take) method.
+//!
+//! ``` no_run
+//! # use cortex_m::peripheral::Peripherals;
+//! let mut peripherals = Peripherals::take().unwrap();
+//! peripherals.DCB.enable_trace();
+//! ```
+//!
+//! This method can only be successfully called *once* -- this is why the method returns an
+//! `Option`. Subsequent calls to the method will result in a `None` value being returned.
+//!
+//! ``` no_run, should_panic
+//! # use cortex_m::peripheral::Peripherals;
+//! let ok = Peripherals::take().unwrap();
+//! let panics = Peripherals::take().unwrap();
+//! ```
+//! A part of the peripheral API doesn't require access to a peripheral instance. This part of the
+//! API is provided as static methods on the peripheral types. One example is the
+//! [`DWT::cycle_count`](struct.DWT.html#method.cycle_count) method.
+//!
+//! ``` no_run
+//! # use cortex_m::peripheral::{DWT, Peripherals};
+//! {
+//! let mut peripherals = Peripherals::take().unwrap();
+//! peripherals.DCB.enable_trace();
+//! peripherals.DWT.enable_cycle_counter();
+//! } // all the peripheral singletons are destroyed here
+//!
+//! // but this method can be called without a DWT instance
+//! let cyccnt = DWT::cycle_count();
+//! ```
+//!
+//! The singleton property can be *unsafely* bypassed using the `ptr` static method which is
+//! available on all the peripheral types. This method is a useful building block for implementing
+//! safe higher level abstractions.
+//!
+//! ``` no_run
+//! # use cortex_m::peripheral::{DWT, Peripherals};
+//! {
+//! let mut peripherals = Peripherals::take().unwrap();
+//! peripherals.DCB.enable_trace();
+//! peripherals.DWT.enable_cycle_counter();
+//! } // all the peripheral singletons are destroyed here
+//!
+//! // actually safe because this is an atomic read with no side effects
+//! let cyccnt = unsafe { (*DWT::PTR).cyccnt.read() };
+//! ```
+//!
+//! # References
+//!
+//! - ARMv7-M Architecture Reference Manual (Issue E.b) - Chapter B3
+
+use core::marker::PhantomData;
+use core::ops;
+
+use crate::interrupt;
+
+#[cfg(cm7)]
+pub mod ac;
+#[cfg(not(armv6m))]
+pub mod cbp;
+pub mod cpuid;
+pub mod dcb;
+pub mod dwt;
+#[cfg(not(armv6m))]
+pub mod fpb;
+// NOTE(native) is for documentation purposes
+#[cfg(any(has_fpu, native))]
+pub mod fpu;
+pub mod icb;
+#[cfg(all(not(armv6m), not(armv8m_base)))]
+pub mod itm;
+pub mod mpu;
+pub mod nvic;
+#[cfg(armv8m)]
+pub mod sau;
+pub mod scb;
+pub mod syst;
+#[cfg(not(armv6m))]
+pub mod tpiu;
+
+#[cfg(test)]
+mod test;
+
+// NOTE the `PhantomData` used in the peripherals proxy is to make them `Send` but *not* `Sync`
+
+/// Core peripherals
+#[allow(non_snake_case)]
+#[allow(clippy::manual_non_exhaustive)]
+pub struct Peripherals {
+ /// Cortex-M7 TCM and cache access control.
+ #[cfg(cm7)]
+ pub AC: AC,
+
+ /// Cache and branch predictor maintenance operations.
+ /// Not available on Armv6-M.
+ pub CBP: CBP,
+
+ /// CPUID
+ pub CPUID: CPUID,
+
+ /// Debug Control Block
+ pub DCB: DCB,
+
+ /// Data Watchpoint and Trace unit
+ pub DWT: DWT,
+
+ /// Flash Patch and Breakpoint unit.
+ /// Not available on Armv6-M.
+ pub FPB: FPB,
+
+ /// Floating Point Unit.
+ pub FPU: FPU,
+
+ /// Implementation Control Block.
+ ///
+ /// The name is from the v8-M spec, but the block existed in earlier
+ /// revisions, without a name.
+ pub ICB: ICB,
+
+ /// Instrumentation Trace Macrocell.
+ /// Not available on Armv6-M and Armv8-M Baseline.
+ pub ITM: ITM,
+
+ /// Memory Protection Unit
+ pub MPU: MPU,
+
+ /// Nested Vector Interrupt Controller
+ pub NVIC: NVIC,
+
+ /// Security Attribution Unit
+ pub SAU: SAU,
+
+ /// System Control Block
+ pub SCB: SCB,
+
+ /// SysTick: System Timer
+ pub SYST: SYST,
+
+ /// Trace Port Interface Unit.
+ /// Not available on Armv6-M.
+ pub TPIU: TPIU,
+
+ // Private field making `Peripherals` non-exhaustive. We don't use `#[non_exhaustive]` so we
+ // can support older Rust versions.
+ _priv: (),
+}
+
+// NOTE `no_mangle` is used here to prevent linking different minor versions of this crate as that
+// would let you `take` the core peripherals more than once (one per minor version)
+#[no_mangle]
+static CORE_PERIPHERALS: () = ();
+
+/// Set to `true` when `take` or `steal` was called to make `Peripherals` a singleton.
+static mut TAKEN: bool = false;
+
+impl Peripherals {
+ /// Returns all the core peripherals *once*
+ #[inline]
+ pub fn take() -> Option<Self> {
+ interrupt::free(|_| {
+ if unsafe { TAKEN } {
+ None
+ } else {
+ Some(unsafe { Peripherals::steal() })
+ }
+ })
+ }
+
+ /// Unchecked version of `Peripherals::take`
+ #[inline]
+ pub unsafe fn steal() -> Self {
+ TAKEN = true;
+
+ Peripherals {
+ #[cfg(cm7)]
+ AC: AC {
+ _marker: PhantomData,
+ },
+ CBP: CBP {
+ _marker: PhantomData,
+ },
+ CPUID: CPUID {
+ _marker: PhantomData,
+ },
+ DCB: DCB {
+ _marker: PhantomData,
+ },
+ DWT: DWT {
+ _marker: PhantomData,
+ },
+ FPB: FPB {
+ _marker: PhantomData,
+ },
+ FPU: FPU {
+ _marker: PhantomData,
+ },
+ ICB: ICB {
+ _marker: PhantomData,
+ },
+ ITM: ITM {
+ _marker: PhantomData,
+ },
+ MPU: MPU {
+ _marker: PhantomData,
+ },
+ NVIC: NVIC {
+ _marker: PhantomData,
+ },
+ SAU: SAU {
+ _marker: PhantomData,
+ },
+ SCB: SCB {
+ _marker: PhantomData,
+ },
+ SYST: SYST {
+ _marker: PhantomData,
+ },
+ TPIU: TPIU {
+ _marker: PhantomData,
+ },
+ _priv: (),
+ }
+ }
+}
+
+/// Access control
+#[cfg(cm7)]
+pub struct AC {
+ _marker: PhantomData<*const ()>,
+}
+
+#[cfg(cm7)]
+unsafe impl Send for AC {}
+
+#[cfg(cm7)]
+impl AC {
+ /// Pointer to the register block
+ pub const PTR: *const self::ac::RegisterBlock = 0xE000_EF90 as *const _;
+
+ /// Returns a pointer to the register block
+ #[inline(always)]
+ #[deprecated(since = "0.7.5", note = "Use the associated constant `PTR` instead")]
+ pub const fn ptr() -> *const self::ac::RegisterBlock {
+ Self::PTR
+ }
+}
+
+/// Cache and branch predictor maintenance operations
+pub struct CBP {
+ _marker: PhantomData<*const ()>,
+}
+
+unsafe impl Send for CBP {}
+
+#[cfg(not(armv6m))]
+impl CBP {
+ #[inline(always)]
+ pub(crate) const unsafe fn new() -> Self {
+ CBP {
+ _marker: PhantomData,
+ }
+ }
+
+ /// Pointer to the register block
+ pub const PTR: *const self::cbp::RegisterBlock = 0xE000_EF50 as *const _;
+
+ /// Returns a pointer to the register block
+ #[inline(always)]
+ #[deprecated(since = "0.7.5", note = "Use the associated constant `PTR` instead")]
+ pub const fn ptr() -> *const self::cbp::RegisterBlock {
+ Self::PTR
+ }
+}
+
+#[cfg(not(armv6m))]
+impl ops::Deref for CBP {
+ type Target = self::cbp::RegisterBlock;
+
+ #[inline(always)]
+ fn deref(&self) -> &Self::Target {
+ unsafe { &*Self::PTR }
+ }
+}
+
+/// CPUID
+pub struct CPUID {
+ _marker: PhantomData<*const ()>,
+}
+
+unsafe impl Send for CPUID {}
+
+impl CPUID {
+ /// Pointer to the register block
+ pub const PTR: *const self::cpuid::RegisterBlock = 0xE000_ED00 as *const _;
+
+ /// Returns a pointer to the register block
+ #[inline(always)]
+ #[deprecated(since = "0.7.5", note = "Use the associated constant `PTR` instead")]
+ pub const fn ptr() -> *const self::cpuid::RegisterBlock {
+ Self::PTR
+ }
+}
+
+impl ops::Deref for CPUID {
+ type Target = self::cpuid::RegisterBlock;
+
+ #[inline(always)]
+ fn deref(&self) -> &Self::Target {
+ unsafe { &*Self::PTR }
+ }
+}
+
+/// Debug Control Block
+pub struct DCB {
+ _marker: PhantomData<*const ()>,
+}
+
+unsafe impl Send for DCB {}
+
+impl DCB {
+ /// Pointer to the register block
+ pub const PTR: *const dcb::RegisterBlock = 0xE000_EDF0 as *const _;
+
+ /// Returns a pointer to the register block
+ #[inline(always)]
+ #[deprecated(since = "0.7.5", note = "Use the associated constant `PTR` instead")]
+ pub const fn ptr() -> *const dcb::RegisterBlock {
+ Self::PTR
+ }
+}
+
+impl ops::Deref for DCB {
+ type Target = self::dcb::RegisterBlock;
+
+ #[inline(always)]
+ fn deref(&self) -> &Self::Target {
+ unsafe { &*DCB::PTR }
+ }
+}
+
+/// Data Watchpoint and Trace unit
+pub struct DWT {
+ _marker: PhantomData<*const ()>,
+}
+
+unsafe impl Send for DWT {}
+
+impl DWT {
+ /// Pointer to the register block
+ pub const PTR: *const dwt::RegisterBlock = 0xE000_1000 as *const _;
+
+ /// Returns a pointer to the register block
+ #[inline(always)]
+ #[deprecated(since = "0.7.5", note = "Use the associated constant `PTR` instead")]
+ pub const fn ptr() -> *const dwt::RegisterBlock {
+ Self::PTR
+ }
+}
+
+impl ops::Deref for DWT {
+ type Target = self::dwt::RegisterBlock;
+
+ #[inline(always)]
+ fn deref(&self) -> &Self::Target {
+ unsafe { &*Self::PTR }
+ }
+}
+
+/// Flash Patch and Breakpoint unit
+pub struct FPB {
+ _marker: PhantomData<*const ()>,
+}
+
+unsafe impl Send for FPB {}
+
+#[cfg(not(armv6m))]
+impl FPB {
+ /// Pointer to the register block
+ pub const PTR: *const fpb::RegisterBlock = 0xE000_2000 as *const _;
+
+ /// Returns a pointer to the register block
+ #[inline(always)]
+ #[deprecated(since = "0.7.5", note = "Use the associated constant `PTR` instead")]
+ pub const fn ptr() -> *const fpb::RegisterBlock {
+ Self::PTR
+ }
+}
+
+#[cfg(not(armv6m))]
+impl ops::Deref for FPB {
+ type Target = self::fpb::RegisterBlock;
+
+ #[inline(always)]
+ fn deref(&self) -> &Self::Target {
+ unsafe { &*Self::PTR }
+ }
+}
+
+/// Floating Point Unit
+pub struct FPU {
+ _marker: PhantomData<*const ()>,
+}
+
+unsafe impl Send for FPU {}
+
+#[cfg(any(has_fpu, native))]
+impl FPU {
+ /// Pointer to the register block
+ pub const PTR: *const fpu::RegisterBlock = 0xE000_EF30 as *const _;
+
+ /// Returns a pointer to the register block
+ #[inline(always)]
+ #[deprecated(since = "0.7.5", note = "Use the associated constant `PTR` instead")]
+ pub const fn ptr() -> *const fpu::RegisterBlock {
+ Self::PTR
+ }
+}
+
+#[cfg(any(has_fpu, native))]
+impl ops::Deref for FPU {
+ type Target = self::fpu::RegisterBlock;
+
+ #[inline(always)]
+ fn deref(&self) -> &Self::Target {
+ unsafe { &*Self::PTR }
+ }
+}
+
+/// Implementation Control Block.
+///
+/// This block contains implementation-defined registers like `ictr` and
+/// `actlr`. It's called the "implementation control block" in the ARMv8-M
+/// standard, but earlier standards contained the registers, just without a
+/// name.
+pub struct ICB {
+ _marker: PhantomData<*const ()>,
+}
+
+unsafe impl Send for ICB {}
+
+impl ICB {
+ /// Pointer to the register block
+ pub const PTR: *mut icb::RegisterBlock = 0xE000_E004 as *mut _;
+
+ /// Returns a pointer to the register block
+ #[inline(always)]
+ #[deprecated(since = "0.7.5", note = "Use the associated constant `PTR` instead")]
+ pub const fn ptr() -> *mut icb::RegisterBlock {
+ Self::PTR
+ }
+}
+
+impl ops::Deref for ICB {
+ type Target = self::icb::RegisterBlock;
+
+ #[inline(always)]
+ fn deref(&self) -> &Self::Target {
+ unsafe { &*Self::PTR }
+ }
+}
+
+impl ops::DerefMut for ICB {
+ #[inline(always)]
+ fn deref_mut(&mut self) -> &mut Self::Target {
+ unsafe { &mut *Self::PTR }
+ }
+}
+
+/// Instrumentation Trace Macrocell
+pub struct ITM {
+ _marker: PhantomData<*const ()>,
+}
+
+unsafe impl Send for ITM {}
+
+#[cfg(all(not(armv6m), not(armv8m_base)))]
+impl ITM {
+ /// Pointer to the register block
+ pub const PTR: *mut itm::RegisterBlock = 0xE000_0000 as *mut _;
+
+ /// Returns a pointer to the register block
+ #[inline(always)]
+ #[deprecated(since = "0.7.5", note = "Use the associated constant `PTR` instead")]
+ pub const fn ptr() -> *mut itm::RegisterBlock {
+ Self::PTR
+ }
+}
+
+#[cfg(all(not(armv6m), not(armv8m_base)))]
+impl ops::Deref for ITM {
+ type Target = self::itm::RegisterBlock;
+
+ #[inline(always)]
+ fn deref(&self) -> &Self::Target {
+ unsafe { &*Self::PTR }
+ }
+}
+
+#[cfg(all(not(armv6m), not(armv8m_base)))]
+impl ops::DerefMut for ITM {
+ #[inline(always)]
+ fn deref_mut(&mut self) -> &mut Self::Target {
+ unsafe { &mut *Self::PTR }
+ }
+}
+
+/// Memory Protection Unit
+pub struct MPU {
+ _marker: PhantomData<*const ()>,
+}
+
+unsafe impl Send for MPU {}
+
+impl MPU {
+ /// Pointer to the register block
+ pub const PTR: *const mpu::RegisterBlock = 0xE000_ED90 as *const _;
+
+ /// Returns a pointer to the register block
+ #[inline(always)]
+ #[deprecated(since = "0.7.5", note = "Use the associated constant `PTR` instead")]
+ pub const fn ptr() -> *const mpu::RegisterBlock {
+ Self::PTR
+ }
+}
+
+impl ops::Deref for MPU {
+ type Target = self::mpu::RegisterBlock;
+
+ #[inline(always)]
+ fn deref(&self) -> &Self::Target {
+ unsafe { &*Self::PTR }
+ }
+}
+
+/// Nested Vector Interrupt Controller
+pub struct NVIC {
+ _marker: PhantomData<*const ()>,
+}
+
+unsafe impl Send for NVIC {}
+
+impl NVIC {
+ /// Pointer to the register block
+ pub const PTR: *const nvic::RegisterBlock = 0xE000_E100 as *const _;
+
+ /// Returns a pointer to the register block
+ #[inline(always)]
+ #[deprecated(since = "0.7.5", note = "Use the associated constant `PTR` instead")]
+ pub const fn ptr() -> *const nvic::RegisterBlock {
+ Self::PTR
+ }
+}
+
+impl ops::Deref for NVIC {
+ type Target = self::nvic::RegisterBlock;
+
+ #[inline(always)]
+ fn deref(&self) -> &Self::Target {
+ unsafe { &*Self::PTR }
+ }
+}
+
+/// Security Attribution Unit
+pub struct SAU {
+ _marker: PhantomData<*const ()>,
+}
+
+unsafe impl Send for SAU {}
+
+#[cfg(armv8m)]
+impl SAU {
+ /// Pointer to the register block
+ pub const PTR: *const sau::RegisterBlock = 0xE000_EDD0 as *const _;
+
+ /// Returns a pointer to the register block
+ #[inline(always)]
+ #[deprecated(since = "0.7.5", note = "Use the associated constant `PTR` instead")]
+ pub const fn ptr() -> *const sau::RegisterBlock {
+ Self::PTR
+ }
+}
+
+#[cfg(armv8m)]
+impl ops::Deref for SAU {
+ type Target = self::sau::RegisterBlock;
+
+ #[inline(always)]
+ fn deref(&self) -> &Self::Target {
+ unsafe { &*Self::PTR }
+ }
+}
+
+/// System Control Block
+pub struct SCB {
+ _marker: PhantomData<*const ()>,
+}
+
+unsafe impl Send for SCB {}
+
+impl SCB {
+ /// Pointer to the register block
+ pub const PTR: *const scb::RegisterBlock = 0xE000_ED04 as *const _;
+
+ /// Returns a pointer to the register block
+ #[inline(always)]
+ #[deprecated(since = "0.7.5", note = "Use the associated constant `PTR` instead")]
+ pub const fn ptr() -> *const scb::RegisterBlock {
+ Self::PTR
+ }
+}
+
+impl ops::Deref for SCB {
+ type Target = self::scb::RegisterBlock;
+
+ #[inline(always)]
+ fn deref(&self) -> &Self::Target {
+ unsafe { &*Self::PTR }
+ }
+}
+
+/// SysTick: System Timer
+pub struct SYST {
+ _marker: PhantomData<*const ()>,
+}
+
+unsafe impl Send for SYST {}
+
+impl SYST {
+ /// Pointer to the register block
+ pub const PTR: *const syst::RegisterBlock = 0xE000_E010 as *const _;
+
+ /// Returns a pointer to the register block
+ #[inline(always)]
+ #[deprecated(since = "0.7.5", note = "Use the associated constant `PTR` instead")]
+ pub const fn ptr() -> *const syst::RegisterBlock {
+ Self::PTR
+ }
+}
+
+impl ops::Deref for SYST {
+ type Target = self::syst::RegisterBlock;
+
+ #[inline(always)]
+ fn deref(&self) -> &Self::Target {
+ unsafe { &*Self::PTR }
+ }
+}
+
+/// Trace Port Interface Unit
+pub struct TPIU {
+ _marker: PhantomData<*const ()>,
+}
+
+unsafe impl Send for TPIU {}
+
+#[cfg(not(armv6m))]
+impl TPIU {
+ /// Pointer to the register block
+ pub const PTR: *const tpiu::RegisterBlock = 0xE004_0000 as *const _;
+
+ /// Returns a pointer to the register block
+ #[inline(always)]
+ #[deprecated(since = "0.7.5", note = "Use the associated constant `PTR` instead")]
+ pub const fn ptr() -> *const tpiu::RegisterBlock {
+ Self::PTR
+ }
+}
+
+#[cfg(not(armv6m))]
+impl ops::Deref for TPIU {
+ type Target = self::tpiu::RegisterBlock;
+
+ #[inline(always)]
+ fn deref(&self) -> &Self::Target {
+ unsafe { &*Self::PTR }
+ }
+}
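The `take`/`steal` pattern above — a global flag checked inside a critical section — can be modeled on the host with an atomic. A sketch only: the real code uses `interrupt::free` plus a `static mut` rather than an atomic, because it must also work on targets without CAS:

```rust
use core::sync::atomic::{AtomicBool, Ordering};

static TAKEN: AtomicBool = AtomicBool::new(false);

struct Peripherals;

impl Peripherals {
    fn take() -> Option<Peripherals> {
        // swap returns the previous value: only the first caller sees `false`.
        if TAKEN.swap(true, Ordering::SeqCst) {
            None
        } else {
            Some(Peripherals)
        }
    }
}

fn main() {
    assert!(Peripherals::take().is_some());
    assert!(Peripherals::take().is_none()); // singleton already taken
}
```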
diff --git a/src/rust/vendor/cortex-m/src/peripheral/mpu.rs b/src/rust/vendor/cortex-m/src/peripheral/mpu.rs
new file mode 100644
index 000000000..3a5f5b4d9
--- /dev/null
+++ b/src/rust/vendor/cortex-m/src/peripheral/mpu.rs
@@ -0,0 +1,65 @@
+//! Memory Protection Unit
+
+use volatile_register::{RO, RW};
+
+/// Register block for ARMv7-M
+#[cfg(not(armv8m))]
+#[repr(C)]
+pub struct RegisterBlock {
+ /// Type
+ pub _type: RO<u32>,
+ /// Control
+ pub ctrl: RW<u32>,
+ /// Region Number
+ pub rnr: RW<u32>,
+ /// Region Base Address
+ pub rbar: RW<u32>,
+ /// Region Attribute and Size
+ pub rasr: RW<u32>,
+ /// Alias 1 of RBAR
+ pub rbar_a1: RW<u32>,
+ /// Alias 1 of RASR
+ pub rasr_a1: RW<u32>,
+ /// Alias 2 of RBAR
+ pub rbar_a2: RW<u32>,
+ /// Alias 2 of RASR
+ pub rasr_a2: RW<u32>,
+ /// Alias 3 of RBAR
+ pub rbar_a3: RW<u32>,
+ /// Alias 3 of RASR
+ pub rasr_a3: RW<u32>,
+}
+
+/// Register block for ARMv8-M
+#[cfg(armv8m)]
+#[repr(C)]
+pub struct RegisterBlock {
+ /// Type
+ pub _type: RO<u32>,
+ /// Control
+ pub ctrl: RW<u32>,
+ /// Region Number
+ pub rnr: RW<u32>,
+ /// Region Base Address
+ pub rbar: RW<u32>,
+ /// Region Limit Address
+ pub rlar: RW<u32>,
+ /// Alias 1 of RBAR
+ pub rbar_a1: RW<u32>,
+ /// Alias 1 of RLAR
+ pub rlar_a1: RW<u32>,
+ /// Alias 2 of RBAR
+ pub rbar_a2: RW<u32>,
+ /// Alias 2 of RLAR
+ pub rlar_a2: RW<u32>,
+ /// Alias 3 of RBAR
+ pub rbar_a3: RW<u32>,
+ /// Alias 3 of RLAR
+ pub rlar_a3: RW<u32>,
+
+ // Reserved word at offset 0xBC
+ _reserved: u32,
+
+ /// Memory Attribute Indirection register 0 and 1
+ pub mair: [RW<u32>; 2],
+}
diff --git a/src/rust/vendor/cortex-m/src/peripheral/nvic.rs b/src/rust/vendor/cortex-m/src/peripheral/nvic.rs
new file mode 100644
index 000000000..57fa94b70
--- /dev/null
+++ b/src/rust/vendor/cortex-m/src/peripheral/nvic.rs
@@ -0,0 +1,265 @@
+//! Nested Vector Interrupt Controller
+
+use volatile_register::RW;
+#[cfg(not(armv6m))]
+use volatile_register::{RO, WO};
+
+use crate::interrupt::InterruptNumber;
+use crate::peripheral::NVIC;
+
+/// Register block
+#[repr(C)]
+pub struct RegisterBlock {
+ /// Interrupt Set-Enable
+ pub iser: [RW<u32>; 16],
+
+ _reserved0: [u32; 16],
+
+ /// Interrupt Clear-Enable
+ pub icer: [RW<u32>; 16],
+
+ _reserved1: [u32; 16],
+
+ /// Interrupt Set-Pending
+ pub ispr: [RW<u32>; 16],
+
+ _reserved2: [u32; 16],
+
+ /// Interrupt Clear-Pending
+ pub icpr: [RW<u32>; 16],
+
+ _reserved3: [u32; 16],
+
+ /// Interrupt Active Bit (not present on Cortex-M0 variants)
+ #[cfg(not(armv6m))]
+ pub iabr: [RO<u32>; 16],
+ #[cfg(armv6m)]
+ _reserved4: [u32; 16],
+
+ _reserved5: [u32; 48],
+
+ /// Interrupt Priority
+ ///
+ /// On ARMv7-M, 124 word-sized registers are available. Each of those
+ /// contains 4 interrupt priorities of 8 bits each. The architecture
+ /// specifically allows accessing those along byte boundaries, so they are
+ /// represented as 496 byte-sized registers, for convenience, and to allow
+ /// atomic priority updates.
+ ///
+ /// On ARMv6-M, the registers must only be accessed along word boundaries,
+ /// so the convenient byte-sized representation wouldn't work on that
+ /// architecture.
+ #[cfg(not(armv6m))]
+ pub ipr: [RW<u8>; 496],
+
+ /// Interrupt Priority
+ ///
+ /// On ARMv7-M, 124 word-sized registers are available. Each of those
+ /// contains 4 interrupt priorities of 8 bits each. The architecture
+ /// specifically allows accessing those along byte boundaries, so they are
+ /// represented as 496 byte-sized registers, for convenience, and to allow
+ /// atomic priority updates.
+ ///
+ /// On ARMv6-M, the registers must only be accessed along word boundaries,
+ /// so the convenient byte-sized representation wouldn't work on that
+ /// architecture.
+ #[cfg(armv6m)]
+ pub ipr: [RW<u32>; 8],
+
+ #[cfg(not(armv6m))]
+ _reserved6: [u32; 580],
+
+ /// Software Trigger Interrupt
+ #[cfg(not(armv6m))]
+ pub stir: WO<u32>,
+}
+
+impl NVIC {
+ /// Request an IRQ in software
+ ///
+ /// Writing a value to the INTID field is the same as manually pending an interrupt by setting
+ /// the corresponding interrupt bit in an Interrupt Set Pending Register. This is similar to
+ /// [`NVIC::pend`].
+ ///
+ /// This method is not available on ARMv6-M chips.
+ ///
+ /// [`NVIC::pend`]: #method.pend
+ #[cfg(not(armv6m))]
+ #[inline]
+ pub fn request<I>(&mut self, interrupt: I)
+ where
+ I: InterruptNumber,
+ {
+ let nr = interrupt.number();
+
+ unsafe {
+ self.stir.write(u32::from(nr));
+ }
+ }
+
+ /// Disables `interrupt`
+ #[inline]
+ pub fn mask<I>(interrupt: I)
+ where
+ I: InterruptNumber,
+ {
+ let nr = interrupt.number();
+ // NOTE(unsafe) this is a write to a stateless register
+ unsafe { (*Self::PTR).icer[usize::from(nr / 32)].write(1 << (nr % 32)) }
+ }
+
+ /// Enables `interrupt`
+ ///
+ /// This function is `unsafe` because it can break mask-based critical sections
+ #[inline]
+ pub unsafe fn unmask<I>(interrupt: I)
+ where
+ I: InterruptNumber,
+ {
+ let nr = interrupt.number();
+ // NOTE(ptr) this is a write to a stateless register
+ (*Self::PTR).iser[usize::from(nr / 32)].write(1 << (nr % 32))
+ }
+
+ /// Returns the NVIC priority of `interrupt`
+ ///
+ /// *NOTE* NVIC encodes priority in the highest bits of a byte so values like `1` and `2` map
+ /// to the same priority. Also for NVIC priorities, a lower value (e.g. `16`) has higher
+ /// priority (urgency) than a larger value (e.g. `32`).
+ #[inline]
+ pub fn get_priority<I>(interrupt: I) -> u8
+ where
+ I: InterruptNumber,
+ {
+ #[cfg(not(armv6m))]
+ {
+ let nr = interrupt.number();
+ // NOTE(unsafe) atomic read with no side effects
+ unsafe { (*Self::PTR).ipr[usize::from(nr)].read() }
+ }
+
+ #[cfg(armv6m)]
+ {
+ // NOTE(unsafe) atomic read with no side effects
+ let ipr_n = unsafe { (*Self::PTR).ipr[Self::ipr_index(interrupt)].read() };
+ let prio = (ipr_n >> Self::ipr_shift(interrupt)) & 0x0000_00ff;
+ prio as u8
+ }
+ }
+
+ /// Is `interrupt` active or pre-empted and stacked
+ #[cfg(not(armv6m))]
+ #[inline]
+ pub fn is_active<I>(interrupt: I) -> bool
+ where
+ I: InterruptNumber,
+ {
+ let nr = interrupt.number();
+ let mask = 1 << (nr % 32);
+
+ // NOTE(unsafe) atomic read with no side effects
+ unsafe { ((*Self::PTR).iabr[usize::from(nr / 32)].read() & mask) == mask }
+ }
+
+ /// Checks if `interrupt` is enabled
+ #[inline]
+ pub fn is_enabled<I>(interrupt: I) -> bool
+ where
+ I: InterruptNumber,
+ {
+ let nr = interrupt.number();
+ let mask = 1 << (nr % 32);
+
+ // NOTE(unsafe) atomic read with no side effects
+ unsafe { ((*Self::PTR).iser[usize::from(nr / 32)].read() & mask) == mask }
+ }
+
+ /// Checks if `interrupt` is pending
+ #[inline]
+ pub fn is_pending<I>(interrupt: I) -> bool
+ where
+ I: InterruptNumber,
+ {
+ let nr = interrupt.number();
+ let mask = 1 << (nr % 32);
+
+ // NOTE(unsafe) atomic read with no side effects
+ unsafe { ((*Self::PTR).ispr[usize::from(nr / 32)].read() & mask) == mask }
+ }
+
+ /// Forces `interrupt` into pending state
+ #[inline]
+ pub fn pend<I>(interrupt: I)
+ where
+ I: InterruptNumber,
+ {
+ let nr = interrupt.number();
+
+ // NOTE(unsafe) atomic stateless write; ICPR doesn't store any state
+ unsafe { (*Self::PTR).ispr[usize::from(nr / 32)].write(1 << (nr % 32)) }
+ }
+
+ /// Sets the "priority" of `interrupt` to `prio`
+ ///
+ /// *NOTE* See [`get_priority`](struct.NVIC.html#method.get_priority) method for an explanation
+ /// of how NVIC priorities work.
+ ///
+ /// On ARMv6-M, updating an interrupt priority requires a read-modify-write operation. On
+ /// ARMv7-M, the operation is performed in a single atomic write operation.
+ ///
+ /// # Unsafety
+ ///
+ /// Changing priority levels can break priority-based critical sections (see
+ /// [`register::basepri`](crate::register::basepri)) and compromise memory safety.
+ #[inline]
+ pub unsafe fn set_priority(&mut self, interrupt: I, prio: u8)
+ where
+ I: InterruptNumber,
+ {
+ #[cfg(not(armv6m))]
+ {
+ let nr = interrupt.number();
+ self.ipr[usize::from(nr)].write(prio)
+ }
+
+ #[cfg(armv6m)]
+ {
+ self.ipr[Self::ipr_index(interrupt)].modify(|value| {
+ let mask = 0x0000_00ff << Self::ipr_shift(interrupt);
+ let prio = u32::from(prio) << Self::ipr_shift(interrupt);
+
+ (value & !mask) | prio
+ })
+ }
+ }
+
+ /// Clears `interrupt`'s pending state
+ #[inline]
+ pub fn unpend(interrupt: I)
+ where
+ I: InterruptNumber,
+ {
+ let nr = interrupt.number();
+
+ // NOTE(unsafe) atomic stateless write; ICPR doesn't store any state
+ unsafe { (*Self::PTR).icpr[usize::from(nr / 32)].write(1 << (nr % 32)) }
+ }
+
+ #[cfg(armv6m)]
+ #[inline]
+ fn ipr_index(interrupt: I) -> usize
+ where
+ I: InterruptNumber,
+ {
+ usize::from(interrupt.number()) / 4
+ }
+
+ #[cfg(armv6m)]
+ #[inline]
+ fn ipr_shift(interrupt: I) -> usize
+ where
+ I: InterruptNumber,
+ {
+ (usize::from(interrupt.number()) % 4) * 8
+ }
+}
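The ARMv6-M branches above pack four 8-bit priority fields into each 32-bit IPR word, indexed by `ipr_index` and positioned by `ipr_shift`. As a hedged, host-runnable sketch of just that arithmetic (plain functions standing in for the real register accesses, not part of the patch):

```rust
// Sketch of the ARMv6-M IPR packing used by get_priority/set_priority:
// four 8-bit priorities per 32-bit IPR word.
fn ipr_index(nr: usize) -> usize {
    nr / 4
}
fn ipr_shift(nr: usize) -> usize {
    (nr % 4) * 8
}

// Read-modify-write of one packed field, mirroring set_priority on ARMv6-M.
fn set_packed_prio(word: u32, nr: usize, prio: u8) -> u32 {
    let shift = ipr_shift(nr);
    let mask = 0x0000_00ff << shift;
    (word & !mask) | (u32::from(prio) << shift)
}

// Extraction of one packed field, mirroring get_priority on ARMv6-M.
fn get_packed_prio(word: u32, nr: usize) -> u8 {
    ((word >> ipr_shift(nr)) & 0xff) as u8
}

fn main() {
    // Interrupt 5 lives in IPR1 (5 / 4 == 1) at bits 8..16 (5 % 4 == 1).
    assert_eq!(ipr_index(5), 1);
    assert_eq!(ipr_shift(5), 8);
    let word = set_packed_prio(0xFFFF_FFFF, 5, 0xC0);
    assert_eq!(get_packed_prio(word, 5), 0xC0);
    // Neighbouring priority fields are untouched.
    assert_eq!(word & 0xff, 0xff);
}
```

This is why the ARMv6-M path needs a read-modify-write while ARMv7-M can write a single byte-accessible `ipr[nr]` entry atomically.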
diff --git a/src/rust/vendor/cortex-m/src/peripheral/sau.rs b/src/rust/vendor/cortex-m/src/peripheral/sau.rs
new file mode 100644
index 000000000..da91aca9b
--- /dev/null
+++ b/src/rust/vendor/cortex-m/src/peripheral/sau.rs
@@ -0,0 +1,243 @@
+//! Security Attribution Unit
+//!
+//! *NOTE* Available only on Armv8-M and Armv8.1-M, for the following Rust target triples:
+//! * `thumbv8m.base-none-eabi`
+//! * `thumbv8m.main-none-eabi`
+//! * `thumbv8m.main-none-eabihf`
+//!
+//! For reference please check the section B8.3 of the Armv8-M Architecture Reference Manual.
+
+use crate::interrupt;
+use crate::peripheral::SAU;
+use bitfield::bitfield;
+use volatile_register::{RO, RW};
+
+/// Register block
+#[repr(C)]
+pub struct RegisterBlock {
+ /// Control Register
+ pub ctrl: RW<Ctrl>,
+ /// Type Register
+ pub _type: RO<Type>,
+ /// Region Number Register
+ pub rnr: RW<Rnr>,
+ /// Region Base Address Register
+ pub rbar: RW<Rbar>,
+ /// Region Limit Address Register
+ pub rlar: RW<Rlar>,
+ /// Secure Fault Status Register
+ pub sfsr: RO<Sfsr>,
+ /// Secure Fault Address Register
+ pub sfar: RO<Sfar>,
+}
+
+bitfield! {
+ /// Control Register description
+ #[repr(C)]
+ #[derive(Copy, Clone)]
+ pub struct Ctrl(u32);
+ get_enable, set_enable: 0;
+ get_allns, set_allns: 1;
+}
+
+bitfield! {
+ /// Type Register description
+ #[repr(C)]
+ #[derive(Copy, Clone)]
+ pub struct Type(u32);
+ u8;
+ sregion, _: 7, 0;
+}
+
+bitfield! {
+ /// Region Number Register description
+ #[repr(C)]
+ #[derive(Copy, Clone)]
+ pub struct Rnr(u32);
+ u8;
+ get_region, set_region: 7, 0;
+}
+
+bitfield! {
+ /// Region Base Address Register description
+ #[repr(C)]
+ #[derive(Copy, Clone)]
+ pub struct Rbar(u32);
+ u32;
+ get_baddr, set_baddr: 31, 5;
+}
+
+bitfield! {
+ /// Region Limit Address Register description
+ #[repr(C)]
+ #[derive(Copy, Clone)]
+ pub struct Rlar(u32);
+ u32;
+ get_laddr, set_laddr: 31, 5;
+ get_nsc, set_nsc: 1;
+ get_enable, set_enable: 0;
+}
+
+bitfield! {
+ /// Secure Fault Status Register description
+ #[repr(C)]
+ #[derive(Copy, Clone)]
+ pub struct Sfsr(u32);
+ invep, _: 0;
+ invis, _: 1;
+ inver, _: 2;
+ auviol, _: 3;
+ invtran, _: 4;
+ lsperr, _: 5;
+ sfarvalid, _: 6;
+ lserr, _: 7;
+}
+
+bitfield! {
+ /// Secure Fault Address Register description
+ #[repr(C)]
+ #[derive(Copy, Clone)]
+ pub struct Sfar(u32);
+ u32;
+ address, _: 31, 0;
+}
+
+/// Possible attribute of a SAU region.
+#[derive(Debug)]
+pub enum SauRegionAttribute {
+ /// SAU region is Secure
+ Secure,
+ /// SAU region is Non-Secure Callable
+ NonSecureCallable,
+ /// SAU region is Non-Secure
+ NonSecure,
+}
+
+/// Description of a SAU region.
+#[derive(Debug)]
+pub struct SauRegion {
+ /// First address of the region, its 5 least significant bits must be set to zero.
+ pub base_address: u32,
+ /// Last address of the region, its 5 least significant bits must be set to one.
+ pub limit_address: u32,
+ /// Attribute of the region.
+ pub attribute: SauRegionAttribute,
+}
+
+/// Possible error values returned by the SAU methods.
+#[derive(Debug)]
+pub enum SauError {
+ /// The region number parameter to set or get a region must be between 0 and
+ /// region_numbers() - 1.
+ RegionNumberTooBig,
+ /// Bits 0 to 4 of the base address of a SAU region must be set to zero.
+ WrongBaseAddress,
+ /// Bits 0 to 4 of the limit address of a SAU region must be set to one.
+ WrongLimitAddress,
+}
+
+impl SAU {
+ /// Get the number of implemented SAU regions.
+ #[inline]
+ pub fn region_numbers(&self) -> u8 {
+ self._type.read().sregion()
+ }
+
+ /// Enable the SAU.
+ #[inline]
+ pub fn enable(&mut self) {
+ unsafe {
+ self.ctrl.modify(|mut ctrl| {
+ ctrl.set_enable(true);
+ ctrl
+ });
+ }
+ }
+
+ /// Set a SAU region to a region number.
+ /// SAU regions must be 32 bytes aligned and their sizes must be a multiple of 32 bytes. It
+ /// means that the 5 least significant bits of the base address of a SAU region must be set to
+ /// zero and the 5 least significant bits of the limit address must be set to one.
+ /// The region number must be valid.
+ /// This function is executed under a critical section to prevent having inconsistent results.
+ #[inline]
+ pub fn set_region(&mut self, region_number: u8, region: SauRegion) -> Result<(), SauError> {
+ interrupt::free(|_| {
+ let base_address = region.base_address;
+ let limit_address = region.limit_address;
+ let attribute = region.attribute;
+
+ if region_number >= self.region_numbers() {
+ Err(SauError::RegionNumberTooBig)
+ } else if base_address & 0x1F != 0 {
+ Err(SauError::WrongBaseAddress)
+ } else if limit_address & 0x1F != 0x1F {
+ Err(SauError::WrongLimitAddress)
+ } else {
+ // All fields of these registers are going to be modified so we don't need to read them
+ // before.
+ let mut rnr = Rnr(0);
+ let mut rbar = Rbar(0);
+ let mut rlar = Rlar(0);
+
+ rnr.set_region(region_number);
+ rbar.set_baddr(base_address >> 5);
+ rlar.set_laddr(limit_address >> 5);
+
+ match attribute {
+ SauRegionAttribute::Secure => {
+ rlar.set_nsc(false);
+ rlar.set_enable(false);
+ }
+ SauRegionAttribute::NonSecureCallable => {
+ rlar.set_nsc(true);
+ rlar.set_enable(true);
+ }
+ SauRegionAttribute::NonSecure => {
+ rlar.set_nsc(false);
+ rlar.set_enable(true);
+ }
+ }
+
+ unsafe {
+ self.rnr.write(rnr);
+ self.rbar.write(rbar);
+ self.rlar.write(rlar);
+ }
+
+ Ok(())
+ }
+ })
+ }
+
+ /// Get a region from the SAU.
+ /// The region number must be valid.
+ /// This function is executed under a critical section to prevent having inconsistent results.
+ #[inline]
+ pub fn get_region(&mut self, region_number: u8) -> Result<SauRegion, SauError> {
+ interrupt::free(|_| {
+ if region_number >= self.region_numbers() {
+ Err(SauError::RegionNumberTooBig)
+ } else {
+ unsafe {
+ self.rnr.write(Rnr(region_number.into()));
+ }
+
+ let rbar = self.rbar.read();
+ let rlar = self.rlar.read();
+
+ let attribute = match (rlar.get_enable(), rlar.get_nsc()) {
+ (false, _) => SauRegionAttribute::Secure,
+ (true, false) => SauRegionAttribute::NonSecure,
+ (true, true) => SauRegionAttribute::NonSecureCallable,
+ };
+
+ Ok(SauRegion {
+ base_address: rbar.get_baddr() << 5,
+ limit_address: (rlar.get_laddr() << 5) | 0x1F,
+ attribute,
+ })
+ }
+ })
+ }
+}
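`set_region` and `get_region` above rely on 32-byte region granularity: RBAR/RLAR store only address bits [31:5], so a base must have its low 5 bits clear and a limit must have them all set. A hypothetical host-side sketch of that validation and round-trip arithmetic (helper names are illustrative, not crate API):

```rust
// SAU regions are 32-byte granular: RBAR/RLAR hold address bits [31:5].
fn valid_base(addr: u32) -> bool {
    addr & 0x1F == 0 // low 5 bits must be zero
}
fn valid_limit(addr: u32) -> bool {
    addr & 0x1F == 0x1F // low 5 bits must be one
}
// Round-trips mirroring set_region (>> 5 on write) and get_region
// (<< 5, with the limit's low bits restored to ones).
fn rbar_roundtrip(base: u32) -> u32 {
    (base >> 5) << 5
}
fn rlar_roundtrip(limit: u32) -> u32 {
    ((limit >> 5) << 5) | 0x1F
}

fn main() {
    assert!(valid_base(0x2000_0000));
    assert!(!valid_base(0x2000_0010));
    assert!(valid_limit(0x2000_0FFF));
    assert_eq!(rbar_roundtrip(0x2000_0000), 0x2000_0000);
    assert_eq!(rlar_roundtrip(0x2000_0FFF), 0x2000_0FFF);
}
```

Aligned addresses survive the register round-trip unchanged, which is why the error checks reject anything not 32-byte aligned up front.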
diff --git a/src/rust/vendor/cortex-m/src/peripheral/scb.rs b/src/rust/vendor/cortex-m/src/peripheral/scb.rs
new file mode 100644
index 000000000..f998b17c5
--- /dev/null
+++ b/src/rust/vendor/cortex-m/src/peripheral/scb.rs
@@ -0,0 +1,1109 @@
+//! System Control Block
+
+use core::ptr;
+
+use volatile_register::RW;
+
+#[cfg(not(armv6m))]
+use super::cpuid::CsselrCacheType;
+#[cfg(not(armv6m))]
+use super::CBP;
+#[cfg(not(armv6m))]
+use super::CPUID;
+use super::SCB;
+#[cfg(feature = "serde")]
+use serde::{Deserialize, Serialize};
+
+/// Register block
+#[repr(C)]
+pub struct RegisterBlock {
+ /// Interrupt Control and State
+ pub icsr: RW<u32>,
+
+ /// Vector Table Offset (not present on Cortex-M0 variants)
+ pub vtor: RW<u32>,
+
+ /// Application Interrupt and Reset Control
+ pub aircr: RW<u32>,
+
+ /// System Control
+ pub scr: RW<u32>,
+
+ /// Configuration and Control
+ pub ccr: RW<u32>,
+
+ /// System Handler Priority (word accessible only on Cortex-M0 variants)
+ ///
+ /// On ARMv7-M, `shpr[0]` points to SHPR1
+ ///
+ /// On ARMv6-M, `shpr[0]` points to SHPR2
+ #[cfg(not(armv6m))]
+ pub shpr: [RW<u8>; 12],
+ #[cfg(armv6m)]
+ _reserved1: u32,
+ /// System Handler Priority (word accessible only on Cortex-M0 variants)
+ ///
+ /// On ARMv7-M, `shpr[0]` points to SHPR1
+ ///
+ /// On ARMv6-M, `shpr[0]` points to SHPR2
+ #[cfg(armv6m)]
+ pub shpr: [RW<u32>; 2],
+
+ /// System Handler Control and State
+ pub shcsr: RW<u32>,
+
+ /// Configurable Fault Status (not present on Cortex-M0 variants)
+ #[cfg(not(armv6m))]
+ pub cfsr: RW<u32>,
+ #[cfg(armv6m)]
+ _reserved2: u32,
+
+ /// HardFault Status (not present on Cortex-M0 variants)
+ #[cfg(not(armv6m))]
+ pub hfsr: RW<u32>,
+ #[cfg(armv6m)]
+ _reserved3: u32,
+
+ /// Debug Fault Status (not present on Cortex-M0 variants)
+ #[cfg(not(armv6m))]
+ pub dfsr: RW<u32>,
+ #[cfg(armv6m)]
+ _reserved4: u32,
+
+ /// MemManage Fault Address (not present on Cortex-M0 variants)
+ #[cfg(not(armv6m))]
+ pub mmfar: RW<u32>,
+ #[cfg(armv6m)]
+ _reserved5: u32,
+
+ /// BusFault Address (not present on Cortex-M0 variants)
+ #[cfg(not(armv6m))]
+ pub bfar: RW<u32>,
+ #[cfg(armv6m)]
+ _reserved6: u32,
+
+ /// Auxiliary Fault Status (not present on Cortex-M0 variants)
+ #[cfg(not(armv6m))]
+ pub afsr: RW<u32>,
+ #[cfg(armv6m)]
+ _reserved7: u32,
+
+ _reserved8: [u32; 18],
+
+ /// Coprocessor Access Control (not present on Cortex-M0 variants)
+ #[cfg(not(armv6m))]
+ pub cpacr: RW<u32>,
+ #[cfg(armv6m)]
+ _reserved9: u32,
+}
+
+/// FPU access mode
+#[cfg(has_fpu)]
+#[derive(Clone, Copy, Debug, PartialEq, Eq)]
+pub enum FpuAccessMode {
+ /// FPU is not accessible
+ Disabled,
+ /// FPU is accessible in Privileged and User mode
+ Enabled,
+ /// FPU is accessible in Privileged mode only
+ Privileged,
+}
+
+#[cfg(has_fpu)]
+mod fpu_consts {
+ pub const SCB_CPACR_FPU_MASK: u32 = 0b11_11 << 20;
+ pub const SCB_CPACR_FPU_ENABLE: u32 = 0b01_01 << 20;
+ pub const SCB_CPACR_FPU_USER: u32 = 0b10_10 << 20;
+}
+
+#[cfg(has_fpu)]
+use self::fpu_consts::*;
+
+#[cfg(has_fpu)]
+impl SCB {
+ /// Shorthand for `set_fpu_access_mode(FpuAccessMode::Disabled)`
+ #[inline]
+ pub fn disable_fpu(&mut self) {
+ self.set_fpu_access_mode(FpuAccessMode::Disabled)
+ }
+
+ /// Shorthand for `set_fpu_access_mode(FpuAccessMode::Enabled)`
+ #[inline]
+ pub fn enable_fpu(&mut self) {
+ self.set_fpu_access_mode(FpuAccessMode::Enabled)
+ }
+
+ /// Gets FPU access mode
+ #[inline]
+ pub fn fpu_access_mode() -> FpuAccessMode {
+ // NOTE(unsafe) atomic read operation with no side effects
+ let cpacr = unsafe { (*Self::PTR).cpacr.read() };
+
+ if cpacr & SCB_CPACR_FPU_MASK == SCB_CPACR_FPU_ENABLE | SCB_CPACR_FPU_USER {
+ FpuAccessMode::Enabled
+ } else if cpacr & SCB_CPACR_FPU_MASK == SCB_CPACR_FPU_ENABLE {
+ FpuAccessMode::Privileged
+ } else {
+ FpuAccessMode::Disabled
+ }
+ }
+
+ /// Sets FPU access mode
+ ///
+ /// *IMPORTANT* Any function that runs fully or partly with the FPU disabled must *not* take any
+ /// floating-point arguments or have any floating-point local variables. Because the compiler
+ /// might inline such a function into a caller that does have floating-point arguments or
+ /// variables, any such function must be also marked #[inline(never)].
+ #[inline]
+ pub fn set_fpu_access_mode(&mut self, mode: FpuAccessMode) {
+ let mut cpacr = self.cpacr.read() & !SCB_CPACR_FPU_MASK;
+ match mode {
+ FpuAccessMode::Disabled => (),
+ FpuAccessMode::Privileged => cpacr |= SCB_CPACR_FPU_ENABLE,
+ FpuAccessMode::Enabled => cpacr |= SCB_CPACR_FPU_ENABLE | SCB_CPACR_FPU_USER,
+ }
+ unsafe { self.cpacr.write(cpacr) }
+ }
+}
+
+impl SCB {
+ /// Returns the active exception number
+ #[inline]
+ pub fn vect_active() -> VectActive {
+ let icsr = unsafe { ptr::read(&(*SCB::PTR).icsr as *const _ as *const u32) };
+
+ match icsr as u8 {
+ 0 => VectActive::ThreadMode,
+ 2 => VectActive::Exception(Exception::NonMaskableInt),
+ 3 => VectActive::Exception(Exception::HardFault),
+ #[cfg(not(armv6m))]
+ 4 => VectActive::Exception(Exception::MemoryManagement),
+ #[cfg(not(armv6m))]
+ 5 => VectActive::Exception(Exception::BusFault),
+ #[cfg(not(armv6m))]
+ 6 => VectActive::Exception(Exception::UsageFault),
+ #[cfg(any(armv8m, native))]
+ 7 => VectActive::Exception(Exception::SecureFault),
+ 11 => VectActive::Exception(Exception::SVCall),
+ #[cfg(not(armv6m))]
+ 12 => VectActive::Exception(Exception::DebugMonitor),
+ 14 => VectActive::Exception(Exception::PendSV),
+ 15 => VectActive::Exception(Exception::SysTick),
+ irqn => VectActive::Interrupt { irqn: irqn - 16 },
+ }
+ }
+}
+
+/// Processor core exceptions (internal interrupts)
+#[derive(Clone, Copy, Debug, Eq, PartialEq)]
+#[cfg_attr(feature = "serde", derive(Serialize, Deserialize))]
+#[cfg_attr(feature = "std", derive(PartialOrd, Hash))]
+pub enum Exception {
+ /// Non maskable interrupt
+ NonMaskableInt,
+
+ /// Hard fault interrupt
+ HardFault,
+
+ /// Memory management interrupt (not present on Cortex-M0 variants)
+ #[cfg(not(armv6m))]
+ MemoryManagement,
+
+ /// Bus fault interrupt (not present on Cortex-M0 variants)
+ #[cfg(not(armv6m))]
+ BusFault,
+
+ /// Usage fault interrupt (not present on Cortex-M0 variants)
+ #[cfg(not(armv6m))]
+ UsageFault,
+
+ /// Secure fault interrupt (only on ARMv8-M)
+ #[cfg(any(armv8m, native))]
+ SecureFault,
+
+ /// SV call interrupt
+ SVCall,
+
+ /// Debug monitor interrupt (not present on Cortex-M0 variants)
+ #[cfg(not(armv6m))]
+ DebugMonitor,
+
+ /// Pend SV interrupt
+ PendSV,
+
+ /// System Tick interrupt
+ SysTick,
+}
+
+impl Exception {
+ /// Returns the IRQ number of this `Exception`
+ ///
+ /// The return value is always within the closed range `[-1, -14]`
+ #[inline]
+ pub fn irqn(self) -> i8 {
+ match self {
+ Exception::NonMaskableInt => -14,
+ Exception::HardFault => -13,
+ #[cfg(not(armv6m))]
+ Exception::MemoryManagement => -12,
+ #[cfg(not(armv6m))]
+ Exception::BusFault => -11,
+ #[cfg(not(armv6m))]
+ Exception::UsageFault => -10,
+ #[cfg(any(armv8m, native))]
+ Exception::SecureFault => -9,
+ Exception::SVCall => -5,
+ #[cfg(not(armv6m))]
+ Exception::DebugMonitor => -4,
+ Exception::PendSV => -2,
+ Exception::SysTick => -1,
+ }
+ }
+}
+
+/// Active exception number
+#[derive(Clone, Copy, Debug, Eq, PartialEq)]
+#[cfg_attr(feature = "serde", derive(Serialize, Deserialize))]
+#[cfg_attr(feature = "std", derive(PartialOrd, Hash))]
+pub enum VectActive {
+ /// Thread mode
+ ThreadMode,
+
+ /// Processor core exception (internal interrupts)
+ Exception(Exception),
+
+ /// Device specific exception (external interrupts)
+ Interrupt {
+ /// Interrupt number. This number is always within half open range `[0, 240)`
+ irqn: u8,
+ },
+}
+
+impl VectActive {
+ /// Converts a `byte` into `VectActive`
+ #[inline]
+ pub fn from(vect_active: u8) -> Option<Self> {
+ Some(match vect_active {
+ 0 => VectActive::ThreadMode,
+ 2 => VectActive::Exception(Exception::NonMaskableInt),
+ 3 => VectActive::Exception(Exception::HardFault),
+ #[cfg(not(armv6m))]
+ 4 => VectActive::Exception(Exception::MemoryManagement),
+ #[cfg(not(armv6m))]
+ 5 => VectActive::Exception(Exception::BusFault),
+ #[cfg(not(armv6m))]
+ 6 => VectActive::Exception(Exception::UsageFault),
+ #[cfg(any(armv8m, native))]
+ 7 => VectActive::Exception(Exception::SecureFault),
+ 11 => VectActive::Exception(Exception::SVCall),
+ #[cfg(not(armv6m))]
+ 12 => VectActive::Exception(Exception::DebugMonitor),
+ 14 => VectActive::Exception(Exception::PendSV),
+ 15 => VectActive::Exception(Exception::SysTick),
+ irqn if irqn >= 16 => VectActive::Interrupt { irqn },
+ _ => return None,
+ })
+ }
+}
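Both `vect_active` and `VectActive::from` decode the low byte of ICSR: exception numbers 0..16 are core exceptions, and anything at or above 16 is a device interrupt offset by 16 (the same offset `Exception::irqn` inverts for core exceptions). A minimal sketch of that mapping (hypothetical helper, not crate API):

```rust
// ISR/VECTACTIVE number to device IRQ number: irqn = isr_number - 16.
// Numbers below 16 are core exceptions (or thread mode) and have no IRQ number.
fn isr_to_irqn(isr: u8) -> Option<u8> {
    if isr >= 16 {
        Some(isr - 16)
    } else {
        None
    }
}

fn main() {
    assert_eq!(isr_to_irqn(16), Some(0)); // first device interrupt
    assert_eq!(isr_to_irqn(15), None);    // SysTick is a core exception
    assert_eq!(isr_to_irqn(0), None);     // thread mode
}
```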
+
+#[cfg(not(armv6m))]
+mod scb_consts {
+ pub const SCB_CCR_IC_MASK: u32 = 1 << 17;
+ pub const SCB_CCR_DC_MASK: u32 = 1 << 16;
+}
+
+#[cfg(not(armv6m))]
+use self::scb_consts::*;
+
+#[cfg(not(armv6m))]
+impl SCB {
+ /// Enables I-cache if currently disabled.
+ ///
+ /// This operation first invalidates the entire I-cache.
+ #[inline]
+ pub fn enable_icache(&mut self) {
+ // Don't do anything if I-cache is already enabled
+ if Self::icache_enabled() {
+ return;
+ }
+
+ // NOTE(unsafe): No races as all CBP registers are write-only and stateless
+ let mut cbp = unsafe { CBP::new() };
+
+ // Invalidate I-cache
+ cbp.iciallu();
+
+ // Enable I-cache
+ extern "C" {
+ // see asm-v7m.s
+ fn __enable_icache();
+ }
+
+ // NOTE(unsafe): The asm routine manages exclusive access to the SCB
+ // registers and applies the proper barriers; it is technically safe on
+ // its own, and is only `unsafe` here because it's `extern "C"`.
+ unsafe {
+ __enable_icache();
+ }
+ }
+
+ /// Disables I-cache if currently enabled.
+ ///
+ /// This operation invalidates the entire I-cache after disabling.
+ #[inline]
+ pub fn disable_icache(&mut self) {
+ // Don't do anything if I-cache is already disabled
+ if !Self::icache_enabled() {
+ return;
+ }
+
+ // NOTE(unsafe): No races as all CBP registers are write-only and stateless
+ let mut cbp = unsafe { CBP::new() };
+
+ // Disable I-cache
+ // NOTE(unsafe): We have synchronised access by &mut self
+ unsafe { self.ccr.modify(|r| r & !SCB_CCR_IC_MASK) };
+
+ // Invalidate I-cache
+ cbp.iciallu();
+
+ crate::asm::dsb();
+ crate::asm::isb();
+ }
+
+ /// Returns whether the I-cache is currently enabled.
+ #[inline(always)]
+ pub fn icache_enabled() -> bool {
+ crate::asm::dsb();
+ crate::asm::isb();
+
+ // NOTE(unsafe): atomic read with no side effects
+ unsafe { (*Self::PTR).ccr.read() & SCB_CCR_IC_MASK == SCB_CCR_IC_MASK }
+ }
+
+ /// Invalidates the entire I-cache.
+ #[inline]
+ pub fn invalidate_icache(&mut self) {
+ // NOTE(unsafe): No races as all CBP registers are write-only and stateless
+ let mut cbp = unsafe { CBP::new() };
+
+ // Invalidate I-cache
+ cbp.iciallu();
+
+ crate::asm::dsb();
+ crate::asm::isb();
+ }
+
+ /// Enables D-cache if currently disabled.
+ ///
+ /// This operation first invalidates the entire D-cache, ensuring it does
+ /// not contain stale values before being enabled.
+ #[inline]
+ pub fn enable_dcache(&mut self, cpuid: &mut CPUID) {
+ // Don't do anything if D-cache is already enabled
+ if Self::dcache_enabled() {
+ return;
+ }
+
+ // Invalidate anything currently in the D-cache
+ unsafe { self.invalidate_dcache(cpuid) };
+
+ // Now turn on the D-cache
+ extern "C" {
+ // see asm-v7m.s
+ fn __enable_dcache();
+ }
+
+ // NOTE(unsafe): The asm routine manages exclusive access to the SCB
+ // registers and applies the proper barriers; it is technically safe on
+ // its own, and is only `unsafe` here because it's `extern "C"`.
+ unsafe {
+ __enable_dcache();
+ }
+ }
+
+ /// Disables D-cache if currently enabled.
+ ///
+ /// This operation subsequently cleans and invalidates the entire D-cache,
+ /// ensuring all contents are safely written back to main memory after disabling.
+ #[inline]
+ pub fn disable_dcache(&mut self, cpuid: &mut CPUID) {
+ // Don't do anything if D-cache is already disabled
+ if !Self::dcache_enabled() {
+ return;
+ }
+
+ // Turn off the D-cache
+ // NOTE(unsafe): We have synchronised access by &mut self
+ unsafe { self.ccr.modify(|r| r & !SCB_CCR_DC_MASK) };
+
+ // Clean and invalidate whatever was left in it
+ self.clean_invalidate_dcache(cpuid);
+ }
+
+ /// Returns whether the D-cache is currently enabled.
+ #[inline]
+ pub fn dcache_enabled() -> bool {
+ crate::asm::dsb();
+ crate::asm::isb();
+
+ // NOTE(unsafe) atomic read with no side effects
+ unsafe { (*Self::PTR).ccr.read() & SCB_CCR_DC_MASK == SCB_CCR_DC_MASK }
+ }
+
+ /// Invalidates the entire D-cache.
+ ///
+ /// Note that calling this while the dcache is enabled will probably wipe out the
+ /// stack, depending on optimisations, therefore breaking returning to the call point.
+ ///
+ /// It's used immediately before enabling the dcache, but not exported publicly.
+ #[inline]
+ unsafe fn invalidate_dcache(&mut self, cpuid: &mut CPUID) {
+ // NOTE(unsafe): No races as all CBP registers are write-only and stateless
+ let mut cbp = CBP::new();
+
+ // Read number of sets and ways
+ let (sets, ways) = cpuid.cache_num_sets_ways(0, CsselrCacheType::DataOrUnified);
+
+ // Invalidate entire D-cache
+ for set in 0..sets {
+ for way in 0..ways {
+ cbp.dcisw(set, way);
+ }
+ }
+
+ crate::asm::dsb();
+ crate::asm::isb();
+ }
+
+ /// Cleans the entire D-cache.
+ ///
+ /// This function causes everything in the D-cache to be written back to main memory,
+ /// overwriting whatever is already there.
+ #[inline]
+ pub fn clean_dcache(&mut self, cpuid: &mut CPUID) {
+ // NOTE(unsafe): No races as all CBP registers are write-only and stateless
+ let mut cbp = unsafe { CBP::new() };
+
+ // Read number of sets and ways
+ let (sets, ways) = cpuid.cache_num_sets_ways(0, CsselrCacheType::DataOrUnified);
+
+ for set in 0..sets {
+ for way in 0..ways {
+ cbp.dccsw(set, way);
+ }
+ }
+
+ crate::asm::dsb();
+ crate::asm::isb();
+ }
+
+ /// Cleans and invalidates the entire D-cache.
+ ///
+ /// This function causes everything in the D-cache to be written back to main memory,
+ /// and then marks the entire D-cache as invalid, causing future reads to first fetch
+ /// from main memory.
+ #[inline]
+ pub fn clean_invalidate_dcache(&mut self, cpuid: &mut CPUID) {
+ // NOTE(unsafe): No races as all CBP registers are write-only and stateless
+ let mut cbp = unsafe { CBP::new() };
+
+ // Read number of sets and ways
+ let (sets, ways) = cpuid.cache_num_sets_ways(0, CsselrCacheType::DataOrUnified);
+
+ for set in 0..sets {
+ for way in 0..ways {
+ cbp.dccisw(set, way);
+ }
+ }
+
+ crate::asm::dsb();
+ crate::asm::isb();
+ }
+
+ /// Invalidates D-cache by address.
+ ///
+ /// * `addr`: The address to invalidate, which must be cache-line aligned.
+ /// * `size`: Number of bytes to invalidate, which must be a multiple of the cache line size.
+ ///
+ /// Invalidates D-cache cache lines, starting from the first line containing `addr`,
+ /// finishing once at least `size` bytes have been invalidated.
+ ///
+ /// Invalidation causes the next read access to memory to be fetched from main memory instead
+ /// of the cache.
+ ///
+ /// # Cache Line Sizes
+ ///
+ /// Cache line sizes vary by core. For all Cortex-M7 cores, the cache line size is fixed
+ /// to 32 bytes, which means `addr` must be 32-byte aligned and `size` must be a multiple
+ /// of 32. At the time of writing, no other Cortex-M cores have data caches.
+ ///
+ /// If `addr` is not cache-line aligned, or `size` is not a multiple of the cache line size,
+ /// other data before or after the desired memory would also be invalidated, which can very
+ /// easily cause memory corruption and undefined behaviour.
+ ///
+ /// # Safety
+ ///
+ /// After invalidating, the next read of invalidated data will be from main memory. This may
+ /// cause recent writes to be lost, potentially including writes that initialized objects.
+ /// Therefore, this method may cause uninitialized memory or invalid values to be read,
+ /// resulting in undefined behaviour. You must ensure that main memory contains valid and
+ /// initialized values before invalidating.
+ ///
+ /// `addr` **must** be aligned to the size of the cache lines, and `size` **must** be a
+ /// multiple of the cache line size, otherwise this function will invalidate other memory,
+ /// easily leading to memory corruption and undefined behaviour. This precondition is checked
+ /// in debug builds using a `debug_assert!()`, but not checked in release builds to avoid
+ /// a runtime-dependent `panic!()` call.
+ #[inline]
+ pub unsafe fn invalidate_dcache_by_address(&mut self, addr: usize, size: usize) {
+ // No-op zero sized operations
+ if size == 0 {
+ return;
+ }
+
+ // NOTE(unsafe): No races as all CBP registers are write-only and stateless
+ let mut cbp = CBP::new();
+
+ // dminline is log2(num words), so 2**dminline * 4 gives size in bytes
+ let dminline = CPUID::cache_dminline();
+ let line_size = (1 << dminline) * 4;
+
+ debug_assert!((addr & (line_size - 1)) == 0);
+ debug_assert!((size & (line_size - 1)) == 0);
+
+ crate::asm::dsb();
+
+ // Find number of cache lines to invalidate
+ let num_lines = ((size - 1) / line_size) + 1;
+
+ // Compute address of first cache line
+ let mask = 0xFFFF_FFFF - (line_size - 1);
+ let mut addr = addr & mask;
+
+ for _ in 0..num_lines {
+ cbp.dcimvac(addr as u32);
+ addr += line_size;
+ }
+
+ crate::asm::dsb();
+ crate::asm::isb();
+ }
+
+ /// Invalidates an object from the D-cache.
+ ///
+ /// * `obj`: The object to invalidate.
+ ///
+ /// Invalidates D-cache starting from the first cache line containing `obj`,
+ /// continuing to invalidate cache lines until all of `obj` has been invalidated.
+ ///
+ /// Invalidation causes the next read access to memory to be fetched from main memory instead
+ /// of the cache.
+ ///
+ /// # Cache Line Sizes
+ ///
+ /// Cache line sizes vary by core. For all Cortex-M7 cores, the cache line size is fixed
+ /// to 32 bytes, which means `obj` must be 32-byte aligned, and its size must be a multiple
+ /// of 32 bytes. At the time of writing, no other Cortex-M cores have data caches.
+ ///
+ /// If `obj` is not cache-line aligned, or its size is not a multiple of the cache line size,
+ /// other data before or after the desired memory would also be invalidated, which can very
+ /// easily cause memory corruption and undefined behaviour.
+ ///
+ /// # Safety
+ ///
+ /// After invalidating, `obj` will be read from main memory on next access. This may cause
+ /// recent writes to `obj` to be lost, potentially including the write that initialized it.
+ /// Therefore, this method may cause uninitialized memory or invalid values to be read,
+ /// resulting in undefined behaviour. You must ensure that main memory contains a valid and
+ /// initialized value for T before invalidating `obj`.
+ ///
+ /// `obj` **must** be aligned to the size of the cache lines, and its size **must** be a
+ /// multiple of the cache line size, otherwise this function will invalidate other memory,
+ /// easily leading to memory corruption and undefined behaviour. This precondition is checked
+ /// in debug builds using a `debug_assert!()`, but not checked in release builds to avoid
+ /// a runtime-dependent `panic!()` call.
+ #[inline]
+ pub unsafe fn invalidate_dcache_by_ref<T>(&mut self, obj: &mut T) {
+ self.invalidate_dcache_by_address(obj as *const T as usize, core::mem::size_of::<T>());
+ }
+
+ /// Invalidates a slice from the D-cache.
+ ///
+ /// * `slice`: The slice to invalidate.
+ ///
+ /// Invalidates D-cache starting from the first cache line containing members of `slice`,
+ /// continuing to invalidate cache lines until all of `slice` has been invalidated.
+ ///
+ /// Invalidation causes the next read access to memory to be fetched from main memory instead
+ /// of the cache.
+ ///
+ /// # Cache Line Sizes
+ ///
+ /// Cache line sizes vary by core. For all Cortex-M7 cores, the cache line size is fixed
+ /// to 32 bytes, which means `slice` must be 32-byte aligned, and its size must be a multiple
+ /// of 32 bytes. At the time of writing, no other Cortex-M cores have data caches.
+ ///
+ /// If `slice` is not cache-line aligned, or its size is not a multiple of the cache line size,
+ /// other data before or after the desired memory would also be invalidated, which can very
+ /// easily cause memory corruption and undefined behaviour.
+ ///
+ /// # Safety
+ ///
+ /// After invalidating, `slice` will be read from main memory on next access. This may cause
+ /// recent writes to `slice` to be lost, potentially including the write that initialized it.
+ /// Therefore, this method may cause uninitialized memory or invalid values to be read,
+ /// resulting in undefined behaviour. You must ensure that main memory contains valid and
+ /// initialized values for T before invalidating `slice`.
+ ///
+ /// `slice` **must** be aligned to the size of the cache lines, and its size **must** be a
+ /// multiple of the cache line size, otherwise this function will invalidate other memory,
+ /// easily leading to memory corruption and undefined behaviour. This precondition is checked
+ /// in debug builds using a `debug_assert!()`, but not checked in release builds to avoid
+ /// a runtime-dependent `panic!()` call.
+ #[inline]
+ pub unsafe fn invalidate_dcache_by_slice<T>(&mut self, slice: &mut [T]) {
+ self.invalidate_dcache_by_address(
+ slice.as_ptr() as usize,
+ slice.len() * core::mem::size_of::<T>(),
+ );
+ }
+
+ /// Cleans D-cache by address.
+ ///
+ /// * `addr`: The address to start cleaning at.
+ /// * `size`: The number of bytes to clean.
+ ///
+ /// Cleans D-cache cache lines, starting from the first line containing `addr`,
+ /// finishing once at least `size` bytes have been invalidated.
+ ///
+ /// Cleaning the cache causes whatever data is present in the cache to be immediately written
+ /// to main memory, overwriting whatever was in main memory.
+ ///
+ /// # Cache Line Sizes
+ ///
+ /// Cache line sizes vary by core. For all Cortex-M7 cores, the cache line size is fixed
+ /// to 32 bytes, which means `addr` should generally be 32-byte aligned and `size` should be a
+ /// multiple of 32. At the time of writing, no other Cortex-M cores have data caches.
+ ///
+ /// If `addr` is not cache-line aligned, or `size` is not a multiple of the cache line size,
+ /// other data before or after the desired memory will also be cleaned. From the point of view
+ /// of the core executing this function, memory remains consistent, so this is not unsound,
+ /// but is worth knowing about.
+ #[inline]
+ pub fn clean_dcache_by_address(&mut self, addr: usize, size: usize) {
+ // No-op zero sized operations
+ if size == 0 {
+ return;
+ }
+
+ // NOTE(unsafe): No races as all CBP registers are write-only and stateless
+ let mut cbp = unsafe { CBP::new() };
+
+ crate::asm::dsb();
+
+ let dminline = CPUID::cache_dminline();
+ let line_size = (1 << dminline) * 4;
+ let num_lines = ((size - 1) / line_size) + 1;
+
+ let mask = 0xFFFF_FFFF - (line_size - 1);
+ let mut addr = addr & mask;
+
+ for _ in 0..num_lines {
+ cbp.dccmvac(addr as u32);
+ addr += line_size;
+ }
+
+ crate::asm::dsb();
+ crate::asm::isb();
+ }
+
+ /// Cleans an object from the D-cache.
+ ///
+ /// * `obj`: The object to clean.
+ ///
+ /// Cleans D-cache starting from the first cache line containing `obj`,
+ /// continuing to clean cache lines until all of `obj` has been cleaned.
+ ///
+ /// It is recommended that `obj` is both aligned to the cache line size and a multiple of
+ /// the cache line size long, otherwise surrounding data will also be cleaned.
+ ///
+ /// Cleaning the cache causes whatever data is present in the cache to be immediately written
+ /// to main memory, overwriting whatever was in main memory.
+ #[inline]
+ pub fn clean_dcache_by_ref<T>(&mut self, obj: &T) {
+ self.clean_dcache_by_address(obj as *const T as usize, core::mem::size_of::<T>());
+ }
+
+ /// Cleans a slice from D-cache.
+ ///
+ /// * `slice`: The slice to clean.
+ ///
+ /// Cleans D-cache starting from the first cache line containing members of `slice`,
+ /// continuing to clean cache lines until all of `slice` has been cleaned.
+ ///
+ /// It is recommended that `slice` is both aligned to the cache line size and a multiple of
+ /// the cache line size long, otherwise surrounding data will also be cleaned.
+ ///
+ /// Cleaning the cache causes whatever data is present in the cache to be immediately written
+ /// to main memory, overwriting whatever was in main memory.
+ #[inline]
+ pub fn clean_dcache_by_slice<T>(&mut self, slice: &[T]) {
+ self.clean_dcache_by_address(
+ slice.as_ptr() as usize,
+ slice.len() * core::mem::size_of::<T>(),
+ );
+ }
+
+ /// Cleans and invalidates D-cache by address.
+ ///
+ /// * `addr`: The address to clean and invalidate.
+ /// * `size`: The number of bytes to clean and invalidate.
+ ///
+ /// Cleans and invalidates D-cache starting from the first cache line containing `addr`,
+ /// finishing once at least `size` bytes have been cleaned and invalidated.
+ ///
+ /// It is recommended that `addr` is aligned to the cache line size and `size` is a multiple of
+ /// the cache line size, otherwise surrounding data will also be cleaned and invalidated.
+ ///
+ /// Cleaning and invalidating causes data in the D-cache to be written back to main memory,
+ /// and then marks that data in the D-cache as invalid, causing future reads to first fetch
+ /// from main memory.
+ #[inline]
+ pub fn clean_invalidate_dcache_by_address(&mut self, addr: usize, size: usize) {
+ // No-op zero sized operations
+ if size == 0 {
+ return;
+ }
+
+ // NOTE(unsafe): No races as all CBP registers are write-only and stateless
+ let mut cbp = unsafe { CBP::new() };
+
+ crate::asm::dsb();
+
+ // Cache lines are fixed to 32 bytes on Cortex-M7 and not present in earlier Cortex-M
+ const LINESIZE: usize = 32;
+ let num_lines = ((size - 1) / LINESIZE) + 1;
+
+ let mut addr = addr & 0xFFFF_FFE0;
+
+ for _ in 0..num_lines {
+ cbp.dccimvac(addr as u32);
+ addr += LINESIZE;
+ }
+
+ crate::asm::dsb();
+ crate::asm::isb();
+ }
+}
+
+const SCB_SCR_SLEEPDEEP: u32 = 0x1 << 2;
+
+impl SCB {
+ /// Set the SLEEPDEEP bit in the SCR register
+ #[inline]
+ pub fn set_sleepdeep(&mut self) {
+ unsafe {
+ self.scr.modify(|scr| scr | SCB_SCR_SLEEPDEEP);
+ }
+ }
+
+ /// Clear the SLEEPDEEP bit in the SCR register
+ #[inline]
+ pub fn clear_sleepdeep(&mut self) {
+ unsafe {
+ self.scr.modify(|scr| scr & !SCB_SCR_SLEEPDEEP);
+ }
+ }
+}
+
+const SCB_SCR_SLEEPONEXIT: u32 = 0x1 << 1;
+
+impl SCB {
+ /// Set the SLEEPONEXIT bit in the SCR register
+ #[inline]
+ pub fn set_sleeponexit(&mut self) {
+ unsafe {
+ self.scr.modify(|scr| scr | SCB_SCR_SLEEPONEXIT);
+ }
+ }
+
+ /// Clear the SLEEPONEXIT bit in the SCR register
+ #[inline]
+ pub fn clear_sleeponexit(&mut self) {
+ unsafe {
+ self.scr.modify(|scr| scr & !SCB_SCR_SLEEPONEXIT);
+ }
+ }
+}
+
+const SCB_AIRCR_VECTKEY: u32 = 0x05FA << 16;
+const SCB_AIRCR_PRIGROUP_MASK: u32 = 0x7 << 8;
+const SCB_AIRCR_SYSRESETREQ: u32 = 1 << 2;
+
+impl SCB {
+ /// Initiate a system reset request to reset the MCU
+ #[inline]
+ pub fn sys_reset() -> ! {
+ crate::asm::dsb();
+ unsafe {
+ (*Self::PTR).aircr.modify(
+ |r| {
+ SCB_AIRCR_VECTKEY | // otherwise the write is ignored
+ r & SCB_AIRCR_PRIGROUP_MASK | // keep priority group unchanged
+ SCB_AIRCR_SYSRESETREQ
+ }, // set the bit
+ )
+ };
+ crate::asm::dsb();
+ loop {
+ // wait for the reset
+ crate::asm::nop(); // avoid rust-lang/rust#28728
+ }
+ }
+}
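The AIRCR value composed in `sys_reset` can be verified on the host. This is an illustrative sketch (the constant names mirror the ones above): the VECTKEY field must be `0x05FA` in the upper half-word or the hardware ignores the write, the PRIGROUP field is carried over unchanged, and bit 2 requests the reset.

```rust
// Sketch of the read-modify-write performed on AIRCR by sys_reset.
const SCB_AIRCR_VECTKEY: u32 = 0x05FA << 16;
const SCB_AIRCR_PRIGROUP_MASK: u32 = 0x7 << 8;
const SCB_AIRCR_SYSRESETREQ: u32 = 1 << 2;

fn aircr_reset_value(current: u32) -> u32 {
    SCB_AIRCR_VECTKEY // otherwise the write is ignored
        | (current & SCB_AIRCR_PRIGROUP_MASK) // keep priority group unchanged
        | SCB_AIRCR_SYSRESETREQ // request the reset
}

fn main() {
    // With PRIGROUP = 0b011 currently set, only that field survives the write.
    assert_eq!(aircr_reset_value(0xFA05_0304), 0x05FA_0304);
    // SYSRESETREQ is always set in the written value.
    assert_eq!(aircr_reset_value(0) & SCB_AIRCR_SYSRESETREQ, 1 << 2);
}
```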
+
+const SCB_ICSR_PENDSVSET: u32 = 1 << 28;
+const SCB_ICSR_PENDSVCLR: u32 = 1 << 27;
+
+const SCB_ICSR_PENDSTSET: u32 = 1 << 26;
+const SCB_ICSR_PENDSTCLR: u32 = 1 << 25;
+
+impl SCB {
+ /// Set the PENDSVSET bit in the ICSR register which will pend the PendSV interrupt
+ #[inline]
+ pub fn set_pendsv() {
+ unsafe {
+ (*Self::PTR).icsr.write(SCB_ICSR_PENDSVSET);
+ }
+ }
+
+ /// Check if PENDSVSET bit in the ICSR register is set meaning PendSV interrupt is pending
+ #[inline]
+ pub fn is_pendsv_pending() -> bool {
+ unsafe { (*Self::PTR).icsr.read() & SCB_ICSR_PENDSVSET == SCB_ICSR_PENDSVSET }
+ }
+
+ /// Set the PENDSVCLR bit in the ICSR register which will clear a pending PendSV interrupt
+ #[inline]
+ pub fn clear_pendsv() {
+ unsafe {
+ (*Self::PTR).icsr.write(SCB_ICSR_PENDSVCLR);
+ }
+ }
+
+ /// Set the PENDSTSET bit in the ICSR register which will pend a SysTick interrupt
+ #[inline]
+ pub fn set_pendst() {
+ unsafe {
+ (*Self::PTR).icsr.write(SCB_ICSR_PENDSTSET);
+ }
+ }
+
+ /// Check if PENDSTSET bit in the ICSR register is set meaning SysTick interrupt is pending
+ #[inline]
+ pub fn is_pendst_pending() -> bool {
+ unsafe { (*Self::PTR).icsr.read() & SCB_ICSR_PENDSTSET == SCB_ICSR_PENDSTSET }
+ }
+
+ /// Set the PENDSTCLR bit in the ICSR register which will clear a pending SysTick interrupt
+ #[inline]
+ pub fn clear_pendst() {
+ unsafe {
+ (*Self::PTR).icsr.write(SCB_ICSR_PENDSTCLR);
+ }
+ }
+}
+
+/// System handlers, exceptions with configurable priority
+#[derive(Clone, Copy, Debug, Eq, PartialEq)]
+#[repr(u8)]
+pub enum SystemHandler {
+ // NonMaskableInt, // priority is fixed
+ // HardFault, // priority is fixed
+ /// Memory management interrupt (not present on Cortex-M0 variants)
+ #[cfg(not(armv6m))]
+ MemoryManagement = 4,
+
+ /// Bus fault interrupt (not present on Cortex-M0 variants)
+ #[cfg(not(armv6m))]
+ BusFault = 5,
+
+ /// Usage fault interrupt (not present on Cortex-M0 variants)
+ #[cfg(not(armv6m))]
+ UsageFault = 6,
+
+ /// Secure fault interrupt (only on ARMv8-M)
+ #[cfg(any(armv8m, native))]
+ SecureFault = 7,
+
+ /// SV call interrupt
+ SVCall = 11,
+
+ /// Debug monitor interrupt (not present on Cortex-M0 variants)
+ #[cfg(not(armv6m))]
+ DebugMonitor = 12,
+
+ /// Pend SV interrupt
+ PendSV = 14,
+
+ /// System Tick interrupt
+ SysTick = 15,
+}
+
+impl SCB {
+ /// Returns the hardware priority of `system_handler`
+ ///
+ /// *NOTE*: Hardware priority does not exactly match logical priority levels. See
+ /// [`NVIC.get_priority`](struct.NVIC.html#method.get_priority) for more details.
+ #[inline]
+ pub fn get_priority(system_handler: SystemHandler) -> u8 {
+ let index = system_handler as u8;
+
+ #[cfg(not(armv6m))]
+ {
+ // NOTE(unsafe) atomic read with no side effects
+
+ // NOTE(unsafe): Index is bounded to [4,15] by SystemHandler design.
+ // TODO: Review it after rust-lang/rust/issues/13926 will be fixed.
+ let priority_ref = unsafe { (*Self::PTR).shpr.get_unchecked(usize::from(index - 4)) };
+
+ priority_ref.read()
+ }
+
+ #[cfg(armv6m)]
+ {
+ // NOTE(unsafe) atomic read with no side effects
+
+ // NOTE(unsafe): Index is bounded to [11,15] by SystemHandler design.
+ // TODO: Review it after rust-lang/rust/issues/13926 will be fixed.
+ let priority_ref = unsafe {
+ (*Self::PTR)
+ .shpr
+ .get_unchecked(usize::from((index - 8) / 4))
+ };
+
+ let shpr = priority_ref.read();
+ let prio = (shpr >> (8 * (index % 4))) & 0x0000_00ff;
+ prio as u8
+ }
+ }
+
+ /// Sets the hardware priority of `system_handler` to `prio`
+ ///
+ /// *NOTE*: Hardware priority does not exactly match logical priority levels. See
+ /// [`NVIC.get_priority`](struct.NVIC.html#method.get_priority) for more details.
+ ///
+ /// On ARMv6-M, updating a system handler priority requires a read-modify-write operation. On
+ /// ARMv7-M, the operation is performed in a single, atomic write operation.
+ ///
+ /// # Unsafety
+ ///
+ /// Changing priority levels can break priority-based critical sections (see
+ /// [`register::basepri`](crate::register::basepri)) and compromise memory safety.
+ #[inline]
+ pub unsafe fn set_priority(&mut self, system_handler: SystemHandler, prio: u8) {
+ let index = system_handler as u8;
+
+ #[cfg(not(armv6m))]
+ {
+ // NOTE(unsafe): Index is bounded to [4,15] by SystemHandler design.
+ // TODO: Review it after rust-lang/rust/issues/13926 will be fixed.
+ let priority_ref = (*Self::PTR).shpr.get_unchecked(usize::from(index - 4));
+
+ priority_ref.write(prio)
+ }
+
+ #[cfg(armv6m)]
+ {
+ // NOTE(unsafe): Index is bounded to [11,15] by SystemHandler design.
+ // TODO: Review it after rust-lang/rust/issues/13926 will be fixed.
+ let priority_ref = (*Self::PTR)
+ .shpr
+ .get_unchecked(usize::from((index - 8) / 4));
+
+ priority_ref.modify(|value| {
+ let shift = 8 * (index % 4);
+ let mask = 0x0000_00ff << shift;
+ let prio = u32::from(prio) << shift;
+
+ (value & !mask) | prio
+ });
+ }
+ }
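The ARMv6-M branch above packs four 8-bit priorities into each 32-bit SHPR register. A host-side sketch of that indexing and masking (handler indices as in `SystemHandler`, helper names hypothetical):

```rust
// Handler `index` (11..=15 on ARMv6-M) maps to 32-bit register
// (index - 8) / 4 and byte lane 8 * (index % 4).
fn shpr_slot(index: u8) -> (usize, u32) {
    ((usize::from(index) - 8) / 4, 8 * u32::from(index % 4))
}

// Read-modify-write of one byte lane, as done in set_priority.
fn set_prio(shpr: u32, index: u8, prio: u8) -> u32 {
    let (_reg, shift) = shpr_slot(index);
    let mask = 0xFF << shift;
    (shpr & !mask) | (u32::from(prio) << shift)
}

fn main() {
    // SVCall (11) sits in the first register's top byte; PendSV (14) and
    // SysTick (15) share the next register in lanes 2 and 3.
    assert_eq!(shpr_slot(11), (0, 24));
    assert_eq!(shpr_slot(14), (1, 16));
    assert_eq!(shpr_slot(15), (1, 24));
    // Writing PendSV's lane leaves the other lanes untouched.
    assert_eq!(set_prio(0xAABB_CCDD, 14, 0x40), 0xAA40_CCDD);
}
```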
+
+ /// Return the bit position of the exception enable bit in the SHCSR register
+ #[inline]
+ #[cfg(not(any(armv6m, armv8m_base)))]
+ fn shcsr_enable_shift(exception: Exception) -> Option<u32> {
+ match exception {
+ Exception::MemoryManagement => Some(16),
+ Exception::BusFault => Some(17),
+ Exception::UsageFault => Some(18),
+ #[cfg(armv8m_main)]
+ Exception::SecureFault => Some(19),
+ _ => None,
+ }
+ }
+
+ /// Enable the exception
+ ///
+ /// If the exception is enabled, when the exception is triggered, the exception handler will be executed instead of the
+ /// HardFault handler.
+ /// This function is only allowed on the following exceptions:
+ /// * `MemoryManagement`
+ /// * `BusFault`
+ /// * `UsageFault`
+ /// * `SecureFault` (can only be enabled from Secure state)
+ ///
+ /// Calling this function with any other exception will do nothing.
+ #[inline]
+ #[cfg(not(any(armv6m, armv8m_base)))]
+ pub fn enable(&mut self, exception: Exception) {
+ if let Some(shift) = SCB::shcsr_enable_shift(exception) {
+ // The mutable reference to SCB makes sure that only this code is currently modifying
+ // the register.
+ unsafe { self.shcsr.modify(|value| value | (1 << shift)) }
+ }
+ }
+
+ /// Disable the exception
+ ///
+ /// If the exception is disabled, when the exception is triggered, the HardFault handler will be executed instead of the
+ /// exception handler.
+ /// This function is only allowed on the following exceptions:
+ /// * `MemoryManagement`
+ /// * `BusFault`
+ /// * `UsageFault`
+ /// * `SecureFault` (can not be changed from Non-secure state)
+ ///
+ /// Calling this function with any other exception will do nothing.
+ #[inline]
+ #[cfg(not(any(armv6m, armv8m_base)))]
+ pub fn disable(&mut self, exception: Exception) {
+ if let Some(shift) = SCB::shcsr_enable_shift(exception) {
+ // The mutable reference to SCB makes sure that only this code is currently modifying
+ // the register.
+ unsafe { self.shcsr.modify(|value| value & !(1 << shift)) }
+ }
+ }
+
+ /// Check if an exception is enabled
+ ///
+ /// This function is only allowed on the following exceptions:
+ /// * `MemoryManagement`
+ /// * `BusFault`
+ /// * `UsageFault`
+ /// * `SecureFault` (can not be read from Non-secure state)
+ ///
+ /// Calling this function with any other exception will return `false`.
+ #[inline]
+ #[cfg(not(any(armv6m, armv8m_base)))]
+ pub fn is_enabled(&self, exception: Exception) -> bool {
+ if let Some(shift) = SCB::shcsr_enable_shift(exception) {
+ (self.shcsr.read() & (1 << shift)) > 0
+ } else {
+ false
+ }
+ }
+}
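The SHCSR enable-bit layout consulted by `enable`/`disable` can be sketched on the host. This is an illustrative model only (string keys stand in for the `Exception` enum): MEMFAULTENA is bit 16, BUSFAULTENA bit 17, USGFAULTENA bit 18, and on ARMv8-M Mainline SECUREFAULTENA is bit 19.

```rust
// Host-side model of shcsr_enable_shift and the enable() bit-set.
fn shcsr_enable_shift(exception: &str) -> Option<u32> {
    match exception {
        "MemoryManagement" => Some(16),
        "BusFault" => Some(17),
        "UsageFault" => Some(18),
        // Fixed-priority exceptions (HardFault, NMI, ...) have no enable bit.
        _ => None,
    }
}

fn main() {
    let mut shcsr = 0u32;
    for ex in ["MemoryManagement", "BusFault", "UsageFault", "SysTick"] {
        if let Some(shift) = shcsr_enable_shift(ex) {
            shcsr |= 1 << shift; // SysTick falls through and does nothing
        }
    }
    assert_eq!(shcsr, 0x0007_0000);
}
```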
diff --git a/src/rust/vendor/cortex-m/src/peripheral/syst.rs b/src/rust/vendor/cortex-m/src/peripheral/syst.rs
new file mode 100644
index 000000000..345acc2ff
--- /dev/null
+++ b/src/rust/vendor/cortex-m/src/peripheral/syst.rs
@@ -0,0 +1,185 @@
+//! SysTick: System Timer
+
+use volatile_register::{RO, RW};
+
+use crate::peripheral::SYST;
+
+/// Register block
+#[repr(C)]
+pub struct RegisterBlock {
+ /// Control and Status
+ pub csr: RW<u32>,
+ /// Reload Value
+ pub rvr: RW<u32>,
+ /// Current Value
+ pub cvr: RW<u32>,
+ /// Calibration Value
+ pub calib: RO<u32>,
+}
+
+/// SysTick clock source
+#[derive(Clone, Copy, Debug, PartialEq, Eq)]
+pub enum SystClkSource {
+ /// Core-provided clock
+ Core,
+ /// External reference clock
+ External,
+}
+
+const SYST_COUNTER_MASK: u32 = 0x00ff_ffff;
+
+const SYST_CSR_ENABLE: u32 = 1 << 0;
+const SYST_CSR_TICKINT: u32 = 1 << 1;
+const SYST_CSR_CLKSOURCE: u32 = 1 << 2;
+const SYST_CSR_COUNTFLAG: u32 = 1 << 16;
+
+const SYST_CALIB_SKEW: u32 = 1 << 30;
+const SYST_CALIB_NOREF: u32 = 1 << 31;
+
+impl SYST {
+ /// Clears current value to 0
+ ///
+ /// After calling `clear_current()`, the next call to `has_wrapped()` will return `false`.
+ #[inline]
+ pub fn clear_current(&mut self) {
+ unsafe { self.cvr.write(0) }
+ }
+
+ /// Disables counter
+ #[inline]
+ pub fn disable_counter(&mut self) {
+ unsafe { self.csr.modify(|v| v & !SYST_CSR_ENABLE) }
+ }
+
+ /// Disables SysTick interrupt
+ #[inline]
+ pub fn disable_interrupt(&mut self) {
+ unsafe { self.csr.modify(|v| v & !SYST_CSR_TICKINT) }
+ }
+
+ /// Enables counter
+ ///
+ /// *NOTE* The reference manual indicates that:
+ ///
+ /// "The SysTick counter reload and current value are undefined at reset, the correct
+ /// initialization sequence for the SysTick counter is:
+ ///
+ /// - Program reload value
+ /// - Clear current value
+ /// - Program Control and Status register"
+ ///
+ /// The sequence translates to `self.set_reload(x); self.clear_current(); self.enable_counter()`
+ #[inline]
+ pub fn enable_counter(&mut self) {
+ unsafe { self.csr.modify(|v| v | SYST_CSR_ENABLE) }
+ }
+
+ /// Enables SysTick interrupt
+ #[inline]
+ pub fn enable_interrupt(&mut self) {
+ unsafe { self.csr.modify(|v| v | SYST_CSR_TICKINT) }
+ }
+
+ /// Gets clock source
+ ///
+ /// *NOTE* This takes `&mut self` because the read operation is side effectful and can clear the
+ /// bit that indicates that the timer has wrapped (cf. `SYST.has_wrapped`)
+ #[inline]
+ pub fn get_clock_source(&mut self) -> SystClkSource {
+ // NOTE(unsafe) atomic read with no side effects
+ if self.csr.read() & SYST_CSR_CLKSOURCE != 0 {
+ SystClkSource::Core
+ } else {
+ SystClkSource::External
+ }
+ }
+
+ /// Gets current value
+ #[inline]
+ pub fn get_current() -> u32 {
+ // NOTE(unsafe) atomic read with no side effects
+ unsafe { (*Self::PTR).cvr.read() }
+ }
+
+ /// Gets reload value
+ #[inline]
+ pub fn get_reload() -> u32 {
+ // NOTE(unsafe) atomic read with no side effects
+ unsafe { (*Self::PTR).rvr.read() }
+ }
+
+ /// Returns the reload value with which the counter would wrap once per 10
+ /// ms
+ ///
+ /// Returns `0` if the value is not known (e.g. because the clock can
+ /// change dynamically).
+ #[inline]
+ pub fn get_ticks_per_10ms() -> u32 {
+ // NOTE(unsafe) atomic read with no side effects
+ unsafe { (*Self::PTR).calib.read() & SYST_COUNTER_MASK }
+ }
+
+ /// Checks if an external reference clock is available
+ #[inline]
+ pub fn has_reference_clock() -> bool {
+ // NOTE(unsafe) atomic read with no side effects
+ unsafe { (*Self::PTR).calib.read() & SYST_CALIB_NOREF == 0 }
+ }
+
+ /// Checks if the counter wrapped (underflowed) since the last check
+ ///
+ /// *NOTE* This takes `&mut self` because the read operation is side effectful and will clear
+ /// the bit of the read register.
+ #[inline]
+ pub fn has_wrapped(&mut self) -> bool {
+ self.csr.read() & SYST_CSR_COUNTFLAG != 0
+ }
+
+ /// Checks if counter is enabled
+ ///
+ /// *NOTE* This takes `&mut self` because the read operation is side effectful and can clear the
+ /// bit that indicates that the timer has wrapped (cf. `SYST.has_wrapped`)
+ #[inline]
+ pub fn is_counter_enabled(&mut self) -> bool {
+ self.csr.read() & SYST_CSR_ENABLE != 0
+ }
+
+ /// Checks if SysTick interrupt is enabled
+ ///
+ /// *NOTE* This takes `&mut self` because the read operation is side effectful and can clear the
+ /// bit that indicates that the timer has wrapped (cf. `SYST.has_wrapped`)
+ #[inline]
+ pub fn is_interrupt_enabled(&mut self) -> bool {
+ self.csr.read() & SYST_CSR_TICKINT != 0
+ }
+
+ /// Checks if the calibration value is precise
+ ///
+ /// Returns `false` if using the reload value returned by
+ /// `get_ticks_per_10ms()` may result in a period significantly deviating
+ /// from 10 ms.
+ #[inline]
+ pub fn is_precise() -> bool {
+ // NOTE(unsafe) atomic read with no side effects
+ unsafe { (*Self::PTR).calib.read() & SYST_CALIB_SKEW == 0 }
+ }
+
+ /// Sets clock source
+ #[inline]
+ pub fn set_clock_source(&mut self, clk_source: SystClkSource) {
+ match clk_source {
+ SystClkSource::External => unsafe { self.csr.modify(|v| v & !SYST_CSR_CLKSOURCE) },
+ SystClkSource::Core => unsafe { self.csr.modify(|v| v | SYST_CSR_CLKSOURCE) },
+ }
+ }
+
+ /// Sets reload value
+ ///
+ /// Valid values are between `1` and `0x00ffffff`.
+ ///
+ /// *NOTE* To make the timer wrap every `N` ticks set the reload value to `N - 1`
+ #[inline]
+ pub fn set_reload(&mut self, value: u32) {
+ unsafe { self.rvr.write(value) }
+ }
+}
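The reload arithmetic implied by `set_reload`'s docs (wrap every `N` ticks means a reload of `N - 1`, within the 24-bit field) can be checked on the host. A sketch assuming a hypothetical 8 MHz core clock; the helper name is illustrative, not part of the crate:

```rust
const SYST_MAX_RELOAD: u32 = 0x00FF_FFFF; // SysTick counter is 24 bits wide

// Reload value for a periodic tick of `tick_hz` from a core clock of
// `clock_hz`, or None if it does not fit the valid range 1..=0x00FF_FFFF.
fn reload_for(clock_hz: u32, tick_hz: u32) -> Option<u32> {
    let n = clock_hz / tick_hz; // ticks per wrap
    let reload = n.checked_sub(1)?; // wrap every N ticks -> reload N - 1
    (1..=SYST_MAX_RELOAD).contains(&reload).then_some(reload)
}

fn main() {
    // 1 kHz tick from 8 MHz: 8000 ticks per wrap -> reload 7999.
    assert_eq!(reload_for(8_000_000, 1_000), Some(7_999));
    // 100 MHz at 1 Hz would need 10^8 - 1, which overflows the 24-bit field.
    assert_eq!(reload_for(100_000_000, 1), None);
}
```

On hardware this value would then be used in the documented order: `set_reload`, `clear_current`, `enable_counter`.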
diff --git a/src/rust/vendor/cortex-m/src/peripheral/test.rs b/src/rust/vendor/cortex-m/src/peripheral/test.rs
new file mode 100644
index 000000000..cab064aad
--- /dev/null
+++ b/src/rust/vendor/cortex-m/src/peripheral/test.rs
@@ -0,0 +1,170 @@
+#[test]
+fn cpuid() {
+ let cpuid = unsafe { &*crate::peripheral::CPUID::PTR };
+
+ assert_eq!(address(&cpuid.base), 0xE000_ED00);
+ assert_eq!(address(&cpuid.pfr), 0xE000_ED40);
+ assert_eq!(address(&cpuid.dfr), 0xE000_ED48);
+ assert_eq!(address(&cpuid.afr), 0xE000_ED4C);
+ assert_eq!(address(&cpuid.mmfr), 0xE000_ED50);
+ assert_eq!(address(&cpuid.isar), 0xE000_ED60);
+ assert_eq!(address(&cpuid.clidr), 0xE000_ED78);
+ assert_eq!(address(&cpuid.ctr), 0xE000_ED7C);
+ assert_eq!(address(&cpuid.ccsidr), 0xE000_ED80);
+ assert_eq!(address(&cpuid.csselr), 0xE000_ED84);
+}
+
+#[test]
+fn dcb() {
+ let dcb = unsafe { &*crate::peripheral::DCB::PTR };
+
+ assert_eq!(address(&dcb.dhcsr), 0xE000_EDF0);
+ assert_eq!(address(&dcb.dcrsr), 0xE000_EDF4);
+ assert_eq!(address(&dcb.dcrdr), 0xE000_EDF8);
+ assert_eq!(address(&dcb.demcr), 0xE000_EDFC);
+}
+
+#[test]
+fn dwt() {
+ let dwt = unsafe { &*crate::peripheral::DWT::PTR };
+
+ assert_eq!(address(&dwt.ctrl), 0xE000_1000);
+ #[cfg(not(armv6m))]
+ assert_eq!(address(&dwt.cyccnt), 0xE000_1004);
+ #[cfg(not(armv6m))]
+ assert_eq!(address(&dwt.cpicnt), 0xE000_1008);
+ #[cfg(not(armv6m))]
+ assert_eq!(address(&dwt.exccnt), 0xE000_100C);
+ #[cfg(not(armv6m))]
+ assert_eq!(address(&dwt.sleepcnt), 0xE000_1010);
+ #[cfg(not(armv6m))]
+ assert_eq!(address(&dwt.lsucnt), 0xE000_1014);
+ #[cfg(not(armv6m))]
+ assert_eq!(address(&dwt.foldcnt), 0xE000_1018);
+ assert_eq!(address(&dwt.pcsr), 0xE000_101C);
+ assert_eq!(address(&dwt.c[0].comp), 0xE000_1020);
+ assert_eq!(address(&dwt.c[0].mask), 0xE000_1024);
+ assert_eq!(address(&dwt.c[0].function), 0xE000_1028);
+ assert_eq!(address(&dwt.c[1].comp), 0xE000_1030);
+ assert_eq!(address(&dwt.c[1].mask), 0xE000_1034);
+ assert_eq!(address(&dwt.c[1].function), 0xE000_1038);
+ #[cfg(not(armv6m))]
+ assert_eq!(address(&dwt.lar), 0xE000_1FB0);
+ #[cfg(not(armv6m))]
+ assert_eq!(address(&dwt.lsr), 0xE000_1FB4);
+}
+
+#[test]
+fn fpb() {
+ let fpb = unsafe { &*crate::peripheral::FPB::PTR };
+
+ assert_eq!(address(&fpb.ctrl), 0xE000_2000);
+ assert_eq!(address(&fpb.remap), 0xE000_2004);
+ assert_eq!(address(&fpb.comp), 0xE000_2008);
+ assert_eq!(address(&fpb.comp[1]), 0xE000_200C);
+ assert_eq!(address(&fpb.lar), 0xE000_2FB0);
+ assert_eq!(address(&fpb.lsr), 0xE000_2FB4);
+}
+
+#[test]
+fn fpu() {
+ let fpu = unsafe { &*crate::peripheral::FPU::PTR };
+
+ assert_eq!(address(&fpu.fpccr), 0xE000_EF34);
+ assert_eq!(address(&fpu.fpcar), 0xE000_EF38);
+ assert_eq!(address(&fpu.fpdscr), 0xE000_EF3C);
+ assert_eq!(address(&fpu.mvfr), 0xE000_EF40);
+ assert_eq!(address(&fpu.mvfr[1]), 0xE000_EF44);
+ assert_eq!(address(&fpu.mvfr[2]), 0xE000_EF48);
+}
+
+#[test]
+fn itm() {
+ let itm = unsafe { &*crate::peripheral::ITM::PTR };
+
+ assert_eq!(address(&itm.stim), 0xE000_0000);
+ assert_eq!(address(&itm.ter), 0xE000_0E00);
+ assert_eq!(address(&itm.tpr), 0xE000_0E40);
+ assert_eq!(address(&itm.tcr), 0xE000_0E80);
+ assert_eq!(address(&itm.lar), 0xE000_0FB0);
+ assert_eq!(address(&itm.lsr), 0xE000_0FB4);
+}
+
+#[test]
+fn mpu() {
+ let mpu = unsafe { &*crate::peripheral::MPU::PTR };
+
+ assert_eq!(address(&mpu._type), 0xE000ED90);
+ assert_eq!(address(&mpu.ctrl), 0xE000ED94);
+ assert_eq!(address(&mpu.rnr), 0xE000ED98);
+ assert_eq!(address(&mpu.rbar), 0xE000ED9C);
+ assert_eq!(address(&mpu.rasr), 0xE000EDA0);
+ assert_eq!(address(&mpu.rbar_a1), 0xE000EDA4);
+ assert_eq!(address(&mpu.rasr_a1), 0xE000EDA8);
+ assert_eq!(address(&mpu.rbar_a2), 0xE000EDAC);
+ assert_eq!(address(&mpu.rasr_a2), 0xE000EDB0);
+ assert_eq!(address(&mpu.rbar_a3), 0xE000EDB4);
+ assert_eq!(address(&mpu.rasr_a3), 0xE000EDB8);
+}
+
+#[test]
+fn nvic() {
+ let nvic = unsafe { &*crate::peripheral::NVIC::PTR };
+
+ assert_eq!(address(&nvic.iser), 0xE000E100);
+ assert_eq!(address(&nvic.icer), 0xE000E180);
+ assert_eq!(address(&nvic.ispr), 0xE000E200);
+ assert_eq!(address(&nvic.icpr), 0xE000E280);
+ assert_eq!(address(&nvic.iabr), 0xE000E300);
+ assert_eq!(address(&nvic.ipr), 0xE000E400);
+ #[cfg(not(armv6m))]
+ assert_eq!(address(&nvic.stir), 0xE000EF00);
+}
+
+#[test]
+fn scb() {
+ let scb = unsafe { &*crate::peripheral::SCB::PTR };
+
+ assert_eq!(address(&scb.icsr), 0xE000_ED04);
+ assert_eq!(address(&scb.vtor), 0xE000_ED08);
+ assert_eq!(address(&scb.aircr), 0xE000_ED0C);
+ assert_eq!(address(&scb.scr), 0xE000_ED10);
+ assert_eq!(address(&scb.ccr), 0xE000_ED14);
+ assert_eq!(address(&scb.shpr), 0xE000_ED18);
+ assert_eq!(address(&scb.shcsr), 0xE000_ED24);
+ assert_eq!(address(&scb.cfsr), 0xE000_ED28);
+ assert_eq!(address(&scb.hfsr), 0xE000_ED2C);
+ assert_eq!(address(&scb.dfsr), 0xE000_ED30);
+ assert_eq!(address(&scb.mmfar), 0xE000_ED34);
+ assert_eq!(address(&scb.bfar), 0xE000_ED38);
+ assert_eq!(address(&scb.afsr), 0xE000_ED3C);
+ assert_eq!(address(&scb.cpacr), 0xE000_ED88);
+}
+
+#[test]
+fn syst() {
+ let syst = unsafe { &*crate::peripheral::SYST::PTR };
+
+ assert_eq!(address(&syst.csr), 0xE000_E010);
+ assert_eq!(address(&syst.rvr), 0xE000_E014);
+ assert_eq!(address(&syst.cvr), 0xE000_E018);
+ assert_eq!(address(&syst.calib), 0xE000_E01C);
+}
+
+#[test]
+fn tpiu() {
+ let tpiu = unsafe { &*crate::peripheral::TPIU::PTR };
+
+ assert_eq!(address(&tpiu.sspsr), 0xE004_0000);
+ assert_eq!(address(&tpiu.cspsr), 0xE004_0004);
+ assert_eq!(address(&tpiu.acpr), 0xE004_0010);
+ assert_eq!(address(&tpiu.sppr), 0xE004_00F0);
+ assert_eq!(address(&tpiu.ffcr), 0xE004_0304);
+ assert_eq!(address(&tpiu.lar), 0xE004_0FB0);
+ assert_eq!(address(&tpiu.lsr), 0xE004_0FB4);
+ assert_eq!(address(&tpiu._type), 0xE004_0FC8);
+}
+
+fn address<T>(r: *const T) -> usize {
+ r as usize
+}
diff --git a/src/rust/vendor/cortex-m/src/peripheral/tpiu.rs b/src/rust/vendor/cortex-m/src/peripheral/tpiu.rs
new file mode 100644
index 000000000..11cb79e91
--- /dev/null
+++ b/src/rust/vendor/cortex-m/src/peripheral/tpiu.rs
@@ -0,0 +1,31 @@
+//! Trace Port Interface Unit
+//!
+//! *NOTE* Not available on Armv6-M.
+
+use volatile_register::{RO, RW, WO};
+
+/// Register block
+#[repr(C)]
+pub struct RegisterBlock {
+ /// Supported Parallel Port Sizes
+ pub sspsr: RO<u32>,
+ /// Current Parallel Port Size
+ pub cspsr: RW<u32>