MCTP + Rust: new crates for platform MCTP infrastructure
At Code Construct, we have been working on a suite of open source Management Component Transport Protocol (MCTP) infrastructure in Rust, to support our clients' platform bringup projects. We have now published that code to our github repositories, so here is a brief tour of the new repositories.
Overall, the source code below provides tooling for MCTP-based communication on Linux (where the operating system has core MCTP support) and on embedded systems (where it does not). For the latter case, we have a stand-alone MCTP stack implementation, portable enough for general microcontroller use.
All of the code referenced below is now available in our github repositories, and is under active development. Read on for details!
In case you're not familiar with MCTP, we have an introduction post for some background.
Core MCTP components: the mctp-rs workspace
The mctp-rs repository is a Rust workspace, with a set of crates for building MCTP applications - either as Linux applications using the kernel MCTP socket support, or in embedded Rust environments, where we supply the entire MCTP stack.
There are several crates in the workspace, but they're of roughly three types:
- a common definition of an MCTP transport interface;
- implementations of transport interfaces for various environments; and
- applications and libraries that use an MCTP transport.
The mctp crate provides the central part - a common set of Rust traits that form an interface between MCTP applications - as requesters or responders (or both!) - and the underlying MCTP implementation.
As an overall structure: implementations of these core mctp traits provide an MCTP transport, while consumers of these traits are MCTP applications. The other crates in the workspace provide either transport implementations, or upper-layer protocol support.
The traits are available in both asynchronous and blocking flavours, to suit whichever programming model works best for a platform.
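To make that division concrete, here is a minimal sketch of the pattern, using an illustrative `ReqChannel` trait of our own invention - the real `mctp` crate traits differ in names and detail, so treat this purely as an outline of the idea:

```rust
// Illustrative only: a simplified stand-in for an MCTP channel trait,
// not the actual `mctp` crate API.
trait ReqChannel {
    fn send(&mut self, payload: &[u8]) -> Result<(), ()>;
    fn recv<'a>(&mut self, buf: &'a mut [u8]) -> Result<&'a [u8], ()>;
}

// An application written against the trait runs over any transport
// implementation, Linux or embedded.
fn echo_roundtrip<C: ReqChannel>(ch: &mut C, req: &[u8]) -> Result<Vec<u8>, ()> {
    ch.send(req)?;
    let mut buf = [0u8; 64];
    Ok(ch.recv(&mut buf)?.to_vec())
}

// A trivial loopback transport, standing in for a real implementation
// such as those in mctp-linux or mctp-estack.
struct Loopback {
    last: Vec<u8>,
}

impl ReqChannel for Loopback {
    fn send(&mut self, p: &[u8]) -> Result<(), ()> {
        self.last = p.to_vec();
        Ok(())
    }
    fn recv<'a>(&mut self, buf: &'a mut [u8]) -> Result<&'a [u8], ()> {
        let n = self.last.len();
        buf[..n].copy_from_slice(&self.last);
        Ok(&buf[..n])
    }
}
```

The point is the separation: `echo_roundtrip` knows nothing about the transport beneath it, so swapping `Loopback` for a real channel needs no application changes.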
The mctp-linux crate (docs here) provides implementations of the above mctp traits for Linux systems, using AF_MCTP sockets as an MCTP transport.
Of course, it's entirely possible to write Linux applications directly to the AF_MCTP socket interface. However, using mctp-linux, we can write applications (or upper-layer protocol libraries) that also work on embedded MCTP environments, with minimal changes.
The mctp-estack crate (docs) provides implementations of the mctp traits as a self-contained MCTP stack, suitable for embedded environments (i.e., without std or alloc).
This embedded stack is fairly comprehensive, supporting message fragmentation and reassembly, routing across multiple hardware interfaces, multiple listener applications, and MCTP control protocol support. It should be well suited to almost all MCTP-endpoint-device scenarios.
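For background on what such a stack handles during fragmentation and reassembly, here is a sketch of parsing the 4-byte MCTP transport header defined in DMTF DSP0236. This is illustrative, not code from mctp-estack:

```rust
// The MCTP transport header, per DMTF DSP0236. Each MCTP packet starts
// with these four bytes; a stack reassembles packets into messages by
// tracking the SOM/EOM flags, sequence number, and message tag.
#[derive(Debug, PartialEq)]
struct MctpHeader {
    version: u8, // header version, low 4 bits of byte 0
    dest: u8,    // destination endpoint ID (EID)
    src: u8,     // source endpoint ID
    som: bool,   // start-of-message flag
    eom: bool,   // end-of-message flag
    seq: u8,     // 2-bit packet sequence number
    owner: bool, // tag-owner bit
    tag: u8,     // 3-bit message tag
}

fn parse_header(pkt: &[u8]) -> Option<MctpHeader> {
    if pkt.len() < 4 {
        return None;
    }
    let (b0, dest, src, flags) = (pkt[0], pkt[1], pkt[2], pkt[3]);
    Some(MctpHeader {
        version: b0 & 0x0f,
        dest,
        src,
        som: flags & 0x80 != 0,
        eom: flags & 0x40 != 0,
        seq: (flags >> 4) & 0x03,
        owner: flags & 0x08 != 0,
        tag: flags & 0x07,
    })
}
```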
A set of upper-layer protocol libraries and tools are provided in the pldm (docs), pldm-fw (docs), and pldm-fw-cli (docs) crates. These use the mctp trait interfaces to implement base PLDM support, and a PLDM for Firmware Update ("PLDM type 5") implementation - covering both the Firmware Device (FD) and User Agent (UA) roles.
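As background on the wire format these crates speak, here is a sketch of building the common 3-byte PLDM request header, per DMTF DSP0240. The published pldm crate has its own types for this; the function below is purely illustrative:

```rust
// Build the common PLDM message header for a request, per DMTF DSP0240:
// byte 0: Rq bit, D bit, and 5-bit instance ID
// byte 1: 2-bit header version and 6-bit PLDM type
// byte 2: command code
fn pldm_request_header(instance_id: u8, pldm_type: u8, command: u8) -> [u8; 3] {
    [
        0x80 | (instance_id & 0x1f), // Rq=1 (request), D=0, instance ID
        pldm_type & 0x3f,            // header version 0, PLDM type
        command,                     // command code
    ]
}
```

For example, a PLDM for Firmware Update request would carry PLDM type 5 in the second header byte.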
Base NVMe-MI device implementation: nvme-mi-dev
A common use for MCTP is to interact with NVM Express (NVMe) storage devices, for status monitoring, lifetime management, and provisioning operations. This is possible over the NVMe Management Interface ("NVMe-MI") protocol, using MCTP as a transport mechanism.
We have previously implemented a layer of tooling to support NVMe-MI requesters, as part of the Linux libnvme project. To complement this, we have also just released code that implements the device side of NVMe-MI, entirely in Rust: the nvme-mi-dev crate. This implements basic NVMe-MI discovery and management functions, with more protocol coverage in progress.
Prototype MCTP embedded firmware: usbnvme
Using the above MCTP crates, plus the embassy embedded Rust framework, we have implemented a hardware MCTP endpoint in firmware, suitable for 32-bit microcontroller environments. For our reference design, we have this running on an STM32H7S3L8 Nucleo development board:

As a primary use-case, this device implements an NVMe-MI responder on top of the embedded MCTP stack. In effect, the device above appears as an out-of-band-manageable, enterprise NVMe-MI device - just without the actual NVMe storage capability.
This is able to integrate with a typical MCTP-capable platform, such as a Baseboard Management Controller (BMC), connecting via an MCTP-over-USB channel. This allows rapid development of the BMC's own software stack, for communication with various target device types.
usbnvme and a BMC system

This setup allows us to prototype MCTP endpoints easily, and test against actual server hardware - all before production MCTP-over-USB devices are available. Hardware costs are minimal - the Nucleo board is available for around US$36 at the time of writing.
The firmware structure is mostly modular - while we have implemented an NVMe-MI endpoint here, there is certainly flexibility to replace this with other upper-layer applications that communicate over MCTP. As long as they use the mctp crate traits, porting to the embedded environment should be straightforward. This is the intention of those traits, after all!
Check out the usbnvme repository on github for all sources.
MCTP stack for userspace simulations: mctp-dev
Similar to usbnvme, the mctp-dev crate implements an MCTP-over-USB stack, and an NVMe-MI endpoint, but is runnable as a Linux userspace application. This application can connect to a qemu virtual machine, for virtual MCTP prototyping.
Standard qemu can support connections to virtual USB endpoints through the usb-redir device:
```shell
$ qemu-system-arm -M ast2600-evb \
    -chardev pty,id=usbredir0 \
    -device usb-redir,chardev=usbredir0 \
    [...]
char device redirected to /dev/pts/4 (label usbredir)
```
... which we can connect to our mctp-dev process:
```shell
$ mctp-dev usb /dev/pts/4
08:18:18 [INFO] Created MCTP USB transport on /dev/pts/4
08:18:18 [INFO] MCTP Control Protocol server listening
08:18:18 [INFO] NVMe-MI endpoint listening
```
This allows us to rapidly develop and test MCTP applications in the qemu environment - so no hardware is required. Iterating on changes to the mctp-dev emulation implementation is just a matter of re-building and re-running the one process.
For the NVMe-MI implementation, mctp-dev makes heavy use of our new nvme-mi-dev crate, which provides the NVMe subsystem models, plus the management interface command set.
Of course, the aim for mctp-dev is to provide a flexible platform for MCTP experimentation, so the sources are intended to be easily modifiable to suit new applications.
mctp-dev also supports MCTP communication over serial transports, which can connect to either qemu, or real UART hardware.
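For background on the serial case, the MCTP serial binding (DMTF DSP0237) uses HDLC-style byte escaping within frames: a 0x7E flag or 0x7D escape byte appearing in the packet data is replaced by 0x7D followed by the original byte XOR 0x20. A sketch of just that escaping step, with framing flags and the frame check sequence omitted:

```rust
// Escape packet bytes for the MCTP serial binding (DMTF DSP0237).
// Flag (0x7E) and escape (0x7D) bytes in the data are replaced by an
// escape byte followed by the original value XOR 0x20, so the framing
// flag never appears inside a frame body.
fn escape_serial(payload: &[u8]) -> Vec<u8> {
    let mut out = Vec::with_capacity(payload.len());
    for &b in payload {
        match b {
            0x7e | 0x7d => {
                out.push(0x7d);
                out.push(b ^ 0x20);
            }
            _ => out.push(b),
        }
    }
    out
}
```

The receiving side reverses this before reassembly, which is how the same MCTP packets can traverse either qemu's character devices or real UART hardware.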
... all published on github
All of the above crates are published under OSI-approved open source licenses, on our Code Construct github organisation. For handy reference:
- Core MCTP crates: mctp-rs workspace
- NVMe-MI device: nvme-mi-dev crate
- USB NVMe-MI STM32 firmware: usbnvme crate
- MCTP userspace device: mctp-dev crate
Crates that are suitable for integration into other codebases have also been published to crates.io, with documentation on docs.rs:
- mctp - crates.io / docs.rs
- mctp-estack - crates.io / docs.rs
- mctp-linux - crates.io / docs.rs
- mctp-usb-embassy - crates.io / docs.rs
- pldm - crates.io / docs.rs
- pldm-fw - crates.io / docs.rs
- pldm-fw-cli - crates.io / docs.rs
Most of these are under ongoing development; feel free to follow along, or get in contact if you're adopting these for your own platforms.