5.6: Pipeline Patterns for Embedded Systems
What You'll Learn
- Master CI/CD pipeline patterns for embedded systems development
- Learn Docker and Yocto integration for reproducible embedded builds
- Understand multi-architecture build strategies for ARM, RISC-V, and other platforms
- Implement hardware-in-the-loop (HIL) testing in automated pipelines
5.6.1 Docker for Embedded Linux
Overview
Docker enables reproducible embedded Linux builds by containerizing the build environment, toolchains, and dependencies. This approach ensures consistent builds across development teams and CI/CD systems.
Yocto + Docker Integration
Note: Yocto layer compatibility varies by release (Kirkstone, Scarthgap, etc.). Verify recipe compatibility with your target Yocto version.
Adding Docker to Yocto Build
local.conf additions:
# Enable Docker in the Yocto image
IMAGE_INSTALL:append = " docker docker-compose"
# Enable container/virtualization support (the docker recipes come from meta-virtualization)
DISTRO_FEATURES:append = " virtualization"
# Use systemd as the init manager
DISTRO_FEATURES:append = " systemd"
VIRTUAL-RUNTIME_init_manager = "systemd"
Recipe example (meta-layer/recipes-containers/docker/docker_%.bbappend):
# Docker configuration for embedded
PACKAGECONFIG:remove = "seccomp"
PACKAGECONFIG:append = " systemd"
# Optimize for embedded
DOCKER_STORAGE_DRIVER = "overlay2"
Custom Yocto Layer for Containers
# Create custom layer
bitbake-layers create-layer meta-containers-custom
# Add recipes
meta-containers-custom/
├── conf/
│   └── layer.conf
└── recipes-containers/
    ├── docker/
    │   └── docker-custom_1.0.bb
    └── images/
        └── container-image.bb
container-image.bb:
SUMMARY = "Embedded Linux Container Image"
LICENSE = "MIT"
IMAGE_INSTALL = " \
    packagegroup-core-boot \
    docker \
    docker-compose \
    python3 \
    kernel-modules \
"
IMAGE_FEATURES += "ssh-server-dropbear package-management"
# Minimal rootfs
IMAGE_ROOTFS_SIZE ?= "8192"
IMAGE_ROOTFS_EXTRA_SPACE:append = "${@bb.utils.contains('DISTRO_FEATURES', 'systemd', ' + 4096', '', d)}"
inherit core-image
CI/CD Integration - Yocto Build
GitHub Actions - Yocto Build
name: Yocto Container Build
on: [push]
jobs:
  yocto-build:
    runs-on: ubuntu-latest
    container:
      image: crops/poky:ubuntu-22.04
      options: --privileged
    steps:
      - uses: actions/checkout@v4
      - name: Configure build
        # Each run step starts a fresh shell, so the environment script
        # must be sourced in every step that needs it
        run: |
          source oe-init-build-env build
          echo 'IMAGE_INSTALL:append = " docker"' >> conf/local.conf
          echo 'DISTRO_FEATURES:append = " virtualization systemd"' >> conf/local.conf
      - name: Build image
        run: |
          source oe-init-build-env build
          bitbake core-image-minimal
      - name: Extract artifacts
        run: |
          cp build/tmp/deploy/images/*/*.wic.gz ./
      - uses: actions/upload-artifact@v4
        with:
          name: yocto-image
          path: "*.wic.gz"
GitLab CI - Yocto Pipeline
stages:
  - build
  - test
  - deploy
yocto-build:
  stage: build
  image: crops/poky:ubuntu-22.04
  script:
    # Sourcing the environment script changes the working directory to build/
    - source oe-init-build-env build
    - echo 'IMAGE_INSTALL:append = " docker"' >> conf/local.conf
    - bitbake core-image-minimal
  artifacts:
    paths:
      - build/tmp/deploy/images/
    expire_in: 1 week
  tags:
    - embedded
5.6.2 Cross-Compilation Pipelines
ARM Cross-Compilation
Dockerfile for ARM Cross-Compilation
FROM ubuntu:22.04
# Install cross-compilation toolchains
RUN apt-get update && apt-get install -y \
    gcc-arm-linux-gnueabihf \
    g++-arm-linux-gnueabihf \
    gcc-aarch64-linux-gnu \
    g++-aarch64-linux-gnu \
    qemu-user-static \
    build-essential \
    cmake \
    && rm -rf /var/lib/apt/lists/*
# Set cross-compilation environment (32-bit ARM hard-float shown)
ENV CROSS_COMPILE=arm-linux-gnueabihf-
ENV CC=arm-linux-gnueabihf-gcc
ENV CXX=arm-linux-gnueabihf-g++
ENV AR=arm-linux-gnueabihf-ar
ENV RANLIB=arm-linux-gnueabihf-ranlib
WORKDIR /workspace
COPY . .
# Build for ARM via the CMake toolchain file
RUN mkdir build && cd build && \
    cmake -DCMAKE_TOOLCHAIN_FILE=../toolchain-arm.cmake .. && \
    make
toolchain-arm.cmake:
set(CMAKE_SYSTEM_NAME Linux)
set(CMAKE_SYSTEM_PROCESSOR arm)
set(CMAKE_C_COMPILER arm-linux-gnueabihf-gcc)
set(CMAKE_CXX_COMPILER arm-linux-gnueabihf-g++)
set(CMAKE_FIND_ROOT_PATH /usr/arm-linux-gnueabihf)
set(CMAKE_FIND_ROOT_PATH_MODE_PROGRAM NEVER)
set(CMAKE_FIND_ROOT_PATH_MODE_LIBRARY ONLY)
set(CMAKE_FIND_ROOT_PATH_MODE_INCLUDE ONLY)
Cross-Compilation Pipeline
GitHub Actions - ARM Build
name: ARM Cross-Compilation
on: [push, pull_request]
jobs:
  build-arm:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        arch: [armv7, aarch64]
    steps:
      - uses: actions/checkout@v4
      - name: Set up QEMU
        uses: docker/setup-qemu-action@v3
      - name: Build for ${{ matrix.arch }}
        run: |
          docker build \
            --build-arg ARCH=${{ matrix.arch }} \
            -t firmware:${{ matrix.arch }} \
            -f Dockerfile.cross .
      - name: Extract binary
        run: |
          docker create --name temp firmware:${{ matrix.arch }}
          docker cp temp:/workspace/build/firmware ./firmware-${{ matrix.arch }}
          docker rm temp
      - uses: actions/upload-artifact@v4
        with:
          name: firmware-${{ matrix.arch }}
          path: firmware-${{ matrix.arch }}
5.6.3 Multi-Architecture Builds
Docker Buildx for Multi-Architecture
Setup and Configuration
# Create a new builder instance
docker buildx create --name multiarch --driver docker-container --use
# Bootstrap the builder
docker buildx inspect --bootstrap
# Enable QEMU for emulation
docker run --privileged --rm tonistiigi/binfmt --install all
Multi-Architecture Dockerfile
# syntax=docker/dockerfile:1.4
# Use multi-arch base image
FROM --platform=$BUILDPLATFORM python:3.11-slim AS builder
# Build arguments populated automatically by buildx
ARG TARGETPLATFORM
ARG BUILDPLATFORM
ARG TARGETOS
ARG TARGETARCH
RUN echo "Building on $BUILDPLATFORM, targeting $TARGETPLATFORM"
WORKDIR /app
# Install dependencies; the cache mount keeps pip downloads across builds
COPY requirements.txt .
RUN --mount=type=cache,target=/root/.cache/pip \
    pip install -r requirements.txt
# Final stage
FROM python:3.11-slim
WORKDIR /app
COPY --from=builder /usr/local/lib/python3.11/site-packages /usr/local/lib/python3.11/site-packages
COPY . .
CMD ["python", "app.py"]
Architecture-Specific Customization
FROM --platform=$TARGETPLATFORM alpine:3.19
ARG TARGETARCH
# Install architecture-specific packages (package names are placeholders)
RUN case ${TARGETARCH} in \
      "amd64")   apk add --no-cache x86_64-specific-package ;; \
      "arm64")   apk add --no-cache aarch64-specific-package ;; \
      "arm")     apk add --no-cache armv7-specific-package ;; \
      "riscv64") apk add --no-cache riscv64-specific-package ;; \
    esac
COPY app-${TARGETARCH} /app/app
CMD ["/app/app"]
Multi-Architecture Build Pipeline
GitHub Actions - Multi-Arch
name: Multi-Architecture Build
on:
  push:
    branches: [main]
    tags: ['v*']
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v4
      - name: Set up QEMU
        uses: docker/setup-qemu-action@v3
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3
      - name: Login to Docker Hub
        uses: docker/login-action@v3
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      - name: Extract metadata
        id: meta
        uses: docker/metadata-action@v5
        with:
          images: myregistry/myapp
          tags: |
            type=ref,event=branch
            type=semver,pattern={{version}}
            type=semver,pattern={{major}}.{{minor}}
      - name: Build and push
        uses: docker/build-push-action@v5
        with:
          context: .
          platforms: linux/amd64,linux/arm64,linux/arm/v7,linux/riscv64
          push: true
          tags: ${{ steps.meta.outputs.tags }}
          labels: ${{ steps.meta.outputs.labels }}
          cache-from: type=registry,ref=myregistry/myapp:buildcache
          cache-to: type=registry,ref=myregistry/myapp:buildcache,mode=max
GitLab CI - Multi-Arch Pipeline
variables:
  DOCKER_DRIVER: overlay2
  PLATFORMS: "linux/amd64,linux/arm64,linux/arm/v7"
stages:
  - build
  - manifest
.buildx-setup:
  before_script:
    - docker run --privileged --rm tonistiigi/binfmt --install all
    - docker buildx create --name multibuilder --driver docker-container --use
    - docker buildx inspect --bootstrap
build-multiarch:
  extends: .buildx-setup
  stage: build
  image: docker:24
  services:
    - docker:24-dind
  script:
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
    - docker buildx build
      --platform $PLATFORMS
      --cache-from type=registry,ref=$CI_REGISTRY_IMAGE:buildcache
      --cache-to type=registry,ref=$CI_REGISTRY_IMAGE:buildcache,mode=max
      --tag $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA
      --tag $CI_REGISTRY_IMAGE:latest
      --push
      .
5.6.4 Hardware-in-the-Loop (HIL) Testing
Overview
HIL testing integrates physical hardware into the CI/CD pipeline for real-world validation of embedded systems.
Self-Hosted Runner Setup
GitHub Actions Runner with Hardware
# Install GitHub Actions Runner on hardware-attached machine
cd /opt
mkdir actions-runner && cd actions-runner
curl -o actions-runner-linux-x64-2.311.0.tar.gz -L \
https://github.com/actions/runner/releases/download/v2.311.0/actions-runner-linux-x64-2.311.0.tar.gz
tar xzf ./actions-runner-linux-x64-2.311.0.tar.gz
# Configure runner
./config.sh --url https://github.com/user/repo --token YOUR_TOKEN --labels embedded,hardware
# Install as service
sudo ./svc.sh install
sudo ./svc.sh start
GitLab Runner with Hardware
[[runners]]
  name = "hardware-test-runner"
  url = "https://gitlab.com/"
  token = "RUNNER_TOKEN"
  executor = "shell"  # shell executor gives jobs direct hardware access
  [runners.cache]
    Type = "s3"
    Shared = true
# With the docker executor instead, USB devices must be passed through explicitly:
#   [runners.docker]
#     privileged = true
#     devices = ["/dev/ttyUSB0:/dev/ttyUSB0"]
#     volumes = ["/dev/bus/usb:/dev/bus/usb"]
HIL Testing Pipeline
GitHub Actions - HIL Test
name: Hardware-in-the-Loop Testing
on: [push, pull_request]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build firmware
        run: |
          docker build -t firmware:latest .
          docker create --name temp firmware:latest
          docker cp temp:/workspace/build/firmware.hex ./
          docker rm temp
      - uses: actions/upload-artifact@v4
        with:
          name: firmware
          path: firmware.hex
  hil-test:
    needs: build
    # Runner labels are specified as part of runs-on
    runs-on: [self-hosted, embedded, hardware]
    steps:
      - uses: actions/checkout@v4
      - uses: actions/download-artifact@v4
        with:
          name: firmware
      - name: Flash device
        run: |
          # Flash via OpenOCD
          openocd -f interface/stlink.cfg \
            -f target/stm32f4x.cfg \
            -c "program firmware.hex verify reset exit"
      - name: Run hardware tests
        run: |
          sleep 2  # wait for the device to boot
          python3 tests/hardware_test.py --port /dev/ttyUSB0
      - name: Collect logs
        if: always()
        run: |
          python3 tests/collect_logs.py > hardware-logs.txt
      - uses: actions/upload-artifact@v4
        if: always()
        with:
          name: hardware-logs
          path: hardware-logs.txt
GitLab CI - HIL Pipeline
stages:
  - build
  - hardware_test
build:firmware:
  stage: build
  image: arm/toolchain:latest
  script:
    - cmake -B build -DCMAKE_BUILD_TYPE=Release
    - cmake --build build
    - arm-none-eabi-objcopy -O ihex build/firmware.elf firmware.hex
  artifacts:
    paths:
      - firmware.hex
  tags:
    - embedded
test:hardware:
  stage: hardware_test
  needs:
    - build:firmware
  script:
    - openocd -f interface/stlink.cfg -f target/stm32f4x.cfg -c "program firmware.hex verify reset exit"
    - sleep 2
    - python3 tests/hardware_test.py --port /dev/ttyUSB0
  artifacts:
    reports:
      junit: hardware-test-results.xml
  tags:
    - embedded
    - hardware
  only:
    - merge_requests
    - main
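Both pipelines above invoke a `tests/hardware_test.py` script that is not shown. A minimal sketch of such a script follows; it assumes the firmware prints one `TEST:<name>:<PASS|FAIL>` line per test case and a final `DONE` line over its console UART, and it uses the third-party pyserial package. The line protocol and command name are illustrative assumptions, not part of the original pipelines.

```python
#!/usr/bin/env python3
"""Minimal HIL smoke test: trigger the on-device test suite over serial
and parse the results. The TEST:<name>:<PASS|FAIL> protocol is an
assumption about what the firmware prints -- adapt it to your console."""
import sys


def parse_result_line(line: str):
    """Return (test_name, passed) for a 'TEST:name:PASS' line, else None."""
    parts = line.strip().split(":")
    if len(parts) == 3 and parts[0] == "TEST":
        return parts[1], parts[2] == "PASS"
    return None


def run(port: str, baud: int = 115200) -> int:
    """Run the on-device suite; return the number of failed tests."""
    import serial  # third-party: pip install pyserial

    failures = 0
    with serial.Serial(port, baud, timeout=10) as dev:
        dev.write(b"RUN_TESTS\n")  # hypothetical firmware command
        while True:
            line = dev.readline().decode(errors="replace")
            if not line or line.startswith("DONE"):
                break  # read timeout or end-of-suite marker
            result = parse_result_line(line)
            if result and not result[1]:
                failures += 1
                print(f"FAILED: {result[0]}")
    return failures


if __name__ == "__main__" and len(sys.argv) > 2 and sys.argv[1] == "--port":
    sys.exit(1 if run(sys.argv[2]) else 0)
```

Keeping the parsing separate from the serial I/O makes the protocol handling unit-testable on the host, without hardware attached.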
5.6.5 Platform-Specific Build Patterns
PlatformIO Integration
GitHub Actions - PlatformIO
name: PlatformIO Build
on: [push, pull_request]
jobs:
  build:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        environment:
          - esp32dev
          - esp8266
          - nrf52840_dk
          - teensy41
    steps:
      - uses: actions/checkout@v4
      - uses: actions/cache@v4
        with:
          path: |
            ~/.cache/pip
            ~/.platformio/.cache
          key: ${{ runner.os }}-pio-${{ hashFiles('**/platformio.ini') }}
      - uses: actions/setup-python@v5
        with:
          python-version: '3.11'
      - name: Install PlatformIO
        run: pip install platformio
      - name: Build for ${{ matrix.environment }}
        run: platformio run -e ${{ matrix.environment }}
      - name: Run tests
        run: platformio test -e ${{ matrix.environment }}
      - uses: actions/upload-artifact@v4
        with:
          name: firmware-${{ matrix.environment }}
          path: .pio/build/${{ matrix.environment }}/firmware.*
Zephyr RTOS Build
GitHub Actions - Zephyr
name: Zephyr Build
on: [push, pull_request]
jobs:
  build:
    runs-on: ubuntu-latest
    container:
      image: zephyrprojectrtos/ci:latest
    strategy:
      matrix:
        board:
          - qemu_cortex_m3
          - nrf52840dk_nrf52840
          - stm32f4_disco
          - esp32
    steps:
      - uses: actions/checkout@v4
        with:
          path: app
          submodules: recursive
      - name: Setup Zephyr
        run: |
          west init -l app
          west update
          west zephyr-export
          pip install -r zephyr/scripts/requirements.txt
      - name: Build for ${{ matrix.board }}
        run: |
          cd app
          west build -b ${{ matrix.board }} -p auto
      - name: Run unit tests (QEMU only)
        if: startsWith(matrix.board, 'qemu_')
        run: |
          cd app
          west build -t run
      - uses: actions/upload-artifact@v4
        with:
          name: ${{ matrix.board }}-build
          path: |
            app/build/zephyr/zephyr.bin
            app/build/zephyr/zephyr.hex
            app/build/zephyr/zephyr.elf
ESP-IDF (Espressif IoT Development Framework)
name: ESP-IDF Build
on: [push, pull_request]
jobs:
  build:
    runs-on: ubuntu-latest
    container:
      image: espressif/idf:latest
    strategy:
      matrix:
        target: [esp32, esp32s2, esp32s3, esp32c3]
    steps:
      - uses: actions/checkout@v4
      - name: Build for ${{ matrix.target }}
        run: |
          . $IDF_PATH/export.sh
          idf.py set-target ${{ matrix.target }}
          idf.py build
      - uses: actions/upload-artifact@v4
        with:
          name: firmware-${{ matrix.target }}
          path: build/*.bin
5.6.6 Complete Embedded CI/CD Example
Full-Featured Pipeline
name: Complete Embedded CI/CD
on:
  push:
    branches: [main, develop]
    tags: ['v*']
  pull_request:
    branches: [main]
env:
  FIRMWARE_VERSION: ${{ github.ref_name }}
jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Install analysis tools
        run: sudo apt-get update && sudo apt-get install -y clang-format cppcheck
      - name: Check code format
        run: clang-format --dry-run --Werror src/*.c include/*.h
      - name: Static analysis
        # --error-exitcode makes the step fail when cppcheck finds issues
        run: cppcheck --enable=warning,style --error-exitcode=1 src/
  build-debug:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build debug firmware
        run: |
          docker build -t firmware:debug -f Dockerfile.debug .
          docker create --name temp firmware:debug
          docker cp temp:/build/firmware.elf ./firmware-debug.elf
          docker rm temp
      - uses: actions/upload-artifact@v4
        with:
          name: firmware-debug
          path: firmware-debug.elf
  build-release:
    runs-on: ubuntu-latest
    if: github.event_name == 'push' && (github.ref == 'refs/heads/main' || startsWith(github.ref, 'refs/tags/'))
    steps:
      - uses: actions/checkout@v4
      - name: Build release firmware
        run: |
          docker build -t firmware:release .
          docker create --name temp firmware:release
          docker cp temp:/build/firmware.hex ./firmware-release.hex
          docker cp temp:/build/firmware.bin ./firmware-release.bin
          docker rm temp
      - uses: actions/upload-artifact@v4
        with:
          name: firmware-release
          path: |
            firmware-release.hex
            firmware-release.bin
  unit-test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run unit tests
        run: |
          cmake -B build_test -DBUILD_TESTS=ON
          cmake --build build_test
          cd build_test && ctest --output-on-failure
  qemu-test:
    needs: build-debug
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/download-artifact@v4
        with:
          name: firmware-debug
      - name: Install QEMU
        run: sudo apt-get update && sudo apt-get install -y qemu-system-arm
      - name: Run QEMU simulation
        run: |
          # -nographic already routes the serial console to stdio, so a
          # separate -serial stdio flag would conflict; the timeout guards
          # against firmware that never halts
          timeout --foreground 60 \
            qemu-system-arm -M lm3s6965evb -nographic \
            -kernel firmware-debug.elf | tee qemu-output.log || true
      - name: Validate output
        run: python3 tests/validate_qemu_output.py qemu-output.log
  hil-test:
    needs: build-release
    runs-on: self-hosted
    if: github.event_name != 'pull_request'
    steps:
      - uses: actions/checkout@v4
      - uses: actions/download-artifact@v4
        with:
          name: firmware-release
      - name: Flash and test
        run: |
          ./scripts/flash_device.sh firmware-release.hex
          python3 tests/hardware_tests.py --device /dev/ttyUSB0
  release:
    needs: [lint, build-release, unit-test, qemu-test, hil-test]
    runs-on: ubuntu-latest
    if: startsWith(github.ref, 'refs/tags/v')
    steps:
      - uses: actions/checkout@v4
      - uses: actions/download-artifact@v4
        with:
          name: firmware-release
      - name: Create release package
        run: |
          mkdir release
          cp firmware-release.* release/
          echo "Firmware Version: ${{ env.FIRMWARE_VERSION }}" > release/README.txt
          echo "Build Date: $(date)" >> release/README.txt
          tar -czf firmware-${{ env.FIRMWARE_VERSION }}.tar.gz release/
      - name: Create GitHub Release
        uses: softprops/action-gh-release@v1
        with:
          files: firmware-${{ env.FIRMWARE_VERSION }}.tar.gz
          body: |
            Firmware release ${{ env.FIRMWARE_VERSION }}
            ## Changes
            See commit history for details.
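The qemu-test job above relies on a `tests/validate_qemu_output.py` helper that is not shown. A minimal sketch follows; the `Boot OK`, `HardFault`, and `panic` marker strings are placeholder assumptions about what the firmware prints, to be adapted to the actual console output.

```python
"""Check a captured QEMU serial log for expected boot markers.
Marker strings below are illustrative -- match them to your firmware."""
import sys

REQUIRED = ["Boot OK"]               # strings that must appear in the log
FORBIDDEN = ["HardFault", "panic"]   # strings that must not appear


def validate(log_text: str) -> list:
    """Return a list of problems; an empty list means the log looks healthy."""
    problems = []
    for marker in REQUIRED:
        if marker not in log_text:
            problems.append(f"missing expected marker: {marker!r}")
    for marker in FORBIDDEN:
        if marker in log_text:
            problems.append(f"found failure marker: {marker!r}")
    return problems


if __name__ == "__main__" and len(sys.argv) > 1:
    with open(sys.argv[1], errors="replace") as f:
        issues = validate(f.read())
    for issue in issues:
        print(issue)
    sys.exit(1 if issues else 0)
```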
5.6.7 Best Practices
1. Build Reproducibility
- Use Docker images with fixed toolchain versions
- Lock dependency versions
- Document build environment
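These reproducibility points can be enforced in the build container itself. A sketch of a pinned Dockerfile follows; the digest and version strings are placeholders to fill in for your environment, not real values:

```dockerfile
# Pin the base image by digest so rebuilds cannot silently drift
FROM ubuntu:22.04@sha256:<digest-of-known-good-image>

# Pin toolchain packages to exact versions (look them up with `apt-cache policy`)
RUN apt-get update && apt-get install -y \
        gcc-arm-linux-gnueabihf=<exact-version> \
        cmake=<exact-version> \
    && rm -rf /var/lib/apt/lists/*
```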
2. Testing Strategy
- Unit tests on host machine
- Integration tests in QEMU/simulator
- HIL tests on real hardware
- Periodic full regression on hardware
3. Artifact Management
- Store firmware binaries long-term
- Include debug symbols separately
- Generate checksums/signatures
4. Security
- Code signing for production firmware
- Secure boot verification
- Encrypted OTA updates
5. Resource Optimization
- Use build caching effectively
- Parallel builds where possible
- Self-hosted runners for hardware access
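As one concrete caching tactic, a compiler cache such as ccache can be persisted across hosted-runner jobs. A GitHub Actions fragment is sketched below; the cache path, key scheme, and the arm-none-eabi-gcc wrapper are illustrative assumptions to adapt:

```yaml
      - uses: actions/cache@v4
        with:
          path: ~/.cache/ccache   # ccache's default cache_dir on recent versions
          key: ${{ runner.os }}-ccache-${{ github.sha }}
          restore-keys: ${{ runner.os }}-ccache-
      - name: Build with ccache
        run: |
          # Route compilation through ccache; compiler name is illustrative
          export CC="ccache arm-none-eabi-gcc"
          cmake -B build && cmake --build build
          ccache --show-stats
```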
Summary
Embedded systems CI/CD requires specialized pipeline patterns that address unique challenges like cross-compilation, multi-architecture support, and hardware-in-the-loop testing. By leveraging Docker containers, Yocto builds, and modern CI/CD platforms, teams can achieve reproducible, automated builds and comprehensive testing for embedded applications.
Key takeaways:
- Use Docker for reproducible toolchain management
- Implement multi-architecture builds with buildx
- Integrate hardware testing with self-hosted runners
- Automate firmware signing and deployment
- Maintain comprehensive testing at all levels
Successful embedded CI/CD pipelines balance automation with the practical constraints of hardware availability and testing requirements.
References
- Docker Buildx: https://docs.docker.com/buildx/
- Yocto Project: https://www.yoctoproject.org/
- PlatformIO: https://platformio.org/
- Zephyr Project: https://zephyrproject.org/
- ESP-IDF: https://docs.espressif.com/projects/esp-idf/