2.4: Continuous Integration Mastery
CI/CD for ASPICE Projects
What is Continuous Integration?
Definition: Automatically build, test, and verify code on every commit
CI/CD Pipeline Stages: A typical pipeline runs from commit through build, static analysis, unit tests, and integration tests to deployment, with a quality gate at each stage.
Benefits:
- Fast Feedback: Know within minutes if commit broke build or tests
- Quality Gates: Automated checks enforce standards (MISRA, coverage, tests)
- Traceability: Every build linked to commit, requirements, tests (ASPICE SUP.8)
- Repeatability: Same build every time (no "works on my machine")
CI/CD for Automotive Embedded Systems
Challenges for Embedded CI/CD
Challenge 1: Cross-Compilation
- Code runs on ECU (ARM, TriCore), not CI server (x86)
- Solution: Cross-compiler toolchain in Docker container
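A common way to wire the cross-compiler into such a container is a CMake toolchain file. A minimal sketch for arm-none-eabi (file name and settings are illustrative, not taken from this project):

```cmake
# arm-none-eabi.cmake — hypothetical toolchain file for a bare-metal ARM target
set(CMAKE_SYSTEM_NAME Generic)          # No OS on the ECU
set(CMAKE_SYSTEM_PROCESSOR arm)

set(CMAKE_C_COMPILER   arm-none-eabi-gcc)
set(CMAKE_CXX_COMPILER arm-none-eabi-g++)

# Link test programs as static libraries: target binaries cannot run on the host
set(CMAKE_TRY_COMPILE_TARGET_TYPE STATIC_LIBRARY)

# Search headers and libraries only in the target sysroot, never on the build host
set(CMAKE_FIND_ROOT_PATH_MODE_PROGRAM NEVER)
set(CMAKE_FIND_ROOT_PATH_MODE_LIBRARY ONLY)
set(CMAKE_FIND_ROOT_PATH_MODE_INCLUDE ONLY)
```

Invoked as `cmake .. -DCMAKE_TOOLCHAIN_FILE=arm-none-eabi.cmake`; the same file works identically inside the CI container and on a developer machine.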
Challenge 2: Hardware Dependencies
- Tests need CAN, sensors, actuators
- Solution: Hardware-in-the-Loop (HIL) in CI pipeline
Challenge 3: Long Build Times
- Embedded builds: 10–30 minutes (vs. web: 1–5 minutes)
- Solution: Incremental builds, caching, parallel jobs
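One sketch of how ccache could be wired into a GitLab job to get compiler caching (the CMake launcher variables and ccache flags are standard; the paths are illustrative):

```yaml
variables:
  CCACHE_DIR: "${CI_PROJECT_DIR}/.ccache"   # Keep the cache inside the project so GitLab can persist it

cache:
  paths:
    - .ccache/

build:
  script:
    # CMake routes every compile through ccache; unchanged files become cache hits
    - cmake .. -DCMAKE_C_COMPILER_LAUNCHER=ccache -DCMAKE_CXX_COMPILER_LAUNCHER=ccache
    - make -j$(nproc)
    - ccache --show-stats    # Hit/miss statistics, useful for tuning
```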
Challenge 4: Safety Standards
- MISRA C:2012, ISO 26262 compliance checks
- Solution: Automated static analysis (cppcheck, PC-lint)
GitLab CI Pipeline Example
Project Structure
acc_ecu/
├── .gitlab-ci.yml # CI/CD configuration
├── Dockerfile # Build environment
├── CMakeLists.txt
├── src/
│ ├── acc_controller.c
│ └── acc_controller.h
├── tests/
│ └── test_acc_controller.cpp
└── docs/
└── traceability_matrix.md
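The CMakeLists.txt itself is not shown in this chapter; a minimal sketch consistent with the pipeline (the ENABLE_COVERAGE option matches the flag used in the test stage; target and file names are assumptions):

```cmake
cmake_minimum_required(VERSION 3.16)
project(acc_ecu C CXX)

option(ENABLE_COVERAGE "Instrument for gcov/gcovr" OFF)
if(ENABLE_COVERAGE)
  add_compile_options(--coverage -O0)   # gcc: emit .gcno/.gcda coverage data
  add_link_options(--coverage)
endif()

set(CMAKE_EXPORT_COMPILE_COMMANDS ON)   # Produces the compile_commands.json artifact

add_executable(acc_ecu.elf src/acc_controller.c)
target_include_directories(acc_ecu.elf PRIVATE src)

enable_testing()
add_subdirectory(tests)                 # Google Test suite (tests/acc_tests)
```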
.gitlab-ci.yml (Complete Pipeline)
# GitLab CI/CD Pipeline for ACC ECU (ASPICE CL2 compliant)
stages:
  - format-check       # Stage 1: Check code formatting
  - build              # Stage 2: Compile code
  - test               # Stage 3: Run unit tests
  - static-analysis    # Stage 4: MISRA C, complexity
  - coverage           # Stage 5: Code coverage analysis
  - traceability       # Stage 6: Verify traceability
  - artifacts          # Stage 7: Package binaries
  - deploy             # Stage 8: Deploy to HIL (optional)

variables:
  GCC_ARM_VERSION: "10.3-2021.10"   # ARM cross-compiler version
  MISRA_COMPLIANCE_THRESHOLD: "0"   # Zero MISRA violations for ASIL-B
  COVERAGE_THRESHOLD: "90"          # 90% coverage for ASIL-B

# Docker image with toolchain
image: ubuntu:22.04

before_script:
  - apt-get update -qq || { echo "Failed to update package list"; exit 1; }
  - apt-get install -y -qq cmake gcc g++ gcovr cppcheck clang-format python3 bc || { echo "Failed to install build dependencies"; exit 1; }   # bc is needed by the coverage gate
# ──────────────────────────────────────────────────────────────────────────
# Stage 1: Format Check
# ──────────────────────────────────────────────────────────────────────────
format-check:
  stage: format-check
  script:
    - echo "Checking code formatting (clang-format)..."
    - clang-format --version
    - clang-format --dry-run --Werror src/*.c src/*.h
  allow_failure: false
  only:
    - merge_requests
    - main
# ──────────────────────────────────────────────────────────────────────────
# Stage 2: Build
# ──────────────────────────────────────────────────────────────────────────
build:
  stage: build
  script:
    - echo "Building ACC ECU..."
    - mkdir -p build && cd build
    - cmake .. -DCMAKE_BUILD_TYPE=Release
    - make -j$(nproc)    # Parallel build
    - echo "Build completed successfully."
  artifacts:
    paths:
      - build/acc_ecu.elf             # Binary artifact
      - build/compile_commands.json
    expire_in: 7 days
  cache:
    key: ${CI_COMMIT_REF_SLUG}
    paths:
      - build/
  only:
    - merge_requests
    - main
# ──────────────────────────────────────────────────────────────────────────
# Stage 3: Unit Tests
# ──────────────────────────────────────────────────────────────────────────
test:
  stage: test
  script:
    - echo "Running unit tests (Google Test)..."
    - mkdir -p build && cd build
    - cmake .. -DCMAKE_BUILD_TYPE=Debug -DENABLE_COVERAGE=ON   # Reconfigure with coverage instrumentation
    - make -j$(nproc)
    - ./tests/acc_tests --gtest_output=xml:test_results.xml
    - echo "All tests passed."
  artifacts:
    reports:
      junit: build/test_results.xml   # Test report for GitLab UI
    paths:
      - build/test_results.xml
    expire_in: 30 days
  cache:
    key: ${CI_COMMIT_REF_SLUG}
    paths:
      - build/    # Cache build/ so the .gcda files reach the coverage stage
  dependencies:
    - build
  only:
    - merge_requests
    - main
# ──────────────────────────────────────────────────────────────────────────
# Stage 4: Static Analysis (MISRA C:2012)
# ──────────────────────────────────────────────────────────────────────────
static-analysis:
  stage: static-analysis
  script:
    - echo "Running static analysis (cppcheck)..."
    - cppcheck --version
    - cppcheck --enable=all --inconclusive --std=c11 --addon=misra.py --xml --xml-version=2 --output-file=cppcheck_report.xml src/
    - echo "Checking for MISRA violations..."
    - python3 scripts/check_misra.py cppcheck_report.xml
    - echo "Static analysis completed."
  artifacts:
    # Note: GitLab's codequality widget expects Code Climate JSON; the cppcheck
    # XML report would first have to be converted to use it.
    paths:
      - cppcheck_report.xml
    expire_in: 30 days
  allow_failure: false   # Fail build if MISRA violations found
  only:
    - merge_requests
    - main
# ──────────────────────────────────────────────────────────────────────────
# Stage 5: Code Coverage
# ──────────────────────────────────────────────────────────────────────────
coverage:
  stage: coverage
  script:
    - echo "Generating code coverage report..."
    - cd build    # build/ (with .gcda files) must be restored from the shared cache; test artifacts alone do not carry coverage data
    - gcovr -r .. --xml --xml-pretty --output coverage.xml
    - gcovr -r .. --html --html-details --output coverage.html
    - COVERAGE=$(gcovr -r .. | grep TOTAL | awk '{print $4}' | sed 's/%//')
    - echo "Coverage: ${COVERAGE}%"
    - |
      if (( $(echo "$COVERAGE < $COVERAGE_THRESHOLD" | bc -l) )); then
        echo "[FAIL] Coverage $COVERAGE% < $COVERAGE_THRESHOLD% (requirement for ASIL-B)"
        exit 1
      fi
    - echo "[PASS] Coverage $COVERAGE% >= $COVERAGE_THRESHOLD%"
  artifacts:
    paths:
      - build/coverage.xml
      - build/coverage.html
    expire_in: 30 days
  coverage: '/TOTAL.*\s+(\d+%)/'   # Extract coverage % for GitLab badge
  cache:
    key: ${CI_COMMIT_REF_SLUG}
    paths:
      - build/
    policy: pull    # Read-only: this job only consumes the cached build tree
  dependencies:
    - test
  only:
    - merge_requests
    - main
# ──────────────────────────────────────────────────────────────────────────
# Stage 6: Traceability Check
# ──────────────────────────────────────────────────────────────────────────
traceability:
  stage: traceability
  script:
    - echo "Verifying traceability (requirements → code → tests)..."
    - python3 scripts/check_traceability.py src/ tests/
    - echo "Traceability check passed."
  artifacts:
    paths:
      - traceability_report.md   # Assumes the script also writes this report
    expire_in: 30 days
  allow_failure: false   # Fail if traceability gaps found
  only:
    - merge_requests
    - main
# ──────────────────────────────────────────────────────────────────────────
# Stage 7: Package Artifacts
# ──────────────────────────────────────────────────────────────────────────
package:            # Job named distinctly from the 'artifacts' stage/keyword to avoid confusion
  stage: artifacts
  script:
    - echo "Packaging release artifacts..."
    - mkdir -p release
    - cp build/acc_ecu.elf release/
    - cp build/coverage.html release/
    - cp traceability_report.md release/
    - tar -czf acc_ecu_${CI_COMMIT_SHORT_SHA}.tar.gz release/
    - echo "Artifacts packaged: acc_ecu_${CI_COMMIT_SHORT_SHA}.tar.gz"
  artifacts:
    paths:
      - acc_ecu_${CI_COMMIT_SHORT_SHA}.tar.gz
    expire_in: 90 days
  dependencies:
    - build
    - coverage
    - traceability
  only:
    - main   # Only on main branch (not on feature branches)
# ──────────────────────────────────────────────────────────────────────────
# Stage 8: Deploy to HIL (Hardware-in-the-Loop)
# ──────────────────────────────────────────────────────────────────────────
deploy-hil:
  stage: deploy
  script:
    - echo "Deploying to HIL test bench..."
    - scp build/acc_ecu.elf hil-server:/opt/hil/binaries/
    - ssh hil-server "cd /opt/hil && ./run_hil_tests.sh acc_ecu.elf"
    - echo "HIL tests completed."
  only:
    - main
  when: manual   # Manual trigger (HIL expensive, not on every commit)
  environment:
    name: hil
    url: http://hil-server:8080/results
Supporting Scripts
scripts/check_misra.py (Verify zero MISRA violations):
#!/usr/bin/env python3
"""Fail the build if the cppcheck MISRA addon reported any violations."""
import xml.etree.ElementTree as ET
import sys
import re

# Rule-ID patterns emitted by the cppcheck MISRA addon
MISRA_PATTERNS = [
    r'^misra-c2012-\d+\.\d+',  # MISRA C:2012 format (e.g., misra-c2012-1.1)
    r'^misra-c2004-',          # MISRA C:2004 fallback
]

def check_misra_violations(xml_file):
    tree = ET.parse(xml_file)
    root = tree.getroot()
    violations = []
    for error in root.findall('.//error'):
        error_id = error.attrib.get('id', '')
        # Check against known MISRA rule patterns using regex
        if any(re.match(pattern, error_id) for pattern in MISRA_PATTERNS):
            # In cppcheck XML v2, file/line live on a child <location> element,
            # not on the <error> element itself
            loc = error.find('location')
            violations.append({
                'file': loc.attrib.get('file', '?') if loc is not None else '?',
                'line': loc.attrib.get('line', '?') if loc is not None else '?',
                'msg': error.attrib.get('msg', ''),
                'id': error_id,
            })
    if violations:
        print(f"[FAIL] Found {len(violations)} MISRA C:2012 violations:")
        for v in violations:
            print(f"  - {v['file']}:{v['line']}: {v['msg']} ({v['id']})")
        sys.exit(1)
    print("[PASS] No MISRA violations found.")
    sys.exit(0)

if __name__ == '__main__':
    check_misra_violations(sys.argv[1])
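A quick way to sanity-check the filter logic is to run it against a hand-written cppcheck report. The XML below mimics the v2 format; the rule IDs, file name, and messages are made up for illustration:

```python
import re
import xml.etree.ElementTree as ET

MISRA_PATTERNS = [r'^misra-c2012-\d+\.\d+', r'^misra-c2004-']

# Hand-written stand-in for a cppcheck --xml-version=2 report
SAMPLE_REPORT = """<?xml version="1.0"?>
<results version="2">
  <errors>
    <error id="misra-c2012-10.4" severity="style" msg="Essential type mismatch">
      <location file="src/acc_controller.c" line="42"/>
    </error>
    <error id="unusedVariable" severity="style" msg="Unused variable: tmp">
      <location file="src/acc_controller.c" line="10"/>
    </error>
  </errors>
</results>"""

root = ET.fromstring(SAMPLE_REPORT)
violations = [
    e.attrib['id']
    for e in root.findall('.//error')
    if any(re.match(p, e.attrib.get('id', '')) for p in MISRA_PATTERNS)
]
print(violations)  # Only the MISRA finding survives the filter: ['misra-c2012-10.4']
```

The ordinary cppcheck diagnostic (`unusedVariable`) is ignored; only findings whose rule ID matches a MISRA pattern count against the zero-violation gate.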
scripts/check_traceability.py (Verify all code traced to requirements):
#!/usr/bin/env python3
import re
import glob
import sys
def extract_implements_tags(file_path):
"""Extract @implements tags from C/C++ files"""
implements = []
with open(file_path, 'r') as f:
content = f.read()
pattern = r'@implements\s+\[([^\]]+)\]'
matches = re.findall(pattern, content)
implements.extend(matches)
return implements
def check_traceability():
"""Check that all functions have @implements tags"""
src_files = glob.glob('src/*.c')
missing_traceability = []
for src_file in src_files:
implements = extract_implements_tags(src_file)
# Count functions in file (improved regex for C function signatures)
with open(src_file, 'r') as f:
content = f.read()
# Match return_type function_name( including pointers, static, const, inline, extern
pattern = r'^\s*(?:static\s+)?(?:inline\s+)?(?:extern\s+)?(?:const\s+)?[\w\s*]+\s+(\w+)\s*\('
functions = re.findall(pattern, content, re.MULTILINE)
num_functions = len(functions)
if num_functions > len(implements):
missing = num_functions - len(implements)
missing_traceability.append((src_file, missing))
if missing_traceability:
print("[FAIL] Traceability gaps found:")
for file, missing in missing_traceability:
print(f" - {file}: {missing} functions missing @implements tags")
sys.exit(1)
else:
print("[PASS] All functions have traceability (@implements tags)")
sys.exit(0)
if __name__ == '__main__':
check_traceability()
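For reference, the tag convention the script expects looks like this in the C source (the requirement ID and function are invented); the extraction regex can be exercised directly on such a snippet:

```python
import re

# Example of an annotated C function as the traceability check expects it
C_SNIPPET = '''
/**
 * @brief      Compute target acceleration for ACC.
 * @implements [SWR-ACC-017]
 */
static float acc_compute_accel(float gap_m, float speed_mps)
{
    return 0.0f;  /* stub */
}
'''

tags = re.findall(r'@implements\s+\[([^\]]+)\]', C_SNIPPET)
print(tags)  # ['SWR-ACC-017']
```

One tag per function keeps the heuristic function-count comparison honest; the requirement ID inside the brackets is what links code back to the requirements baseline.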
CI/CD Best Practices
1. Fail Fast
Principle: Run fastest checks first, fail immediately on error
Stage Order (fast → slow):
1. Format check (10 seconds)
2. Build (2 minutes)
3. Unit tests (5 minutes)
4. Static analysis (3 minutes)
5. HIL tests (30 minutes) ← Slowest, run last or manually
Benefit: Developer gets feedback in 10 seconds (formatting) instead of waiting 30 minutes (HIL)
2. Cache Dependencies
Problem: Installing dependencies every build (slow)
Solution: Cache toolchain, libraries
Example (.gitlab-ci.yml):
cache:
  key: ${CI_COMMIT_REF_SLUG}
  paths:
    - build/         # Incremental build
    - .apt-cache/    # Cached apt packages
    - .ccache/       # Compiler cache
Benefit: Build time reduced from 10 minutes → 2 minutes
3. Parallel Jobs
Problem: Sequential stages are slow (10 min + 5 min + 3 min = 18 min total)
Solution: Run independent stages in parallel
Example:
# These jobs can run in parallel (same stage, no dependencies)
static-analysis:
  stage: analysis
  script: [cppcheck]

complexity-check:
  stage: analysis
  script: [pmccabe]   # Cyclomatic-complexity tool for C (radon only analyzes Python)

coverage:
  stage: analysis
  script: [gcov]
Benefit: Total time = max(cppcheck, radon, gcov) instead of sum
4. Artifacts and Reports
Store Build Artifacts for traceability (ASPICE SUP.8):
artifacts:
  paths:
    - build/acc_ecu.elf        # Binary
    - build/test_results.xml   # Test report
    - build/coverage.html      # Coverage report
    - traceability_report.md   # Traceability matrix
  expire_in: 90 days           # Retention for audit (ISO 26262 requires records)
GitLab Integrations:
artifacts:
  reports:
    junit: test_results.xml      # Tests in GitLab UI
    coverage_report:             # Coverage in GitLab UI
      coverage_format: cobertura
      path: coverage.xml
    codequality: cppcheck.json   # Code quality in merge request
5. Quality Gates
Enforce Standards Automatically
Example Quality Gates:
# Fail if coverage < 90%
- COVERAGE=$(gcovr | grep TOTAL | awk '{print $4}' | sed 's/%//')
- if (( $(echo "$COVERAGE < 90" | bc -l) )); then exit 1; fi
# Fail if MISRA violations > 0 (coarse grep; check_misra.py above is more precise)
- if grep -q "misra-c2012" cppcheck_report.xml; then exit 1; fi
# Fail if cyclomatic complexity > 15 (pmccabe prints complexity first; radon is Python-only)
- pmccabe src/*.c | awk '$1 > 15 {exit 1}'
Benefit: Cannot merge code that violates standards (automatic enforcement)
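The shell gate above depends on gcovr's text-output format; a slightly more robust variant reads the `line-rate` attribute from gcovr's Cobertura XML instead. A minimal sketch (the sample XML and the 90% threshold are illustrative):

```python
import xml.etree.ElementTree as ET

THRESHOLD = 90.0  # Percent, per the ASIL-B requirement above

# Minimal stand-in for gcovr's Cobertura output (only line-rate matters here)
SAMPLE_COVERAGE_XML = '<coverage line-rate="0.93" branch-rate="0.87"></coverage>'

root = ET.fromstring(SAMPLE_COVERAGE_XML)
coverage_pct = float(root.attrib['line-rate']) * 100.0  # 0.93 -> 93.0%
gate_passed = coverage_pct >= THRESHOLD
print(f"coverage={coverage_pct:.1f}% gate_passed={gate_passed}")
```

In the real pipeline the script would parse `build/coverage.xml` and `sys.exit(1)` when the gate fails, so the job (and the merge request) goes red automatically.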
Docker for Reproducible Builds
Dockerfile (Embedded Toolchain)
# Dockerfile for ACC ECU build environment
FROM ubuntu:22.04
# Install build tools
RUN apt-get update && apt-get install -y \
    cmake \
    gcc \
    g++ \
    gcc-arm-none-eabi \
    gcovr \
    cppcheck \
    clang-format \
    python3 \
    python3-pip \
    git \
    && rm -rf /var/lib/apt/lists/*
# Install Python tools
RUN pip3 install pyyaml jinja2
# Set working directory
WORKDIR /workspace
# Default command
CMD ["/bin/bash"]
Build Docker Image:
docker build -t acc-ecu-toolchain:1.0 .
Use in CI/CD:
image: acc-ecu-toolchain:1.0

build:
  script:
    - cmake .. && make
Benefit: Same toolchain for every developer and CI server ("works on my machine" problem solved)
Advanced CI/CD Topics
1. Matrix Builds (Multiple Targets)
Problem: Code must run on multiple ECUs (TriCore TC397, ARM Cortex-M7)
Solution: Matrix build (test all targets)
build:
  stage: build
  parallel:
    matrix:
      - TARGET: [tricore_tc397, arm_cortex_m7, x86_linux]
  script:
    - mkdir -p build && cd build
    - cmake .. -DTARGET=$TARGET
    - make
  artifacts:
    paths:
      - build/acc_ecu_${TARGET}.elf
Benefit: One commit tests all targets (catch platform-specific bugs early)
2. Scheduled Pipelines (Nightly Builds)
Problem: HIL tests too slow for every commit (30 minutes)
Solution: Run HIL tests nightly (not on every commit)
hil-tests:
  stage: deploy
  script:
    - ./run_hil_tests.sh
  only:
    - schedules   # Only run on scheduled pipelines (nightly)
GitLab Schedule: Settings → CI/CD → Schedules → Add (daily at 2 AM)
3. Deployment to HIL Bench
Workflow: CI → Build → Deploy to HIL → Run Tests → Report
Example HIL Integration:
deploy-hil:
  stage: deploy
  script:
    - echo "Deploying to dSPACE SCALEXIO HIL..."
    - scp build/acc_ecu.elf hil-server:/opt/dspace/binaries/
    - ssh hil-server "cd /opt/dspace && python run_hil_suite.py --binary acc_ecu.elf"
    - scp hil-server:/opt/dspace/results.xml ./hil_results.xml
  artifacts:
    reports:
      junit: hil_results.xml
  when: manual   # Manual trigger (HIL expensive)
Summary
CI/CD Pipeline Stages:
- Format Check: clang-format (10s)
- Build: CMake + Make (2 min)
- Test: Google Test (5 min)
- Static Analysis: cppcheck, MISRA C:2012 (3 min)
- Coverage: gcov, gcovr (2 min)
- Traceability: Verify @implements tags (1 min)
- Artifacts: Package binaries, reports (1 min)
- Deploy: HIL tests (30 min, manual trigger)
Best Practices:
- Fail Fast: Run fastest checks first
- Cache: Toolchain, dependencies, incremental builds
- Parallel Jobs: Run independent stages concurrently
- Quality Gates: Enforce coverage ≥90%, zero MISRA violations
- Docker: Reproducible builds across all environments
ASPICE Compliance: CI/CD supports SUP.8 (configuration management), SUP.9 (problem resolution), SWE.4 (unit verification)
Chapter 34 Complete: Thinking Like a Software Engineer covers clean code, TDD, code reviews, and CI/CD mastery
Next: Chapter 35 — Working with AI Assistants (effective prompting, reviewing AI output, HITL decision-making)