2.2: Test-Driven Development

TDD for Safety-Critical Embedded Systems

What is Test-Driven Development?

Definition: Write tests first, then write code to make tests pass

TDD Cycle (Red-Green-Refactor): The following diagram illustrates the three-phase TDD cycle: write a failing test (Red), write minimal code to pass (Green), then improve code structure (Refactor).

TDD Cycle

Why TDD for ASPICE?

  • SWE.4 Requirement: "Develop unit test cases" (TDD drives near-complete test coverage from the start)
  • Defect Prevention: Catch bugs early (before they reach integration, HIL, or field)
  • Design Improvement: Writing tests first forces clean, testable interfaces

Industry Data:

  • TDD reduces defect density by 40–80% (Microsoft and IBM studies)
  • TDD increases test coverage from 60–70% (typical) to 90–100%
  • TDD increases initial development time by 15–35%, but reduces debugging time by 50–80%

TDD Example: ACC Speed Control

Requirement

[SWE-045-11] Calculate Safe Following Distance

The ACC software shall calculate safe following distance as:
  d_safe = v × t_following
where:
  v = vehicle speed (m/s)
  t_following = 2.0 seconds (constant)

Valid range: 0 ≤ d_safe ≤ 200 meters

Step 1: RED (Write Failing Test)

Test First (before implementation):

/**
 * @test TC-SWE-045-11-1: Calculate safe distance at 50 km/h
 * @verifies [SWE-045-11] Safe Following Distance
 */
TEST(ACC_SafeDistance, CalculateSafeDistance_50kmh_Returns27m) {
    /* Arrange */
    float vehicle_speed_kmh = 50.0F;  /* 50 km/h ≈ 13.89 m/s */
    float expected_distance_m = 27.78F;  /* 13.89 m/s × 2.0s ≈ 27.78m */

    /* Act */
    float actual_distance_m = ACC_CalculateSafeDistance(vehicle_speed_kmh);

    /* Assert */
    EXPECT_NEAR(actual_distance_m, expected_distance_m, 0.1F);  /* Tolerance ±0.1m */
}

Compile: [FAIL] Build fails (function ACC_CalculateSafeDistance doesn't exist yet)


Step 2: GREEN (Write Minimal Code)

Implementation (just enough to pass test):

/**
 * @brief Calculate safe following distance
 * @implements [SWE-045-11] Safe Following Distance
 * @param[in] vehicle_speed_kmh Current vehicle speed in km/h
 * @return Safe following distance in meters
 */
float ACC_CalculateSafeDistance(float vehicle_speed_kmh) {
    const float FOLLOWING_TIME_SEC = 2.0F;  /* 2 seconds */
    const float KMH_TO_MS = 1.0F / 3.6F;    /* Convert km/h to m/s */

    float vehicle_speed_ms = vehicle_speed_kmh * KMH_TO_MS;
    float safe_distance_m = vehicle_speed_ms * FOLLOWING_TIME_SEC;

    return safe_distance_m;
}

Run Tests: [PASS] Test passes


Step 3: REFACTOR (Improve Code Quality)

Question: Is the code clean? Can it be improved?

Review:

  • [PASS] Clear variable names
  • [PASS] Constants defined
  • [PASS] Function is small (5 lines)
  • [PASS] Single responsibility

Conclusion: No refactoring needed (code already clean)


Step 4: REPEAT (Add More Tests)

Test 2: Boundary - Zero Speed

/**
 * @test TC-SWE-045-11-2: Zero speed returns zero distance
 */
TEST(ACC_SafeDistance, CalculateSafeDistance_0kmh_Returns0m) {
    float vehicle_speed_kmh = 0.0F;
    float actual_distance_m = ACC_CalculateSafeDistance(vehicle_speed_kmh);
    EXPECT_FLOAT_EQ(actual_distance_m, 0.0F);
}

Run: [PASS] Test passes (implementation already handles this)

Test 3: Boundary - Maximum Speed

/**
 * @test TC-SWE-045-11-3: Maximum speed (150 km/h)
 */
TEST(ACC_SafeDistance, CalculateSafeDistance_150kmh_Returns83m) {
    float vehicle_speed_kmh = 150.0F;  /* 150 km/h ≈ 41.67 m/s */
    float expected_distance_m = 83.34F;  /* 41.67 × 2.0 ≈ 83.34m */

    float actual_distance_m = ACC_CalculateSafeDistance(vehicle_speed_kmh);
    EXPECT_NEAR(actual_distance_m, expected_distance_m, 0.1F);
}

Run: [PASS] Test passes

Test 4: Negative Speed (Invalid Input)

/**
 * @test TC-SWE-045-11-4: Negative speed should return 0 (defensive)
 */
TEST(ACC_SafeDistance, CalculateSafeDistance_NegativeSpeed_Returns0) {
    float vehicle_speed_kmh = -10.0F;  /* Invalid input */
    float actual_distance_m = ACC_CalculateSafeDistance(vehicle_speed_kmh);
    EXPECT_FLOAT_EQ(actual_distance_m, 0.0F);  /* Defensive: Return 0 */
}

Run: [FAIL] Test fails (current implementation returns negative distance)

Fix Implementation (add input validation):

float ACC_CalculateSafeDistance(float vehicle_speed_kmh) {
    const float FOLLOWING_TIME_SEC = 2.0F;
    const float KMH_TO_MS = 1.0F / 3.6F;

    /* Defensive: Reject negative speed */
    if (vehicle_speed_kmh < 0.0F) {
        return 0.0F;
    }

    float vehicle_speed_ms = vehicle_speed_kmh * KMH_TO_MS;
    float safe_distance_m = vehicle_speed_ms * FOLLOWING_TIME_SEC;

    return safe_distance_m;
}

Run: [PASS] All tests pass
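One requirement clause is still untested: the valid range caps d_safe at 200 meters, but nothing so far enforces or checks that upper bound. Continuing the REPEAT step, a possible clamped variant is sketched below (the `_Clamped` name and `MAX_SAFE_DISTANCE_M` constant are illustrative additions, not from the original code):

```c
/* Sketch only: enforces the 200 m upper bound from [SWE-045-11].
 * Function name and constant are illustrative additions. */
float ACC_CalculateSafeDistance_Clamped(float vehicle_speed_kmh) {
    const float FOLLOWING_TIME_SEC = 2.0F;
    const float KMH_TO_MS = 1.0F / 3.6F;
    const float MAX_SAFE_DISTANCE_M = 200.0F;  /* Upper bound from requirement */

    /* Defensive: Reject negative speed */
    if (vehicle_speed_kmh < 0.0F) {
        return 0.0F;
    }

    float safe_distance_m = vehicle_speed_kmh * KMH_TO_MS * FOLLOWING_TIME_SEC;

    /* Defensive: Clamp to the requirement's maximum */
    if (safe_distance_m > MAX_SAFE_DISTANCE_M) {
        safe_distance_m = MAX_SAFE_DISTANCE_M;
    }

    return safe_distance_m;
}
```

A matching RED test would feed an implausibly high speed (e.g. 400 km/h, which yields about 222 m unclamped) and expect exactly 200 m.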


TDD Benefits

1. Design for Testability

TDD Forces Clean Interfaces

Before TDD (hard to test):

void ACC_Update(void) {
    /* Reads global variables, calls CAN directly, writes to actuators */
    /* Hard to test: Requires real CAN hardware, actuators */
}

After TDD (testable design):

/**
 * @brief Calculate target speed (pure function, easy to test)
 */
float ACC_CalculateTargetSpeed(float obstacle_distance_m, float vehicle_speed_kmh) {
    /* No I/O, no globals: Easy to test */
    const float FOLLOWING_TIME_SEC = 2.0F;
    float safe_distance_m = (vehicle_speed_kmh / 3.6F) * FOLLOWING_TIME_SEC;

    if (obstacle_distance_m < safe_distance_m) {
        return vehicle_speed_kmh - 5.0F;  /* Decelerate */
    }

    return vehicle_speed_kmh;  /* Maintain speed */
}

/**
 * @brief Main control loop (orchestrates testable functions)
 */
void ACC_Update(void) {
    /* Thin orchestration layer */
    float distance = SensorFusion_GetObstacleDistance();
    float target_speed = ACC_CalculateTargetSpeed(distance, g_vehicle_speed);
    Actuator_SetSpeed(target_speed);
}

TDD Principle: If a function is hard to test, it's poorly designed (TDD forces you to fix the design)
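Because ACC_CalculateTargetSpeed is a pure function, exercising it needs no CAN stubs or actuator mocks. The sketch below repeats the function body from above so it compiles on its own, and checks both branches with plain assertions (the real project would express these as Google Test cases):

```c
/* Copied from the example above so this sketch is self-contained */
float ACC_CalculateTargetSpeed(float obstacle_distance_m, float vehicle_speed_kmh) {
    const float FOLLOWING_TIME_SEC = 2.0F;
    float safe_distance_m = (vehicle_speed_kmh / 3.6F) * FOLLOWING_TIME_SEC;

    if (obstacle_distance_m < safe_distance_m) {
        return vehicle_speed_kmh - 5.0F;  /* Decelerate */
    }

    return vehicle_speed_kmh;  /* Maintain speed */
}
```

At 100 km/h the safe distance is about 55.6 m, so an obstacle at 20 m triggers deceleration (95 km/h) while one at 100 m does not.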


2. Regression Safety

Example: Refactoring with Confidence

Scenario: Optimization needed (Kalman filter too slow)

Before Optimization:

float KalmanFilter_Update(float measurement) {
    /* Original implementation: 50ms */
    /* ... 20 lines of matrix math ... */
}

With TDD:

  1. Write 10 unit tests for Kalman filter (covering all cases)
  2. All tests pass [PASS]
  3. Optimize implementation (reduce to 20ms)
  4. Run tests again [PASS] All pass
  5. Confidence: Optimization didn't break correctness

Without TDD:

  • Optimize code
  • Hope it still works
  • Bugs discovered later (HIL test, proving ground, or worst: field)
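The chapter does not show the filter internals, but the idea behind step 1 can be illustrated with a deliberately simplified 1D filter (scalar state, constant model; the project's real filter is matrix-based and not shown here). The tests pin observable behavior, such as convergence toward a constant measurement, so any later optimization must preserve it:

```c
/* Minimal 1D Kalman filter sketch (illustrative only).
 * State: scalar estimate and its error variance. */
typedef struct {
    float x;  /* State estimate */
    float p;  /* Estimate variance */
    float q;  /* Process noise */
    float r;  /* Measurement noise */
} Kalman1D;

void Kalman1D_Init(Kalman1D *kf, float q, float r) {
    kf->x = 0.0F;
    kf->p = 1.0F;
    kf->q = q;
    kf->r = r;
}

float Kalman1D_Update(Kalman1D *kf, float measurement) {
    kf->p += kf->q;                      /* Predict: variance grows */
    float k = kf->p / (kf->p + kf->r);   /* Kalman gain, always in (0, 1) */
    kf->x += k * (measurement - kf->x);  /* Correct toward measurement */
    kf->p *= (1.0F - k);                 /* Variance shrinks after update */
    return kf->x;
}
```

A behavior-pinning test would assert that repeated updates with a constant measurement converge toward that value without overshooting; the optimized filter must pass the same test.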

3. Documentation via Tests

Tests Show How to Use the Code

Example: PID Controller

/**
 * @test TC-SWE-045-12-1: PID controller with zero error returns zero output
 */
TEST(PID_Controller, ZeroError_ReturnsZeroOutput) {
    PID_Init(1.0F, 0.1F, 0.01F);  /* Kp=1.0, Ki=0.1, Kd=0.01 */

    float error = 0.0F;
    float dt = 0.1F;  /* 100ms */

    float output = PID_Calculate(error, dt);

    EXPECT_FLOAT_EQ(output, 0.0F);
}

/**
 * @test TC-SWE-045-12-2: PID controller with positive error increases output
 */
TEST(PID_Controller, PositiveError_IncreasesOutput) {
    PID_Init(1.0F, 0.1F, 0.01F);

    float error = 10.0F;  /* 10 km/h too slow */
    float dt = 0.1F;

    float output = PID_Calculate(error, dt);

    EXPECT_GT(output, 0.0F);  /* Output should be positive (throttle) */
}

Benefit: A new engineer can read the tests to understand API usage (often more reliable than prose documentation, because passing tests cannot drift out of date)
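For completeness, here is one hypothetical minimal implementation consistent with the two tests above. The PID_Init/PID_Calculate signatures are taken from the test code; the internals (static state, standard discrete PID form) are an assumption, since the chapter does not show the real module:

```c
/* Hypothetical minimal PID module consistent with the tests above.
 * Static state and discrete PID form are illustrative assumptions. */
static float s_kp, s_ki, s_kd;
static float s_integral;
static float s_prev_error;

void PID_Init(float kp, float ki, float kd) {
    s_kp = kp;
    s_ki = ki;
    s_kd = kd;
    s_integral = 0.0F;    /* Reset accumulated state between runs */
    s_prev_error = 0.0F;
}

float PID_Calculate(float error, float dt) {
    s_integral += error * dt;                          /* I term accumulates */
    float derivative = (error - s_prev_error) / dt;    /* D term: error rate */
    s_prev_error = error;

    return (s_kp * error) + (s_ki * s_integral) + (s_kd * derivative);
}
```

With zero error all three terms vanish (output 0), and a positive error produces a positive output, exactly the behavior the two tests document.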


TDD for Embedded C (Google Test)

Project Structure

acc_ecu/
├── src/
│   ├── acc_controller.c       /* Implementation */
│   └── acc_controller.h
├── tests/
│   ├── test_acc_controller.cpp  /* Unit tests (C++ using Google Test framework) */
│   └── test_main.cpp
├── CMakeLists.txt             /* Build configuration */
└── README.md

Note: Tests are written in C++ to leverage the Google Test framework while testing C code
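An alternative to wrapping the #include at each test site is to make the header itself C++-safe. This is a common pattern, sketched here with an illustrative include guard name:

```c
/* acc_controller.h (sketch): the extern "C" guard lets the same
 * header be included from both C sources and C++ test files. */
#ifndef ACC_CONTROLLER_H
#define ACC_CONTROLLER_H

#ifdef __cplusplus
extern "C" {
#endif

void ACC_Init(void);
void ACC_Deinit(void);
float ACC_CalculateSafeDistance(float vehicle_speed_kmh);

#ifdef __cplusplus
}
#endif

#endif /* ACC_CONTROLLER_H */
```

With this guard in place, the test file can include the header directly without its own extern "C" wrapper.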

Test Setup (Google Test)

CMakeLists.txt:

cmake_minimum_required(VERSION 3.14)
project(ACC_ECU_Tests)

# Google Test
include(FetchContent)
FetchContent_Declare(
  googletest
  GIT_REPOSITORY https://github.com/google/googletest.git
  GIT_TAG release-1.12.1
)
FetchContent_MakeAvailable(googletest)

# Test executable
add_executable(acc_tests
    tests/test_acc_controller.cpp
    src/acc_controller.c
)

target_link_libraries(acc_tests gtest_main)

# Enable coverage (GCC/Clang --coverage flag)
target_compile_options(acc_tests PRIVATE --coverage)
target_link_options(acc_tests PRIVATE --coverage)

Writing Tests (Google Test)

tests/test_acc_controller.cpp:

#include <gtest/gtest.h>

extern "C" {
    #include "acc_controller.h"  /* C header */
}

/**
 * @brief Test fixture for ACC controller tests
 */
class ACC_ControllerTest : public ::testing::Test {
protected:
    void SetUp() override {
        /* Initialize before each test */
        ACC_Init();
    }

    void TearDown() override {
        /* Cleanup after each test */
        ACC_Deinit();
    }
};

/**
 * @test TC-SWE-045-11-1: Calculate safe distance at 50 km/h
 * @verifies [SWE-045-11] Safe Following Distance
 */
TEST_F(ACC_ControllerTest, CalculateSafeDistance_50kmh_Returns27m) {
    float vehicle_speed_kmh = 50.0F;
    float expected_distance_m = 27.78F;

    float actual_distance_m = ACC_CalculateSafeDistance(vehicle_speed_kmh);

    EXPECT_NEAR(actual_distance_m, expected_distance_m, 0.1F);
}

/**
 * @test TC-SWE-045-11-2: Zero speed returns zero distance
 */
TEST_F(ACC_ControllerTest, CalculateSafeDistance_0kmh_Returns0m) {
    float actual_distance_m = ACC_CalculateSafeDistance(0.0F);
    EXPECT_FLOAT_EQ(actual_distance_m, 0.0F);
}

/**
 * @test TC-SWE-045-11-3: Negative speed returns 0 (defensive)
 */
TEST_F(ACC_ControllerTest, CalculateSafeDistance_NegativeSpeed_Returns0) {
    float actual_distance_m = ACC_CalculateSafeDistance(-10.0F);
    EXPECT_FLOAT_EQ(actual_distance_m, 0.0F);
}

/**
 * @test TC-SWE-045-11-4: Maximum speed (150 km/h)
 */
TEST_F(ACC_ControllerTest, CalculateSafeDistance_150kmh_Returns83m) {
    float expected_distance_m = 83.34F;
    float actual_distance_m = ACC_CalculateSafeDistance(150.0F);
    EXPECT_NEAR(actual_distance_m, expected_distance_m, 0.1F);
}

Running Tests

Build and Run:

# Build
mkdir build && cd build
cmake ..
make

# Run tests
./acc_tests

# Output:
# [==========] Running 4 tests from 1 test suite.
# [----------] 4 tests from ACC_ControllerTest
# [ RUN      ] ACC_ControllerTest.CalculateSafeDistance_50kmh_Returns27m
# [       OK ] ACC_ControllerTest.CalculateSafeDistance_50kmh_Returns27m (0 ms)
# [ RUN      ] ACC_ControllerTest.CalculateSafeDistance_0kmh_Returns0m
# [       OK ] ACC_ControllerTest.CalculateSafeDistance_0kmh_Returns0m (0 ms)
# [ RUN      ] ACC_ControllerTest.CalculateSafeDistance_NegativeSpeed_Returns0
# [       OK ] ACC_ControllerTest.CalculateSafeDistance_NegativeSpeed_Returns0 (0 ms)
# [ RUN      ] ACC_ControllerTest.CalculateSafeDistance_150kmh_Returns83m
# [       OK ] ACC_ControllerTest.CalculateSafeDistance_150kmh_Returns83m (0 ms)
# [----------] 4 tests from ACC_ControllerTest (0 ms total)
# [==========] 4 tests from 1 test suite ran. (0 ms total)
# [  PASSED  ] 4 tests.

# Coverage report
gcov ../src/acc_controller.c
# Lines executed: 100.00% of 12

Test Categories for ASPICE

Test Strategy (SWE.4)

ASPICE SWE.4 BP1: "Develop unit test cases according to test strategy"

Test Categories:

  1. Nominal Cases (happy path):

    • Typical values (50 km/h)
    • Expected behavior
  2. Boundary Cases:

    • Minimum value (0 km/h)
    • Maximum value (150 km/h)
    • Just below/above boundaries
  3. Error Cases (defensive):

    • Invalid input (negative speed)
    • Out-of-range values
    • Null pointers
  4. Equivalence Partitions:

    • Low speed (0-50 km/h)
    • Medium speed (50-100 km/h)
    • High speed (100-150 km/h)
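The partitions above lend themselves to a table-driven test: one representative input per partition with its expected output. The sketch below uses plain C and a boolean-style result (a Google Test version would loop over the table with EXPECT_NEAR); the implementation is repeated so the sketch is self-contained:

```c
/* Implementation repeated from earlier so this compiles on its own */
float ACC_CalculateSafeDistance(float vehicle_speed_kmh) {
    const float FOLLOWING_TIME_SEC = 2.0F;

    if (vehicle_speed_kmh < 0.0F) {
        return 0.0F;  /* Defensive: reject negative speed */
    }

    return (vehicle_speed_kmh / 3.6F) * FOLLOWING_TIME_SEC;
}

/* One representative input per equivalence partition */
typedef struct {
    float speed_kmh;
    float expected_m;
} PartitionCase;

static const PartitionCase k_cases[] = {
    {  30.0F, 16.67F },  /* Low speed partition */
    {  75.0F, 41.67F },  /* Medium speed partition */
    { 120.0F, 66.67F },  /* High speed partition */
};

int RunPartitionCases(void) {
    for (unsigned i = 0U; i < sizeof(k_cases) / sizeof(k_cases[0]); ++i) {
        float diff = ACC_CalculateSafeDistance(k_cases[i].speed_kmh)
                     - k_cases[i].expected_m;
        if (diff < 0.0F) {
            diff = -diff;  /* Absolute difference without a math.h dependency */
        }
        if (diff > 0.1F) {
            return 0;  /* Partition case outside tolerance */
        }
    }
    return 1;  /* All partition cases within tolerance */
}
```

Adding a new partition then means adding one table row, not one test function.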

Example Test Plan:

## Test Plan: ACC_CalculateSafeDistance

| Test ID | Category | Input | Expected Output | Rationale |
|---------|----------|-------|-----------------|-----------|
| TC-1 | Nominal | 50 km/h | 27.78 m | Typical highway speed |
| TC-2 | Boundary | 0 km/h | 0 m | Vehicle stopped |
| TC-3 | Boundary | 150 km/h | 83.34 m | Maximum speed |
| TC-4 | Error | -10 km/h | 0 m | Invalid input (defensive) |
| TC-5 | Equivalence | 30 km/h | 16.67 m | Low speed partition |
| TC-6 | Equivalence | 100 km/h | 55.56 m | Medium/high partition boundary |

TDD for Legacy Code

Challenge: Existing Code Without Tests

Problem: How to add tests to untested code?

Strategy: Characterization Tests

  1. Write tests that describe current behavior (even if buggy)
  2. Refactor to improve design
  3. Fix bugs (now safe, tests catch regressions)

Example: Legacy Function

/* Legacy code: No tests, hard to understand */
int ACC_DoSomething(int x) {
    /* 100 lines of complex, untested legacy implementation */
}

Step 1: Characterization Test (describe current behavior):

TEST(ACC_Legacy, DoSomething_Input10_ReturnsWhat) {
    /* Don't know what it should return, so test what it currently returns */
    int result = ACC_DoSomething(10);
    EXPECT_EQ(result, 42);  /* Whatever it currently returns */
}

Step 2: Add More Tests (cover all branches):

TEST(ACC_Legacy, DoSomething_Input0_Returns0) {
    EXPECT_EQ(ACC_DoSomething(0), 0);
}

TEST(ACC_Legacy, DoSomething_InputNegative_ReturnsNegative) {
    EXPECT_EQ(ACC_DoSomething(-5), -21);
}

Step 3: Refactor (improve code, tests ensure no behavior change):

/* Refactored: Small, testable functions */
int ACC_CalculateHelper1(int x) { /* ... */ }
int ACC_CalculateHelper2(int x) { /* ... */ }

int ACC_DoSomething(int x) {
    return ACC_CalculateHelper1(x) + ACC_CalculateHelper2(x);
}

Run Tests: [PASS] All pass (behavior unchanged, but code cleaner)
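To make the refactoring step concrete, here is a toy stand-in whose behavior matches the three characterization values above (42, 0, and -21 are consistent with an integer formula of roughly 4.2·x). The helper split is purely illustrative; the real legacy body is not shown in the chapter:

```c
/* Toy legacy formula consistent with the characterization tests:
 * ACC_DoSomething(10) == 42, (0) == 0, (-5) == -21.
 * Both helpers are illustrative stand-ins, not recovered real code. */
int ACC_CalculateHelper1(int x) {
    return 4 * x;     /* One sub-calculation of the formula */
}

int ACC_CalculateHelper2(int x) {
    return x / 5;     /* Remainder of the formula (integer division) */
}

int ACC_DoSomething(int x) {
    /* Refactored: sum of two small, individually testable helpers */
    return ACC_CalculateHelper1(x) + ACC_CalculateHelper2(x);
}
```

Because the characterization tests were written first, this split can be verified to preserve the legacy behavior exactly.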


Summary

TDD Process: RED (write failing test) → GREEN (make it pass) → REFACTOR (improve code)

TDD Benefits:

  1. High Coverage: Code is written only to make tests pass, so coverage approaches 100%
  2. Design for Testability: Forces clean, modular interfaces
  3. Regression Safety: Refactor with confidence (tests catch breaks)
  4. Living Documentation: Tests show how to use the code

ASPICE Alignment: TDD naturally satisfies SWE.4 (unit testing) with high coverage and traceability

Tools: Google Test (C++), Unity (C), CMake + gcov for coverage

Next: Code Review Excellence (34.03) — Effective code reviews that catch defects early


Self-Assessment Quiz

Test your understanding of Test-Driven Development. Answers are at the bottom.

Question 1: What is the correct TDD cycle order?

  • A) GREEN → RED → REFACTOR
  • B) RED → REFACTOR → GREEN
  • C) RED → GREEN → REFACTOR
  • D) REFACTOR → RED → GREEN

Question 2: In TDD, when do you write the implementation code?

  • A) Before writing any tests
  • B) After the test is written and fails (RED)
  • C) After refactoring is complete
  • D) At the same time as writing tests

Question 3: Which test category checks for invalid input like negative speed?

  • A) Nominal case
  • B) Boundary case
  • C) Error case (defensive)
  • D) Equivalence partition

Question 4: What is a "characterization test" used for?

  • A) Testing new code before implementation
  • B) Documenting current behavior of legacy code before refactoring
  • C) Testing performance characteristics
  • D) Testing UI character display

Question 5: For ASPICE SWE.4 compliance, what test coverage should you target?

  • A) 50%
  • B) 70%
  • C) 90–100% (all paths tested)
  • D) Coverage doesn't matter, just pass tests

Quiz Answers

  1. C - RED (write failing test) → GREEN (make it pass) → REFACTOR (improve code). This is the fundamental TDD cycle.

  2. B - In TDD, you first write a test that fails (RED), then write the minimum code to make it pass (GREEN).

  3. C - Error cases test defensive programming with invalid inputs. Nominal tests happy paths; boundary tests limits; equivalence partitions ranges.

  4. B - Characterization tests document the current behavior of legacy code, enabling safe refactoring.

  5. C - ASPICE SWE.4 expects comprehensive coverage (90–100%) with all paths tested. Lower coverage indicates insufficient verification.

Score Interpretation:

  • 5/5: Excellent TDD understanding
  • 3-4/5: Good foundation, practice with the code examples
  • 1-2/5: Re-read the chapter, try implementing TDD on a simple function