2.0: Thinking Like a Software Engineer

Purpose of This Tutorial

For Engineers Transitioning to Software Engineering

Audience: Systems engineers learning software implementation, and junior software engineers entering safety-critical development

Purpose: Develop software craftsmanship mindset for ASPICE-compliant embedded systems

What You'll Learn:

  1. Clean Code Principles: Writing maintainable, readable C code for safety-critical systems
  2. Test-Driven Development: TDD methodology adapted for embedded systems with ASPICE
  3. Code Review Excellence: Effective code reviews that catch defects early
  4. CI/CD Mastery: Building robust automated pipelines for ASPICE projects

Why This Matters:

  • Clean code reduces defect density (industry average: 5–10 defects/KLOC; best practice: 1–2 defects/KLOC)
  • TDD improves test coverage (typical: 60–70%; with TDD: 90–100%)
  • Code reviews catch 60–90% of defects before testing (IBM and Microsoft studies)
  • CI/CD reduces integration time (manual: days; automated: minutes)

Software Engineering vs Systems Engineering

Complementary Perspectives

The following diagram contrasts the systems engineer's top-down approach (requirements to architecture to components) with the software engineer's bottom-up approach (code to functions to algorithms to tests).

Software Engineering Thinking

Key Difference:

  • Systems Engineer: Thinks in requirements → architecture → components
  • Software Engineer: Thinks in code → functions → algorithms → tests

Both are Essential:

  • Systems engineer defines what (speed control ±2 km/h)
  • Software engineer implements how (PID controller with specific gains)

The Software Engineering Mindset

Core Principles

1. Code is Read More Than Written

Bad Code (write-once mentality):

void f(int* d, int s) {
    for(int i=0;i<s;i++){
        if(d[i]<0)d[i]=0;
        if(d[i]>100)d[i]=100;
    }
}

[FAIL] Problems:

  • Cryptic names: What is f? What is d? What is s?
  • No documentation
  • Hard to test (no clear contract)

Good Code (read-optimized):

/**
 * @brief Clamp sensor values to valid range [0, 100]
 * @implements [SWE-045-7] Sensor Data Validation
 * @param[in,out] sensor_data Array of sensor values to clamp
 * @param[in] num_sensors Number of sensors in array
 * @pre sensor_data != NULL, num_sensors > 0
 * @post All values in [0, 100]
 */
void ClampSensorData(int* sensor_data, int num_sensors) {
    const int MIN_VALID = 0;
    const int MAX_VALID = 100;

    for (int i = 0; i < num_sensors; i++) {
        if (sensor_data[i] < MIN_VALID) {
            sensor_data[i] = MIN_VALID;
        }
        if (sensor_data[i] > MAX_VALID) {
            sensor_data[i] = MAX_VALID;
        }
    }
}

[PASS] Benefits:

  • Self-documenting name: ClampSensorData explains purpose
  • Constants: MIN_VALID, MAX_VALID (no magic numbers)
  • Doxygen header: Links to requirement, pre/postconditions
  • Readable: Future engineer understands in 10 seconds
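
The contract in the Doxygen header above can be exercised directly in a minimal assert-based unit check. This is a sketch using plain assert.h; a real project would use a unit test framework:

```c
#include <assert.h>

/* Function under test, repeated here so the example is self-contained */
void ClampSensorData(int* sensor_data, int num_sensors) {
    const int MIN_VALID = 0;
    const int MAX_VALID = 100;

    for (int i = 0; i < num_sensors; i++) {
        if (sensor_data[i] < MIN_VALID) {
            sensor_data[i] = MIN_VALID;
        }
        if (sensor_data[i] > MAX_VALID) {
            sensor_data[i] = MAX_VALID;
        }
    }
}

/* Check the documented postcondition: all values end up in [0, 100] */
void TestClampSensorData(void) {
    int data[] = { -5, 0, 50, 100, 150 };
    ClampSensorData(data, 5);
    assert(data[0] == 0);    /* below range, clamped up  */
    assert(data[2] == 50);   /* in range, untouched      */
    assert(data[4] == 100);  /* above range, clamped down */
}
```

Note that the clear contract (named function, documented pre/postconditions) is exactly what makes this test possible; the cryptic `f(d, s)` version offers nothing to test against.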

Principle: Write code for the next engineer (who might be you in six months)


2. Make It Work, Make It Right, Make It Fast

Wrong Sequence (premature optimization):

Engineer: "I spent 2 weeks optimizing the Kalman filter to run in 5ms."
Manager: "Does it work correctly?"
Engineer: "Well... I haven't written tests yet, but it's FAST!"
Manager: "Write tests first. Optimization without correctness is useless."

Right Sequence:

Step 1: Make It Work
- Implement feature, write tests, verify correctness
- Don't worry about performance yet

Step 2: Make It Right
- Refactor: Clean code, remove duplication, improve naming
- Ensure 100% test coverage (for ASIL-B code)

Step 3: Make It Fast (only if needed)
- Profile: Is performance actually a problem?
- Optimize: Focus on bottlenecks (not the whole codebase)
- Verify: Run tests again (ensure optimization didn't break correctness)
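
Step 3's "profile first" can be sketched with a coarse wall-clock harness using clock() from <time.h>. The names here are hypothetical; on a real ECU you would read a hardware cycle counter or use a profiler instead:

```c
#include <time.h>

/* Placeholder workload; stands in for e.g. a Kalman filter update */
static volatile long g_profile_sink;
static void WorkUnderTest(void) {
    long sum = 0;
    for (long i = 0; i < 100000; i++) {
        sum += i;
    }
    g_profile_sink = sum;  /* volatile sink keeps the loop from being optimized away */
}

/* Average execution time in milliseconds over 'iterations' calls */
static double MeasureAverageMs(void (*fn)(void), int iterations) {
    clock_t start = clock();
    for (int i = 0; i < iterations; i++) {
        fn();
    }
    clock_t end = clock();
    return 1000.0 * (double)(end - start) / CLOCKS_PER_SEC / (double)iterations;
}
```

Only if the measured time exceeds the budget (e.g. a 50 ms requirement) is optimization effort justified.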

Example: ACC PID Controller

Week 1: Make It Work
- Implemented PID controller, basic unit tests
- Speed control: ±5 km/h (requirement: ±2 km/h) [WARN] Not meeting requirement yet

Week 2: Make It Right
- Fixed bug in integral term (windup issue)
- Added 15 unit tests (boundary, overflow, negative values)
- Speed control: ±1.8 km/h [PASS] Meets requirement

Week 3: Make It Fast (optional)
- Requirement: ≤50ms, actual: 35ms → No optimization needed
- Spent week on next feature instead

Principle: Correctness first, performance second (if needed)


3. Test Everything, Trust Nothing

Defensive Programming for Safety-Critical Systems

Assumption-Based Code (dangerous in ASIL-B):

int ACC_SetSpeed(int speed_kmh) {
    g_set_speed = speed_kmh;  // Assume speed is valid
    return 0;
}

[FAIL] Problems:

  • No input validation (what if speed_kmh = -50 or speed_kmh = 500?)
  • Silent failure (returns 0 even if invalid)

Defensive Code (safety-critical):

/**
 * @brief Set target speed for ACC system
 * @implements [SWE-045-8] Set Speed Command
 * @param[in] speed_kmh Desired speed in km/h
 * @return 0 = success, -1 = invalid speed
 * @safety_class ASIL-B
 */
int ACC_SetSpeed(int speed_kmh) {
    /* Requirement [SWE-045-8]: Valid speed range 30-150 km/h */
    const int MIN_SPEED = 30;
    const int MAX_SPEED = 150;

    /* Input validation */
    if (speed_kmh < MIN_SPEED || speed_kmh > MAX_SPEED) {
        /* Log error for diagnostics */
        Log_Error(ERROR_INVALID_SPEED, speed_kmh);
        return -1;  /* Failure */
    }

    /* Set speed only if valid */
    g_set_speed = speed_kmh;
    return 0;  /* Success */
}

[PASS] Benefits:

  • Input validation (reject invalid speeds)
  • Error logging (diagnostics can trace failures)
  • Clear return codes (caller knows if successful)

Principle: Validate all inputs, check all returns, log all errors
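
"Check all returns" applies to the caller as well. The sketch below is self-contained: the HMI_OnSetSpeedRequest name and the Log_Error stub are illustrative, not from the real codebase:

```c
#include <assert.h>
#include <stdio.h>

#define ERROR_INVALID_SPEED 0x01

static int g_set_speed = 0;

/* Stub logger for illustration; a real system would write to DTC memory */
static void Log_Error(int error_code, int value) {
    (void)error_code;
    (void)value;
}

int ACC_SetSpeed(int speed_kmh) {
    const int MIN_SPEED = 30;
    const int MAX_SPEED = 150;

    if (speed_kmh < MIN_SPEED || speed_kmh > MAX_SPEED) {
        Log_Error(ERROR_INVALID_SPEED, speed_kmh);
        return -1;  /* Failure */
    }

    g_set_speed = speed_kmh;
    return 0;  /* Success */
}

/* Caller checks the return code instead of assuming success */
void HMI_OnSetSpeedRequest(int requested_kmh) {
    if (ACC_SetSpeed(requested_kmh) != 0) {
        printf("Set speed %d km/h rejected\n", requested_kmh);
    }
}
```

A rejected command leaves g_set_speed unchanged, so the system keeps operating on the last valid value rather than a corrupt one.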


4. Refactor Ruthlessly (But Safely)

What is Refactoring?

Definition: Improving code structure without changing behavior

Example: Extracting a Function

Before (long function, hard to test):

void ACC_ControlLoop(void) {
    /* Read sensors */
    float radar_distance = CAN_ReadRadar();
    float camera_distance = Ethernet_ReadCamera();

    /* Sensor fusion (10 lines of Kalman filter math) */
    float fused_distance = /* ... Kalman filter calculation ... */;

    /* Calculate target speed */
    float safe_distance = g_vehicle_speed * 2.0;  /* 2 seconds */
    float target_speed = (fused_distance < safe_distance) ?
        g_vehicle_speed - 5.0 : g_set_speed;

    /* Control output */
    if (target_speed < g_vehicle_speed) {
        BrakeActuator_Set(/* ... */);
    } else {
        ThrottleActuator_Set(/* ... */);
    }
}

[FAIL] Problems:

  • 50+ lines (too long)
  • Mixes 4 responsibilities (read sensors, fuse, calculate, actuate)
  • Hard to unit test (requires real CAN, Ethernet, actuators)

After (refactored into small, testable functions):

void ACC_ControlLoop(void) {
    /* Read and fuse sensor data */
    float fused_distance = SensorFusion_GetObstacleDistance();

    /* Calculate target speed */
    float target_speed = ACC_CalculateTargetSpeed(fused_distance);

    /* Apply control output */
    ACC_ApplyControl(target_speed);
}

float ACC_CalculateTargetSpeed(float obstacle_distance) {
    /* Pure function: Easy to unit test (no I/O dependencies) */
    const float FOLLOWING_TIME = 2.0f;  /* seconds */
    float safe_distance = g_vehicle_speed * FOLLOWING_TIME;

    if (obstacle_distance < safe_distance) {
        /* Reduce speed to maintain safe distance */
        return g_vehicle_speed - 5.0f;
    } else {
        /* Maintain set speed */
        return g_set_speed;
    }
}

[PASS] Benefits:

  • Small functions (5-15 lines each)
  • Single responsibility (each function does one thing)
  • Easy to test (ACC_CalculateTargetSpeed is pure function, no mocks needed)
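
Because ACC_CalculateTargetSpeed performs no I/O, a unit test only needs to set up state and assert on the return value — no mocks for CAN, Ethernet, or actuators. In this self-contained sketch the two globals are stubbed locally:

```c
#include <assert.h>

/* Stubbed state for the test: set directly instead of via the vehicle bus */
static float g_vehicle_speed = 0.0f;
static float g_set_speed = 0.0f;

float ACC_CalculateTargetSpeed(float obstacle_distance) {
    const float FOLLOWING_TIME = 2.0f;  /* seconds */
    float safe_distance = g_vehicle_speed * FOLLOWING_TIME;

    if (obstacle_distance < safe_distance) {
        return g_vehicle_speed - 5.0f;  /* reduce speed to open the gap */
    }
    return g_set_speed;                 /* maintain set speed */
}

void TestCalculateTargetSpeed(void) {
    g_vehicle_speed = 100.0f;           /* safe distance = 200 m */
    g_set_speed = 120.0f;

    /* Obstacle closer than safe distance: slow down */
    assert(ACC_CalculateTargetSpeed(150.0f) == 95.0f);

    /* Obstacle beyond safe distance: keep the set speed */
    assert(ACC_CalculateTargetSpeed(250.0f) == 120.0f);
}
```

This is the payoff of the refactoring: the before version could only be exercised on target hardware, while the extracted function runs in a host-side test in microseconds.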

Refactoring Strategy:

  1. Write tests first (ensure behavior unchanged)
  2. Extract function (move code to new function)
  3. Run tests (verify behavior unchanged)
  4. Repeat for each responsibility

Principle: Improve structure continuously, with tests as safety net


ASPICE and Software Engineering

How SWE.3 and SWE.4 Relate to Clean Code

ASPICE Requirements:

  • SWE.3 BP2: "Software units are designed according to coding standards"
  • SWE.4 BP1: "Develop unit test cases according to software unit verification strategy"

Clean Code Practices Support ASPICE:

| Clean Code Practice   | ASPICE Benefit                                   | Process      |
|-----------------------|--------------------------------------------------|--------------|
| Naming conventions    | Easier code reviews (SUP.2)                      | SWE.3        |
| Short functions       | Higher testability (SWE.4)                       | SWE.3        |
| Single responsibility | Easier traceability (1 function = 1 requirement) | SWE.1, SWE.3 |
| Defensive programming | Fewer defects (SUP.9)                            | SWE.3        |
| Unit tests            | 100% coverage for ASIL-B (SWE.4)                 | SWE.4        |
| Refactoring           | Maintainability (MAN.3)                          | SWE.3        |

Example: Traceability Improved by Clean Code

Bad Code (multiple responsibilities, unclear traceability):

void ACC_DoEverything(void) {
    /* Implements [SWE-045-1], [SWE-045-2], [SWE-045-3]... */
    /* 200 lines of mixed logic */
}

[FAIL] Problem: Impossible to trace which code implements which requirement

Good Code (one function = one requirement):

/**
 * @implements [SWE-045-1] Obstacle Distance Calculation
 */
float SensorFusion_GetObstacleDistance(void) { /* ... */ }

/**
 * @implements [SWE-045-2] Safe Distance Calculation
 */
float ACC_CalculateSafeDistance(float vehicle_speed) { /* ... */ }

/**
 * @implements [SWE-045-3] Target Speed Calculation
 */
float ACC_CalculateTargetSpeed(float obstacle_distance) { /* ... */ }

[PASS] Benefit: Clear 1:1 mapping (requirement → function → test)


Learning Path

Recommended Steps to Master Software Engineering

Stage 1: Learn Clean Code Basics (1–3 months)

  • Read Clean Code by Robert C. Martin
  • Practice refactoring: Take one messy function, refactor to clean code
  • Study MISRA C:2012 (required for automotive)
  • Review code examples in this chapter

Stage 2: Practice TDD (3–6 months)

  • Write tests first, implementation second
  • Start with pure functions (no I/O dependencies)
  • Aim for 90–100% coverage (required for ASIL-B)
  • Use Google Test (C++) or Unity (C)
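
The test-first loop in Stage 2 can be sketched with plain assert.h; Unity's TEST_ASSERT_* macros play the same role. Saturate is a made-up example function, not from the ACC codebase:

```c
#include <assert.h>

/* Step 1: declare the interface and write the test first */
static int Saturate(int value, int min, int max);

static void TestSaturate(void) {
    assert(Saturate(-5, 0, 100) == 0);    /* below min: saturate to min */
    assert(Saturate(50, 0, 100) == 50);   /* in range: pass through     */
    assert(Saturate(150, 0, 100) == 100); /* above max: saturate to max */
}

/* Step 2: write the minimal implementation that makes the test pass */
static int Saturate(int value, int min, int max) {
    if (value < min) {
        return min;
    }
    if (value > max) {
        return max;
    }
    return value;
}
```

Pure functions like this are the easiest place to start: the test needs no hardware, no mocks, and runs on the host in milliseconds.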

Stage 3: Master Code Reviews (6–9 months)

  • Review others' code (catch MISRA violations, logic errors)
  • Accept feedback on your code (learn from mistakes)
  • Use checklists (ASPICE review checklist, safety review)

Stage 4: Build CI/CD Pipelines (9–12 months)

  • Automate builds (CMake, Make)
  • Automate tests (CTest, Google Test)
  • Automate checks (cppcheck, PC-lint, gcov)
  • Integrate with GitLab CI / GitHub Actions

Stage 5: Mentorship (12+ months)

  • Mentor junior engineers
  • Conduct training sessions
  • Contribute to coding standards

Summary

Software Engineering Mindset:

  1. Code is Read More Than Written: Write for the next engineer
  2. Make It Work, Make It Right, Make It Fast: Correctness before optimization
  3. Test Everything, Trust Nothing: Defensive programming for safety
  4. Refactor Ruthlessly: Improve structure continuously with tests as safety net

ASPICE Alignment: Clean code practices support SWE.3, SWE.4, SUP.2, SUP.9

Key Skills: Clean code, TDD, code review, CI/CD automation

Next: Clean Code Principles (34.01) — Practical guidelines for writing maintainable embedded C code