1.4: Traceability in Practice

Why Traceability Matters

ASPICE Requirement

ASPICE SWE.1 BP5 (BP = Base Practice): "Establish bidirectional traceability between system requirements and software requirements."

What This Means:

  • Forward Traceability: System req → Software req → Code → Tests
  • Backward Traceability: Tests → Code → Software req → System req

Why It's Mandatory:

  1. Impact Analysis: If a requirement changes, you immediately know which code and tests are affected
  2. Verification: Prove that every requirement is tested (no gaps)
  3. Audit Trail: Assessors (e.g., TÜV for ASPICE/ISO 26262, FDA in medical) verify traceability during audits
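
The impact-analysis use case can be sketched with a toy forward-link graph. This is a minimal sketch; the IDs are illustrative, not from a real project:

```python
# Toy forward-link graph: artifact -> directly derived artifacts.
# IDs are illustrative only.
LINKS = {
    "SYS-045": ["SWE-045-1", "SWE-045-2"],
    "SWE-045-1": ["ACC_GetObstacleDistance", "TC-SWE-045-1-1"],
    "SWE-045-2": ["ACC_GetClosingSpeed", "TC-SWE-045-2-1"],
}

def impacted(artifact):
    """Return every artifact reachable from `artifact` via forward links."""
    seen, stack = set(), [artifact]
    while stack:
        for child in LINKS.get(stack.pop(), []):
            if child not in seen:
                seen.add(child)
                stack.append(child)
    return sorted(seen)

# If SYS-045 changes, all of these need re-review and re-test:
print(impacted("SYS-045"))
# → ['ACC_GetClosingSpeed', 'ACC_GetObstacleDistance', 'SWE-045-1',
#    'SWE-045-2', 'TC-SWE-045-1-1', 'TC-SWE-045-2-1']
```

Backward traceability is the same walk over the inverted graph: start from a failing test and climb back up to the system requirement.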

The Traceability Chain

End-to-End Example (ACC ECU)

The following chain traces a single ACC feature from stakeholder need through system requirement, software requirement, implementation, and test case, demonstrating complete bidirectional traceability:

Stakeholder need → [SYS-045] → [SWE-045-1] → ACC_GetObstacleDistance() → TC-SWE-045-1-1

Traceability Links:

  • [SWE-045-1] implements [SYS-045]
  • ACC_GetObstacleDistance() @implements [SWE-045-1]
  • TC-SWE-045-1-1 @verifies [SWE-045-1]

Traceability in Code

Using @implements Tags

C Code Example:

/**
 * @brief Calculate obstacle distance from radar sensor
 * @implements [SWE-045-1] Obstacle Distance Calculation
 * @safety_class ASIL-B
 * @param[out] distance_m Obstacle distance in meters
 * @return 0 = success, -1 = sensor invalid
 */
int ACC_GetObstacleDistance(float* distance_m) {
    /* Implementation */
}

Unit Test Example (Google Test):

/**
 * @test TC-SWE-045-1-1: Typical value (5 meters)
 * @verifies [SWE-045-1] Obstacle Distance Calculation
 */
TEST_F(ACC_ControllerTest, GetObstacleDistance_TypicalValue_5m) {
    /* Test code */
}

Benefit: Parse code with grep or scripts to auto-generate a traceability matrix.


Traceability Matrix

Format (Excel or Markdown)

Forward Traceability (Requirement → Implementation):

| System Req | Software Req | Implementation | Test Case | Status |
|------------|--------------|----------------|-----------|--------|
| SYS-045 | SWE-045-1 | acc_controller.c:45 | TC-SWE-045-1-1 | [PASS] Verified |
| SYS-045 | SWE-045-2 | acc_controller.c:78 | TC-SWE-045-2-1 | [PASS] Verified |
| SYS-045 | SWE-045-3 | acc_controller.c:120 | TC-SWE-045-3-1 | [PASS] Verified |
| SYS-046 | SWE-046-1 | can_driver.c:56 | TC-SWE-046-1-1 | [WARN] Test failing |

Backward Traceability (Test → Requirement):

| Test Case | Verifies | Requirement | Implementation | Result |
|-----------|----------|-------------|----------------|--------|
| TC-SWE-045-1-1 | SWE-045-1 | Distance calculation | acc_controller.c:45 | [PASS] PASS |
| TC-SWE-045-1-2 | SWE-045-1 | Boundary (0m) | acc_controller.c:45 | [PASS] PASS |
| TC-SWE-045-1-3 | SWE-045-1 | Invalid sensor | acc_controller.c:45 | [PASS] PASS |
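
The backward view does not have to be maintained by hand: it can be derived by inverting the forward rows. A minimal sketch (row contents are illustrative, mirroring the tables above):

```python
# Forward rows: (system req, software req, implementation, test case).
forward = [
    ("SYS-045", "SWE-045-1", "acc_controller.c:45", "TC-SWE-045-1-1"),
    ("SYS-045", "SWE-045-2", "acc_controller.c:78", "TC-SWE-045-2-1"),
]

def invert(rows):
    """Derive backward rows (test -> software req -> implementation)."""
    return [(test, swe, impl) for _sys, swe, impl, test in rows]

# Print the backward table body in Markdown
for test, req, impl in invert(forward):
    print(f"| {test} | {req} | {impl} |")
```

Keeping only one direction as the source of truth and deriving the other eliminates a whole class of matrix inconsistencies.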

Automated Traceability Generation

Script to Parse @implements Tags

Python Script:

#!/usr/bin/env python3
import re
import glob

def extract_traceability(code_files):
    """Parse @implements tags from C sources and build traceability records."""
    traceability = []

    for file_path in code_files:
        with open(file_path, 'r', encoding='utf-8') as f:
            content = f.read()

        # Match "@implements [REQ-ID]" inside a Doxygen header, then the
        # function definition that follows the closing "*/"
        pattern = r'@implements\s+\[([^\]]+)\].*?\*/\s*([\w\*]+)\s+(\w+)\s*\('
        matches = re.findall(pattern, content, re.DOTALL)

        for req_id, _return_type, func_name in matches:
            # Find test cases verifying this requirement
            test_cases = find_test_cases(req_id)

            traceability.append({
                'requirement': req_id,
                'function': func_name,
                'file': file_path,
                'test_cases': test_cases,
                'status': 'Verified' if test_cases else 'Not Tested'
            })

    return traceability

def find_test_cases(req_id):
    """Search test files for @verifies tags matching req_id."""
    test_files = glob.glob('tests/test_*.cpp')
    test_cases = []

    for test_file in test_files:
        with open(test_file, 'r', encoding='utf-8') as f:
            content = f.read()

        # Capture the "@test TC-..." ID when "@verifies [req_id]" appears
        # in the same Doxygen header; (?:(?!\*/).)*? refuses to cross a
        # closing "*/", so tags from different headers are never paired
        pattern = rf'@test\s+(TC-[\w-]+)(?:(?!\*/).)*?@verifies\s+\[{re.escape(req_id)}\]'
        test_cases.extend(re.findall(pattern, content, re.DOTALL))

    return test_cases

# Usage
code_files = glob.glob('src/*.c')
traceability = extract_traceability(code_files)

# Generate a Markdown table
print("| Requirement | Function | File | Test Cases | Status |")
print("|-------------|----------|------|------------|--------|")
for item in traceability:
    tests = ', '.join(item['test_cases']) if item['test_cases'] else 'None'
    status = '[PASS]' if item['status'] == 'Verified' else '[FAIL]'
    print(f"| {item['requirement']} | {item['function']} | {item['file']} | {tests} | {status} |")

Output:

| Requirement | Function | File | Test Cases | Status |
|-------------|----------|------|------------|--------|
| SWE-045-1 | ACC_GetObstacleDistance | acc_controller.c | TC-SWE-045-1-1, TC-SWE-045-1-2 | [PASS] |
| SWE-045-2 | ACC_GetClosingSpeed | acc_controller.c | TC-SWE-045-2-1 | [PASS] |
| SWE-046-1 | CAN_ReadMessage | can_driver.c | None | [FAIL] |

Benefit: Auto-generated traceability matrix updated on every commit.
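
On top of the generated matrix, a minimal CI gate can fail the build whenever a requirement has no verifying test. A sketch, assuming records in the shape produced by extract_traceability() (the 'requirement' and 'test_cases' fields):

```python
def check_gaps(traceability):
    """Return requirement IDs that have no verifying test case."""
    return [item["requirement"] for item in traceability if not item["test_cases"]]

# Example records in the shape produced by the parser script
matrix = [
    {"requirement": "SWE-045-1", "test_cases": ["TC-SWE-045-1-1"]},
    {"requirement": "SWE-046-1", "test_cases": []},
]

gaps = check_gaps(matrix)
if gaps:
    print(f"[FAIL] Untested requirements: {', '.join(gaps)}")
    # In a CI job, raise SystemExit(1) here so the pipeline step fails
else:
    print("[PASS] All requirements have tests")
```

Wiring this into the pipeline turns the traceability matrix from a report into an enforced quality gate.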


Traceability Tools

Tool Comparison

| Tool | Cost | Best For | Pros | Cons |
|------|------|----------|------|------|
| IBM DOORS | €5k-50k/user | Large projects (1000+ req) | Industry standard, ASPICE-compliant | Expensive, steep learning curve |
| Jama Connect | €2k-10k/user | Medium projects (100-1000 req) | Modern UI, web-based | Still expensive |
| Excel | €0 | Small projects (<100 req) | Simple, everyone knows it | Manual updates, error-prone |
| Markdown + Git | €0 | Very small projects (<50 req) | Version control built-in | No automation |
| Custom Script | €0 (DIY) | Any size (if automated) | Tailored to project | Requires maintenance |

Recommendation:

  • Large projects (€5M+, 5+ years): DOORS (industry standard, assessor-friendly)
  • Medium projects (€1–5M, 2–5 years): Jama Connect (modern, cost-effective)
  • Small projects (<€1M, <2 years): Excel + custom scripts (low cost, flexible)

Maintaining Traceability

Workflow Integration

1. Requirements Phase (SWE.1):

1. Requirements engineer writes requirements in DOORS
2. Each requirement is assigned a unique ID: [SWE-XXX-Y]
3. Requirements are baselined (frozen for development)

2. Implementation Phase (SWE.3):

1. Software engineer implements function
2. Adds @implements [SWE-XXX-Y] tag in Doxygen header
3. Commits code to Git
4. CI/CD pipeline runs traceability script, updates matrix

3. Testing Phase (SWE.4):

1. Test engineer writes unit test
2. Adds @verifies [SWE-XXX-Y] tag in test header
3. Commits test to Git
4. CI/CD updates traceability matrix (marks requirement as "Verified")

4. Review Phase (SUP.2):

1. Reviewer checks traceability matrix
2. Verifies 100% coverage: All requirements → Code → Tests
3. If gaps found (requirement without code, or code without test), reject PR

Traceability Gap Analysis

Coverage Report

Example Report:

## Traceability Coverage Report

### Forward Traceability (Requirements → Code)
- **Total Requirements**: 34
- **Implemented**: 32 (94%)
- **Not Implemented**: 2 (6%)

**Missing Implementations**:
- [SWE-089] Sensor Fusion (no code found, in progress)
- [SWE-123] Diagnostics (no code found, deferred to v1.1)

---

### Backward Traceability (Code → Requirements)
- **Total Functions**: 45
- **Traced**: 42 (93%)
- **Orphan (no @implements tag)**: 3 (7%)

**Orphan Functions** (no requirement link):
- `ACC_Helper_CalculateAverage()` (Line 145, acc_controller.c)
  - Justification: Helper function, not directly linked to requirement
  - Action: Add comment explaining purpose, or link to parent requirement
- `Debug_PrintCANMessage()` (Line 567, can_driver.c)
  - Justification: Debug utility, not production code
  - Action: Exclude from traceability (add #ifdef DEBUG guard)

---

### Verification Traceability (Requirements → Tests)
- **Total Requirements**: 34
- **Tested**: 34 (100%) [PASS]
- **Not Tested**: 0 (0%)

**Coverage**: 100% (all requirements have tests)
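
The percentages in such a report can be computed mechanically from the matrix records. A sketch, assuming the record shape used earlier ('test_cases' as a possibly empty list; the sample numbers are illustrative):

```python
def coverage(traceability, key="test_cases"):
    """Percentage of records whose `key` list is non-empty, rounded to 0.1."""
    if not traceability:
        return 0.0
    covered = sum(1 for item in traceability if item.get(key))
    return round(100.0 * covered / len(traceability), 1)

matrix = [
    {"requirement": "SWE-045-1", "test_cases": ["TC-SWE-045-1-1"]},
    {"requirement": "SWE-045-2", "test_cases": ["TC-SWE-045-2-1"]},
    {"requirement": "SWE-089", "test_cases": []},
]
print(f"Tested: {coverage(matrix)}%")  # 2 of 3 records have tests → 66.7%
```

Running the same function over implementation links (e.g., a 'function' field) yields the forward-traceability figure of the report.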

Best Practices

1. Automate Traceability

Manual Traceability (Excel):

  • [FAIL] Error-prone (typos, forgotten updates)
  • [FAIL] Time-consuming (~30 minutes per update)
  • [FAIL] Out-of-date (developers don't update Excel)

Automated Traceability (Script + CI/CD):

  • [PASS] Always up-to-date (runs on every commit)
  • [PASS] Fast (1 second to regenerate matrix)
  • [PASS] Reliable (no human error)

Action: Write a Python script (~100 lines) to parse @implements tags and run it in CI/CD.


2. Enforce Traceability in Code Review

Code Review Checklist:

☐ All functions have Doxygen headers?
☐ All public functions have @implements tags?
☐ All test cases have @verifies tags?
☐ Traceability matrix updated (auto-generated)?
☐ No orphan requirements (100% implementation coverage)?

Git Hook (pre-commit):

#!/bin/bash
# .git/hooks/pre-commit
# Heuristic check: every new function definition in the staged diff should
# come with a new @implements tag. The regex over-matches (e.g., counts
# "return foo(...)"), so treat a failure as a prompt to review, not proof.

func_count=$(git diff --cached -- '*.c' '*.h' | grep -cE '^\+[a-zA-Z_][a-zA-Z0-9_]* +[a-zA-Z_][a-zA-Z0-9_]*\(')
tag_count=$(git diff --cached -- '*.c' '*.h' | grep -c '@implements')

if [ "$func_count" -gt "$tag_count" ]; then
    echo "[FAIL] Error: new function(s) without a matching @implements tag"
    echo "Add @implements [SWE-XXX] to the Doxygen header"
    exit 1
fi

echo "[PASS] Traceability check passed"

3. Review Traceability Matrix Regularly

Frequency: Weekly (during sprint planning) or at milestones

Review Questions:

  • Are all requirements implemented? (100% forward traceability?)
  • Are all requirements tested? (100% verification traceability?)
  • Any orphan code? (code without requirement link?)
  • Any orphan tests? (tests without requirement link?)
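
Orphan-code detection is the mirror image of gap detection: scan the sources for functions whose Doxygen headers carry no @implements tag. A simplified sketch (the regex handles the common header-plus-definition shape, not every C declaration):

```python
import re

# Doxygen header followed by a function definition; `doc` is the comment body
HEADER_AND_FUNC = re.compile(
    r'/\*\*(?P<doc>.*?)\*/\s*(?:[\w\*]+\s+)+(?P<name>\w+)\s*\(', re.DOTALL)

def find_orphans(source):
    """Return function names whose Doxygen header lacks an @implements tag."""
    return [m.group("name") for m in HEADER_AND_FUNC.finditer(source)
            if "@implements" not in m.group("doc")]

code = """
/** @implements [SWE-045-1] */
int ACC_GetObstacleDistance(float* d) { return 0; }

/** Helper, no requirement link */
static float ACC_Helper_CalculateAverage(void) { return 0.0f; }
"""
print(find_orphans(code))  # → ['ACC_Helper_CalculateAverage']
```

Each reported name then gets the treatment shown in the gap-analysis report: link it to a parent requirement, justify it as a helper, or exclude it from production code.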

Summary

Traceability Chain: Stakeholder need → System requirement → Software requirement → Code → Tests

Traceability Tags: @implements (code → requirement), @verifies (test → requirement)

Traceability Matrix: Forward (requirement → code → test), backward (test → code → requirement)

Tools: DOORS (large projects), Jama (medium projects), Excel + scripts (small projects)

Best Practices: Automate (scripts + CI/CD), enforce (code review checklist), review regularly


Chapter 33 Complete: Thinking Like a Systems Engineer covers systems mindset, requirements practice, architecture decisions, and traceability

Next: Chapter 34 — Thinking Like a Software Engineer (clean code, TDD, code review, CI/CD mastery)