1.0: Thinking Like a Systems Engineer

Key Terms

Key terms used in this tutorial (see Appendix G and Appendix H for the complete glossary):

  • V-Model: Verification and validation lifecycle model pairing each development phase with a corresponding test phase
  • SYS.1-5: ASPICE system engineering processes covering requirements through qualification
  • HARA: Hazard Analysis and Risk Assessment — the ISO 26262 process for identifying hazards and assigning ASILs
  • STPA: System-Theoretic Process Analysis — a hazard analysis method based on systems theory
  • ADR: Architecture Decision Record — document capturing an architectural decision, its context, and rationale
  • SyRS: System Requirements Specification — the primary work product of SYS.2
  • SAD: System Architecture Document — the primary work product of SYS.3
  • HIL: Hardware-in-the-Loop — testing with real hardware and simulated environment
  • PID: Proportional-Integral-Derivative — a control algorithm common in embedded systems
  • FMEA: Failure Modes and Effects Analysis — systematic method for identifying potential failures

Purpose of This Tutorial

For Engineers New to Systems Engineering

Audience: Software engineers transitioning to systems engineering roles, junior engineers learning ASPICE

Purpose: Develop systems thinking mindset for safety-critical embedded systems development

What You'll Learn:

  1. Systems Mindset: How systems engineers think differently from software engineers
  2. Requirements Engineering: Best practices for eliciting, analyzing, and managing requirements
  3. Architecture Decisions: How to make and document architectural trade-offs
  4. Traceability: Maintaining end-to-end traceability throughout the development lifecycle

Prerequisites: Familiarity with basic software development concepts (version control, testing). Prior experience with embedded systems is helpful but not required. Complete Part I (Chapters 1-5) for ASPICE fundamentals before starting this tutorial.

Why This Matters:

  • Systems engineering prevents costly late-stage failures (requirements gaps, integration issues)
  • ASPICE compliance requires systems engineering rigor (SYS = Systems Engineering processes, SWE = Software Engineering processes)
  • Safety-critical systems demand holistic thinking (ISO 26262, IEC 62304)

Systems Engineering vs Software Engineering

Different Perspectives

The following diagram contrasts how systems engineers and software engineers view the same development challenge, highlighting the difference between requirements-driven thinking and implementation-driven thinking.

[Diagram: Systems Thinking]

Key Difference:

  • Software Engineer: Thinks in code (functions, classes, algorithms)
  • Systems Engineer: Thinks in requirements (needs, constraints, interfaces)

The V-Model Perspective

Systems Engineer's View of Development

The following diagram shows the V-Model from a systems engineer's perspective, emphasizing the left-side activities (requirements, architecture, design) and their traceability links to the right-side verification activities.

[Diagram: V-Model Practice]

Systems Engineer's Role:

  • Left Side (Requirements): Ensure completeness, consistency, feasibility
  • Right Side (Verification): Ensure each level verifies its requirements
  • Traceability: Maintain links across all levels (SYS → SWE → Code → Tests)

Software Engineer's Role:

  • Bottom Levels: Implement units (SWE.3), write tests (SWE.4)
  • Middle Levels: Integrate components (SWE.5)

Core Systems Engineering Principles

1. Requirements-Driven Development

Principle: Every design decision traces back to a requirement

Bad Example (solution-first thinking):

Engineer: "I think we should use a Kalman filter for sensor fusion."
Manager: "Why?"
Engineer: "Because it's a good algorithm."
Manager: "But what requirement does it satisfy?"
Engineer: "Uh... it makes the sensor data more accurate?"

[WRONG] Problem: No clear requirement, no measurable criteria

Good Example (requirements-first thinking):

Requirement [SYS-089]: The system shall fuse radar and camera data to achieve
obstacle detection accuracy ≥95% in 90% of operational conditions.

Engineer: "We need a sensor fusion algorithm. Let me evaluate options:
  - Option A: Simple averaging (accuracy: 85%, latency: 5ms, cost: €0)
  - Option B: Kalman filter (accuracy: 95%, latency: 20ms, cost: €0)
  - Option C: ML-based fusion (accuracy: 98%, latency: 50ms, cost: €50k ML infra)

Given requirement [SYS-089] (≥95% accuracy), Option B (Kalman filter) is the
minimum viable solution. Option C exceeds the requirement at high cost. Recommendation: Option B."

[CORRECT] Correct: Requirement-driven, quantified trade-offs
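The trade study above can be sketched in code: filter options by the requirement threshold, then pick the cheapest compliant one. A minimal sketch; the option data and the [SYS-089] threshold come from the text, while the `select_option` helper is a hypothetical illustration, not a real tool.

```python
REQUIRED_ACCURACY = 0.95  # from [SYS-089]: obstacle detection accuracy >= 95%

# Candidate fusion algorithms with the figures quoted in the text
options = [
    {"name": "A: Simple averaging", "accuracy": 0.85, "latency_ms": 5,  "cost_eur": 0},
    {"name": "B: Kalman filter",    "accuracy": 0.95, "latency_ms": 20, "cost_eur": 0},
    {"name": "C: ML-based fusion",  "accuracy": 0.98, "latency_ms": 50, "cost_eur": 50_000},
]

def select_option(options, required_accuracy):
    """Return the cheapest (then fastest) option that meets the requirement."""
    compliant = [o for o in options if o["accuracy"] >= required_accuracy]
    if not compliant:
        raise ValueError("No option satisfies the requirement")
    return min(compliant, key=lambda o: (o["cost_eur"], o["latency_ms"]))

chosen = select_option(options, REQUIRED_ACCURACY)
print(chosen["name"])  # -> B: Kalman filter, the minimum viable solution
```

The point is not the code itself but the shape of the reasoning: the requirement defines the acceptance threshold, and the selection rule (cheapest, then fastest) is an explicit, reviewable policy rather than a gut feeling.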


2. Think in Interfaces, Not Implementations

Principle: Define what components do (interfaces), not how (implementation)

Bad Example (implementation-focused):

System Requirement: "The ACC ECU shall use a PID controller with Kp=1.5, Ki=0.2,
Kd=0.1 to control vehicle speed."

[WRONG] Problem: Overly prescriptive (specifies algorithm, parameters), limits design freedom

Good Example (interface-focused):

System Requirement [SYS-045]: "The ACC ECU shall maintain vehicle speed within
±2 km/h of set speed under normal driving conditions (flat road, no obstacles)."

Verification: Test on proving ground, measure speed deviation over 10-minute drive.

[CORRECT] Correct: Specifies behavior (speed control accuracy), allows implementation flexibility

How to Think in Interfaces:

  1. Define inputs/outputs: What data goes in? What comes out?
  2. Define constraints: Latency, accuracy, error handling
  3. Leave implementation open: Let designers choose best algorithm
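The three steps above can be sketched as an abstract interface plus a verification check: the interface pins down inputs, outputs, and the [SYS-045] tolerance, while concrete controllers stay free to use PID, MPC, or anything else. All class and function names here are illustrative assumptions, not part of any real ACC codebase.

```python
from abc import ABC, abstractmethod

SPEED_TOLERANCE_KMH = 2.0  # from [SYS-045]: hold set speed within +/-2 km/h

class SpeedController(ABC):
    """Interface: defines WHAT the controller does, not HOW."""

    @abstractmethod
    def command(self, set_speed_kmh: float, actual_speed_kmh: float) -> float:
        """Return a throttle command in [-1.0, 1.0]."""

def meets_sys_045(set_speed_kmh: float, measured_speeds_kmh: list) -> bool:
    """Verification check: every measured speed stays within the tolerance."""
    return all(abs(v - set_speed_kmh) <= SPEED_TOLERANCE_KMH
               for v in measured_speeds_kmh)

class SimpleProportionalController(SpeedController):
    """One possible implementation; a PID controller would also conform."""

    def command(self, set_speed_kmh, actual_speed_kmh):
        error = set_speed_kmh - actual_speed_kmh
        return max(-1.0, min(1.0, 0.1 * error))  # clamp to actuator range
```

Swapping `SimpleProportionalController` for a tuned PID later changes nothing upstream: the requirement, the interface, and the verification check all stay the same.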

3. Manage Complexity Through Decomposition

Principle: Break complex systems into manageable subsystems

Example: ACC System Decomposition

Level 1 (System Level): The following diagram shows the ACC system decomposed into its major subsystems, with external interfaces to sensors, actuators, and other ECUs clearly identified.

[Diagram: Interface Management]

Level 2 (Software Architecture): This diagram drills down into the software architecture, showing how system-level requirements are allocated to individual software components.

[Diagram: Requirements Allocation]

Benefit: Each module can be developed, tested, and verified independently
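That benefit can be made concrete with a tiny sketch: each subsystem sits behind its own interface, so it can be stubbed and tested without the others. The subsystem names follow the ACC example in the text; the functions themselves are hypothetical simplifications.

```python
def perception(radar_m: float, camera_m: float) -> float:
    """Sensing subsystem: fuse two range readings into one obstacle distance."""
    return (radar_m + camera_m) / 2.0

def planning(obstacle_distance_m: float, safe_gap_m: float = 30.0) -> str:
    """Planning subsystem: decide an action from the fused distance alone."""
    return "BRAKE" if obstacle_distance_m < safe_gap_m else "CRUISE"

# Each module is verified independently, with its neighbors stubbed out:
assert perception(50.0, 54.0) == 52.0        # sensing tested alone
assert planning(20.0) == "BRAKE"             # planning tested with a stub value
# ...and only then integrated end-to-end (the SYS.4 activity):
assert planning(perception(50.0, 54.0)) == "CRUISE"
```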


4. Balance Conflicting Stakeholder Needs

Principle: Engineering is about trade-offs (cost, performance, safety, schedule)

Example Conflict:

  • Customer (OEM): Wants high accuracy (98% obstacle detection)
  • Project Manager: Budget constraint (€50k ML infrastructure too expensive)
  • Safety Engineer: Must achieve ASIL-B (no false negatives, safety-critical)
  • Software Team: Prefers simple algorithm (easier to verify, faster development)

Systems Engineer's Role:

  1. Elicit all constraints: Budget, safety, schedule, performance
  2. Quantify trade-offs: Accuracy vs cost vs complexity
  3. Propose solution: Meets minimum requirements at lowest risk/cost
  4. Document decision: Architecture Decision Record (ADR)

Example ADR Excerpt:

# ADR-007: Sensor Fusion Algorithm Selection

## Decision
Use Kalman filter (Option B) for sensor fusion.

## Rationale
- Meets accuracy requirement: 95% (requirement: ≥95%) [PASS]
- Latency acceptable: 20ms (requirement: ≤50ms) [PASS]
- Cost: €0 (no additional infrastructure) [PASS]
- Complexity: Moderate (well-understood algorithm, easier verification than ML)
- Safety: ASIL-B achievable (deterministic, testable)

## Alternatives Rejected
- **Option A (Simple averaging)**: Only 85% accuracy (does not meet requirement) [FAIL]
- **Option C (ML fusion)**: 98% accuracy but €50k cost, exceeds budget [FAIL]

Key Skills for Systems Engineers

Technical Skills

  1. Requirements Analysis:

    • Elicit needs from stakeholders (interviews, workshops)
    • Translate needs → quantified requirements
    • Detect ambiguities, gaps, conflicts
  2. Architecture Design:

    • Decompose system into subsystems
    • Define interfaces (APIs, messages, protocols)
    • Evaluate trade-offs (performance, cost, safety)
  3. Traceability Management:

    • Maintain links: Stakeholder needs → System requirements → Software requirements → Code → Tests
    • Use tools: DOORS, Jama Connect, Excel (for small projects) - see Chapter 13 for detailed tool guidance
  4. Verification Planning:

    • Define test strategy (unit, integration, system, acceptance)
    • Allocate requirements to test levels
    • Plan test environment (HIL, proving ground, field trials)
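Traceability management in particular lends itself to simple automation. A minimal sketch of a gap check over SYS → SWE → test links; the requirement and test-case IDs are illustrative, and in practice the links would be exported from a tool such as DOORS or Jama Connect rather than hand-written.

```python
# Downstream links, e.g. exported from a requirements management tool
sys_to_swe = {
    "SYS-045": ["SWE-112", "SWE-113"],  # speed control accuracy
    "SYS-089": ["SWE-201"],             # sensor fusion accuracy
}
swe_to_test = {
    "SWE-112": ["TC-501"],
    "SWE-113": ["TC-502", "TC-503"],
    # SWE-201 has no test case yet -> a traceability gap
}

def find_untested(sys_to_swe, swe_to_test):
    """Return SWE requirements that no test case covers."""
    return sorted(
        swe
        for links in sys_to_swe.values()
        for swe in links
        if not swe_to_test.get(swe)
    )

print(find_untested(sys_to_swe, swe_to_test))  # -> ['SWE-201']
```

Running such a check on every baseline turns "maintain traceability" from a manual audit into a continuous, automatable gate.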

Soft Skills

  1. Stakeholder Management:

    • Negotiate requirements with customers, safety engineers, software teams
    • Manage conflicting priorities
  2. Communication:

    • Write clear, unambiguous requirements
    • Document architecture decisions (ADRs)
    • Present trade-offs to management
  3. Critical Thinking:

    • Challenge assumptions ("Is this requirement necessary?")
    • Ask "why" questions (5 Whys technique)
    • Think holistically (how does this affect the whole system?)

Systems Engineering in ASPICE Context

ASPICE Processes for Systems Engineers

| Process | Systems Engineer Role | Deliverables |
|---------|-----------------------|--------------|
| SYS.2 | Define system requirements | System Requirements Specification (SyRS) |
| SYS.3 | Design system architecture | System Architecture Document (SAD) |
| SYS.4 | Integrate system components | Integration test plan, test results |
| SYS.5 | Qualify system | System test report, acceptance criteria |
| SWE.1 | Review software requirements | Ensure SWE requirements trace to SYS |
| SWE.2 | Review software architecture | Ensure SW architecture implements SYS architecture |

Key Responsibility: Ensure consistency across system and software levels

Collaboration Handoff Points: Systems engineers hand off to software teams at these transitions:

  • After the SYS.2 requirements baseline, for SWE.1
  • After the SYS.3 architecture, for SWE.2 design
  • During SYS.4/SYS.5 integration, for verification coordination

Establish a formal handoff review at each transition.


Learning Path

Recommended Steps to Develop Systems Thinking

Stage 1: Learn the Basics (1–3 months)

  • Read this book (Parts I-II: ASPICE fundamentals, processes)
  • Study the ACC ECU case study in Chapter 25 for practical examples
  • Practice writing requirements (use templates from Chapter 32.01)

Stage 2: Practice on Small Projects (3–6 months)

  • Take ownership of one subsystem (e.g., CAN communication module)
  • Write system requirements for that subsystem
  • Define interfaces with other subsystems
  • Create traceability matrix (SYS → SWE)

Stage 3: Lead a Small Project (6–12 months)

  • Act as systems engineer for a feature (e.g., ACC speed control)
  • Elicit requirements from stakeholders
  • Design architecture, document ADRs
  • Coordinate with software, hardware, safety teams

Stage 4: Mentorship (12+ months)

  • Review requirements written by others
  • Conduct architecture reviews
  • Mentor junior engineers

Summary

Systems Engineering Mindset:

  1. Requirements-Driven: Every decision traces to a requirement
  2. Interface-Focused: Define what, not how
  3. Decomposition: Break complexity into manageable parts
  4. Trade-Off Thinking: Balance cost, performance, safety, schedule
  5. Holistic View: Consider entire system, not just software

Key Skills: Requirements analysis, architecture design, traceability, verification planning, stakeholder management

ASPICE Role: Ensure consistency across system (SYS.2-5) and software (SWE.1-2) levels

Next: The Systems Engineering Mindset (33.01) — Deep dive into thinking patterns and practical scenarios