1.4: SYS.4 System Integration and Integration Verification


Learning Objectives

After reading this section, you will be able to:

  • Integrate system elements according to integration strategy
  • Develop and execute system integration tests
  • Apply AI for test generation and result analysis
  • Verify interface compliance
  • Design AI-assisted HIL test campaigns
  • Implement intelligent regression test selection for integration cycles

Process Definition

Purpose

SYS.4 Purpose: To integrate system elements and verify that the integrated system elements are consistent with the system architecture.

Key Distinction: SYS.4 verifies that integrated elements work together correctly according to the system architecture. This is fundamentally different from SYS.5, which verifies that the complete integrated system satisfies the system requirements. SYS.4 answers "Do the integrated elements interact as the architecture specifies?" while SYS.5 answers "Does the integrated system meet its requirements?"

Outcomes

| Outcome | Description |
|---------|-------------|
| O1 | Verification measures for system integration are specified |
| O2 | The sequence and preconditions for the integration of the system elements are defined |
| O3 | Verification measures are selected for system integration |
| O4 | Verification measures for system integration are performed and verification results recorded |
| O5 | Consistency is ensured and bidirectional traceability is established between verification measures and the system architecture |
| O6 | Bidirectional traceability is established between verification results and verification measures |
| O7 | Results are summarized and communicated to all affected parties |

Relationship to Other Processes

| Related Process | Relationship |
|-----------------|--------------|
| SYS.3 | Provides the system architecture that defines integration structure and interfaces |
| SYS.5 | Receives the fully integrated system for system-level verification |
| SWE.5 | Software integration feeds into system integration as a system element |
| HWE.4 | Hardware verification provides validated hardware elements for system integration |
| SUP.8 | Configuration management tracks element versions used in integration builds |

Base Practices with AI Integration

AI Automation Levels:

  • L1 (Assist): AI suggests, human decides and executes
  • L2 (Collaborate): AI drafts/executes, human reviews and approves
  • L3 (Automate): AI executes autonomously, human monitors results
| BP | Base Practice | AI Level | AI Application | Detail |
|----|---------------|----------|----------------|--------|
| BP1 | Specify verification measures for system integration | L1-L2 | Test specification | AI derives integration test cases from architecture interfaces, signal lists, and timing constraints. Generates test specifications including preconditions, stimuli, and expected responses. |
| BP2 | Select verification measures | L2 | Coverage optimization | AI analyzes the interface coverage matrix and recommends optimal test subsets. Identifies redundant tests and coverage gaps against architecture elements. |
| BP3 | Integrate system elements and perform integration verification | L2-L3 | Automated execution | AI orchestrates HIL test execution sequences, monitors real-time signals, and performs automated pass/fail determination against tolerance bands. |
| BP4 | Ensure consistency and establish bidirectional traceability | L2 | Trace generation | AI maintains traceability from architecture interfaces to integration tests to verification results. Flags orphan tests and untested interfaces. |
| BP5 | Summarize and communicate results | L2 | Report generation | AI generates integration test reports with trend analysis, defect classification, and coverage summaries for stakeholder communication. |

System Integration Test Strategy

AI-Assisted Strategy Design

Note: The integration test strategy must be aligned with the system architecture (SYS.3) and account for hardware availability, build sequences, and interface maturity.

AI Strategy Generation Input:
─────────────────────────────
Architecture: SYS.3 system architecture description
Elements:     HW elements, SW components, external interfaces
Constraints:  Hardware availability timeline, lab access schedule
Risk data:    Historical defect density per interface type

AI Strategy Output:
───────────────────
1. Integration Sequence Recommendation
   - Phase 1: CAN communication stack (highest risk, earliest availability)
   - Phase 2: Sensor signal chain (ADC + filtering + calibration)
   - Phase 3: Actuator control path (PWM + driver + feedback)
   - Phase 4: Cross-functional interfaces (diagnostics, NVM, watchdog)
   - Phase 5: Full system integration (all elements combined)

2. Test Environment Mapping
   - Phases 1-2: SIL sufficient for protocol and logic verification
   - Phase 3: PIL required for timing-critical actuator paths
   - Phases 4-5: HIL mandatory for full electrical interface verification

3. Risk-Based Prioritization
   - CAN ↔ Application interface: HIGH (timing-critical, safety-relevant)
   - Sensor ↔ Application interface: MEDIUM (calibration-dependent)
   - NVM ↔ Application interface: LOW (well-established pattern)

Human Review: Approve sequence, validate hardware availability assumptions

Integration Approaches

| Approach | Description | When to Use | Embedded Example |
|----------|-------------|-------------|------------------|
| Big Bang | All at once | Small, simple systems | Simple sensor module |
| Incremental | One element at a time | Complex systems | ECU with multiple functions |
| Top-Down | High-level first | Interface-heavy systems | Start with CAN interface before drivers |
| Bottom-Up | Low-level first | Hardware-dependent systems | Start with motor drivers before control logic |
| Sandwich | Both directions | Large, layered systems | Middleware ECU with drivers and apps |

Automotive Example: BCM Integration

The diagram below shows the BCM integration phases, illustrating how system elements are progressively combined and verified from individual components to the fully integrated system.

Integration Phases


Test Environment Progression

SIL → PIL → HIL → VIL

The following diagram illustrates the test environment progression from Software-in-the-Loop through Vehicle-in-the-Loop, showing increasing hardware fidelity at each stage.

Test Strategy


Test Case Generation

AI-Powered Test Case Derivation from System Requirements

Principle: Integration test cases are derived from the system architecture interface specifications, not from system requirements directly. Each architectural interface should have at least one integration test verifying data exchange, timing, and error behavior.

test_case_generation:
  input:
    architecture: "sys3_bcm_architecture.xml"
    interface_spec: "bcm_interface_definitions.xlsx"
    signal_database: "bcm_signals.dbc"
    timing_constraints: "sys_timing_budget.yaml"

  ai_analysis:
    - Parse interface definitions for all system element pairs
    - Extract signal ranges, scaling factors, and validity conditions
    - Identify timing constraints per interface path
    - Cross-reference with known failure patterns from defect database
    - Generate boundary value combinations for multi-signal interfaces

  generated_tests:
    normal_operation:
      - "SYS-INT-001: CAN lock command → actuator activation (nominal)"
      - "SYS-INT-002: Sensor input → application state update (all channels)"
      - "SYS-INT-003: Diagnostic request → response frame (all services)"

    timing_verification:
      - "SYS-INT-020: End-to-end latency CAN Rx to actuator command"
      - "SYS-INT-021: Sensor sampling to application data availability"
      - "SYS-INT-022: Watchdog trigger to safe state transition"

    error_injection:
      - "SYS-INT-040: CAN bus-off recovery and re-integration"
      - "SYS-INT-041: Sensor open-circuit detection and substitution"
      - "SYS-INT-042: Power supply brownout during actuator operation"

    boundary_conditions:
      - "SYS-INT-060: Maximum CAN bus load with all messages active"
      - "SYS-INT-061: Simultaneous actuator activation current limit"
      - "SYS-INT-062: Temperature extremes with timing verification"

  human_review:
    - Verify completeness of interface coverage
    - Confirm error injection scenarios are physically realizable
    - Approve timing tolerances and pass/fail criteria
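The boundary-condition step above can be sketched in code. The following is a minimal illustration, assuming a hypothetical signal specification with `min`, `max`, and `resolution` fields; real interface definitions would come from the DBC and interface spec files listed as inputs.

```python
from itertools import product

def boundary_values(sig):
    """Return boundary candidates for one interface signal:
    min, min+step, max-step, max."""
    lo, hi = sig["min"], sig["max"]
    step = sig.get("resolution", 1)
    return [lo, lo + step, hi - step, hi]

def generate_boundary_tests(interface):
    """Cross-combine boundary values of all signals on one interface."""
    names = [s["name"] for s in interface]
    combos = product(*(boundary_values(s) for s in interface))
    return [dict(zip(names, c)) for c in combos]

# Hypothetical two-signal interface from the BCM sensor chain
interface = [
    {"name": "door_voltage", "min": 0.0, "max": 5.0, "resolution": 0.1},
    {"name": "temperature", "min": -40, "max": 125, "resolution": 1},
]
tests = generate_boundary_tests(interface)
print(len(tests))  # 4 boundaries per signal -> 16 combinations
```

In practice the human review step would prune physically impossible combinations before the set is released for execution.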

Integration Test Specification

Test Case Example

---
ID: SYS-INT-010
Title: Door Lock CAN Command Integration
Type: Integration Test
Priority: High
Phase: Phase 4 (System Interfaces)
---

## Objective

Verify that a lock command received via the CAN bus correctly activates
the door lock actuators within the specified timing requirements.

## Preconditions

- BCM powered and initialized
- CAN bus connected at 500 kbps
- All four door locks in unlocked position
- HIL environment operational

## Test Steps

| Step | Action | Expected Result |
|------|--------|-----------------|
| 1 | Send CAN message 0x301 [0x01, ...] | Message received |
| 2 | Monitor lock timing | Command within 50ms |
| 3 | Verify actuator signals | All 4 doors commanded |
| 4 | Verify feedback signals | Locked state within 200ms |

## Pass Criteria

- All four doors locked within 200ms of CAN message
- No error DTCs set
- Power consumption within limits

## Traceability

- Verifies: SYS-BCM-010, SYS-BCM-025
- Architecture: CAN Interface → Application → Driver
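The timing criteria in Steps 2 and 4 lend themselves to automated evaluation. A minimal sketch follows, assuming timestamps (in milliseconds) captured by the HIL logging layer; the function and parameter names are illustrative, not from a real framework.

```python
LOCK_CMD_ID = 0x301          # CAN ID from the test specification
CMD_DEADLINE_MS = 50         # Step 2: actuator command within 50 ms
FEEDBACK_DEADLINE_MS = 200   # Step 4: locked state within 200 ms

def evaluate_lock_timing(t_can_rx, t_cmd, t_feedback):
    """Return (verdict, details) for the SYS-INT-010 timing criteria.
    All timestamps are milliseconds relative to test start."""
    cmd_latency = t_cmd - t_can_rx
    fb_latency = t_feedback - t_can_rx
    checks = {
        "cmd_within_50ms": cmd_latency <= CMD_DEADLINE_MS,
        "feedback_within_200ms": fb_latency <= FEEDBACK_DEADLINE_MS,
    }
    verdict = "PASS" if all(checks.values()) else "FAIL"
    return verdict, {"cmd_latency_ms": cmd_latency,
                     "feedback_latency_ms": fb_latency, **checks}

verdict, details = evaluate_lock_timing(t_can_rx=2.0, t_cmd=10.0, t_feedback=185.0)
print(verdict)  # PASS (8 ms command latency, 183 ms feedback latency)
```

The same evaluation function can serve SIL and HIL runs as long as the timestamp source is consistent.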

Hardware-in-the-Loop (HIL) Testing

AI in HIL Test Environments

Note: HIL testing is the primary verification environment for SYS.4 because it allows exercising real hardware interfaces (CAN, LIN, analog/digital I/O) while simulating the vehicle environment. AI enhances HIL testing by automating signal monitoring, anomaly detection, and adaptive test sequencing.

| HIL Capability | Traditional Approach | AI-Enhanced Approach |
|----------------|----------------------|----------------------|
| Signal monitoring | Fixed threshold checks | Adaptive tolerance bands learned from baseline runs |
| Fault injection | Predefined fault scripts | AI-generated fault combinations based on FMEA data |
| Pass/fail determination | Static comparison tables | Pattern-based evaluation with configurable confidence levels |
| Test sequencing | Fixed sequential execution | Risk-adaptive ordering with early termination on critical failures |
| Environment modeling | Static plant models | AI-tuned plant model parameters from real-world data |
| Coverage tracking | Manual spreadsheet updates | Real-time interface coverage dashboard with gap alerts |
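The adaptive tolerance bands learned from baseline runs can be illustrated with a simple statistical sketch: per-sample mean ± k·stdev bands derived from recorded baseline traces. This is one plausible implementation, not a specific tool's algorithm.

```python
import statistics

def learn_tolerance_band(baseline_runs, k=3.0):
    """Derive per-sample tolerance bands (mean ± k·stdev) from
    recorded baseline signal traces of equal length."""
    bands = []
    for samples in zip(*baseline_runs):
        mu = statistics.mean(samples)
        sigma = statistics.stdev(samples)
        bands.append((mu - k * sigma, mu + k * sigma))
    return bands

def check_trace(trace, bands):
    """Return sample indices where the trace leaves the learned band."""
    return [i for i, (v, (lo, hi)) in enumerate(zip(trace, bands))
            if not lo <= v <= hi]

# Hypothetical actuator-current traces from three baseline HIL runs
baseline = [[0.0, 1.2, 2.4, 2.5], [0.0, 1.3, 2.5, 2.5], [0.1, 1.1, 2.4, 2.6]]
bands = learn_tolerance_band(baseline)
print(check_trace([0.0, 1.2, 2.45, 2.55], bands))  # [] -> within band
```

A production implementation would also align traces in time and handle varying run lengths before computing bands.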

HIL Test Configuration Example

HIL Environment: BCM Integration Verification
──────────────────────────────────────────────
Hardware Under Test:  BCM ECU (production sample, Rev C)
HIL Platform:         dSPACE SCALEXIO with DS2211 I/O boards
CAN Interface:        Vector VN1610 (2x CAN channels at 500 kbps)
LIN Interface:        Vector VN1611 (1x LIN channel at 19.2 kbps)
Load Simulation:      4x electronic load modules (actuator emulation)
Sensor Simulation:    8x configurable analog outputs (0-5V)
Power Supply:         Programmable 8-16V with transient injection

AI Monitoring Layer:
  - Real-time signal recording at 1 ms resolution
  - Automated signal quality assessment per test step
  - Anomaly detection on all monitored channels
  - Timing measurement with microsecond precision
  - Automatic screenshot and waveform capture on failure

Test Automation Framework

Architecture for Automated System Integration Testing

Note: The test automation framework bridges the gap between test specifications (BP1) and automated execution (BP3). It should support multiple test environments (SIL, PIL, HIL) with a common test description language.

| Framework Layer | Responsibility | AI Role |
|-----------------|----------------|---------|
| Test Description | Human-readable test specifications in YAML/JSON | AI generates test descriptions from architecture interfaces |
| Test Orchestrator | Sequences test execution, manages preconditions | AI optimizes execution order for efficiency and early defect detection |
| Environment Abstraction | Maps logical signals to physical I/O channels | AI validates signal mapping against architecture model |
| Signal Layer | Reads/writes signals via CAN, LIN, analog, digital | AI monitors signal quality and detects communication errors |
| Evaluation Engine | Compares actual vs. expected results | AI applies tolerance bands and pattern matching for pass/fail |
| Reporting | Generates test reports and coverage matrices | AI summarizes results, highlights trends, classifies defects |
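The environment abstraction layer can be sketched as a signal-mapping lookup that fails loudly on unmapped signals, so that gaps surface before execution rather than mid-test. The channel names and signal identifiers below are hypothetical.

```python
# Logical signal names from the test description mapped to physical
# channels per environment (illustrative names, not a real project).
SIGNAL_MAP = {
    "HIL": {"lock_cmd": ("CAN1", 0x301), "door_fb_fl": ("AI", 3)},
    "SIL": {"lock_cmd": ("model", "LockCmdIn"),
            "door_fb_fl": ("model", "DoorFbFL")},
}

def resolve(env, logical_name):
    """Resolve a logical signal to its physical channel for the given
    environment, raising a clear error on unmapped signals."""
    try:
        return SIGNAL_MAP[env][logical_name]
    except KeyError:
        raise KeyError(f"Signal '{logical_name}' not mapped for {env}") from None

print(resolve("HIL", "lock_cmd"))  # ('CAN1', 769)
```

Keeping the test description in logical signal names is what lets the same test run on SIL, PIL, and HIL without modification.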

Automation Pipeline Configuration

# System integration test automation pipeline
sys_integration_pipeline:
  trigger:
    - integration_build_complete
    - nightly_schedule: "02:00 UTC"

  environment:
    platform: "dSPACE SCALEXIO"
    ecu_image: "${BUILD_ARTIFACT}/bcm_integration.hex"
    signal_db: "config/bcm_signals.dbc"
    plant_model: "models/vehicle_body_v3.2.mdl"

  phases:
    - name: "Flash and Initialize"
      timeout: 120s
      steps:
        - flash_ecu: "${ecu_image}"
        - wait_boot: 5s
        - verify_dtc_clear: true

    - name: "CAN Interface Integration"
      test_suite: "tests/can_integration/*.yaml"
      timeout: 600s
      abort_on_critical: true

    - name: "Sensor Chain Integration"
      test_suite: "tests/sensor_integration/*.yaml"
      timeout: 900s
      abort_on_critical: true

    - name: "Actuator Path Integration"
      test_suite: "tests/actuator_integration/*.yaml"
      timeout: 900s
      abort_on_critical: true

    - name: "Full System Integration"
      test_suite: "tests/full_integration/*.yaml"
      timeout: 1800s

  reporting:
    format: [junit_xml, html, pdf]
    ai_summary: true
    coverage_matrix: true
    trend_comparison: "last_5_builds"

AI Integration for Testing

L2: Test Case Generation

Input: System requirements + interface specifications

AI Output:
─────────
Generated Integration Test Cases for SYS-BCM-010:

1. SYS-INT-010-01: Normal lock via button
   - Precondition: All doors unlocked
   - Action: Press lock button
   - Expected: All doors locked < 200ms

2. SYS-INT-010-02: Normal lock via CAN
   - Precondition: All doors unlocked
   - Action: Send CAN lock command
   - Expected: All doors locked < 200ms

3. SYS-INT-010-03: Partial lock (1 door stuck)
   - Precondition: RL door mechanically blocked
   - Action: Press lock button
   - Expected: 3 doors locked, DTC set for RL

4. SYS-INT-010-04: Rapid lock/unlock
   - Precondition: All doors unlocked
   - Action: Lock, wait 100ms, unlock, wait 100ms, lock
   - Expected: Final state locked, no errors

Human Review: Verify completeness, add domain-specific cases

L3: Automated Test Execution

The diagram below shows how AI-generated test cases flow through automated execution environments, with results fed back for coverage analysis and regression tracking.

AI Test Generation


Defect Analysis

AI-Powered Root Cause Analysis of Integration Failures

Note: Integration defects frequently involve timing, protocol, or interface mismatch issues that are difficult to diagnose from test results alone. AI accelerates root cause analysis by correlating failure patterns across test runs and signal traces.

AI Defect Analysis Report:
──────────────────────────

Failed Test: SYS-INT-010-02 (CAN lock command integration)

Symptom: Actuator feedback delayed by 85ms (expected < 200ms, measured 285ms)

AI Correlation Analysis:
  - Signal trace shows CAN Rx timestamp normal (2ms)
  - Application processing delay normal (8ms)
  - Actuator command output delayed by 75ms
  - Delay correlates with high CPU load period (NVM write in progress)

Pattern Match:
  - Similar failure in Build #142 (NVM write blocking, resolved by priority inversion fix)
  - 3 other tests show marginal timing in same execution window

Root Cause Hypothesis:
  1. [HIGH confidence] NVM write operation blocking actuator task
  2. [MEDIUM confidence] Task priority configuration mismatch after RTOS update
  3. [LOW confidence] Hardware timer drift under thermal stress

Suggested Investigation:
  - Check RTOS task priority table against architecture specification
  - Review NVM write scheduling vs. actuator control cycle
  - Capture CPU load trace during next test execution

Human Action: Validate hypothesis, authorize investigation path

Defect Classification Matrix

| Defect Category | Typical Root Cause | AI Detection Method | Resolution Approach |
|-----------------|--------------------|---------------------|---------------------|
| Timing violation | Task scheduling, bus load | Statistical timing analysis across runs | RTOS configuration review, bus load optimization |
| Signal mismatch | Scaling factor, endianness | Value range comparison against interface spec | Signal database update, calibration correction |
| Protocol error | State machine, sequence | CAN/LIN protocol decode and validation | Protocol stack configuration, state machine fix |
| Missing response | Initialization order, timeout | Timeout pattern and boot sequence analysis | Startup sequence adjustment, timeout tuning |
| Intermittent failure | Race condition, EMC | Flaky test detection and environmental correlation | Synchronization fix, shielding improvement |
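The flaky test detection mentioned for intermittent failures can be approximated with a simple flip-count score over a test's recent run history; production tools use richer models, but the underlying idea is the same.

```python
def flakiness_score(history):
    """Fraction of PASS/FAIL status flips in a test's recent run
    history -- a simple flaky-test indicator (0.0 = stable)."""
    flips = sum(1 for a, b in zip(history, history[1:]) if a != b)
    return flips / max(len(history) - 1, 1)

# Hypothetical run history for one integration test
history = ["PASS", "FAIL", "PASS", "PASS", "FAIL", "PASS"]
print(round(flakiness_score(history), 2))  # 0.8 -> highly flaky
```

Tests above a flakiness threshold would be routed to environmental correlation (temperature, bus load, power transients) rather than treated as ordinary failures.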

Regression Testing

Intelligent Test Selection and Prioritization

Note: Full integration test suites can take hours to execute on HIL platforms. AI-assisted regression test selection reduces execution time while maintaining confidence in integration quality.

| Selection Strategy | Description | AI Contribution |
|--------------------|-------------|-----------------|
| Change-based | Select tests affected by modified system elements | AI maps code/config changes to affected interfaces and tests |
| Risk-based | Prioritize tests for high-risk interfaces | AI ranks interfaces by historical defect density and safety impact |
| Time-budget | Select maximum coverage within time constraint | AI solves coverage optimization given execution time budget |
| Failure-history | Prioritize previously failing tests | AI weights tests by recent failure frequency and fix proximity |
| Dependency-based | Include tests for downstream interfaces | AI traces element dependencies through architecture model |

Regression Optimization Example

AI Regression Analysis:
───────────────────────

Build Delta: CAN driver updated (v2.3.1 → v2.4.0), NVM module unchanged

Full Suite:     248 integration tests, estimated 4.2 hours
AI Recommended:  67 integration tests, estimated 1.1 hours

Selection Breakdown:
  - CAN interface tests (all):       34 tests  [change-based]
  - CAN-dependent path tests:        18 tests  [dependency-based]
  - Previously flaky tests:           8 tests  [failure-history]
  - Safety-critical path smoke tests: 7 tests  [risk-based]

Excluded (with justification):
  - NVM integration tests (42):  No changes to NVM module or interfaces
  - LIN interface tests (38):    No shared code path with CAN update
  - Sensor chain tests (56):     Independent signal path, no coupling

Coverage Impact:
  - Interface coverage: 94% of affected interfaces (vs. 100% full suite)
  - Risk coverage: 100% of HIGH-risk interfaces maintained
  - Estimated defect escape probability: < 2%

Human Decision: Approve reduced suite or request full execution
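The change-based portion of such a selection can be sketched as a dependency lookup from changed modules to test suites, plus an always-run safety smoke set. The module and suite names below are illustrative.

```python
# Which integration test suites exercise which modules (illustrative).
TEST_DEPENDENCIES = {
    "can_driver":   ["can_integration", "full_integration"],
    "nvm":          ["nvm_integration", "full_integration"],
    "lin_driver":   ["lin_integration", "full_integration"],
    "sensor_chain": ["sensor_integration", "full_integration"],
}

def select_suites(changed_modules, always_run=("smoke",)):
    """Change-based selection: run every suite that depends on a
    changed module, plus an always-run safety smoke set."""
    selected = set(always_run)
    for mod in changed_modules:
        selected.update(TEST_DEPENDENCIES.get(mod, []))
    return sorted(selected)

print(select_suites(["can_driver"]))
# ['can_integration', 'full_integration', 'smoke']
```

Risk-based and failure-history weighting would then reorder the selected suites; the human decision remains the gate on any reduced scope.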

HITL Protocol for System Integration Testing

Human-in-the-Loop Protocol: System integration testing involves safety-relevant decisions about interface behavior and system-level interactions. The following protocol defines mandatory human checkpoints.

| Decision Point | AI Role | Human Role | Approval Required |
|----------------|---------|------------|-------------------|
| Integration strategy and sequence | AI proposes sequence based on risk and availability analysis | Engineer reviews, adjusts for project constraints | Yes -- strategy sign-off |
| Test specification review | AI generates test cases from architecture interfaces | Test engineer validates completeness and correctness | Yes -- test spec approval |
| Test environment configuration | AI validates signal mapping and model parameters | Engineer verifies HIL setup matches architecture | Yes -- environment release |
| Test execution monitoring | AI executes tests, monitors signals, flags anomalies | Engineer monitors critical test phases, responds to anomalies | No -- routine execution |
| Pass/fail determination | AI applies evaluation criteria, reports results | Engineer reviews all failures and borderline passes | Yes -- for failures and anomalies |
| Root cause analysis | AI correlates failures with historical patterns and signal data | Engineer validates hypothesis, determines true root cause | Yes -- defect disposition |
| Regression scope selection | AI recommends test subset based on change impact | Engineer approves reduced scope or mandates full suite | Yes -- regression scope approval |
| Integration milestone sign-off | AI generates integration report with coverage summary | Integration manager reviews report and approves milestone | Yes -- milestone gate |

Escalation Criteria

Mandatory human escalation is triggered when any of the following conditions are detected during automated test execution:

| Condition | Action |
|-----------|--------|
| Safety-critical interface test failure | Immediate halt, notify safety engineer |
| More than 3 failures in a single integration phase | Pause execution, request human assessment |
| Timing violation exceeding 2x the specified tolerance | Flag as potential architecture issue, escalate to system architect |
| Previously passing test now failing (regression) | Classify severity, block build promotion if HIGH |
| Anomalous signal behavior not covered by pass/fail criteria | Log for human review, do not auto-classify |
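A sketch of how these criteria might be checked against executed test results follows; the result fields (`safety_critical`, `timing_ratio`, `previously_passing`) are assumed names, not from a real framework.

```python
def escalation_actions(results):
    """Map the escalation criteria onto a list of test result dicts
    and return the triggered actions (field names are hypothetical)."""
    actions = []
    failures = [r for r in results if r["status"] == "FAIL"]
    if any(r.get("safety_critical") for r in failures):
        actions.append("HALT: notify safety engineer")
    if len(failures) > 3:
        actions.append("PAUSE: request human assessment")
    for r in failures:
        if r.get("timing_ratio", 0) > 2.0:  # measured / specified tolerance
            actions.append(f"ESCALATE to system architect: {r['id']}")
        if r.get("previously_passing"):
            actions.append(f"REGRESSION: classify severity for {r['id']}")
    return actions

results = [
    {"id": "SYS-INT-020", "status": "FAIL", "timing_ratio": 2.4,
     "safety_critical": False, "previously_passing": True},
    {"id": "SYS-INT-001", "status": "PASS"},
]
print(escalation_actions(results))
```

Wiring such a check into the orchestrator's per-phase hook keeps escalation deterministic rather than dependent on the AI evaluation layer.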

Work Products

Note: Work Product IDs follow ASPICE 4.0 standard numbering.

| WP ID | Work Product | Content | AI Role |
|-------|--------------|---------|---------|
| 08-60 | Verification Measure | Specifications for integration verification including test cases, preconditions, and expected results | AI generates draft specifications from architecture interfaces |
| 06-50 | Integration Sequence Instruction | Defined integration sequence, preconditions, and build configuration per phase | AI proposes sequence based on risk analysis and dependency graph |
| 03-50 | Verification Measure Data | Raw verification data including signal traces, timing measurements, and CAN logs | AI indexes and tags raw data for traceability |
| 15-52 | Verification Results | Recorded pass/fail status with evidence links | AI performs initial pass/fail evaluation, human confirms |
| 15-08 | Build Record | Element versions, configuration items, and build identifiers per integration step | AI extracts version info from build system, verifies consistency |
| 13-51 | Consistency Evidence | Traceability matrix: architecture interfaces to tests to results | AI maintains and validates traceability links |
| 13-52 | Communication Evidence | Integration test summary reports for stakeholders | AI generates reports with trend analysis and recommendations |

Tool Integration

Vector CANoe, dSPACE, and NI with AI

Note: AI integration with established automotive test tools enhances automation without replacing proven measurement and simulation capabilities. The AI layer sits above the tool-specific interfaces.

| Tool | Primary Use in SYS.4 | AI Integration Point |
|------|----------------------|----------------------|
| Vector CANoe | CAN/LIN bus simulation and monitoring | AI generates CAPL test scripts from interface specs; automated protocol compliance checks; signal anomaly detection in bus traces |
| Vector vTESTstudio | Test case management and execution | AI maps test specifications to executable test sequences; coverage gap analysis against architecture interfaces |
| dSPACE SCALEXIO | Real-time HIL simulation platform | AI tunes plant model parameters; adaptive fault injection scheduling; automated signal quality assessment |
| dSPACE ControlDesk | HIL experiment management and instrumentation | AI configures measurement layouts from test specifications; automated parameter sweep management |
| NI VeriStand | Real-time test and simulation | AI generates stimulus profiles from requirement constraints; automated test sequencing with adaptive timing |
| NI TestStand | Test execution management | AI optimizes test sequence ordering; parallel test scheduling; intelligent retry on intermittent failures |
| DOORS / Polarion | Requirements and traceability management | AI maintains bidirectional traces between requirements, architecture, test specs, and results |

Tool Chain Integration Architecture

Tool Chain Data Flow:
─────────────────────
Requirements DB (DOORS)
    ↓ [AI: trace extraction]
Architecture Model (Enterprise Architect / Rhapsody)
    ↓ [AI: interface enumeration]
Test Specification (vTESTstudio / TestStand)
    ↓ [AI: test script generation]
HIL Execution (CANoe + SCALEXIO / VeriStand)
    ↓ [AI: signal monitoring, pass/fail evaluation]
Result Database (custom / ALM tool)
    ↓ [AI: trend analysis, defect correlation]
Traceability Report (auto-generated)
    ↓ [Human: review and approve]
Integration Milestone Gate

Implementation Checklist

Usage: This checklist supports project teams in establishing SYS.4 process compliance with AI integration. Each item maps to the relevant base practice and outcome.

| # | Checklist Item | BP | Outcome | Status |
|---|----------------|----|---------|--------|
| 1 | System architecture (SYS.3) baselined and available | -- | Prereq | [ ] |
| 2 | Integration strategy defined (approach, sequence, phases) | BP1 | O2 | [ ] |
| 3 | Integration test environment identified and configured (SIL/PIL/HIL) | BP1 | O1 | [ ] |
| 4 | AI tool qualified for test generation (if used for safety-relevant tests) | BP1 | O1 | [ ] |
| 5 | Integration test specifications created for all architecture interfaces | BP1 | O1 | [ ] |
| 6 | Test specifications reviewed and approved (HITL gate) | BP1 | O1 | [ ] |
| 7 | Verification measures selected per release scope | BP2 | O3 | [ ] |
| 8 | HIL signal mapping validated against architecture model | BP3 | O4 | [ ] |
| 9 | Integration build procedure documented with element versions | BP3 | O2 | [ ] |
| 10 | Integration tests executed and raw data recorded | BP3 | O4 | [ ] |
| 11 | Pass/fail results determined and failures investigated | BP3 | O4 | [ ] |
| 12 | Traceability established: architecture → test spec → result | BP4 | O5, O6 | [ ] |
| 13 | Coverage analysis completed (interface coverage, requirement coverage) | BP4 | O5 | [ ] |
| 14 | Gaps identified and addressed or justified | BP4 | O5 | [ ] |
| 15 | Integration test report generated and communicated | BP5 | O7 | [ ] |
| 16 | Regression test scope defined for subsequent integration cycles | BP2 | O3 | [ ] |
| 17 | All defects logged with root cause and disposition | BP3 | O4 | [ ] |
| 18 | Integration milestone sign-off obtained | BP5 | O7 | [ ] |

Summary

SYS.4 System Integration and Integration Verification:

  • AI Level: L2-L3 (significant automation possible)
  • AI Value: Test generation, execution automation, result analysis
  • Human Essential: Strategy decisions, failure analysis, milestone sign-off
  • Key Outputs: Integration verification specification, verification results, coverage matrix
  • Test Environments: SIL → PIL → HIL → VIL progression
  • Critical Success Factor: Architecture-driven test derivation with AI assistance and human oversight