2.6: SWE.6 Software Verification


Process Definition

Purpose

ASPICE PAM v4.0 Official Purpose:

The purpose of the Software Verification process is to ensure that the integrated software is verified to be consistent with the software requirements.

ASPICE Source: PAM v4.0 Section 4.4.6, Lines 1963-1965

Outcomes (ASPICE PAM v4.0)

IMPORTANT: These are the EXACT official outcomes from ASPICE PAM v4.0. SWE.6 has exactly 5 outcomes.

| Outcome | Official Description (PAM v4.0) | AI Support Level |
|---------|---------------------------------|------------------|
| O1 | Verification measures are specified for software verification of the software based on the software requirements. | L1-L2 (AI drafts, human validates) |
| O2 | Verification measures are selected according to the release scope considering criteria, including criteria for regression verification. | L1 (AI suggests, human decides) |
| O3 | The integrated software is verified using the selected verification measures and the results of software verification are recorded. | L2 (AI executes, human validates) |
| O4 | Consistency and bidirectional traceability are established between verification measures and software requirements; and bidirectional traceability is established between verification results and verification measures. | L2 (AI checks, human validates) |
| O5 | Results of the software verification are summarized and communicated to all affected parties. | L1 (AI drafts, human communicates) |

Note on Terminology: ASPICE PAM v4.0 uses "Software Verification" (not "Software Qualification Test"). The process verifies integrated software against software requirements.

ASPICE Source: PAM v4.0 Section 4.4.6, Lines 1967-1973


Base Practices (ASPICE PAM v4.0)

IMPORTANT: SWE.6 has exactly 5 base practices. The table below uses EXACT PAM v4.0 descriptions.

| BP | Official Base Practice Description (PAM v4.0) | AI Level | AI Application | HITL Required |
|----|-----------------------------------------------|----------|----------------|---------------|
| BP1 | Specify verification measures for software verification. Specify the verification measures for software verification suitable to provide evidence for compliance of the integrated software with the functional and non-functional information in the software requirements, including: techniques, pass/fail criteria, entry/exit criteria, sequence, and infrastructure/environment setup. | L1-L2 | AI suggests verification measures and test strategies, human validates completeness | YES - Human validates test strategy |
| BP2 | Select verification measures. Document the selection of verification measures considering selection criteria including criteria for regression verification. The documented selection of verification measures shall have sufficient coverage according to the release scope. | L1 | AI analyzes coverage and suggests selection, human decides final selection | YES - Human decides scope |
| BP3 | Verify the integrated software. Perform the verification of the integrated software using the selected verification measures. Record the verification results including pass/fail status and corresponding verification measure data. | L2 | AI executes automated tests on HIL/SIL platforms, human validates results | YES - Human validates pass/fail decisions |
| BP4 | Ensure consistency and establish bidirectional traceability. Ensure consistency and establish bidirectional traceability between verification measures and software requirements. Establish bidirectional traceability between verification results and verification measures. | L2 | AI generates traceability matrices and checks consistency, human validates critical links | YES - Human validates traceability |
| BP5 | Summarize and communicate results. Summarize the software verification results and communicate them to all affected parties. | L1 | AI drafts verification reports, human reviews and communicates to stakeholders | YES - Human accountability for communication |

ASPICE Source: PAM v4.0 Section 4.4.6, Lines 1975-2007
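The two traceability directions demanded by BP4 (verification measures ↔ software requirements, and verification results ↔ verification measures) can be checked mechanically. The sketch below is illustrative only: the function name, the dictionary/set data shapes, and the example IDs are assumptions, not part of any PAM-defined interface.

```python
from typing import Dict, List, Set

def check_bidirectional_traceability(
    measures: Dict[str, Set[str]],   # measure ID -> linked SW requirement IDs
    requirements: Set[str],          # all SW requirement IDs in scope
    results: Dict[str, str],         # result ID -> measure ID it records
) -> Dict[str, List[str]]:
    """Report gaps in both BP4 traceability directions (a hypothetical helper)."""
    covered = set().union(*measures.values()) if measures else set()
    return {
        # requirements with no specified verification measure
        "requirements_without_measure": sorted(requirements - covered),
        # measures that do not trace to any known requirement
        "measures_without_requirement": sorted(
            m for m, reqs in measures.items() if not reqs & requirements),
        # measures with no recorded result (forward gap)
        "measures_without_result": sorted(
            set(measures) - set(results.values())),
        # results that point at an unknown measure (backward gap)
        "results_without_measure": sorted(
            r for r, m in results.items() if m not in measures),
    }
```

In practice the inputs would be exported from the requirements and test management tools; a non-empty gap list is a finding for human review, not an automatic verdict.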


Software Verification vs. Earlier Test Levels

Test Level Comparison

| Aspect | SWE.4 Unit Verification | SWE.5 Integration Verification | SWE.6 Software Verification |
|--------|-------------------------|--------------------------------|------------------------------|
| Focus | Code correctness | Interface correctness | Requirement satisfaction |
| Reference | Detailed design | Architecture | Software requirements |
| Scope | Single unit | Multiple units/components | Complete integrated software |
| Environment | Host/target | SIL/PIL | HIL/Target |
| Perspective | Developer | Architect | Customer/QA |
| Traceability | Unit → Design | Component → Architecture | Verification measures → SW requirements |

Key Distinction: SWE.6 verifies the complete integrated software against the software requirements, whereas SWE.4 verifies individual units against the detailed design and SWE.5 verifies their integration against the architecture.


Software Verification Strategy

Verification Environment

The diagram below depicts the Hardware-in-the-Loop (HIL) test setup, showing how the target ECU connects to simulation hardware, I/O interfaces, and the test automation framework.

HIL Test Setup


Software Verification Specification

Verification Measure Template

---
ID: SWE-VM-BCM-001
Title: Door Lock Response Time Verification
Type: Software Verification Measure
Priority: Critical
Requirement: SWE-BCM-103
Environment: HIL
---

## Objective

Verify that the integrated door lock software meets the 10ms timing requirement
for actuator command generation under real operating conditions.

## Test Setup

> **Note**: HIL platform reference is illustrative; actual platform selection is project-specific.

- HIL Platform: (e.g., dSPACE MicroAutoBox or equivalent)
- ECU: BCM production sample
- Plant Model: Door actuator dynamics (validated model)
- Measurement: Test host with 100 µs resolution

## Test Conditions

| Condition | Value |
|-----------|-------|
| Supply voltage | 12V ± 0.5V |
| Temperature | 25°C ± 5°C |
| CAN bus load | 50% |

## Test Procedure

### TC-001: Normal Operation Timing

| Step | Action | Expected | Measured | Status |
|------|--------|----------|----------|--------|
| 1 | Initialize BCM | Ready state | | |
| 2 | Send CAN lock command | Command received | | |
| 3 | Start timing measurement | T0 captured | | |
| 4 | Monitor actuator outputs | Outputs activated | | |
| 5 | Stop timing at last output | T1 captured | | |
| 6 | Calculate: T1 - T0 | ≤ 10ms | | |

### TC-002: High Load Timing

| Step | Action | Expected | Measured | Status |
|------|--------|----------|----------|--------|
| 1 | Configure 80% CAN load | Load active | | |
| 2 | Send CAN lock command | Command received | | |
| 3 | Measure actuator timing | ≤ 10ms | | |

### TC-003: Extended Temperature

| Step | Action | Expected | Measured | Status |
|------|--------|----------|----------|--------|
| 1 | Set chamber to 85°C | Temperature stable | | |
| 2 | Execute lock sequence | Timing ≤ 10ms | | |
| 3 | Set chamber to -40°C | Temperature stable | | |
| 4 | Execute lock sequence | Timing ≤ 10ms | | |

## Pass Criteria

- All timing measurements ≤ 10ms
- All actuator outputs in correct sequence
- No error DTCs set during normal operation

## Traceability

- Requirement: SWE-BCM-103
- System: SYS-BCM-010 (200ms system requirement)
- Forward: SYS.4 System Integration Test
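The pass criteria in the template above lend themselves to a mechanical check after each run. The sketch below is a hypothetical helper, assuming timing samples, an actuator output sequence, and a DTC list read back from the ECU; the function name and parameter shapes are illustrative, not part of the verification measure itself.

```python
from typing import Dict, List, Sequence

def evaluate_pass_criteria(
    timings_ms: Sequence[float],
    actuator_sequence: List[str],
    expected_sequence: List[str],
    dtcs: List[str],
    limit_ms: float = 10.0,   # 10 ms limit from SWE-BCM-103
) -> Dict[str, bool]:
    """Apply the three pass criteria of SWE-VM-BCM-001 (illustrative sketch)."""
    checks = {
        # All timing measurements within the requirement limit
        "timing_ok": all(t <= limit_ms for t in timings_ms),
        # Actuator outputs observed in the specified order
        "sequence_ok": actuator_sequence == expected_sequence,
        # No error DTCs set during normal operation
        "no_dtcs": not dtcs,
    }
    checks["verdict_pass"] = all(checks.values())
    return checks
```

The per-criterion booleans, not just the overall verdict, would go into the recorded verification results (BP3) so a reviewer can see which criterion failed.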

AI-Assisted Software Verification

L2: Coverage Gap Analysis

The following diagram shows how AI identifies gaps in software verification coverage by cross-referencing software requirements against executed verification measures and their results.

Verification Coverage
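The cross-referencing step can be sketched in code. The function below is a simplified illustration, assuming a requirement-to-measure mapping and per-measure verdicts as inputs; the name, status labels, and example IDs are assumptions for the sketch, not tool output.

```python
from typing import Dict, Set

def classify_coverage(
    req_to_measures: Dict[str, Set[str]],  # requirement -> specified measures
    executed: Dict[str, str],              # executed measure -> "PASS"/"FAIL"
) -> Dict[str, str]:
    """Classify each SW requirement's verification coverage (illustrative)."""
    status = {}
    for req, measures in req_to_measures.items():
        ran = measures & set(executed)
        if not measures:
            status[req] = "NO_MEASURE"      # O1 gap: nothing specified
        elif not ran:
            status[req] = "NOT_EXECUTED"    # O2/O3 gap: specified, never run
        elif all(executed[m] == "PASS" for m in ran):
            status[req] = "COVERED"
        else:
            status[req] = "FAILED"
    return status
```

Requirements flagged `NO_MEASURE` or `NOT_EXECUTED` are the coverage gaps a human must disposition before the release scope (BP2) can be called sufficient.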

L2: Result Analysis

This diagram illustrates AI-assisted analysis of software verification results, highlighting pass/fail trends, regression detection, and automated root-cause suggestions.

AI-Assisted Verification Analysis
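Regression detection, one element of the result analysis above, reduces to comparing verdicts across runs. A minimal sketch, assuming two runs keyed by test case ID with "PASS"/"FAIL" verdicts (the function name and data shape are illustrative):

```python
from typing import Dict, List

def detect_regressions(
    previous: Dict[str, str],  # test case -> verdict in the prior run
    current: Dict[str, str],   # test case -> verdict in the current run
) -> List[str]:
    """Flag test cases that passed before but fail now (regression candidates)."""
    return sorted(
        tc for tc, verdict in current.items()
        if verdict == "FAIL" and previous.get(tc) == "PASS"
    )
```

Newly failing cases with no prior history are reported separately in practice; only PASS-to-FAIL transitions are regression candidates, and a human still confirms root cause.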


Test Automation Script

HIL Verification Execution

"""
@file swe_vm_bcm_001.py
@brief HIL verification script for door lock timing verification
@trace SWE-VM-BCM-001
@platform dSPACE MicroAutoBox III
"""

import dspace.api as ds
import time
from typing import List, Tuple
import logging

# Verification measure configuration
CONFIG = {
    "verification_measure_id": "SWE-VM-BCM-001",
    "requirement": "SWE-BCM-103",
    "timing_limit_ms": 10.0,
    "measurement_resolution_us": 100,
    "iterations": 10,
}

class DoorLockTimingVerification:
    """Software verification measure for door lock timing requirement."""

    def __init__(self, hil_connection):
        self.hil = hil_connection
        self.results: List[Tuple[int, float]] = []

    def setup(self) -> bool:
        """Initialize verification environment."""
        logging.info("Setting up verification environment")

        # Configure HIL model
        self.hil.set_parameter("BCM.SupplyVoltage", 12.0)
        self.hil.set_parameter("BCM.Temperature", 25.0)
        self.hil.set_parameter("CAN.BusLoad", 0.5)

        # Enable measurement capture
        self.hil.enable_capture("BCM.ActuatorOutputs", CONFIG["measurement_resolution_us"])

        # Initialize ECU
        self.hil.send_can_message(0x700, [0x01])  # Init command
        time.sleep(0.1)

        # Verify ready state
        status = self.hil.read_variable("BCM.Status")
        return status == "READY"

    def execute_verification_iteration(self, iteration: int) -> Tuple[bool, float]:
        """Execute single verification iteration."""
        logging.info(f"Executing verification iteration {iteration}")

        # Start capture
        self.hil.start_capture()

        # Send lock command via CAN
        self.hil.send_can_message(0x200, [0x01])  # Lock command

        # Wait for completion
        time.sleep(0.020)  # 20ms timeout

        # Stop capture
        capture_data = self.hil.stop_capture()

        # Analyze timing
        t_command = capture_data.get_event_time("CAN_RX_LOCK")
        t_last_actuator = capture_data.get_event_time("ACTUATOR_RR_SET")

        if t_command is None or t_last_actuator is None:
            return False, float('inf')

        timing_ms = (t_last_actuator - t_command) * 1000

        # Check pass/fail
        passed = timing_ms <= CONFIG["timing_limit_ms"]

        return passed, timing_ms

    def run(self) -> dict:
        """Execute complete verification sequence."""
        logging.info(f"Starting verification measure {CONFIG['verification_measure_id']}")

        # Setup
        if not self.setup():
            return {"status": "SETUP_FAILED", "results": []}

        # Execute iterations
        all_passed = True
        for i in range(CONFIG["iterations"]):
            passed, timing = self.execute_verification_iteration(i)
            self.results.append((i, timing))

            if not passed:
                all_passed = False
                logging.warning(f"Iteration {i} failed: {timing:.2f}ms")

        # Generate verification report
        report = {
            "verification_measure_id": CONFIG["verification_measure_id"],
            "requirement": CONFIG["requirement"],
            "status": "PASSED" if all_passed else "FAILED",
            "iterations": CONFIG["iterations"],
            "timing_limit_ms": CONFIG["timing_limit_ms"],
            "results": self.results,
            "min_timing_ms": min(r[1] for r in self.results),
            "max_timing_ms": max(r[1] for r in self.results),
            "avg_timing_ms": sum(r[1] for r in self.results) / len(self.results),
        }

        logging.info(f"Verification completed: {report['status']}")
        return report


# Main execution
if __name__ == "__main__":
    logging.basicConfig(level=logging.INFO)

    # Connect to HIL
    hil = ds.connect("MicroAutoBox_BCM")

    # Run verification measure
    verification = DoorLockTimingVerification(hil)
    result = verification.run()

    # Output result
    print(f"\nVerification Result: {result['status']}")
    print(f"Timing: {result['avg_timing_ms']:.2f}ms avg "
          f"({result['min_timing_ms']:.2f} - {result['max_timing_ms']:.2f}ms)")

Software Verification Report

Report Structure

---
Document: SWE.6 Software Verification Report
Project: BCM Door Lock Control
Version: 1.0
Date: 2025-01-15
Status: Conditional Release
---

## Executive Summary

The BCM Door Lock Control integrated software has been verified against 20 software
requirements. Of these, 18 pass verification; the remaining 2 require remediation
before full release.

## Verification Scope

| Category | Count | Passed | Failed |
|----------|-------|--------|--------|
| Functional | 12 | 11 | 1 |
| Timing | 5 | 4 | 1 |
| Error handling | 3 | 3 | 0 |
| **TOTAL** | **20** | **18** | **2** |

## Failed Requirements

### SWE-BCM-103 (Partial)

- **Issue**: Timing violation at -40°C (11.2ms vs 10ms limit)
- **Root Cause**: Motor driver slew rate at cold temperature
- **Remediation**: Update driver configuration for cold operation
- **Risk Assessment**: Low (rare operating condition)

### SWE-BCM-105 (Partial)

- **Issue**: DTC timing 180ms vs 50ms requirement
- **Root Cause**: Debounce parameter mismatch
- **Remediation**: Adjust debounce from 100ms to 50ms
- **Risk Assessment**: Medium (diagnostic response time)

## Recommendation

Conditional release approved with defect tracking:

- DR-BCM-001: Cold temperature timing fix
- DR-BCM-002: Debounce parameter correction

Full release after re-verification of fixes.

## Traceability Summary

| SW Requirement | Verification Measure | Result |
|----------------|-------------------|--------|
| SWE-BCM-100 | SWE-VM-BCM-010 | PASS |
| SWE-BCM-101 | SWE-VM-BCM-001, 002 | PASS |
| SWE-BCM-102 | SWE-VM-BCM-003, 004 | PASS |
| SWE-BCM-103 | SWE-VM-BCM-001, 005 | PARTIAL |
| SWE-BCM-104 | SWE-VM-BCM-007 | PASS |
| SWE-BCM-105 | SWE-VM-BCM-020, 021 | PARTIAL |
| ... | ... | ... |

## Approval

| Role | Name | Signature | Date |
|------|------|-----------|------|
| Test Lead | | | |
| SW Lead | | | |
| QA Manager | | | |
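The PASS/PARTIAL entries in the report's traceability summary follow from a simple rollup of per-measure verdicts for each requirement. A minimal sketch of that rule, with an assumed function name and verdict labels (the rollup policy itself is a project decision, not mandated by the PAM):

```python
from typing import Dict

def rollup_requirement_result(measure_verdicts: Dict[str, str]) -> str:
    """Aggregate per-measure verdicts into a requirement-level result.

    Policy assumed here: all PASS -> PASS; mixed -> PARTIAL; no PASS -> FAIL.
    """
    verdicts = set(measure_verdicts.values())
    if verdicts == {"PASS"}:
        return "PASS"
    if "PASS" in verdicts:
        return "PARTIAL"
    return "FAIL"
```

For example, SWE-BCM-103 in the summary is PARTIAL because one of its two measures failed at -40°C while the other passed.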

Work Products (Information Items per ASPICE PAM v4.0)

IMPORTANT: ASPICE PAM v4.0 uses "Information Items" terminology. The IDs below are from the official PAM work product table.

| Information Item ID | Information Item Name | Outcomes Supported | AI Role |
|---------------------|-----------------------|--------------------|---------|
| 08-60 | Verification Measure | O1 | AI drafts verification measures (test specifications), human validates |
| 08-58 | Verification Measure Selection Set | O2 | AI suggests selection based on coverage and criteria, human decides |
| 03-50 | Verification Measure Data | O3 | AI executes verification measures and captures data |
| 15-52 | Verification Results | O3 | AI analyzes results (pass/fail), human validates critical failures |
| 13-51 | Consistency Evidence | O4 | AI generates traceability matrices, human validates consistency |
| 13-52 | Communication Evidence | O5 | Human only - documentation of verification summary communication |

Note on Work Product IDs:

  • 08-60 Verification Measure: Test specifications, test cases, test procedures
  • 08-58 Verification Measure Selection Set: Documented selection of which verification measures to execute for a release
  • 03-50 Verification Measure Data: Raw test data, logs, measurements from execution
  • 15-52 Verification Results: Pass/fail status, defect reports, analysis of results
  • 13-51 Consistency Evidence: Traceability between requirements and verification measures
  • 13-52 Communication Evidence: Meeting minutes, verification summary reports, stakeholder sign-offs

ASPICE Source: PAM v4.0 Section 4.4.6, Work Products Table (Lines 2011-2019)


Summary

SWE.6 Software Verification - ASPICE PAM v4.0 Compliance:

  • Process Purpose: Ensure integrated software is verified to be consistent with software requirements
  • Outcomes: 5 official outcomes (O1-O5)
  • Base Practices: 5 official BPs (BP1-BP5)
  • Information Items: 08-60 (Verification Measure), 08-58 (Selection Set), 03-50 (Measure Data), 15-52 (Results), 13-51 (Consistency), 13-52 (Communication)

AI Integration by Base Practice:

  • BP1 (Specify): L1-L2 - AI drafts verification measures, human validates
  • BP2 (Select): L1 - AI suggests selection, human decides
  • BP3 (Verify): L2 - AI executes tests, human validates results
  • BP4 (Traceability): L2 - AI checks traceability, human validates
  • BP5 (Communicate): L1 - AI drafts reports, human communicates

Human-in-the-Loop (HITL) Requirements:

  • ALL base practices require human review and validation
  • BP5 (communication) requires direct human accountability
  • AI assists with execution and analysis - humans make final verification decisions

Key ASPICE Compliance Points:

  • Verification measures must trace to SOFTWARE REQUIREMENTS (not system requirements)
  • Bidirectional traceability required between: (1) verification measures ↔ SW requirements, (2) verification results ↔ verification measures
  • Verification performed on INTEGRATED SOFTWARE (not individual units)
  • Human accountability for verification summary and communication to stakeholders