3.6: HWE.4 Hardware Verification


Process Definition

Purpose

ASPICE PAM v4.0 Official Purpose:

The purpose of the Hardware Verification process is to verify the hardware design and implementation against the hardware requirements. This includes functional testing, environmental testing (thermal, EMC), and reliability testing, and it produces the systematic verification evidence needed to declare the hardware production-ready.

ASPICE Source: PAM v4.0 Section 4.5.4

Outcomes (ASPICE PAM v4.0)

IMPORTANT: These are the official outcomes aligned with ASPICE PAM v4.0. HWE.4 defines the verification activities that demonstrate compliance of the hardware with its specified requirements.

Outcome | Official Description (PAM v4.0) | AI Support Level
O1 | Verification measures are specified for hardware verification of the hardware design and implementation based on the hardware requirements. | L1-L2 (AI drafts verification plans, human validates)
O2 | Verification measures are selected according to the release scope including criteria for regression verification. | L1 (AI suggests, human decides)
O3 | The hardware is verified using the selected verification measures and the results of hardware verification are recorded. | L2 (AI executes simulation/analysis, human validates)
O4 | Consistency and bidirectional traceability are established between verification measures and hardware requirements; and bidirectional traceability is established between verification results and verification measures. | L2 (AI generates traceability, human validates)
O5 | Results of the hardware verification are summarized and communicated to all affected parties. | L1 (AI drafts, human communicates)

Base Practices with AI Integration

IMPORTANT: The table below maps HWE.4 base practices to AI integration levels and HITL requirements.

BP | Base Practice Description | AI Level | AI Application | HITL Required
BP1 | Specify verification measures for hardware verification. Define verification techniques (simulation, bench test, environmental test, formal analysis), pass/fail criteria, entry/exit criteria, sequencing, and required infrastructure/equipment setup for each hardware requirement. | L1-L2 | AI drafts verification plans from requirements, suggests test coverage strategies, identifies missing verification measures. Human validates completeness and technical adequacy. | YES - Human validates verification strategy
BP2 | Select verification measures. Document the selection of verification measures considering release scope, criticality, and regression verification criteria. Ensure sufficient coverage of hardware requirements. | L1 | AI analyzes coverage matrices and suggests optimal verification measure selection. AI flags under-covered requirements and recommends additional measures. Human decides final selection. | YES - Human decides scope and priority
BP3 | Verify the hardware. Execute the selected verification measures on the hardware design (simulation) and hardware implementation (bench testing, environmental testing). Record all verification results including pass/fail status and measurement data. | L2 | AI orchestrates simulation runs, automates test bench sequences, collects and organizes measurement data. AI performs initial pass/fail assessment. Human validates all results, especially failures and marginal passes. | YES - Human validates all pass/fail decisions
BP4 | Ensure consistency and establish bidirectional traceability. Maintain traceability between verification measures and hardware requirements, and between verification results and verification measures. | L2 | AI generates traceability matrices automatically, detects gaps and inconsistencies, flags orphan tests or untested requirements. Human validates critical traceability links. | YES - Human validates traceability completeness
BP5 | Summarize and communicate results. Summarize the hardware verification results and communicate the verification status, open issues, and risk assessment to all affected parties including hardware, software, and system teams. | L1 | AI drafts verification summary reports, generates charts and trend analysis, highlights critical findings. Human reviews, approves, and communicates to stakeholders. | YES - Human accountability for communication

Verification Testing Categories

1. In-Circuit Testing (ICT)

Objective: Detect manufacturing defects before system integration

Test Points:

  • Continuity (all traces connected)
  • Voltage (power supply rails correct)
  • Resistance (component values within tolerance)
  • Capacitance (filtering adequate)

Example ICT Program:

Test 1: Power Supply Voltage
  |- Apply 5V input
  |- Measure 3.3V output (+/-3%)
  |_ Expected: 3.20-3.40V

Test 2: Ground Continuity
  |- Measure resistance between GND test points
  |_ Expected: <1 ohm

Test 3: Component Presence
  |- Capacitors: Measure capacitance (+/-10%)
  |- Resistors: Measure resistance (+/-5%)
  |_ Inductors: Measure inductance (+/-10%)

Test 4: Signal Integrity
  |- Clock oscillator frequency (8MHz +/-1%)
  |_ JTAG chain (scan path integrity)

Results: PASS 99/100 boards
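
The ICT pass bands follow directly from the nominal value and its tolerance. A minimal sketch of that calculation in Python (the tolerance_band helper is illustrative, not part of any ICT tool):

def tolerance_band(nominal: float, tol_percent: float) -> tuple:
    """Return (min, max) acceptance limits for a nominal value and tolerance."""
    delta = nominal * tol_percent / 100
    return (nominal - delta, nominal + delta)

# 3.3V rail at +/-3% -> approximately 3.20V to 3.40V, as in Test 1 above
v_min, v_max = tolerance_band(3.3, 3.0)
print(f"3V3 rail limits: {v_min:.2f}V to {v_max:.2f}V")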

2. Thermal Testing

Purpose: Verify heat dissipation under worst-case conditions

Test Setup:

Environmental chamber (Thermotron, Weiss):
|- Ambient temperature: 50 deg C (worst case)
|- Board power dissipation: 2W
|- Thermal monitoring: 10 thermocouples
|_ Duration: 4 hours (thermal stabilization)

Measurement Points:
|- Microcontroller junction (target <100 deg C)
|- Power regulator (target <80 deg C)
|- Sensor interface (target <60 deg C)
|_ PCB ambient (confirm chamber temp)

Results:

MCU Junction Temperature: 95 deg C [PASS] (well below 125 deg C limit)
Power Regulator: 75 deg C [PASS] (margin available)
Thermal design margin: >20 deg C [PASS] ACCEPTABLE
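
The measured junction temperature can be cross-checked against the first-order estimate Tj = Ta + P x theta_JA. A minimal sketch, with a hypothetical theta_JA of 22.5 deg C/W and the full 2W board dissipation treated as if concentrated in the MCU (neither figure comes from a real datasheet):

def junction_temperature(t_ambient_c: float, power_w: float,
                         theta_ja_c_per_w: float) -> float:
    """First-order junction temperature estimate: Tj = Ta + P * theta_JA."""
    return t_ambient_c + power_w * theta_ja_c_per_w

tj = junction_temperature(50.0, 2.0, 22.5)  # hypothetical input values
print(f"Estimated Tj: {tj:.0f} deg C")  # 95 deg C, consistent with the measurement above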

3. EMC Testing (Emissions & Immunity)

Objective: Verify design meets regulatory standards

EMI Emissions (FCC, CISPR):

Frequency Range: 150kHz - 1GHz
Test Method: Conducted (power lines), Radiated (antenna)

Results:
|- 8MHz oscillator: -40dBuV/m @ 1m [PASS] (limit: -20dBuV/m)
|- MCU switching noise: -35dBuV/m [PASS]
|_ Power supply ripple: <10mV peak-to-peak [PASS]

EMI Immunity (IEC 61000):

ESD (Electrostatic Discharge): +/-8kV contact
|- Test: Zap connectors with 8kV discharge
|_ Result: System remains responsive [PASS]

RF Immunity (100MHz - 1GHz, 10V/m):
|- Test: Expose to RF field, monitor ADC accuracy
|_ Result: ADC error <1% [PASS]

Power Supply Disturbance (+/-20% voltage transient):
|- Test: Simulate brownout conditions
|_ Result: Watchdog triggers, safe shutdown [PASS]

4. Reliability Testing (HALT/HASS)

HALT (Highly Accelerated Life Testing):

Stress conditions (beyond normal operating range):
|- Temperature range: -10 deg C to +80 deg C (vs. normal 0-50 deg C)
|- Power cycling: 500 cycles (on/off every 30 seconds)
|- Vibration: 5G acceleration, 20Hz-2kHz sweep
|_ Combined stress: All three simultaneously

Test Duration: 168 hours (1 week)
Failure Criteria: Any functional loss
Target: ZERO failures
Result: [PASS] All boards pass (robustness demonstrated)

HASS (Highly Accelerated Stress Screening):

  • Manufacturing screening process
  • Applies subset of HALT stresses
  • Detects marginal components before field deployment

HW Verification Strategies

Hardware verification employs multiple complementary strategies to ensure design correctness at different stages. Each strategy addresses a distinct class of defects and provides evidence for different aspects of the hardware requirements.

Simulation

Simulation verifies design behavior at the schematic and layout level before physical prototypes exist. It is the earliest and most cost-effective verification method.

Simulation Type | Purpose | Tools (Examples) | AI Integration
SPICE Simulation | Analog circuit behavior: gain, bandwidth, stability, noise, transient response | LTspice, PSpice, Cadence Spectre | AI sweeps parameter corners, identifies worst-case operating points, suggests design margin improvements
Signal Integrity (SI) | Transmission line effects, crosstalk, impedance matching, eye diagrams | HyperLynx, Ansys SIwave | AI analyzes eye diagram margins, flags traces at risk of timing violations
Power Integrity (PI) | PDN impedance, voltage droop, decoupling adequacy | Ansys RedHawk, Cadence Voltus | AI optimizes decoupling capacitor placement and values for target impedance profile
Thermal Simulation | Junction temperatures, thermal via effectiveness, airflow modeling | Ansys Icepak, FloTHERM | AI predicts hotspot locations, suggests layout modifications to improve thermal performance
EMC Pre-compliance | Radiated and conducted emissions prediction, susceptibility modeling | Ansys HFSS, CST Studio | AI correlates simulation predictions with historical lab results to estimate pass/fail probability
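
To illustrate the corner sweeping mentioned for SPICE simulation, the sketch below exhaustively evaluates tolerance corners of a regulator feedback divider to bound the worst-case output voltage (the divider topology, values, and tolerances are hypothetical, not from this design):

from itertools import product

def divider_vout(vref: float, r1: float, r2: float) -> float:
    """Adjustable-regulator output with feedback divider: Vout = Vref * (1 + R1/R2)."""
    return vref * (1 + r1 / r2)

# Hypothetical corners: Vref and both resistors each swept to -1% / +1%
corners = product([0.99, 1.01], repeat=3)
results = [divider_vout(1.25 * kv, 16.4e3 * k1, 10.0e3 * k2)
           for kv, k1, k2 in corners]
print(f"Vout across corners: {min(results):.3f}V to {max(results):.3f}V")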

Emulation (FPGA Prototyping)

For designs that include programmable logic or complex digital subsystems, FPGA-based emulation provides a high-fidelity verification platform.

  • Speed advantage: Emulation runs at MHz speeds compared to simulation at Hz/kHz, enabling verification of real-time behavior
  • HW-SW co-verification: Actual firmware can execute on the emulated hardware, testing true HW-SW interaction
  • Peripheral validation: I/O interfaces can be validated against real external devices before silicon availability
  • AI role: AI monitors emulation runs for anomalous behavior, correlates FPGA results with simulation predictions, and flags discrepancies for human review (a comparison sketch follows this list)
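
A minimal sketch of that discrepancy flagging: sample-by-sample comparison of emulation output against a simulation golden reference (the signal name and trace format are hypothetical):

def compare_traces(golden: dict, emulated: dict) -> list:
    """Compare per-signal sample lists; return cycle-indexed mismatches."""
    mismatches = []
    for signal, golden_samples in golden.items():
        emu_samples = emulated.get(signal, [])
        for cycle, (g, e) in enumerate(zip(golden_samples, emu_samples)):
            if g != e:
                mismatches.append((signal, cycle, g, e))
    return mismatches

# Hypothetical traces: FPGA asserts ADC_DONE one cycle later than simulation
golden = {"ADC_DONE": [0, 0, 1, 1]}
emulated = {"ADC_DONE": [0, 0, 0, 1]}
for sig, cyc, g, e in compare_traces(golden, emulated):
    print(f"{sig} mismatch at cycle {cyc}: golden={g}, fpga={e}")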

Formal Verification

Formal methods provide mathematical proof of design properties, complementing simulation which can only cover a finite set of scenarios.

Technique | Application | Benefit
Property checking | Verify that specific assertions always hold (e.g., "output voltage never exceeds absolute maximum") | Exhaustive proof, no corner cases missed
Equivalence checking | Confirm that the implemented netlist matches the schematic intent | Catches synthesis errors, manual editing mistakes
Timing closure | Static timing analysis proves all paths meet setup/hold requirements | No dynamic stimuli needed, all paths covered

AI in Hardware Verification

AI and machine learning techniques bring significant value to hardware verification by addressing the state-space explosion problem and accelerating coverage closure.

ML for Coverage Analysis

Traditional coverage analysis relies on engineers to manually identify gaps and write additional tests. AI transforms this into an automated, data-driven process.

Coverage prediction model:

"""
AI-assisted coverage analysis for hardware verification.
Predicts verification gaps and suggests targeted test stimuli.
"""

from dataclasses import dataclass
from typing import List, Dict

@dataclass
class CoverageGap:
    requirement_id: str
    current_coverage: float
    target_coverage: float
    suggested_stimuli: List[str]
    priority: str

class HWCoverageAnalyzer:
    """Analyze HW verification coverage and suggest closure strategies."""

    def __init__(self, coverage_db_path: str):
        self.coverage_data = self._load_coverage(coverage_db_path)

    def _load_coverage(self, path: str) -> Dict:
        """Load coverage database from simulation/test results."""
        # Placeholder: parse coverage reports exported by simulation/test
        # tools (format is tool-specific). Expected shape:
        # {requirement_id: coverage_fraction}
        return {}

    def identify_gaps(self, target: float = 0.95) -> List[CoverageGap]:
        """Identify requirements with coverage below target threshold."""
        gaps = []
        for req_id, coverage in self.coverage_data.items():
            if coverage < target:
                gap = CoverageGap(
                    requirement_id=req_id,
                    current_coverage=coverage,
                    target_coverage=target,
                    suggested_stimuli=self._suggest_stimuli(req_id),
                    priority=self._classify_priority(req_id, coverage)
                )
                gaps.append(gap)
        return gaps

    def _suggest_stimuli(self, req_id: str) -> List[str]:
        """Use AI to suggest test stimuli that target uncovered scenarios."""
        # AI analyzes which input combinations have not been exercised
        # and recommends specific test vectors
        return []

    def _classify_priority(self, req_id: str, coverage: float) -> str:
        """Classify gap priority based on requirement criticality."""
        if coverage < 0.50:
            return "CRITICAL"
        elif coverage < 0.80:
            return "HIGH"
        else:
            return "MEDIUM"

Assertion Mining

AI can automatically extract implicit design intent from schematics and documentation, then generate verification assertions that engineers may not have considered.

AI assertion mining workflow:

  1. Input ingestion: AI processes schematic netlists, component datasheets, and design specifications
  2. Rule extraction: AI identifies implicit constraints (e.g., maximum current ratings, voltage thresholds, timing relationships)
  3. Assertion generation: AI generates formal assertions encoding these constraints
  4. Human review: Engineer validates that generated assertions correctly capture design intent
  5. Integration: Approved assertions are added to the verification plan

Example generated assertions:

Assertion HWA-001: VREG_3V3 output shall remain within 3.135V-3.465V
                   under all load conditions (0mA to 500mA)
  Source: LDO datasheet + schematic load analysis
  Priority: CRITICAL (MCU operating range)

Assertion HWA-002: Clock jitter on XTAL_OUT shall not exceed 50ps RMS
                   across temperature range -40C to +85C
  Source: MCU datasheet timing requirements
  Priority: HIGH (CAN communication reliability)

Assertion HWA-003: Decoupling capacitor ESR on VDD pins shall be
                   < 50 milliohm at 100MHz
  Source: PDN impedance target from PI analysis
  Priority: MEDIUM (power integrity margin)
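
A minimal sketch of how a mined assertion such as HWA-001 could be represented and checked against bench measurements (the dataclass and the load-sweep readings are illustrative):

from dataclasses import dataclass
from typing import List

@dataclass
class MinedAssertion:
    assertion_id: str
    signal: str
    min_value: float
    max_value: float
    priority: str

    def check(self, samples: List[float]) -> bool:
        """True if every measured sample lies within the asserted bounds."""
        return all(self.min_value <= s <= self.max_value for s in samples)

hwa_001 = MinedAssertion("HWA-001", "VREG_3V3", 3.135, 3.465, "CRITICAL")
load_sweep_v = [3.31, 3.29, 3.26, 3.22]  # hypothetical 0-500mA load sweep readings
print("HWA-001:", "PASS" if hwa_001.check(load_sweep_v) else "FAIL")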

Bug Hunting with AI

AI-powered bug hunting applies anomaly detection and pattern matching across verification results to identify subtle hardware defects.

Technique | Description | Defect Class
Anomaly detection | ML model trained on passing test patterns detects statistical outliers in measurement data that indicate marginal components | Marginal solder joints, parametric drift
Cross-correlation | AI correlates failures across multiple test categories (thermal + EMC + reliability) to identify common root causes | Design weaknesses exposed under combined stress
Historical pattern matching | AI compares current verification results against databases of known failure modes from previous projects | Recurring layout mistakes, component derating issues
Regression analysis | AI identifies verification results that degraded between hardware revisions, even when still within pass criteria | Gradual performance erosion across board revisions
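
A simplified illustration of the anomaly-detection row: a z-score over a passing population stands in for a trained ML model (board IDs and readings are illustrative):

from statistics import mean, stdev

def flag_outliers(measurements: dict, z_threshold: float = 3.0) -> list:
    """Flag units whose measurement deviates statistically from the population."""
    values = list(measurements.values())
    mu, sigma = mean(values), stdev(values)
    return [unit for unit, v in measurements.items()
            if sigma > 0 and abs(v - mu) / sigma > z_threshold]

# Hypothetical 3V3 rail readings: board_042 is within spec but statistically marginal
readings = {f"board_{i:03d}": 3.30 for i in range(1, 40)}
readings["board_042"] = 3.21  # passes the 3.135-3.465V limit, yet an outlier
print("Review candidates:", flag_outliers(readings))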

HW-SW Co-Verification

Hardware verification cannot be performed in isolation. The hardware must be verified in the context of the software that will execute on it, and vice versa. HW-SW co-verification addresses the integration boundary.

Integrated Verification Approaches

Approach | Stage | Description | AI Role
Virtual Prototyping | Pre-silicon | SystemC/TLM models of hardware execute with actual firmware binary. Verifies register maps, interrupt handling, memory maps. | AI generates virtual platform models from register specifications, monitors firmware-hardware interactions for protocol violations.
FPGA-in-the-Loop | Pre-production | Hardware logic runs on FPGA while software runs on target MCU or host processor. Real-time HW-SW interaction verified. | AI compares FPGA behavior against simulation golden reference, flags timing discrepancies.
HIL with Prototype | Prototype | Physical prototype board runs in HIL environment with simulated plant models and real communication buses. | AI orchestrates test sequences, monitors analog and digital interfaces, detects intermittent failures through statistical analysis.
Target Board Test | Production | Final production hardware tested with production software under real operating conditions. | AI analyzes production test data trends across board populations, detects manufacturing drift.

Interface Verification Matrix

For each HW-SW interface, verification must cover both directions of interaction (a GPIO example is sketched after the matrix):

HW-SW Interface Verification Points:
|- GPIO: SW write -> HW output level measured
|         HW input stimulus -> SW read value confirmed
|- ADC:  HW analog input applied -> SW digital value verified
|         SW configuration -> HW sampling rate/resolution confirmed
|- PWM:  SW duty cycle set -> HW output waveform measured
|         HW timer overflow -> SW interrupt timing verified
|- SPI:  SW transmit data -> HW MOSI waveform captured
|         HW MISO stimulus -> SW receive buffer verified
|- CAN:  SW frame transmit -> HW bus waveform compliant
|         HW bus frame received -> SW message buffer correct
|_ I2C:  SW address/data sent -> HW bus protocol verified
          HW ACK/NACK -> SW status register correct
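
A minimal sketch of the bidirectional check for the GPIO row, assuming a hypothetical debugger (dbg) and instrument (dmm) API; none of these method names or thresholds come from a real tool:

def verify_gpio_bidirectional(dbg, dmm, pin: str) -> dict:
    """Check SW write -> HW level and HW stimulus -> SW read for one GPIO pin."""
    results = {}

    # Direction 1: software writes, hardware output level measured
    dbg.gpio_write(pin, 1)
    results["sw_write_hw_high"] = dmm.measure_dc_voltage(pin) > 2.0  # assumed VOH
    dbg.gpio_write(pin, 0)
    results["sw_write_hw_low"] = dmm.measure_dc_voltage(pin) < 0.8  # assumed VOL

    # Direction 2: hardware stimulus applied, software readback confirmed
    dmm.force_level(pin, 3.3)
    results["hw_high_sw_read"] = dbg.gpio_read(pin) == 1
    dmm.force_level(pin, 0.0)
    results["hw_low_sw_read"] = dbg.gpio_read(pin) == 0

    return results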

Test Bench Automation

AI-Assisted Testbench Generation

AI accelerates testbench development by generating test infrastructure from hardware specifications. This reduces the manual effort of writing repetitive test code while ensuring comprehensive coverage.

Automated test bench workflow:

  1. Specification parsing: AI extracts testable parameters from hardware requirements and datasheets
  2. Stimulus generation: AI creates parameterized test vectors covering nominal, boundary, and corner cases
  3. Checker generation: AI writes output verification logic with appropriate tolerances
  4. Sequence assembly: AI orders test cases for efficient execution (minimizing setup changes)
  5. Report template: AI generates results capture and reporting infrastructure

Example: AI-generated power supply verification bench:

"""
@file test_power_supply_verification.py
@brief AI-generated testbench for power supply verification
@trace HWE-REQ-PS-001 through HWE-REQ-PS-012
@generated AI-assisted, human-reviewed
"""

from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class PowerSupplyTestCase:
    test_id: str
    description: str
    input_voltage: float
    load_current_ma: float
    expected_output_v: float
    tolerance_percent: float
    max_ripple_mv: float

# AI-generated test cases from requirement analysis
TEST_CASES: List[PowerSupplyTestCase] = [
    # Nominal operating conditions
    PowerSupplyTestCase("PS-TC-001", "Nominal input, no load",
                        12.0, 0.0, 3.300, 3.0, 20.0),
    PowerSupplyTestCase("PS-TC-002", "Nominal input, half load",
                        12.0, 250.0, 3.300, 3.0, 30.0),
    PowerSupplyTestCase("PS-TC-003", "Nominal input, full load",
                        12.0, 500.0, 3.300, 5.0, 50.0),
    # Boundary conditions (AI-identified from datasheet analysis)
    PowerSupplyTestCase("PS-TC-004", "Minimum input voltage",
                        6.0, 500.0, 3.300, 5.0, 80.0),
    PowerSupplyTestCase("PS-TC-005", "Maximum input voltage",
                        36.0, 500.0, 3.300, 3.0, 30.0),
    # Transient conditions
    PowerSupplyTestCase("PS-TC-006", "Load step 0 to 500mA",
                        12.0, 500.0, 3.300, 8.0, 150.0),
    PowerSupplyTestCase("PS-TC-007", "Cold crank (6V, 100ms)",
                        6.0, 500.0, 3.300, 10.0, 200.0),
]

class PowerSupplyTestBench:
    """Automated testbench for power supply verification."""

    def __init__(self, instrument_controller):
        self.instruments = instrument_controller
        self.results: List[dict] = []

    def execute_all(self) -> dict:
        """Execute all test cases and return summary."""
        for tc in TEST_CASES:
            result = self._execute_single(tc)
            self.results.append(result)

        passed = sum(1 for r in self.results if r["status"] == "PASS")
        return {
            "total": len(self.results),
            "passed": passed,
            "failed": len(self.results) - passed,
            "overall": "PASS" if passed == len(self.results) else "FAIL",
            "details": self.results,
        }

    def _execute_single(self, tc: PowerSupplyTestCase) -> dict:
        """Execute a single test case."""
        # Configure input supply
        self.instruments.set_supply_voltage(tc.input_voltage)
        self.instruments.set_electronic_load(tc.load_current_ma)
        self.instruments.wait_settle(500)  # 500ms settling time

        # Measure output
        measured_v = self.instruments.measure_dc_voltage("VOUT_3V3")
        measured_ripple = self.instruments.measure_ripple_mv("VOUT_3V3")

        # Evaluate pass/fail
        v_min = tc.expected_output_v * (1 - tc.tolerance_percent / 100)
        v_max = tc.expected_output_v * (1 + tc.tolerance_percent / 100)
        v_pass = v_min <= measured_v <= v_max
        ripple_pass = measured_ripple <= tc.max_ripple_mv

        return {
            "test_id": tc.test_id,
            "status": "PASS" if (v_pass and ripple_pass) else "FAIL",
            "measured_voltage": measured_v,
            "measured_ripple_mv": measured_ripple,
            "voltage_pass": v_pass,
            "ripple_pass": ripple_pass,
        }
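
Typical invocation, assuming an instrument controller object that implements the methods used above (set_supply_voltage, set_electronic_load, wait_settle, measure_dc_voltage, measure_ripple_mv):

bench = PowerSupplyTestBench(instrument_controller=my_controller)  # my_controller is hypothetical
summary = bench.execute_all()
print(f"{summary['passed']}/{summary['total']} passed -> {summary['overall']}")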

Coverage Analysis

Coverage analysis in hardware verification ensures that the verification plan exercises all requirements, all operating conditions, and all critical design parameters. AI significantly improves coverage closure efficiency.

Code Coverage (HDL Designs)

For designs that include programmable logic (FPGA, CPLD), code coverage metrics apply to the HDL source.

Coverage Metric | Description | Target
Statement coverage | Every HDL statement executed at least once | >95%
Branch coverage | Every conditional branch taken in both directions | >90%
Toggle coverage | Every signal toggles between 0 and 1 | >85%
FSM coverage | Every state visited and every transition exercised | 100% for safety-critical FSMs
Expression coverage | Every sub-expression in complex conditions evaluated independently | >80%

Functional Coverage

Functional coverage measures whether the verification plan has exercised the design across its intended operating space. Unlike code coverage, functional coverage is requirement-driven and defined by the engineer.

Functional coverage categories for hardware verification:

Category | Coverage Points | Measurement Method
Requirement coverage | Every HW requirement has at least one verification measure mapped to it | Traceability matrix analysis
Parameter coverage | All specified operating parameters tested at nominal, minimum, and maximum values | Test case parameter matrix
Environmental coverage | All environmental conditions (temperature, humidity, vibration, EMC) tested per specification | Environmental test matrix
Interface coverage | Every HW interface exercised in both directions with nominal and fault conditions | Interface verification matrix
Corner case coverage | Worst-case combinations of parameters tested (e.g., max temperature + max load + min voltage) | Corner case matrix
Failure mode coverage | All identified failure modes from FMEA have corresponding verification measures | FMEA-to-test mapping

AI-Driven Coverage Closure

AI accelerates coverage closure by intelligently targeting the most impactful untested scenarios.

AI coverage closure process:

  1. Gap identification: AI analyzes current coverage metrics and identifies the most significant gaps relative to target thresholds
  2. Impact ranking: AI ranks gaps by risk (requirement criticality times coverage deficit) to prioritize closure effort (see the sketch after this list)
  3. Test generation: AI generates targeted test vectors designed to close specific coverage gaps with minimal additional test time
  4. Execution optimization: AI sequences tests to minimize equipment reconfiguration and environmental chamber transitions
  5. Progress tracking: AI continuously updates coverage metrics and projects remaining effort to reach targets
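
In the minimal case, the impact ranking of step 2 reduces to a product of criticality weight and coverage deficit. A sketch reusing the CoverageGap dataclass from the coverage analyzer above (the weight scale is hypothetical):

CRITICALITY_WEIGHT = {"CRITICAL": 3.0, "HIGH": 2.0, "MEDIUM": 1.0}  # hypothetical scale

def rank_gaps(gaps: list) -> list:
    """Order coverage gaps by risk = criticality weight * coverage deficit."""
    def risk(gap):
        deficit = gap.target_coverage - gap.current_coverage
        return CRITICALITY_WEIGHT.get(gap.priority, 1.0) * deficit
    return sorted(gaps, key=risk, reverse=True)
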
Coverage Closure Dashboard (AI-generated):
|----------------------------------------------------|
| Category          | Current | Target | Status      |
|----------------------------------------------------|
| Requirement       |   98%   |  100%  | 2 gaps      |
| Parameter         |   92%   |   95%  | On track    |
| Environmental     |   88%   |   90%  | 1 gap       |
| Interface         |  100%   |  100%  | Complete    |
| Corner case       |   85%   |   90%  | 3 gaps      |
| Failure mode      |   95%   |  100%  | 2 gaps      |
|----------------------------------------------------|
| AI Recommendation: Prioritize corner case gaps     |
| (high criticality). Estimated 8 additional tests   |
| needed. Projected closure: 2 days.                 |
|----------------------------------------------------|

Formal Methods with AI

Formal methods provide mathematical guarantees about hardware behavior that simulation and testing alone cannot achieve. AI enhances formal methods by automating property specification and managing the computational complexity of formal proofs.

Property Checking

Property checking verifies that a design satisfies specified properties under all possible input conditions and state sequences. AI assists in two critical ways:

  1. Property extraction: AI reads hardware specifications and datasheets to automatically generate formal properties. For example, from the specification "regulator output shall not exceed 3.6V under any load condition," AI generates the corresponding formal assertion with all relevant boundary conditions. A checking sketch follows this list.

  2. Counterexample analysis: When formal verification finds a property violation, AI analyzes the counterexample trace to identify the root cause and suggest design corrections. This reduces the manual effort of interpreting complex counterexample waveforms.
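
For the regulator example in point 1, the generated property can be checked directly against simulation output. A minimal sketch (the sweep values are illustrative; in practice they would come from a SPICE run):

ABS_MAX_VOUT = 3.6  # from the specification: output shall not exceed 3.6V

def property_never_exceeds(samples, limit: float) -> bool:
    """Property: the signal stays at or below the limit for every sample."""
    return max(samples) <= limit

vout_sweep = [3.28, 3.31, 3.35, 3.41]  # illustrative worst-case load-transient sweep
holds = property_never_exceeds(vout_sweep, ABS_MAX_VOUT)
print("Property VOUT-MAX:", "HOLDS" if holds else "VIOLATED")

Note that checking sampled simulation output only provides evidence; a formal tool proves the property over all reachable conditions, which is the point of property checking.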

Equivalence Checking

Equivalence checking proves that two representations of a design are functionally identical. Common applications in hardware verification include:

Check Type | Comparison | Purpose
Schematic vs. netlist | Design intent vs. synthesized implementation | Catch synthesis tool errors or unintended optimizations
Pre-layout vs. post-layout | Logical design vs. physical implementation | Detect layout-induced functional changes (e.g., antenna effects in CMOS)
Revision A vs. Revision B | Original design vs. modified design | Confirm that only intended changes were made between hardware revisions
Simulation model vs. HDL | Behavioral model vs. RTL implementation | Verify that the implementation matches the specification model

AI role in equivalence checking: AI identifies the structural differences between two design versions and classifies each difference as intentional (matching a change request) or unintentional (potential error). This dramatically reduces the manual review burden when designs undergo frequent revisions.
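
A minimal sketch of that classification: diff two netlist component maps and mark each change intentional only if its reference designator appears in an approved change request (the data structures and values are illustrative):

def classify_netlist_diff(rev_a: dict, rev_b: dict, approved_refs: set) -> list:
    """Return (ref_designator, old, new, classification) for every difference."""
    findings = []
    for ref in sorted(set(rev_a) | set(rev_b)):
        old, new = rev_a.get(ref), rev_b.get(ref)
        if old != new:
            status = "INTENTIONAL" if ref in approved_refs else "REVIEW"
            findings.append((ref, old, new, status))
    return findings

# Hypothetical revision diff: the C12 change was approved via a CR, the R7 change was not
rev_a = {"C12": "100nF", "R7": "10k"}
rev_b = {"C12": "220nF", "R7": "9.1k"}
for finding in classify_netlist_diff(rev_a, rev_b, approved_refs={"C12"}):
    print(finding)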


HITL Protocol for HW Verification

Hardware verification involves physical testing, safety-critical decisions, and regulatory compliance. The Human-in-the-Loop (HITL) protocol defines mandatory human sign-off gates throughout the verification process.

Sign-Off Gates

Gate | Activity | AI Contribution | Human Responsibility | Sign-Off Artifact
G1: Verification Plan Approval | Review and approve the HW verification plan including test categories, pass/fail criteria, and equipment requirements | AI drafts verification plan from requirements, suggests coverage strategies | HW lead reviews plan for completeness, validates that all safety-critical requirements have verification measures | Signed verification plan
G2: Simulation Review | Review simulation results before proceeding to physical prototyping | AI runs simulations, summarizes results, highlights marginal parameters | HW engineer validates simulation setup, confirms models are appropriate, approves progression to prototype | Simulation review report
G3: Prototype Test Approval | Approve test procedures before executing on physical prototypes | AI generates test procedures from verification plan, calculates expected values | Test engineer reviews procedures for safety, validates equipment setup, approves execution | Approved test procedures
G4: Test Result Validation | Validate all verification results including pass, fail, and marginal outcomes | AI collates results, performs statistical analysis, flags anomalies, drafts verification report | HW lead validates every failure root cause, confirms marginal passes have adequate design margin, approves verification status | Signed verification results
G5: Verification Completion | Final sign-off that all hardware requirements have been verified | AI generates final coverage summary, produces traceability evidence, drafts Design Verification Report | Project manager, HW lead, and QA confirm all requirements verified, all defects resolved, design ready for release | Signed Design Verification Report

Escalation Criteria

The following conditions require escalation beyond the standard HITL protocol:

  • Any safety-critical requirement failure: Escalate to safety manager and project lead immediately
  • Multiple marginal passes on the same subsystem: Escalate to design authority for margin adequacy review
  • Environmental test failure: Escalate to EMC/reliability specialist before re-test
  • Coverage target not achievable: Escalate to project manager for scope negotiation
  • AI-human disagreement on pass/fail: Human decision prevails; disagreement documented with rationale

Design Verification Report

Example Report Structure:

Test Summary:
|- ICT: 99/100 PASS
|- Thermal: PASS (95 deg C junction < 125 deg C limit)
|- EMC: PASS (all limits met)
|- Reliability: PASS (500 power cycles, no failures)
|_ Overall: [PASS] APPROVED FOR PRODUCTION

Defects Found & Resolved:
|- 1 board with cold solder joint (reworked, passed re-test)
|_ No design flaws identified

Design Margin Analysis:
|- Thermal: >20 deg C margin [PASS]
|- Voltage regulation: >2% margin [PASS]
|- Signal integrity: Adequate [PASS]
|_ EMC: 20dB margin [PASS]

Risk Assessment:
|- No residual risks identified
|_ Design ready for production release

Work Products (Information Items per ASPICE PAM v4.0)

IMPORTANT: ASPICE PAM v4.0 uses "Information Items" terminology. The IDs below are aligned with the official PAM work product table for hardware verification.

Information Item ID | Information Item Name | Outcomes Supported | AI Role
08-60 | Verification Measure | O1 | AI drafts verification measures (test specifications, simulation plans, environmental test procedures) from hardware requirements. Human validates technical adequacy.
08-58 | Verification Measure Selection Set | O2 | AI suggests optimal selection based on coverage analysis, regression criteria, and release scope. Human decides final selection.
03-50 | Verification Measure Data | O3 | AI captures raw measurement data, simulation logs, environmental chamber recordings, and ICT results. AI organizes data for analysis.
15-52 | Verification Results | O3 | AI performs initial pass/fail assessment, statistical analysis of measurement populations, and trend detection. Human validates all results.
13-51 | Consistency Evidence | O4 | AI generates bidirectional traceability matrices (HW requirements to verification measures, verification measures to results). AI flags gaps and orphans. Human validates completeness.
13-52 | Communication Evidence | O5 | AI drafts verification summary reports, risk assessments, and status dashboards. Human reviews, approves, and communicates to affected parties.

Note on Work Product IDs:

  • 08-60 Verification Measure: Includes simulation test benches, ICT programs, thermal test procedures, EMC test plans, reliability test specifications
  • 08-58 Verification Measure Selection Set: Documents which measures are executed for a given release, with justification for exclusions
  • 03-50 Verification Measure Data: Raw oscilloscope captures, thermal camera images, EMC spectrum plots, ICT logs, simulation waveforms
  • 15-52 Verification Results: Pass/fail verdicts, defect reports, root cause analysis, design margin calculations
  • 13-51 Consistency Evidence: Traceability matrices linking HW requirements to verification measures to results
  • 13-52 Communication Evidence: Meeting minutes, verification status reports, stakeholder sign-offs, release decision records

Implementation Checklist

Use this checklist to verify that HWE.4 activities are complete and ASPICE-compliant.

Verification Planning

  • Hardware verification plan documented and approved (Gate G1)
  • All hardware requirements mapped to at least one verification measure
  • Pass/fail criteria defined for every verification measure
  • Required test equipment and environments identified
  • Verification schedule aligned with project milestones
  • AI tools configured for coverage analysis and test generation

Simulation Verification

  • SPICE simulations executed for all analog circuits
  • Signal integrity analysis completed for high-speed interfaces
  • Power integrity analysis confirms PDN impedance targets met
  • Thermal simulation predicts safe junction temperatures
  • EMC pre-compliance simulation performed
  • Simulation results reviewed and approved (Gate G2)

Physical Verification

  • ICT program developed and validated on known-good boards
  • ICT executed on prototype population with results recorded
  • Thermal testing performed under worst-case conditions
  • EMC emissions testing completed (conducted and radiated)
  • EMC immunity testing completed (ESD, RF, power disturbance)
  • Reliability testing (HALT/HASS) executed per specification
  • All test procedures approved before execution (Gate G3)

HW-SW Co-Verification

  • All HW-SW interfaces verified in both directions
  • Register map access verified with target firmware
  • Interrupt timing and latency measured on target hardware
  • Communication bus protocols verified (CAN, SPI, I2C, UART)
  • Power mode transitions verified (sleep, wake, standby)

Coverage and Traceability

  • Coverage analysis shows all requirements verified
  • Bidirectional traceability established (requirements to measures to results)
  • Coverage gaps identified and addressed or justified
  • AI-generated traceability matrices validated by human reviewer

Results and Reporting

  • All verification results recorded with pass/fail status
  • All failures root-caused and resolved or documented with risk assessment
  • Design margin analysis completed for all critical parameters
  • Design Verification Report drafted and reviewed
  • Verification results validated by HW lead (Gate G4)
  • Final verification completion sign-off obtained (Gate G5)
  • Results communicated to all affected parties (HW, SW, system, project management)

Summary

HWE.4 Verification Outputs:

  • [PASS] Functional testing (ICT) shows 99/100 boards passing (failed board reworked and re-tested)
  • [PASS] Thermal analysis confirms safe operation
  • [PASS] EMC testing meets regulatory requirements
  • [PASS] Reliability testing demonstrates robustness
  • [PASS] Design verification report with approval
  • [PASS] All design margins verified

AI Integration by Base Practice:

  • BP1 (Specify): L1-L2 - AI drafts verification measures, human validates
  • BP2 (Select): L1 - AI suggests selection, human decides
  • BP3 (Verify): L2 - AI executes simulation/bench tests, human validates results
  • BP4 (Traceability): L2 - AI generates traceability matrices, human validates
  • BP5 (Communicate): L1 - AI drafts reports, human communicates

Human-in-the-Loop (HITL) Requirements:

  • ALL base practices require human review and validation
  • Five mandatory sign-off gates (G1 through G5) with defined artifacts
  • Safety-critical failures require immediate escalation
  • AI-human disagreements resolved in favor of human judgment with documented rationale

Key ASPICE Compliance Points:

  • Verification measures must trace to HARDWARE REQUIREMENTS (not system requirements)
  • Bidirectional traceability required between: (1) verification measures and HW requirements, (2) verification results and verification measures
  • Verification covers both design (simulation) and implementation (physical testing)
  • Human accountability for verification summary and communication to stakeholders

Next: HWE.5 (Production release)