5.2: SUP.2 Verification


Process Definition

Purpose

SUP.2 Purpose: To confirm that each work product of a process or project properly reflects the specified requirements.

ASPICE 4.0 Note: In ASPICE PAM v4.0, the standalone SUP.2 Verification process has been removed as a separate process. Verification activities are now embedded directly within each engineering process (SYS.1-SYS.5, SWE.1-SWE.6). This chapter retains SUP.2 as a cross-cutting concern because: (a) many organizations still operate under ASPICE 3.1 where SUP.2 exists, (b) verification strategy and governance remain project-level activities regardless of the process model version, and (c) the principles of independent verification apply across all engineering processes.

Outcomes

Outcome Description
O1 Verification strategy is developed
O2 Verification criteria are defined
O3 Verification activities are performed
O4 Defects are identified and recorded
O5 Results are documented

Base Practices (ASPICE 3.1 Reference)

BP Base Practice AI Level AI Application HITL Required
BP1 Develop verification strategy L1 AI drafts strategy templates based on project parameters YES - Human approves strategy
BP2 Develop verification criteria L2 AI derives criteria from requirements attributes YES - Human validates criteria completeness
BP3 Conduct verification L2 AI performs pre-review analysis, runs static checks YES - Human makes pass/fail decisions
BP4 Determine actions for verification results L1 AI categorizes findings and suggests priorities YES - Human assigns corrective actions
BP5 Track verification findings to closure L2 AI monitors status and flags overdue items YES - Human confirms closure

Verification in ASPICE 4.0

Where Verification Now Lives

ASPICE PAM v4.0 dissolved SUP.2 as a standalone process and distributed verification responsibilities into each engineering process. The table below maps the former SUP.2 outcomes to their new homes in v4.0.

Former SUP.2 Activity ASPICE 4.0 Location Base Practice Description
Verification strategy SWE.1 BP3, SWE.2 BP4, SWE.3 BP4 Specify verification measures Each engineering process defines its own verification measures
Verification criteria SWE.4 BP1, SWE.5 BP1, SWE.6 BP1 Pass/fail criteria in verification measures Criteria embedded in verification measure specifications
Conduct verification SWE.4 BP3, SWE.5 BP3, SWE.6 BP3 Perform verification Execution within each process level
Record defects SWE.4 BP3, SWE.5 BP3, SWE.6 BP3 Record verification results Results captured per verification level
Traceability SWE.1 BP5, SWE.2 BP6, SWE.6 BP4 Ensure consistency and bidirectional traceability Traceability now explicit in each process

Practical Impact: Organizations migrating from ASPICE 3.1 to 4.0 must redistribute their SUP.2 verification plan content into the individual SYS/SWE process plans. The verification strategy document remains valuable as a cross-cutting governance artifact, even though assessors evaluate verification within each process.

System-Level Verification Mapping

Former SUP.2 Activity ASPICE 4.0 System Process Base Practice
System requirements verification SYS.1 BP5 Ensure consistency and establish bidirectional traceability
Architecture verification SYS.2 BP6 Ensure consistency and establish bidirectional traceability
Integration verification SYS.4 BP1-BP5 System integration and verification measures
System qualification SYS.5 BP1-BP5 System qualification verification measures

Migration Guidance

Recommendation: When migrating from ASPICE 3.1 to 4.0, do not simply delete the SUP.2 verification plan. Instead, decompose it into per-process verification sections and retain a lightweight project-level verification governance document that ensures consistency across process levels.

Migration Step Action AI Assistance
1. Inventory Catalog all SUP.2 artifacts and map to engineering processes L2 - AI scans documents and proposes mapping
2. Distribute Move verification criteria into SWE/SYS process plans L1 - AI generates templates for each process
3. Consolidate Create cross-cutting verification governance document L1 - AI drafts governance framework
4. Validate Verify no verification gaps after redistribution L2 - AI checks coverage completeness
5. Baseline Establish new configuration baseline L3 - Automated CM operations
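As a sketch of step 1's AI-assisted inventory mapping, a simple keyword classifier can propose candidate target processes for each legacy SUP.2 artifact section. The keyword lists and function name below are illustrative assumptions, not output of any specific tool:

```python
# Illustrative sketch: map legacy SUP.2 artifact content to ASPICE 4.0
# engineering processes by keyword matching. Keywords and process
# assignments are assumptions to be tailored per organization.
PROCESS_KEYWORDS = {
    "SWE.4": ["unit verification", "unit test", "code review"],
    "SWE.5": ["integration", "interface test"],
    "SWE.6": ["qualification", "software test"],
    "SYS.4": ["system integration"],
    "SYS.5": ["system qualification"],
}

def propose_mapping(artifact_text: str) -> list[str]:
    """Return candidate ASPICE 4.0 processes for a SUP.2 artifact section."""
    text = artifact_text.lower()
    return [proc for proc, keywords in PROCESS_KEYWORDS.items()
            if any(kw in text for kw in keywords)]
```

A real deployment would replace the keyword match with semantic similarity, but the human mapping review in step 1 remains mandatory either way.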

Verification Strategy

Strategy Components

A verification strategy defines the overall approach to confirming that work products satisfy their requirements. AI assists at each stage of strategy development.

Strategy Element Description AI Contribution
Scope Which work products require verification AI analyzes project structure to identify all verifiable artifacts
Methods Inspection, walkthrough, review, static analysis AI recommends methods based on artifact type and ASIL level
Roles Moderator, reviewer, author, recorder AI suggests reviewer allocation based on expertise mapping
Schedule When verification occurs relative to milestones AI aligns verification gates with V-model phases
Entry/Exit Criteria Conditions to start and complete verification AI derives criteria from requirements attributes
Tools Static analyzers, review platforms, traceability tools AI recommends tool chains based on project constraints
Metrics Finding density, review rate, coverage AI defines baseline metrics from historical data

AI-Assisted Strategy Generation

Note: The strategy template below is illustrative. Actual strategy documents must be tailored to project-specific ASIL levels, domain standards, and organizational processes.

# Verification Strategy (AI-generated draft, requires human review)
verification_strategy:
  project: BCM-DoorLock
  aspice_version: "4.0"
  safety_level: ASIL-B

  scope:
    - artifact: "Software Requirements Specification"
      method: Technical Review + AI Pre-Analysis
      frequency: Per baseline
      ai_level: L2
    - artifact: "Software Architecture Document"
      method: Inspection + Static Analysis
      frequency: Per release
      ai_level: L2
    - artifact: "Detailed Design"
      method: Peer Review + AI Pre-Analysis
      frequency: Per change
      ai_level: L2
    - artifact: "Source Code"
      method: Peer Review + Static Analysis + AI Review
      frequency: Per merge request
      ai_level: L2-L3
    - artifact: "Test Specifications"
      method: Technical Review
      frequency: Per baseline
      ai_level: L1

  method_selection_criteria:
    asil_a: "Peer review with AI pre-analysis"
    asil_b: "Technical review with AI pre-analysis and static analysis"
    asil_c: "Inspection with AI pre-analysis, static analysis, formal methods"
    asil_d: "Inspection with AI pre-analysis, static analysis, formal methods, independent review"

  review_capacity:
    max_review_size: "200 LOC or 30 requirements per session"
    preparation_time: "1 hour minimum per reviewer"
    ai_preparation: "Automated, results available before human preparation"
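The review_capacity limits above translate directly into session planning. A minimal sketch, with the default session size taken from the strategy and a hypothetical function name:

```python
import math

# Illustrative sketch: split an artifact into review sessions that
# respect the max_review_size limit from the verification strategy.
def plan_sessions(item_count: int, max_items_per_session: int = 30) -> int:
    """Number of review sessions needed for an artifact."""
    return math.ceil(item_count / max_items_per_session)
```

For the 48-requirement SRS above, this yields two sessions at 30 requirements each.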

Verification Methods

Method Selection Matrix

Method Formality AI Automation Best For
Inspection High L1 Safety-critical, complex
Walkthrough Medium L1 Early design, new team
Technical Review Medium L2 General verification
Peer Review Low L2 Code, daily work
Static Analysis N/A L3 Code, automated checks

Method-Artifact Suitability

Artifact Type Inspection Tech Review Peer Review Static Analysis AI Pre-Analysis
System Requirements Recommended Acceptable -- -- Recommended
SW Requirements Recommended Recommended Acceptable -- Recommended
Architecture Recommended Recommended -- Partial Recommended
Detailed Design Acceptable Recommended Recommended Partial Recommended
Source Code Acceptable Acceptable Recommended Recommended Recommended
Test Specifications Acceptable Recommended Recommended -- Recommended
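The ASIL-driven method selection from the strategy's method_selection_criteria can be sketched as a lookup table; the QM fallback below is an added assumption, not part of the strategy YAML:

```python
# Illustrative mapping mirroring method_selection_criteria in the
# strategy example above. The QM entry and fallback are assumptions.
METHODS_BY_ASIL = {
    "QM":     ["Peer Review", "AI Pre-Analysis"],
    "ASIL-A": ["Peer Review", "AI Pre-Analysis"],
    "ASIL-B": ["Technical Review", "AI Pre-Analysis", "Static Analysis"],
    "ASIL-C": ["Inspection", "AI Pre-Analysis", "Static Analysis",
               "Formal Methods"],
    "ASIL-D": ["Inspection", "AI Pre-Analysis", "Static Analysis",
               "Formal Methods", "Independent Review"],
}

def select_methods(asil: str) -> list[str]:
    """Return verification methods for an ASIL level (falls back to QM)."""
    return METHODS_BY_ASIL.get(asil, METHODS_BY_ASIL["QM"])
```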

AI-Assisted Review Framework

The diagram below illustrates the AI-assisted pre-review preparation workflow: AI pre-analyzes artifacts, flags potential issues, and prepares review packages before human reviewers begin their assessment.

[Diagram: Pre-Review Preparation workflow]

Review Automation

AI-Powered Review by Artifact Type

AI-powered review automation applies differently across artifact types. The following subsections describe the approach for each major category.

Requirements Review

AI Role: L2 -- AI performs pre-analysis and flags issues; humans validate findings and make acceptance decisions.

Check Category AI Capability Detection Examples
Ambiguity NLP analysis of vague terms "quickly," "appropriate," "sufficient," "as needed"
Completeness Cross-reference against templates and standards Missing error handling, missing timing constraints
Consistency Terminology and cross-reference validation Conflicting units, contradictory conditions, renamed signals
Testability Semantic analysis of verifiability Requirements without measurable criteria
Traceability Automated link verification Missing parent links, orphan requirements
Conformance Pattern matching against standards Missing ASIL tags, non-compliant requirement structure

# AI Requirements Review Output (illustrative)
requirements_review:
  document: SRS-BCM-v1.2
  total_requirements: 48
  ai_findings:
    ambiguity:
      - req: SWE-BCM-015
        term: "quickly"
        suggestion: "Replace with measurable criterion, e.g., 'within 10ms'"
      - req: SWE-BCM-033
        term: "appropriate level"
        suggestion: "Specify exact threshold value"
    completeness:
      - req: SWE-BCM-031
        gap: "No error handling specified for CAN timeout"
        suggestion: "Add timeout handling requirement with DTC specification"
    consistency:
      - req: SWE-BCM-008
        issue: "Uses 'actuator' but SWE-BCM-020 uses 'motor' for same component"
        suggestion: "Standardize to 'actuator' per glossary"
    testability:
      - req: SWE-BCM-041
        issue: "Requirement states 'system shall be reliable' without measurable criteria"
        suggestion: "Define MTBF target or failure rate threshold"
  summary:
    total_findings: 12
    by_severity: { critical: 0, major: 4, minor: 6, suggestion: 2 }
    human_review_required: true
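The ambiguity check in the table above can be sketched as a term scan. The vague-term list is an assumption; production systems use NLP models rather than fixed keyword lists:

```python
import re

# Illustrative ambiguity scan matching the "Ambiguity" check category.
# The vague-term list is an assumption, not an exhaustive catalog.
VAGUE_TERMS = ["quickly", "appropriate", "sufficient", "as needed", "reliable"]

def flag_ambiguity(req_id: str, text: str) -> list[dict]:
    """Flag vague terms that lack measurable criteria."""
    findings = []
    for term in VAGUE_TERMS:
        if re.search(r"\b" + re.escape(term) + r"\b", text, re.IGNORECASE):
            findings.append({
                "req": req_id,
                "term": term,
                "suggestion": f"Replace '{term}' with a measurable criterion",
            })
    return findings
```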

Design Review

AI Role: L2 -- AI checks architectural patterns, interface consistency, and compliance with design standards.

Check Category AI Capability Detection Examples
Pattern Compliance Match against approved architectural patterns Unauthorized direct HW access bypassing HAL
Interface Consistency Cross-check interface definitions Mismatched data types between caller and callee
Coupling Analysis Dependency graph analysis Circular dependencies, excessive fan-out
Resource Usage Static estimation of memory/CPU budget Stack depth violations, RAM overallocation
Safety Compliance ASIL decomposition and freedom from interference Missing partition boundaries, shared resource conflicts

Code Review

AI Role: L2-L3 -- AI automates static checks (L3) and performs semantic analysis (L2) requiring human validation.

Check Category AI Capability Automation Level
Coding Standard MISRA C:2012 compliance checking L3 (fully automated)
Security CWE/CVE pattern detection L3 (automated, human triages)
Logic Semantic analysis of control flow and data flow L2 (AI flags, human validates)
Performance Complexity metrics, resource usage estimation L2 (AI measures, human interprets)
Maintainability Complexity, duplication, naming conventions L2 (AI reports, human decides)

Review Record Template

Note: Participant names are illustrative; actual records use real project team members.

# Technical Review Record (illustrative example)
review:
  id: TR-SWE1-001
  type: Technical Review
  date: (review date)
  duration: 2 hours

  artifact:
    name: "SW Requirements Specification"
    id: SRS-BCM-v1.2
    version: 1.2
    author: (Author Name)
    size: 48 requirements

  participants:
    moderator: (Moderator) (QA Lead)
    author: (Author) (SW Lead)
    reviewers:
      - (Reviewer 1) (System Architect)
      - (Reviewer 2) (Safety Engineer)
      - (Reviewer 3) (Test Lead)

  preparation:
    ai_analysis: completed
    ai_findings: 12 items
    prep_time_avg: 1.5 hours

  findings:
    major:
      - id: F-001
        requirement: SWE-BCM-015
        description: "Timing requirement 'quickly' is ambiguous"
        category: ambiguity
        source: AI + Human

      - id: F-002
        requirement: SWE-BCM-031
        description: "Error handling not specified for timeout"
        category: completeness
        source: Human

    minor:
      - id: F-003
        requirement: SWE-BCM-008
        description: "Inconsistent terminology: 'actuator' vs 'motor'"
        category: consistency
        source: AI

      - id: F-004
        requirement: SWE-BCM-022
        description: "Reference to deleted requirement SWE-BCM-050"
        category: consistency
        source: AI

    total_findings: 8
    major: 2
    minor: 6

  metrics:
    preparation_rate: 32 req/hour
    review_rate: 24 req/hour
    finding_density: 0.17 findings/requirement
    ai_contribution: 50% (4 of 8 findings)

  outcome: CONDITIONAL_ACCEPT
  rework_required: true
  rework_deadline: (rework deadline)
  verification_needed: true

Code Review Automation

AI-Powered Code Review

Note: Python code examples are illustrative; helper functions (run_static_analysis, check_misra_compliance, etc.) and the ai_service client require project-specific implementation.

"""
AI-assisted code review for embedded software
"""

def analyze_code_change(diff: str, context: dict) -> dict:
    """
    Perform AI-assisted code review analysis.

    Args:
        diff: Git diff of changes
        context: Project context (standards, patterns)

    Returns:
        Review findings and recommendations
    """

    findings = []

    # 1. Static analysis checks (L3 - fully automated)
    static_findings = run_static_analysis(diff)
    findings.extend(static_findings)

    # 2. MISRA compliance (L3 - automated)
    misra_findings = check_misra_compliance(diff)
    findings.extend(misra_findings)

    # 3. AI semantic analysis (L2 - AI with human review)
    ai_findings = ai_semantic_review(diff, context)
    findings.extend(ai_findings)

    # 4. Security analysis (L2)
    security_findings = analyze_security(diff)
    findings.extend(security_findings)

    # Generate review summary
    summary = {
        'total_findings': len(findings),
        'by_severity': categorize_by_severity(findings),
        'by_category': categorize_by_type(findings),
        'recommendation': get_recommendation(findings),
        'human_review_items': get_human_review_items(findings)
    }

    return {
        'findings': findings,
        'summary': summary
    }


def ai_semantic_review(diff: str, context: dict) -> list:
    """AI semantic analysis of code changes."""

    prompt = f"""
    Review this embedded C code change for:
    1. Logic correctness
    2. Safety implications
    3. Performance concerns
    4. Maintainability issues
    5. Alignment with architecture patterns

    Standards: {context.get('standards', ['MISRA C:2012'])}
    Project: {context.get('project_type', 'Automotive BCM')}

    Code diff:
    ```
    {diff}
    ```

    Provide findings with:
    - Location (file:line)
    - Category
    - Severity (critical/major/minor)
    - Description
    - Recommendation
    """

    # Call AI service
    response = ai_service.analyze(prompt)

    return parse_ai_findings(response)

Static Analysis with AI

Beyond Traditional SAST

Traditional static analysis tools (SAST) apply fixed rule sets to detect coding standard violations, common defects, and security vulnerabilities. AI-enhanced static analysis goes further by learning project-specific patterns and detecting semantic issues that rule-based tools miss.

Capability Traditional SAST AI-Enhanced Analysis
Rule Coverage Predefined rules (e.g., MISRA C, CWE) Predefined rules + learned project patterns
False Positive Rate Often high (30-70%) Reduced by contextual understanding (estimated 15-40%)
Semantic Understanding Limited to syntactic patterns Understands intent and detects logic errors
Cross-File Analysis Limited by tool boundaries AI correlates findings across modules
Historical Learning No learning from past findings Learns from resolved findings to prioritize new ones
Custom Patterns Requires manual rule authoring AI suggests custom rules from codebase analysis

AI-Enhanced Analysis Workflow

Note: Tool names are illustrative. Actual tool selection depends on project constraints and vendor evaluation.

Phase Activity Tool Example AI Level
1. Rule-Based Scan Run MISRA/CWE checks Klocwork, Polyspace, cppcheck L3 (automated)
2. AI Triage Classify findings as true/false positive AI classifier trained on project history L2 (AI suggests, human confirms)
3. Semantic Scan Detect logic errors, race conditions, resource leaks AI code analysis agent L2 (AI flags, human validates)
4. Cross-Module Check Verify interface contracts across compilation units AI-assisted dependency analysis L2 (AI identifies, human reviews)
5. Trend Analysis Compare findings against previous baselines AI metrics dashboard L2 (AI reports, human interprets)

False Positive Reduction

Key Benefit: AI-assisted triage can reduce the human effort spent on false positives by learning from historical disposition decisions. Over successive releases, the AI model improves its precision in flagging genuine defects.

# AI False Positive Triage (illustrative)
triage_result:
  tool: Klocwork
  total_findings: 342
  ai_triage:
    true_positive_likely: 87
    false_positive_likely: 198
    uncertain: 57
  human_review_queue:
    priority_1: 87   # True positives - review and fix
    priority_2: 57   # Uncertain - human classification needed
    priority_3: 198  # Likely false positives - spot-check sample
  estimated_effort_saved: "~60% reduction in triage time"
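The triage queueing above can be sketched as a threshold partition over AI-estimated true-positive probabilities. The 0.7/0.3 thresholds and the field names are assumptions that must be calibrated per project:

```python
# Illustrative sketch of AI triage queueing: partition static-analysis
# findings into human review queues by estimated true-positive
# probability. Thresholds are assumptions, to be calibrated.
def build_triage_queues(findings: list[dict]) -> dict[str, list[dict]]:
    """Partition findings by 'tp_probability' into priority queues."""
    queues = {"priority_1": [], "priority_2": [], "priority_3": []}
    for f in findings:
        p = f["tp_probability"]
        if p >= 0.7:
            queues["priority_1"].append(f)   # likely true positive: review and fix
        elif p >= 0.3:
            queues["priority_2"].append(f)   # uncertain: human classification
        else:
            queues["priority_3"].append(f)   # likely false positive: spot-check
    return queues
```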

Verification Criteria

AI-Generated Verification Criteria from Requirements

Verification criteria define the measurable conditions under which a work product is considered to satisfy its requirements. AI assists by extracting testable attributes from natural-language requirements and proposing criteria that humans refine.

Requirement Attribute Derived Criterion Type Example
Timing constraint Measurement against threshold "Response time measured <= 10ms in 100 consecutive trials"
Value range Boundary value verification "Output voltage within 4.75V to 5.25V under all specified load conditions"
State transition Sequence verification "State machine transitions from INIT to READY within 50ms after power-on"
Error handling Fault injection test "DTC set within 100ms of fault detection; recovery within 500ms"
Interface protocol Conformance test "CAN message ID 0x200 transmitted at 10ms cycle time +/- 1ms"
Resource constraint Static measurement "Stack usage does not exceed 2048 bytes per task"

Criteria Derivation Process

Step Activity AI Contribution Human Responsibility
1 Parse requirements for measurable attributes AI extracts quantitative and qualitative attributes using NLP Validate extraction completeness
2 Propose verification method per attribute AI suggests method (review, analysis, test, demonstration) Approve or override method selection
3 Draft pass/fail criteria AI generates measurable criteria with tolerances Validate technical correctness of thresholds
4 Map criteria to verification levels AI assigns criteria to SWE.4/SWE.5/SWE.6 levels Confirm level assignment
5 Check criteria coverage AI verifies every requirement has at least one criterion Approve coverage or add missing criteria

ASPICE Compliance: In ASPICE 4.0, verification criteria are part of the "Verification Measure" work product (08-60). Each verification measure must include explicit pass/fail criteria traceable to the requirement it verifies.
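Step 3 of the derivation process can be sketched for timing constraints. The regex pattern and default trial count below are illustrative assumptions, not a normative derivation rule:

```python
import re

# Illustrative sketch of step 3: derive a measurable pass/fail criterion
# from a timing-constraint requirement. Pattern and trial count are
# assumptions; humans validate the resulting thresholds (step 3, right column).
def derive_timing_criterion(req_id: str, text: str, trials: int = 100):
    """Extract 'within <N>ms' and propose a measurable criterion."""
    match = re.search(r"within\s+(\d+)\s*ms", text, re.IGNORECASE)
    if not match:
        return None  # no quantitative attribute found; human must define one
    limit = int(match.group(1))
    return {
        "requirement": req_id,
        "type": "timing",
        "criterion": f"Response time measured <= {limit}ms "
                     f"in {trials} consecutive trials",
        "verification_method": "test",
    }
```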


Traceability Verification

Automated Bidirectional Traceability Checking

Bidirectional traceability is a core ASPICE requirement across all engineering processes. AI automates the detection of traceability gaps, orphan items, and inconsistencies.

Traceability Link Direction AI Check
System Req --> SW Req Forward Every system requirement allocated to software has at least one SW requirement
SW Req --> System Req Backward Every SW requirement traces to a parent system requirement
SW Req --> Architecture Element Forward Every SW requirement is allocated to an architecture component
Architecture Element --> Detailed Design Forward Every architecture element has a corresponding design module
SW Req --> Verification Measure Forward Every SW requirement has at least one verification measure
Verification Measure --> SW Req Backward Every verification measure traces to at least one SW requirement
Verification Result --> Verification Measure Backward Every verification result links to the measure that produced it

Gap Detection Report

# AI Traceability Verification Report (illustrative)
traceability_check:
  project: BCM-DoorLock
  date: (analysis date)
  tool: AI Traceability Agent

  coverage:
    system_to_sw: 47/48 (97.9%)
    sw_to_architecture: 45/48 (93.8%)
    sw_to_verification: 43/48 (89.6%)
    verification_to_results: 41/43 (95.3%)

  gaps:
    missing_forward:
      - from: SWE-BCM-046
        expected_link: Architecture component
        severity: HIGH
        suggestion: "Allocate to Door_Lock_Service component based on content analysis"
      - from: SWE-BCM-047
        expected_link: Architecture component
        severity: HIGH
        suggestion: "Allocate to Safety_Monitor component based on content analysis"
      - from: SWE-BCM-048
        expected_link: Architecture component
        severity: MEDIUM
        suggestion: "Allocate to Diagnostics component based on content analysis"

    missing_backward:
      - from: SWE-VM-BCM-022
        expected_link: SW Requirement
        severity: HIGH
        suggestion: "Orphan verification measure - link to SWE-BCM-105 or remove"

    orphan_requirements:
      - id: SWE-BCM-050
        status: "Deleted but still referenced by SWE-BCM-022"
        action: "Remove reference or restore requirement"

  recommendation: "5 traceability gaps require human resolution before baseline"

Consistency Checks

Check Description Automated
Referential Integrity All cross-references point to existing items L3 (fully automated)
Version Consistency Linked items reference correct versions L3 (fully automated)
Status Alignment Approved requirements do not link to draft designs L2 (AI flags, human validates)
Completeness Every item at level N links to at least one item at level N+1 and N-1 L3 (fully automated)
Semantic Consistency Linked items are about the same topic L2 (AI NLP analysis, human confirms)
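The referential integrity check (first row above) can be sketched as a scan over an item graph; the dictionary shape with 'refs' and 'status' fields is an assumption:

```python
# Illustrative L3 referential-integrity check: verify every
# cross-reference points to an existing, non-deleted item.
def check_referential_integrity(items: dict[str, dict]) -> list[dict]:
    """Return findings for references to missing or deleted items."""
    findings = []
    for item_id, item in items.items():
        for ref in item.get("refs", []):
            target = items.get(ref)
            if target is None or target.get("status") == "deleted":
                findings.append({
                    "item": item_id,
                    "broken_ref": ref,
                    "action": "Remove reference or restore target",
                })
    return findings
```

This is the mechanism behind the orphan finding in the gap report above, where a deleted requirement was still referenced.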

Independent Verification

Maintaining Independence When AI Assists

Independence is a fundamental principle of verification: the verifier must not be the author of the work product. When AI is introduced into the verification process, new independence considerations arise.

Independence Concern Risk Mitigation
AI trained on author's code AI may inherit author's blind spots if trained on the same codebase Use general-purpose AI models, not project-fine-tuned models for verification
Same AI for creation and verification No independence if the same AI instance wrote and reviews the code Ensure different AI configurations or models for generation vs. verification
AI replacing human reviewers Loss of human domain judgment and diverse perspectives AI supplements but does not replace human reviewers
Reviewer over-reliance on AI Reviewers may rubber-stamp AI findings without critical thought Require reviewers to document their own findings before seeing AI results
AI bias in finding prioritization AI may consistently under-prioritize certain defect categories Periodically audit AI finding distributions against human-only baselines

Independence Levels for AI-Assisted Verification

Reference: ISO 26262 Part 2, Table 3 defines independence levels for verification activities.

ASIL Level Required Independence AI Integration Approach
QM Same team, different person AI pre-analysis + peer review
ASIL A Same team, different person AI pre-analysis + peer review with documented rationale
ASIL B Different team or independent person AI pre-analysis + review by independent team member; AI findings do not substitute for human judgment
ASIL C Different department or external AI pre-analysis available to the independent reviewer; the reviewer must be independent of the development team, though AI tools may be shared
ASIL D Different department or external AI pre-analysis available to independent reviewer; additional independent assessment of AI tool effectiveness required

HITL Protocol for Verification Decisions

Human-in-the-Loop Requirements

Principle: AI assists verification execution; humans own verification decisions. No verification outcome may be approved without explicit human sign-off.

Decision Point AI Role Human Role Escalation Trigger
Review initiation AI determines readiness (entry criteria check) Human approves review start Entry criteria not met
Finding classification AI proposes severity and category Human confirms or overrides classification Disagreement between AI and reviewer
Finding disposition AI suggests corrective action Human decides disposition (fix, defer, reject) Safety-critical finding
Pass/fail decision AI summarizes results against criteria Human makes final pass/fail decision Any failed criterion
Rework verification AI re-checks reworked items Human confirms rework adequacy Rework introduces new findings
Review closure AI verifies all actions complete Human signs off on closure Open actions remaining

HITL Verification Workflow

# HITL Verification Workflow (illustrative)
hitl_workflow:
  phase_1_preparation:
    ai_actions:
      - Run static analysis on artifact
      - Perform NLP analysis for ambiguity/completeness
      - Check traceability links
      - Generate pre-review findings report
    human_actions:
      - Review AI findings report
      - Prepare own review notes independently
      - Confirm review readiness

  phase_2_review:
    ai_actions:
      - Present findings in structured format
      - Provide cross-references and evidence
      - Record findings in real-time (if using AI-assisted recording)
    human_actions:
      - Discuss each finding
      - Add findings not detected by AI
      - Classify and prioritize all findings
      - Decide outcome (accept, conditional accept, reject)

  phase_3_follow_up:
    ai_actions:
      - Track rework items to closure
      - Re-verify corrected items
      - Generate closure report
    human_actions:
      - Validate rework quality
      - Confirm all actions resolved
      - Sign off on verification closure

  accountability:
    finding_decisions: "Human only"
    pass_fail_verdict: "Human only"
    corrective_actions: "Human assigns, AI tracks"
    closure_approval: "Human only"

Audit Evidence: For ASPICE assessments, the review record must clearly document which findings originated from AI analysis and which from human reviewers. The disposition decision must always be attributed to a named human participant.


Tool Integration

Verification Tool Landscape

Tool Category AI Features ASPICE Fit Typical Use
Klocwork Static analysis (SAST) AI-assisted triage, defect prediction SUP.2 / SWE.4 C/C++ code analysis, MISRA compliance
Polyspace Static analysis (formal) Abstract interpretation, runtime error proving SUP.2 / SWE.4 Proving absence of runtime errors
SonarQube Static analysis AI rules, quality gate automation SUP.2 / SWE.4 Multi-language code quality
CodeRabbit AI code review Full AI-powered review SUP.2 Automated code review in CI/CD
Gerrit Code review platform None (extensible via plugins) SUP.2 Peer review workflow
Polarion ALM Limited AI SUP.2 / Full lifecycle Traceability, review management
codebeamer ALM AI roadmap SUP.2 / Full lifecycle Requirements and review management
DOORS Next Requirements management Limited AI SUP.2 / SWE.1 Requirements traceability

Integration Architecture

Note: The integration pattern below is illustrative. Actual integration depends on the organization's tool landscape and CI/CD infrastructure.

# Verification Tool Integration (illustrative CI/CD pipeline)
verification_pipeline:
  trigger: merge_request

  stage_1_automated:
    tools:
      - name: cppcheck
        type: static_analysis
        config: "--enable=all --std=c11 --addon=misra.json"
        ai_level: L3
      - name: Klocwork
        type: sast
        config: "project-specific analysis profile"
        ai_level: L3
      - name: Polyspace
        type: formal_analysis
        config: "Bug Finder + Code Prover"
        ai_level: L3
    output: static_analysis_report.json

  stage_2_ai_analysis:
    tools:
      - name: AI Review Agent
        type: semantic_analysis
        config: "embedded C review profile"
        ai_level: L2
      - name: AI Triage Agent
        type: finding_classification
        config: "trained on project history"
        ai_level: L2
    output: ai_review_report.md

  stage_3_human_review:
    tools:
      - name: Gerrit / GitLab MR
        type: review_platform
    input:
      - static_analysis_report.json
      - ai_review_report.md
    human_actions:
      - Review AI findings
      - Add human observations
      - Make pass/fail decision

  stage_4_traceability:
    tools:
      - name: AI Traceability Agent
        type: traceability_verification
        ai_level: L2
    output: traceability_report.json

AI Verification Agent Configuration

Note: Agent configuration is illustrative. Adapt to your AI infrastructure (cloud-hosted LLM, on-premises model, etc.).

# AI Verification Agent Configuration
ai_verification_agent:
  role: "Verification Assistant"
  capabilities:
    - pre_review_analysis
    - static_analysis_triage
    - traceability_verification
    - review_report_generation

  rules:
    - "Never make pass/fail decisions autonomously"
    - "Always flag safety-critical findings for human review"
    - "Clearly label all findings as AI-generated"
    - "Do not suppress findings without human approval"
    - "Maintain audit trail of all AI actions"

  integration:
    ci_cd: "GitLab CI / GitHub Actions"
    review_platform: "Gerrit / GitLab MR"
    alm: "Polarion / codebeamer"
    static_analysis: "Klocwork / Polyspace / SonarQube"

  reporting:
    format: "YAML + Markdown"
    traceability: "Link findings to requirements and design elements"
    metrics: "Finding density, AI contribution rate, false positive rate"

Verification Metrics

The diagram below presents a verification metrics report, summarizing finding density, review coverage, and AI contribution rates to help teams assess review effectiveness.

[Diagram: Verification Metrics Report]

Key Performance Indicators

Metric Formula Target AI Contribution
Finding Density Total findings / artifact size Varies by artifact type AI increases detection rate by 30-50% (estimated)
Review Rate Items reviewed / hour 20-40 requirements/hour AI pre-analysis reduces review time
AI Contribution Rate AI-originated findings / total findings 30-60% Direct measure of AI effectiveness
False Positive Rate False positives / total AI findings < 30% Monitor and retrain as needed
Rework Rate Items requiring rework / total items < 15% AI catches issues earlier, reducing rework
Verification Coverage Verified items / total items 100% AI checks for coverage gaps
Mean Time to Resolution Avg days from finding to closure < 5 days for major, < 10 for minor AI tracks and escalates overdue items
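Several of these KPIs can be computed directly from a finding list. A minimal sketch; the 'source' and 'false_positive' field names are assumptions:

```python
# Illustrative KPI computation from review findings: finding density,
# AI contribution rate, and AI false positive rate (table above).
def review_metrics(findings: list[dict], artifact_size: int) -> dict:
    """Compute finding density, AI contribution rate, and FP rate."""
    total = len(findings)
    ai = [f for f in findings if f.get("source") == "AI"]
    fp = [f for f in ai if f.get("false_positive")]
    return {
        "finding_density": round(total / artifact_size, 2) if artifact_size else 0.0,
        "ai_contribution_rate": round(len(ai) / total, 2) if total else 0.0,
        "false_positive_rate": round(len(fp) / len(ai), 2) if ai else 0.0,
    }
```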

Work Products

Detailed Work Product Table

WP ID Work Product Outcomes Supported AI Role Description
08-12 Verification Plan O1 L1 - AI drafts strategy templates Documents verification strategy, methods, schedule, entry/exit criteria
08-60 Verification Measure O2 L2 - AI derives criteria from requirements Specifies verification measures with pass/fail criteria (ASPICE 4.0 term)
13-04 Review Record O3, O4 L2 - AI assists finding documentation Records review participants, findings, disposition, and outcome
13-05 Verification Report O5 L2 - AI generates summary and analysis Summarizes verification results, metrics, and recommendations
13-51 Consistency Evidence O3 L2 - AI generates traceability matrices Documents bidirectional traceability between artifacts
08-27 Problem Report O4 L2 - AI classifies and routes findings Records defects found during verification for resolution tracking

Work Product Quality Criteria

Work Product Quality Criterion AI Check
Verification Plan Covers all engineering processes and artifact types L2 - AI validates coverage against project scope
Verification Measure Every requirement has at least one verification measure L3 - Automated coverage check
Review Record All mandatory fields populated; findings classified L2 - AI validates record completeness
Verification Report Metrics calculated; pass/fail clearly stated L2 - AI generates metrics and flags anomalies
Consistency Evidence No orphan links; all items traced L3 - Automated traceability check
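The automated coverage check for verification measures (second row above) reduces to a set difference. A minimal sketch, assuming traces are held as sets of requirement IDs per measure:

```python
# Illustrative L3 coverage check: every requirement must have at least
# one verification measure tracing to it (Verification Measure row above).
def uncovered_requirements(requirement_ids: set[str],
                           measure_traces: dict[str, set[str]]) -> set[str]:
    """Return requirements with no verification measure."""
    covered = set().union(*measure_traces.values()) if measure_traces else set()
    return requirement_ids - covered
```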

Implementation Checklist

Verification Process Setup

Step Activity Responsible AI Support Status
1 Define verification strategy aligned with project ASIL level Verification Lead L1 - Template generation [ ]
2 Select and configure static analysis tools Tool Engineer L1 - Tool recommendation [ ]
3 Configure AI pre-review analysis pipeline DevOps / AI Engineer L2 - Pipeline setup [ ]
4 Define verification criteria for each requirement type Verification Lead L2 - Criteria derivation [ ]
5 Establish review record templates QA Lead L1 - Template generation [ ]
6 Train team on AI-assisted review workflow Project Manager L1 - Training material [ ]
7 Configure traceability verification automation Tool Engineer L2 - Traceability agent [ ]
8 Define metrics collection and reporting QA Lead L2 - Dashboard setup [ ]
9 Conduct pilot review with AI assistance Verification Lead L2 - Full pipeline [ ]
10 Calibrate AI false positive thresholds Verification Lead L2 - Threshold tuning [ ]
11 Document HITL protocol for verification decisions QA Lead L1 - Protocol template [ ]
12 Establish independence requirements per ASIL level Safety Manager L1 - Guidance reference [ ]

ASPICE 4.0 Migration Checklist

Step Activity Responsible Status
1 Inventory existing SUP.2 verification plan content QA Lead [ ]
2 Map SUP.2 activities to SWE/SYS base practices Process Engineer [ ]
3 Redistribute verification criteria into per-process plans Verification Lead [ ]
4 Create project-level verification governance document QA Lead [ ]
5 Update traceability to reflect new process structure Tool Engineer [ ]
6 Validate no verification gaps after migration QA Lead + AI [ ]
7 Update assessment evidence mapping Process Engineer [ ]
8 Brief assessors on new verification structure Project Manager [ ]

Summary

SUP.2 Verification:

  • AI Level: L2 (AI pre-analysis, human decision)
  • Primary AI Value: Pre-review analysis, finding detection, traceability verification, static analysis triage
  • Human Essential: Technical judgment, design decisions, pass/fail verdicts, finding disposition
  • Key Outputs: Review records, verification reports, traceability evidence
  • Efficiency Gain: ~35% time reduction with AI assistance (estimated; calibrate based on project data)
  • ASPICE 4.0: SUP.2 removed as standalone process; verification now embedded in SYS/SWE processes
  • Independence: AI supplements but does not replace human reviewer independence requirements
  • HITL: All verification decisions require explicit human sign-off with audit trail