2.0: Process Execution Instructions

Purpose and Scope

For AI Agents Reading This Chapter

Audience: AI agents (Requirements Agent, Architecture Agent, Implementation Agent, Verification Agent, Review Agent) executing ASPICE tasks

Purpose: Provide step-by-step execution instructions for each ASPICE process, including:

  1. Input work products (what you need before starting)
  2. Execution steps (how to perform the task)
  3. Output work products (what you must generate)
  4. Quality criteria (how to verify correctness)
  5. Escalation triggers (when to ask human for help)

How to Use This Guide:

  • Read the section corresponding to your agent role (30.01-30.05)
  • Follow the execution steps sequentially (do not skip steps)
  • Check quality criteria before submitting output
  • Escalate proactively when encountering triggers (safety decisions, ambiguities, conflicts)

ASPICE Process Coverage

Agent-to-Process Mapping

The following diagram maps each specialized AI agent to its corresponding ASPICE process, showing which base practices each agent supports and the human review gates between them.


Note: The Documentation Agent (SUP.1) is not included here; it is covered in Chapter 29.01 and is primarily automated.

Process Dependency Awareness: Agents should understand upstream and downstream dependencies: the Requirements Agent's outputs feed the Architecture Agent, whose outputs in turn feed the Implementation Agent. When escalating an issue, consider its impact on dependent processes and notify the relevant downstream agents of delays or changes.


Standard Execution Pattern

Universal Workflow for All Agents

Every AI agent task follows this pattern:

Step 1: Validate Inputs
├─ Check that all required input work products exist
├─ Verify input quality (no placeholder text, approved by human)
└─ Escalate if inputs missing or incomplete

Step 2: Execute Task
├─ Follow agent-specific instructions (Sections 30.01-30.05)
├─ Generate output work products
└─ Apply quality checks (ASPICE criteria, coding standards)

Step 3: Self-Review
├─ Run automated checks (MISRA, tests, linters)
├─ Verify traceability (requirements → design → code → tests)
└─ Check completeness (all required fields, no TODOs)

Step 4: Generate Review Package
├─ Create pull request with outputs
├─ Attach quality reports (test results, coverage, MISRA)
└─ Add human-readable summary (what changed, why)

Step 5: Submit for Human Review (HITL)
├─ Assign to appropriate human reviewer (architect, engineer, tester)
├─ Tag escalation items (safety decisions, ambiguities)
└─ Wait for approval before proceeding

Critical Rule: Never skip Step 5 — human review is mandatory for ASPICE compliance
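The five-step pattern above can be sketched as a single orchestration function. This is an illustrative skeleton only; the `WorkProduct` type and the injected `execute`, `check`, and `review` callables are hypothetical, not part of any prescribed agent API.

```python
# Hypothetical sketch of the universal five-step workflow.
# All names (WorkProduct, run_agent_task, ...) are illustrative.
from dataclasses import dataclass

@dataclass
class WorkProduct:
    name: str
    approved: bool = False  # set by the human reviewer of the upstream process

def run_agent_task(inputs, execute, check, review):
    # Step 1: Validate inputs; escalate if anything is missing or unapproved
    unapproved = [wp.name for wp in inputs if not wp.approved]
    if unapproved:
        return ("ESCALATE", f"Inputs not approved: {unapproved}")
    # Step 2: Execute the agent-specific task (Sections 30.01-30.05)
    outputs = execute(inputs)
    # Step 3: Self-review with automated checks; escalate on failure
    issues = check(outputs)
    if issues:
        return ("ESCALATE", f"Self-review failed: {issues}")
    # Steps 4-5: Build the review package and hand it to a human (HITL).
    # Human review is never skipped.
    return ("HUMAN_REVIEW", review(outputs))
```

The structural point the sketch makes explicit: every exit path ends either in an escalation or in human review; there is no branch that publishes output directly.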


Input Work Product Quality Criteria

What AI Agents Should Expect

Inputs to AI agents must meet these criteria (if not, escalate immediately):

1. System Requirements (Input to SWE.1 Requirements Agent)

  • [PASS] Good Input: "ECU shall detect object distance <5m within 50ms (ASIL-B)"
    • Quantified (5m, 50ms), clear, testable
  • [FAIL] Bad Input: "ECU shall respond quickly to obstacles"
    • Vague ("quickly"), not testable, no safety classification

2. Software Requirements (Input to SWE.2 Architecture Agent)

  • [PASS] Good Input: "[SWE-045] ACC module shall calculate safe distance using formula: d_safe = v² / (2 × a_max), where v = vehicle speed (m/s), a_max = 5 m/s²"
    • ID, formula, units, constraints
  • [FAIL] Bad Input: "[SWE-045] ACC calculates safe distance"
    • Missing formula, units, constraints

3. Software Architecture (Input to SWE.3 Implementation Agent)

  • [PASS] Good Input: UML class diagram showing ACC_Controller class with methods Calculate_Safe_Distance(float speed), return type, units
  • [FAIL] Bad Input: Hand-drawn sketch with no method signatures

4. Source Code (Input to SWE.4 Verification Agent)

  • [PASS] Good Input: C function with Doxygen header, @implements tag, compiles without errors
  • [FAIL] Bad Input: Code snippet with syntax errors, no documentation

Escalation Rule: If input quality is [FAIL] Bad, do not proceed → Generate clarification request → Assign to human requirements engineer
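A first-pass check for these criteria can be automated before an agent starts work. The sketch below flags vague terms and missing units; the term list and unit pattern are illustrative assumptions and would need project-specific tuning (it also does not check for the safety classification).

```python
# Hypothetical input-quality pre-check; vocabulary and regex are illustrative.
import re

VAGUE_TERMS = {"quickly", "fast", "soon", "appropriate", "robust", "adequate"}
UNIT_PATTERN = re.compile(r"\d+(\.\d+)?\s*(ms|s|m|mm|km/h|m/s|kg|%)\b")

def check_requirement(text):
    """Return a list of quality issues; an empty list means the input passes."""
    issues = []
    words = {w.strip(".,()").lower() for w in text.split()}
    vague = words & VAGUE_TERMS
    if vague:
        issues.append("vague terms: " + ", ".join(sorted(vague)))
    if not UNIT_PATTERN.search(text):
        issues.append("no quantified value with units")
    return issues
```

Applied to the examples above, the good input passes and the bad input produces two issues (vague term, no quantified value).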


Output Work Product Standards

What AI Agents Must Generate

All AI-generated outputs must include:

1. Metadata Header (YAML front matter)

---
work_product: Software Requirements Specification
aspice_process: SWE.1
document_id: SRS-ACC-001
version: 1.2.0
author: AI Requirements Agent
reviewed_by: [TBD - Human Reviewer]
approved_by: [TBD - Project Manager]
date: 2025-12-17
status: Draft  # Draft | Review | Approved
---
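Completeness of this header is mechanically checkable. A minimal sketch using only the standard library (a real pipeline would more likely parse the header with a YAML library):

```python
# Sketch: find required metadata fields missing from '---'-delimited front matter.
REQUIRED_FIELDS = {"work_product", "aspice_process", "document_id", "version",
                   "author", "reviewed_by", "approved_by", "date", "status"}

def missing_metadata(text):
    """Return the set of required fields absent from the front matter."""
    lines = text.strip().splitlines()
    if not lines or lines[0] != "---" or "---" not in lines[1:]:
        return set(REQUIRED_FIELDS)  # header missing or unterminated
    header = lines[1:lines.index("---", 1)]
    present = {line.split(":", 1)[0].strip() for line in header if ":" in line}
    return REQUIRED_FIELDS - present
```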

2. Traceability Tags (in code, requirements, tests)

/**
 * @brief Calculate CRC-32 checksum
 * @implements [SWE-078] CRC Checksum Calculation
 * @safety_class ASIL-B
 * @verified_by [TC-SWE-078-1, TC-SWE-078-2]
 */
uint32_t CRC32_Calculate(const uint8_t* data, size_t length);

3. Human-Readable Summary (in pull request description)

## Summary
- Generated 3 new requirements based on system spec (Section 3.2)
- Added 2 interface definitions (CAN, Ethernet)
- Detected 1 ambiguity: "quick response" (escalated to @requirements_lead)

## AI Confidence
- High confidence: 2/3 requirements (standard radar interface)
- Medium confidence: 1/3 requirements (fail-safe behavior, needs safety engineer review)

## Human Action Required
- Review fail-safe requirement [SWE-089] (line 145)
- Approve CAN message IDs (conflict with body control module?)

4. Quality Metrics (automated checks)

Quality Report:
  MISRA Compliance: 0 violations (cppcheck, PC-lint)
  Test Coverage: 95% statement, 88% branch
  Build Status: [PASS] Success (gcc 11.3, warnings treated as errors)
  Traceability: 100% (all functions have @implements tags)
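The traceability figure in this report can come from a simple scan of Doxygen comment blocks. A sketch (the regex recognizes only `/** ... */` comments, so this is a rough approximation, not a C parser):

```python
# Sketch: count Doxygen blocks that carry both traceability tags.
import re

DOXYGEN_BLOCK = re.compile(r"/\*\*(.*?)\*/", re.DOTALL)

def traceability_stats(source):
    """Return (total_blocks, compliant_blocks) for a C source string,
    where a compliant block contains @implements and @verified_by."""
    blocks = DOXYGEN_BLOCK.findall(source)
    compliant = sum(1 for b in blocks
                    if "@implements" in b and "@verified_by" in b)
    return len(blocks), compliant
```

Run against the CRC example above, this reports one block, one compliant.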

Escalation Decision Tree

When to Escalate to Human

Start: AI agent encounters situation during task execution
    │
    ▼
Is this a safety-critical decision?
(e.g., ASIL-B/C function, fail-safe behavior, SIL 3)
    │
    ├─ Yes → [ESCALATION] ESCALATE to safety engineer
    │         (Example: "Should we use 1oo2 or 2oo3 voting?")
    │
    └─ No → Is the requirement/input ambiguous or incomplete?
            (e.g., missing units, vague terms like "quickly")
            │
            ├─ Yes → [ESCALATION] ESCALATE to requirements engineer
            │         (Example: "Requirement says 'fast', specify latency in ms")
            │
            └─ No → Can you satisfy all constraints simultaneously?
                    (e.g., latency ≤10ms AND complex algorithm)
                    │
                    ├─ No → [ESCALATION] ESCALATE to architect
                    │        (Example: "Kalman filter needs 50ms, conflicts with 10ms requirement")
                    │
                    └─ Yes → Is this task within your training/capability?
                             (e.g., standard CRC vs custom hardware driver)
                             │
                             ├─ No → [ESCALATION] ESCALATE to domain expert
                             │        (Example: "No datasheet for proprietary ASIC")
                             │
                             └─ Yes → [PASS] PROCEED with task execution
                                      (Generate output, submit for human review)

Escalation Template: Use the template from Chapter 29.04 Limitation Acknowledgment. If you have not read that chapter, the minimum required fields are: Agent name, Issue type (safety/ambiguity/conflict/out-of-scope), Context (function or requirement affected), Question (what decision is needed), AI Recommendation (your best suggestion with rationale), Required Action, Urgency, and Assignee.

Escalation Priority Classification:

  1. Critical/Blocker: Safety decisions, PR blockers (4-hour SLA)
  2. High: Ambiguities affecting multiple requirements (24-hour SLA)
  3. Medium: Tool selection, optimization questions (48-hour SLA)
  4. Low: Documentation improvements (next sprint)

Include the priority in the escalation request for appropriate routing.
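Taken together, the decision tree and the priority classification reduce to a short sequence of guard checks. The sketch below is illustrative; in particular, the priority attached to each escalation branch is an assumption of this sketch, not something the tree itself specifies.

```python
# Hypothetical routing of the escalation decision tree; field and role
# names are illustrative. SLA hours follow the classification above.
from dataclasses import dataclass

@dataclass
class Situation:
    safety_critical: bool         # ASIL-B/C function, fail-safe behavior, SIL 3
    ambiguous_input: bool         # missing units, vague terms like "quickly"
    constraints_satisfiable: bool # e.g., latency budget vs algorithm cost
    within_capability: bool       # standard CRC vs custom hardware driver

SLA_HOURS = {"critical": 4, "high": 24, "medium": 48}  # low: next sprint

def route(s):
    if s.safety_critical:
        return ("ESCALATE", "safety engineer", "critical")
    if s.ambiguous_input:
        return ("ESCALATE", "requirements engineer", "high")
    if not s.constraints_satisfiable:
        return ("ESCALATE", "architect", "high")
    if not s.within_capability:
        return ("ESCALATE", "domain expert", "medium")
    return ("PROCEED", None, None)
```

Note the ordering: safety dominates every other concern, and PROCEED is reached only when all four guards pass, matching the tree above.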


Agent-Specific Instructions

Detailed Guidance by Role

| Section | Agent Role           | ASPICE Process | Primary Tasks                                                    |
|---------|----------------------|----------------|------------------------------------------------------------------|
| 30.01   | Requirements Agent   | SWE.1          | Extract requirements, analyze ambiguities, generate traceability |
| 30.02   | Architecture Agent   | SWE.2          | Design architecture, create ADRs, generate UML diagrams          |
| 30.03   | Implementation Agent | SWE.3          | Generate C code, Doxygen comments, MISRA compliance              |
| 30.04   | Verification Agent   | SWE.4          | Generate unit tests, run coverage analysis, test reports         |
| 30.05   | Review Agent         | SUP.2          | Perform code reviews, MISRA checks, traceability verification    |

Read your agent-specific section for detailed instructions


Quality Gates

Minimum Acceptance Criteria for AI Outputs

Before submitting any work product for human review, verify:

Quality Gate Checklist (AI Self-Check):
─────────────────────────────────────────────────────────

[ ] Completeness
    [ ] All required sections present (no "TBD" or "TODO")
    [ ] Metadata header complete (author, date, version)
    [ ] Traceability tags present (@implements, @verified_by)

[ ] Correctness
    [ ] No placeholder values (e.g., "insert value here")
    [ ] Units specified for all numeric values (ms, m/s, kg)
    [ ] Formulas verified (checked against reference spec)

[ ] Compliance
    [ ] MISRA C:2012 (if code): 0 Required rule violations
    [ ] Coding style (if code): Follows project conventions
    [ ] ASPICE BP criteria: Met for target Capability Level (BP = Base Practice, the specific activity an ASPICE process requires; each process has numbered BPs, e.g., SWE.1 BP1–BP6)

[ ] Traceability
    [ ] Upstream links: All requirements trace to system spec
    [ ] Downstream links: All code traces to requirements
    [ ] Test links: All requirements have test cases

[ ] Reviewability
    [ ] Human-readable summary provided (what/why)
    [ ] AI confidence level stated (high/medium/low per item)
    [ ] Escalation items clearly tagged
    [ ] Pull request description complete

Verdict:
  [PASS] PASS: Submit for human review
  [FAIL] FAIL: Fix issues, re-run checklist

Fail Rate Target: Fewer than 10% of AI outputs should fail the quality gate (a higher rate indicates the AI needs retraining)
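The checklist's verdict logic is simple to automate. A sketch, assuming each of the five groups has already been reduced to a boolean by its own automated checks:

```python
# Sketch: aggregate the five checklist groups into a gate verdict.
CHECK_GROUPS = ("completeness", "correctness", "compliance",
                "traceability", "reviewability")

def quality_gate(results):
    """results maps group name -> bool; returns 'PASS' or a FAIL reason."""
    failed = [g for g in CHECK_GROUPS if not results.get(g, False)]
    return "PASS" if not failed else "FAIL: " + ", ".join(failed)
```

A group absent from `results` counts as failed, so an incomplete self-check can never slip through as a PASS.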


Summary

Process Execution Instructions Overview:

  1. Standard Pattern: Validate inputs → Execute task → Self-review → Generate review package → Submit for HITL
  2. Input Quality: Expect clear, quantified, approved inputs (escalate if ambiguous)
  3. Output Standards: Include metadata, traceability tags, summary, quality metrics
  4. Escalation: Proactively escalate safety decisions, ambiguities, conflicts, out-of-scope tasks
  5. Quality Gates: Self-check completeness, correctness, compliance, traceability before submission

Next: Agent-specific execution instructions (Requirements, Architecture, Implementation, Verification, Review)