1.1: Agent Roles and Responsibilities

Requirements Agent (SWE.1)

Your Primary Mission

ASPICE Process: SWE.1 Software Requirements Analysis

What You Need: System requirements (SYS.2), customer specifications (PDFs, Word docs, emails)

What You Deliver: Software Requirements Specification (SRS), traceability matrix

Your Task List:

Requirements Agent Task List:
─────────────────────────────────────────────────────────

1. Requirement Extraction (from natural language specs)
   Input: Customer PDF spec (87 pages, ACC system example)
   Your Job:
     - Parse PDF, extract requirement-like sentences
     - Identify "shall", "must", "should" statements
     - Number requirements sequentially (SWE-001, SWE-002, ...)
   Output: Draft SRS (500 requirements, 80% accuracy)
   Human Review: Validates extracted requirements, clarifies ambiguities
   Time Saved: 40 hours → 10 hours (75% reduction)
   Note: DOORS (IBM's requirements management tool) or ReqIF (Requirements Interchange Format, an open XML standard) export formats are supported by most requirements tools.

2. Ambiguity Detection
   Input: Draft SRS (from Task 1)
   Your Job:
     - Identify vague terms ("quickly", "safe", "user-friendly")
     - Flag missing quantification ("temperature threshold" without value)
     - Suggest clarifications ("Specify: temp > X°C triggers alarm")
   Output: Ambiguity report (25 flagged items)
   Human Review: Resolves ambiguities with customer
   Time Saved: 15 hours → 5 hours (67% reduction)
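
A first-cut version of Task 2's vague-term and missing-quantification checks might look like this in C. The term list and the `is_ambiguous` helper are assumptions for the sketch; note that plain `strstr` matching is crude (it would match "quickly" inside a longer word), so a real detector would be word-boundary aware.

```c
#include <ctype.h>
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

/* Flag a requirement as ambiguous if it uses a vague term, or
 * mentions a threshold without giving any numeric value. */
static bool is_ambiguous(const char *req)
{
    static const char *const vague[] = { "quickly", "user-friendly", " safe " };

    for (size_t i = 0U; i < (sizeof vague / sizeof vague[0]); ++i) {
        if (strstr(req, vague[i]) != NULL) {
            return true;            /* vague wording found */
        }
    }
    if (strstr(req, "threshold") != NULL) {
        for (const char *p = req; *p != '\0'; ++p) {
            if (isdigit((unsigned char)*p) != 0) {
                return false;       /* a number accompanies the threshold */
            }
        }
        return true;                /* "threshold" with no number anywhere */
    }
    return false;
}
```

For "The temperature threshold shall trigger an alarm." the helper returns true, matching the missing-quantification case in the task description; the agent would then suggest a clarification such as "Specify: temp > X°C triggers alarm".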

3. Traceability Matrix Generation
   Input: System requirements (SYS-001 to SYS-150), software requirements (SWE-001 to SWE-500)
   Your Job:
     - Parse requirement text for references ("Derived from SYS-023")
     - Create bidirectional links (SYS → SWE, SWE → SYS)
     - Export to DOORS ReqIF format or Excel
   Output: Traceability matrix (500 links)
   Human Review: Verifies completeness (no orphaned requirements)
   Time Saved: 20 hours → 2 hours (90% reduction)
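
Task 3's reference scan reduces to pattern matching on the requirement text. A minimal C sketch (the `parse_sys_reference` helper is invented for illustration):

```c
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

/* Find the first "SYS-NNN" reference in requirement text.
 * Returns true and writes the numeric ID on success. */
static bool parse_sys_reference(const char *text, unsigned int *sys_id)
{
    const char *p = strstr(text, "SYS-");

    if ((p != NULL) && (sscanf(p, "SYS-%3u", sys_id) == 1)) {
        return true;
    }
    return false;
}
```

For the text "Derived from SYS-023" the helper yields 23; the agent would then record the SYS-023 ↔ SWE-xxx link in the matrix, and any SYS requirement that never appears in such a scan is a candidate orphan for the human reviewer.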

4. Requirements Validation
   Input: SRS (500 requirements)
   Your Job:
     - Check testability (can requirement be verified?)
     - Verify consistency (no contradictions)
     - Validate completeness (coverage of system requirements)
   Output: Validation report (12 untestable requirements flagged)
   Human Review: Refines untestable requirements
   Time Saved: 10 hours → 3 hours (70% reduction)

Passing Work to the Next Agent: When handing off to another agent (say, the Architecture Agent), make sure you include: (1) References to completed artifacts, (2) Open issues or ambiguities you found, (3) Constraints that carry forward, (4) Suggested next steps. This structured handoff prevents context loss between sessions.

Example Prompt for Your Role:

You are a Requirements Agent working on an automotive ECU project (ASPICE CL2 — Capability Level 2, meaning processes are planned and tracked; ISO 26262 ASIL-B — Automotive Safety Integrity Level B, second-highest safety class).

Extract software requirements from the attached OEM specification (PDF, 87 pages).

For each requirement:
1. Assign unique ID (SWE-XXX)
2. Classify type (Functional, Performance, Interface, Safety)
3. Identify source (page number, section)
4. Flag ambiguities (missing units, vague terms)
5. Suggest traceability to system requirements (if mentioned)

Output format: DOORS-compatible table (CSV)

Architecture Agent (SWE.2)

Your Primary Mission

ASPICE Process: SWE.2 Software Architectural Design

What You Need: Software requirements (SWE.1 SRS), design constraints (CPU, memory, latency)

What You Deliver: Software Architecture Design Specification (SADS), ADRs, UML diagrams

Your Task List:

Architecture Agent Task List:
─────────────────────────────────────────────────────────

1. Architecture Decision Record (ADR) Generation
   Input: Architectural question ("Use AUTOSAR Classic or Adaptive?")
   Your Job:
     - Research options (summarize pros/cons from documentation)
     - Suggest decision criteria (real-time, cost, tool maturity)
     - Draft ADR template (Context, Decision, Rationale, Consequences)
   Output: ADR draft (2 pages)
   Human Review: Validates decision, approves ADR
   Time Saved: 4 hours → 1 hour (75% reduction)

2. UML Diagram Generation
   Input: Module descriptions (text), interface specifications
   Your Job:
     - Generate component diagrams (C4 model, PlantUML syntax)
     - Create sequence diagrams (function call flows)
     - Generate class diagrams (C structs; classes if C++)
   Output: 5-10 UML diagrams (PlantUML code)
   Human Review: Verifies accuracy, adjusts layout
   Time Saved: 8 hours → 3 hours (62% reduction)

3. Interface Specification Validation
   Input: Header files (.h), module interfaces
   Your Job:
     - Check type consistency (pointer types, struct alignment)
     - Verify function signatures (parameter order, return types)
     - Detect circular dependencies (module A includes B, B includes A)
   Output: Interface validation report (3 mismatches found)
   Human Review: Fixes interface mismatches
   Time Saved: 6 hours → 2 hours (67% reduction)

4. Design Pattern Suggestion
   Input: Functional requirements (e.g., "Sensor redundancy needed")
   Your Job:
     - Suggest design patterns (Observer, Strategy, Factory)
     - Provide code examples (C implementation of pattern)
     - Reference industry standards (AUTOSAR patterns, MISRA guidelines)
   Output: Design pattern recommendation (2 pages)
   Human Review: Approves pattern, integrates into architecture
   Time Saved: 3 hours → 1 hour (67% reduction)

Example ADR You Might Draft:

# ADR-002: Sensor Fusion Algorithm (Kalman Filter vs Particle Filter)

**Status**: Proposed (awaiting human approval)
**Date**: 2025-03-15
**Deciders**: AI Architecture Agent (draft), Human Architect (approval)

## Context
ACC system requires fusing radar + camera data for lead vehicle distance estimation.
Two candidate algorithms: Extended Kalman Filter (EKF) vs Particle Filter (PF).

## Decision
Use **Extended Kalman Filter (EKF)**.

## Rationale
1. **Performance**: EKF runs in 5ms (TriCore 300 MHz), PF requires 50ms (10× slower)
2. **Accuracy**: EKF 95% accuracy (Gaussian noise assumption valid for sensors)
3. **Industry Standard**: EKF widely used in automotive (proven in 1M+ vehicles)
4. **Simplicity**: EKF tunable with Q/R matrices, PF requires 1,000+ particles

## Consequences
[+] Pros: Fast, proven, tunable
[-] Cons: Assumes Gaussian noise (may degrade in non-linear scenarios)

## Alternatives Considered
- Particle Filter: Rejected (too slow, overkill for ACC)
- Simple weighted average: Rejected (no uncertainty quantification)

## Traceability
Implements: [SWE-023] "Sensor Fusion Algorithm"

Implementation Agent (SWE.3)

Your Primary Mission

ASPICE Process: SWE.3 Software Detailed Design and Unit Construction

What You Need: Software architecture (SWE.2 SADS), detailed requirements

What You Deliver: Source code (.c/.h files), Doxygen comments, MISRA-compliant implementation

Your Task List:

Implementation Agent Task List:
─────────────────────────────────────────────────────────

1. Function Stub Generation
   Input: Function signature from header file
   Your Job:
     - Generate function body scaffold (input validation, return statement)
     - Add Doxygen header comment (brief, params, return, implements)
     - Insert TODO comments for complex logic (human to complete)
   Output: Function stub (20 lines)
   Human Review: Completes implementation, removes TODOs
   Time Saved: 5 min → 1 min per function (80% reduction)

2. Algorithm Implementation
   Input: Requirement + algorithm description (e.g., "CRC-32 checksum, IEEE 802.3")
   Your Job:
     - Generate complete algorithm implementation (lookup table, loop)
     - Ensure MISRA C:2012 compliance (no implicit conversions, explicit types)
     - Add input validation (null pointer checks, range checks)
   Output: Complete function (50-100 lines)
   Human Review: Verifies correctness (tests with known CRC values)
   Time Saved: 45 min → 10 min per function (78% reduction)
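
As a concrete instance of Task 2, here is a bitwise (table-free) CRC-32 in the IEEE 802.3 configuration: reflected polynomial 0xEDB88320, initial value and final XOR of 0xFFFFFFFF. A production implementation would normally use a 256-entry lookup table for speed; the bitwise form is shown because it fits in a dozen lines.

```c
#include <stddef.h>
#include <stdint.h>

/**
 * @brief Compute CRC-32 (IEEE 802.3) over a byte buffer
 * @param[in] data Pointer to input bytes (must not be NULL)
 * @param[in] len  Number of bytes to process
 * @return CRC-32 value, or 0xFFFFFFFFU if data is NULL
 */
uint32_t CRC32_Compute(const uint8_t *data, size_t len)
{
    uint32_t crc = 0xFFFFFFFFU;

    if (data == NULL) {
        return crc;               /* defensive: reject null input */
    }
    for (size_t i = 0U; i < len; ++i) {
        crc ^= (uint32_t)data[i];
        for (uint8_t bit = 0U; bit < 8U; ++bit) {
            if ((crc & 1U) != 0U) {
                crc = (crc >> 1) ^ 0xEDB88320U;  /* reflected polynomial */
            } else {
                crc >>= 1;
            }
        }
    }
    return crc ^ 0xFFFFFFFFU;     /* final XOR */
}
```

The standard check value makes a convenient first oracle during human review: `CRC32_Compute((const uint8_t *)"123456789", 9U)` must equal `0xCBF43926U`.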

3. Doxygen Comment Generation
   Input: C source code (functions without comments)
   Your Job:
     - Generate @brief description (1 sentence, what function does)
     - Document @param (input/output, units, range)
     - Document @return (success/error codes)
     - Add @safety_class tag (if ASIL-B/C)
   Output: Doxygen header (10-15 lines per function)
   Human Review: Verifies accuracy (especially safety notes)
   Time Saved: 10 min → 2 min per function (80% reduction)

4. MISRA C Compliance Check
   Input: C source code
   Your Job:
     - Run static analyzer (cppcheck --enable=all --addon=misra)
     - Categorize violations (required, advisory, disapplied)
     - Suggest fixes (e.g., "Use explicit cast (uint32_t) instead of implicit")
   Output: MISRA compliance report (12 violations, 8 auto-fixable)
   Human Review: Approves auto-fixes, manually fixes complex violations
   Time Saved: 2 hours → 30 min (75% reduction)

Example Code You Might Generate:

The Prompt:

Generate a C function to validate CAN message ID range (0x100 - 0x7FF) with MISRA C:2012 compliance.

Requirements:
- Function signature: bool CAN_ValidateMessageID(uint32_t msg_id);
- Return true if valid, false otherwise
- Add Doxygen header with @safety_class ASIL-B
- Use explicit comparisons (no implicit bool conversion)

Your Output:

/**
 * @brief Validate CAN message ID range
 * @implements [SWE-045] CAN Message ID Validation
 * @safety_class ASIL-B
 *
 * @param[in] msg_id CAN message identifier (11-bit or 29-bit)
 * @return true if msg_id in valid range [0x100, 0x7FF], false otherwise
 *
 * @note MISRA C:2012 compliant (explicit comparisons, no implicit bool)
 */
bool CAN_ValidateMessageID(uint32_t msg_id)
{
    bool is_valid;

    /* MISRA Rule 14.4: Explicit comparison (not if(msg_id)) */
    if ((msg_id >= 0x100U) && (msg_id <= 0x7FFU)) {
        is_valid = true;
    } else {
        is_valid = false;
    }

    return is_valid;
}

Human Review: [APPROVED] (100% correct, MISRA-compliant)


Verification Agent (SWE.4)

Your Primary Mission

ASPICE Process: SWE.4 Software Unit Verification

What You Need: Source code (.c files), function specifications

What You Deliver: Unit test code (Google Test, Unity), test reports, coverage analysis

Your Task List:

Verification Agent Task List:
─────────────────────────────────────────────────────────

1. Unit Test Generation
   Input: C function (e.g., CAN_ValidateMessageID)
   Your Job:
     - Generate test cases (typical values, boundary values, invalid inputs)
     - Create Google Test scaffolding (TEST(), ASSERT_TRUE(), etc.)
     - Add test data (valid IDs: 0x100, 0x500, 0x7FF; invalid: 0x0FF, 0x800)
   Output: Unit test file (test_can_validation.cpp, 15 test cases)
   Human Review: Adds edge cases (e.g., max uint32_t value)
   Time Saved: 1 hour → 15 min (75% reduction)

2. Coverage Analysis
   Input: Unit test execution results (gcov output)
   Your Job:
     - Parse gcov report, identify uncovered lines
     - Suggest additional test cases to cover missing branches
     - Generate coverage report (HTML, 92% statement coverage)
   Output: Coverage report + suggestions (5 missing test cases)
   Human Review: Writes missing tests to reach 100%
   Time Saved: 30 min → 10 min (67% reduction)

3. Test Oracle Generation
   Input: Function + expected behavior (e.g., "CRC-32 of 'hello' = 0x3610A686")
   Your Job:
     - Calculate expected outputs for test inputs (using reference implementation)
     - Generate ASSERT_EQ(expected, actual) assertions
   Output: Test oracle data (10 input/output pairs)
   Human Review: Verifies oracle correctness (cross-check with spec)
   Time Saved: 20 min → 5 min (75% reduction)

4. Test Report Generation
   Input: Unit test results (passed/failed), coverage data
   Your Job:
     - Generate ASPICE SWE.4 BP5 test report (BP5 = Base Practice 5 of SWE.4, which requires test results to be summarized and archived; template-based)
     - Summarize results (150 tests, 148 pass, 2 fail, 95% coverage)
     - List failed tests (test names, failure reasons)
   Output: Test report (PDF, 5 pages)
   Human Review: Signs off on report, archives for ASPICE assessment
   Time Saved: 2 hours → 15 min (87% reduction)

Example Unit Test You Might Generate:

/**
 * @file test_can_validation.cpp
 * @brief Unit tests for CAN message ID validation
 * @implements [TC-SWE-045-1] CAN ID Validation Test
 */

#include <gtest/gtest.h>
#include "can_validation.h"

/** Test valid IDs (lower boundary) */
TEST(CANValidation, ValidID_LowerBoundary) {
    ASSERT_TRUE(CAN_ValidateMessageID(0x100));  // Minimum valid
}

/** Test valid IDs (mid-range) */
TEST(CANValidation, ValidID_MidRange) {
    ASSERT_TRUE(CAN_ValidateMessageID(0x500));  // Typical valid
}

/** Test valid IDs (upper boundary) */
TEST(CANValidation, ValidID_UpperBoundary) {
    ASSERT_TRUE(CAN_ValidateMessageID(0x7FF));  // Maximum valid
}

/** Test invalid IDs (below range) */
TEST(CANValidation, InvalidID_BelowRange) {
    ASSERT_FALSE(CAN_ValidateMessageID(0x0FF));  // Just below min
}

/** Test invalid IDs (above range) */
TEST(CANValidation, InvalidID_AboveRange) {
    ASSERT_FALSE(CAN_ValidateMessageID(0x800));  // Just above max
}

/** Test edge case (zero ID) */
TEST(CANValidation, InvalidID_Zero) {
    ASSERT_FALSE(CAN_ValidateMessageID(0x000));  // Zero invalid
}

Coverage: 6 tests achieve 100% branch coverage [PASS]


Review Agent (SUP.2)

Your Primary Mission

ASPICE Process: SUP.2 Verification (peer reviews and inspections of work products, including code reviews)

What You Need: Source code (commits, pull requests), coding standards (MISRA C:2012)

What You Deliver: Code review comments, MISRA violation reports, approval/rejection

Your Task List:

Review Agent Task List:
─────────────────────────────────────────────────────────

1. MISRA C Compliance Check
   Input: C source code (Git diff, pull request)
   Your Job:
     - Run PC-lint Plus or cppcheck (MISRA addon)
     - Categorize violations (severity, rule number)
     - Generate review comment for each violation
   Output: Review report (8 MISRA violations found)
   Human Review: Approves or requests fixes
   Time Saved: 1 hour → 10 min (83% reduction)

2. Code Style Verification
   Input: C source code
   Your Job:
     - Check naming conventions (snake_case for functions, UPPER_CASE for macros)
     - Verify indentation (4 spaces, no tabs)
     - Validate comment style (Doxygen format)
   Output: Style violation report (3 issues)
   Human Review: Auto-fixes trivial issues (indentation), manually fixes others
   Time Saved: 30 min → 5 min (83% reduction)

3. Traceability Verification
   Input: Source code + requirements database
   Your Job:
     - Parse code comments for @implements tags
     - Verify all requirements have implementing code
     - Flag orphaned code (not traced to any requirement)
   Output: Traceability gap report (2 functions not traced)
   Human Review: Adds missing @implements tags
   Time Saved: 45 min → 10 min (78% reduction)

4. Security Vulnerability Scan
   Input: C source code
   Your Job:
     - Run static analyzer (e.g., Coverity, Fortify)
     - Detect vulnerabilities (buffer overflow, integer overflow, use of tainted data)
     - Prioritize findings (critical, high, medium, low)
   Output: Security scan report (1 critical: buffer overflow)
   Human Review: Fixes critical vulnerabilities immediately
   Time Saved: 2 hours → 30 min (75% reduction)

Example Review Comment You Might Generate:

## Code Review: Pull Request #142 (CAN Message Validation)

**File**: src/can_validation.c
**Reviewer**: Review Agent
**Date**: 2025-04-15

### Findings:

**1. MISRA Violation (Rule 14.4) - Line 23**
Severity: Required
Issue: Implicit boolean conversion
Code: `if (msg_id)`
Fix: Use explicit comparison `if (msg_id != 0U)`

**2. Missing Doxygen Comment - Function `CAN_SendMessage()`**
Severity: Minor (coding standard)
Issue: No @brief, @param, @return tags
Fix: Add complete Doxygen header

**3. Traceability Gap - Function `CAN_SendMessage()`**
Severity: Major (ASPICE compliance)
Issue: Missing @implements tag (not traced to requirement)
Fix: Add `@implements [SWE-046]` or identify correct requirement

### Recommendation: **Request Changes**
Critical: 1 (MISRA Required violation)
Major: 1 (traceability gap)
Minor: 1 (missing comment)

**Action**: Resolve critical and major issues before merge.

Documentation Agent (SUP.1)

Your Primary Mission

ASPICE Process: SUP.1 Quality Assurance (generating and maintaining evidence that processes and work products meet defined standards)

What You Need: Source code, requirements, test results, design documents

What You Deliver: Software design documents, user manuals, traceability matrices, ASPICE work products

Your Task List:

Documentation Agent Task List:
─────────────────────────────────────────────────────────

1. Software Design Specification (SDS) Generation
   Input: Source code (.c/.h files), Doxygen comments
   Your Job:
     - Run Doxygen (generate HTML documentation)
     - Extract module descriptions, function signatures
     - Create PDF (LaTeX output from Doxygen)
   Output: SDS (PDF, 150 pages, auto-generated from code)
   Human Review: Verifies accuracy, adds architectural overview
   Time Saved: 40 hours → 5 hours (87% reduction)

2. Traceability Matrix Maintenance
   Input: Requirements database (DOORS), source code (@implements tags)
   Your Job:
     - Parse code for traceability tags
     - Update matrix (add new links, remove obsolete)
     - Export to Excel (Requirements → Code → Tests)
   Output: Traceability matrix (500 links, updated automatically)
   Human Review: Spot-checks 10% of links for correctness
   Time Saved: 10 hours → 1 hour (90% reduction)

3. Test Report Generation
   Input: Test results (Google Test XML, gcov coverage)
   Your Job:
     - Parse XML, extract pass/fail counts
     - Generate summary tables (tests by module, coverage by file)
     - Create ASPICE-compliant report (SWE.4 BP5 template)
   Output: Test report (PDF, 8 pages)
   Human Review: Signs off on report
   Time Saved: 3 hours → 20 min (89% reduction)

4. Release Notes Generation
   Input: Git commit history (last release → current)
   Your Job:
     - Summarize changes (new features, bug fixes, known issues)
     - Group by category (features, fixes, documentation)
     - Format as markdown
   Output: Release notes (2 pages)
   Human Review: Edits for clarity, adds customer-facing notes
   Time Saved: 2 hours → 30 min (75% reduction)

The Big Picture: Agent Roles at a Glance

| Agent          | ASPICE Process | Key Tasks                                                | Time Savings |
|----------------|----------------|----------------------------------------------------------|--------------|
| Requirements   | SWE.1          | Extract reqs, detect ambiguities, generate traceability  | 75-90%       |
| Architecture   | SWE.2          | Draft ADRs, generate UML, validate interfaces            | 62-75%       |
| Implementation | SWE.3          | Generate code, Doxygen comments, MISRA checks            | 75-80%       |
| Verification   | SWE.4          | Generate unit tests, coverage analysis, test reports     | 67-87%       |
| Review         | SUP.2          | MISRA compliance, code style, traceability verification  | 75-83%       |
| Documentation  | SUP.1          | SDS generation, traceability maintenance, test reports   | 75-90%       |

Overall Productivity Gain: 40-50% (measured across case studies in Chapters 25-28)

A Note on These Estimates: The time savings above come from real case studies in Chapters 25-28. Your actual results will vary based on: (1) Model quality (GPT-4 outperforms GPT-3.5), (2) Task complexity, (3) Domain specificity (automotive vs generic), (4) Prompt quality. Establish project-specific baselines for accurate measurement.

Next Up: Human-in-the-Loop (HITL) integration protocol (Chapter 29.2)