2.1: Requirements Agent Instructions

Role Definition

Requirements Agent (SWE.1)

Primary Responsibility: Analyze system requirements, extract software requirements, detect ambiguities, generate traceability

ASPICE Process: SWE.1 Software Requirements Analysis

Success Metrics:

  • Requirement Extraction Accuracy: 70-80% (human reviews and corrects the remaining 20-30%)
  • Ambiguity Detection: 85-90% recall (flags vague terms, missing units)
  • Traceability Generation: 95-100% accuracy (automated parsing)

Input Work Products

Required Inputs (Must Exist Before Starting)

1. System Requirements Specification (SYS.2 Output)

  • Format: PDF, Word (.docx), DOORS database export (DOORS = IBM's requirements management tool, common in automotive), ReqIF XML (Requirements Interchange Format — an open standard for exchanging requirements between tools)
  • Content: Functional requirements, non-functional requirements, interface definitions
  • Quality Criteria: Requirements must have IDs (e.g., [SYS-001]), clear descriptions, acceptance criteria

Example System Requirement:

[SYS-045] Emergency Braking Function
Description: The vehicle shall activate emergency braking if an obstacle
is detected at distance <5m and closing speed >10 km/h.
Acceptance Criteria:
  - Detection latency ≤50ms
  - Braking force applied within 100ms
Safety Class: ASIL-B

2. Project Context Documents

  • Coding Standards: MISRA C:2012, CERT C (defines implementation constraints)
  • System Architecture: High-level block diagram (shows SW/HW boundary)
  • Glossary: Domain-specific terms (e.g., "ACC" = Adaptive Cruise Control)

Execution Steps

Step-by-Step Workflow

Step 1: Parse System Requirements

Action: Extract requirements from input documents

Tools (install with pip install pypdf2 pdfplumber python-docx pandas):

  • PDF parsing: PyPDF2, pdfplumber (Python)
  • Word parsing: python-docx
  • DOORS integration: ReqIF parser (if available)

Example Code (Python):

import re
from docx import Document

def extract_requirements(docx_path):
    """
    Extract requirements from Word document (SYS requirements)
    Returns: List of (req_id, description, safety_class) tuples
    """
    doc = Document(docx_path)
    requirements = []

    for para in doc.paragraphs:
        # Match pattern: [SYS-XXX] Requirement text
        match = re.match(r'\[SYS-(\d+)\]\s+(.*)', para.text)
        if match:
            req_id = f"SYS-{match.group(1)}"
            description = match.group(2).strip()

            # Extract safety class if present in the same paragraph
            # (requirements spread over several paragraphs need a stateful parser)
            safety_match = re.search(r'ASIL-([A-D])', para.text)
            safety_class = safety_match.group(0) if safety_match else "QM"

            requirements.append((req_id, description, safety_class))

    return requirements

# Usage
sys_reqs = extract_requirements("System_Requirements.docx")
print(f"Extracted {len(sys_reqs)} system requirements")

Output Check: Verify that at least 90% of system requirements were parsed successfully (if below 90%, escalate: "Document format not parseable, needs manual extraction")
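The parse-rate check above is mechanical and can be scripted. A minimal sketch, assuming the expected requirement count is known (e.g., from the document's requirement index); `check_parse_rate` is an illustrative helper, not part of any standard tooling:

```python
def check_parse_rate(extracted_count, expected_count, threshold=0.9):
    """Return (rate, ok); ok is False when manual extraction must be escalated."""
    rate = extracted_count / expected_count if expected_count else 0.0
    return rate, rate >= threshold

# Example: 11 of 12 requirements parsed -> ~91.7%, no escalation needed
rate, ok = check_parse_rate(11, 12)
if not ok:
    print("Document format not parseable, needs manual extraction")
```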


Step 2: Analyze Ambiguities and Incompleteness

Action: Detect vague terms, missing units, untestable requirements

Ambiguity Patterns to Flag:

import re

AMBIGUOUS_TERMS = [
    "quickly", "fast", "slow", "sufficient", "adequate",
    "user-friendly", "intuitive", "robust", "efficient",
    "as soon as possible", "in a timely manner"
]

MISSING_UNITS = [
    # Heuristic: number without a trailing unit (e.g., "distance < 5" instead
    # of "distance < 5m"). Expect false positives on bare identifiers such as
    # "SYS-045"; filter requirement IDs before reporting.
    r'\b(\d+\.?\d*)\b(?!\s*(m|km|s|ms|kg|°C|Hz|kHz|MHz))'
]

def detect_ambiguities(requirement_text):
    """
    Detect ambiguous or incomplete requirements
    Returns: List of issues found
    """
    issues = []

    # Check for ambiguous terms
    for term in AMBIGUOUS_TERMS:
        if term.lower() in requirement_text.lower():
            issues.append(f"Ambiguous term: '{term}' (quantify with numeric value)")

    # Check for missing units
    if re.search(MISSING_UNITS[0], requirement_text):
        issues.append("Numeric value missing units (add m, s, kg, etc.)")

    # Check for testability (requirement should have measurable criteria)
    if "shall" not in requirement_text.lower():
        issues.append("Requirement not testable (use 'shall' + measurable criteria)")

    return issues

# Example
req_text = "The system shall respond quickly to user input"
issues = detect_ambiguities(req_text)
# Output: ["Ambiguous term: 'quickly' (quantify with numeric value)"]

Output: Generate Ambiguity Report (Markdown table)

| Requirement ID | Issue Type | Description | Suggested Fix |
|----------------|------------|-------------|---------------|
| SYS-045 | Missing Unit | "distance < 5" | Specify "distance < 5 meters" |
| SYS-078 | Ambiguous Term | "respond quickly" | Specify "response time ≤ 100ms" |
| SYS-089 | Not Testable | "shall be robust" | Define quantitative reliability metric |

Escalation: If more than 20% of requirements have critical issues (ambiguous or not testable), escalate to the requirements engineer: "N% of requirements need clarification before proceeding" (with the actual percentage)
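The 20% escalation trigger can be computed directly from the per-requirement issue lists returned by `detect_ambiguities`. A minimal sketch (the `{req_id: [issues]}` mapping shape is an assumption, not mandated by the process):

```python
def needs_clarification_escalation(issues_by_req, threshold=0.20):
    """issues_by_req: {req_id: [issue strings]} from detect_ambiguities.
    Returns (escalate, ratio); escalate when the share of requirements
    with at least one critical issue exceeds the threshold."""
    if not issues_by_req:
        return False, 0.0
    flagged = sum(1 for issues in issues_by_req.values() if issues)
    ratio = flagged / len(issues_by_req)
    return ratio > threshold, ratio
```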


Step 3: Derive Software Requirements

Action: Decompose system requirements into software requirements

Derivation Rules:

  1. Allocate to Software: Determine which system requirements are implemented in SW (vs HW)
  2. Decompose: Break complex system requirements into multiple SW requirements
  3. Add Implementation Constraints: Include MISRA C, memory limits, latency requirements

Example Derivation:

System Requirement:

[SYS-045] Emergency Braking Function
The vehicle shall activate emergency braking if obstacle <5m and closing speed >10 km/h.
Safety Class: ASIL-B

Derived Software Requirements (AI-generated):

[SWE-045-1] Obstacle Distance Calculation
The ACC software shall calculate obstacle distance using radar sensor data.
- Input: Radar sensor raw data (CAN message ID 0x200)
- Output: Distance in meters (float32, range: 0.0 - 200.0m)
- Latency: ≤20ms (from CAN message receipt to distance calculation)
- Safety Class: ASIL-B
- Implements: [SYS-045]

[SWE-045-2] Closing Speed Calculation
The ACC software shall calculate closing speed using consecutive distance measurements.
- Formula: v_closing = (d_prev - d_current) / Δt
- Sampling rate: 50 Hz (Δt = 20ms)
- Output: Speed in km/h (float32, range: -200.0 to 200.0 km/h)
- Safety Class: ASIL-B
- Implements: [SYS-045]

[SWE-045-3] Emergency Brake Trigger Logic
The ACC software shall activate emergency braking if:
  - obstacle_distance < 5.0m AND
  - closing_speed > 10.0 km/h AND
  - ACC system is enabled
- Output: Brake activation command (CAN message ID 0x300)
- Latency: ≤30ms (total latency ≤50ms per SYS-045)
- Safety Class: ASIL-B
- Implements: [SYS-045]

Quality Check: Verify that:

  • [OK] All derived requirements have @implements [SYS-XXX] tag (traceability)
  • [OK] Units specified (m, km/h, ms)
  • [OK] Safety class inherited from system requirement
  • [OK] Latency requirements sum to system-level latency (20 + 30 = 50ms within 50ms budget [PASS])
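The latency-budget part of this quality check is pure arithmetic and can be scripted. A minimal sketch (function name and input shape are illustrative assumptions):

```python
def check_latency_budget(derived_latencies_ms, system_budget_ms):
    """Verify that the derived latency requirements fit the system-level
    budget (e.g., SWE-045-1 at 20ms plus SWE-045-3 at 30ms within the
    50ms required by SYS-045)."""
    total = sum(derived_latencies_ms)
    return total <= system_budget_ms, total

ok, total = check_latency_budget([20, 30], 50)  # -> (True, 50) for SYS-045
```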

Step 4: Define Software Interfaces

Action: Specify interfaces between software components, external systems (HW, other ECUs)

Interface Types:

  1. CAN Bus: Message IDs, data layout, frequency
  2. Function Calls: API signatures (C functions)
  3. Memory-Mapped I/O: Register addresses (for HW drivers)

Example Interface Definition:

/**
 * @brief CAN Message: Radar Sensor Data
 * @interface CAN_RADAR_DATA
 * @message_id 0x200
 * @frequency 50 Hz (20ms period)
 * @safety_class ASIL-B
 * @implements [SWE-045-1]
 */
typedef struct {
    uint16_t object_distance_mm;  /**< Distance in millimeters (0-65535) */
    int16_t  relative_speed_cmps; /**< Speed in cm/s (-32768 to 32767) */
    uint8_t  object_valid;        /**< 1 = valid, 0 = invalid */
    uint8_t  reserved[3];         /**< Padding to fill the 8-byte CAN frame (2+2+1+3 bytes) */
} CAN_RadarData_t;

/**
 * @brief API: Get Obstacle Distance
 * @interface ACC_GetObstacleDistance
 * @param[out] distance_m Pointer to distance output (meters)
 * @return 0 = success, -1 = sensor invalid
 * @implements [SWE-045-1]
 */
int ACC_GetObstacleDistance(float* distance_m);

Quality Check: Ensure all interfaces include:

  • [OK] Data types, ranges, units
  • [OK] Timing (frequency, latency)
  • [OK] Error handling (return codes, invalid data)
  • [OK] Traceability (@implements tag)
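Because every interface carries an @implements tag, the traceability data can be harvested straight from the header text. A minimal sketch using a regex over Doxygen-style comments (the ID pattern is an assumption based on the examples above):

```python
import re

def extract_implements(header_text):
    """Collect requirement IDs referenced by @implements tags in a C header."""
    return re.findall(r'@implements\s+\[([A-Z]+-\d+(?:-\d+)?)\]', header_text)

header = """
 * @implements [SWE-045-1]
"""
# extract_implements(header) -> ["SWE-045-1"]
```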

Step 5: Generate Traceability Matrix

Action: Create traceability links between system and software requirements, checked in both directions (SYS ↔ SWE)

Output Format: Excel spreadsheet or Markdown table

| System Req | Software Req | Rationale | Status |
|------------|--------------|-----------|--------|
| SYS-045 | SWE-045-1 | Obstacle distance calculation | Draft |
| SYS-045 | SWE-045-2 | Closing speed calculation | Draft |
| SYS-045 | SWE-045-3 | Brake trigger logic | Draft |

Automation (Python):

import re

import pandas as pd

def generate_traceability_matrix(sys_reqs, swe_reqs):
    """
    Generate SYS → SWE traceability matrix.
    Each swe_req is expected to expose .text, .id, .brief_description, .status.
    """
    matrix = []

    for swe_req in swe_reqs:
        # Extract @implements tag from SWE requirement
        match = re.search(r'@implements\s+\[([^\]]+)\]', swe_req.text)
        if match:
            sys_req_id = match.group(1)
            matrix.append({
                'System Req': sys_req_id,
                'Software Req': swe_req.id,
                'Rationale': swe_req.brief_description,
                'Status': swe_req.status
            })

    # Export to Excel (pandas uses openpyxl as its .xlsx writer)
    df = pd.DataFrame(matrix)
    df.to_excel("Traceability_Matrix_SYS_SWE.xlsx", index=False)

    return matrix

# Usage
matrix = generate_traceability_matrix(sys_reqs, swe_reqs)
print(f"Generated {len(matrix)} traceability links")

Quality Check: Verify 100% traceability — every SWE requirement must trace to at least one SYS requirement
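The 100% coverage check itself can be automated on top of the matrix. A minimal sketch (the pair-list input shape is an assumption; adapt it to the dict rows produced above):

```python
def find_uncovered_sys_reqs(sys_ids, links):
    """links: (sys_id, swe_id) pairs from the traceability matrix.
    Returns system requirements with no derived SWE requirement."""
    covered = {sys_id for sys_id, _ in links}
    return sorted(set(sys_ids) - covered)

gaps = find_uncovered_sys_reqs(
    ["SYS-045", "SYS-078"],
    [("SYS-045", "SWE-045-1"), ("SYS-045", "SWE-045-2")],
)
# gaps -> ["SYS-078"]: SYS-078 has no SWE child yet
```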


Step 6: Ensure Verifiability

Action: For each software requirement, define how it will be tested

Verifiability Criteria:

  • [OK] Requirement specifies measurable criteria (e.g., "latency 50ms or less")
  • [OK] Test method identified (unit test, integration test, HIL test — Hardware-in-the-Loop, testing software against real or simulated hardware on a dedicated test bench)
  • [OK] Test data specified (input values, expected outputs)

Example (add to requirement):

[SWE-045-1] Obstacle Distance Calculation
...
Verification Method: Unit test
Test Cases:
  - TC-SWE-045-1-1: Valid radar data (distance = 10m) → Output = 10.0 ± 0.1m
  - TC-SWE-045-1-2: Invalid radar data (object_valid = 0) → Return error code -1
  - TC-SWE-045-1-3: Boundary value (distance = 0m) → Output = 0.0m
  - TC-SWE-045-1-4: Boundary value (distance = 200m) → Output = 200.0m
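Boundary test cases like TC-SWE-045-1-3/-4 follow a fixed pattern and can be proposed automatically from the requirement's declared range. A minimal sketch (the value selection follows common boundary-value-analysis practice, not a process mandate):

```python
def boundary_test_values(lo, hi, step=0.1):
    """Suggest boundary-value-analysis inputs for a numeric range:
    just below, at, and just above each bound, plus a nominal midpoint."""
    return [lo - step, lo, lo + step, (lo + hi) / 2.0, hi - step, hi, hi + step]

# Range of the SWE-045-1 distance output (0.0 - 200.0m)
values = boundary_test_values(0.0, 200.0)
```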

Output Work Products

What Requirements Agent Must Generate

1. Software Requirements Specification (SRS) Document

  • Format: Markdown, Word, or DOORS database
  • Content:
    • Software requirements (SWE-XXX-Y)
    • Interface definitions (CAN, API)
    • Traceability to system requirements
    • Verification criteria
  • Status: Draft (awaiting human review)

2. Ambiguity Report

  • Format: Markdown table (see Step 2)
  • Content: Issues found in system requirements, suggested fixes
  • Action: Send to requirements engineer for clarification

3. Traceability Matrix (SYS → SWE)

  • Format: Excel spreadsheet
  • Content: System-to-software requirement links

4. Pull Request with Review Summary

## Summary
- Extracted 12 system requirements from System_Spec_v1.2.docx
- Generated 34 software requirements (2.8 SWE per SYS on average)
- Defined 8 CAN interfaces, 15 function APIs
- Detected 3 ambiguities (escalated to @requirements_lead)

## AI Confidence
- High confidence: 31/34 requirements (standard radar/brake interfaces)
- Medium confidence: 3/34 requirements (fail-safe behavior, needs safety review)

## Traceability
- 100% SWE → SYS coverage (34/34 requirements linked)
- 100% SYS → SWE coverage (12/12 system requirements implemented)

## Human Action Required
1. Review fail-safe requirements [SWE-045-4, SWE-089-1, SWE-123-2]
2. Clarify ambiguities (see Ambiguity_Report.md)
3. Approve CAN message IDs (potential conflict with body control module)

## Quality Metrics
- Requirements with units: 100% (34/34)
- Requirements with testability criteria: 100% (34/34)
- ASPICE SWE.1 BP coverage: BP1-6 fully addressed

Quality Criteria

Acceptance Criteria for Requirements Agent Output

Requirements Agent Quality Checklist:
──────────────────────────────────────────────────────

 [ ] Completeness
     [ ] All system requirements analyzed (100% coverage)
     [ ] All SWE requirements have IDs (SWE-XXX-Y format)
     [ ] All interfaces defined (CAN, API, memory-mapped I/O)

 [ ] Correctness
     [ ] Units specified (m, s, kg, °C) for all numeric values
     [ ] Formulas verified (cross-checked with system spec)
     [ ] Safety classes consistent (ASIL-B inherited from SYS-045)

 [ ] Traceability
     [ ] 100% SWE → SYS links (@implements tags present)
     [ ] 100% SYS → SWE links (all system reqs have SWE children)
     [ ] Traceability matrix generated (Excel file)

 [ ] Verifiability
     [ ] All requirements have test criteria (measurable)
     [ ] Test methods identified (unit/integration/HIL, Hardware-in-the-Loop)
     [ ] Boundary values specified (min, max, invalid)

 [ ] Ambiguity Detection
     [ ] Flagged vague terms ("quickly", "robust")
     [ ] Flagged missing units (numbers without m/s/kg)
     [ ] Generated clarification questions

Verdict:
  [PASS]: Submit SRS for human review
  [FAIL]: Fix issues, re-run checklist

Escalation Triggers

When Requirements Agent Must Escalate

1. Critical Ambiguity [ESCALATION]

  • Trigger: System requirement lacks quantification (e.g., "shall respond quickly")
  • Action: Generate clarification question, escalate to requirements engineer
  • Example:
    [ESCALATION] SYS-078 - Ambiguous Requirement
    Issue: "System shall respond quickly" (no latency specified)
    Question: What is the maximum acceptable response time? (suggest: ≤100ms based on similar automotive systems)
    Assignee: @requirements_lead
    

2. Safety-Critical Decision [ESCALATION]

  • Trigger: System requirement implies safety logic, but fail-safe behavior not specified
  • Action: Escalate to safety engineer
  • Example:
    [ESCALATION] SWE-045-4 - Fail-Safe Behavior Undefined
    Context: Emergency braking function (ASIL-B)
    Question: If radar sensor fails (CAN timeout), should we:
      A) Disable ACC, alert driver (safe state)
      B) Use last known distance (degraded mode)
      C) Activate redundant sensor (if available)
    Recommendation: Option A (ISO 26262 safe state transition)
    Assignee: @safety_engineer
    

3. Conflicting Requirements [ESCALATION]

  • Trigger: Two system requirements cannot be satisfied simultaneously
  • Action: Escalate to system architect
  • Example:
    [ESCALATION] Requirement Conflict
    Conflict: SYS-045 requires ≤50ms latency, but SYS-078 requires complex Kalman filter (estimated 80ms)
    Options:
      A) Simplify Kalman filter (sacrifice accuracy)
      B) Relax latency to 80ms (negotiate with customer)
      C) Use faster CPU (cost increase)
    Assignee: @system_architect
    

4. Out-of-Scope Complexity [ESCALATION]

  • Trigger: System requirement implies domain knowledge AI lacks (e.g., custom protocol)
  • Action: Escalate to domain expert
  • Example:
    [ESCALATION] Custom Protocol (Out of AI Scope)
    Issue: SYS-123 requires "OEM-proprietary bus protocol" (no public spec available)
    Action: Human engineer must define interface based on OEM datasheet
    Assignee: @embedded_engineer
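The four escalation templates above share a common shape, so message rendering can be scripted; only the analysis behind them needs the agent. A minimal sketch (field names are illustrative, not a mandated schema):

```python
def format_escalation(req_id, title, issue, question, assignee):
    """Render an escalation message in the shape of the templates above."""
    return (
        f"[ESCALATION] {req_id} - {title}\n"
        f"Issue: {issue}\n"
        f"Question: {question}\n"
        f"Assignee: {assignee}"
    )

msg = format_escalation(
    "SYS-078", "Ambiguous Requirement",
    '"System shall respond quickly" (no latency specified)',
    "What is the maximum acceptable response time?",
    "@requirements_lead",
)
```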
    

Examples

Complete Requirements Agent Workflow

Input: System Requirements Specification (12 requirements)

Output 1: Software Requirements Specification (34 requirements)

  • 34 SWE requirements derived from 12 SYS requirements
  • All requirements include units, safety class, traceability
  • 8 CAN interfaces, 15 function APIs defined

Output 2: Ambiguity Report

  • 3 ambiguities detected (SYS-078, SYS-089, SYS-123)
  • Escalated to requirements engineer with suggested fixes

Output 3: Traceability Matrix

  • 100% SYS → SWE coverage (12/12 system requirements)
  • 100% SWE → SYS coverage (34/34 software requirements)
  • Excel file: Traceability_Matrix_SYS_SWE.xlsx

Output 4: Pull Request

  • Branch: feature/swe1-requirements
  • Files: SRS.md, Ambiguity_Report.md, Traceability_Matrix.xlsx
  • Assignee: @requirements_lead (human review)
  • Status: Awaiting approval

Human Review Time: 2 hours (baseline: 8 hours manual) → 75% time savings


Summary

Requirements Agent Key Responsibilities:

  1. Parse System Requirements: Extract from PDF/Word/DOORS (70-80% accuracy)
  2. Detect Ambiguities: Flag vague terms, missing units (85-90% recall)
  3. Derive Software Requirements: Decompose system → software (2-3 SWE per SYS)
  4. Define Interfaces: CAN, API, memory-mapped I/O (with types, ranges, timing)
  5. Generate Traceability: SYS → SWE matrix (95-100% accuracy)
  6. Ensure Verifiability: Define test criteria for all requirements

Escalation: Safety decisions, ambiguities, conflicts, out-of-scope complexity

Success Metrics: 70-80% requirement extraction, 85-90% ambiguity detection, 95-100% traceability