4.0: Prompt Templates

Purpose and Scope

For Humans Interacting with AI Agents

Audience: Engineers, architects, managers using AI agents for ASPICE tasks

Purpose: Provide ready-to-use prompt templates for:

  1. Requirements Analysis (SWE.1): Extract requirements, detect ambiguities
  2. Code Generation (SWE.3): Generate C functions, MISRA-compliant code
  3. Code Review (SUP.2): Review code, check MISRA, verify traceability
  4. Test Generation (SWE.4): Generate unit tests, achieve coverage

How to Use Templates:

  • Copy template
  • Replace placeholders (e.g., {REQUIREMENT_TEXT}, {FUNCTION_NAME})
  • Paste into AI agent (ChatGPT, Claude, GitHub Copilot Chat)
  • Review AI output, iterate if needed

Prompt Engineering Best Practices

Effective Prompts for ASPICE Tasks

Anatomy of a Good Prompt:

  1. Role Definition: Define AI expertise (e.g., "You are an AI expert in...")
  2. Context: Project background (e.g., "I'm working on an ASIL-B ECU...")
  3. Task Description: What to generate (e.g., "Generate a C function that...")
  4. Constraints: Standards, limits (e.g., "Must comply with MISRA C:2012...")
  5. Expected Output: Output structure (e.g., "Output: C code with Doxygen...")
  6. Examples (optional): Sample I/O (e.g., "Example input/output...")

Good Prompt Example:

You are an AI code generator specialized in automotive embedded C (MISRA C:2012).

Context: I'm developing an ACC ECU for automotive ASIL-B (ISO 26262).
Task: Generate a C function that calculates obstacle distance from radar sensor data.

Requirements:
- Input: uint16_t radar_raw_mm (distance in millimeters, range 0-65535)
- Output: float* distance_m (distance in meters, range 0.0-65.535)
- Return: 0 = success, -1 = invalid sensor data (radar_raw_mm == 0xFFFF)
- Constraints: MISRA C:2012 compliant, defensive programming (null pointer check)

Expected Output:
- C function with Doxygen header (@brief, @param, @return, @implements [SWE-045-1])
- MISRA compliant (explicit casts, named constants, no magic numbers)
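Output satisfying this prompt might look like the following sketch. The function name ACC_CalcObstacleDistance and the named constants are illustrative choices, not mandated by the prompt:

```c
#include <stdint.h>
#include <stddef.h>

#define RADAR_INVALID_RAW  (0xFFFFU)   /* sensor reports 0xFFFF on fault */
#define MM_PER_METER       (1000.0f)   /* unit conversion factor */

/**
 * @brief      Converts raw radar distance (millimeters) to meters.
 * @param[in]  radar_raw_mm  Raw distance in millimeters (0-65535).
 * @param[out] distance_m    Converted distance in meters (0.0-65.535).
 * @return     0 on success, -1 on invalid sensor data or null pointer.
 * @implements [SWE-045-1]
 */
int32_t ACC_CalcObstacleDistance(uint16_t radar_raw_mm, float *distance_m)
{
    int32_t result = -1;

    /* Defensive programming: reject null output pointer */
    if (distance_m != NULL)
    {
        /* 0xFFFF marks invalid sensor data per the requirement */
        if (radar_raw_mm != RADAR_INVALID_RAW)
        {
            /* Explicit cast before division (MISRA essential types) */
            *distance_m = (float)radar_raw_mm / MM_PER_METER;
            result = 0;
        }
    }

    return result;
}
```

Note the single exit point, named constants instead of magic numbers, and the Doxygen @implements tag linking the code back to the requirement.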

Bad Prompt Example [FAIL]:

Write a function to calculate distance

(Too vague: no context, constraints, or expected output)


Prompt Template Structure

Standard Format

Template Components:

  1. Role: Define AI expertise (e.g., "You are an embedded C expert...")
  2. Context: Project background (e.g., "ASIL-B ACC ECU...")
  3. Task: What to generate (e.g., "Generate unit tests...")
  4. Constraints: Standards, limits (e.g., "MISRA C:2012, no dynamic memory...")
  5. Output Format: Expected structure (e.g., "C code with Doxygen comments...")
  6. Example (optional): Input/output sample

Placeholder Notation:

  • {PLACEHOLDER}: Replace with actual value (e.g., replace {REQUIREMENT_ID} with SWE-045-1)
  • [OPTIONAL]: Optional information (can be omitted)
  • <CHOICE_A | CHOICE_B>: Choose one option

Prompt Versioning Strategy: Maintain prompt templates in version control alongside code. When a prompt consistently produces incorrect results, update the template and increment its version. Track prompt versions in work product metadata for reproducibility (e.g., "Generated with prompt template v1.3").
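One way to record this metadata is in the generated file's header comment. The tags and wording below are an illustrative convention, not a Doxygen standard:

```c
/**
 * @file    acc_distance.c
 * @brief   Obstacle distance calculation (AI-assisted).
 *
 * @note    Generated with prompt template swe3/generate_function.txt v1.3,
 *          reviewed and approved by {REVIEWER} on {DATE}.
 */
```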


Template Categories

By ASPICE Process

  • Section 32.01 (SWE.1 Requirements): Extract requirements, detect ambiguities, generate traceability
  • Section 32.02 (SWE.3 Implementation): Generate C code, Doxygen comments, MISRA-compliant functions
  • Section 32.03 (SUP.2 Review): Code review, MISRA check, traceability verification
  • Section 32.04 (SWE.4 Verification): Generate unit tests, coverage analysis, test reports

General-Purpose Templates

Cross-Process Prompts

1. Explain Code (Understanding):

You are an AI assistant helping engineers understand embedded C code.

Code:

{CODE_SNIPPET}


Task: Explain what this code does in simple terms (2-3 sentences), then provide a detailed analysis:
1. Purpose: What is the function's role?
2. Algorithm: How does it work? (step-by-step)
3. Safety considerations: Any ASIL-related concerns?
4. Edge cases: What inputs could cause issues?
5. MISRA compliance: Any violations?

Output: Plain English explanation + technical analysis

2. Debug Code (Troubleshooting):

You are an AI debugging expert for automotive embedded C (MISRA C:2012).

Context: {PROJECT_CONTEXT (e.g., "ASIL-B ACC ECU, TriCore TC397")}

Code:

{BUGGY_CODE}


Issue: {BUG_DESCRIPTION (e.g., "Function returns incorrect value (expected 5.0, got 5.12)")}

Task: Identify root cause and provide fix:
1. Root cause: What's wrong? (algorithm error, data type issue, logic flaw?)
2. Fix: Corrected code (MISRA compliant)
3. Explanation: Why did the bug occur?
4. Test case: Unit test to prevent regression

Output: Corrected code + explanation + test case
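A hypothetical bug matching the template's example symptom ("expected 5.0, got 5.12") could be a misread sensor scaling: the datasheet (assumed here) specifies an LSB of 1/1024 m, but the buggy code treated raw counts as millimeters, so raw 5120 yielded 5120/1000 = 5.12 m instead of 5120/1024 = 5.0 m. All names and the scaling factor are illustrative:

```c
#include <stdint.h>

#define SENSOR_LSB_PER_METER  (1024.0f)  /* assumed sensor LSB: 1/1024 m */

float Radar_RawToMeters(uint16_t radar_raw)
{
    /* Buggy version was: return (float)radar_raw / 1000.0f;
     * Root cause: wrong scaling constant (millimeter assumption). */
    return (float)radar_raw / SENSOR_LSB_PER_METER;
}
```

The matching regression test would pin the corrected scaling (e.g., assert that raw 5120 converts to exactly 5.0 m).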

3. Optimize Code (Performance):

You are an AI expert in embedded systems performance optimization (TriCore architecture).

Context: {PROJECT_CONTEXT}

Code:

{SLOW_CODE}


Performance Issue: {ISSUE_DESCRIPTION (e.g., "Function takes 50ms, requirement is ≤20ms")}

Task: Optimize code for latency while maintaining MISRA C:2012 compliance:
1. Bottleneck: Identify performance bottleneck (hot path)
2. Optimization: Apply optimization (algorithmic, compiler, data structure)
3. Trade-offs: Any accuracy/memory trade-offs?
4. Verification: How to verify correctness after optimization?

Constraints:
- MISRA C:2012 compliant
- Maintain functionality (no behavior change)
- Target: {TARGET_LATENCY}

Output: Optimized code + explanation + performance estimate
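A typical hot-path optimization this template might return is replacing per-element floating-point division (slow on many embedded FPUs) with a single precomputed reciprocal and a multiply. Function and constant names are illustrative; note the trade-off that multiply-by-reciprocal can differ from division in the last bit of rounding:

```c
#include <stdint.h>
#include <stddef.h>

#define MM_PER_METER  (1000.0f)

void Radar_ConvertBatch(const uint16_t *raw_mm, float *out_m, size_t count)
{
    /* Computed once, outside the loop, instead of dividing per element */
    const float mm_to_m = 1.0f / MM_PER_METER;

    if ((raw_mm != NULL) && (out_m != NULL))
    {
        for (size_t i = 0U; i < count; i++)
        {
            /* was: out_m[i] = (float)raw_mm[i] / MM_PER_METER; */
            out_m[i] = (float)raw_mm[i] * mm_to_m;
        }
    }
}
```

Verification after such a change should compare new outputs against the original over the full input range with an explicit tolerance.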

4. Refactor Code (Code Quality):

You are an AI software architect for embedded systems.

Code:

{MESSY_CODE}


Issues: {QUALITY_ISSUES (e.g., "High cyclomatic complexity (25), poor naming, no error handling")}

Task: Refactor code to improve quality:
1. Structure: Break into smaller functions (cyclomatic complexity <10)
2. Naming: Use descriptive variable/function names (snake_case)
3. Error Handling: Add defensive programming (null checks, bounds checks)
4. Doxygen: Add complete documentation

Constraints:
- MISRA C:2012 compliant
- Maintain functionality (no behavior change)
- Keep backward compatibility (same API signature)

Output: Refactored code + explanation of improvements
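A refactoring in the spirit of this template might extract validation into a static helper, lowering the public function's cyclomatic complexity while keeping its signature unchanged. The function names, the 64-sample bound, and the summing behavior are illustrative:

```c
#include <stdint.h>
#include <stddef.h>

/* Extracted validation helper: defensive checks pulled out of the
 * original monolithic function (illustrative 64-sample upper bound). */
static int32_t acc_validate_input(const uint16_t *samples, size_t count)
{
    int32_t valid = -1;

    if ((samples != NULL) && (count > 0U) && (count <= 64U))
    {
        valid = 0;
    }

    return valid;
}

/* Same signature as before the refactor (backward compatible API) */
int32_t ACC_ProcessSamples(const uint16_t *samples, size_t count,
                           uint32_t *sum)
{
    int32_t result = -1;

    if ((sum != NULL) && (acc_validate_input(samples, count) == 0))
    {
        uint32_t acc = 0U;
        for (size_t i = 0U; i < count; i++)
        {
            acc += (uint32_t)samples[i];
        }
        *sum = acc;
        result = 0;
    }

    return result;
}
```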

Prompt Iteration Strategies

How to Refine Prompts

If AI Output Is Incorrect:

  1. Add More Context: Specify units, ranges, formulas
  2. Provide Example: Show input/output pair
  3. Tighten Constraints: Add MISRA rules, safety requirements
  4. Request Step-by-Step: Ask AI to explain reasoning first

Example Iteration:

Iteration 1 (vague):

Generate function to calculate distance

Result: AI generates generic code (not MISRA compliant)

Iteration 2 (add context):

Generate C function for ASIL-B ACC ECU to calculate obstacle distance.
Input: uint16_t radar_mm, Output: float* distance_m
Must be MISRA C:2012 compliant.

Result: AI generates better code, but the Doxygen header is missing

Iteration 3 (add output format):

Generate C function for ASIL-B ACC ECU to calculate obstacle distance.
Input: uint16_t radar_mm, Output: float* distance_m
Must be MISRA C:2012 compliant with complete Doxygen header (@brief, @param, @return, @implements [SWE-045-1]).

Result: AI generates correct code [PASS]


Template Customization

Project-Specific Adaptations

Create Project Prompt Library:

project-prompts/
├── common/
│   ├── project_context.txt       # Reusable context (project, standards, CPU)
│   └── coding_standards.txt      # MISRA rules, naming conventions
├── swe1/
│   ├── extract_requirements.txt
│   └── detect_ambiguities.txt
├── swe3/
│   ├── generate_function.txt
│   └── generate_doxygen.txt
└── swe4/
    ├── generate_unit_tests.txt
    └── coverage_analysis.txt

Example: Project Context Template (project_context.txt):

Project: ACC ECU (Adaptive Cruise Control)
Safety Class: ASIL-B (ISO 26262)
Target CPU: Infineon AURIX TriCore TC397 (300 MHz)
RAM: 2 MB, Flash: 8 MB
RTOS: AUTOSAR Classic R4.4.0
Coding Standard: MISRA C:2012 (Required rules: 0 violations, Advisory: minimize)
Communication: CAN 2.0B (500 kbps)

Usage: Prepend to every prompt to avoid repetition

{PROJECT_CONTEXT}

Task: Generate unit test for ACC_GetObstacleDistance function...

Summary

Prompt Templates Overview:

  1. Requirements Prompts (32.01): Extract requirements, detect ambiguities, generate traceability
  2. Code Generation Prompts (32.02): Generate C functions, MISRA-compliant code, Doxygen
  3. Review Prompts (32.03): Code review, MISRA check, traceability verification
  4. Testing Prompts (32.04): Generate unit tests, coverage analysis, test reports

Best Practices:

  • Define role, context, task, constraints, and output format
  • Iterate prompts if output is incorrect (add context, examples, or constraints)
  • Create a project prompt library (reusable templates)

Next: Section-specific prompt templates (Requirements, Code Generation, Review, Testing)