3.1: Effective Prompting

Prompt Engineering for Code Generation

The Anatomy of a Good Prompt

Structure:

1. Context (What is the background?)
2. Task (What do you want?)
3. Constraints (What are the requirements?)
4. Format (How should the output look?)
5. Example (Optional: Show desired output)

Bad Prompt (vague):

"Write a function to calculate distance"

Anti-Pattern Examples: Other vague prompts to avoid: "Make it faster," "Fix the bug," "Improve this code," "Generate some tests." Each lacks specific context, constraints, or expected output format.

Good Prompt (structured):

Context: I'm developing an ACC ECU for automotive (ASIL-B, ASPICE CL2)

Task: Generate a C function that calculates safe following distance

Constraints:
- Function name: ACC_CalculateSafeDistance
- Implements requirement [SWE-045-11]
- Input: vehicle_speed_kmh (float)
- Output: safe_distance_m (float)
- Logic: d_safe = (v ÷ 3.6) × 2.0 s (convert km/h to m/s, then apply the 2-second rule)
- Must be MISRA C:2012 compliant
- Must validate input (negative speed → 0)

Format:
- Include Doxygen header with @implements tag
- Include input validation
- Use named constants (no magic numbers)

Example:
/**
 * @brief Calculate safe following distance
 * @implements [SWE-045-11] Safe Following Distance
 * ...
 */
float ACC_CalculateSafeDistance(float vehicle_speed_kmh) {
    /* Implementation */
}
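A plausible response to this structured prompt might look like the following sketch. The constant names (ACC_FOLLOWING_TIME_S, ACC_KMH_TO_MPS_DIVISOR) are illustrative assumptions, not part of the original prompt:

```c
#define ACC_FOLLOWING_TIME_S    (2.0F)  /* 2-second following rule */
#define ACC_KMH_TO_MPS_DIVISOR  (3.6F)  /* km/h -> m/s conversion factor */
#define ACC_MIN_SPEED_KMH       (0.0F)  /* lowest valid input speed */

/**
 * @brief Calculate safe following distance
 * @implements [SWE-045-11] Safe Following Distance
 * @param[in] vehicle_speed_kmh Vehicle speed in km/h
 * @return Safe following distance in meters (0.0 for invalid input)
 */
float ACC_CalculateSafeDistance(float vehicle_speed_kmh)
{
    float safe_distance_m = 0.0F;

    /* Defensive check: negative speed is invalid -> return 0 */
    if (vehicle_speed_kmh >= ACC_MIN_SPEED_KMH)
    {
        const float speed_mps = vehicle_speed_kmh / ACC_KMH_TO_MPS_DIVISOR;
        safe_distance_m = speed_mps * ACC_FOLLOWING_TIME_S;
    }

    return safe_distance_m;
}
```

Note how every element of the prompt (function name, validation rule, named constants, @implements tag) maps directly to a line of output; that is what the structure buys you.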

Prompt Templates by Task

1. Code Generation

Template:

Generate a [LANGUAGE] function for [STANDARD] [SAFETY_CLASS]:

Context:
- Project: [PROJECT_NAME]
- Module: [MODULE_NAME]
- Safety class: [ASIL-X]

Task:
- Function name: [FUNCTION_NAME]
- Requirement: [REQ_ID] [REQ_DESCRIPTION]
- Purpose: [HIGH_LEVEL_DESCRIPTION]

Inputs:
- [PARAM1]: [TYPE] - [DESCRIPTION]
- [PARAM2]: [TYPE] - [DESCRIPTION]

Output:
- [RETURN_TYPE] - [DESCRIPTION]

Logic:
- [STEP_1]
- [STEP_2]
- [STEP_3]

Constraints:
- Coding standard: [MISRA C:2012 / CERT C]
- Error handling: [DEFENSIVE / FAIL-SAFE]
- Include Doxygen header with @implements tag

Example:

Generate a C function for ASPICE SWE.3 ASIL-B:

Context:
- Project: ACC ECU
- Module: Sensor Fusion
- Safety class: ASIL-B

Task:
- Function name: SensorFusion_CalculateFusedDistance
- Requirement: [SWE-045-1] Sensor Fusion Algorithm
- Purpose: Fuse radar and camera data using weighted average

Inputs:
- radar_distance_m: float - Distance from radar sensor (meters)
- camera_distance_m: float - Distance from camera sensor (meters)

Output:
- float - Fused distance in meters (0 = error)

Logic:
- Validate inputs (both ≥0, ≤200m)
- Weighted average: radar 60%, camera 40%
- If either invalid, use other sensor
- If both invalid, return 0 (error)

Constraints:
- MISRA C:2012 compliant
- Defensive error handling
- Include Doxygen header with @implements [SWE-045-1]
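Given that filled-in template, the model has everything it needs to produce something like this sketch. The macro and helper names (SF_MAX_DISTANCE_M, SF_IsValid) are illustrative assumptions:

```c
#define SF_MAX_DISTANCE_M  (200.0F)  /* upper plausibility bound from prompt */
#define SF_RADAR_WEIGHT    (0.6F)    /* radar 60 % */
#define SF_CAMERA_WEIGHT   (0.4F)    /* camera 40 % */
#define SF_ERROR_VALUE     (0.0F)    /* 0 = error, per output spec */

/* Helper (illustrative name): range check 0..200 m */
static int SF_IsValid(float distance_m)
{
    return (int)((distance_m >= 0.0F) && (distance_m <= SF_MAX_DISTANCE_M));
}

/**
 * @brief Fuse radar and camera distance readings via weighted average
 * @implements [SWE-045-1] Sensor Fusion Algorithm
 * @param[in] radar_distance_m  Distance from radar sensor (meters)
 * @param[in] camera_distance_m Distance from camera sensor (meters)
 * @return Fused distance in meters (0 = error)
 */
float SensorFusion_CalculateFusedDistance(float radar_distance_m,
                                          float camera_distance_m)
{
    float fused_m = SF_ERROR_VALUE;
    const int radar_ok  = SF_IsValid(radar_distance_m);
    const int camera_ok = SF_IsValid(camera_distance_m);

    if ((radar_ok != 0) && (camera_ok != 0))
    {
        /* Weighted average: radar 60 %, camera 40 % */
        fused_m = (SF_RADAR_WEIGHT * radar_distance_m)
                + (SF_CAMERA_WEIGHT * camera_distance_m);
    }
    else if (radar_ok != 0)
    {
        fused_m = radar_distance_m;   /* camera invalid: use radar only */
    }
    else if (camera_ok != 0)
    {
        fused_m = camera_distance_m;  /* radar invalid: use camera only */
    }
    else
    {
        fused_m = SF_ERROR_VALUE;     /* both invalid: return error */
    }

    return fused_m;
}
```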

2. Test Generation

Template:

Generate [TEST_FRAMEWORK] unit tests for [FUNCTION_NAME]:

Function signature:
[FULL_FUNCTION_SIGNATURE]

Test cases to cover:
1. Nominal: [DESCRIPTION]
2. Boundary: [DESCRIPTION]
3. Error: [DESCRIPTION]

Requirements:
- Each test has @verifies tag linking to requirement
- Use descriptive test names (Given_When_Then format)
- Include comments explaining test purpose
- Aim for [X]% code coverage

Example:

Generate Google Test unit tests for ACC_CalculateSafeDistance:

Function signature:
float ACC_CalculateSafeDistance(float vehicle_speed_kmh);

Test cases to cover:
1. Nominal: 50 km/h → 27.78m
2. Boundary: 0 km/h → 0m
3. Boundary: 150 km/h → 83.33m
4. Error: Negative speed → 0m (defensive)
5. Equivalence: Low speed (30 km/h)
6. Equivalence: High speed (100 km/h)

Requirements:
- Each test has @verifies [SWE-045-11]
- Test names: CalculateSafeDistance_50kmh_Returns27m
- Tolerance: ±0.1m for float comparison
- Aim for 100% statement coverage
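Google Test itself is a C++ framework; to keep this chapter's examples in C, here is the same set of cases sketched as plain assert-based tests. The function body is a stand-in implementation assumed from the earlier prompt, not actual model output:

```c
#include <assert.h>
#include <math.h>

/* Unit under test: minimal stand-in implementing the spec above */
static float ACC_CalculateSafeDistance(float vehicle_speed_kmh)
{
    float d = 0.0F;
    if (vehicle_speed_kmh >= 0.0F)
    {
        d = (vehicle_speed_kmh / 3.6F) * 2.0F;
    }
    return d;
}

#define TOLERANCE_M (0.1F)  /* +/-0.1 m float tolerance, per the prompt */

/* @verifies [SWE-045-11] - Nominal: 50 km/h -> 27.78 m */
static void CalculateSafeDistance_50kmh_Returns27m(void)
{
    assert(fabsf(ACC_CalculateSafeDistance(50.0F) - 27.78F) < TOLERANCE_M);
}

/* @verifies [SWE-045-11] - Boundary: 0 km/h -> 0 m */
static void CalculateSafeDistance_0kmh_Returns0m(void)
{
    assert(fabsf(ACC_CalculateSafeDistance(0.0F)) < TOLERANCE_M);
}

/* @verifies [SWE-045-11] - Error: negative speed -> 0 m (defensive) */
static void CalculateSafeDistance_NegativeSpeed_Returns0m(void)
{
    assert(ACC_CalculateSafeDistance(-10.0F) == 0.0F);
}
```

Each case carries its @verifies tag as a comment and follows the FunctionName_Condition_ExpectedResult naming pattern the prompt asks for.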

3. Requirements Extraction

Template:

Extract software requirements from the following document:

Document: [PASTE TEXT OR ATTACH FILE]

Extract:
- Functional requirements (system behavior)
- Non-functional requirements (performance, safety)
- Interfaces (CAN, Ethernet, APIs)
- Constraints (regulatory, standards)

Output format:
For each requirement:
- ID: [SWE-XXX]
- Title: [SHORT_TITLE]
- Description: [QUANTIFIED_DESCRIPTION]
- Category: [FUNCTIONAL / NON-FUNCTIONAL / INTERFACE]
- Verification method: [UNIT_TEST / INTEGRATION_TEST / HIL]
- Safety class: [ASIL-X / QM]

4. Code Review

Template:

Review the following C code for [STANDARD] [SAFETY_CLASS]:

Code:
[PASTE CODE]

Review checklist:
☐ Correctness (implements requirements correctly?)
☐ MISRA C:2012 compliance
☐ Error handling (defensive programming?)
☐ Readability (clear names, small functions?)
☐ Testability (can be unit tested?)
☐ Traceability (@implements tags present?)
☐ Safety (fail-safe behavior defined?)

Provide:
1. List of issues found (with line numbers)
2. Severity (Critical / Major / Minor)
3. Suggested fixes
4. MISRA violations (specific rule numbers)

5. Refactoring

Template:

Refactor the following code:

Code:
[PASTE CODE]

Refactoring goals:
- [EXTRACT_FUNCTION / SIMPLIFY / RENAME / REMOVE_DUPLICATION]

Constraints:
- Preserve behavior (no functional changes)
- Maintain MISRA C:2012 compliance
- Keep functions small (5-15 lines)
- Improve testability

Iterative Prompting

Start General, Then Refine

Iteration 1 (General):

"Generate a PID controller in C"

→ AI outputs basic PID (no edge cases)

Iteration 2 (Add requirements):

"Add anti-windup for integral term (clamp at ±100)"

→ AI adds integral clamping

Iteration 3 (Add safety):

"Add output saturation to [-100, +100] range"

→ AI adds output limits

Iteration 4 (Add standards):

"Make it MISRA C:2012 compliant (Rule 10.4, 12.1)"

→ AI fixes type conversions

Iteration 5 (Add traceability):

"Add Doxygen header with @implements [SWE-045-9] tag"

→ AI adds documentation

Final Output: Production-ready PID controller (5 iterations)
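The end product of those five iterations might resemble this sketch. The struct layout and gain fields are illustrative assumptions; only the anti-windup limit, output saturation range, and @implements tag come from the iterations above:

```c
#define PID_INTEGRAL_LIMIT (100.0F)  /* anti-windup clamp (iteration 2) */
#define PID_OUTPUT_LIMIT   (100.0F)  /* output saturation (iteration 3) */

typedef struct
{
    float kp;        /* proportional gain */
    float ki;        /* integral gain */
    float kd;        /* derivative gain */
    float integral;  /* accumulated integral term */
    float prev_err;  /* previous error, for the derivative term */
} PID_State;

/**
 * @brief One PID controller step with anti-windup and output saturation
 * @implements [SWE-045-9] PID Controller
 * @param[in,out] pid   Controller state (must not be NULL)
 * @param[in]     error Setpoint minus measurement
 * @param[in]     dt_s  Time step in seconds (must be > 0)
 * @return Saturated controller output in [-100, +100] (0 on invalid input)
 */
float PID_Update(PID_State *pid, float error, float dt_s)
{
    float output = 0.0F;

    if ((pid != (PID_State *)0) && (dt_s > 0.0F))
    {
        /* Integral term with anti-windup clamp at +/-100 */
        pid->integral += (error * dt_s);
        if (pid->integral >  PID_INTEGRAL_LIMIT) { pid->integral =  PID_INTEGRAL_LIMIT; }
        if (pid->integral < -PID_INTEGRAL_LIMIT) { pid->integral = -PID_INTEGRAL_LIMIT; }

        const float derivative = (error - pid->prev_err) / dt_s;
        pid->prev_err = error;

        output = (pid->kp * error) + (pid->ki * pid->integral)
               + (pid->kd * derivative);

        /* Output saturation to [-100, +100] */
        if (output >  PID_OUTPUT_LIMIT) { output =  PID_OUTPUT_LIMIT; }
        if (output < -PID_OUTPUT_LIMIT) { output = -PID_OUTPUT_LIMIT; }
    }

    return output;
}
```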


Common Prompt Mistakes

Mistake 1: Too Vague

[FAIL] Bad: "Fix this code"
[PASS] Good: "Fix the MISRA C:2012 Rule 10.4 violation on line 45 of the snippet above (implicit type conversion)"

Mistake 2: No Context

[FAIL] Bad: "Generate radar parser"
[PASS] Good: "Generate a CAN parser for radar message ID 0x200, 8 bytes, format: [distance_mm uint16, speed_cmps int16, valid uint8]"
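The good prompt gives the model enough detail to produce something like this sketch. The struct and function names are assumptions, and byte order is assumed little-endian because the prompt does not state it, which is itself worth adding to the prompt:

```c
#include <stdint.h>

#define RADAR_MSG_ID  (0x200U)
#define RADAR_MSG_LEN (8U)

typedef struct
{
    uint16_t distance_mm; /* bytes 0-1 */
    int16_t  speed_cmps;  /* bytes 2-3 */
    uint8_t  valid;       /* byte 4   */
} RadarMsg;

/**
 * @brief Parse radar CAN message 0x200 (little-endian byte order assumed)
 * @param[in]  msg_id CAN message ID
 * @param[in]  data   Raw payload bytes
 * @param[in]  len    Payload length in bytes
 * @param[out] out    Decoded message
 * @return 0 = success, -1 = wrong ID/length or NULL pointer
 */
int Radar_ParseCanMessage(uint32_t msg_id, const uint8_t *data,
                          uint8_t len, RadarMsg *out)
{
    int result = -1;

    if ((data != (const uint8_t *)0) && (out != (RadarMsg *)0) &&
        (msg_id == RADAR_MSG_ID) && (len == RADAR_MSG_LEN))
    {
        out->distance_mm = (uint16_t)((uint16_t)data[0] | ((uint16_t)data[1] << 8));
        out->speed_cmps  = (int16_t)((uint16_t)data[2] | ((uint16_t)data[3] << 8));
        out->valid       = data[4];
        result = 0;
    }

    return result;
}
```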

Mistake 3: Expecting Perfection on the First Try

[FAIL] Bad: Give up after the first output doesn't meet all requirements
[PASS] Good: Iterate: add requirements incrementally and refine the output

Mistake 4: Not Specifying Standards

[FAIL] Bad: "Generate C code"
[PASS] Good: "Generate MISRA C:2012 compliant C code for ASIL-B"


Advanced Techniques

1. Few-shot Prompting

Technique: Show AI examples of desired output

Generate Doxygen headers like these examples:

Example 1:
/**
 * @brief Read CAN message from radar sensor
 * @implements [SWE-045-12] Radar CAN Interface
 * @param[in] msg_id CAN message ID
 * @param[out] buffer Output buffer (8 bytes)
 * @return 0 = success, -1 = timeout
 */

Example 2:
/**
 * @brief Calculate obstacle distance from sensor data
 * @implements [SWE-045-13] Obstacle Detection
 * @param[in] radar_m Radar distance in meters
 * @param[in] camera_m Camera distance in meters
 * @return Fused distance in meters, 0 = error
 */

Now generate a header for: ACC_CalculateTargetSpeed

2. Chain-of-Thought Prompting

Technique: Ask AI to explain reasoning

Calculate the safe following distance for a vehicle traveling at 100 km/h.

Think step-by-step:
1. Convert 100 km/h to m/s
2. Apply 2-second following time rule
3. Calculate distance in meters

Show your work.

AI Output:

Step 1: 100 km/h = (100 ÷ 3.6) = 27.78 m/s
Step 2: Following time = 2.0 seconds
Step 3: Distance = 27.78 m/s × 2.0 s = 55.56 meters

Answer: 55.56 meters
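The arithmetic in that output is easy to cross-check in code, which is a good habit whenever a chain-of-thought answer feeds into a requirement. A quick sketch (not part of the prompt):

```c
/* Cross-check: 100 km/h -> m/s -> 2-second following distance */
static float SafeDistanceAt100Kmh(void)
{
    const float speed_mps = 100.0F / 3.6F; /* ~27.78 m/s */
    return speed_mps * 2.0F;               /* ~55.56 m   */
}
```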

3. Role Prompting

Technique: Ask AI to assume a role

You are a safety engineer reviewing ASIL-B code for ISO 26262 compliance.

Review this emergency braking function:
[PASTE CODE]

Focus on:
- Fail-safe behavior
- Sensor redundancy
- Error handling
- Safety requirements [SYS-089]

Summary

Good Prompt Structure: Context + Task + Constraints + Format + Example

Prompt Templates: Code generation, test generation, requirements extraction, code review, refactoring

Iterative Prompting: Start general, refine with each iteration (add requirements, safety, standards, traceability)

Common Mistakes: Too vague, no context, expecting perfection first try, not specifying standards

Advanced Techniques: Few-shot (show examples), chain-of-thought (explain reasoning), role prompting (assume expert role)

Prompt Debugging: When AI output is incorrect, debug your prompt: (1) Was context complete? (2) Were constraints clear? (3) Was output format specified? (4) Did you provide examples? Iterate on one element at a time to identify the issue.

Next: Reviewing AI Output (3.2) — How to critically evaluate AI-generated code and requirements


Self-Assessment Quiz

Test your understanding of effective AI prompting. Answers are at the bottom.

Question 1: What's wrong with this prompt: "Generate a CAN parser in C"?

  • A) Nothing, it's clear enough
  • B) Missing context (message ID, format, standards, safety class)
  • C) Too long
  • D) Should ask for Python instead

Question 2: What is the recommended prompt structure?

  • A) Just describe what you want
  • B) Context + Task + Constraints + Format + Example
  • C) Ask a simple question
  • D) Copy-paste from StackOverflow

Question 3: When should you use iterative prompting?

  • A) Never—get it right the first time
  • B) Always—start general, add requirements incrementally
  • C) Only for complex algorithms
  • D) Only when the AI makes mistakes

Question 4: What is few-shot prompting?

  • A) Asking AI to respond in fewer words
  • B) Showing AI examples of the desired output format
  • C) Asking multiple questions at once
  • D) Limiting AI to a few programming languages

Question 5: What should you do when AI output doesn't meet requirements?

  • A) Give up and write it manually
  • B) Debug your prompt: check context, constraints, format, examples
  • C) Switch to a different AI model
  • D) Accept the output and fix it later

Quiz Answers

  1. B - Missing critical context. Should specify: message ID, byte format, MISRA compliance, safety class, error handling requirements.

  2. B - The 5-part structure (Context + Task + Constraints + Format + Example) produces the most accurate, usable outputs.

  3. B - Always iterate. Start with general prompts, then refine by adding requirements, safety constraints, standards, and traceability.

  4. B - Few-shot prompting shows the AI examples of your desired output format, improving consistency and accuracy.

  5. B - Treat incorrect output as a prompt debugging exercise. Iterate on context, constraints, format, and examples one at a time.

Score Interpretation:

  • 5/5: Excellent prompting skills
  • 3-4/5: Good foundation, practice with the templates provided
  • 1-2/5: Re-read the chapter, try the iterative prompting examples