# 4.4: Testing Prompt Templates

SWE.4 Verification Prompts

## Purpose

Use Cases:

  1. Generate unit tests from function signatures
  2. Create test cases for boundary values and edge cases
  3. Achieve code coverage targets (statement, branch, MC/DC)
  4. Generate test reports (ASPICE SWE.4 compliance)

## Template 1: Generate Unit Tests

### Generate Test Cases from Function

**Prompt**:

You are an AI test engineer specialized in automotive embedded C unit testing (Google Test, Unity, VectorCAST).

Context: I'm testing the {PROJECT_NAME} ECU, developed to automotive {SAFETY_CLASS}.
Test Framework: {TEST_FRAMEWORK (e.g., "Google Test", "Unity")}

Function Under Test:
```c
{FUNCTION_CODE}
```

Requirement:

{REQUIREMENT_TEXT}

Task: Generate comprehensive unit tests:

  1. Typical Values: Test cases for normal operation (happy path)
  2. Boundary Values: Test min, max, off-by-one values
  3. Invalid Inputs: Test null pointers, out-of-range values
  4. Error Conditions: Test failure modes (sensor fault, timeout)
  5. Traceability: Link tests to requirement (@verified_by tag)

Output Format (Google Test):

```cpp
/**
 * @file test_{module_name}.cpp
 * @brief Unit tests for {module_name}
 * @verified_by [{REQUIREMENT_ID}]
 */

#include <gtest/gtest.h>
extern "C" {
    #include "{module_name}.h"
}

/* Mock functions (if needed) */
static {mock_data_type} g_mock_{function_name}_result = {default_value};
{return_type} Mock_{function_name}({parameters}) {
    return g_mock_{function_name}_result;
}

/* Test Fixture */
class {ModuleName}Test : public ::testing::Test {
protected:
    void SetUp() override {
        /* Setup code (initialize mocks, etc.) */
    }

    void TearDown() override {
        /* Cleanup code */
    }
};

/**
 * @test TC-{REQUIREMENT_ID}-1: Typical value
 * @verified_by [{REQUIREMENT_ID}]
 */
TEST_F({ModuleName}Test, {FunctionName}_TypicalValue) {
    {test_input_type} input = {typical_value};
    {test_output_type} output;

    /* Execute */
    int result = {function_name}(input, &output);

    /* Verify */
    ASSERT_EQ(result, 0);  /* Success */
    EXPECT_NEAR(output, {expected_value}, {tolerance});
}

/**
 * @test TC-{REQUIREMENT_ID}-2: Boundary value (minimum)
 * @verified_by [{REQUIREMENT_ID}]
 */
TEST_F({ModuleName}Test, {FunctionName}_BoundaryValue_Min) {
    /* Test minimum valid input */
}

/**
 * @test TC-{REQUIREMENT_ID}-3: Boundary value (maximum)
 * @verified_by [{REQUIREMENT_ID}]
 */
TEST_F({ModuleName}Test, {FunctionName}_BoundaryValue_Max) {
    /* Test maximum valid input */
}

/**
 * @test TC-{REQUIREMENT_ID}-4: Invalid input (null pointer)
 * @verified_by [{REQUIREMENT_ID}]
 */
TEST_F({ModuleName}Test, {FunctionName}_InvalidInput_NullPointer) {
    /* Test null pointer handling */
    int result = {function_name}(NULL);
    ASSERT_EQ(result, -1);  /* Error */
}

/**
 * @test TC-{REQUIREMENT_ID}-5: Error condition (sensor fault)
 * @verified_by [{REQUIREMENT_ID}]
 */
TEST_F({ModuleName}Test, {FunctionName}_ErrorCondition_SensorFault) {
    /* Test sensor fault handling */
}

/* Add more test cases to achieve 100% coverage */
```

Constraints:

  • Generate at least 5 test cases (typical, boundary min, boundary max, invalid, error)
  • Use descriptive test names (e.g., GetObstacleDistance_TypicalValue_5m)
  • Add @verified_by tags for traceability
  • Use appropriate assertions (ASSERT_* aborts the test on failure; EXPECT_* records the failure and continues)

---

## Template 2: Coverage-Driven Test Generation

### Generate Tests to Achieve Coverage Target

**Prompt**:

You are an AI test engineer focused on code coverage for safety-critical systems.

Function Under Test:

{FUNCTION_CODE}

Current Coverage:

Statement Coverage: {current_statement}% (target: 100%)
Branch Coverage: {current_branch}% (target: 100%)

Uncovered Lines/Branches:

{UNCOVERED_CODE_LIST}

Task: Generate additional test cases to achieve 100% coverage:

  1. Uncovered Lines: Identify test inputs that execute uncovered lines
  2. Uncovered Branches: Create test cases for untaken branches (if/else, switch)
  3. MC/DC Coverage (if ASIL-C/D): Test all decision outcomes independently

Output Format:

```cpp
/* Additional test cases to achieve 100% coverage */

/**
 * @test TC-{REQUIREMENT_ID}-{number}: Cover uncovered line {line_number}
 * @coverage_target Line {line_number} (currently uncovered)
 */
TEST_F({ModuleName}Test, {FunctionName}_CoverLine{number}) {
    /* Setup: Configure inputs to execute line {line_number} */
    {setup_code}

    /* Execute */
    {function_call}

    /* Verify: Expected behavior when line {line_number} executes */
    {assertions}
}

/**
 * @test TC-{REQUIREMENT_ID}-{number}: Cover untaken branch (else path)
 * @coverage_target Branch at line {line_number} (else path)
 */
TEST_F({ModuleName}Test, {FunctionName}_CoverBranch{number}_ElsePath) {
    /* Setup: Configure inputs to take else branch */
    {setup_code}

    /* Execute */
    {function_call}

    /* Verify: Expected behavior on else path */
    {assertions}
}
```

Constraints:

  • Focus on uncovered lines/branches first
  • Provide clear explanation of what each test covers
  • Aim for 100% statement coverage (ASIL-B minimum)

---

## Template 3: Test Data Generation

### Create Test Data (Boundary Values, Edge Cases)

**Prompt**:

You are an AI test data generator for embedded systems testing.

Function:

{FUNCTION_SIGNATURE}

Input Constraints:

{INPUT_CONSTRAINTS (e.g., "uint16_t radar_mm, range: 0-65535")}

Task: Generate test data covering:

  1. Typical Values: Representative normal operation (3-5 values)
  2. Boundary Values: Min, max, min-1, max+1, zero
  3. Edge Cases: Off-by-one, overflow, underflow
  4. Invalid Values: Out-of-range, special markers (e.g., 0xFFFF)

Output Format:

## Test Data Table

| Test Case | Input | Expected Output | Rationale |
|-----------|-------|-----------------|-----------|
| TC-1 (Typical) | 5000 mm | 5.0 m ± 0.01 | Normal operation (5 meters) |
| TC-2 (Typical) | 10000 mm | 10.0 m ± 0.01 | Normal operation (10 meters) |
| TC-3 (Boundary Min) | 0 mm | 0.0 m ± 0.01 | Minimum valid value |
| TC-4 (Boundary Max) | 65534 mm | 65.534 m ± 0.01 | Maximum valid value (0xFFFF reserved as marker) |
| TC-5 (Invalid) | 0xFFFF | Error (-1) | Invalid sensor data marker |
| TC-6 (Edge Case) | 1 mm | 0.001 m ± 0.0001 | Off-by-one (minimum non-zero) |
| TC-7 (Edge Case) | 65533 mm | 65.533 m ± 0.01 | Off-by-one (maximum valid - 1) |

Constraints:

  • Cover all input ranges
  • Include both valid and invalid inputs
  • Provide expected outputs with tolerance (for floating-point)

---

## Template 4: Mock Function Generation

### Generate Mock Functions for Testing

**Prompt**:

You are an AI test infrastructure engineer specialized in mocking for unit tests.

Function Under Test:

{FUNCTION_UNDER_TEST}

External Dependencies (to mock):

{EXTERNAL_DEPENDENCIES (e.g., "CAN_ReadMessage()", "RTOS_GetTime()")}

Task: Generate mock functions for external dependencies:

  1. Mock Implementation: Controllable behavior (return values, side effects)
  2. Mock Control: Global variables to set mock behavior
  3. Mock Reset: Cleanup function for test teardown

Output Format (Google Test):

```cpp
/* Mock Functions for Unit Testing */

/* Mock CAN_ReadMessage */
static uint16_t g_mock_can_data = 0;
static int g_mock_can_result = 0;  /* 0 = success, -1 = error */

int Mock_CAN_ReadMessage(uint16_t* data) {
    if (g_mock_can_result == 0) {
        *data = g_mock_can_data;
    }
    return g_mock_can_result;
}

/* Mock RTOS_GetTime */
static uint32_t g_mock_rtos_time_ms = 0;

uint32_t Mock_RTOS_GetTime(void) {
    return g_mock_rtos_time_ms;
}

/* Mock Dependency Injection */
void {ModuleName}_SetMocks(void) {
    /* Inject mock functions (using function pointers or weak linking) */
    CAN_ReadMessage_ptr = Mock_CAN_ReadMessage;
    RTOS_GetTime_ptr = Mock_RTOS_GetTime;
}

/* Mock Reset (for test teardown) */
void {ModuleName}_ResetMocks(void) {
    g_mock_can_data = 0;
    g_mock_can_result = 0;
    g_mock_rtos_time_ms = 0;
}

/* Example Usage in Test */
TEST_F({ModuleName}Test, {FunctionName}_MockExample) {
    /* Setup mock behavior */
    g_mock_can_data = 5000;  /* Simulate 5000 mm */
    g_mock_can_result = 0;   /* Success */

    /* Execute function under test */
    {test_execution}

    /* Verify results */
    {assertions}

    /* Cleanup */
    {ModuleName}_ResetMocks();
}
```

Constraints:

  • Mock all external dependencies (CAN, RTOS, GPIO, etc.)
  • Provide control variables (global state)
  • Include reset function for cleanup

---

## Template 5: Test Report Generation

### Generate ASPICE SWE.4 Test Report

**Prompt**:

You are an AI test reporter for ASPICE-compliant projects.

Test Execution Results:

{TEST_RESULTS (e.g., "24/24 tests passed, 0 failures")}

Coverage Results:

Statement Coverage: {percentage}%
Branch Coverage: {percentage}%
Function Coverage: {percentage}%

Requirements Verified:

{REQUIREMENTS_LIST}

Task: Generate ASPICE SWE.4 compliant test report:

  1. Test Summary: Pass/fail counts, execution time
  2. Coverage Results: Statement/branch/function coverage (with justification for gaps)
  3. Traceability: Requirements → Test cases → Results
  4. Defects: List any test failures or issues

Output Format:

# Software Unit Verification Report (SWE.4)

## Test Summary
- **Test Date**: {date}
- **Tested Component**: {component_name}
- **Test Framework**: {framework (e.g., "Google Test 1.14.0")}
- **Test Environment**: {environment (e.g., "Ubuntu 22.04, gcc 11.3")}
- **Tester**: {tester_name}

## Test Results
| Test Case ID | Description | Requirement | Status | Notes |
|--------------|-------------|-------------|--------|-------|
| TC-SWE-045-1-1 | Typical value (5m) | SWE-045-1 | PASS | |
| TC-SWE-045-1-2 | Boundary (0m) | SWE-045-1 | PASS | |
| TC-SWE-045-1-3 | Boundary (max) | SWE-045-1 | PASS | |
| TC-SWE-045-1-4 | Invalid sensor | SWE-045-1 | PASS | |
| TC-SWE-045-1-5 | Null pointer | SWE-045-1 | PASS | |
| TC-SWE-045-1-6 | CAN failure | SWE-045-1 | PASS | |

**Total Tests**: {total}
**Passed**: {passed} ({percentage}%)
**Failed**: {failed} ({percentage}%)
**Blocked**: {blocked} ({percentage}%)

## Coverage Results
- **Statement Coverage**: {percentage}% ({lines_covered}/{total_lines} lines)
- **Branch Coverage**: {percentage}% ({branches_covered}/{total_branches} branches)
- **Function Coverage**: {percentage}% ({functions_covered}/{total_functions} functions)

**Target**: 100% statement/branch (ASIL-B requirement)
**Gap**: {gap_percentage}% (justification below)

### Coverage Gaps (Justification)
| Line | Code | Justification |
|------|------|---------------|
| 145 | Integer overflow check | Unreachable (input range validated by CAN protocol) |
| 178 | Diagnostic logging | Hardware-dependent (tested in integration tests) |

## Traceability
- **Requirements Verified**: {count}/{total} (100% coverage)
- **Functions Tested**: {count}/{total} (100% coverage)
- **Test-to-Requirement Matrix**: See Appendix A

## Defects Found
None (all tests passed)

## Recommendations
1. Achieve 100% coverage or justify exclusions (ISO 26262 safety argument)
2. Add integration tests for hardware-dependent code (lines 178, 203)

## Approval
- **Test Engineer**: [TBD - Human Review Required]
- **Date**: [TBD]

---
**Appendix A: Traceability Matrix**

| Requirement | Test Cases | Coverage |
|-------------|------------|----------|
| [SWE-045-1] | TC-SWE-045-1-1 to 6 | 100% |
| [SWE-045-2] | TC-SWE-045-2-1 to 4 | 100% |

Constraints:

  • Follow ASPICE SWE.4 format
  • Include traceability (requirement → test → result)
  • Justify coverage gaps (hardware-dependent, unreachable)

---

## Summary

**Testing Prompts Covered**:

1. **Generate Unit Tests**: Create test cases (typical, boundary, invalid, error)
2. **Coverage-Driven Testing**: Generate tests to achieve 100% coverage
3. **Test Data Generation**: Create test data table (boundary values, edge cases)
4. **Mock Function Generation**: Generate mocks for external dependencies
5. **Test Report Generation**: Create ASPICE SWE.4 compliant test report

**Success Metrics**: 80-85% of test cases generated by AI, a 90-95% test pass rate, and 100% requirement verification coverage

---

**Navigation**: [← 32.03 Review Prompts](32.03_Review_Prompts.md) | [Contents](../00_Front_Matter/00.06_Table_of_Contents.md) | [33.0 Thinking Like a Systems Engineer →](../Part_VII_Engineer_Tutorial/33.00_Thinking_Like_a_Systems_Engineer.md)