7.4: Tool Qualification Evidence Generation for Aviation and Automotive
Key Terms
Key terms specific to this chapter (see Appendix G and Appendix H for the complete glossary):
- DO-330: Software Tool Qualification Considerations — the aviation standard governing tool qualification evidence
- TQL: Tool Qualification Level (TQL-1 through TQL-5) — DO-330 classification determining qualification rigor
- TCL: Tool Confidence Level (TCL 1 through TCL 3) — ISO 26262-8 classification for automotive tool qualification (TCL 1 requires no qualification)
- TQP: Tool Qualification Plan — defines the qualification strategy, scope, and activities
- TOR: Tool Operational Requirements — specifies what the tool must do correctly in its operational context
- TVCP: Tool Verification Cases & Procedures — test cases demonstrating the tool meets its TOR
- TVR: Tool Verification Results — documented evidence that TVCP has been executed successfully
- TAS: Tool Accomplishment Summary — final summary document confirming tool qualification is complete
- TQD: Tool Qualification Data — complete package of qualification evidence artifacts
- DAL: Design Assurance Level (A-E) — DO-178C severity classification that drives TQL determination
- TI/TD: Tool Impact (TI1/TI2) and Tool Error Detection (TD1/TD2/TD3) — ISO 26262-8 factors that determine the TCL
Introduction
Tool qualification is a critical requirement in safety-critical software development. When software tools eliminate, reduce, or automate verification activities, they must themselves be qualified to ensure they do not introduce errors into the development process. This chapter provides guidance on generating tool qualification evidence for both the aviation (DO-330) and automotive (ISO 26262-8) domains, with integration into ASPICE 4.0 processes.
Why Tool Qualification Matters
Without proper tool qualification, a tool error can:
- Generate incorrect code that passes undetected into production
- Fail to detect software defects during verification
- Corrupt configuration data or test results
- Violate safety requirements without raising alarms
Example failure scenarios:
- Code Generator: Generates incorrect object code from correct source code → software malfunction in field
- Static Analyzer: Fails to detect buffer overflow vulnerability → safety-critical defect escapes to production
- Requirements Management Tool: Loses traceability links → incomplete verification coverage
- Test Framework: Reports false positive test results → untested code deployed
Regulatory perspective:
- FAA/EASA (Aviation): DO-330 qualification mandatory for all tools impacting DO-178C certification
- ISO 26262 (Automotive): Tool Confidence Level (TCL) determination and qualification per Part 8
- IEC 61508 (Industrial): Tool validation requirements per Part 3, Annex A
DO-330 Framework Overview
Purpose of DO-330
RTCA DO-330 ("Software Tool Qualification Considerations", December 2011) provides guidance for qualifying software tools used in DO-178C airborne software development. It is a standalone document that repeats relevant DO-178C guidance so tool developers need not read DO-178C.
Key Principles:
- Tool Qualification is Project-Specific: A tool qualified for one project (e.g., DAL C autopilot) is not automatically qualified for another project (e.g., DAL A flight control)
- Tool Lifecycle: Tools must be developed following software development processes similar to DO-178C
- Operational Requirements: Tools must be qualified for their specific operational usage, not generic capabilities
- Evidence-Based: Tool qualification relies on demonstrable evidence (test results, reviews, analyses)
Tool Qualification Levels (TQL)
DO-330 defines five Tool Qualification Levels. The TQL for a given tool is determined (per DO-178C Section 12.2.2) by:
- Software Level of the software being developed (DAL A/B/C/D/E)
- Tool Criteria: Criteria 1 (development tool whose output is part of the airborne software and is not verified), Criteria 2 (tool that automates verification and whose output justifies reducing other verification or development activities), Criteria 3 (tool that, within its intended use, could fail to detect an error)
| TQL | Applicable DAL | Tool Criteria | Tool Impact | Qualification Rigor |
|---|---|---|---|---|
| TQL-1 | Level A | Criteria 1 (development tool) | Tool output becomes part of the executable object code without verification | Highest: Full development lifecycle + operational verification + independence |
| TQL-2 | Level B | Criteria 1 | Same as TQL-1 | High: Full lifecycle + operational verification + reduced independence |
| TQL-3 | Level C | Criteria 1 | Same as TQL-1 | Medium: Reduced lifecycle activities + operational verification |
| TQL-4 | Level D (Criteria 1); Levels A/B (Criteria 2) | Criteria 1 or 2 | Tool automates verification and its output justifies reducing other verification or development activities | Medium: Operational verification only (no full development lifecycle) |
| TQL-5 | Levels C/D (Criteria 2); all levels (Criteria 3) | Criteria 2 or 3 | Tool could fail to detect an error but cannot introduce one | Low: Basic operational verification |
Source: DO-178C Section 12.2.2, Table 12-1 (DO-330 defines the objectives associated with each TQL)
Tool Classification Decision Process
Step 1: Determine if tool requires qualification
Does the tool's output eliminate, reduce, or automate a DO-178C objective?
│
├─ YES → Tool requires qualification (proceed to Step 2)
│
└─ NO → Tool does not require qualification (normal process verification is sufficient)
Examples of tools NOT requiring qualification:
- Text editors (output is manually reviewed)
- Compilers where object code is fully verified (e.g., manual assembly review)
- Project management tools (do not impact technical outputs)
- Version control systems (Git, SVN) - unless used as configuration management tool per DO-178C Section 7
Step 2: Classify tool as Development or Verification
Does the tool generate software (code, data, documentation) that becomes part of the airborne software?
│
├─ YES → Development Tool
│ │
│ └─ Is the tool output verified by other means before becoming part of airborne software?
│ │
│ ├─ NO (tool output is not verified) → TQL-1/2/3 (based on DAL)
│ │
│ └─ YES (tool output is verified) → No qualification required
│ (verification catches tool errors)
│
└─ NO → Verification Tool
   │
   └─ Is the tool's output used to justify eliminating or reducing verification or development activities other than those it automates (Criteria 2)?
      │
      ├─ YES → TQL-4 or TQL-5 (based on DAL)
      │
      └─ NO (tool could only fail to detect an error; Criteria 3) → TQL-5
Step 3: Determine TQL based on Software DAL
| Tool Criteria | DAL A | DAL B | DAL C | DAL D | DAL E |
|---|---|---|---|---|---|
| Criteria 1: Development tool (output not verified) | TQL-1 | TQL-2 | TQL-3 | TQL-4 | N/A |
| Criteria 2: Verification tool whose output justifies reducing other activities | TQL-4 | TQL-4 | TQL-5 | TQL-5 | N/A |
| Criteria 3: Verification tool that could only fail to detect an error | TQL-5 | TQL-5 | TQL-5 | TQL-5 | N/A |
Source: DO-178C Section 12.2.2, Table 12-1
Note: DAL E software has no DO-178C objectives, so no tool qualification is required.
Tool Qualification Artifacts (DO-330)
DO-330 requires specific artifacts (Tool Lifecycle Data) to demonstrate tool qualification. The required artifacts vary by TQL.
Summary Table: Required Artifacts by TQL
| Artifact | TQL-1 | TQL-2 | TQL-3 | TQL-4 | TQL-5 | Description |
|---|---|---|---|---|---|---|
| **Planning** | | | | | | |
| Tool Qualification Plan (TQP) | R | R | R | R | R | Overall strategy for tool qualification |
| Tool Operational Requirements (TOR) | R | R | R | R | R | How tool will be used in DO-178C lifecycle |
| Tool Development Plan | R | R | R | NR | NR | Development process for tool itself |
| **Requirements** | | | | | | |
| Tool Requirements | R | R | R | Reduced | Reduced | Functional requirements of tool |
| Requirements Traceability | R | R | R | NR | NR | Bi-directional traceability |
| **Design** | | | | | | |
| Tool Design Description | R | R | Reduced | NR | NR | Architecture and detailed design of tool |
| Design Traceability | R | R | Reduced | NR | NR | Design to requirements traceability |
| **Implementation** | | | | | | |
| Tool Source Code | R | R | Reduced | NR | NR | Actual tool implementation |
| Tool Executable | R | R | R | R | R | Binary/executable of qualified tool |
| **Verification** | | | | | | |
| Tool Verification Cases & Procedures | R | R | R | R | R | Test cases for TOR verification |
| Tool Verification Results | R | R | R | R | R | Evidence of TOR satisfaction |
| Tool Requirements Coverage | R | R | R | Reduced | Reduced | Coverage of tool requirements by tests |
| **Configuration Management** | | | | | | |
| Tool Configuration Management | R | R | R | R | R | Version control, baselines, change control |
| Tool Configuration Index | R | R | R | R | R | List of all tool components under CM |
| **Quality Assurance** | | | | | | |
| Tool Quality Assurance | R | R | R | R | R | QA process for tool development |
| Tool QA Records | R | R | R | R | R | QA review findings, audits |
| **Certification** | | | | | | |
| Tool Qualification Data (TQD) | R | R | R | R | R | Summary document submitted to FAA/EASA |
| Tool Accomplishment Summary | R | R | R | R | R | Statement of TQL achievement |
Legend: R = Required, NR = Not Required, Reduced = Required with reduced rigor
Source: DO-330 Annex A, Tables T-0 through T-10
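A planning checklist can be derived mechanically from the artifact matrix above. The sketch below encodes an illustrative subset of the rows ("R" = required, "NR" = not required, "Reduced" = required with reduced rigor); the helper name is ours:

```python
# Illustrative subset of the artifact-by-TQL matrix (values as in the table above).
ARTIFACTS = {
    "Tool Qualification Plan (TQP)": {"TQL-1": "R", "TQL-2": "R", "TQL-3": "R", "TQL-4": "R", "TQL-5": "R"},
    "Tool Development Plan": {"TQL-1": "R", "TQL-2": "R", "TQL-3": "R", "TQL-4": "NR", "TQL-5": "NR"},
    "Tool Design Description": {"TQL-1": "R", "TQL-2": "R", "TQL-3": "Reduced", "TQL-4": "NR", "TQL-5": "NR"},
    "Tool Verification Results": {"TQL-1": "R", "TQL-2": "R", "TQL-3": "R", "TQL-4": "R", "TQL-5": "R"},
}

def required_artifacts(tql: str) -> list[str]:
    """Artifacts that must be produced (fully or with reduced rigor) at a TQL."""
    return [name for name, levels in ARTIFACTS.items() if levels[tql] != "NR"]
```

At TQL-5, for instance, the Tool Development Plan drops out of the list, reflecting that verification tools need no development lifecycle data.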
Key Artifact Descriptions
Tool Operational Requirements (TOR)
The TOR is the most critical document for tool qualification. It defines HOW the tool will be used in the specific DO-178C project, not the general capabilities of the tool.
TOR Content (DO-330 Section 5.1):
- Tool Usage Description: Specific lifecycle processes where tool is used (e.g., "used to generate C code from Simulink models for SWE.3 implementation")
- Tool Operational Environment: Host platform (OS, hardware), target platform (if applicable), dependencies (libraries, other tools)
- Tool Inputs: What data is provided to the tool (e.g., Simulink model files in .slx format, coding standard configuration)
- Tool Outputs: What data is produced by the tool (e.g., C source code files, header files, build scripts)
- Tool Features Used: Specific features/capabilities used in the project (e.g., "Simulink Embedded Coder with MISRA C compliance checking enabled")
- Tool Features NOT Used: Capabilities explicitly excluded from qualification (e.g., "code optimization disabled", "floating-point math not used")
- Operational Constraints: Limitations on tool usage (e.g., "maximum model size: 1000 blocks", "no dynamic memory allocation in generated code")
- Error Detection Mechanisms: How tool errors will be detected (e.g., "object code review", "back-to-back testing", "static analysis of generated code")
- Tool Configuration: Tool settings, options, parameters that impact output (e.g., "code generation template: ert.tlc", "optimization level: -O0")
TOR Example (for Simulink Embedded Coder, TQL-1):
Tool Operational Requirements: Simulink Embedded Coder v24.1
Project: Flight Control Computer, DAL A Software
TQL: TQL-1 (DAL A Development Tool)
1. Tool Usage:
- Generate C source code from Simulink models for low-level requirements implementation
- Used in SWE.3 (Detailed Design & Unit Construction) process
- Generated code becomes executable object code without source-level verification
2. Operational Environment:
- Host: MATLAB R2024a on Windows 10 x64
- Target: Texas Instruments TMS570LC4357 (ARM Cortex-R5F, dual-core lockstep)
- Code Generation Template: Embedded Coder ert_shrlib.tlc (embedded real-time)
3. Tool Inputs:
- Simulink model files (.slx) representing low-level requirements
- Model configuration parameters (.mat)
- Custom code generation template (.tlc)
- MISRA C:2012 compliance configuration
4. Tool Outputs:
- C source code files (.c)
- C header files (.h)
- Build scripts (make files)
- Code generation report (HTML)
5. Tool Features Used:
- Embedded Coder: Production code generation
- MISRA C:2012 checker (integrated)
- Fixed-point arithmetic
- Rate-based execution (100 Hz, 10 Hz scheduling)
- Model-in-the-Loop (MIL) simulation
6. Tool Features NOT Used (excluded from qualification scope):
- Floating-point code generation (all computations use fixed-point)
- Dynamic memory allocation (all memory statically allocated)
- C++ code generation (C only)
- Code optimization (optimization disabled: -O0)
- AUTOSAR code generation
- Stateflow charts with complex junctions
7. Operational Constraints:
- Maximum model size: 2000 blocks
- Maximum nesting depth: 5 levels
- No MATLAB Function blocks (custom C code)
- No external mode simulation
- All blocks must have fixed-point data types
8. Error Detection Mechanisms:
- Object code review: Assembly-level review of generated code for DAL A
- Back-to-back testing: MIL vs. Software-in-the-Loop (SIL) vs. Processor-in-the-Loop (PIL)
- Static analysis: Verify generated code with LDRA or Polyspace
- MISRA C compliance: All generated code must pass MISRA C:2012 checking
9. Tool Configuration:
- Code generation template: ert_shrlib.tlc
- Compiler: TI ARM Compiler v20.2.7 (qualified separately as TQL-2)
- Optimization: Disabled (-O0)
- MISRA C compliance checking: Enabled (all mandatory rules)
- Custom storage classes: Disabled
- Code replacement libraries: None
TOR Verification Criteria:
- Each TOR requirement must be verifiable by test, analysis, or review
- TOR requirements become the basis for Tool Verification Cases & Procedures
Tool Verification Cases & Procedures (TVCP)
Tool verification demonstrates that the tool satisfies its Tool Operational Requirements (TOR).
Verification Methods (DO-330 Section 6):
- Testing: Execute tool with defined inputs, verify outputs match expected results
- Analysis: Mathematical or logical analysis of tool behavior
- Review: Inspection of tool design, code, or outputs
Test Coverage Requirements:
| TQL | Requirements Coverage | Structural Coverage | Independence |
|---|---|---|---|
| TQL-1 | 100% of TOR requirements | Statement + Decision + MC/DC | Verification by independent team |
| TQL-2 | 100% of TOR requirements | Statement + Decision | Verification by an independent person (not the developer) |
| TQL-3 | 100% of TOR requirements | Statement | No independence required |
| TQL-4 | 100% of TOR requirements | Not required | No independence required |
| TQL-5 | 100% of TOR requirements | Not required | No independence required |
Example Test Cases (Simulink Embedded Coder TQL-1):
Test Case: TC-TOOL-001 - Basic Code Generation
Objective: Verify tool generates syntactically correct C code from simple model
TOR Requirements: TOR-001 (Generate C code), TOR-004 (C syntax compliance)
Test Procedure:
1. Create Simulink model with basic blocks (Gain, Sum, Unit Delay)
2. Configure model for embedded target (TMS570LC4357)
3. Execute code generation
4. Verify generated .c and .h files exist
5. Verify generated code compiles without errors with TI ARM compiler
6. Verify generated code links into executable
Expected Results:
- Code generation completes without errors
- Generated files: model.c, model.h, model_data.c, ert_main.c
- Compiler produces 0 errors, 0 warnings
- Linker produces executable binary
Pass/Fail Criteria:
- PASS if all expected results achieved
- FAIL if code generation fails, compilation fails, or linking fails
Test Case: TC-TOOL-002 - Fixed-Point Code Generation
Objective: Verify tool generates correct fixed-point arithmetic code
TOR Requirements: TOR-006 (Fixed-point support), TOR-008 (Numerical accuracy)
Test Procedure:
1. Create Simulink model with fixed-point operations:
- Input: int16, scaling 2^-8 (range -128 to +127.996)
- Gain block: multiply by 1.5 (fixed-point 16-bit, 2^-8 scaling)
- Output: int16, scaling 2^-8
2. Execute code generation
3. Compile and execute generated code on target hardware
4. Inject test input values: 0.0, 10.5, 100.0, 127.0, -50.0
5. Compare PIL (Processor-in-the-Loop) output with MIL (Model-in-the-Loop) output
Expected Results:
- Generated code uses integer arithmetic (no floating-point operations)
- PIL output matches MIL output within numerical precision limits (< 1 LSB)
- All test vectors produce correct results
Pass/Fail Criteria:
- PASS if PIL vs. MIL difference < 1 LSB for all test vectors
- FAIL if any result exceeds precision tolerance
Test Case: TC-TOOL-003 - MISRA C Compliance
Objective: Verify tool generates MISRA C:2012 compliant code
TOR Requirements: TOR-005 (MISRA C compliance), TOR-008 (Coding standards)
Test Procedure:
1. Generate code from reference Simulink model
2. Run MISRA C:2012 checker (LDRA TBvision or PC-lint Plus) on generated code
3. Verify all mandatory rules are satisfied
4. Verify all required rules are satisfied (or deviations documented)
Expected Results:
- 0 violations of MISRA C:2012 mandatory rules
- 0 violations of MISRA C:2012 required rules (or all deviations justified)
Pass/Fail Criteria:
- PASS if 0 mandatory rule violations and 0 required rule violations
- FAIL if any mandatory rule violated
Test Case: TC-TOOL-015 - Robustness: Invalid Model Input
Objective: Verify tool detects and reports invalid model configuration
TOR Requirements: TOR-007 (Error detection), TOR-009 (Robustness)
Test Procedure:
1. Create Simulink model with invalid configuration:
- Floating-point data type (not allowed per TOR constraints)
2. Attempt code generation
3. Verify tool reports error and does not generate code
Expected Results:
- Code generation fails with error message
- Error message indicates floating-point data type not allowed
- No C code files generated
Pass/Fail Criteria:
- PASS if tool detects error and halts code generation
- FAIL if tool generates code despite invalid configuration
Tool Verification Results (TVR):
- Document actual results of executing all test cases
- Include pass/fail status, deviations, anomalies
- Requirements coverage matrix: Map test cases to TOR requirements
- Structural coverage report (TQL-1/2/3 only): Statement, decision, MC/DC coverage of tool source code
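The requirements coverage matrix in the TVR is simple set arithmetic over the TVCP trace. A minimal sketch, using the test-case and TOR IDs from the examples above (the helper function is ours):

```python
# Map each test case to the TOR requirements it verifies, then report
# coverage percentage and any TOR requirements with no verifying test case.
tvcp_trace = {
    "TC-TOOL-001": ["TOR-001", "TOR-004"],
    "TC-TOOL-002": ["TOR-006", "TOR-008"],
    "TC-TOOL-003": ["TOR-005", "TOR-008"],
    "TC-TOOL-015": ["TOR-007", "TOR-009"],
}

def coverage_report(all_tor_ids: set[str], trace: dict[str, list[str]]):
    """Return (percent of TOR requirements covered, sorted uncovered IDs)."""
    covered = {tor for tors in trace.values() for tor in tors}
    uncovered = sorted(all_tor_ids - covered)
    percent = 100.0 * len(all_tor_ids & covered) / len(all_tor_ids)
    return percent, uncovered
```

Any non-empty "uncovered" list blocks the 100% requirements-coverage objective, so this check is worth automating in the verification pipeline.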
Tool Qualification Data (TQD)
The TQD is the evidence package submitted to the certification authority (FAA, EASA) for approval. It indexes and packages all tool qualification evidence.
TQD Contents (DO-330 Section 9):
- Tool Identification: Tool name, version, vendor, TQL level
- Tool Qualification Plan (TQP): Reference to TQP document
- Tool Operational Requirements (TOR): Reference to TOR document
- Tool Verification Summary: Summary of verification activities and results
- Tool Configuration Management: CM process and baselines
- Tool Quality Assurance: QA process and records
- Tool Accomplishment Summary: Statement that tool satisfies TQL requirements
- Tool Lifecycle Data Index: List of all tool qualification artifacts
- Outstanding Issues: Any open problem reports or limitations
Certification Authority Review:
- FAA/EASA reviews TQD package
- Conducts tool audit (may request to see detailed artifacts)
- Issues Tool Qualification Approval Letter (or requests additional evidence)
ISO 26262-8 Tool Confidence Level (TCL)
ISO 26262-8 ("Supporting processes"), Clause 11 ("Confidence in the use of software tools"), provides tool qualification guidance for automotive safety software development. The approach differs from DO-330 but serves the same purpose: ensuring that tools do not introduce errors.
Tool Confidence Level (TCL) Determination
ISO 26262-8 uses Tool Confidence Level (TCL) rather than Tool Qualification Level (TQL). The TCL is determined by two factors:
1. Tool Impact (TI): Possibility that a tool malfunction introduces an error into, or fails to detect an error in, a safety-related work product
- TI1: There is an argument that no such possibility exists (no impact)
- TI2: All other cases (impact possible)
2. Tool Error Detection (TD): Confidence that a tool malfunction will be prevented or detected
- TD1: High degree of confidence that a malfunction will be prevented or detected
- TD2: Medium degree of confidence
- TD3: All other cases (low confidence)
The ASIL (A/B/C/D) of the software being developed does not change the TCL itself; it determines how strongly each qualification method is recommended (see the method tables below).
TCL Determination Table (ISO 26262-8, Table 3):
| Tool Impact | TD1 (high detection) | TD2 (medium detection) | TD3 (low detection) |
|---|---|---|---|
| TI1 (no impact possible) | TCL 1 | TCL 1 | TCL 1 |
| TI2 (impact possible) | TCL 1 | TCL 2 | TCL 3 |
TCL Levels:
- TCL 1: No qualification required
- TCL 2: Medium qualification rigor
- TCL 3: Highest qualification rigor (validation and/or development per a safety standard)
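Per ISO 26262-8:2018 Table 3, where TI1 denotes that no impact is possible and TD1 denotes high detection confidence, the TI/TD to TCL mapping can be sketched as follows (the function name is ours):

```python
# Minimal sketch of the ISO 26262-8 Table 3 TCL determination.
def determine_tcl(ti: int, td: int) -> int:
    """ti: 1 (no impact possible) or 2; td: 1 (high), 2 (medium), 3 (low)."""
    if ti == 1:
        return 1  # no possible impact: TCL 1 regardless of detection class
    return {1: 1, 2: 2, 3: 3}[td]  # TI2: detection confidence drives the TCL
```

A code generator whose output feeds directly into the build (TI2) with only weak downstream checks (TD3) lands at TCL 3, the highest qualification rigor.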
Tool Qualification Methods (ISO 26262-8)
ISO 26262-8 provides four methods (1a through 1d) for qualifying tools classified TCL 2 or TCL 3; TCL 1 tools require no qualification:
Method 1a: Increased Confidence from Use
- Tool has an established usage history in comparable applications with no relevant malfunctions
- Evidence: Service history, problem reports, usage statistics
Method 1b: Evaluation of the Tool Development Process
- Tool was developed using an assessed, rigorous software development process
- Evidence: Process assessment (e.g., against ASPICE), tool development plan, verification results
Method 1c: Validation of the Software Tool
- Tool is validated against its requirements for its specific usage in the project
- Evidence: Validation plan, test cases, validation results
Method 1d: Development in Accordance with a Safety Standard
- Tool itself is developed per a safety standard (e.g., ISO 26262, or DO-178C with DO-330)
- Evidence: Full compliance documentation for the tool's own development
Recommended methods by TCL (qualitative summary; the normative ++/+ recommendation for each method depends on the ASIL of the software developed with the tool, per ISO 26262-8 Tables 4 and 5):
| TCL | Typical methods at lower ASILs (A/B) | Typical methods at higher ASILs (C/D) |
|---|---|---|
| TCL 1 | No qualification required | No qualification required |
| TCL 2 | 1a or 1b | 1c or 1d increasingly recommended |
| TCL 3 | 1a or 1b, typically combined with 1c | 1c or 1d highly recommended |
Note: ++ = Highly Recommended, + = Recommended; consult ISO 26262-8 Tables 4 and 5 for the exact assignments
Tool Operational Requirements (TOR) Template
This template provides a structured format for documenting Tool Operational Requirements per DO-330 Section 5.1.
# Tool Operational Requirements (TOR)
## [Tool Name] Version [X.Y.Z]
**Project**: [Project Name]
**Software Level / ASIL**: [DAL A/B/C/D] / [ASIL A/B/C/D]
**TQL / TCL**: [TQL-1/2/3/4/5] / [TCL 1/2/3]
**Tool Qualification Standard**: [DO-330 | ISO 26262-8]
**Document Version**: 1.0
**Date**: YYYY-MM-DD
**Author**: [Name]
**Approver**: [Name, Role]
---
## 1. Tool Identification
| Attribute | Value |
|-----------|-------|
| Tool Name | [Official tool name] |
| Tool Version | [Exact version number, including patch level] |
| Tool Vendor | [Company name] |
| Tool License | [License type, serial number] |
| Tool Classification | [Development Tool \| Verification Tool] |
| TQL / TCL | [TQL-X \| TCL-X] |
---
## 2. Tool Usage Description
### 2.1 Purpose
[Describe WHY the tool is used in the project]
**Example**: "Simulink Embedded Coder is used to automatically generate C source code from Simulink models representing low-level software requirements. Generated code implements control algorithms for the Autopilot Mode Controller (DAL B)."
### 2.2 Software Lifecycle Integration
[Describe WHERE in the software lifecycle the tool is used]
**ASPICE Process**: [SWE.1 | SWE.2 | SWE.3 | SWE.4 | SWE.5 | SWE.6 | SUP.2 | SUP.8]
**DO-178C Section**: [Section 5.3 - Software Coding Process]
**Lifecycle Data Produced**: [Source Code | Test Cases | Requirements | Traceability Data]
---
## 3. Operational Environment
### 3.1 Host Environment
- **Operating System**: [Windows 10 x64, Ubuntu 22.04 LTS, etc.]
- **Hardware**: [Intel Core i7, 16 GB RAM, 500 GB SSD]
- **Required Software**: [MATLAB R2024a, Python 3.10, Java 11]
### 3.2 Target Environment (if applicable)
- **Target Processor**: [Texas Instruments TMS570LC4357, ARM Cortex-R5F]
- **Target Operating System**: [VxWorks 7.0, Bare-metal, FreeRTOS]
- **Compiler/Toolchain**: [TI ARM Compiler v20.2.7, GCC 11.2]
---
## 4. Tool Inputs
| Input Type | Format | Source | Example |
|------------|--------|--------|---------|
| [Model files] | [.slx] | [Simulink GUI] | [autopilot_mode_controller.slx] |
| [Configuration] | [.m script] | [Manual creation] | [code_gen_config.m] |
| [Templates] | [.tlc] | [Vendor-provided] | [ert_shrlib.tlc] |
### 4.1 Input Constraints
[List any constraints on tool inputs, e.g., maximum file size, naming conventions, format restrictions]
**Example**:
- Simulink model size: Maximum 2000 blocks
- Block nesting depth: Maximum 5 levels
- Data types: Fixed-point only (no floating-point)
- File naming: Must follow [A-Za-z0-9_] pattern
---
## 5. Tool Outputs
| Output Type | Format | Consumer | Example |
|-------------|--------|----------|---------|
| [Source code] | [.c, .h] | [Compiler] | [autopilot_mode_controller.c] |
| [Build scripts] | [Makefile] | [Build system] | [Makefile] |
| [Reports] | [HTML, PDF] | [Human review] | [code_gen_report.html] |
### 5.1 Output Constraints
[List any constraints on tool outputs]
**Example**:
- Generated code must comply with MISRA C:2012
- Maximum function complexity: Cyclomatic complexity ≤ 15
- No dynamic memory allocation (malloc/free)
- All global variables declared static
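Output constraints like these are enforceable by automated checks over the generated sources. The sketch below spot-checks one constraint (no dynamic memory allocation); a real project would rely on a qualified MISRA checker, and this regex sketch only flags the obvious cases:

```python
import re

# Flag calls to the C dynamic-allocation functions in generated source text.
BANNED_CALLS = re.compile(r"\b(malloc|calloc|realloc|free)\s*\(")

def check_no_dynamic_memory(source: str) -> list[str]:
    """Return findings of the form 'line N: <call>' for banned allocation calls."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        match = BANNED_CALLS.search(line)
        if match:
            findings.append(f"line {lineno}: {match.group(1)}")
    return findings
```

Running such a check on every code generation (alongside the MISRA analysis) gives an early, cheap signal before static analysis results are available.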
---
## 6. Tool Features Used
[List specific features, capabilities, and options that are USED in the project]
**Example for Simulink Embedded Coder**:
- Production code generation (Embedded Coder)
- Fixed-point code generation
- MISRA C:2012 compliance checking
- Rate-based execution (multi-rate models)
- Model-in-the-Loop (MIL) simulation
- Software-in-the-Loop (SIL) simulation
- Code replacement libraries (disabled)
- Custom storage classes (disabled)
---
## 7. Tool Features NOT Used
[List features explicitly EXCLUDED from qualification scope]
**Example**:
- Floating-point code generation: NOT USED
- Dynamic memory allocation: NOT USED
- C++ code generation: NOT USED (C only)
- AUTOSAR code generation: NOT USED
- Stateflow complex junctions: NOT USED
- External mode simulation: NOT USED
- Code optimization (-O1, -O2, -O3): NOT USED (only -O0)
**Rationale**: Excluding these features reduces qualification scope and cost. If features are needed in future, tool must be re-qualified.
---
## 8. Tool Configuration
[Document all tool settings, options, and parameters that impact tool behavior]
### 8.1 Code Generation Settings
| Setting | Value | Rationale |
|---------|-------|-----------|
| Code generation template | ert_shrlib.tlc | Embedded real-time target |
| Optimization level | -O0 (disabled) | Ensure code traceability |
| MISRA C compliance | Enabled (all mandatory rules) | Safety standard compliance |
| Custom code insertion | Disabled | Avoid unqualified code |
| Inline parameters | Disabled | Maintain traceability |
### 8.2 Fixed-Point Settings
| Setting | Value |
|---------|-------|
| Default word length | 16-bit |
| Default fraction length | 8-bit |
| Overflow handling | Saturation |
| Rounding mode | Floor |
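The range and resolution implied by these settings follow directly from the word and fraction lengths. A small sketch (the helper name is ours):

```python
# Derive (min, max, resolution) of a signed fixed-point format from its
# word length and number of fractional bits.
def fxp_range(word_bits: int, frac_bits: int):
    lsb = 2.0 ** -frac_bits                  # resolution: weight of one LSB
    lo = -(2 ** (word_bits - 1)) * lsb       # most negative representable value
    hi = (2 ** (word_bits - 1) - 1) * lsb    # most positive representable value
    return lo, hi, lsb
```

`fxp_range(16, 8)` gives a range of -128.0 to +127.99609375 with a resolution of 2^-8 (about 0.0039), matching the signal ranges used in the TOR example.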
### 8.3 Tool Environment Variables
| Variable | Value | Purpose |
|----------|-------|---------|
| MATLAB_ROOT | C:\Program Files\MATLAB\R2024a | MATLAB installation path |
| SIMULINK_ROOT | C:\Program Files\MATLAB\R2024a\simulink | Simulink installation |
---
## 9. Operational Constraints
[List limitations and restrictions on tool usage]
**Example**:
1. **Model Size**: Maximum 2000 blocks per model
2. **Nesting Depth**: Maximum 5 subsystem levels
3. **Concurrency**: Tool execution is single-threaded (no parallel builds)
4. **Memory**: Minimum 16 GB RAM required for code generation
5. **Network**: Tool must run on isolated network (no internet access during code generation)
6. **User Access**: Only authorized users with tool training may execute code generation
---
## 10. Error Detection Mechanisms
[Describe how tool errors will be detected and prevented from propagating to software]
### 10.1 Built-In Error Detection
- Tool performs internal consistency checks before generating code
- Tool reports errors and warnings via diagnostic viewer
- Tool logs all errors to code_generation_log.txt
### 10.2 Verification of Tool Outputs
| Verification Method | Description | Performed By | Frequency |
|---------------------|-------------|--------------|-----------|
| **Compilation** | Generated code must compile without errors | Compiler | Every code generation |
| **Static Analysis** | MISRA C checker verifies coding standard compliance | LDRA TBvision | Every code generation |
| **Back-to-Back Testing** | MIL vs. SIL vs. PIL output comparison | Test engineer | Every model change |
| **Code Review** | Manual inspection of generated code structure | Software engineer | Major model changes |
| **Requirements Traceability** | Verify generated code traces to model blocks | Traceability tool | Every release |
### 10.3 Error Response Procedure
1. If tool error detected → STOP software development
2. Log error in problem report system
3. Investigate root cause (tool defect vs. user error vs. invalid model)
4. If tool defect → Report to tool vendor, evaluate impact on previously generated code
5. If user error → Correct model, re-generate code, re-verify
6. If invalid model → Correct model per modeling standards
---
## 11. Tool Qualification Approach
### 11.1 Qualification Method
[Select qualification method based on TQL/TCL]
**DO-330**:
- TQL-1/2/3: Full tool qualification per DO-330 (tool development lifecycle + operational verification)
- TQL-4/5: Operational verification only
**ISO 26262-8**:
- TCL 1: No qualification required
- TCL 2: Method 1a (confidence from use), 1b (process evaluation), 1c (validation), or 1d (development per a safety standard), selected per ASIL
- TCL 3: Typically Method 1c (validation) or 1d (development per a safety standard), especially at higher ASILs
### 11.2 Qualification Evidence
[List artifacts that will demonstrate tool qualification]
**Required Artifacts**:
- [ ] Tool Qualification Plan (TQP)
- [ ] Tool Operational Requirements (TOR) - this document
- [ ] Tool Verification Plan (TVP)
- [ ] Tool Verification Cases & Procedures (TVCP)
- [ ] Tool Verification Results (TVR)
- [ ] Tool Configuration Management Plan
- [ ] Tool Quality Assurance Plan
- [ ] Tool Accomplishment Summary
- [ ] Tool Qualification Data (TQD) package for FAA/EASA submission
---
## 12. Verification Requirements
[Define how TOR will be verified]
### 12.1 TOR Verification Strategy
| TOR Requirement ID | Verification Method | Acceptance Criteria |
|--------------------|---------------------|---------------------|
| TOR-001: Generate C code | Test | Code generation completes without errors, .c/.h files produced |
| TOR-002: MISRA C compliance | Analysis | 0 MISRA C mandatory rule violations |
| TOR-003: Fixed-point accuracy | Test | MIL vs. PIL difference < 1 LSB for all test cases |
| TOR-004: Error detection | Test | Tool detects invalid model configurations and reports error |
### 12.2 Coverage Requirements
**TQL-1**:
- Requirements coverage: 100% of TOR requirements verified
- Structural coverage: Statement + Decision + MC/DC coverage of tool source code
**TQL-2**:
- Requirements coverage: 100% of TOR requirements verified
- Structural coverage: Statement + Decision coverage
**TQL-3**:
- Requirements coverage: 100% of TOR requirements verified
- Structural coverage: Statement coverage
**TQL-4/5**:
- Requirements coverage: 100% of TOR requirements verified
- Structural coverage: Not required
---
## 13. Change Management
### 13.1 TOR Change Control
Any changes to this TOR document require:
- Change request submitted to Configuration Control Board (CCB)
- Impact analysis: Does change affect tool qualification status?
- CCB approval before implementing change
- Update TOR document version number
- Re-verify affected TOR requirements
### 13.2 Tool Version Upgrades
If tool version is upgraded (e.g., Simulink R2024a → R2024b):
- Perform impact analysis: What changed in new version?
- Determine if re-qualification required (DO-330 Section 12.3)
- If features used in TOR are unchanged → Regression testing may be sufficient
- If features changed → Full re-qualification required
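The upgrade decision above reduces to comparing the feature set used per the TOR against the feature set changed in the new version. A minimal sketch (names are ours):

```python
# Decide re-qualification scope after a tool version upgrade: if any feature
# within the TOR's qualified scope changed, full re-qualification is needed;
# otherwise regression testing may be sufficient.
def requalification_scope(features_used: set[str], features_changed: set[str]) -> str:
    affected = features_used & features_changed
    if affected:
        return "full re-qualification"  # a qualified feature changed
    return "regression testing"  # TOR-scoped features unchanged
```

For example, a change limited to AUTOSAR code generation (excluded from the TOR scope above) would call for regression testing only, while a change to fixed-point code generation would trigger full re-qualification.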
---
## 14. Traceability
### 14.1 TOR to Tool Verification Cases
[Maintain traceability matrix from TOR requirements to test cases]
| TOR Requirement | Test Case(s) | Status |
|-----------------|--------------|--------|
| TOR-001 | TC-TOOL-001, TC-TOOL-002 | Verified |
| TOR-002 | TC-TOOL-003 | Verified |
| TOR-003 | TC-TOOL-002, TC-TOOL-010 | Verified |
### 14.2 TOR to Software Lifecycle Data
[Map TOR to software artifacts produced using the tool]
| Software Artifact | TOR Requirement | Relationship |
|-------------------|-----------------|--------------|
| autopilot_mode_controller.c | TOR-001 | Generated by tool |
| code_gen_report.html | TOR-005 | Generated by tool |
| MIL_vs_PIL_results.xlsx | TOR-003 | Tool output verification |
---
## 15. Approval
| Role | Name | Signature | Date |
|------|------|-----------|------|
| Tool Qualification Lead | [Name] | __________ | YYYY-MM-DD |
| Software Development Lead | [Name] | __________ | YYYY-MM-DD |
| Quality Assurance Manager | [Name] | __________ | YYYY-MM-DD |
| Certification Liaison | [Name] | __________ | YYYY-MM-DD |
---
## Document Revision History
| Version | Date | Author | Changes |
|---------|------|--------|---------|
| 1.0 | YYYY-MM-DD | [Name] | Initial release |
Tool Verification Plan (TVP) Template
# Tool Verification Plan (TVP)
## [Tool Name] Version [X.Y.Z]
**Project**: [Project Name]
**TQL / TCL**: [TQL-X | TCL-X]
**Document Version**: 1.0
**Date**: YYYY-MM-DD
**Author**: [Name]
**Approver**: [Name, Role]
---
## 1. Introduction
### 1.1 Purpose
This Tool Verification Plan (TVP) defines the strategy, methods, and criteria for verifying that [Tool Name] satisfies its Tool Operational Requirements (TOR) as documented in [TOR Document Reference].
### 1.2 Scope
This plan covers verification of [Tool Name] version [X.Y.Z] for use in [Project Name] at [TQL-X | TCL-X] level per [DO-330 | ISO 26262-8].
---
## 2. Verification Strategy
### 2.1 Verification Methods
| Method | Description | Usage |
|--------|-------------|-------|
| **Testing** | Execute tool with defined inputs, verify outputs match expected results | Primary method for functional verification |
| **Analysis** | Mathematical or logical analysis of tool behavior | Used for algorithms, numerical accuracy |
| **Review** | Inspection of tool design, code, or documentation | Used for traceability, standards compliance |
### 2.2 Verification Environment
**Host Environment**:
- Operating System: [Windows 10 x64, Ubuntu 22.04]
- Hardware: [Intel Core i7, 32 GB RAM]
- Tool Under Test: [Tool Name] version [X.Y.Z]
- Supporting Tools: [Compilers, static analyzers, coverage tools]
**Target Environment** (if applicable):
- Target Hardware: [Development board, evaluation kit]
- Target Operating System: [RTOS, bare-metal]
---
## 3. Verification Objectives
### 3.1 Requirements Coverage
**Objective**: Verify 100% of TOR requirements are satisfied
**Method**:
- Each TOR requirement must have at least one verification test case
- Traceability matrix maintained: TOR Requirement ←→ Test Case(s)
- Verification results documented for each test case
**Acceptance Criteria**:
- All TOR requirements have verification evidence
- All test cases pass (or deviations documented and approved)
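The "every TOR requirement has at least one test case" objective reduces to a gap check over the traceability matrix. A minimal sketch, with requirement IDs that mirror the tables above but are illustrative:

```python
# Sketch: verifying 100% requirements coverage from a traceability matrix.
# IDs are illustrative; a real matrix would be exported from the RM tool.

traceability = {
    "TOR-001": ["TC-TOOL-001", "TC-TOOL-002"],
    "TOR-002": ["TC-TOOL-003"],
    "TOR-003": ["TC-TOOL-002", "TC-TOOL-010"],
    "TOR-004": [],  # deliberately uncovered to show the gap report
}

uncovered = sorted(tor for tor, tcs in traceability.items() if not tcs)
coverage = 100.0 * (len(traceability) - len(uncovered)) / len(traceability)

print(f"Requirements coverage: {coverage:.0f}%")   # 75% here
print(f"Uncovered: {uncovered}")                    # must be empty to pass
```

The acceptance criterion above is met only when `uncovered` is empty, i.e. coverage is 100%.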
### 3.2 Structural Coverage (TQL-1/2/3 only)
**Objective**: Achieve structural coverage of tool source code
| TQL | Statement Coverage | Decision Coverage | MC/DC Coverage |
|-----|-------------------|-------------------|----------------|
| TQL-1 | 100% | 100% | 100% |
| TQL-2 | 100% | 100% | Not required |
| TQL-3 | 100% | Not required | Not required |
| TQL-4/5 | Not required | Not required | Not required |
**Method**:
- Instrument tool source code with coverage tool (e.g., GCov, LDRA)
- Execute all verification test cases
- Measure statement, decision, MC/DC coverage
- Analyze uncovered code (dead code, defensive code, error handling)
**Acceptance Criteria**:
- Coverage metrics meet TQL requirements
- All uncovered code justified (e.g., error handling for unreachable conditions)
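The coverage-gate step can be automated by parsing the coverage tool's summary. This sketch assumes gcov's `Lines executed:` summary text; treat the exact format, file name, and threshold handling as assumptions of the example.

```python
import re

# Sketch: checking a coverage-tool summary against the TQL threshold.
# The sample text mimics gcov's "Lines executed:..." summary line; the exact
# format and file name are assumptions of this example.

GCOV_SUMMARY = """\
File 'codegen_core.c'
Lines executed:96.40% of 250
"""

def line_coverage_percent(summary: str) -> float:
    match = re.search(r"Lines executed:\s*([\d.]+)%", summary)
    if match is None:
        raise ValueError("no coverage summary found")
    return float(match.group(1))

pct = line_coverage_percent(GCOV_SUMMARY)
print(pct, "PASS" if pct >= 100.0 else "ANALYZE UNCOVERED CODE")
```

Anything below 100% is not an automatic failure: per the acceptance criteria, the uncovered code must be analyzed and justified (dead code, defensive code, unreachable error handling).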
---
## 4. Test Case Development
### 4.1 Test Case Categories
| Category | Purpose | Example |
|----------|---------|---------|
| **Functional** | Verify tool performs intended functions | Tool generates C code from model |
| **Robustness** | Verify tool handles invalid inputs gracefully | Tool detects and reports invalid model configuration |
| **Performance** | Verify tool meets performance requirements | Code generation completes within 5 minutes for 2000-block model |
| **Compliance** | Verify tool output complies with standards | Generated code passes MISRA C checker |
| **Traceability** | Verify tool maintains traceability | Generated code traceable to model blocks |
### 4.2 Test Case Specification
Each test case shall include:
- **Test Case ID**: Unique identifier (e.g., TC-TOOL-001)
- **TOR Requirements**: TOR requirements verified by this test
- **Test Objective**: What is being verified
- **Test Preconditions**: Initial state before test execution
- **Test Inputs**: Specific inputs provided to tool
- **Test Procedure**: Step-by-step execution steps
- **Expected Results**: What should happen if tool is correct
- **Pass/Fail Criteria**: Objective criteria for test success/failure
- **Test Environment**: Host/target configuration
---
## 5. Test Execution
### 5.1 Test Execution Process
1. **Setup**: Configure test environment per test case specification
2. **Execute**: Run tool with specified inputs
3. **Observe**: Record actual tool behavior and outputs
4. **Compare**: Compare actual results vs. expected results
5. **Document**: Record pass/fail status, anomalies, deviations
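The five-step process above can be sketched as a minimal harness. The `ToolTestCase` record follows the field list in section 4.2; `run_tool` is a stand-in for invoking the actual tool under test, not a real API.

```python
from dataclasses import dataclass

# Sketch of the setup -> execute -> observe -> compare -> document loop.
# run_tool is a hypothetical stub for the tool invocation.

@dataclass
class ToolTestCase:
    test_id: str
    tor_requirements: list
    inputs: dict
    expected: str

def run_tool(inputs):            # stand-in for the tool under test
    return "OK: " + inputs["model"]

def execute(tc: ToolTestCase) -> dict:
    actual = run_tool(tc.inputs)                            # Execute + Observe
    status = "PASS" if actual == tc.expected else "FAIL"    # Compare
    return {"id": tc.test_id, "actual": actual, "status": status}  # Document

tvr = execute(ToolTestCase("TC-TOOL-001", ["TOR-001"],
                           {"model": "autopilot"}, "OK: autopilot"))
print(tvr["status"])   # → PASS
```

Each returned record becomes one row of the Tool Verification Results (TVR) described in section 6.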
### 5.2 Test Execution Schedule
| Test Phase | Test Cases | Timeframe | Responsible |
|------------|-----------|-----------|-------------|
| Unit Testing | TC-TOOL-001 through TC-TOOL-050 | Weeks 1-4 | Tool verification engineer |
| Integration Testing | TC-TOOL-051 through TC-TOOL-075 | Weeks 5-6 | Integration test lead |
| Regression Testing | All test cases (after tool updates) | Week 7 | QA team |
---
## 6. Verification Results Documentation
### 6.1 Tool Verification Results (TVR)
For each test case, document:
- Test case ID
- Date executed
- Tester name
- Actual results
- Pass/Fail status
- Deviations (if any)
- Problem reports (if test failed)
### 6.2 Coverage Analysis Report
For TQL-1/2/3, document:
- Statement coverage percentage
- Decision coverage percentage
- MC/DC coverage percentage
- List of uncovered code with justification
- Coverage tool output (HTML reports, metrics)
---
## 7. Independence Requirements
| TQL | Independence Requirement |
|-----|--------------------------|
| TQL-1 | Verification performed by independent verification team (separate organizational unit) |
| TQL-2 | Verification performed by person(s) other than tool developer |
| TQL-3 | No independence required |
| TQL-4/5 | No independence required |
**Implementation**:
- TQL-1: Verification team reports to a different manager than the development team
- TQL-2: Peer review process ensures test cases are developed by a different person than the tool developer
---
## 8. Problem Reporting
### 8.1 Tool Defects
If tool verification test fails:
1. Create problem report (PR) with:
- PR ID, Date, Reporter
- Test case that failed
- Actual results vs. expected results
- Severity (Critical, Major, Minor)
- Impact assessment
2. Classify defect:
- Tool defect: Bug in tool implementation
- TOR defect: TOR requirement is incorrect or ambiguous
- Test defect: Test case is incorrect
3. Assign to appropriate team for resolution
4. Re-test after fix implemented
5. Close PR when verification passes
### 8.2 Tool Limitations
If tool cannot satisfy a TOR requirement:
- Document limitation in Tool Accomplishment Summary
- Assess impact on software development
- Implement compensating measures (e.g., additional verification of tool output)
- Obtain certification authority approval for limitation
---
## 9. Regression Testing
Regression testing is required when:
- Tool version is upgraded
- Tool configuration changes
- Operating system is upgraded
- Compiler/toolchain is upgraded
- TOR is modified
**Regression Test Strategy**:
- Re-execute all verification test cases
- Verify coverage metrics remain compliant
- Update Tool Verification Results (TVR) with new test results
---
## 10. Approval
| Role | Name | Signature | Date |
|------|------|-----------|------|
| Tool Verification Lead | [Name] | __________ | YYYY-MM-DD |
| Quality Assurance Manager | [Name] | __________ | YYYY-MM-DD |
| Certification Liaison | [Name] | __________ | YYYY-MM-DD |
# Evidence Package Checklist
This checklist ensures all required tool qualification evidence is generated and submitted for certification authority review.
## DO-330 Tool Qualification Evidence Checklist
**Project**: _________________________ **Tool**: _________________________ **Version**: _________ **TQL**: _________ **Date**: _________________________
### Planning Phase
- [ ] **Tool Qualification Plan (TQP)**
  - [ ] Tool qualification strategy defined
  - [ ] TQL level justified
  - [ ] Qualification activities scheduled
  - [ ] Roles and responsibilities assigned
  - Approved by: _____________________ Date: __________
- [ ] **Tool Operational Requirements (TOR)**
  - [ ] Tool usage described
  - [ ] Operational environment specified
  - [ ] Tool inputs/outputs defined
  - [ ] Tool features used/not used documented
  - [ ] Error detection mechanisms specified
  - Approved by: _____________________ Date: __________
### Development Phase (TQL-1/2/3 only)
- [ ] **Tool Development Plan**
  - [ ] Development lifecycle defined
  - [ ] Development standards specified
  - [ ] Development environment documented
- [ ] **Tool Requirements Specification**
  - [ ] Functional requirements defined
  - [ ] Performance requirements specified
  - [ ] Interface requirements documented
  - [ ] Traceability to TOR established
- [ ] **Tool Design Description**
  - [ ] Tool architecture documented
  - [ ] Module descriptions provided
  - [ ] Interface control documents created
  - [ ] Traceability to requirements established
- [ ] **Tool Source Code**
  - [ ] Source code developed per coding standards
  - [ ] Code reviews completed
  - [ ] Traceability to low-level requirements verified
### Verification Phase
- [ ] **Tool Verification Plan (TVP)**
  - [ ] Verification strategy defined
  - [ ] Test case development approach specified
  - [ ] Coverage requirements defined
  - [ ] Independence requirements specified
  - Approved by: _____________________ Date: __________
- [ ] **Tool Verification Cases & Procedures (TVCP)**
  - [ ] Test cases developed for all TOR requirements
  - [ ] Test procedures documented
  - [ ] Expected results specified
  - [ ] Pass/fail criteria defined
  - [ ] Traceability matrix: TOR ←→ Test Cases created
- [ ] **Tool Verification Results (TVR)**
  - [ ] All test cases executed
  - [ ] Actual results documented
  - [ ] Pass/fail status recorded
  - [ ] Deviations documented and justified
  - [ ] Problem reports (if any) resolved
- [ ] **Tool Requirements Coverage Analysis**
  - [ ] 100% of TOR requirements verified
  - [ ] Coverage gaps (if any) justified
  - [ ] Traceability verified
- [ ] **Tool Structural Coverage Analysis (TQL-1/2/3 only)**
  - [ ] Statement coverage achieved: ______%
  - [ ] Decision coverage achieved: ______% (TQL-1/2 only)
  - [ ] MC/DC coverage achieved: ______% (TQL-1 only)
  - [ ] Uncovered code justified
  - [ ] Coverage reports generated
### Configuration Management
- [ ] **Tool Configuration Management Plan**
  - [ ] CM process defined
  - [ ] Baselines established
  - [ ] Change control process documented
  - [ ] Problem reporting process defined
- [ ] **Tool Configuration Index (TCI)**
  - [ ] All tool components identified
  - [ ] Version numbers specified
  - [ ] Baselines documented
- [ ] **Tool Configuration Management Records**
  - [ ] Change history documented
  - [ ] Baseline releases tracked
  - [ ] Problem reports logged and resolved
### Quality Assurance
- [ ] **Tool Quality Assurance Plan**
  - [ ] QA process defined
  - [ ] Review and audit schedule established
  - [ ] QA records requirements specified
- [ ] **Tool Quality Assurance Records**
  - [ ] QA reviews conducted
  - [ ] QA audits completed
  - [ ] Non-conformances resolved
  - [ ] QA sign-off obtained
### Certification Liaison
- [ ] **Tool Qualification Data (TQD)**
  - [ ] TQD package assembled
  - [ ] All required artifacts included
  - [ ] Tool lifecycle data indexed
  - [ ] Outstanding issues documented
- [ ] **Tool Accomplishment Summary (TAS)**
  - [ ] Summary of qualification activities
  - [ ] Statement of TQL achievement
  - [ ] Deviations documented
  - [ ] Limitations identified
  - Signed by: _____________________ Date: __________
### Submission to Certification Authority
- [ ] TQD package submitted to FAA/EASA
  - Submission date: __________
  - Certification authority contact: _____________________
- [ ] Tool audit scheduled (if required): __________
- [ ] Tool qualification approval received: __________ Date: __________
## Case Study 1: Jama Requirements Management Tool → TCL 1 (No Qualification Required)
### Tool Overview
**Tool**: Jama Connect (Requirements Management System)
**Version**: 8.75
**Vendor**: Jama Software
**Project**: Automotive ADAS Camera System (ASIL B)
**Tool Classification**: Requirements management tool (does not generate software or detect errors)
**TCL**: TCL 1 (per ISO 26262-8)
### Tool Impact Analysis
**Tool Impact (TI)**: TI1 (no possibility of introducing an error)
Rationale:
- Jama Connect is a requirements management database
- Tool stores requirements, manages traceability, generates reports
- Tool does NOT analyze requirements for correctness
- Tool does NOT detect errors in requirements
- A tool failure (e.g., database corruption, incorrect traceability) cannot insert an error into the software itself
- All requirements are reviewed manually; tool errors would be detected during those reviews
**Tool Error Detection (TD)**: Not evaluated
Rationale:
- Per ISO 26262-8, TD is only assessed when the tool impact is TI2
- (For awareness: detection confidence would be low, since a lost traceability link might not be noticed until late in the project and manual reviews may miss tool errors)
**TCL Determination**:
- Tool Impact: TI1
- Result: TCL 1 (per ISO 26262-8 Table 3; no qualification required)
**Conclusion**: Jama Connect does not require qualification for this project. However, basic operational verification is recommended as good practice.
### Operational Verification (Recommended Practice)
Even though TCL 1 requires no qualification, the project team performs basic operational verification:
**Verification Activities**:
1. **Database Integrity Test**:
   - Create test requirements in Jama
   - Link requirements to design elements and test cases
   - Export requirements to Excel
   - Verify all traceability links preserved in export
2. **Traceability Report Test**:
   - Generate traceability matrix report
   - Manually verify sample traceability links are correct
   - Test "orphan requirements" detection feature
3. **Backup/Restore Test**:
   - Backup Jama database
   - Restore to separate test instance
   - Verify all requirements and links restored correctly
4. **User Access Control Test**:
   - Verify only authorized users can modify requirements
   - Test read-only access for non-editors
**Results**:
- All tests passed
- No tool defects found
- Operational verification documented in Tool Usage Report
- Report archived with project quality records
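The database integrity test's export check reduces to comparing link sets before and after export. A minimal sketch; the link tuples and the export parser are illustrative stand-ins, not Jama's actual export format or API.

```python
# Sketch: "traceability links preserved in export" check from the database
# integrity test. Link tuples and parser are illustrative assumptions.

links_in_jama = {("REQ-001", "TEST-010"), ("REQ-002", "TEST-011"),
                 ("REQ-003", "DES-005")}

def parse_export(rows):
    """Turn exported (source, target) rows back into a link set."""
    return {(src, dst) for src, dst in rows}

exported_rows = [("REQ-001", "TEST-010"), ("REQ-002", "TEST-011"),
                 ("REQ-003", "DES-005")]

missing = links_in_jama - parse_export(exported_rows)
print("PASS" if not missing else f"FAIL, lost links: {sorted(missing)}")
```

The same set-difference pattern also works for the backup/restore test: restore to a test instance, re-export, and diff against the original link set.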
## Case Study 2: Simulink Model-Based Design Tool → TQL-4 Qualification
### Tool Overview
**Tool**: Simulink (Model-Based Design Environment)
**Version**: R2024a
**Vendor**: MathWorks
**Project**: Aerospace Autopilot Mode Controller (DAL B)
**Tool Classification**: Verification Tool (detects errors via simulation)
**TQL**: TQL-4 (per DO-330)
### Tool Qualification Strategy
Tool Usage:
- Simulink is used for Model-in-the-Loop (MIL) simulation to verify control algorithms
- MIL simulation detects errors in algorithm design (e.g., incorrect gain values, unstable control loops)
- Simulink is NOT used as a code generator (separate tool: Simulink Embedded Coder qualified as TQL-2)
Tool Impact Analysis:
- Development Tool? NO (Simulink models are not deployed to aircraft; only generated code is deployed)
- Verification Tool? YES (MIL simulation verifies requirements before code generation)
- Eliminates Verification? YES (MIL simulation replaces some manual analysis of control algorithms)
- Detects Errors? YES (simulation reveals algorithm errors, stability issues, performance problems)
TQL Determination:
- Verification tool that detects errors
- DAL B software
- Result: TQL-4
### Tool Operational Requirements (TOR)
TOR-SIM-001: Simulink shall accurately simulate continuous-time control algorithms
TOR-SIM-002: Simulink shall execute models at specified sample rate (100 Hz)
TOR-SIM-003: Simulink shall compute results with at least single-precision (32-bit) floating-point accuracy
TOR-SIM-004: Simulink shall correctly implement standard blocks (Gain, Sum, Integrator, etc.)
TOR-SIM-005: Simulink shall detect algebraic loops and report error
### Tool Verification Cases
#### Test Case: TC-SIM-001 - Basic Simulation Accuracy
Objective: Verify Simulink accurately simulates first-order low-pass filter
Model:
- Input: Step function (0 → 1.0 at t=0)
- Block: Transfer Function (1 / (s + 1)) [time constant τ = 1 second]
- Output: Filtered signal
Expected Results (analytical solution):
y(t) = 1 - e^(-t)
Test Procedure:
1. Build Simulink model with Transfer Function block
2. Configure simulation: Fixed-step solver, step size = 0.01 sec, duration = 5 sec
3. Execute simulation
4. Export output data to MATLAB workspace
5. Compare simulated output vs. analytical solution
Pass/Fail Criteria:
- Max error between simulated and analytical solution < 0.001 at all time points
- PASS if max error < 0.001
- FAIL if max error ≥ 0.001
Results:
- Simulated output: [data points exported from Simulink]
- Analytical solution: y = 1 - exp(-t)
- Maximum error: 0.00012 (at t = 0.05 sec)
- Status: PASS
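TC-SIM-001's tolerance can be cross-checked outside Simulink by integrating the same first-order lag independently. This sketch uses a classical RK4 integrator at the same 0.01 s step (an assumption; the qualification test itself runs in Simulink) and compares against the analytical solution from the test case.

```python
import math

# Cross-check of TC-SIM-001: integrate dy/dt = u - y (unit step input,
# tau = 1 s) with classical RK4 at a 0.01 s fixed step and compare against
# the analytical solution y(t) = 1 - exp(-t).

def deriv(t, y):
    return 1.0 - y          # step input u = 1, first-order lag, tau = 1

h, t, y = 0.01, 0.0, 0.0
max_err = 0.0
while t < 5.0:
    k1 = deriv(t, y)
    k2 = deriv(t + h / 2, y + h * k1 / 2)
    k3 = deriv(t + h / 2, y + h * k2 / 2)
    k4 = deriv(t + h, y + h * k3)
    y += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
    t += h
    max_err = max(max_err, abs(y - (1.0 - math.exp(-t))))

print(max_err < 0.001)   # same pass criterion as TC-SIM-001
```

With a fourth-order method at this step size the error is far below the 0.001 criterion, which is why an independent reference like this makes a useful sanity check on the tool's solver output.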
#### Test Case: TC-SIM-002 - Algebraic Loop Detection
Objective: Verify Simulink detects and reports algebraic loops
Model:
- Block 1: Gain (K=1.0)
- Block 2: Sum (A + B)
- Connection: Output of Sum feeds input of Gain, output of Gain feeds input of Sum
- Result: Algebraic loop (y = y + constant → no solution)
Expected Results:
- Simulink detects algebraic loop during model compilation
- Error message displayed: "Algebraic loop detected involving 'Gain' and 'Sum'"
- Simulation does not execute
Test Procedure:
1. Build Simulink model with intentional algebraic loop
2. Attempt to run simulation
3. Verify error message appears
4. Verify simulation does not execute
Results:
- Error detected: YES
- Error message: "Algebraic loop containing 'sum_block' detected"
- Simulation executed: NO
- Status: PASS
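What TC-SIM-002 exercises is, in graph terms, a cycle made entirely of direct-feedthrough blocks. This sketch illustrates the underlying check with a simple depth-first search; the block names match the test case, but the graph encoding is an illustration, not Simulink's internal representation.

```python
# Sketch: an algebraic loop is a cycle of direct-feedthrough blocks.
# A DFS over the block-connection graph finds it; the encoding is
# illustrative, not Simulink's internal model format.

connections = {          # block -> blocks fed by its output
    "Sum": ["Gain"],
    "Gain": ["Sum"],     # Gain output wired back into Sum: the loop
}

def has_algebraic_loop(graph):
    visited, on_stack = set(), set()
    def dfs(node):
        visited.add(node); on_stack.add(node)
        for nxt in graph.get(node, []):
            if nxt in on_stack or (nxt not in visited and dfs(nxt)):
                return True
        on_stack.discard(node)
        return False
    return any(dfs(n) for n in graph if n not in visited)

print(has_algebraic_loop(connections))   # → True
```

Note that an Integrator block would break such a loop: its output does not depend instantaneously on its input, so it would not appear in the direct-feedthrough graph.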
### Tool Qualification Results
- Total TOR Requirements: 5
- Verification Test Cases: 15
- Requirements Coverage: 100% (5/5 TOR requirements verified)
- Test Results: 15 PASS, 0 FAIL
- Tool Defects Found: 0
- TQL-4 Achievement: VERIFIED
Tool Accomplishment Summary:
- Simulink R2024a qualified for MIL simulation of DAL B control algorithms
- TQL-4 qualification achieved per DO-330
- Tool qualified for specific operational usage as defined in TOR
- No limitations or deviations
Certification Authority Acceptance:
- Tool Qualification Data (TQD) package submitted to FAA on 2026-01-15
- FAA Tool Qualification Approval Letter received on 2026-02-20
- Tool approved for use in Autopilot Mode Controller project
## Case Study 3: Jenkins CI/CD Tool → TCL 3 Qualification
### Tool Overview
**Tool**: Jenkins (Continuous Integration / Continuous Deployment)
**Version**: 2.440
**Vendor**: Open source (commercial support available from CloudBees)
**Project**: Automotive Battery Management System (ASIL C)
**Tool Classification**: Verification Tool (automates build, test, and verification processes)
**TCL**: TCL 3 (per ISO 26262-8)
### Tool Impact Analysis
Tool Usage:
- Jenkins automates the software build process (compile, link)
- Jenkins executes automated tests (unit tests, integration tests)
- Jenkins generates test reports and coverage metrics
- Jenkins enforces quality gates (e.g., "do not deploy if coverage < 90%")
**Tool Impact (TI)**: TI2 (can introduce or fail to detect an error)
Rationale:
- Jenkins executes test cases and reports results
- If Jenkins has a defect that causes incorrect test result reporting (e.g., a false positive), defective code could be deployed
- Example: Jenkins reports "All tests passed" when in fact some tests failed → defective software released
**Tool Error Detection (TD)**: TD3 (low detection confidence)
Rationale:
- Jenkins test result errors may not be detected until production
- No independent verification of Jenkins output
**TCL Determination**:
- Tool Impact: TI2
- Tool Error Detection: TD3
- Result: TCL 3 (per ISO 26262-8 Table 3)
- Note: the ASIL (here, C) does not change the TCL; it drives the rigor of the qualification methods selected below
### Qualification Method
Per ISO 26262-8 Table 5, a TCL 3 tool is qualified by one of: increased confidence from use, evaluation of the tool development process, validation of the software tool, or development in compliance with a safety standard. At ASIL C, the latter two are the highly recommended options.
**Selected Method**: Validation of the software tool
Rationale:
- Development in compliance with a safety standard would require full ISO 26262-conformant development of Jenkins itself (not feasible for an open-source tool)
- Validation qualifies Jenkins for the specific usage on this project
### Validation Strategy
**Approach**: Validate that Jenkins correctly executes build and test processes
**Validation Test Cases**:
1. **Build Process Validation**:
   - Execute reference build with known-good source code
   - Verify Jenkins produces identical binary as manual build
   - Use checksums to verify binary integrity
2. **Test Execution Validation**:
   - Create test suite with known results (e.g., 10 tests: 8 pass, 2 fail)
   - Execute test suite via Jenkins
   - Verify Jenkins reports correct pass/fail count
3. **Quality Gate Validation**:
   - Configure Jenkins quality gate: "Fail build if code coverage < 90%"
   - Execute build with coverage = 85% (below threshold)
   - Verify Jenkins fails build and reports reason
   - Execute build with coverage = 92% (above threshold)
   - Verify Jenkins passes build
4. **Error Handling Validation**:
   - Introduce intentional build error (e.g., syntax error in source code)
   - Verify Jenkins detects error and fails build
   - Verify Jenkins reports error message clearly
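Two of these checks, binary reproducibility and the coverage quality gate, can be sketched in a few lines. The byte strings and the 90% threshold mirror the test cases above; real runs would hash the actual build artifacts produced by Jenkins and by the manual reference build.

```python
import hashlib

# Sketch: binary reproducibility via SHA-256 plus the coverage quality gate.
# The artifact contents are illustrative stand-ins for real build outputs.

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

reference_binary = b"\x7fELF...known-good build"   # illustrative contents
jenkins_binary   = b"\x7fELF...known-good build"

build_ok = sha256(jenkins_binary) == sha256(reference_binary)

def quality_gate(coverage_pct: float, threshold: float = 90.0) -> str:
    return ("PASS" if coverage_pct >= threshold
            else f"FAIL: Coverage {coverage_pct:g}% < {threshold:g}%")

print(build_ok, quality_gate(85.0), quality_gate(92.0))
```

Hashing both artifacts rather than byte-comparing them directly also gives a compact value to record in the Tool Validation Results.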
### Validation Results
| Validation Test Case | Expected Result | Actual Result | Status |
|---|---|---|---|
| Build Process Validation | Binary checksum matches reference | Checksum: 0xABCD1234 (match) | PASS |
| Test Execution (8 pass, 2 fail) | Jenkins reports 8 pass, 2 fail | Report: 8 pass, 2 fail | PASS |
| Quality Gate (coverage 85%) | Build fails, reports coverage too low | Build failed, message: "Coverage 85% < 90%" | PASS |
| Quality Gate (coverage 92%) | Build passes | Build passed | PASS |
| Error Handling (syntax error) | Build fails, reports syntax error | Build failed, error reported | PASS |
Conclusion: All validation test cases passed. Jenkins TCL 3 qualification achieved.
### Validation Report
Validation Summary:
- Tool: Jenkins 2.440
- TCL: TCL 3
- Qualification Method: Validation of the software tool (ISO 26262-8 Table 5)
- Validation Test Cases: 12
- Results: 12 PASS, 0 FAIL
- Tool Limitations: None identified
- Tool approved for use in ASIL C Battery Management System project
Documented in:
- Tool Validation Plan (TVP-JENKINS-001)
- Tool Validation Cases & Procedures (TVCP-JENKINS-001)
- Tool Validation Results (TVR-JENKINS-001)
- Tool Accomplishment Summary (TAS-JENKINS-001)
## Decision Tree: Which TCL/TQL Level Do You Need?
START: Is the tool used in safety-critical software development?
│
├─ NO → No qualification required
│
└─ YES → Which standard applies?
    │
    ├─ Aviation (DO-178C) → Use DO-330 TQL classification
    │   │
    │   └─ Can tool output become part of airborne software without being verified?
    │       │
    │       ├─ YES → Development tool (Criteria 1)
    │       │   │
    │       │   └─ Software DAL level?
    │       │       ├─ DAL A → TQL-1 (highest rigor)
    │       │       ├─ DAL B → TQL-2
    │       │       ├─ DAL C → TQL-3
    │       │       ├─ DAL D → TQL-4
    │       │       └─ DAL E → No qualification required
    │       │
    │       └─ NO → Verification tool
    │           │
    │           └─ Is tool output used to justify reducing or eliminating other verification activities (Criteria 2)?
    │               ├─ YES → TQL-4 for DAL A/B; TQL-5 for DAL C/D
    │               └─ NO → Criteria 3 (could only fail to detect an error) → TQL-5 for DAL A through D
    │
    └─ Automotive (ISO 26262) → Use ISO 26262-8 TCL classification
        │
        └─ Step 1: Determine Tool Impact (TI)
            │
            ├─ Can a tool malfunction introduce an error into the safety-related item, or fail to detect one?
            │   ├─ NO → TI1 (no impact) → TCL 1 (no qualification required)
            │   └─ YES → TI2 (possible impact) → continue to Step 2
            │
            └─ Step 2 (if TI2): Determine Tool Error Detection (TD)
                │
                ├─ Confidence that a tool error will be prevented or detected?
                │   ├─ High → TD1
                │   ├─ Medium → TD2
                │   └─ Low / cannot be argued → TD3
                │
                └─ Step 3: Look up TCL in ISO 26262-8 Table 3
                    │
                    └─ TCL result:
                        ├─ TI2 + TD1 → TCL 1 (no qualification required)
                        ├─ TI2 + TD2 → TCL 2 (qualify per Table 4; method rigor depends on ASIL)
                        └─ TI2 + TD3 → TCL 3 (qualify per Table 5; method rigor depends on ASIL)
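The automotive branch of the tree reduces to a small lookup, shown here per the ISO 26262-8 convention (TI1 means no possibility of impact, and TCL 1 needs no qualification), alongside the DO-178C Table 12-1 development-tool column for comparison.

```python
# Sketch: TCL determination per ISO 26262-8 Table 3 (TI1 = no possibility of
# impact; TI2 = possible impact), plus the DO-178C Table 12-1 development-tool
# (Criteria 1) TQL by DAL.

def determine_tcl(ti: str, td: str) -> int:
    if ti == "TI1":
        return 1                       # no impact -> TCL 1, TD not evaluated
    return {"TD1": 1, "TD2": 2, "TD3": 3}[td]   # TI2 row of Table 3

DEV_TOOL_TQL = {"A": 1, "B": 2, "C": 3, "D": 4}   # Criteria 1 by DAL

print(determine_tcl("TI2", "TD3"))    # Jenkins case study → 3
print(determine_tcl("TI1", "TD3"))    # Jama case study → 1
print(DEV_TOOL_TQL["B"])              # e.g. code generator at DAL B → TQL-2
```

Note that the TCL itself is ASIL-independent; the ASIL only selects which qualification methods in Tables 4 and 5 are highly recommended.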
## Summary: Tool Qualification Best Practices
### Planning Phase
- Identify tools early in project lifecycle (tool qualification can take 3-6 months)
- Classify all tools using DO-330 or ISO 26262-8 decision process
- Prioritize tools for qualification (qualify development tools first)
- Budget for qualification (tool qualification is 10-30% of project cost)
### Execution Phase
- Define Tool Operational Requirements (TOR) specific to project usage
- Develop verification test cases covering all TOR requirements
- Execute verification with independence (if required by TQL/TCL)
- Document evidence thoroughly (FAA/EASA audits require comprehensive records)
### Maintenance Phase
- Control tool versions (version upgrades may require re-qualification)
- Track tool defects (report to vendor, assess impact on software)
- Maintain qualification records (required for entire product lifecycle)
### ASPICE Integration
- Map tool problem reporting and defect tracking to SUP.9 (Problem Resolution Management)
- Integrate with SUP.8 (Tool version control in configuration management)
- Leverage SUP.1 (QA reviews of tool qualification evidence)
## Conclusion
Tool qualification is a critical enabler for efficient, safe software development in aviation and automotive domains. By following the frameworks in DO-330 and ISO 26262-8, and using the templates and case studies in this chapter, practitioners can:
- Classify tools correctly (TQL/TCL determination)
- Define operational requirements specific to project needs
- Generate verification evidence systematically
- Achieve certification authority approval with confidence
**Key Takeaways**:
- Tool qualification is project-specific: a tool qualified for one project is not automatically qualified for another
- TQL/TCL drives rigor: higher levels require more comprehensive verification and independence
- Operational requirements (TOR) are critical: the TOR defines exactly how the tool is used, not its generic capabilities
- Verification must be complete: 100% of TOR requirements must be verified with evidence
- Early planning is essential: tool qualification takes time, so start early in the project lifecycle
By integrating tool qualification into ASPICE processes (SUP.8, SUP.9, SUP.1), organizations can streamline compliance while maintaining the highest safety standards.