3.4: Tool Qualification for AI


What You'll Learn

By the end of this chapter, you will be able to:

  • Explain the ISO 26262 tool qualification framework and its requirements
  • Classify AI tools using TI/TD/TCL and determine qualification effort
  • Compare ISO 26262 and DO-178C tool qualification approaches
  • Select appropriate qualification methods (1a, 1b, 1c, 1d)
  • Address AI-specific challenges: non-determinism, model updates, versioning
  • Apply governance frameworks and build tool qualification plans
  • Execute a practical step-by-step tool qualification workflow

Why Tool Qualification?

For safety-critical systems, tools that can introduce errors affecting safety must be qualified. AI tools present unique qualification challenges:

  1. Non-determinism: Same input may produce different outputs
  2. Opacity: Internal decision-making is not transparent
  3. Continuous learning: Behavior may change over time
  4. Error patterns: Fail in unexpected ways

ISO 26262 Tool Classification

Tool Impact (TI)

| Classification | Definition | Example |
|---|---|---|
| TI1 | Tool cannot introduce or fail to detect errors in safety-related output | Text editor, version control |
| TI2 | Tool output may introduce errors, but errors are likely detected | Static analyzer with human review |
| TI3 | Tool output may directly introduce errors that may not be detected | Code generator without verification |

Tool Error Detection (TD)

| Classification | Definition | Example |
|---|---|---|
| TD1 | High confidence that tool errors will be detected | Comprehensive test suite |
| TD2 | Medium confidence that tool errors will be detected | Partial test coverage |
| TD3 | Low confidence that tool errors will be detected | No independent verification |

Tool Confidence Level (TCL)

The TCL matrix combines TI and TD:

| TI \ TD | TD1 (High Detection) | TD2 (Medium Detection) | TD3 (Low Detection) |
|---|---|---|---|
| TI1 (No impact) | TCL1 (No qual req) | TCL1 (No qual req) | TCL1 (No qual req) |
| TI2 (Low impact) | TCL1 (No qual req) | TCL2 (Some qual) | TCL3 (Full qual) |
| TI3 (High impact) | TCL2 (Some qual) | TCL3 (Full qual) | TCL3 (Full qual) |
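Because the matrix lookup is mechanical, teams sometimes encode it directly in project tooling. The sketch below is a minimal, hypothetical Python encoding of the matrix above; the function name `tool_confidence_level` is illustrative, not a term from the standard.

```python
# Hypothetical encoding of the TCL matrix (TI/TD -> TCL) as used in this chapter.
TCL_MATRIX = {
    ("TI1", "TD1"): "TCL1", ("TI1", "TD2"): "TCL1", ("TI1", "TD3"): "TCL1",
    ("TI2", "TD1"): "TCL1", ("TI2", "TD2"): "TCL2", ("TI2", "TD3"): "TCL3",
    ("TI3", "TD1"): "TCL2", ("TI3", "TD2"): "TCL3", ("TI3", "TD3"): "TCL3",
}

def tool_confidence_level(ti: str, td: str) -> str:
    """Return the TCL for a (TI, TD) pair; raises KeyError for unknown levels."""
    return TCL_MATRIX[(ti, td)]
```

A lookup such as `tool_confidence_level("TI3", "TD1")` returns `"TCL2"`, matching the matrix row for high-impact tools with high detection confidence.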

TCL Classification in Detail

Each Tool Confidence Level carries specific implications for the qualification effort required before the tool may be used in a safety-critical project.

TCL 1 --- No Qualification Required

TCL 1 applies when the tool either has no impact on safety-related outputs (TI1) or when any errors the tool might introduce will be detected with high confidence by downstream processes (TI2 + TD1). No formal tool qualification activity is needed, though the tool should still be listed in the project tool inventory.

| Aspect | TCL 1 Detail |
|---|---|
| Qualification effort | None required |
| Evidence required | Tool identification and version recorded in tool inventory |
| Requalification trigger | Not applicable unless tool use context changes |
| Example tools | Text editors, version control systems, build log viewers |
| Example AI tools | AI-assisted code formatting with human review, AI spell-checkers |

TCL 2 --- Partial Qualification Required

TCL 2 applies when the tool has some potential to introduce errors that might affect safety, but mitigating factors exist. Qualification focuses on demonstrating that the tool is suitable for its intended use through one or more methods defined in ISO 26262-8, Clause 11.4.6.

| Aspect | TCL 2 Detail |
|---|---|
| Qualification effort | Moderate --- validation suite or increased confidence from use |
| Evidence required | Tool qualification plan, validation results or usage history, known limitations |
| Requalification trigger | Major version update, change in use context, or new failure mode discovered |
| Example tools | Static analyzers with partial coverage, linkers, compilers with restricted feature set |
| Example AI tools | AI code generators with comprehensive downstream verification (qualified static analysis + code review) |

TCL 3 --- Full Qualification Required

TCL 3 applies when the tool has high potential to introduce undetected errors into safety-related outputs. This is the most demanding qualification level and requires rigorous evidence that the tool functions correctly for its intended use cases.

| Aspect | TCL 3 Detail |
|---|---|
| Qualification effort | High --- validation suite against tool requirements, or development per safety standards |
| Evidence required | Tool qualification plan, tool requirements specification, comprehensive validation suite, known anomalies list, operating constraints, user manual |
| Requalification trigger | Any version change, any change in operating environment, any change in use case |
| Example tools | Code generators without independent verification, compilers for safety-critical code, test execution frameworks whose results are trusted without review |
| Example AI tools | AI code generators whose output is deployed without independent static analysis or human review |

Important: The TCL assignment is not a property of the tool alone. It depends on the combination of the tool and its usage context. The same AI code generator might be TCL 1 in one project (where comprehensive verification catches all errors) and TCL 3 in another (where its output is trusted without independent checking).


Qualification Strategies for AI Tools

Strategy 1: Non-Critical Path

Approach: Use AI tools only for non-safety outputs.

Tool Qualification - Non-Safety Path

Examples:

  • AI generates documentation (not safety documentation)
  • AI suggests refactoring (human reviews before safety code)
  • AI assists with non-safety test creation

Result: TI1 → TCL1 (No qualification required)

When to Use:

  • AI can provide value outside safety-critical path
  • Safety-critical work done with qualified tools
  • Clear separation between paths

Strategy 2: Verification Overlay

Approach: Use qualified verification to catch AI tool errors.

Tool Qualification - Safety with Verification

Examples:

  • AI generates code → qualified static analysis + review
  • AI generates tests → human review + qualified test execution
  • AI suggests design → qualified review process

Result: TI2/TI3 + TD1 → TCL1/TCL2 (Reduced qualification)

When to Use:

  • AI provides significant efficiency gain
  • Qualified verification processes exist
  • Verification can detect AI errors

Strategy 3: Secondary Check

Approach: AI supplements but does not replace qualified tools.

Tool Qualification - Secondary Check

Examples:

  • Qualified static analysis + AI additional pattern detection
  • Qualified review + AI first-pass screening
  • Qualified test tool + AI coverage suggestions

Result: AI is TI1 (no safety impact) → TCL1

When to Use:

  • Qualified tools already in place
  • AI can add value without replacing
  • Clear delineation of responsibilities

AI Tool Governance

Policy Level

| Element | Content |
|---|---|
| Approved tools | List of sanctioned AI tools |
| Use cases | Permitted applications |
| Restrictions | Safety-critical limitations |
| Qualification status | TCL for each tool/use |

Process Level

| Element | Content |
|---|---|
| Tool selection | Evaluation criteria |
| Configuration | Approved settings |
| HITL patterns | Required oversight |
| Audit | Review procedures |

Technical Level

| Element | Content |
|---|---|
| Version control | Tool version pinning |
| Logging | Audit trail requirements |
| Thresholds | Confidence thresholds |
| Escalation | Override procedures |

Monitoring Level

| Element | Content |
|---|---|
| Accuracy metrics | Accept/reject rates |
| False positives | Error rate tracking |
| Overrides | Human intervention frequency |
| Tool health | Performance metrics |
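The monitoring metrics above can be collected with a simple counter object. The sketch below is a minimal, hypothetical implementation; the class and method names are illustrative only.

```python
class ToolMetrics:
    """Running acceptance/override counters for AI tool governance review.
    Metric names mirror the monitoring table above (illustrative sketch)."""

    def __init__(self):
        self.accepted = 0    # suggestions accepted by the human operator
        self.rejected = 0    # suggestions rejected
        self.overridden = 0  # human interventions after initial acceptance

    def record(self, accepted: bool, overridden: bool = False) -> None:
        if accepted:
            self.accepted += 1
        else:
            self.rejected += 1
        if overridden:
            self.overridden += 1

    @property
    def acceptance_rate(self) -> float:
        total = self.accepted + self.rejected
        return self.accepted / total if total else 0.0
```

In practice such counters would feed the periodic governance review described later in this chapter, alongside error-rate and override-frequency tracking.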

Tool Qualification Methods

ISO 26262-8, Clause 11.4.6 defines four methods for qualifying tools. These methods can be used individually or in combination, depending on the TCL and the nature of the tool.

Method 1a: Increased Confidence from Use

Demonstrate that the tool has been used successfully in similar contexts and that its error history is acceptable.

| Aspect | Detail |
|---|---|
| Applicability | TCL 2 (primary), TCL 3 (supplementary only) |
| Evidence | Usage history, project records, defect reports from prior use |
| Strength | Low effort for mature, widely used tools |
| Weakness | Difficult to apply to new or rapidly evolving AI tools |
| AI applicability | Limited --- AI tool behavior changes with model updates, making historical evidence less reliable |

Method 1b: Evaluation of the Tool Development Process

Evaluate whether the tool was developed according to a recognized quality or safety standard.

| Aspect | Detail |
|---|---|
| Applicability | TCL 2, TCL 3 |
| Evidence | Tool vendor's development process documentation, certificates, audits |
| Strength | High confidence if vendor follows ISO 26262 or equivalent |
| Weakness | Requires vendor cooperation and transparency |
| AI applicability | Challenging --- most AI model vendors do not develop under ISO 26262; training processes are proprietary |

Method 1c: Validation of the Tool

Validate the tool by running a test suite that exercises the tool's features as used in the project.

| Aspect | Detail |
|---|---|
| Applicability | TCL 2, TCL 3 |
| Evidence | Tool validation plan, test cases, test results, coverage analysis |
| Strength | Direct evidence of tool correctness for project-specific use cases |
| Weakness | Requires investment in creating and maintaining a validation suite |
| AI applicability | Most practical method for AI tools --- create a benchmark suite of known-correct inputs and expected outputs |

Method 1d: Development According to a Safety Standard

The tool itself is developed in compliance with a safety standard (e.g., ISO 26262 for the tool's own development).

| Aspect | Detail |
|---|---|
| Applicability | TCL 3 |
| Evidence | Full safety lifecycle evidence for the tool's own development |
| Strength | Highest confidence level |
| Weakness | Extremely high effort; rarely practical for COTS or AI tools |
| AI applicability | Not currently feasible for LLM-based tools; may apply to narrow AI tools with deterministic behavior |

Tip: For most AI tools in safety-critical projects, Method 1c (Validation of the Tool) combined with a Verification Overlay strategy (see Strategy 2 above) provides the most practical qualification path. Build a validation suite that covers the specific use cases, and complement it with qualified downstream verification.
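A Method 1c validation run for an AI tool can be sketched as a harness that applies an acceptance check (semantic, not exact-match) to each benchmark case. The function and field names below are illustrative assumptions, not part of ISO 26262.

```python
def run_validation_suite(tool, suite):
    """Hypothetical Method 1c harness sketch.

    `tool` is any callable taking an input and returning an output.
    `suite` is a list of (case_id, tool_input, accept) triples, where
    `accept` is a predicate judging the output semantically rather than
    by exact text comparison.
    """
    report = {"passed": [], "failed": []}
    for case_id, tool_input, accept in suite:
        output = tool(tool_input)
        (report["passed"] if accept(output) else report["failed"]).append(case_id)
    report["pass_rate"] = len(report["passed"]) / max(len(suite), 1)
    return report
```

The resulting report (case IDs plus a pass rate) is the kind of raw evidence that would be archived under configuration management as validation test results.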


Qualification Evidence

For TCL2/TCL3, qualification evidence may include:

| Evidence Type | Description |
|---|---|
| Tool description | Capabilities, limitations, intended use |
| Validation evidence | Testing of tool behavior |
| Use cases | Specific project applications |
| Known issues | Documented limitations |
| Version info | Specific version qualification |
| Operating environment | Platform dependencies |
| User guidelines | How to use tool correctly |

Evidence Requirements by TCL

The depth and breadth of evidence scales with the TCL assignment.

| Evidence Item | TCL 1 | TCL 2 | TCL 3 |
|---|---|---|---|
| Tool listed in project tool inventory | Required | Required | Required |
| Tool version recorded | Required | Required | Required |
| Tool qualification plan | Not required | Required | Required |
| Tool requirements specification | Not required | Recommended | Required |
| Validation test suite | Not required | Required (may be limited scope) | Required (comprehensive scope) |
| Validation test results | Not required | Required | Required |
| Known anomalies / limitations list | Not required | Required | Required |
| Operating constraints documentation | Not required | Recommended | Required |
| User guidelines for safe use | Not required | Required | Required |
| Tool development process evidence | Not required | Not required | Required (Method 1b or 1d) |
| Change impact analysis for updates | Not required | Required | Required |
| Periodic re-validation schedule | Not required | Recommended | Required |
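A project might encode the evidence table as an automated completeness check. The sketch below is hypothetical: the evidence item identifiers are shorthand for the table rows above, the sets follow the "Required" cells, and "Recommended" items are treated as optional.

```python
# Hypothetical mapping of TCL level -> required evidence items
# (identifiers abbreviate the evidence table rows above).
REQUIRED_EVIDENCE = {
    "TCL1": {"inventory_entry", "version_record"},
    "TCL2": {"inventory_entry", "version_record", "qualification_plan",
             "validation_suite", "validation_results", "known_anomalies",
             "user_guidelines", "change_impact_analysis"},
    "TCL3": {"inventory_entry", "version_record", "qualification_plan",
             "requirements_spec", "validation_suite", "validation_results",
             "known_anomalies", "operating_constraints", "user_guidelines",
             "development_process_evidence", "change_impact_analysis",
             "revalidation_schedule"},
}

def missing_evidence(tcl, provided):
    """Return the evidence items still missing for the given TCL."""
    return sorted(REQUIRED_EVIDENCE[tcl] - set(provided))
```

A check like this could run in CI against the project's evidence index, flagging qualification gaps before an assessment.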

AI Tool Classification

AI tools used in safety-critical development span a wide range of capabilities and risk profiles. Proper classification is the first step toward determining the appropriate qualification effort.

Classification by Function

| AI Tool Category | Description | Typical TI | Typical TD (with mitigation) | Typical TCL |
|---|---|---|---|---|
| Code generators | Produce source code from prompts or specifications | TI3 (output directly becomes safety code) | TD1 if qualified review + static analysis | TCL2 |
| Code completion assistants | Suggest code snippets inline during editing | TI2 (developer selects and edits suggestions) | TD1 if developer reviews each suggestion | TCL1 |
| Static analysis assistants | AI-enhanced static analysis tools | TI2 (may fail to detect defects) | TD2 if used alongside qualified analyzer | TCL2 |
| Test generators | Produce test cases from requirements or code | TI2 (missing tests reduce coverage) | TD1 if coverage is measured independently | TCL1 |
| Requirements assistants | Draft or analyze requirements text | TI2 (may introduce ambiguity) | TD1 if human review is mandatory | TCL1 |
| Documentation generators | Produce technical documentation | TI1 (non-safety docs) or TI2 (safety docs) | TD1 with review | TCL1 |
| Review assistants | Pre-screen artifacts before human review | TI1 (supplements human review) | TD1 (human is the primary reviewer) | TCL1 |

Note: The classifications above assume the mitigation strategies described in this chapter (verification overlays, human review, qualified downstream tools). Without those mitigations, the same tools would typically receive higher TI ratings and consequently higher TCL assignments.

Classification Decision Process

To classify an AI tool for a specific project, answer these questions in order:

  1. Does the tool output directly contribute to a safety-related work product? If no, the tool is TI1 regardless of other factors.
  2. If yes, could an error in the tool output propagate to the final safety-related deliverable? If not (because other processes catch it), the tool is TI2 at most.
  3. If yes, what is the confidence that downstream processes will detect the error? This determines TD1, TD2, or TD3.
  4. Combine TI and TD using the TCL matrix to arrive at the tool's TCL for this specific use context.
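The four questions above can be sketched as a small decision function. This is an illustrative encoding of the chapter's simplified TI1-TI3 scheme, not normative ISO 26262 logic; the function name and parameters are assumptions.

```python
def classify_ai_tool(affects_safety_output, error_can_propagate, detection_confidence):
    """Hypothetical sketch of the four-question classification process.

    detection_confidence: "high", "medium", or "low" confidence that
    downstream processes catch tool errors (maps to TD1/TD2/TD3).
    Returns (TI, TD, TCL); TD is None when the tool has no safety impact.
    """
    if not affects_safety_output:
        return ("TI1", None, "TCL1")          # Q1: no safety impact
    # Q2: if errors cannot reach the final deliverable, TI2 at most
    ti = "TI3" if error_can_propagate else "TI2"
    # Q3: detection confidence determines TD
    td = {"high": "TD1", "medium": "TD2", "low": "TD3"}[detection_confidence]
    # Q4: combine via the TCL matrix
    tcl = {("TI2", "TD1"): "TCL1", ("TI2", "TD2"): "TCL2", ("TI2", "TD3"): "TCL3",
           ("TI3", "TD1"): "TCL2", ("TI3", "TD2"): "TCL3", ("TI3", "TD3"): "TCL3"}[(ti, td)]
    return (ti, td, tcl)
```

For example, a code generator whose errors could propagate but are caught with high confidence downstream classifies as `("TI3", "TD1", "TCL2")`.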

ISO 26262 Part 8: Tool Qualification Requirements

ISO 26262 Part 8 (Supporting Processes), Clause 11 defines the normative requirements for software tool qualification in automotive functional safety. This section summarizes the key requirements relevant to AI tool qualification.

Scope and Applicability

Tool qualification under ISO 26262 applies to any software tool that is used in the development of safety-related systems and that could potentially introduce or fail to detect errors in safety-related work products. This includes AI tools when they are used in the safety-critical path.

| Requirement Area | ISO 26262-8 Reference | Key Requirement |
|---|---|---|
| Tool classification | Clause 11.4.3 | Determine TI and TD for each tool in its specific use context |
| TCL determination | Clause 11.4.4 | Use the TI/TD matrix to determine TCL |
| Qualification methods | Clause 11.4.6 | Apply Methods 1a-1d appropriate to the TCL |
| Tool qualification plan | Clause 11.4.5 | Create a plan for TCL 2 and TCL 3 tools |
| Unique tool ID | Clause 11.4.2 | Each tool must be uniquely identified (name, version, configuration) |
| Use case documentation | Clause 11.4.3 | Document the specific use cases for which the tool is qualified |
| Re-qualification | Clause 11.4.7 | Evaluate impact of tool changes on qualification status |

Key Principles

Principle 1: Tool qualification effort is proportional to the risk the tool introduces. A tool that cannot affect safety needs no qualification. A tool that can introduce undetected safety errors requires full qualification.

Principle 2: Tool qualification is use-case-specific. A tool qualified for one project and one set of use cases is not automatically qualified for a different project or different use cases.

Principle 3: Downstream verification can reduce the qualification effort. If errors introduced by the tool are reliably caught by subsequent qualified activities, the tool's effective TCL is lowered.


DO-178C Tool Qualification: Aviation Comparison

DO-178C (Software Considerations in Airborne Systems and Equipment Certification) uses a parallel but distinct tool qualification framework. Comparing the two approaches helps teams working across automotive and aerospace domains.

DO-178C Tool Qualification Levels (TQL)

TQL assignment depends on the software level (DAL) of the software the tool supports and whether it is a development tool (Criteria 1: output becomes airborne software without independent verification) or a verification tool (Criteria 2: may fail to detect errors; Criteria 3: automates verification activities).

| TQL Level | DAL | Tool Category | Criteria | Qualification Effort | Comparable ISO 26262 TCL |
|---|---|---|---|---|---|
| TQL-1 | A | Development Tool | 1 | Full DO-330 lifecycle applied to the tool (highest rigor) | TCL 3 |
| TQL-2 | B | Development Tool | 1 | Reduced set of DO-330 objectives relative to TQL-1 | TCL 3 |
| TQL-3 | C | Development Tool | 1 | DO-330 objectives further reduced relative to TQL-2 | TCL 2/3 |
| TQL-4 | A/B | Verification Tool | 2 | Operational requirements verification; tool development lifecycle not required | TCL 2 |
| TQL-5 | C/D (or Criteria 3 at any level) | Verification Tool | 2 or 3 | Verification of Tool Operational Requirements (TORs) only | TCL 1/2 |

Important: There is no DO-330 equivalent to ISO 26262 TCL 1 (no qualification required). All tools that meet the TQL assignment criteria must be qualified to the applicable TQL level. TQL-5 is the minimum rigor level, not a "no qualification" category.
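For cross-checking, the DO-178C TQL assignment (Table 12-1) can be expressed as a small lookup. The sketch below follows the standard's full table and therefore also covers the Criteria 1 / Level D case (TQL-4), which the summary table above omits; treat it as one reading of the standard, not an authoritative tool.

```python
def tql(criteria: int, dal: str) -> str:
    """Sketch of DO-178C Table 12-1: TQL from tool criteria and software level.

    criteria: 1 (development tool), 2 or 3 (verification tool).
    dal: software level "A" through "D".
    """
    if criteria == 3:
        return "TQL-5"                         # Criteria 3 is TQL-5 at any level
    if criteria == 2:
        return "TQL-4" if dal in ("A", "B") else "TQL-5"
    if criteria == 1:
        return {"A": "TQL-1", "B": "TQL-2", "C": "TQL-3", "D": "TQL-4"}[dal]
    raise ValueError("criteria must be 1, 2, or 3")
```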

Key Differences Between ISO 26262 and DO-178C

| Aspect | ISO 26262 (Automotive) | DO-178C (Aviation) |
|---|---|---|
| Classification basis | TI/TD matrix producing TCL 1-3 | Design Assurance Level (DAL) producing TQL 1-5 |
| Number of levels | 3 confidence levels | 5 qualification levels |
| Tool categories | Not explicitly categorized | Criteria 1 (development tools: output becomes airborne software), Criteria 2 (verification tools: may fail to detect errors), Criteria 3 (verification tools: eliminate or automate verification activities) |
| Qualification scope | Use-case-specific per project | Type-specific; may be reused across programs |
| Re-qualification | On tool change or use context change | On tool change; may leverage prior qualification data |
| Regulatory authority | Self-declaration with assessor review | Certification authority (FAA/EASA) approval required |

Note: DO-178C explicitly categorizes tools into development tools (whose output becomes part of the software) and verification tools (that could fail to detect errors). This distinction maps approximately to TI3 (development tools) and TI2 (verification tools) in ISO 26262.


AI-Specific Challenges

AI tools present qualification challenges that go beyond those of traditional deterministic software tools. These challenges must be addressed explicitly in any tool qualification plan for AI.

| Consideration | Implication |
|---|---|
| Non-determinism | Cannot guarantee same output for same input |
| Version updates | Re-qualification may be needed |
| Training data | May have biases or gaps |
| Context sensitivity | Different prompts produce different quality |
| Opacity | Cannot fully explain decisions |

Detailed Challenge Analysis

Non-Determinism and Reproducibility

Traditional tool qualification assumes that a tool produces the same output for the same input. AI tools, particularly LLM-based tools, violate this assumption. This means that a validation test suite cannot simply compare outputs against fixed expected results.

| Impact | Severity | Mitigation Strategy |
|---|---|---|
| Validation test results vary between runs | High | Use semantic equivalence checks rather than exact-match; run validation suites multiple times and analyze statistical distribution of results |
| Debugging tool failures is difficult | Medium | Log all inputs, outputs, model version, and configuration for every tool invocation; use deterministic settings (temperature=0) where available |
| Certification evidence is harder to produce | High | Focus evidence on downstream verification effectiveness rather than tool-level reproducibility |
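One way to implement the multiple-run approach is to judge each validation case by its pass rate across repeated invocations rather than by a single exact-match comparison. A minimal sketch, assuming `tool` is any callable and each case pairs an input with a semantic acceptance check (names are illustrative):

```python
import statistics

def validate_nondeterministic(tool, cases, runs=5, pass_rate=0.95):
    """Run each validation case `runs` times and compute per-case pass rates.

    `cases` is a list of (tool_input, is_acceptable) pairs, where
    `is_acceptable` judges the output semantically. Returns the mean
    pass rate and whether it meets the required threshold.
    """
    per_case_rates = []
    for tool_input, is_acceptable in cases:
        passes = sum(1 for _ in range(runs) if is_acceptable(tool(tool_input)))
        per_case_rates.append(passes / runs)
    overall = statistics.mean(per_case_rates)
    return overall, overall >= pass_rate
```

In a real qualification suite the per-case distributions, not just the mean, would be archived as evidence, since a tool that fails one case consistently is a different risk than a tool that fails many cases occasionally.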

Model Updates and Version Control

AI tools receive model updates that can fundamentally change their behavior without changing their version number or user interface. A model update is functionally equivalent to replacing the tool with a different tool.

| Impact | Severity | Mitigation Strategy |
|---|---|---|
| Behavior changes without notice | Critical | Pin model versions; use API versioning; contractually require vendor notification of model changes |
| Prior validation evidence becomes invalid | High | Re-run validation suite after any model update; maintain automated regression tests |
| Continuous deployment of model updates | High | Establish a model update review gate; do not allow automatic model updates in safety-critical tool chains |

Versioning and Configuration Management

AI tool versioning is more complex than traditional tool versioning because the "tool" consists of multiple components: the client software, the model weights, the system prompt, the temperature and sampling parameters, and any fine-tuning data.

| Component | Versioning Challenge | Recommendation |
|---|---|---|
| Client software (IDE plugin, CLI) | Standard software versioning applies | Pin version; track in CM system |
| Model (weights, architecture) | Vendor may update silently; model ID may not change | Require vendor model version disclosure; test regularly |
| System prompt / instructions | Changes affect output behavior | Version-control all prompts; treat prompt changes as configuration changes |
| Temperature / sampling parameters | Affect output variability | Lock parameters in project configuration; document in tool qualification plan |
| Fine-tuning data (if applicable) | Changes model behavior | Version-control training data; re-validate after any fine-tuning change |
| RAG knowledge base (if applicable) | Changes available context | Version-control knowledge base contents; re-validate after updates |
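All of these components can be pinned in a single version-controlled record whose fingerprint changes whenever any behavior-affecting field changes. A hypothetical sketch (field and class names are illustrative):

```python
import hashlib
from dataclasses import dataclass

@dataclass(frozen=True)
class QualifiedToolConfig:
    """Pin every behavior-affecting component of an AI tool configuration.

    Any field change should route through the re-qualification process;
    the fingerprint makes silent drift detectable in CI or audits.
    """
    client_version: str   # e.g. IDE plugin version
    model_id: str         # vendor-disclosed model version
    system_prompt: str    # version-controlled prompt text
    temperature: float    # locked sampling parameter

    def fingerprint(self) -> str:
        payload = "|".join([self.client_version, self.model_id,
                            self.system_prompt, str(self.temperature)])
        return hashlib.sha256(payload.encode()).hexdigest()[:16]
```

Comparing the stored fingerprint against the live configuration on each tool invocation is one simple way to enforce the "no silent updates" rule.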

Mitigation Approaches

| Challenge | Mitigation |
|---|---|
| Non-determinism | Human review, multiple runs, semantic equivalence testing |
| Version changes | Version pinning, change impact analysis, automated regression |
| Training gaps | Domain-specific validation suite, known-answer tests |
| Context sensitivity | Standardized prompts, prompt version control, guidelines |
| Opacity | Focus on output verification, black-box validation |
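Semantic equivalence testing can range from crude text normalization to full AST or behavioral comparison. The sketch below shows only the crudest proxy, assuming Python-style `#` comments in the compared code; a real qualification suite would compare parsed structure or observable behavior instead.

```python
import re

def semantically_similar(a: str, b: str) -> bool:
    """Crude equivalence proxy: compare code after stripping comments
    and normalizing whitespace. Illustrative only --- real projects
    would use AST or test-based (behavioral) equivalence."""
    def normalize(src: str) -> str:
        src = re.sub(r"#.*", "", src)      # drop Python-style comments
        return re.sub(r"\s+", " ", src).strip()
    return normalize(a) == normalize(b)
```

Even this trivial normalization already tolerates the comment and whitespace variation that makes exact-match comparison useless for LLM output.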

Practical Qualification Process

This section provides a step-by-step workflow for qualifying an AI tool in a safety-critical project.

Step 1: Inventory and Identify

List all AI tools used in the project. For each tool, record:

| Field | Example |
|---|---|
| Tool name | GitHub Copilot |
| Version / model | v1.143.0 / GPT-4o |
| Vendor | GitHub (Microsoft) |
| Use cases in project | Code completion for SWE.3 unit construction |
| Users | Software developers (SWE team) |
| Integration point | IDE plugin (VS Code) |

Step 2: Classify Each Tool-Use-Case Pair

For each use case identified in Step 1, determine TI, TD, and TCL.

| Use Case | TI | TD | TCL | Rationale |
|---|---|---|---|---|
| Code completion for non-safety modules | TI1 | — | TCL1 | Output does not affect safety-related work products |
| Code completion for ASIL B modules | TI3 | TD1 (qualified review + MISRA checker) | TCL2 | Output may introduce errors, but qualified review and static analysis provide high detection confidence |
| Code completion for ASIL D modules | TI3 | TD1 (qualified review + MISRA checker + formal verification) | TCL2 | Same as above with additional formal methods |
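Steps 2 and 3 connect naturally: the classification records from Step 2 feed a gate that selects the use cases needing a qualification plan. A hypothetical sketch using records mirroring the worked example above:

```python
# Illustrative (use_case, ti, td, tcl) records from the Step 2 example.
classifications = [
    ("Code completion for non-safety modules", "TI1", None, "TCL1"),
    ("Code completion for ASIL B modules", "TI3", "TD1", "TCL2"),
    ("Code completion for ASIL D modules", "TI3", "TD1", "TCL2"),
]

def needs_qualification_plan(records):
    """Step 3 gate: TCL2 and TCL3 use cases require a tool qualification plan."""
    return [r for r in records if r[3] in ("TCL2", "TCL3")]
```

Running the gate over the example records selects the two ASIL use cases; the non-safety use case needs only an inventory entry.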

Step 3: Create Tool Qualification Plan (TCL 2 and TCL 3)

For each tool with TCL 2 or TCL 3, create a tool qualification plan. See the template in the next section.

Step 4: Execute Qualification Activities

Execute the qualification methods selected in the plan (Method 1a, 1b, 1c, or 1d).

Step 5: Document Results and Known Limitations

Compile all qualification evidence into a tool qualification report.

Step 6: Establish Monitoring and Re-Qualification Triggers

Define the conditions under which the tool must be re-qualified. Monitor these conditions continuously.

| Trigger | Action Required |
|---|---|
| Model version update by vendor | Re-run validation suite; assess results; update qualification report |
| New use case added | Perform TI/TD/TCL classification for new use case; extend validation suite if needed |
| Defect traced to tool error | Investigate root cause; update known anomalies; assess whether qualification is still valid |
| Periodic review interval reached | Review tool performance metrics; confirm qualification remains valid |
| Operating environment change | Assess impact; re-run validation if environment affects tool behavior |
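A re-qualification monitor can compare the recorded qualification baseline against the current tool state and report which triggers fired. The sketch below is illustrative; the field names (`model_version`, `use_cases`, `tool_defects`) are assumptions, not standard terms.

```python
def requalification_required(baseline, current):
    """Return the list of re-qualification triggers that fired, comparing
    the qualification baseline against the current tool state (sketch)."""
    triggers = []
    if current["model_version"] != baseline["model_version"]:
        triggers.append("Model version update: re-run validation suite")
    new_cases = set(current["use_cases"]) - set(baseline["use_cases"])
    if new_cases:
        triggers.append(f"New use case(s) added: {sorted(new_cases)}")
    if current.get("tool_defects", 0) > baseline.get("tool_defects", 0):
        triggers.append("Defect traced to tool: investigate and reassess")
    return triggers
```

Run periodically (or on every CI build), an empty trigger list confirms the qualification remains valid; any non-empty result should open a re-qualification task.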

Tool Qualification Plan Template

A tool qualification plan for an AI tool should contain the following elements.

| Plan Section | Content | Notes |
|---|---|---|
| 1. Tool Identification | Tool name, version, vendor, model ID, configuration parameters | Include all components: client, model, prompts, parameters |
| 2. Use Case Description | Specific project use cases for which the tool is being qualified | One plan per tool-use-case combination, or consolidated |
| 3. TI/TD/TCL Classification | Classification rationale with reference to ISO 26262-8 Clause 11.4 | Document assumptions about downstream verification |
| 4. Qualification Method(s) | Selected methods (1a, 1b, 1c, 1d) with justification | For AI tools, Method 1c is typically primary |
| 5. Validation Suite | Description of test cases, expected results, pass/fail criteria | For AI tools, define semantic equivalence criteria |
| 6. Known Anomalies | List of known tool limitations and workarounds | Include AI-specific limitations (hallucination risk, domain gaps) |
| 7. Operating Constraints | Conditions under which the tool qualification is valid | Include model version, prompt version, parameter settings |
| 8. User Guidelines | Instructions for safe use of the tool | Include prompt templates, review requirements, escalation criteria |
| 9. Re-Qualification Criteria | Conditions triggering re-qualification | Model updates, use case changes, environment changes |
| 10. Responsibilities | Named individuals responsible for qualification activities | Tool qualification engineer, safety manager, project manager |

Tip: Maintain the tool qualification plan as a living document under configuration management. Update it whenever the tool, its configuration, or its use context changes.


Industry Examples

The following examples illustrate how specific AI tools might be classified and qualified in a safety-critical automotive project.

Example 1: AI Code Generator for ASIL B ECU Software

| Attribute | Value |
|---|---|
| Tool | LLM-based code generation assistant (e.g., Copilot, Cursor) |
| Use case | Generate C code for ASIL B motor control module |
| TI classification | TI3 --- generated code directly becomes safety-related software |
| Downstream verification | Qualified MISRA C checker + peer code review + unit testing (MC/DC coverage) |
| TD classification | TD1 --- high confidence that errors will be detected by the verification chain |
| TCL | TCL2 |
| Qualification method | Method 1c --- validation suite of 200 coding tasks with known-correct reference solutions |
| Key constraint | Model version pinned; prompts version-controlled; re-validation on any model update |

Example 2: AI-Assisted Static Analyzer

| Attribute | Value |
|---|---|
| Tool | AI-enhanced static analysis tool (e.g., AI-augmented MISRA checker) |
| Use case | Detect MISRA violations and potential defects in ASIL C code |
| TI classification | TI2 --- tool might fail to detect a defect (false negative), but this is a detection failure, not an error introduction |
| Downstream verification | Independent qualified static analyzer + human code review |
| TD classification | TD1 --- independent analysis provides high detection confidence |
| TCL | TCL1 |
| Qualification method | No formal qualification required; tool listed in inventory |
| Key constraint | Tool used as secondary check only; qualified static analyzer remains the primary tool |

Example 3: AI Requirements Drafting Assistant

| Attribute | Value |
|---|---|
| Tool | LLM-based requirements generation from stakeholder input |
| Use case | Generate draft software requirements from system requirements (SWE.1) |
| TI classification | TI2 --- generated requirements could contain ambiguities or errors |
| Downstream verification | Mandatory human review by requirements engineer + formal requirements inspection |
| TD classification | TD1 --- human review process has high detection confidence for requirements defects |
| TCL | TCL1 |
| Qualification method | No formal qualification required; human review is the qualified process |
| Key constraint | All AI-generated requirements must pass through the standard review gate before baselining |

Implementation Checklist

Use this checklist to ensure complete coverage of tool qualification activities for AI tools in a safety-critical project.

Tool Inventory and Classification

  • All AI tools used in the project are listed in the tool inventory
  • Each tool entry includes: name, version, vendor, model ID, configuration
  • Each tool-use-case pair has a documented TI/TD/TCL classification
  • Classification rationale references ISO 26262-8 Clause 11.4
  • Assumptions about downstream verification are documented and validated

Qualification Planning (TCL 2 and TCL 3)

  • Tool qualification plan created for each TCL 2 and TCL 3 tool
  • Qualification method(s) selected and justified
  • Validation suite designed covering project-specific use cases
  • Pass/fail criteria defined (semantic equivalence for non-deterministic tools)
  • Known anomalies and limitations documented
  • Operating constraints specified (model version, prompt version, parameters)
  • User guidelines written and distributed to tool users
  • Re-qualification triggers defined

Qualification Execution

  • Validation suite executed and results documented
  • All test failures analyzed and dispositioned
  • Tool qualification report completed and approved
  • Qualification evidence archived under configuration management

AI-Specific Controls

  • Model version pinned and recorded in configuration management
  • Prompt templates version-controlled alongside project artifacts
  • Temperature and sampling parameters locked in project configuration
  • Automated regression suite established for re-validation after model updates
  • Model update monitoring process in place (vendor notifications, periodic checks)
  • Non-determinism addressed in validation approach (multiple runs, semantic checks)

Governance and Monitoring

  • AI tool governance policy established (approved tools, permitted use cases, restrictions)
  • HITL patterns defined for each AI tool use case (see 3.2 HITL Patterns)
  • Audit trail requirements implemented (input/output logging)
  • Tool performance metrics collected (acceptance rate, error rate, override rate)
  • Periodic review schedule established (quarterly minimum)
  • Escalation procedures defined for tool failures or qualification concerns

Summary

Tool qualification for AI requires a systematic approach spanning classification, strategy, governance, and ongoing monitoring:

  1. Classification: Determine TI, TD, and TCL for each tool-use-case pair. The same tool may have different TCL assignments depending on its usage context and downstream verification.
  2. Strategy Selection: Choose from non-critical path (avoid safety impact), verification overlay (catch errors downstream), or secondary check (supplement qualified tools).
  3. Qualification Methods: Apply ISO 26262-8 Methods 1a-1d proportional to the TCL. For AI tools, Method 1c (validation suite) is typically the most practical primary method.
  4. Evidence: Scale documentation effort to the TCL level --- from simple inventory entries for TCL 1 to comprehensive qualification reports for TCL 3.
  5. AI-Specific Controls: Address non-determinism through semantic validation, manage model updates through version pinning and automated regression, and version-control all configuration components including prompts and parameters.
  6. Cross-Standard Awareness: Understand how ISO 26262 TCL maps to DO-178C TQL when working across automotive and aerospace domains.
  7. Governance and Monitoring: Maintain ongoing oversight through tool performance metrics, periodic re-validation, and defined re-qualification triggers.

The goal is not to avoid AI but to use it safely with appropriate qualification. A well-designed qualification approach enables organizations to capture the productivity benefits of AI tools while maintaining the safety integrity that standards demand.