2.0: Software Engineering Processes

Key Terms

Key acronyms used in this chapter:

  • SWE: Software Engineering process group
  • ASPICE: Automotive SPICE (Software Process Improvement and Capability dEtermination)
  • V-Model: Verification and validation lifecycle model
  • ASIL: Automotive Safety Integrity Level (A, B, C, D)
  • SYS: System Engineering process group
  • HWE: Hardware Engineering process group
  • MLE: Machine Learning Engineering process group
  • HITL: Human-in-the-Loop (human oversight pattern)
  • L0-L3: AI Automation Levels (see 03.01 for definitions)
  • BP: Base Practice (ASPICE)
  • WP: Work Product (ASPICE)
  • MC/DC: Modified Condition/Decision Coverage
  • MISRA: Motor Industry Software Reliability Association
  • AUTOSAR: Automotive Open System Architecture
  • SIL: Software-in-the-Loop
  • PIL: Processor-in-the-Loop
  • HIL: Hardware-in-the-Loop
  • TCL: Tool Confidence Level (ISO 26262)

Note: For a comprehensive glossary of all terms, see Appendix H.


Learning Objectives

After reading this chapter, you will be able to:

  • Describe the six SWE processes and their relationships
  • Apply AI augmentation across the software lifecycle
  • Produce ASPICE-compliant software work products
  • Map SWE processes to V-Model phases
  • Identify ASPICE 4.0 changes impacting software engineering
  • Apply ASIL-dependent restrictions on AI usage in SWE processes
  • Explain how SWE processes interact with SYS, HWE, and MLE disciplines

Chapter Overview

The Software Engineering (SWE) process group is the heart of embedded software development. This chapter covers the complete software lifecycle from requirements through verification.

Software Engineering V-Model

In embedded automotive systems, the SWE process group carries a uniquely heavy burden. Unlike enterprise or web software, embedded software operates under hard real-time constraints, must comply with safety standards such as ISO 26262, and runs on resource-constrained microcontrollers where every byte of RAM and every CPU cycle matters. A defect in an airbag controller or brake-by-wire system can have lethal consequences, which is why ASPICE places rigorous structure around software engineering activities.

The six SWE processes form a tightly coupled chain. Requirements flow down from system engineering (SYS.3) into software requirements (SWE.1), which are decomposed into an architecture (SWE.2) and then into detailed designs and code (SWE.3). The right side of the V-Model mirrors this decomposition with verification at unit (SWE.4), integration (SWE.5), and software (SWE.6) levels. Each verification level checks work products against the corresponding design level, ensuring defects are caught as close to their origin as possible.

What This Means: SWE is where AI-assisted development has the most direct and measurable impact. Code generation, test generation, static analysis, and coverage measurement are all activities where AI tools can dramatically accelerate delivery -- but only if the human-in-the-loop framework is rigorously applied. The stakes are too high in safety-critical embedded software to delegate accountability to an AI tool.


ASPICE 4.0 Changes for SWE

ASPICE 4.0 introduced several significant changes to the Software Engineering process group compared to version 3.1. Understanding these changes is essential for teams transitioning to the new standard and for correctly applying AI augmentation.

Structural Changes

| Aspect | ASPICE 3.1 | ASPICE 4.0 | Impact on AI Integration |
|---|---|---|---|
| Process count | SWE.1 through SWE.6 | SWE.1 through SWE.6 (retained) | No change in scope |
| Terminology | "Test" used throughout | "Verification" replaces "Test" | AI prompts and templates must use updated terminology |
| SWE.4 name | Software Unit Verification | Software Unit Verification (unchanged) | Minimal |
| SWE.5 name | Software Integration and Integration Test | Software Component Verification and Integration Verification | AI work product generators must use new naming |
| SWE.6 name | Software Qualification Test | Software Verification | Major -- all references to "qualification" must be updated |
| Work products | Specific WP IDs | Generic Information Item types (e.g., 08-60, 03-50) | AI templates must map to new generic WP IDs |

Key Conceptual Changes

| Change | Description | Practical Effect |
|---|---|---|
| Verification Measures | ASPICE 4.0 replaces "test cases" with "verification measures," broadening the concept to include analysis, review, simulation, and formal methods -- not just executable tests. | AI tools should generate verification measures appropriate to the technique, not just test code. |
| Generic Work Products | Work products are now identified by generic type (e.g., 08-60 "Verification Measure") rather than process-specific names. The process context determines the specific meaning. | AI-generated work products must carry correct context metadata. |
| Bidirectional Traceability | Explicit requirement for bidirectional traceability between verification measures and requirements, and between verification results and verification measures. | AI traceability checkers must validate both directions. |
| Release Scope | SWE.5 and SWE.6 now explicitly reference "release scope" for verification measure selection, including regression criteria. | AI must consider release scope when suggesting verification measure selections. |
| Consistency Evidence | New emphasis on "consistency evidence" (13-51) as a distinct work product. | AI consistency checkers produce a traceable artifact, not just a report. |

What This Means: Teams moving from ASPICE 3.1 to 4.0 must update their AI prompt templates, work product generators, and traceability tooling. The shift from "test" to "verification" is more than cosmetic -- it opens the door for AI to suggest a broader range of verification techniques including formal analysis and simulation, not just executable test cases.


SWE Process Summary

| Process | Purpose | AI Automation Level |
|---|---|---|
| SWE.1 | Software Requirements Analysis | L1-L2 |
| SWE.2 | Software Architectural Design | L1 |
| SWE.3 | Software Detailed Design and Unit Construction | L2 |
| SWE.4 | Software Unit Verification | L2-L3 |
| SWE.5 | Software Component Verification and Integration Verification | L2 |
| SWE.6 | Software Verification | L1-L2 |

What This Means: The SWE process group covers the complete software lifecycle from requirements to software verification. Notice that AI automation is highest for coding (SWE.3) and unit verification (SWE.4), where AI tools like GitHub Copilot excel at generating code and tests. Higher-level processes like requirements and architecture require more human judgment.

Detailed AI Integration per Process

| Process | AI Capabilities | Example AI Actions | HITL Requirement | Key Risk |
|---|---|---|---|---|
| SWE.1 | NLP analysis of system requirements, derivation of SW requirements, consistency and completeness checking, attribute population | AI parses SYS.2 output and drafts SW requirements with rationale, traces, and attributes; AI flags ambiguities and missing coverage | Human validates every derived requirement, resolves ambiguities, approves final specification | AI may hallucinate requirements not implied by system requirements |
| SWE.2 | Pattern-based architecture suggestions, interface documentation generation, AUTOSAR component mapping, resource estimation | AI proposes component decomposition based on requirement clustering; generates interface definitions in ARXML or header files | Human makes all architectural decisions; AI suggestions are starting points only | AI lacks system-level context for timing and resource trade-offs |
| SWE.3 | Code generation from design, MISRA-compliant code scaffolding, unit-level documentation, code completion | AI generates function bodies from detailed design specifications; applies MISRA rule templates; generates Doxygen comments | Human reviews every generated function; static analysis must pass before merge | Generated code may compile but contain subtle semantic errors |
| SWE.4 | Unit test generation, boundary value analysis, equivalence class partitioning, mock generation, coverage measurement | AI generates test cases targeting MC/DC coverage; creates mocks for hardware abstraction layers; identifies uncovered branches | Human reviews test intent and oracle correctness; validates coverage metrics | AI-generated tests may achieve coverage without meaningful assertions |
| SWE.5 | Integration test generation from architecture, interface test derivation, sequence test construction | AI generates integration tests verifying component interfaces defined in SWE.2; creates sequence diagrams from test specifications | Human validates integration test strategy; reviews interface assumptions | AI may not understand implicit timing dependencies between components |
| SWE.6 | Software verification measure specification, regression selection, result analysis, report generation | AI drafts verification measures from SW requirements; selects regression scope; analyzes HIL results for anomalies | Human approves verification strategy; validates pass/fail judgments; signs off reports | AI result analysis may miss environmental or intermittent failures |

AI Integration Strategy

The overarching approach to AI in software engineering follows a principle of graduated autonomy: AI autonomy increases as the activity moves from creative and safety-critical decisions toward repetitive and mechanically verifiable tasks.

Guiding Principles

| Principle | Description |
|---|---|
| Human Accountability | A named human engineer is accountable for every work product, regardless of how much AI contributed to its creation. ASPICE capability assessments evaluate the process, not the tool. |
| AI as Accelerator | AI generates drafts, suggestions, and analyses. It does not make decisions. The value of AI lies in reducing cycle time for first drafts and catching errors humans might miss during review. |
| Evidence Preservation | Every AI contribution must be traceable. If an AI generated a requirement, test, or code block, the provenance must be recorded in the work product metadata or version control history. |
| Tool Qualification Alignment | AI tools used in safety-relevant processes must meet ISO 26262 tool qualification requirements. The Tool Confidence Level (TCL) determines the rigor of qualification evidence. See Chapter 03.04. |
| Incremental Adoption | Teams should start with low-risk, high-value applications (SWE.4 unit test generation) before expanding to higher-risk areas (SWE.1 requirements derivation). |
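The evidence-preservation principle can be made concrete as a small provenance record attached to each AI-assisted work product. A minimal sketch in Python -- all field and function names here are illustrative, not part of any standard:

```python
import hashlib
import json
from dataclasses import dataclass, asdict

@dataclass
class AiProvenance:
    """Provenance metadata for an AI-assisted work product (field names are illustrative)."""
    work_product_id: str   # generic information item type, e.g. "08-60"
    process: str           # ASPICE process context, e.g. "SWE.4"
    tool: str              # AI tool with pinned version
    prompt_hash: str       # hash of the prompt; full text lives in version control
    output_hash: str       # hash of the raw AI output, before human edits
    reviewer: str          # the accountable human engineer

def record_provenance(work_product_id, process, tool, prompt, output, reviewer):
    """Build a provenance record; prompt and output are hashed to keep it small."""
    sha = lambda text: hashlib.sha256(text.encode()).hexdigest()
    return AiProvenance(work_product_id, process, tool, sha(prompt), sha(output), reviewer)

rec = record_provenance("08-60", "SWE.4", "copilot@1.2.3",
                        "Generate unit tests for clamp()", "TEST(...) { ... }", "j.doe")
print(json.dumps(asdict(rec), indent=2))
```

Storing hashes rather than full text keeps the record auditable without duplicating artifacts that already live in the repository.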

Automation Level Decision Framework

When to allow L2 or L3 automation: Only when the AI output can be independently verified by an automated tool (e.g., compiler, static analyzer, test framework) AND a human reviewer confirms the output satisfies the intent of the corresponding ASPICE base practice.

| Decision Factor | L0-L1 (Human-Led) | L2 (AI-Assisted) | L3 (AI-Led, Human-Supervised) |
|---|---|---|---|
| Output verifiability | Low -- requires expert judgment | Medium -- partially automatable | High -- fully automatable verification |
| Safety impact | Direct safety relevance | Indirect safety relevance | No direct safety relevance |
| ASPICE BP coverage | Multiple BPs addressed | Single BP addressed | Mechanical/repetitive aspect of BP |
| Example | Architectural decisions (SWE.2) | Code generation (SWE.3) | Unit test execution (SWE.4) |
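The decision factors above can be combined into a simple gating function, where any single factor caps the permissible level. A sketch -- the numeric mapping is illustrative, not normative:

```python
def max_automation_level(verifiability: str, safety_impact: str, bp_scope: str) -> str:
    """Return the highest permissible AI automation level for an activity.

    verifiability: "low" | "medium" | "high"   (can the output be checked automatically?)
    safety_impact: "direct" | "indirect" | "none"
    bp_scope:      "multiple" | "single" | "mechanical"
    Each factor independently caps the level; the strictest cap wins.
    """
    caps = [
        {"low": 1, "medium": 2, "high": 3}[verifiability],
        {"direct": 1, "indirect": 2, "none": 3}[safety_impact],
        {"multiple": 1, "single": 2, "mechanical": 3}[bp_scope],
    ]
    return f"L{min(caps)}"

# Architectural decisions: expert judgment, direct safety relevance -> L1
print(max_automation_level("low", "direct", "multiple"))   # L1
# Unit test execution: fully checkable, mechanical -> L3
print(max_automation_level("high", "none", "mechanical"))  # L3
```

Encoding the rule this way makes the "strictest factor wins" logic explicit and easy to audit in a process handbook or CI gate.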

Safety Considerations

ASIL-Dependent Restrictions on AI Usage

ISO 26262 defines four Automotive Safety Integrity Levels (ASIL A through D), with ASIL D requiring the most rigorous development practices. AI usage in SWE processes must be calibrated to the ASIL of the software component being developed.

| ASIL | SWE.1 Requirements | SWE.2 Architecture | SWE.3 Code Generation | SWE.4 Unit Verification | SWE.5 Integration Verification | SWE.6 Software Verification |
|---|---|---|---|---|---|---|
| QM | L2 -- AI drafts, human reviews | L1-L2 -- AI suggests patterns | L2 -- AI generates, human reviews | L2-L3 -- AI generates and executes | L2 -- AI generates tests | L2 -- AI drafts measures |
| ASIL A | L1-L2 -- AI drafts, human validates | L1 -- AI suggests, human decides | L2 -- AI generates, static analysis required | L2 -- AI generates, coverage verified | L2 -- AI generates, human validates | L1-L2 -- AI drafts, human validates |
| ASIL B | L1 -- AI assists, human authors | L1 -- AI documents, human designs | L1-L2 -- AI assists, MISRA compliance mandatory | L2 -- AI generates, MC/DC required | L1-L2 -- AI assists, human validates | L1 -- AI assists, human authors |
| ASIL C | L1 -- AI assists, dual review | L1 -- Human designs, AI documents | L1 -- AI assists only, full MISRA + review | L2 -- AI generates, independent review | L1 -- AI assists, independent review | L1 -- AI assists, independent review |
| ASIL D | L0-L1 -- Human authors, AI checks | L0-L1 -- Human designs, AI checks | L1 -- AI assists only, formal verification recommended | L1-L2 -- AI generates, independent verification | L1 -- Human authors, AI assists execution | L1 -- Human authors, AI assists execution |
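In tooling, the matrix above can be encoded as a lookup that a CI gate consults before an AI tool is invoked. A sketch with an abbreviated matrix (entries taken from the table, using the upper bound of each range; the deny-by-default fallback is a design choice, not part of the table):

```python
# Upper bound of the permitted automation range per (ASIL, process),
# abbreviated to the SWE.3/SWE.4 columns of the table above.
ASIL_AI_CAP = {
    ("QM", "SWE.3"): 2,     ("QM", "SWE.4"): 3,
    ("ASIL A", "SWE.3"): 2, ("ASIL A", "SWE.4"): 2,
    ("ASIL B", "SWE.3"): 2, ("ASIL B", "SWE.4"): 2,
    ("ASIL C", "SWE.3"): 1, ("ASIL C", "SWE.4"): 2,
    ("ASIL D", "SWE.3"): 1, ("ASIL D", "SWE.4"): 2,
}

def ai_usage_allowed(asil: str, process: str, requested_level: int) -> bool:
    """Deny by default: unknown (ASIL, process) combinations fall back to L0 (human-led)."""
    return requested_level <= ASIL_AI_CAP.get((asil, process), 0)

print(ai_usage_allowed("QM", "SWE.4", 3))      # AI may generate and execute
print(ai_usage_allowed("ASIL D", "SWE.3", 2))  # ASIL D code generation capped at L1
```

A gate like this makes the ASIL calibration enforceable rather than advisory: the pipeline refuses to run an AI step whose requested level exceeds the cap.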

What This Means: As ASIL increases, AI transitions from a generator role to a checker role. At ASIL D, AI should primarily be used to verify human-authored work products rather than to generate them. This reflects the ISO 26262 principle that higher safety integrity requires greater independence and rigor in verification.

Safety-Critical AI Usage Rules

| Rule | Rationale |
|---|---|
| AI-generated code for ASIL C/D components must undergo independent code review by a qualified engineer who did not prompt the AI. | Independence requirement per ISO 26262 Part 6, Table 2. |
| AI-generated test cases must include a human-authored test oracle that specifies the expected behavior based on requirements, not based on observed code behavior. | Prevents "teaching to the test" where AI tests pass because they mirror code logic rather than requirement intent. |
| AI tools used for ASIL B-D work products must be qualified per ISO 26262 Part 8, Clause 11. Tool Confidence Level (TCL) must be assessed for each AI tool. | Unqualified tools may introduce systematic faults. AI tools are particularly prone to non-deterministic behavior that complicates TCL assessment. |
| AI-generated architecture suggestions for ASIL C/D must be validated against safety analysis outputs (FMEA, FTA). | AI has no awareness of hazard analysis results unless explicitly provided. |
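The test-oracle rule is easiest to see by contrast. Below, a requirement-based oracle asserts literal values taken from the requirement, while the weak test merely re-derives its expectation from the logic of the code under test -- so it passes even if the limit is wrong. Python is used for brevity; the same pattern applies to C unit test frameworks. The function and requirement ID are hypothetical:

```python
def clamp_torque(request_nm: float) -> float:
    """Unit under test: limit a torque request to [0, 250] Nm (hypothetical SWE-REQ-042)."""
    return max(0.0, min(request_nm, 250.0))

# Weak oracle: mirrors the implementation. If the 250 Nm limit were coded as 2500,
# this test would happily mirror the bug and still pass.
def test_clamp_weak():
    assert clamp_torque(300.0) == max(0.0, min(300.0, 250.0))

# Requirement-based oracle: expected values are literals transcribed from the
# requirement by a human, independent of the code's own arithmetic.
def test_clamp_against_requirement():
    assert clamp_torque(300.0) == 250.0   # SWE-REQ-042: upper limit 250 Nm
    assert clamp_torque(-5.0) == 0.0      # SWE-REQ-042: no negative torque
    assert clamp_torque(100.0) == 100.0   # pass-through inside the permitted range

test_clamp_weak()
test_clamp_against_requirement()
```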

Process Interactions

How SWE Processes Interact with SYS, HWE, and MLE

The SWE process group does not operate in isolation. It receives inputs from system engineering, coordinates with hardware engineering, and increasingly interfaces with machine learning engineering for systems that incorporate AI/ML components.

| Interaction | From | To | Information Exchanged | AI Support |
|---|---|---|---|---|
| System to Software Requirements | SYS.2 | SWE.1 | System requirements allocated to software | AI derives SW requirements from SYS requirements |
| Architecture Allocation | SYS.3 | SWE.2 | Software architecture constraints, resource budgets | AI maps system architecture elements to SW components |
| HW-SW Interface | HWE.1 | SWE.2, SWE.3 | Register maps, memory maps, timing constraints, pin assignments | AI generates hardware abstraction layer code from HW specifications |
| SW-HW Integration Feedback | SWE.5 | HWE.1 | Interface defects, timing violations | AI analyzes integration test failures for HW-SW root cause |
| Software to System Integration | SWE.6 | SYS.4 | Verified integrated software, verification results | AI compiles software verification evidence for system integration |
| ML Model Integration | MLE.3 | SWE.3 | Trained model artifacts, inference API, resource requirements | AI assists with model integration code, input/output validation |
| ML Verification | MLE.4 | SWE.4, SWE.5 | Model performance metrics, edge case data | AI generates test cases targeting ML model boundary conditions |
| Change Propagation | SYS.2 (change) | SWE.1-SWE.6 | Changed system requirements | AI performs impact analysis across all SWE work products |
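The HW-SW interface row above is one of the most mechanically automatable exchanges: a register map delivered by hardware engineering can be rendered directly into HAL header code. A minimal sketch -- the peripheral name, addresses, and register layout are invented for illustration:

```python
def render_register_header(peripheral: str, base: int, registers: dict) -> str:
    """Render a C header fragment of register-access macros from a HW register map.

    registers maps register name -> byte offset from the peripheral base address.
    Output is sorted by offset so the header mirrors the memory map.
    """
    lines = [f"/* Auto-generated from the {peripheral} register map -- do not edit. */"]
    for name, offset in sorted(registers.items(), key=lambda kv: kv[1]):
        lines.append(
            f"#define {peripheral}_{name}  (*(volatile uint32_t *)0x{base + offset:08X}u)"
        )
    return "\n".join(lines)

header = render_register_header("CAN0", 0x4000A000, {"CTRL": 0x00, "STAT": 0x04, "TXBUF": 0x10})
print(header)
```

Because the generator is deterministic and the input is a reviewed hardware work product, this kind of AI-assisted codegen sits comfortably at the high-automation end of the spectrum.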

Cross-Discipline Traceability

| Traceability Link | Direction | ASPICE Requirement | AI Role |
|---|---|---|---|
| System Req --> SW Req | Forward | SWE.1 BP4 | AI generates forward trace, flags missing allocations |
| SW Req --> System Req | Backward | SWE.1 BP4 | AI verifies every SW req traces to a system req |
| SW Req --> SW Architecture | Forward | SWE.2 BP5 | AI checks all requirements allocated to components |
| SW Architecture --> Detailed Design | Forward | SWE.3 BP6 | AI verifies design covers all architectural elements |
| SW Req --> Verification Measure | Forward | SWE.6 BP4 | AI identifies requirements without verification measures |
| Verification Result --> Verification Measure | Backward | SWE.6 BP4 | AI links test results to their specifications |

What This Means: Traceability is the connective tissue of ASPICE compliance. AI excels at maintaining and checking traceability links across large work product sets, catching gaps that manual review would miss. However, the human must validate that the traces are semantically correct, not just structurally present.
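Structurally, the bidirectional checks in the table reduce to set arithmetic over the trace links. A sketch, with invented requirement IDs:

```python
def trace_gaps(upstream_ids, downstream_ids, links):
    """Check bidirectional traceability between two work product levels.

    links is a set of (upstream_id, downstream_id) pairs, e.g. (system req, SW req).
    Returns:
      forward_gaps  -- upstream items with no downstream coverage (missing allocation)
      backward_gaps -- downstream items with no upstream source (possible hallucination)
    """
    covered_up = {u for u, _ in links}
    covered_down = {d for _, d in links}
    forward_gaps = set(upstream_ids) - covered_up
    backward_gaps = set(downstream_ids) - covered_down
    return forward_gaps, backward_gaps

sys_reqs = {"SYS-1", "SYS-2", "SYS-3"}
sw_reqs = {"SW-10", "SW-11", "SW-12"}
links = {("SYS-1", "SW-10"), ("SYS-2", "SW-11")}

fwd, bwd = trace_gaps(sys_reqs, sw_reqs, links)
print(fwd)  # system requirement(s) without SW coverage
print(bwd)  # SW requirement(s) with no system source
```

This catches structural gaps only; as the paragraph above notes, a human must still confirm that each surviving link is semantically correct.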


V-Model Alignment

Each SWE process occupies a specific position in the V-Model. The left side represents design and decomposition activities, while the right side represents verification and integration activities. The horizontal connections between left and right sides define which design artifact each verification level checks against.

| V-Model Position | SWE Process | Activity | Checks Against | Typical Environment |
|---|---|---|---|---|
| Left -- Top | SWE.1 Software Requirements Analysis | Derive and specify SW requirements from system requirements | SYS.2 System Requirements | Requirements management tool |
| Left -- Middle | SWE.2 Software Architectural Design | Define SW component structure, interfaces, resource allocation | SWE.1 SW Requirements | Architecture modeling tool |
| Left -- Bottom | SWE.3 Software Detailed Design and Unit Construction | Create detailed designs and implement source code | SWE.2 SW Architecture | IDE, code generator |
| Right -- Bottom | SWE.4 Software Unit Verification | Verify individual units against detailed design | SWE.3 Detailed Design | Host compiler, unit test framework |
| Right -- Middle | SWE.5 Software Component Verification and Integration Verification | Verify component integration against architecture | SWE.2 SW Architecture | SIL/PIL environment |
| Right -- Top | SWE.6 Software Verification | Verify integrated software against SW requirements | SWE.1 SW Requirements | HIL/target environment |

V-Model Horizontal Traceability

| Left Side (Design) | Right Side (Verification) | What Is Verified |
|---|---|---|
| SWE.1 SW Requirements | SWE.6 Software Verification | Does the integrated software satisfy all SW requirements? |
| SWE.2 SW Architecture | SWE.5 Integration Verification | Do integrated components interact correctly per architecture? |
| SWE.3 Detailed Design | SWE.4 Unit Verification | Does each unit behave as specified in the detailed design? |

What This Means: The V-Model is not just a diagram -- it dictates which work products serve as the reference for each verification level. AI tools must be configured to use the correct reference document when generating verification measures. A unit test (SWE.4) that checks against requirements instead of detailed design is incorrectly scoped, even if it passes.


AI Integration Highlights

Where AI Provides Most Value

| Process | AI Contribution | Value |
|---|---|---|
| SWE.1 | Requirements derivation, consistency | Medium |
| SWE.2 | Pattern suggestions, documentation | Medium |
| SWE.3 | Code generation, completion | High |
| SWE.4 | Test generation, execution | High |
| SWE.5 | Test case generation | Medium |
| SWE.6 | Coverage analysis | Medium |

SWE-Specific HITL Patterns

| Pattern | SWE Application |
|---|---|
| Reviewer | Code review, test review |
| Collaborator | Architecture exploration |
| Monitor | CI/CD pipeline, test execution |
| Approver | Release approval |

Process Relationships

The following diagram illustrates the data flow between SWE processes, showing how requirements cascade into architecture, design, and code, with verification activities feeding back at each level.

[Diagram: Software Engineering Process Flow]


Key Work Products

Note: Work Product IDs follow ASPICE 4.0 standard numbering.

| WP ID | Work Product | Producer | AI Role |
|---|---|---|---|
| 17-08 | SW requirements specification | SWE.1 | Draft generation |
| 04-04 | SW architecture description | SWE.2 | Documentation |
| 04-05 | SW detailed design | SWE.3 | Generation |
| 11-05 | Software Unit | SWE.3 | Generation |
| 08-60 | Verification Measure (unit) | SWE.4 | Generation |
| 03-50 | Verification Measure Data (unit) | SWE.4 | Analysis |
| 08-60 | Verification Measure (component/integration) | SWE.5 | Generation |
| 03-50 | Verification Measure Data (component/integration) | SWE.5 | Analysis |
| 08-60 | Verification Measure (software) | SWE.6 | Generation |
| 03-50 | Verification Measure Data (software) | SWE.6 | Analysis |

Common Challenges in AI-Augmented Software Engineering

Teams adopting AI across SWE processes encounter recurring challenges. Awareness of these pitfalls enables proactive mitigation.

Technical Challenges

| Challenge | Affected Processes | Description | Mitigation |
|---|---|---|---|
| Hallucinated requirements | SWE.1 | AI derives requirements that are not implied by or traceable to system requirements. The generated text reads plausibly but introduces scope creep or contradictions. | Require bidirectional traceability review. Every AI-generated SW requirement must cite a specific system requirement source. |
| Architecture blind spots | SWE.2 | AI suggests component decompositions that ignore real-time constraints, memory partitioning, or AUTOSAR layering rules specific to the project. | Provide AI with project-specific architecture constraints as context. Human architect makes all structural decisions. |
| Syntactically correct but semantically wrong code | SWE.3 | AI-generated code compiles and passes static analysis but contains logic errors, off-by-one faults, or incorrect state machine transitions. | Mandatory human code review with focus on behavioral correctness. Pair AI generation with model-based testing. |
| Shallow test coverage | SWE.4 | AI achieves high line/branch coverage by generating tests that exercise code paths but use weak assertions, effectively testing that "code runs" rather than "code behaves correctly." | Require test oracles derived from requirements. Review assertion quality independently from coverage metrics. |
| Integration timing issues | SWE.5 | AI generates integration tests that pass in simulation but fail on target due to timing, interrupt priorities, or DMA conflicts not modeled in SIL. | Validate critical integration tests on target hardware. AI can flag tests that are timing-sensitive. |
| Non-deterministic AI output | All | The same prompt may produce different outputs across runs, complicating reproducibility and audit trails. | Pin AI model versions. Store prompts and outputs in version control. Use temperature=0 where available. |
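The non-determinism mitigation -- pin the model version, fix the temperature, store prompts and outputs -- can be sketched as a small reproducibility check that flags drift between runs. Names here are illustrative, and the model identifier is hypothetical:

```python
import hashlib

def run_id(model: str, model_version: str, prompt: str, temperature: float) -> str:
    """Identity of an AI generation request; changing any input creates a new run.

    With temperature 0, the same run_id is expected to reproduce the same output,
    so a differing output hash under an identical run_id indicates drift.
    """
    key = f"{model}|{model_version}|t={temperature}|{prompt}"
    return hashlib.sha256(key.encode()).hexdigest()[:16]

audit_log = {}  # run_id -> first observed output hash; kept in version control in practice

def check_reproducibility(rid: str, output: str) -> bool:
    """Record the first output for a run; report whether later outputs match it."""
    out_hash = hashlib.sha256(output.encode()).hexdigest()
    first = audit_log.setdefault(rid, out_hash)
    return first == out_hash

rid = run_id("example-model", "2024-06", "Generate unit tests for clamp()", 0.0)
print(check_reproducibility(rid, "TEST A"))  # first run is recorded
print(check_reproducibility(rid, "TEST A"))  # identical rerun -> reproducible
print(check_reproducibility(rid, "TEST B"))  # drift -> fails the audit check
```

A drift flag does not by itself mean the output is wrong, but it does mean the earlier qualification evidence no longer describes what the tool produces.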

Organizational Challenges

| Challenge | Description | Mitigation |
|---|---|---|
| Over-reliance on AI | Engineers accept AI output without critical review, assuming "the AI knows best." Quality degrades because the human review step becomes perfunctory. | Establish review checklists specific to AI-generated artifacts. Measure review defect detection rates. |
| Skill erosion | Junior engineers learn to prompt AI rather than understanding the underlying engineering principles. They cannot debug AI-generated code or evaluate architectural trade-offs. | Pair AI usage with engineering training. Require junior engineers to complete manual exercises before using AI tools. |
| Assessment ambiguity | ASPICE assessors may question whether AI-generated work products satisfy process outcomes if the human contribution is unclear. | Maintain clear records of human decisions and AI contributions. Demonstrate that HITL patterns were followed. |
| Tool qualification burden | Each AI tool requires qualification per ISO 26262 Part 8. Frequent model updates invalidate previous qualification evidence. | Establish a tool qualification framework that accommodates incremental re-qualification. See Chapter 03.04. |

Embedded Software Considerations

Safety-Critical Aspects

| Aspect | SWE Impact |
|---|---|
| MISRA compliance | SWE.3, SWE.4 |
| Coverage requirements | SWE.4, SWE.5, SWE.6 |
| Timing analysis | SWE.2, SWE.4 |
| Memory constraints | SWE.2, SWE.3 |
| Fault handling | SWE.1, SWE.3 |

AUTOSAR Context

| Layer | SWE Relevance |
|---|---|
| Application | SWE.1-SWE.6 (full lifecycle) |
| RTE | SWE.2, SWE.3 (generated) |
| BSW | SWE.3, SWE.4 (configuration) |
| MCAL | SWE.3, SWE.4 (low-level) |

Tool Ecosystem

An overview of tool categories that support SWE processes with AI augmentation. Detailed tool recommendations are provided in Chapter 06.07.

Tool Categories by SWE Process

| SWE Process | Tool Category | Representative Tools | AI Augmentation |
|---|---|---|---|
| SWE.1 | Requirements Management | IBM DOORS, Polarion, Jama Connect, codebeamer | AI-powered NLP for requirement quality checks, completeness analysis, and automatic attribute population |
| SWE.2 | Architecture Modeling | Enterprise Architect, PTC Integrity Modeler, AUTOSAR tooling (Vector DaVinci, ETAS ISOLAR) | AI-suggested component decomposition, interface generation, resource estimation |
| SWE.3 | IDE and Code Generation | VS Code + Copilot, CLion, MATLAB/Simulink, TargetLink, Embedded Coder | AI code completion, MISRA-aware generation, code review suggestions |
| SWE.3 | Static Analysis | Polyspace, QAC/PRQA, Coverity, PC-lint, Axivion | AI-enhanced rule checking, defect prediction, false positive filtering |
| SWE.4 | Unit Test Frameworks | Google Test, Unity (C), VectorCAST, Tessy, Cantata | AI test generation, boundary value analysis, mock generation |
| SWE.4 | Coverage Analysis | gcov/lcov, BullseyeCoverage, VectorCAST, Testwell CTC++ | AI-driven coverage gap identification and test suggestion |
| SWE.5 | Integration Test | VectorCAST, dSPACE VEOS (SIL), ETAS LABCAR (PIL) | AI integration test derivation from architecture, sequence test generation |
| SWE.6 | HIL Test | dSPACE, ETAS, NI VeriStand, IPG CarMaker | AI test sequencing, result analysis, anomaly detection |
| All | Traceability | Reqtify, Capra, built-in tool links | AI traceability gap detection, impact analysis, consistency checking |
| All | CI/CD | Jenkins, GitLab CI, GitHub Actions, Zuul | AI-driven pipeline orchestration, flaky test detection, build failure analysis |

What This Means: No single tool covers all SWE processes. A typical automotive project uses 8-15 tools across the SWE lifecycle. AI augmentation is most mature in IDE/code generation and unit testing tools, and least mature in requirements management and HIL testing tools. Teams should prioritize AI integration where tool support is strongest and value is highest.


Implementation Roadmap

Adopting AI across SWE processes should follow a phased approach that builds confidence, tooling infrastructure, and organizational capability incrementally.

Phase 1: Foundation (Months 1-3)

| Activity | SWE Process | Goal | Success Metric |
|---|---|---|---|
| Deploy AI-assisted IDE | SWE.3 | Enable code completion for non-safety code | Developer adoption rate > 70% |
| Pilot AI unit test generation | SWE.4 | Generate unit tests for one QM-rated module | Test generation time reduced by 40% |
| Establish HITL review checklist | All | Define review criteria for AI-generated artifacts | Checklist approved by QA |
| Assess tool qualification needs | All | Identify TCL for each AI tool | TCL assessment documented |

Phase 2: Expansion (Months 4-6)

| Activity | SWE Process | Goal | Success Metric |
|---|---|---|---|
| AI requirements consistency checking | SWE.1 | Automated consistency checks on SW requirements | Consistency defects found > manual baseline |
| AI code generation for ASIL A/B | SWE.3 | Extend AI code generation to safety-relevant modules | MISRA compliance rate of AI-generated code > 95% |
| AI integration test derivation | SWE.5 | Generate integration tests from architecture model | Interface coverage > 80% |
| AI traceability checking | All | Automated bidirectional traceability verification | Traceability gap detection rate > 90% |

Phase 3: Optimization (Months 7-12)

| Activity | SWE Process | Goal | Success Metric |
|---|---|---|---|
| AI requirements derivation | SWE.1 | AI drafts SW requirements from system requirements | Review cycle time reduced by 30% |
| AI architecture documentation | SWE.2 | Automated architecture description generation | Documentation completeness > 90% |
| AI verification measure specification | SWE.6 | AI drafts software verification measures | Verification coverage gaps reduced by 50% |
| AI result analysis and reporting | SWE.4-SWE.6 | Automated anomaly detection in test results | False negative rate < 5% |
| Continuous improvement | All | Refine prompts, templates, and workflows based on lessons learned | Process capability level maintained or improved |

Phase 4: Maturity (Months 12+)

| Activity | SWE Process | Goal | Success Metric |
|---|---|---|---|
| Full AI pipeline integration | All | AI augmentation embedded in CI/CD for all SWE processes | End-to-end automation with HITL gates |
| ASIL C/D AI assistance | SWE.3, SWE.4 | Controlled AI usage for highest safety levels with qualified tools | Tool qualification evidence maintained |
| Cross-project AI model tuning | All | Fine-tune AI models on project-specific coding standards and patterns | Model output acceptance rate > 80% |
| ASPICE assessment readiness | All | Demonstrate AI-augmented processes to assessors | Capability Level 2+ achieved |

What This Means: Do not attempt to introduce AI across all SWE processes simultaneously. Start where the risk is lowest and the value is highest (SWE.3 code completion, SWE.4 unit test generation), build confidence and tooling, then expand to higher-risk areas. Each phase should include a retrospective to capture lessons learned before proceeding.


Cross-References

| Topic | See Also |
|---|---|
| AI Tools for Code Generation | Part III -- Chapter 14 |
| AI Tools for Testing | Part III -- Chapter 15 |
| Tool Qualification Framework | 03.04 |
| HITL Patterns | 03.02 |
| Automation Levels | 03.01 |
| ISO 26262 Integration | Part IV -- Chapter 18 |
| Practical SWE Implementation | Part IV -- Chapter 17 |
| MLE Process Group | Chapter 08 |

Chapter Sections

| Section | Topic | AI Focus |
|---|---|---|
| 06.01 | SWE.1 Software Requirements Analysis | Derivation, consistency |
| 06.02 | SWE.2 Software Architectural Design | Patterns, documentation |
| 06.03 | SWE.3 Software Detailed Design and Unit Construction | Code generation |
| 06.04 | SWE.4 Software Unit Verification | Test generation |
| 06.05 | SWE.5 Software Component Verification and Integration Verification | Test automation |
| 06.06 | SWE.6 Software Verification | Coverage analysis |
| 06.07 | AI Tools for Software Engineering | Tool recommendations |

Prerequisites

| Prerequisite | Covered In |
|---|---|
| SYS processes | Chapter 5 |
| Automation levels | 03.01 |
| HITL patterns | 03.02 |
| Tool qualification | 03.04 |