9: Support Processes


What You'll Learn

Here's what you'll take away from this section:

  • Describe the SUP processes and their support role
  • Apply AI augmentation to support activities
  • Produce ASPICE-compliant support work products
  • Integrate support processes with development
  • Understand the ASPICE 4.0 changes to the SUP process group
  • Design an implementation roadmap for AI adoption in support processes
  • Avoid common pitfalls when integrating AI into quality, configuration, and problem management

Chapter Overview

The Support (SUP) process group provides essential activities that enable and enhance the primary engineering processes. While system engineering (SYS), software engineering (SWE), and hardware engineering (HWE) define what is built, support processes govern how it is built, verified, tracked, and controlled. In safety-critical embedded systems development, support processes are not optional overhead -- they are the connective tissue that ensures traceability, reproducibility, and evidence of compliance.

Support Processes Overview

Why Support Processes Matter in Safety-Critical Development

In regulated industries such as automotive (ISO 26262), medical devices (IEC 62304), industrial automation (IEC 61508), and avionics (DO-178C), support processes serve three critical functions:

  1. Evidence Generation: Every engineering decision must be recorded, traceable, and auditable. Support processes ensure that work products, baselines, and change histories exist in a form that assessors, auditors, and regulatory bodies can evaluate.

  2. Process Discipline: Without rigorous configuration management, quality assurance, and change control, even technically excellent engineering work can fail an ASPICE assessment. A Capability Level 2 rating requires that processes are managed, and support processes provide the management infrastructure.

  3. Defect Prevention and Resolution: Problems found late in integration or in the field cost orders of magnitude more to fix than those caught early. SUP.9 (Problem Resolution) and SUP.10 (Change Request Management) create structured pathways for identifying, analyzing, and resolving defects before they propagate.

Key Insight: In ASPICE assessments, SUP processes are among the most frequently under-rated. Organizations that invest heavily in SWE activities but neglect quality assurance and configuration management routinely fail to achieve Capability Level 2 across their process profiles. Support processes are not secondary -- they are foundational.

The Support-Engineering Relationship

Support processes do not operate in isolation. They form a bidirectional relationship with every engineering process group:

  • SYS/SWE/HWE/MLE produce work products that SUP.8 (Configuration Management) controls and baselines.
  • SUP.1 (Quality Assurance) monitors whether SYS/SWE/HWE/MLE processes and their work products comply with plans and standards.
  • SUP.9/SUP.10 handle defects and change requests that arise during SYS/SWE/HWE/MLE execution.
  • SUP.11 (ML Data Management), new in ASPICE 4.0, supports MLE processes with data quality, provenance tracking, and bias detection.

This bidirectional coupling means that AI integration in support processes has a multiplier effect: improving the efficiency and consistency of SUP activities accelerates all engineering processes they support.

ASPICE 4.0 Note: SUP.2 (Verification) has been removed in ASPICE 4.0. Verification activities are now integrated into the respective SYS and SWE processes. SUP.11 (Machine Learning Data Management) is new in ASPICE 4.0.


ASPICE 4.0 Changes for SUP Processes

ASPICE 4.0 introduced significant structural and content changes to the Support process group compared to ASPICE 3.1. Understanding these changes is essential for organizations migrating their process frameworks and for new implementations targeting 4.0 compliance.

Structural Changes

Change | ASPICE 3.1 | ASPICE 4.0 | Impact
SUP.2 removed | SUP.2 Verification was a standalone process | Verification activities distributed into SYS.2-5 and SWE.2-6 | Each engineering process now owns its verification; no separate "verification process"
SUP.11 added | No ML data management | SUP.11 Machine Learning Data Management introduced | Organizations using ML/AI in products must manage training/validation data as a controlled asset
SUP.4, SUP.7 removed | SUP.4 (Joint Review) and SUP.7 (Documentation) were standalone processes (SUP.3, SUP.5, and SUP.6 were already unused numbering gaps inherited from ISO/IEC 15504) | These processes are removed from the core PAM | Activities absorbed into other processes or covered by generic practices
Outcome wording | Outcome statements used "shall" language | Outcome statements rephrased for clarity and measurability | Assessors evaluate observable outcomes, not prescriptive activities
Work product alignment | WP IDs partially overlapped across processes | Cleaner WP ID scheme with explicit ownership | Less ambiguity about which process produces which work product

SUP.2 Verification Redistribution

The removal of SUP.2 as a standalone process is the most significant structural change. In ASPICE 3.1, SUP.2 defined generic verification activities (reviews, inspections, walkthroughs). In ASPICE 4.0, each engineering process includes its own verification base practices:

Former SUP.2 Activity | Now Covered By
Requirements review | SYS.2 BP7, SWE.1 BP7
Architecture review | SYS.3 BP8, SWE.2 BP8
Design review | SWE.3 BP7
Code review | SWE.4 BP6
Integration test review | SWE.5 BP6, SYS.4 BP7
Qualification test review | SWE.6 BP6, SYS.5 BP6

Migration Note: Organizations that had a centralized verification team under SUP.2 must redistribute verification responsibilities to each engineering process owner. The verification methods and techniques remain the same; only the organizational ownership changes. AI tools that supported centralized verification can still be used -- they simply report into each engineering process rather than a single SUP.2 process.

SUP.11 Machine Learning Data Management (New)

SUP.11 is entirely new in ASPICE 4.0 and reflects the growing use of machine learning in embedded systems (e.g., ADAS perception, predictive maintenance, anomaly detection). Its purpose is to ensure that ML data is managed with the same rigor as source code:

Aspect | Description
Purpose | Manage ML data (training, validation, test sets) throughout the lifecycle
Data Provenance | Track origin, collection method, labeling process, and versioning of all datasets
Data Quality | Define quality criteria (completeness, balance, accuracy) and validate datasets against them
Bias Detection | Identify and mitigate biases in training data that could cause safety-relevant failures
Data Versioning | Maintain baselines of datasets tied to model versions, enabling reproducibility
Privacy/Legal | Ensure compliance with data protection regulations (GDPR, etc.) for collected data

Practical Note: SUP.11 interacts heavily with MLE.1-5 (Machine Learning Engineering) and SUP.8 (Configuration Management). Organizations must decide whether to extend their existing CM infrastructure to handle large datasets or adopt specialized ML data management platforms (DVC, LakeFS, Pachyderm).


SUP Process Summary

Process | Purpose | Key Outcomes | AI Automation Level
SUP.1 | Quality assurance | Process/product compliance verified, non-conformances tracked | L1-L2
SUP.8 | Configuration management | Items identified, baselines established, changes controlled | L2-L3
SUP.9 | Problem resolution | Problems recorded, analyzed, resolved, trends monitored | L2
SUP.10 | Change request management | Changes analyzed for impact, approved, implemented, verified | L2
SUP.11 | Machine learning data management (NEW) | Data quality validated, provenance tracked, bias detected | L2-L3

Legend: L0 = No automation (human-only), L1 = AI assists (human drives), L2 = AI proposes + human approves, L3 = AI executes autonomously (within defined boundaries)


AI Integration Strategy

AI integration in support processes follows a different pattern than in engineering processes. Engineering processes require AI to understand domain semantics (embedded software architecture, safety constraints, hardware interfaces). Support processes, by contrast, are largely about pattern recognition, consistency checking, and workflow automation -- areas where AI excels even without deep domain knowledge.

AI Integration by SUP Process

SUP.1 Quality Assurance -- AI as Compliance Auditor

SUP.1 is fundamentally about checking whether processes are followed and work products meet defined standards. This is a natural fit for AI:

Activity | Traditional Approach | AI-Augmented Approach | Automation Level
Process audit | Manual checklist review by QA engineer | AI pre-screens process evidence, flags gaps, generates draft audit findings | L2
Work product review | Manual inspection against templates/checklists | AI validates structure, completeness, cross-references, and consistency | L2
Non-conformance classification | QA engineer categorizes findings | AI classifies severity, suggests corrective actions, links to similar past NCRs | L1-L2
Trend analysis | Quarterly manual analysis of NCR data | AI continuously monitors quality metrics, detects trends, and generates alerts | L2-L3
Audit scheduling | Manual planning based on project milestones | AI recommends audit timing based on process risk and change velocity | L1

HITL Requirement: AI must never autonomously close a non-conformance report. Closure requires human verification that the corrective action was effective. ASPICE assessors will look for evidence of human judgment in QA decisions.
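This closure rule can be made mechanical in tooling. The sketch below (Python; the record fields and function names are illustrative, not any real ALM tool's schema) shows a guard that refuses to close a non-conformance record without a named human verifier:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class NonConformance:
    """Illustrative NCR record; an AI may open it, only a human may close it."""
    ncr_id: str
    finding: str
    proposed_by: str = "ai-auditor"       # AI agents use an "ai-" prefix by convention
    corrective_action: Optional[str] = None
    verified_by: Optional[str] = None     # must name a human QA engineer
    status: str = "proposed"

def close_ncr(ncr: NonConformance, human_verifier: str) -> NonConformance:
    """Closure requires a corrective action and a human verifier on record."""
    if not human_verifier or human_verifier.startswith("ai-"):
        raise PermissionError("NCR closure requires a human verifier")
    if ncr.corrective_action is None:
        raise ValueError("cannot close an NCR without a corrective action")
    ncr.verified_by = human_verifier
    ncr.status = "closed"
    return ncr
```

The point is not the few lines of logic but where they sit: the accountability check lives in the workflow tool, so the audit trail always shows a human decision.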

SUP.8 Configuration Management -- AI as Baseline Guardian

Configuration management in embedded systems is complex because it spans source code, build scripts, hardware configuration files, calibration data, requirements documents, test cases, and binary artifacts. AI provides significant value in maintaining consistency across these diverse artifact types:

Activity | Traditional Approach | AI-Augmented Approach | Automation Level
Baseline integrity check | Manual verification that all items are present and consistent | AI automatically validates baseline completeness against CM plan | L3
Branch management | Manual enforcement of naming conventions and merge policies | AI-powered CI/CD gates that validate branch names, commit messages, and merge readiness | L3
Change tracking | Manual linking of commits to change requests | AI auto-links commits to CRs/PRs based on commit message parsing and code analysis | L2-L3
Configuration audit | Periodic manual comparison of delivered vs. baselined items | AI continuously compares deployed configurations against approved baselines | L2-L3
Dependency analysis | Manual tracking of third-party library versions | AI monitors dependency trees, flags vulnerabilities, and suggests updates | L2
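The change-tracking row above starts with parsing CR references out of commit messages. A minimal Python sketch, assuming a hypothetical `CR-<number>` reference convention; commits with no reference are flagged for human triage rather than guessed at:

```python
import re

# Assumed convention: commit messages reference change requests as "CR-1234".
CR_PATTERN = re.compile(r"\bCR-(\d+)\b")

def link_commits_to_crs(commits):
    """Map each CR id to the commits whose messages mention it.

    `commits` is a list of (sha, message) tuples. Returns (links, unlinked),
    where `unlinked` lists commits a human must link manually.
    """
    links, unlinked = {}, []
    for sha, message in commits:
        cr_ids = CR_PATTERN.findall(message)
        if not cr_ids:
            unlinked.append(sha)
        for cr_id in cr_ids:
            links.setdefault(f"CR-{cr_id}", []).append(sha)
    return links, unlinked
```

Real tools add code analysis on top of this (matching changed files against CR scope), but the message-parsing layer is where most auto-links originate.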

SUP.9 Problem Resolution -- AI as Root Cause Investigator

Problem resolution benefits enormously from AI's ability to correlate information across large datasets of defect reports, code changes, test results, and system logs:

Activity | Traditional Approach | AI-Augmented Approach | Automation Level
Problem classification | Engineer manually categorizes defect type and severity | AI auto-classifies based on symptom description, affected component, and historical patterns | L2
Root cause analysis | Manual investigation, often requiring multiple engineers | AI suggests probable root causes based on similar past defects, code change history, and test coverage gaps | L2
Duplicate detection | Manual search through existing problem reports | AI identifies potential duplicates using semantic similarity, reducing report clutter | L2-L3
Resolution recommendation | Engineer proposes fix based on experience | AI suggests resolution approaches based on similar past resolutions and their effectiveness | L1-L2
Trend monitoring | Periodic manual review of defect statistics | AI continuously monitors defect inflow/outflow rates, detects anomalies, and predicts future trends | L2-L3
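The duplicate-detection row can be approximated even without an embedding model. The sketch below uses token-set Jaccard similarity as a crude stand-in for semantic similarity; production tools would use embeddings, and the 0.5 threshold is an illustrative assumption:

```python
def jaccard(a: str, b: str) -> float:
    """Token-set overlap; a rough proxy for semantic similarity."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

def find_duplicates(new_report: str, existing: dict, threshold: float = 0.5):
    """Return (report_id, score) pairs for existing reports resembling the new one,
    best match first, so an engineer can confirm or reject the duplicate link."""
    scored = ((pid, jaccard(new_report, text)) for pid, text in existing.items())
    return sorted(((pid, s) for pid, s in scored if s >= threshold),
                  key=lambda x: -x[1])
```

Consistent with the L2-L3 rating, the AI only proposes candidates; merging two problem reports remains a human decision.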

SUP.10 Change Request Management -- AI as Impact Analyst

Change requests in safety-critical systems require thorough impact analysis before approval. A seemingly simple change can have cascading effects through requirements, architecture, implementation, and test artifacts:

Activity | Traditional Approach | AI-Augmented Approach | Automation Level
Impact analysis | Manual traceability walk-through | AI traces change impact through requirement-architecture-code-test chains automatically | L2
Effort estimation | Expert judgment, often inaccurate | AI estimates effort based on historical data for similar changes, adjusted for complexity factors | L1-L2
Approval routing | Static workflow rules | AI recommends reviewers based on code ownership, expertise, and availability | L2
Regression scope | Manual identification of affected test cases | AI identifies minimum regression test set based on code dependency and change impact analysis | L2-L3
Change bundling | Manual grouping of related CRs | AI identifies CRs that should be implemented together to minimize integration risk | L1-L2
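Automated impact tracing is, at its core, a graph walk over stored traceability links. A minimal breadth-first sketch in Python; the link table and artifact ids are hypothetical examples of a requirement-architecture-code-test chain:

```python
from collections import deque

# Hypothetical traceability links: artifact id -> downstream artifact ids.
TRACE = {
    "REQ-10": ["ARCH-3"],
    "ARCH-3": ["CODE-sensor.c", "CODE-filter.c"],
    "CODE-sensor.c": ["TEST-it-12", "TEST-qt-4"],
    "CODE-filter.c": ["TEST-it-13"],
}

def impact_set(changed: str) -> set:
    """Collect every artifact reachable downstream of a changed item."""
    seen, queue = set(), deque([changed])
    while queue:
        node = queue.popleft()
        for dep in TRACE.get(node, []):
            if dep not in seen:
                seen.add(dep)
                queue.append(dep)
    return seen
```

The value of AI here is not the traversal itself but recovering missing links (e.g. from code analysis) so the walk is complete; the walk then makes the impact report reproducible.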

SUP.11 ML Data Management -- AI as Data Quality Engine

SUP.11 is uniquely suited to AI augmentation because managing ML data at scale is virtually impossible without automated tooling:

Activity | Traditional Approach | AI-Augmented Approach | Automation Level
Data quality validation | Manual sampling and inspection | AI validates entire datasets against quality criteria (completeness, label accuracy, distribution) | L2-L3
Bias detection | Statistical analysis by data scientist | AI automatically detects class imbalance, demographic bias, and edge-case under-representation | L2-L3
Data versioning | File-system snapshots | AI-integrated data versioning with automatic lineage tracking (DVC, LakeFS) | L3
Annotation quality | Manual review of labels | AI cross-validates annotations using model agreement, flags inconsistent labels | L2
Data augmentation | Manual augmentation strategy design | AI recommends augmentation strategies to address identified gaps in data coverage | L1-L2
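Class imbalance, the simplest of the bias checks above, can be sketched in a few lines. The 10:1 ratio threshold is an illustrative assumption, not a normative limit; real SUP.11 tooling would also cover demographic slices and edge-case coverage:

```python
from collections import Counter

def imbalance_report(labels, max_ratio: float = 10.0):
    """Flag classes whose frequency falls more than max_ratio below the majority class.

    Returns (counts, flagged) where flagged maps each under-represented
    class to its majority-to-class ratio for human review.
    """
    counts = Counter(labels)
    majority = max(counts.values())
    flagged = {cls: majority / n for cls, n in counts.items()
               if majority / n > max_ratio}
    return counts, flagged
```

Run over an entire labeled dataset, this kind of check is what makes dataset-scale validation practical where manual sampling is not.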

AI Integration Highlights

Where AI Provides Value in SUP

Process | AI Contribution | Value | Justification
SUP.1 | Process compliance checking | High | Reduces manual audit effort by 40-60%; increases finding detection rate
SUP.8 | Baseline consistency checking | Medium | Automates repetitive checks; human judgment still needed for exception handling
SUP.9 | Root cause suggestion | High | Accelerates root cause identification from days to hours; improves first-time-fix rate
SUP.10 | Impact analysis | High | Prevents missed dependencies that cause rework; reduces change approval cycle time
SUP.11 | Data quality validation, bias detection | High | Enables dataset-scale validation that is impractical manually; critical for ML safety

SUP-Specific HITL Patterns

Pattern | SUP Application | Human Responsibility
Monitor | CI/CD pipeline monitoring | Intervene when anomalies are detected
Reviewer | QA audit assistance | Validate AI-generated findings before recording
Escalation | Problem severity assessment | Decide the escalation path for critical defects
Auditor | Compliance verification | Confirm process adherence and sign off
Approver | Change request impact analysis | Accept or reject AI-recommended changes
Decision Maker | Baseline release authorization | Authorize baselines for delivery

Critical Principle: In safety-critical development, AI operates as an assistant within support processes, never as the authority. Every QA finding, baseline release, problem closure, and change approval requires a human decision maker with documented accountability. This is not merely a best practice -- it is an ASPICE requirement for achieving Capability Level 2 and above.


Process Relationships

The following diagram shows how SUP processes relate to the primary engineering process groups (SYS, SWE, HWE, MLE), illustrating the support touchpoints at each lifecycle phase.

Support-Engineering Relationship


Process Interactions

Understanding how SUP processes interact with each other and with the primary engineering process groups is essential for designing an integrated toolchain and avoiding gaps in process coverage.

SUP-to-SUP Interactions

Support processes form an internal dependency network:

Source Process | Target Process | Interaction
SUP.1 (QA) | SUP.9 (Problem) | QA findings that require corrective action are recorded as problem reports
SUP.1 (QA) | SUP.10 (Change) | QA-mandated process improvements may require change requests
SUP.9 (Problem) | SUP.10 (Change) | Problem resolutions that require design/code changes generate change requests
SUP.10 (Change) | SUP.8 (CM) | Approved changes must be tracked through CM; new baselines created after implementation
SUP.8 (CM) | SUP.1 (QA) | CM reports (baseline status, configuration audits) are inputs to QA assessments
SUP.11 (ML Data) | SUP.8 (CM) | Dataset versions and lineage must be managed under configuration management

SUP-to-Engineering Interactions

SUP Process | SYS Interaction | SWE Interaction | HWE Interaction | MLE Interaction
SUP.1 | Audits system requirements completeness (SYS.1-2) | Audits code review compliance (SWE.4) | Audits HW design review records (HWE.1) | Audits ML model validation evidence (MLE.4)
SUP.8 | Baselines system specifications (SYS.2-3) | Baselines source code and build artifacts (SWE.3-4) | Baselines HW design files (HWE.1) | Baselines models and datasets (MLE.2-3)
SUP.9 | Tracks system-level defects (SYS.4-5) | Tracks software defects (SWE.5-6) | Tracks hardware defects (HWE.1) | Tracks ML model performance issues (MLE.4)
SUP.10 | Manages system requirement changes (SYS.1-2) | Manages code change requests (SWE.3-4) | Manages HW design changes (HWE.1) | Manages model retraining requests (MLE.3)
SUP.11 | N/A | N/A | N/A | Manages training/validation/test datasets (MLE.1-5)

SUP-to-MAN Interactions

Support processes also interact with management processes:

Interaction | Description
SUP.1 -> MAN.3 | QA reports feed project management status; NCR trends indicate process health
SUP.9 -> MAN.5 | Problem trends are inputs to risk identification; unresolved critical problems escalate as project risks
SUP.8 -> MAN.3 | Baseline status reports inform milestone readiness decisions
SUP.10 -> MAN.3 | Change request backlogs and approval cycle times affect project scheduling
SUP.1 -> MAN.6 | QA metrics are primary inputs to measurement (process performance indicators)

Integration Tip: When designing AI tooling for support processes, build the integrations between SUP processes first. An AI that can trace from a QA finding (SUP.1) through problem recording (SUP.9) to change request (SUP.10) to baseline update (SUP.8) provides far more value than individual point solutions.
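One way to make that cross-process tracing concrete is a per-finding link record that names each SUP handoff explicitly, so the AI (or a nightly job) can report which chains are incomplete. A sketch in Python; the record fields and gap messages are assumptions, not a tool schema:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class TraceChain:
    """One QA finding's path through the SUP processes."""
    qa_finding: str                     # SUP.1
    problem_report: Optional[str]       # SUP.9
    change_request: Optional[str]       # SUP.10
    baseline: Optional[str]             # SUP.8

def open_gaps(chain: TraceChain):
    """Report the first missing SUP handoff for a finding, if any."""
    gaps = []
    if chain.problem_report is None:
        gaps.append("SUP.1 -> SUP.9: no problem report recorded")
    elif chain.change_request is None:
        gaps.append("SUP.9 -> SUP.10: no change request raised")
    elif chain.baseline is None:
        gaps.append("SUP.10 -> SUP.8: change not baselined")
    return gaps
```

A dashboard of such gap reports is exactly the kind of integrated evidence that point solutions cannot produce.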


Key Work Products

WP ID | Work Product | Producer | AI Role | ASPICE 4.0 Status
15-01 | QA plan | SUP.1 | Template generation, compliance rule encoding | Unchanged
15-02 | QA report | SUP.1 | Finding analysis, trend visualization | Unchanged
08-28 | Non-conformance record | SUP.1 | Classification, corrective action suggestion | Unchanged
06-01 | CM plan | SUP.8 | Policy generation, branching strategy recommendation | Unchanged
06-02 | Baseline report | SUP.8 | Automated generation from CI/CD pipeline data | Unchanged
08-27 | Problem report | SUP.9 | Root cause suggestion, duplicate detection | Unchanged
13-19 | Problem resolution record | SUP.9 | Resolution effectiveness analysis | Unchanged
08-13 | Change request | SUP.10 | Impact analysis, effort estimation | Unchanged
13-20 | Change control record | SUP.10 | Approval workflow tracking | Unchanged
11-54 | ML data requirements | SUP.11 | Data quality rules generation | New in 4.0
03-51 | ML data analysis results | SUP.11 | Bias detection, distribution analysis | New in 4.0

Implementation Roadmap

Adopting AI in support processes should follow a phased approach. Attempting to automate all SUP processes simultaneously leads to tool sprawl, integration gaps, and organizational resistance. The roadmap below is designed for a typical embedded systems organization with an existing ASPICE Level 1-2 process framework.

Phase 1: Foundation (Months 1-3)

Objective: Establish the toolchain baseline and automate the highest-value, lowest-risk activities.

Activity | Process | Deliverable | Risk Level
Deploy AI-assisted CI/CD pipeline validation | SUP.8 | Automated commit message, branch naming, and baseline integrity checks | Low
Implement AI-powered static analysis in CI | SUP.1 (feeds QA) | MISRA compliance reports, code quality metrics generated per commit | Low
Set up centralized issue tracker with AI classification | SUP.9 | Auto-classification of problem reports by component, severity, and type | Low-Medium
Define AI usage policy for support processes | All SUP | Documented policy covering AI tool qualification, HITL requirements, and evidence retention | Low

Success Criteria: CI/CD pipeline runs on every commit with automated SUP.8 checks; problem reports are auto-classified with >80% accuracy; QA engineer effort on routine checks reduced by 20%.
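The Phase 1 commit-message and branch-naming checks can be implemented as a simple CI gate. A Python sketch, assuming hypothetical `feature/CR-<n>-<slug>` branch and `CR-<n>: <summary>` commit conventions; adapt the patterns to your own CM plan:

```python
import re

# Assumed naming conventions (examples, not a standard):
#   branch:  feature/CR-123-short-desc
#   commit:  "CR-123: imperative summary" (10-72 chars after the prefix)
BRANCH_RE = re.compile(r"^(feature|bugfix|release)/CR-\d+-[a-z0-9-]+$")
COMMIT_RE = re.compile(r"^CR-\d+: .{10,72}$")

def gate(branch: str, commit_subject: str):
    """Return a list of policy violations; an empty list means the gate passes."""
    violations = []
    if not BRANCH_RE.match(branch):
        violations.append(f"branch name violates policy: {branch}")
    if not COMMIT_RE.match(commit_subject):
        violations.append(f"commit subject violates policy: {commit_subject}")
    return violations
```

Wired into a pipeline job that fails on any violation, this enforces SUP.8 naming discipline on every push with no human effort.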

Phase 2: Intelligence (Months 4-8)

Objective: Add AI-powered analysis capabilities that augment human decision-making.

Activity | Process | Deliverable | Risk Level
Deploy AI root cause analysis for problem reports | SUP.9 | AI-suggested root causes with confidence scores; linked similar historical defects | Medium
Implement AI change impact analysis | SUP.10 | Automated traceability walk-through generating impact reports for every CR | Medium
Enable AI-powered QA audit pre-screening | SUP.1 | AI pre-audits work products before human QA review, generating gap reports | Medium
Integrate AI into code review workflow | Feeds SUP.1 | AI review findings integrated into merge request approvals | Medium
Begin ML data pipeline instrumentation | SUP.11 | Data quality metrics collected for all training datasets | Medium

Success Criteria: Mean time to root cause reduced by 40%; change impact analysis covers 90% of traceability links automatically; QA audits find 25% more issues through AI pre-screening.

Phase 3: Optimization (Months 9-14)

Objective: Close the feedback loop with predictive analytics and continuous process improvement.

Activity | Process | Deliverable | Risk Level
Deploy predictive defect trend analysis | SUP.9 | AI predicts defect inflow rates by component, enabling proactive resource allocation | Medium-High
Implement AI-driven regression test selection | SUP.10 | AI selects minimum regression test set per change request, reducing test execution time | Medium-High
Enable automated baseline consistency audits | SUP.8 | AI continuously compares deployed configurations against approved baselines and alerts on drift | Medium
Deploy ML dataset bias monitoring | SUP.11 | Continuous bias detection on training data with automated reporting | Medium
Implement cross-process AI analytics dashboard | All SUP | Unified view of QA health, CM status, problem trends, and change velocity | Medium

Success Criteria: Defect prediction accuracy >70%; regression test reduction of 30-50% without coverage loss; zero undetected configuration drift incidents.
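Coverage-based regression test selection, the core of the second Phase 3 activity, reduces to intersecting a test-to-file coverage map with the set of changed files. A minimal sketch with illustrative data; real selectors add dependency analysis so indirect impacts are not missed:

```python
# Hypothetical coverage map: test id -> source files the test exercises.
COVERAGE = {
    "T1": {"sensor.c", "filter.c"},
    "T2": {"filter.c"},
    "T3": {"display.c"},
}

def select_regression_tests(changed_files):
    """Pick only the tests that touch at least one changed file."""
    changed = set(changed_files)
    return sorted(t for t, files in COVERAGE.items() if files & changed)
```

The "without coverage loss" success criterion is the hard part: the selection is only as safe as the coverage map is complete, which is why such selectors stay at L2-L3 with human sign-off on the selected set.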

Phase 4: Maturity (Months 15+)

Objective: Achieve sustained, measured AI integration with continuous calibration.

Activity | Process | Deliverable | Risk Level
Calibrate AI models based on accumulated organizational data | All SUP | AI accuracy metrics tracked and published; models retrained quarterly | Medium
Implement AI-assisted process improvement recommendations | SUP.1/MAN.6 | AI analyzes process metrics and suggests targeted process improvements | Medium-High
Achieve tool qualification evidence for safety-critical AI tools | All SUP | Tool qualification reports per ISO 26262 Part 8 / IEC 61508 Part 3 | High
Extend AI to supplier support process monitoring | SUP.1/SUP.8 | AI monitors supplier process compliance and CM alignment | High

Key Milestone: By Phase 4, the organization should have quantitative evidence that AI integration in support processes has improved process capability indicators (defect density, cycle time, audit finding rate) and can demonstrate this to ASPICE assessors.


Common Pitfalls

Organizations integrating AI into support processes frequently encounter these issues. Each pitfall below is drawn from real-world observations in automotive and embedded systems organizations.

Pitfall 1: Automating Without Process Discipline

Symptom: AI tools are deployed before the underlying processes are defined and followed consistently.

Example: An organization deploys an AI-powered problem classification tool, but engineers do not follow a consistent format for problem report descriptions. The AI receives inconsistent inputs and produces unreliable classifications. Engineers lose trust in the tool and revert to manual classification.

Prevention: Achieve at least ASPICE Capability Level 1 (performed) for a SUP process before attempting AI augmentation. AI amplifies existing process discipline -- it cannot create it.

Pitfall 2: Treating AI Findings as Authoritative

Symptom: QA engineers accept AI-generated audit findings without human verification, leading to false positives in non-conformance reports and eroding credibility.

Example: An AI flags a requirements document as "incomplete" because it does not match the expected template structure. In reality, the document uses an approved alternative format. The QA engineer records the NCR without checking, wasting the engineering team's time on a false finding.

Prevention: Always implement AI findings as "proposed" status requiring human confirmation before they become official records. Train QA staff to critically evaluate AI suggestions.

Pitfall 3: Neglecting AI Tool Qualification

Symptom: AI tools are used in safety-critical processes without qualification evidence, creating a gap that assessors and auditors will identify.

Example: An AI-powered code review tool is used as the sole static analysis mechanism for ASIL-B software. During an ISO 26262 audit, the assessor asks for tool qualification evidence (use case analysis, validation test results, known limitations). None exists. The organization must retroactively qualify the tool or discard its results.

Prevention: From Phase 1, maintain a tool register that classifies each AI tool by its role (verification tool, development tool, support tool) and its impact on safety-relevant work products. Plan qualification activities according to ISO 26262 Part 8, Table 4.

Pitfall 4: Ignoring AI Confidence Calibration

Symptom: AI tools report confidence scores (e.g., "85% confidence this is a duplicate defect"), but nobody validates whether these scores are calibrated -- that is, whether "85% confidence" actually means the AI is correct 85% of the time.

Example: An AI root cause analyzer consistently reports 90% confidence, but post-analysis shows it is correct only 55% of the time. Engineers initially trust the high scores and skip manual verification, leading to incorrect fixes.

Prevention: Implement quarterly calibration reviews. Track AI predictions against actual outcomes. Publish calibration curves to engineering teams so they can appropriately weight AI suggestions.
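A calibration review needs little more than bucketing predictions by stated confidence and comparing against observed accuracy. A sketch of such a check; bin count and return shape are arbitrary choices for illustration:

```python
def calibration_table(predictions, n_bins: int = 5):
    """Compare stated confidence with observed accuracy per confidence bin.

    predictions: iterable of (confidence in [0, 1], was_correct) pairs.
    Returns {bin_index: (mean_confidence, observed_accuracy, count)};
    a well-calibrated tool has mean_confidence close to observed_accuracy.
    """
    bins = {}
    for conf, correct in predictions:
        b = min(int(conf * n_bins), n_bins - 1)   # clamp conf == 1.0 into last bin
        bins.setdefault(b, []).append((conf, correct))
    return {
        b: (sum(c for c, _ in rows) / len(rows),
            sum(1 for _, ok in rows if ok) / len(rows),
            len(rows))
        for b, rows in bins.items()
    }
```

In the pitfall's example, the top bin would show a mean confidence near 0.9 against an observed accuracy near 0.55, making the miscalibration visible at a glance.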

Pitfall 5: Configuration Management Gaps for AI Artifacts

Symptom: AI models, prompt templates, training data, and configuration files used by AI tools are not managed under SUP.8. When an AI tool behaves differently after an update, there is no way to trace what changed.

Example: The AI code review bot is updated to a new model version. Review findings change in character and severity. The QA team notices inconsistency in audit results but cannot identify the cause because the AI model version was not baselined.

Prevention: Treat AI tools, models, prompt templates, and configuration parameters as configuration items under SUP.8. Version them, baseline them, and track changes with the same rigor applied to source code.
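One lightweight way to baseline AI tool configuration is a content-addressed manifest checked into SUP.8 alongside each release. A Python sketch; the structure and field names are illustrative, not a defined format:

```python
import hashlib
import json

def baseline_manifest(model_version: str, prompt_templates: dict, params: dict) -> dict:
    """Build a deterministic, content-addressed record of an AI tool's configuration.

    If the tool's behavior changes after an update, comparing two manifests
    pinpoints whether the model version, a prompt, or a parameter changed.
    """
    payload = {
        "model_version": model_version,
        "prompts": {name: hashlib.sha256(text.encode()).hexdigest()
                    for name, text in sorted(prompt_templates.items())},
        "params": params,
    }
    digest = hashlib.sha256(
        json.dumps(payload, sort_keys=True).encode()).hexdigest()
    return {"digest": digest, **payload}
```

Because the digest is deterministic, two identical configurations always produce the same manifest, and any drift in the code review bot's behavior can be traced to a specific configuration delta.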

Pitfall 6: Over-Reliance on AI for Problem Trend Analysis

Symptom: Management relies on AI-generated trend reports without understanding the underlying data quality. AI identifies a "trend" that is actually an artifact of inconsistent defect categorization.

Prevention: Ensure the data feeding AI analytics is clean and consistently categorized. Cross-validate AI trend findings with manual spot checks, especially in the first 6-12 months of deployment.


Tool Ecosystem

Effective AI integration in support processes requires a coordinated tool ecosystem rather than isolated point solutions. The table below maps the primary tool categories to SUP processes, along with representative tools commonly used in embedded systems organizations.

Tool Categories by SUP Process

SUP Process | Tool Category | Representative Tools | AI Capabilities
SUP.1 | ALM / QA Management | Polarion, codebeamer, Azure DevOps, Jira | AI-assisted audit checklists, compliance dashboards
SUP.1 | Static Analysis | SonarQube, Polyspace, Klocwork, Coverity | AI-enhanced rule sets, defect pattern detection
SUP.8 | Version Control | Git (GitLab, GitHub, Bitbucket), Perforce | AI commit analysis, branch management policies
SUP.8 | CI/CD | GitLab CI, GitHub Actions, Jenkins, Azure Pipelines | AI pipeline optimization, baseline automation
SUP.8 | Artifact Management | Artifactory, Nexus, Harbor | AI-powered vulnerability scanning, dependency analysis
SUP.9 | Issue Tracking | Jira, GitLab Issues, Azure Boards, Redmine | AI classification, duplicate detection, root cause suggestion
SUP.10 | Change Management | Jira (with CR workflows), Polarion, codebeamer | AI impact analysis, approval routing
SUP.11 | ML Data Management | DVC, LakeFS, Pachyderm, Weights & Biases | Data versioning, lineage tracking, bias detection
SUP.11 | Annotation Platforms | Label Studio, CVAT, Scale AI, Labelbox | AI-assisted labeling, quality scoring

Integration Architecture Principles

When selecting and integrating tools for support processes, follow these principles:

  1. Single Source of Truth: Each work product type should have exactly one authoritative repository. Avoid duplicating problem reports across multiple systems.

  2. Bidirectional Traceability: Tools must support linking between artifacts -- change requests to code commits, problem reports to test failures, baselines to release packages. AI can automate link creation, but the links must be stored in the tools.

  3. API-First Selection: Choose tools with robust APIs that allow AI integration. Tools that only support manual web interfaces cannot be effectively augmented with AI.

  4. Evidence Exportability: All tools must support exporting evidence in formats suitable for ASPICE assessments (PDF reports, traceability matrices, audit logs). AI-generated reports must be exportable alongside human-created ones.

  5. Scalability for Embedded Artifacts: Embedded systems produce large binary artifacts (firmware images, calibration files, FPGA bitstreams). Ensure the CM toolchain handles large files efficiently (Git LFS, Perforce, or dedicated artifact repositories).

Practical Recommendation: For most embedded systems organizations starting their AI integration journey, a GitLab-centered stack provides the best balance of CM, CI/CD, issue tracking, and AI integration capabilities. GitLab Duo offers native AI features for code review, issue summarization, and pipeline optimization, reducing the need for custom AI tool development.


Chapter Sections

Section | Topic | AI Focus
Chapter 9.1 | SUP.1 Quality Assurance | Compliance checking
Chapter 9.2 | SUP.2 Verification | Review automation
Chapter 9.3 | SUP.8 Configuration Management | Version control
Chapter 9.4 | SUP.9 Problem Resolution | Root cause analysis
Chapter 9.5 | SUP.10 Change Management | Impact analysis
Chapter 9.6 | SUP Tools and Automation | Tool recommendations

Note on Chapter 9.2: Although SUP.2 has been removed from ASPICE 4.0, Chapter 9.2 is retained in this book because verification activities are still essential -- they have been redistributed into SYS and SWE processes. Chapter 9.2 covers AI-assisted review and verification techniques that apply regardless of which process owns the activity.


Prerequisites

Prerequisite | Covered In | Why It Matters
Engineering processes | Chapters 5-8 | SUP processes support SYS, SWE, HWE, and MLE -- understanding the engineering processes is essential to understanding what SUP supports
ASPICE framework | Chapter 2 | The process assessment model, capability levels, and process attributes define the context for SUP process evaluation
Automation levels | Chapter 3.1 | The L0-L3 automation level framework governs how AI is applied within each SUP process
HITL patterns | Chapter 3.2 | Human-in-the-loop patterns define the human-AI collaboration model for safety-critical support activities
V-Model alignment | Chapter 1 | Support processes operate across all V-Model phases; understanding the V-Model helps place SUP activities in context

Summary

The Support process group in ASPICE 4.0 provides the infrastructure for quality, control, and problem management across all engineering activities:

  • SUP.1 Quality Assurance: Ensures processes and work products comply with plans and standards. AI enables automated compliance checking and trend analysis.
  • SUP.8 Configuration Management: Controls and baselines all configuration items. AI automates baseline integrity, branch management, and change tracking.
  • SUP.9 Problem Resolution: Provides structured defect management from identification through resolution. AI accelerates root cause analysis and duplicate detection.
  • SUP.10 Change Request Management: Manages changes through impact analysis, approval, and verification. AI automates impact tracing and effort estimation.
  • SUP.11 ML Data Management (New): Manages machine learning data with provenance, quality, and bias controls. AI enables dataset-scale validation.

Key ASPICE 4.0 Changes: SUP.2 (Verification) removed and redistributed; SUP.11 (ML Data Management) added; outcome wording refined for measurability.

AI Integration Principle: Support processes are high-value targets for AI augmentation because they involve pattern recognition, consistency checking, and workflow automation. However, human accountability must be preserved for all decisions -- AI proposes, humans dispose.