5.6: SUP.10 Change Request Management
Process Definition
Purpose
The purpose of SUP.10 is to ensure that change requests are managed, tracked, and controlled.
Change Request Management is a cornerstone of configuration-controlled development in safety-critical embedded systems. In ASPICE 4.0, SUP.10 governs how proposed modifications to baselined work products (requirements, architecture, code, test cases, documentation) are formally recorded, analyzed for impact, approved by a competent authority, implemented under controlled conditions, and verified before closure. Without disciplined change management, traceability erodes, baselines become unreliable, and compliance evidence fragments.
AI integration elevates SUP.10 from a largely administrative burden into a proactive, intelligence-driven process. Rather than relying solely on human reviewers to mentally trace every dependency when a change request arrives, AI can instantly analyze the full traceability web, flag affected work products, estimate effort with calibrated models, and present the Change Control Board (CCB) with structured decision packages. The human retains full decision authority; the AI accelerates and deepens the analysis that supports that decision.
Outcomes
| Outcome | Description |
|---|---|
| O1 | Requests for changes are recorded and identified |
| O2 | Change requests are analyzed, dependencies identified, and impact estimated |
| O3 | Change requests are approved before implementation and prioritized accordingly |
| O4 | Bidirectional traceability is established between change requests and affected work products |
| O5 | Implementation of change requests is confirmed |
| O6 | Change requests are tracked to closure and status is communicated to affected parties |
Base Practices with AI Integration
| BP | Base Practice | AI Level | AI Application | HITL Gate |
|---|---|---|---|---|
| BP1 | Identify and record change requests | L1-L2 | Template generation, field validation, duplicate detection across existing CRs and problem reports, auto-population of metadata from linked artifacts | Human reviews and confirms CR before submission |
| BP2 | Analyze and assess change requests | L2 | Impact analysis across requirements, architecture, code, and test work products; dependency graph traversal; effort estimation using calibrated historical models; risk scoring | Human validates AI impact assessment, adjusts estimates based on tacit knowledge |
| BP3 | Approve change requests before implementation | L1 | CCB decision support packages: summarized impact, risk assessment, affected baselines, recommended priority; AI provides analysis, humans decide | CCB retains sole approval authority |
| BP4 | Establish bidirectional traceability | L2-L3 | Automated traceability matrix updates when CRs are linked; gap detection when work products lack CR linkage; orphan detection for unlinked changes | Human confirms traceability completeness at milestone reviews |
| BP5 | Confirm implementation of change requests | L2 | Verification status tracking; automated checks that all work items linked to a CR are completed; regression test pass/fail correlation | Human sign-off on implementation completeness |
| BP6 | Track change requests to closure | L2-L3 | Status monitoring dashboards; automated stakeholder notifications on state transitions; aging alerts for stale CRs; trend analysis | Human closes CR after confirming all conditions met |
| BP7 | Communicate status of change requests | L2 | Automated status reports; digest emails to affected parties; dashboard widgets showing CR pipeline health | Human reviews and approves external communications |
CCB Decision Authority: AI provides impact analysis and recommendations; the Change Control Board (CCB) retains final approval authority and may override AI recommendations based on business factors, resource constraints, or strategic considerations not captured in automated impact analysis. Human judgment is essential for balancing technical feasibility against project priorities.
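BP1's duplicate detection can be sketched with simple token-overlap similarity. A minimal sketch, assuming a small historical CR corpus and a Jaccard measure with an arbitrary 0.5 threshold (production systems would typically use embedding-based NLP similarity instead):

```python
"""Duplicate-detection sketch for BP1 (corpus, threshold, and measure are
illustrative assumptions, not a prescribed algorithm)."""

def tokenize(text: str) -> set[str]:
    # Lowercase, strip punctuation, drop very short tokens.
    cleaned = "".join(c if c.isalnum() else " " for c in text.lower())
    return {t for t in cleaned.split() if len(t) > 2}

def jaccard(a: set[str], b: set[str]) -> float:
    return len(a & b) / len(a | b) if a | b else 0.0

def find_duplicates(new_cr: str, existing: dict[str, str], threshold: float = 0.5):
    """Return (cr_id, score) pairs whose similarity exceeds the threshold."""
    new_tokens = tokenize(new_cr)
    hits = [(cr_id, jaccard(new_tokens, tokenize(text)))
            for cr_id, text in existing.items()]
    return sorted([(i, round(s, 2)) for i, s in hits if s >= threshold],
                  key=lambda p: -p[1])

corpus = {
    "CR-2025-010": "Fix GPIO timing violation at cold temperature",
    "CR-2025-011": "Update CAN bus baud rate configuration",
}
print(find_duplicates("GPIO timing violation observed at cold temperature", corpus))
# -> [('CR-2025-010', 0.71)]
```

The flagged candidates would be shown to the requester at gate G1, who confirms whether the new CR truly duplicates an existing one.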
Change Request Lifecycle
A change request progresses through a defined set of states. Each transition has explicit entry criteria, and AI assists at every stage by validating completeness, flagging anomalies, and accelerating analysis.
States and Transitions
| State | Description | AI Role | Exit Criteria |
|---|---|---|---|
| Draft | CR is being authored; fields may be incomplete | Validates required fields, suggests classification, detects duplicates | All mandatory fields populated; requester submits |
| Submitted | CR entered into formal tracking; awaiting triage | Assigns preliminary priority based on affected components and historical patterns | Triage owner assigned |
| Under Analysis | Impact analysis in progress | Generates impact report: affected requirements, code modules, test cases, baselines; estimates effort and risk | Impact analysis report reviewed by analyst |
| Analyzed | Impact analysis complete; awaiting CCB review | Prepares CCB decision package with summary, risk matrix position, and recommendation | CCB meeting scheduled or asynchronous review initiated |
| Approved | CCB has authorized implementation | Updates traceability links; generates work item skeletons for implementation, testing, and documentation tasks | Work items created and assigned |
| In Implementation | Development, testing, and documentation updates underway | Monitors work item completion; flags blocked items; tracks effort against estimate | All linked work items completed |
| Verification | Confirming that the change was correctly implemented | Checks regression test results; validates traceability matrix completeness; confirms baseline update readiness | Verification evidence recorded |
| Closed | CR fully resolved and archived | Archives CR with full audit trail; updates metrics database; triggers stakeholder notification | All conditions of approval satisfied |
| Rejected | CCB decided not to implement | Records rejection rationale; links to superseding CR if applicable | Rejection rationale documented |
| Deferred | Approved in principle but postponed to a future release or phase | Monitors deferral conditions; re-alerts when trigger conditions are met | Deferral rationale and re-evaluation date documented |
Transition Rules
Note: Backward transitions (e.g., from Approved back to Under Analysis) are permitted when new information surfaces, but must be documented with justification. AI flags any backward transition for mandatory human review.
| From | To | Trigger | AI Check |
|---|---|---|---|
| Draft | Submitted | Requester submits | Validates mandatory fields complete |
| Submitted | Under Analysis | Triage owner accepts | Confirms no exact duplicate exists |
| Under Analysis | Analyzed | Analyst completes review | Validates impact report attached |
| Analyzed | Approved | CCB approves | Records approval with attendees and conditions |
| Analyzed | Rejected | CCB rejects | Validates rejection rationale provided |
| Analyzed | Deferred | CCB defers | Validates deferral date and conditions recorded |
| Approved | In Implementation | Work items created | Confirms at least one work item linked |
| In Implementation | Verification | All work items complete | Checks all linked items in Done state |
| Verification | Closed | Verification passed | Confirms test evidence and traceability complete |
| Verification | In Implementation | Verification failed | Flags rework items; notifies assignee |
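The transition table and the backward-transition note can be combined into a single guard function. A sketch, with the state ordering assumed from the lifecycle table and the justification rule taken from the note above:

```python
"""Transition-guard sketch; state names come from the lifecycle table, the
justification handling from the backward-transition note."""

FORWARD = {
    ("Draft", "Submitted"), ("Submitted", "Under Analysis"),
    ("Under Analysis", "Analyzed"), ("Analyzed", "Approved"),
    ("Analyzed", "Rejected"), ("Analyzed", "Deferred"),
    ("Approved", "In Implementation"), ("In Implementation", "Verification"),
    ("Verification", "Closed"), ("Verification", "In Implementation"),
}
# Assumed main-path ordering used to recognize backward transitions.
ORDER = ["Draft", "Submitted", "Under Analysis", "Analyzed", "Approved",
         "In Implementation", "Verification", "Closed"]

def validate_transition(current, target, justification=None):
    if (current, target) in FORWARD:
        return True, "ok"
    # Backward transitions are permitted only with documented justification
    # and are flagged for mandatory human review.
    if current in ORDER and target in ORDER and ORDER.index(target) < ORDER.index(current):
        if justification:
            return True, "flagged for human review"
        return False, "backward transition requires documented justification"
    return False, "transition not permitted"
```

For example, `validate_transition("Approved", "Under Analysis")` fails without a justification string but succeeds, flagged for review, once one is supplied.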
AI-Powered Impact Analysis
Impact analysis is the highest-value AI application within SUP.10. When a change request arrives, the AI traverses the project's traceability graph to identify every work product that may be affected, estimates the ripple effects, and quantifies the effort and risk.
Analysis Dimensions
| Dimension | What AI Analyzes | Output |
|---|---|---|
| Requirements Impact | Which system, software, and hardware requirements reference the affected component; upstream and downstream traceability links | List of affected requirement IDs with linkage depth (direct vs. transitive) |
| Architecture Impact | Which architectural components, interfaces, and data flows are touched; inter-component coupling analysis | Affected component list with coupling strength indicator |
| Code Impact | Which source files, functions, and modules are modified or depend on modified elements; call graph and data flow analysis | Changed file list with dependency fan-out count |
| Test Impact | Which unit tests, integration tests, and system tests exercise the affected code paths; coverage mapping | Test case list requiring re-execution or update |
| Documentation Impact | Which design documents, safety analyses, and user-facing documents reference the changed elements | Document list with section-level pointers |
| Baseline Impact | Which baselines contain the affected work products; whether a baseline re-release is needed | Affected baseline IDs with re-release recommendation |
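The requirements dimension, for example, reduces to a graph traversal over traceability links. A breadth-first sketch, assuming an upstream "traces-to" mapping (the requirement IDs mirror the illustrative CR-2025-015 report in this section):

```python
"""Traceability-graph traversal sketch: splitting affected requirements into
direct and transitive hits. The link graph is an illustrative assumption."""
from collections import deque

# Upstream links: requirement -> requirements it traces to
TRACES_TO = {
    "SWE-BCM-103": ["SYS-BCM-045"],
    "SWE-BCM-120": ["SYS-BCM-045"],
    "SYS-BCM-045": [],
}

def impacted_requirements(direct_hits):
    """BFS upward from the directly affected requirements, tagging linkage depth."""
    depth = {r: 0 for r in direct_hits}
    queue = deque(direct_hits)
    while queue:
        req = queue.popleft()
        for parent in TRACES_TO.get(req, []):
            if parent not in depth:
                depth[parent] = depth[req] + 1
                queue.append(parent)
    direct = sorted(r for r, d in depth.items() if d == 0)
    transitive = sorted(r for r, d in depth.items() if d > 0)
    return direct, transitive

print(impacted_requirements(["SWE-BCM-103", "SWE-BCM-120"]))
# -> (['SWE-BCM-103', 'SWE-BCM-120'], ['SYS-BCM-045'])
```

The same traversal pattern applies to the architecture, code, and test dimensions, with coupling or fan-out recorded per edge instead of depth.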
Impact Analysis Report Structure
[Diagram: AI change impact analysis for CR-2025-015 ("Add temperature compensation for GPIO timing"), showing affected components, traceability links, and estimated effort across hardware and software domains.]
The AI generates a structured impact report that feeds directly into the CCB decision package:
```yaml
# AI-Generated Impact Analysis Report (illustrative)
impact_analysis:
  change_request: CR-2025-015
  generated_by: AI Impact Analyzer v2.1
  generation_date: 2025-01-15
  confidence_score: 0.87

  requirements_impact:
    direct:
      - id: SWE-BCM-103
        description: "GPIO drive strength configuration"
        change_needed: "Add temperature parameter to drive strength lookup"
      - id: SWE-BCM-120
        description: "Timing compliance at temperature extremes"
        change_needed: "Update acceptance criteria for -40°C operation"
    transitive:
      - id: SYS-BCM-045
        description: "System timing budget"
        change_needed: "Review system-level timing allocation"
    total_requirements_affected: 3

  architecture_impact:
    components_affected:
      - name: "GPIO Driver"
        coupling: high
        change_type: "Interface modification"
      - name: "Temperature Monitor Service"
        coupling: medium
        change_type: "New dependency"
    interface_changes: 1

  code_impact:
    files_modified:
      - path: "src/driver/gpio_driver.c"
        functions_affected: ["GPIO_SetDriveStrength", "GPIO_Init"]
        lines_estimated: 45
      - path: "src/driver/gpio_driver.h"
        functions_affected: ["GPIO_SetDriveStrength signature"]
        lines_estimated: 5
      - path: "src/service/temp_monitor.c"
        functions_affected: ["TempMonitor_GetCompensation"]
        lines_estimated: 30
    total_lines_estimated: 80

  test_impact:
    unit_tests:
      - test_gpio_drive_strength.c (modify)
      - test_temp_compensation.c (new)
    integration_tests:
      - SWE-IT-GPIO-001 (re-execute)
      - SWE-IT-BCM-003 (re-execute)
    system_tests:
      - SYS-ST-TIMING-001 (re-execute at temperature extremes)
    total_tests_affected: 5

  effort_estimate:
    implementation: 16 hours
    unit_testing: 8 hours
    integration_testing: 4 hours
    documentation: 4 hours
    total: 32 hours
    confidence: medium

  risk_assessment:
    overall_risk: low
    factors:
      - "GPIO interface change affects one downstream consumer"
      - "Temperature compensation is additive, not modifying existing logic"
      - "Existing test infrastructure covers temperature sweep"
    mitigations:
      - "Boundary value testing at -40°C, +25°C, +85°C"
      - "Regression test full GPIO suite after change"
```
Confidence Scoring
AI impact analysis includes a confidence score (0.0 to 1.0) that reflects the completeness and quality of the traceability data available. Low confidence scores signal to the CCB that manual analysis should supplement the AI output.
| Confidence Range | Interpretation | CCB Action |
|---|---|---|
| 0.85 - 1.00 | High confidence; traceability data is complete and consistent | Proceed with AI analysis as primary input |
| 0.60 - 0.84 | Medium confidence; some traceability gaps detected | Supplement with targeted manual review of flagged gaps |
| Below 0.60 | Low confidence; significant traceability gaps or stale data | Require full manual impact analysis; treat AI output as indicative only |
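The routing rules in the table reduce to a simple threshold function. A minimal sketch:

```python
"""Confidence-threshold routing sketch implementing the CCB action table."""

def ccb_action(confidence: float) -> str:
    """Map an AI confidence score to the required CCB handling."""
    if not 0.0 <= confidence <= 1.0:
        raise ValueError("confidence must be within [0.0, 1.0]")
    if confidence >= 0.85:
        return "Proceed with AI analysis as primary input"
    if confidence >= 0.60:
        return "Supplement with targeted manual review of flagged gaps"
    return "Require full manual impact analysis; treat AI output as indicative only"

print(ccb_action(0.87))
# -> Proceed with AI analysis as primary input
```

Encoding the thresholds in one place keeps the tool behavior auditable and makes later recalibration a single-line change.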
AI-Assisted Change Management
[Diagram: the complete change request lifecycle, from submission and AI-assisted impact analysis through CCB review, implementation, and verification.]
Change Classification
Consistent classification of change requests is essential for prioritization, routing, and metrics. AI assists by analyzing the change description, affected components, and historical patterns to suggest both type and severity classifications. Human reviewers confirm or override the AI suggestion.
Type Classification
| Type | Definition | Typical Trigger | AI Detection Method |
|---|---|---|---|
| Corrective | Fixes a defect or non-conformance in an existing work product | Problem report, test failure, audit finding | Linked PR detected; keywords: "fix", "defect", "failure", "incorrect" |
| Adaptive | Modifies the system to accommodate changes in the operating environment | New hardware revision, OS update, toolchain upgrade | External dependency change detected; keywords: "migrate", "upgrade", "compatibility" |
| Perfective | Improves performance, maintainability, or usability without changing functionality | Optimization request, refactoring, code quality improvement | No functional requirement change; keywords: "optimize", "refactor", "improve" |
| Preventive | Proactive change to prevent future defects or reduce risk | Risk mitigation action, technical debt reduction, safety analysis finding | Linked to risk register entry; keywords: "prevent", "proactive", "harden" |
| Emergency | Urgent change required to address a critical field issue or safety concern | Field incident, safety recall, regulatory mandate | Severity = critical; keywords: "urgent", "safety", "recall", "field issue" |
Severity Classification
| Severity | Criteria | SLA Target | AI Scoring Factors |
|---|---|---|---|
| Critical | Safety impact; regulatory non-compliance; system inoperable | 24-48 hours to CCB review | Safety-related component affected; ASIL-rated requirement impacted; field incident linked |
| High | Major functionality affected; no workaround available; customer-blocking | 3-5 business days to CCB review | Core feature impacted; multiple requirements affected; no degraded mode path |
| Medium | Functionality affected but workaround exists; moderate customer impact | 10 business days to CCB review | Limited scope; workaround documented; fewer than 5 work products affected |
| Low | Minor impact; cosmetic or documentation-only change; improvement | Next scheduled CCB review | Documentation-only change; no code impact; single work product affected |
AI Classification Workflow
- Text Analysis: AI parses the CR title, description, and justification fields using NLP to extract intent keywords and affected domain terminology.
- Artifact Linkage: AI examines linked problem reports, requirements, and risk register entries to infer type (e.g., a CR linked to a PR is likely Corrective).
- Component Analysis: AI maps the affected files or requirements to safety classification (ASIL level) and architectural tier (driver, service, application) to inform severity.
- Historical Comparison: AI compares against previously classified CRs with similar characteristics to calibrate the suggestion.
- Human Confirmation: The suggested classification is presented to the triage owner, who confirms or overrides with documented rationale.
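Steps 1, 2, and 5 of the workflow can be sketched as a keyword-scoring suggester. The keyword lists come from the type table above; the scoring scheme and confidence formula are illustrative assumptions:

```python
"""Keyword-based type-suggestion sketch (workflow steps 1, 2, and 5).
Keyword lists are from the type classification table; scoring is assumed."""

TYPE_KEYWORDS = {
    "corrective": ["fix", "defect", "failure", "incorrect"],
    "adaptive": ["migrate", "upgrade", "compatibility"],
    "perfective": ["optimize", "refactor", "improve"],
    "preventive": ["prevent", "proactive", "harden"],
    "emergency": ["urgent", "safety", "recall", "field issue"],
}

def suggest_type(description: str, linked_problem_report: bool = False):
    """Score each type by keyword hits; boost Corrective if a PR is linked."""
    text = description.lower()
    scores = {t: sum(kw in text for kw in kws) for t, kws in TYPE_KEYWORDS.items()}
    if linked_problem_report:
        scores["corrective"] += 1  # artifact-linkage hint (step 2)
    best = max(scores, key=scores.get)
    total = sum(scores.values())
    confidence = scores[best] / total if total else 0.0
    # Step 5: the triage owner confirms or overrides this suggestion.
    return best, round(confidence, 2)
```

A real deployment would replace substring matching with proper NLP, but the shape of the pipeline (score, boost by linkage, present for confirmation) stays the same.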
Traceability Through Changes
Bidirectional traceability is an ASPICE core requirement (O4) and becomes especially challenging when changes ripple across multiple work product levels. AI maintains traceability integrity throughout the change lifecycle.
Traceability Dimensions
| Level | Upstream Link | Downstream Link | AI Maintenance Action |
|---|---|---|---|
| System Requirements | Customer requirement, regulatory standard | Software requirements, hardware requirements | Detects when a system requirement change creates orphaned SW/HW requirements |
| Software Requirements | System requirement, CR | Architecture elements, detailed design | Flags requirements modified by CR that have unupdated architecture links |
| Architecture | Software requirement | Detailed design, interface specifications | Identifies interface changes that propagate to dependent components |
| Detailed Design / Code | Architecture element | Unit test cases | Alerts when code changes lack corresponding test case updates |
| Test Cases | Requirement under test | Test execution results | Detects test cases that cover changed code but have not been re-executed |
| Documentation | All levels | Baseline entries | Flags documents referencing changed elements that have not been revised |
Traceability Integrity Checks
AI performs continuous traceability integrity checks and reports anomalies:
| Check | Description | Frequency |
|---|---|---|
| Orphan Detection | Work products not linked to any CR or requirement | On every CR state transition |
| Suspect Link Detection | Links where the source has been modified but the target has not been reviewed | On every work product update |
| Coverage Gap Detection | Requirements affected by a CR that lack downstream test coverage | During impact analysis (BP2) |
| Circular Dependency Detection | CRs that reference each other creating implementation ordering conflicts | On CR creation and linkage |
| Baseline Consistency Check | Verifies that all work products in a baseline reflect the latest approved CRs | Before baseline creation |
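The circular-dependency check is ordinary cycle detection over the CR reference graph. A depth-first sketch, assuming a CR-to-dependencies mapping:

```python
"""Cycle-detection sketch for the circular-dependency check.
The CR dependency mapping format is an illustrative assumption."""

def has_cycle(depends_on: dict) -> bool:
    """Three-color DFS over a CR -> [CR] dependency mapping."""
    nodes = set(depends_on)
    for deps in depends_on.values():
        nodes.update(deps)
    WHITE, GREY, BLACK = 0, 1, 2
    color = dict.fromkeys(nodes, WHITE)

    def visit(cr):
        color[cr] = GREY
        for nxt in depends_on.get(cr, []):
            # A grey neighbor means a back-edge: an ordering conflict exists.
            if color[nxt] == GREY or (color[nxt] == WHITE and visit(nxt)):
                return True
        color[cr] = BLACK
        return False

    return any(color[n] == WHITE and visit(n) for n in nodes)

print(has_cycle({"CR-1": ["CR-2"], "CR-2": ["CR-1"]}))
# -> True
```

When a cycle is found, the tool would block linkage and ask the CRs' owners to restructure or merge the requests.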
Suspect Link Management
When a change request modifies a work product, all traceability links originating from or terminating at that work product become "suspect" until reviewed. AI manages this process:
- Automatic Flagging: When a CR moves to "In Implementation," AI marks all downstream links as suspect.
- Review Assignment: AI generates a review checklist for each suspect link, assigned to the appropriate work product owner.
- Resolution Tracking: As owners confirm or update links, AI clears suspect flags and records the review evidence.
- Completeness Gate: Before a CR can move to "Closed," AI verifies that zero suspect links remain.
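The four steps above can be sketched as a small registry. The data model (traceability links as source/target ID pairs) is an assumption for illustration:

```python
"""Suspect-link bookkeeping sketch for the flag/review/resolve/gate steps.
Link representation and evidence schema are illustrative assumptions."""

class SuspectLinkRegistry:
    def __init__(self, links):
        # links: iterable of (source_id, target_id) traceability pairs
        self.links = list(links)
        self.suspect = {}  # (source, target) -> review evidence, or None if open

    def flag_downstream(self, changed_work_product):
        """Step 1: mark every link touching the changed work product as suspect."""
        for src, tgt in self.links:
            if changed_work_product in (src, tgt):
                self.suspect[(src, tgt)] = None

    def resolve(self, src, tgt, reviewer, verdict):
        """Step 3: record review evidence and clear the suspect flag."""
        self.suspect[(src, tgt)] = {"reviewer": reviewer, "verdict": verdict}

    def closure_gate_passes(self):
        """Step 4: a CR may close only when zero unresolved suspect links remain."""
        return all(evidence is not None for evidence in self.suspect.values())
```

Step 2 (review assignment) would sit between `flag_downstream` and `resolve`, routing each open suspect link to the owning work product's reviewer.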
Change Request Template
```yaml
# Change Request (illustrative example)
change_request:
  id: CR-(year)-(number)
  title: "Add temperature compensation for GPIO timing"
  status: approved
  created: (creation date)
  requester: SW Lead
  description: |
    Implement temperature-dependent GPIO drive strength configuration
    to maintain timing compliance at cold temperatures (-40°C).
  justification: |
    Problem PR-2025-042 identified timing violation at cold temperature.
    This change addresses the root cause by compensating for temperature-
    dependent transistor switching characteristics.

  classification:
    type: corrective
    severity: medium
    ai_suggested_type: corrective
    ai_suggested_severity: medium
    ai_confidence: 0.91
    human_override: false

  related_items:
    problem: PR-2025-042
    requirements:
      - SWE-BCM-103
      - SWE-BCM-120
    risks:
      - RSK-003

  impact_analysis:
    ai_generated: true
    requirements_impact: medium
    code_impact: low
    test_impact: medium
    effort_estimate: 32 hours
    risk: low

  ccb_review:
    date: 2025-01-17
    attendees:
      - Project Manager
      - SW Lead
      - QA Lead
      - System Architect
    decision: approved
    conditions:
      - "Add tests for boundary temperatures"
      - "Update design document"

  implementation:
    assigned_to: SW Developer
    target_date: 2025-01-22
    actual_date: 2025-01-21
    work_items:
      - WI-2025-089 (Implementation)
      - WI-2025-090 (Unit tests)
      - WI-2025-091 (Documentation)

  verification:
    status: complete
    tests_passed: true
    regression_passed: true

  closure:
    date: 2025-01-22
    closed_by: SW Lead
```
Tool Integration
SUP.10 effectiveness depends on tight integration between the change management tool, version control, requirements management, and CI/CD systems. The following table maps common ALM tools to SUP.10 capabilities and describes how AI enhances each integration point.
Tool-to-Process Mapping
| Tool | SUP.10 Capability | AI Enhancement | Integration Method |
|---|---|---|---|
| Jira | CR tracking, workflow automation, dashboards | Atlassian Intelligence for field suggestions; custom AI plugins for impact analysis; JQL-based duplicate detection | REST API; webhooks for state transitions; Jira Automation rules |
| Azure DevOps | Work item tracking, boards, pipelines integration | Copilot-assisted work item creation; pipeline-triggered impact checks; AI-powered query suggestions | REST API; service hooks; Azure Pipelines integration |
| Polarion | Requirements-linked CR management, baseline-aware changes | Traceability matrix auto-update; suspect link detection; baseline impact analysis | Polarion REST API; OSLC interface; custom workflow extensions |
| codebeamer | Full ALM with CR workflow, traceability, and reporting | AI roadmap features for impact prediction; automated classification | REST API; webhook triggers; custom workflow scripts |
| GitLab Issues | Lightweight CR tracking integrated with merge requests | CI pipeline integration for impact analysis; merge request-linked CR verification | GitLab API; CI/CD pipeline hooks; custom issue templates |
Integration Architecture
A typical AI-enhanced change management integration connects the following systems:
| System | Role in SUP.10 | Data Exchanged |
|---|---|---|
| ALM / Issue Tracker | Primary CR repository; workflow engine; dashboards | CR records, state transitions, assignments, comments |
| Requirements Tool | Source of truth for requirements traceability | Requirement IDs, traceability links, suspect flags |
| Version Control (Git) | Links CRs to code changes via commit references and merge requests | Commit SHAs, branch names, merge request IDs, diff content |
| CI/CD Pipeline | Executes automated impact checks and verification | Build results, test results, coverage reports, impact analysis output |
| AI Analysis Service | Performs impact analysis, classification, and duplicate detection | Impact reports, classification suggestions, confidence scores |
| Notification Service | Distributes status updates to stakeholders | State transition alerts, CCB meeting invitations, closure notifications |
Configuration Example: Jira + GitLab + AI
```yaml
# Integration configuration (illustrative)
change_management_integration:
  cr_tracker:
    tool: jira
    project_key: BCM
    issue_type: "Change Request"
    custom_fields:
      ai_impact_score: customfield_10100
      ai_classification: customfield_10101
      ai_confidence: customfield_10102
    workflow:
      states: [Draft, Submitted, Under Analysis, Analyzed, Approved,
               In Implementation, Verification, Closed, Rejected, Deferred]
      transitions:
        - from: Submitted
          to: Under Analysis
          trigger: auto  # AI triggers analysis on submission
        - from: Under Analysis
          to: Analyzed
          trigger: ai_analysis_complete
        - from: Approved
          to: In Implementation
          trigger: manual  # Human creates work items

  version_control:
    tool: gitlab
    branch_naming: "cr/{cr_id}/{short_description}"
    commit_convention: "CR-{id}: {description}"
    merge_request_template: |
      ## Change Request
      CR: {cr_id}
      Impact Level: {ai_impact_level}
      Affected Requirements: {requirement_list}

      ## Verification Checklist
      - [ ] Unit tests updated
      - [ ] Integration tests re-executed
      - [ ] Documentation updated
      - [ ] Traceability matrix verified

  ai_service:
    endpoint: "https://ai-analysis.internal/api/v2"
    triggers:
      - event: cr_submitted
        action: run_impact_analysis
      - event: cr_work_items_complete
        action: verify_traceability_completeness
      - event: baseline_requested
        action: check_open_crs_against_baseline

  notifications:
    channels:
      - type: email
        recipients: ccb_members
        events: [cr_analyzed, cr_approved, cr_rejected]
      - type: teams_channel
        channel: "#bcm-changes"
        events: [cr_submitted, cr_closed, cr_emergency]
```
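The `ai_service.triggers` mapping in the configuration can be realized as a small event dispatcher. The handler bodies below are placeholders; in practice each would call the AI analysis service endpoint:

```python
"""Event-dispatch sketch for the ai_service triggers in the configuration.
Event and action names come from the config; handler bodies are stubs."""

def run_impact_analysis(payload):
    return f"impact analysis queued for {payload['cr_id']}"

def verify_traceability_completeness(payload):
    return f"traceability check queued for {payload['cr_id']}"

def check_open_crs_against_baseline(payload):
    return f"baseline gate check queued for {payload['baseline_id']}"

TRIGGERS = {
    "cr_submitted": run_impact_analysis,
    "cr_work_items_complete": verify_traceability_completeness,
    "baseline_requested": check_open_crs_against_baseline,
}

def handle_event(event: str, payload: dict) -> str:
    """Route an incoming webhook event to its configured action."""
    handler = TRIGGERS.get(event)
    if handler is None:
        return f"ignored unknown event: {event}"
    return handler(payload)

print(handle_event("cr_submitted", {"cr_id": "CR-2025-015"}))
# -> impact analysis queued for CR-2025-015
```

Keeping the event-to-action mapping in data rather than branching logic mirrors the YAML configuration and makes new triggers a one-line addition.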
HITL Protocol for Changes
Human-in-the-Loop (HITL) controls are non-negotiable for SUP.10 compliance. AI accelerates analysis and automates routine checks, but humans own every decision that affects baselined work products.
HITL Gate Definitions
| Gate | Stage | Human Action Required | AI Contribution |
|---|---|---|---|
| G1: Submission Review | Draft to Submitted | Requester confirms CR accuracy and completeness | Validates fields; suggests classification; warns of duplicates |
| G2: Impact Validation | Under Analysis to Analyzed | Analyst reviews AI impact report; adjusts estimates; adds domain knowledge | Generates comprehensive impact report with confidence scoring |
| G3: CCB Approval | Analyzed to Approved/Rejected/Deferred | CCB members vote on disposition; document rationale and conditions | Prepares decision package; presents risk summary; records decision |
| G4: Implementation Sign-off | In Implementation to Verification | Implementer and reviewer confirm all work items complete | Checks all linked items in Done state; flags incomplete traceability |
| G5: Verification Confirmation | Verification to Closed | QA or verification lead confirms all evidence collected and satisfactory | Validates test results against requirements; checks suspect link resolution |
| G6: Closure Authorization | Verification to Closed | CR owner or process manager authorizes formal closure | Archives full audit trail; updates metrics; notifies stakeholders |
Escalation Rules
| Condition | Escalation Action | Responsible |
|---|---|---|
| CR in "Submitted" state for more than SLA target | Auto-escalate to triage owner's manager; highlight in dashboard | AI monitoring |
| AI confidence score below 0.60 | Mandatory manual impact analysis required; AI output flagged as advisory only | Analyst + CCB |
| Emergency CR submitted | Immediate notification to CCB chair and safety manager; expedited review process activated | AI notification + CCB chair |
| CR rejected by CCB but requester contests | Escalate to project sponsor or program manager for arbitration | Process manager |
| Backward state transition requested | Mandatory justification documented; AI flags for audit trail completeness | Requester + process manager |
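The first escalation rule reduces to an aging check against the severity SLA. A minimal sketch; the day values below are assumptions (the severity table expresses targets partly in business days and per-CCB-cycle terms, which this sketch simplifies to calendar days):

```python
"""SLA aging sketch for the auto-escalation rule.
SLA_DAYS values are simplified assumptions, not the normative targets."""
from datetime import date

SLA_DAYS = {"critical": 2, "high": 5, "medium": 10, "low": 20}

def needs_escalation(submitted: date, severity: str, today: date) -> bool:
    """Escalate when a CR has sat in 'Submitted' beyond its severity SLA."""
    return (today - submitted).days > SLA_DAYS[severity]

print(needs_escalation(date(2025, 1, 2), "medium", date(2025, 1, 20)))
# -> True
```

The AI monitoring service would run this check on a schedule and raise the dashboard highlight and manager notification described in the table.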
Override Documentation
When a human overrides an AI recommendation (e.g., changes the suggested classification or rejects the impact estimate), the override must be documented:
```yaml
# Override record (illustrative)
override_record:
  cr_id: CR-2025-022
  field_overridden: severity
  ai_recommendation: medium
  human_decision: high
  rationale: |
    AI did not account for upcoming regulatory audit in Q2.
    Unresolved timing issue during audit would constitute
    a non-conformance finding. Elevating severity to ensure
    resolution before audit window.
  overridden_by: QA Lead
  date: 2025-02-10
```
Metrics and KPIs
Effective change management requires quantitative monitoring. The following metrics track both process efficiency and AI contribution quality.
Process Efficiency Metrics
| Metric | Definition | Target | Measurement Method |
|---|---|---|---|
| CR Cycle Time | Average elapsed time from CR submission to closure | < 15 business days (medium severity) | ALM tool timestamp analysis |
| CR Approval Lead Time | Average elapsed time from submission to CCB decision | < 5 business days (standard); < 2 days (high); < 1 day (critical) | State transition timestamps |
| CR Backlog Age | Number of CRs open longer than their SLA target | Zero CRs exceeding SLA by more than 50% | Dashboard aging report |
| First-Pass Approval Rate | Percentage of CRs approved without rework at CCB | > 80% | CCB decision records |
| Implementation On-Time Rate | Percentage of approved CRs implemented by target date | > 90% | Target date vs. actual date comparison |
| Rejection Rate | Percentage of submitted CRs rejected by CCB | Monitor trend (not a fixed target; high rate may indicate upstream quality issues) | CCB decision records |
| Rework Rate | Percentage of CRs that return from Verification to In Implementation | < 10% | Backward transition count |
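The CR Cycle Time metric can be computed directly from state-transition timestamps. A minimal sketch, assuming each CR's history is a dict of state name to timestamp (the schema is an assumption about the ALM export format):

```python
"""Cycle-time metric sketch from state-transition timestamps.
The per-CR history schema is an illustrative assumption."""
from datetime import datetime
from statistics import mean

def cr_cycle_time_days(transitions):
    """Average submission-to-closure time over closed CRs, in days."""
    durations = []
    for history in transitions:  # one dict of state -> timestamp per CR
        if "Submitted" in history and "Closed" in history:
            delta = history["Closed"] - history["Submitted"]
            durations.append(delta.total_seconds() / 86400)
    return round(mean(durations), 1) if durations else None

sample = [
    {"Submitted": datetime(2025, 1, 2), "Closed": datetime(2025, 1, 12)},
    {"Submitted": datetime(2025, 1, 5), "Closed": datetime(2025, 1, 21)},
]
print(cr_cycle_time_days(sample))
# -> 13.0
```

Approval lead time and backlog age follow the same pattern with different state pairs, so one timestamp extractor can feed all three efficiency metrics.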
AI Effectiveness Metrics
| Metric | Definition | Target | Measurement Method |
|---|---|---|---|
| Impact Analysis Accuracy | Percentage of AI-identified affected work products confirmed by human review | > 85% | Compare AI report to actual changes made during implementation |
| Effort Estimate Accuracy | Ratio of AI-estimated effort to actual effort | 0.8 - 1.2 (within 20% of actual) | Compare estimate to timesheet data |
| Classification Agreement Rate | Percentage of AI-suggested type/severity accepted without override | > 90% | Override record analysis |
| Duplicate Detection Precision | Percentage of AI-flagged duplicates that are true duplicates | > 80% | Human review of duplicate suggestions |
| Confidence Calibration | Correlation between AI confidence score and actual accuracy | Positive correlation (r > 0.7) | Statistical analysis of confidence vs. outcome |
| Analysis Time Reduction | Time spent on impact analysis with AI vs. baseline without AI | > 50% reduction | Time tracking comparison |
Reporting Cadence
| Report | Audience | Frequency | Content |
|---|---|---|---|
| CR Status Dashboard | Project team | Real-time (continuous) | Open CRs by state, severity, and assignee; aging alerts |
| Weekly CR Summary | Project manager, team leads | Weekly | New CRs, state transitions, closures, SLA compliance |
| CCB Meeting Package | CCB members | Per CCB meeting (weekly or bi-weekly) | CRs pending decision; impact summaries; AI recommendations |
| Monthly Change Metrics | Management, process owner | Monthly | Trend analysis; AI effectiveness metrics; process improvement recommendations |
| Milestone Traceability Report | QA, assessors | At each project milestone | CR coverage; traceability completeness; suspect link resolution status |
Work Products
| WP ID | Work Product | AI Role |
|---|---|---|
| 08-13 | Change request | Impact analysis, classification suggestion, duplicate detection |
| 13-20 | Change status report | Automated generation from ALM data |
| 13-21 | Impact analysis report | AI generation with confidence scoring |
| 08-50 | Traceability matrix (CR links) | Automated update and suspect link detection |
| 15-09 | CCB meeting minutes | AI-prepared decision package as input |
Implementation Checklist
Use this checklist when establishing or improving SUP.10 Change Request Management with AI integration.
Process Setup
- Define CR workflow states and transitions in ALM tool matching the lifecycle defined in this chapter
- Configure mandatory fields for CR creation: title, description, justification, requester, affected components
- Establish CR numbering convention (e.g., CR-YYYY-NNN) with automatic assignment
- Define type and severity classification criteria and configure in ALM tool
- Set SLA targets for each severity level and configure aging alerts
- Charter the Change Control Board: membership, quorum rules, meeting cadence, decision recording procedure
AI Integration
- Deploy AI impact analysis service connected to requirements management tool and version control
- Configure traceability data feeds so AI can traverse requirement-to-code-to-test links
- Set up duplicate detection using historical CR corpus and NLP similarity matching
- Calibrate effort estimation model using at least 20 historical CRs with actual effort data
- Define confidence score thresholds and corresponding human review requirements
- Implement automated classification suggestion with human confirmation workflow
Tool Integration
- Connect ALM tool to version control via API (branch naming, commit linking, merge request correlation)
- Configure CI/CD pipeline to trigger impact analysis on merge request creation referencing a CR
- Set up webhook-based notifications for CR state transitions to relevant stakeholders
- Create dashboard views: CR pipeline, aging report, SLA compliance, AI effectiveness metrics
- Configure baseline impact checks that prevent baseline creation with unresolved CRs
HITL and Governance
- Document HITL gates (G1-G6) in the project's change management plan
- Define escalation rules for SLA breaches, low-confidence AI outputs, and emergency CRs
- Create override documentation template and train team on when and how to override AI suggestions
- Establish CCB decision recording procedure (attendees, decision, conditions, rationale)
- Define backward transition rules and mandatory justification requirements
Metrics and Continuous Improvement
- Instrument ALM tool to capture all metrics defined in the Metrics and KPIs section
- Schedule monthly metrics review with process owner
- Establish AI effectiveness baseline during first quarter of operation
- Define improvement targets for AI accuracy and process efficiency after baseline period
- Plan quarterly retrospective on change management process effectiveness
Summary
SUP.10 Change Request Management:
- AI Level: L2 (AI analysis, human approval)
- Primary AI Value: Impact analysis, effort estimation, classification, traceability maintenance, duplicate detection
- Human Essential: CCB decision authority, implementation sign-off, verification confirmation, override decisions
- Key Outputs: CR records, impact analysis reports, traceability updates, CCB decision packages, change metrics
- Integration: Connects ALM, requirements management, version control, CI/CD, and notification systems
- HITL Protocol: Six defined gates (G1-G6) ensuring human authority at every critical decision point