Appendix F: Checklist Library
This appendix provides ready-to-use checklists for key ASPICE processes. Print them or integrate them into your review tools to ensure consistent quality across your team.
Code Review Checklist (SUP.2)
Severity Classification: Critical (C) = Must fix before merge | Major (M) = Should fix before release | Minor (m) = Nice-to-have
General
- (C) Code compiles without warnings (-Wall -Wextra -Werror)
- (M) All functions have Doxygen headers
- (m) Naming follows project conventions
- (m) No TODO/FIXME without tickets
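Where the checklist asks for warning-free builds and Doxygen headers, a compliant function might look like the following sketch (the function, bounds, and requirement ID are illustrative):

```c
#include <stdint.h>

#define SENSOR_MIN (100U)   /* calibrated lower bound (illustrative) */
#define SENSOR_MAX (4000U)  /* calibrated upper bound (illustrative) */

/**
 * @brief      Clamps a raw 12-bit ADC reading to the calibrated range.
 * @param[in]  raw_value  Raw ADC reading (0..4095).
 * @return     The reading clamped to [SENSOR_MIN, SENSOR_MAX].
 * @implements SWE-101-3
 */
uint16_t SENSOR_ClampReading(uint16_t raw_value)
{
    uint16_t result = raw_value;
    if (result < SENSOR_MIN) { result = SENSOR_MIN; }
    if (result > SENSOR_MAX) { result = SENSOR_MAX; }
    return result;
}
```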
MISRA C:2012
- (C) PC-lint Plus shows 0 violations (Required rules)
- (C) All Required rules satisfied
- (M) Deviations documented with rationale
- (C) No forbidden functions (malloc, strcpy, sprintf)
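Where strcpy is forbidden, a bounded copy that always NUL-terminates is the usual substitute. A minimal sketch (buffer length and truncation policy are project decisions):

```c
#include <stddef.h>
#include <string.h>

/* Bounded copy: truncates rather than overrunning the destination,
 * and always NUL-terminates (unlike a raw strncpy). */
static void safe_copy(char *dest, size_t dest_len, const char *src)
{
    if ((dest != NULL) && (src != NULL) && (dest_len > 0U)) {
        (void)strncpy(dest, src, dest_len - 1U);
        dest[dest_len - 1U] = '\0';
    }
}
```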
Safety
- (C) All pointers checked for NULL
- (C) Error paths tested
- (C) No dynamic memory allocation
- (M) Watchdog refresh present in main loop
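A minimal sketch of the NULL-check and watchdog-refresh items (the HAL functions are hypothetical placeholders, not a specific vendor API):

```c
#include <stdint.h>
#include <stddef.h>
#include <stdbool.h>

extern void WDG_Refresh(void);                            /* hypothetical watchdog HAL */
extern bool ADC_ReadChannel(uint8_t ch, uint16_t *out);   /* hypothetical ADC HAL */
extern void CTRL_Step(uint16_t value);                    /* hypothetical control step */

bool INPUT_Read(uint16_t *out_value)
{
    if (out_value == NULL) {       /* every pointer checked for NULL */
        return false;
    }
    return ADC_ReadChannel(0U, out_value);
}

void MAIN_Loop(void)
{
    uint16_t value = 0U;
    for (;;) {
        /* The error path is explicit and testable: a failed read
         * skips the control step instead of acting on stale data. */
        if (INPUT_Read(&value)) {
            CTRL_Step(value);
        }
        WDG_Refresh();             /* watchdog refresh once per iteration */
    }
}
```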
Testing
- (C) Unit tests pass (100%)
- (C) Coverage target met (see ASIL variants below)
- (M) All test cases have @verifies tags
Coverage Targets by ASIL:
- ASIL-A: Statement 100%, Branch 100%
- ASIL-B: Statement 100%, Branch 100%
- ASIL-C: Statement 100%, Branch 100%, MC/DC recommended
- ASIL-D: Statement 100%, Branch 100%, MC/DC 100%
@verifies Tag Validation: Use scripts/check_traceability.py (see Appendix A.9) to automate verification that all @implements tags have corresponding @verifies tags in test code.
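In a Unity-based suite, the @verifies tag is typically carried in the test's comment header, where the script can find it. A sketch, reusing the sensor-clamp example from the General section above:

```c
#include "unity.h"
#include "sensor.h"   /* hypothetical header for SENSOR_ClampReading */

/**
 * @verifies SWE-101-3
 */
void test_SENSOR_ClampReading_BelowMin_ReturnsMin(void)
{
    TEST_ASSERT_EQUAL_UINT16(SENSOR_MIN, SENSOR_ClampReading(0U));
}
```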
Requirements Review Checklist (SWE.1)
Severity Classification: Critical (C) = Must fix before baseline | Major (M) = Should fix before design | Minor (m) = Nice-to-have
- (C) Each requirement has unique ID (format: SWE-XXX-Y)
- (C) Requirements are testable (measurable criteria exist)
- (M) No "shall not" requirements (rephrase positively)
- (M) Priority assigned (High/Medium/Low)
- (C) Acceptance criteria defined
- (C) Traceability to system requirements (SYS-XXX)
- (C) Safety requirements identified (ASIL tag present)
- (M) Interface requirements complete (CAN/SPI/I2C specs)
Architecture Review Checklist (SWE.2)
Severity Classification: Critical (C) = Must fix before detailed design | Major (M) = Should fix before coding | Minor (m) = Nice-to-have
- (C) Architecture documented (UML/C4 diagrams)
- (C) Component responsibilities clear and non-overlapping
- (C) Interfaces defined (APIs, data flows)
- (M) ADRs created for key decisions (see ADR Template)
- (C) Safety architecture reviewed (freedom from interference)
- (M) Performance analysis done (WCET, memory budget)
- (M) AUTOSAR compliance verified (if applicable)
Integration Test Checklist (SWE.5)
Severity Classification: Critical (C) = Must fix before system test | Major (M) = Should fix before release | Minor (m) = Nice-to-have
- (C) Integration strategy defined (bottom-up/top-down/sandwich)
- (C) Interface tests specified for each component boundary
- (M) Integration order documented
- (M) Stub/mock strategy defined
- (C) Test environment ready (HIL/SIL)
- (C) Regression tests pass (100%)
- (C) Defects logged and tracked (severity assigned)
Release Checklist (MAN.3)
Severity Classification: Critical (C) = Release blocker | Major (M) = Should fix, may defer with waiver | Minor (m) = Nice-to-have
- (C) All requirements implemented (traceability 100%)
- (C) All tests pass (100%)
- (C) Code coverage meets target (see ASIL variants above)
- (C) MISRA violations = 0 (Required rules)
- (M) Documentation updated
- (M) Release notes written (see Release Notes Template)
- (C) Version tag created (semantic versioning)
- (C) Safety assessment complete
- (C) Regulatory approval obtained (if required)
Detailed Design Review Checklist (SWE.3)
Severity Classification: Critical (C) = Must fix before coding | Major (M) = Should fix before unit test | Minor (m) = Nice-to-have
Module Design
- (C) Each module has a single, stated responsibility (single responsibility principle)
- (C) Module interface (public API) documented before implementation begins
- (C) Cyclomatic complexity per function does not exceed project threshold (typically 10 for ASIL-A/B, 6 for ASIL-C/D)
- (M) Function length does not exceed 60 lines (excluding Doxygen block)
- (M) No function has more than 5 parameters; group related parameters into a struct (see the sketch after this list)
- (m) Design matches the software architecture decomposition from SWE.2
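A sketch of the parameter-grouping item (all names illustrative):

```c
#include <stdint.h>

/* Grouping related parameters in a struct keeps the signature small
 * and makes call sites self-documenting. */
typedef struct
{
    uint16_t target_rpm;
    uint16_t ramp_rate;   /* rpm per control cycle */
    uint8_t  direction;   /* 0 = forward, 1 = reverse */
} MOTCTRL_SpeedCmd_t;

int32_t MOTCTRL_SetSpeed(const MOTCTRL_SpeedCmd_t *cmd);
```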
Data Structures
- (C) All struct members have explicit type widths (uint8_t, int32_t, not int or char)
- (C) Bitfield usage documented with target compiler and byte-order rationale
- (C) No unbounded arrays; all arrays have compile-time or explicitly bounded sizes
- (M) Shared data structures accessed via accessor functions, not direct struct member access
- (M) Volatile qualifier applied to all hardware-mapped and ISR-shared variables
- (C) No global mutable state unless justified and documented with concurrency analysis
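A short sketch combining the explicit-width, volatile, and accessor items (illustrative module; on targets where a 16-bit read is not atomic, the accessor would also need interrupt masking):

```c
#include <stdint.h>

/* Explicit type widths throughout -- no plain int or char. */
typedef struct
{
    uint8_t  state;
    uint16_t error_count;
    int32_t  last_offset;
} DIAG_Status_t;

/* ISR-shared data: volatile, file-scope, reached only via accessors. */
static volatile DIAG_Status_t diag_status;

uint16_t DIAG_GetErrorCount(void)
{
    return diag_status.error_count;  /* single, reviewable access point */
}
```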
Algorithms
- (C) Algorithm pseudocode or flowchart present in design document before coding
- (C) WCET (Worst Case Execution Time) estimate provided for all time-critical paths
- (M) Floating-point usage justified; fixed-point alternative considered for MCU targets without FPU
- (C) Division operations guarded against divide-by-zero at the design level
- (M) Lookup tables preferred over runtime computation for deterministic timing where applicable
- (m) Algorithm reference (paper, standard, or prior art) cited in design document
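A design-level divide-by-zero guard can be as small as the sketch below; the point is that the fallback value is chosen and documented in the design, not improvised during code review (names and the saturation choice are illustrative):

```c
#include <stdint.h>

static uint32_t scale_ratio(uint32_t numerator, uint32_t denominator)
{
    uint32_t result;
    if (denominator == 0U) {
        result = UINT32_MAX;   /* saturate: safe fallback fixed at design time */
    } else {
        result = numerator / denominator;
    }
    return result;
}
```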
MISRA C:2012 Design-Level Compliance
- (C) No dynamic memory allocation planned (Rule 21.3 — use of malloc/free)
- (C) No recursion planned (Rule 17.2 — functions shall not call themselves)
- (C) No function pointer assignments that bypass static call graph analysis
- (M) All planned deviations from MISRA Required rules have deviation records prepared
- (M) Use of standard library functions reviewed against permitted/forbidden list (Rule 21.x)
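Where dynamic allocation is banned, a fixed static pool is the usual substitute. A minimal sketch (block size and count are illustrative; a production pool would add concurrency protection and usage diagnostics):

```c
#include <stdint.h>
#include <stddef.h>

#define POOL_BLOCKS (8U)

typedef struct { uint8_t data[32]; } PoolBlock_t;

static PoolBlock_t pool[POOL_BLOCKS];       /* all storage reserved at build time */
static uint8_t     pool_used[POOL_BLOCKS];

PoolBlock_t *POOL_Acquire(void)
{
    for (size_t i = 0U; i < (size_t)POOL_BLOCKS; i++) {
        if (pool_used[i] == 0U) {
            pool_used[i] = 1U;
            return &pool[i];
        }
    }
    return NULL;   /* pool exhausted -- caller must handle explicitly */
}

void POOL_Release(PoolBlock_t *block)
{
    if (block != NULL) {
        /* Assumes block was obtained from POOL_Acquire(). */
        pool_used[(size_t)(block - pool)] = 0U;
    }
}
```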
Naming Conventions
- (C) Module prefix applied to all public symbols (e.g., MOTCTRL_ for the motor controller module)
- (C) Type definitions follow project convention (e.g., _t suffix for typedefs)
- (M) Enumeration values prefixed with enum name (e.g., STATE_INIT, STATE_RUN)
- (M) Constants defined via #define or const with ALL_CAPS naming
- (m) Boolean variables named as predicates (is_ready, has_error, can_proceed)
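Taken together, the conventions above produce declarations like this sketch (all names illustrative):

```c
#include <stdint.h>
#include <stdbool.h>

#define MOTCTRL_MAX_RPM (6000U)   /* ALL_CAPS constant */

typedef enum                      /* prefixed enumeration values */
{
    STATE_INIT = 0,
    STATE_RUN,
    STATE_FAULT
} MOTCTRL_State_t;                /* _t typedef suffix */

bool MOTCTRL_IsReady(void);       /* module prefix; predicate-style name */
```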
Traceability
- (C) Each design element traces to one or more software requirements (SWE-XXX)
- (C) Safety-relevant design elements tagged with ASIL and linked to safety requirements
- (M) Design document version recorded in the configuration management system
Unit Test Checklist (SWE.4)
Severity Classification: Critical (C) = Must fix before integration | Major (M) = Should fix before coverage sign-off | Minor (m) = Nice-to-have
Test Plan
- (C) Unit test plan document exists and is baselined before test execution begins
- (C) Test scope defined: all public functions in module under test are listed
- (C) Test environment specified (native host, QEMU, target MCU with JTAG)
- (M) Test schedule and responsible engineer recorded
- (M) Test tool versions recorded (Unity/CppUTest/GoogleTest version, compiler version)
Coverage Targets
- (C) Statement coverage target defined per ASIL (ASIL-A/B: 100%, ASIL-C: 100%, ASIL-D: 100%)
- (C) Branch coverage target defined per ASIL (ASIL-A/B: 100%, ASIL-C: 100%, ASIL-D: 100%)
- (C) MC/DC coverage target defined for ASIL-C (recommended) and ASIL-D (required per ISO 26262-6)
- (C) Coverage report generated by approved tool (gcov, LDRA, Tessy, VectorCAST)
- (C) Coverage gaps explained: unreachable code either removed or deviation-documented
- (M) Coverage report archived as a configuration item in the CM system
Mock and Stub Strategy
- (C) Mocking strategy documented: which dependencies are mocked vs. real
- (C) Hardware abstraction layer (HAL) mocked for host-based execution
- (C) All mock expectations asserted (not just injected return values)
- (M) Mock implementation reviewed for correctness against real HAL behavior
- (m) Mock generation tool identified (CMock, FFF, manual) and version recorded
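A hand-written mock that asserts expectations, not just injects return values, might look like this sketch (Unity-based; HAL_CAN_Send and the module under test are hypothetical):

```c
#include <stdint.h>
#include <stdbool.h>
#include "unity.h"

extern void MOTCTRL_SendHeartbeat(void);   /* function under test (illustrative) */

/* Mock state: records how the code under test used the HAL. */
static uint32_t mock_can_send_calls;
static uint16_t mock_can_send_last_id;
static bool     mock_can_send_retval;

/* Hand-written mock replacing the real HAL at link time. */
bool HAL_CAN_Send(uint16_t msg_id, const uint8_t *data, uint8_t len)
{
    (void)data;
    (void)len;
    mock_can_send_calls++;
    mock_can_send_last_id = msg_id;
    return mock_can_send_retval;
}

void test_MOTCTRL_SendHeartbeat_Nominal_SendsExpectedId(void)
{
    mock_can_send_calls  = 0U;
    mock_can_send_retval = true;

    MOTCTRL_SendHeartbeat();

    /* Assert the expectation, not just the injected return value. */
    TEST_ASSERT_EQUAL_UINT32(1U, mock_can_send_calls);
    TEST_ASSERT_EQUAL_UINT16(0x123U, mock_can_send_last_id);
}
```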
Test Naming and Structure
- (C) Test names follow convention: test_<function>_<condition>_<expectedResult> (e.g., test_MOTCTRL_SetSpeed_NullPtr_ReturnsError)
- (C) Each test follows Arrange-Act-Assert (AAA) structure (see the sketch after this list)
- (M) No test case asserts more than one behavior (one logical assertion per test)
- (M) Test file mirrors source file structure (one test file per module)
- (m) Test setup and teardown functions used for common fixture initialization
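An AAA-structured test following the naming convention above (the module reuses the earlier MOTCTRL sketch; MOTCTRL_ERR_NULL_PARAM is an assumed error code):

```c
#include <stddef.h>
#include "unity.h"
#include "motctrl.h"   /* hypothetical header for MOTCTRL_SetSpeed */

void test_MOTCTRL_SetSpeed_NullPtr_ReturnsError(void)
{
    /* Arrange */
    const MOTCTRL_SpeedCmd_t *cmd = NULL;

    /* Act */
    int32_t result = MOTCTRL_SetSpeed(cmd);

    /* Assert -- one logical assertion per test */
    TEST_ASSERT_EQUAL_INT32(MOTCTRL_ERR_NULL_PARAM, result);
}
```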
Boundary Conditions and Equivalence Partitioning
- (C) Minimum valid input tested for every parameter
- (C) Maximum valid input tested for every parameter
- (C) Below-minimum (underflow/underrange) input tested
- (C) Above-maximum (overflow/overrange) input tested
- (C) NULL pointer inputs tested for all pointer parameters
- (M) Empty container inputs tested (zero-length arrays, empty queues)
- (M) Nominal (mid-range) inputs tested as positive cases
- (m) Integer overflow at type boundary tested (e.g., UINT32_MAX + 1 scenario)
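Boundary tests for the sensor-clamp example sketched earlier in this appendix might look like this (SENSOR_MIN and SENSOR_MAX are the illustrative bounds from that sketch):

```c
#include "unity.h"
#include "sensor.h"   /* hypothetical header for SENSOR_ClampReading */

void test_SENSOR_ClampReading_MinValid_ReturnsMin(void)   /* lower bound */
{
    TEST_ASSERT_EQUAL_UINT16(SENSOR_MIN, SENSOR_ClampReading(SENSOR_MIN));
}

void test_SENSOR_ClampReading_MaxValid_ReturnsMax(void)   /* upper bound */
{
    TEST_ASSERT_EQUAL_UINT16(SENSOR_MAX, SENSOR_ClampReading(SENSOR_MAX));
}

void test_SENSOR_ClampReading_AboveMax_ReturnsMax(void)   /* overrange */
{
    TEST_ASSERT_EQUAL_UINT16(SENSOR_MAX, SENSOR_ClampReading(4095U));
}
```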
Traceability
- (C) Every test case carries a @verifies tag linking to a software requirement (SWE-XXX) or design element
- (C) Traceability matrix (requirement-to-test) generated and reviewed
- (M) Safety-relevant test cases tagged with ASIL and linked to safety requirements
System Test Checklist (SYS.5)
Severity Classification: Critical (C) = Must fix before customer delivery | Major (M) = Should fix before release candidate | Minor (m) = Nice-to-have
System Requirements Verification
- (C) System test plan baselined and reviewed before test execution
- (C) Every system requirement (SYS-XXX) has at least one corresponding system test case
- (C) Traceability matrix (SYS requirement to system test) complete with no gaps
- (C) All Critical and High priority requirements covered by test cases marked Critical
- (M) Test case review conducted by engineer not involved in requirement authoring
Environmental Tests
- (C) Operating temperature range validated (cold start, high-temp soak per system spec)
- (C) Supply voltage variation tested (undervoltage, overvoltage, ripple per hardware spec)
- (M) Vibration and shock profile validated (if mechanical environment is specified)
- (M) EMC pre-compliance test results reviewed (radiated emissions, conducted immunity)
- (m) Humidity and condensation tests performed where environment requires
Performance Tests
- (C) CPU load measured under peak operational scenario; does not exceed budget (typically 70% sustained)
- (C) RAM and flash utilization within allocation with documented margin (minimum 20% headroom recommended)
- (C) Response time requirements verified under nominal load (e.g., CAN message latency, interrupt response)
- (M) Boot time measured from power-on to operational state; meets system requirement
- (M) Communication bus utilization measured (CAN load factor, Ethernet bandwidth)
- (m) Power consumption measured in all operating modes (active, standby, sleep)
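One common way to obtain the CPU load figure is an idle-counter measurement: a background loop increments a counter, and a periodic interrupt converts the count into a load percentage. A sketch, assuming a 1 s timer interrupt and a calibrated idle rate (all names and the calibration constant are illustrative):

```c
#include <stdint.h>

#define IDLE_COUNTS_PER_SEC (1000000UL)  /* calibrated on the unloaded target */

static volatile uint32_t idle_counter;
static volatile uint8_t  cpu_load_percent;

void IDLE_BackgroundLoop(void)           /* lowest-priority context */
{
    for (;;) {
        idle_counter++;
    }
}

void TIMER_1sISR(void)                   /* fires once per second */
{
    uint32_t counts = idle_counter;
    idle_counter = 0U;
    if (counts > IDLE_COUNTS_PER_SEC) {
        counts = IDLE_COUNTS_PER_SEC;    /* clamp calibration drift */
    }
    /* Load = 100% minus the fraction of the second spent idle. */
    cpu_load_percent = (uint8_t)(100UL - ((counts * 100UL) / IDLE_COUNTS_PER_SEC));
}
```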
Stress and Robustness Tests
- (C) Long-duration soak test executed (minimum 24 hours under representative load; duration per system requirement)
- (C) Watchdog recovery tested: intentional software hang results in correct reset and safe-state entry
- (C) Power-cycle robustness tested (minimum 100 power cycles with correct NVM persistence)
- (C) Communication fault injection tested: bus-off, message loss, out-of-range signal values
- (M) Memory corruption detection tested (stack overflow, heap exhaustion if applicable)
- (M) Boundary scan or GPIO loopback confirms hardware wiring under test
Regression
- (C) Full regression test suite executed against release candidate build
- (C) All defects from previous test cycles verified as resolved or deferred with documented rationale
- (M) Automated regression results archived with build hash for traceability
Configuration Management Checklist (SUP.8)
Severity Classification: Critical (C) = Non-conformance blocks release | Major (M) = Should fix before next baseline | Minor (m) = Process improvement
Configuration Item Identification
- (C) Configuration item (CI) list is defined and approved (source code, requirements, design docs, test reports, tool binaries, build scripts)
- (C) Each CI has a unique identifier and owner recorded in the CM plan
- (C) Third-party libraries and open-source components listed as CIs with version and license
- (M) Hardware CIs (schematics, BOM, Gerber files) are managed in the same system or explicitly linked
- (m) CI identification scheme documented and consistently applied across all projects
Version Control
- (C) All source code resides in the designated version control system (Git)
- (C) No CI exists only on a local developer machine; all work pushed to central repository
- (C) Commit messages follow project convention (format: [TICKET-ID] verb: description)
- (M) Binary artifacts (compiled objects, PDFs) stored in artifact repository (Artifactory, Nexus), not in Git
- (M) .gitignore configured to exclude build outputs, secrets, and IDE artifacts
- (m) Signed commits enabled for safety-critical repositories where non-repudiation is required
Baseline Management
- (C) Functional baseline established after requirements review and approval
- (C) Allocated baseline established after architecture review and approval
- (C) Product baseline established at release and tagged in version control with semantic version tag
- (C) Baseline contents are immutable; changes require a new baseline via change request (SUP.10)
- (M) Baseline audit conducted: actual repository contents verified against baseline record
- (M) Baseline creation and approval recorded in the CM log with timestamp and approver identity
Branching Strategy
- (C) Branching model documented (GitFlow, trunk-based, or project-specific variant)
- (C) Main/master branch is protected: direct push prohibited, pull request required
- (C) Feature branches merged only after passing CI pipeline (build + static analysis + unit tests)
- (M) Release branches created from main for each release candidate; hotfixes applied to release branch and cherry-picked to main
- (M) Branch naming convention enforced (e.g., feature/TICKET-ID-short-description, release/v1.2.0)
- (m) Stale branches (no activity > 30 days) reviewed and cleaned up regularly
CI Pipeline Integrity
- (C) CI pipeline configuration (Jenkinsfile, .gitlab-ci.yml) is itself version-controlled as a CI
- (C) Build reproducibility verified: same source tag produces byte-identical output (compiler version pinned)
- (M) CI pipeline execution logs archived per build for audit purposes
- (M) Tool versions used in CI (compiler, static analyser, test runner) pinned and documented
Change Request Checklist (SUP.10)
Severity Classification: Critical (C) = Change cannot proceed | Major (M) = Should be complete before implementation | Minor (m) = Best practice
Change Request Initiation
- (C) Change request (CR) raised in the designated change management tool (Jira, Polarion ALM, etc.)
- (C) CR includes: problem description, affected CIs, originator, date, and priority
- (C) CR linked to originating defect report, customer request, or requirements change
- (M) CR classification assigned (corrective, adaptive, perfective, preventive)
- (m) CR categorised by affected domain (software, hardware, documentation, process)
Impact Analysis
- (C) Impact analysis completed and recorded against the CR before approval
- (C) Affected CIs (source modules, requirements, test cases, design documents) explicitly listed
- (C) Safety impact assessed: does the change affect any ASIL-rated component or safety requirement?
- (C) If safety impact exists: re-assessment scope defined (partial HARA update, safety concept re-review, or full re-analysis)
- (M) Effort estimate provided (development, review, test)
- (M) Schedule impact assessed and communicated to project manager
- (m) Alternatives considered and rationale for chosen approach documented
Approval Workflow
- (C) Change Control Board (CCB) review scheduled for all changes to baselined CIs
- (C) CCB decision (approve / reject / defer) recorded with rationale and timestamp
- (C) Safety-impacting changes require sign-off from the functional safety manager or delegate
- (M) Emergency change procedure followed for critical defect fixes with post-hoc CCB ratification recorded
- (m) Change priority reviewed against project backlog at CCB meeting
Implementation Tracking
- (C) Implementation branch created from the correct baseline (not an unrelated branch tip)
- (C) All affected CIs updated (requirements, design, code, tests) — not code alone
- (C) Implementation work items linked back to the approved CR in the change management tool
- (M) Code review (SUP.2) conducted for changed modules using the Code Review Checklist above
- (M) Changed requirements re-reviewed using the Requirements Review Checklist above
Regression Testing
- (C) Regression test scope defined based on impact analysis (full suite or targeted subset)
- (C) Regression test results pass at 100% before merging implementation branch
- (C) If regression scope was reduced from full suite: rationale documented and approved by test manager
- (M) New or modified test cases added to cover the changed behavior
- (M) Coverage targets re-verified for changed modules
Closure
- (C) CR status updated to Closed with reference to implementing commit hash and new baseline
- (C) Verification evidence (test report, review record) attached to the CR record
- (M) Originator notified of resolution and outcome
Safety Assessment Checklist (ISO 26262)
Severity Classification: Critical (C) = Safety case gap — release not permitted | Major (M) = Should resolve before safety sign-off | Minor (m) = Improvement to safety argument
Hazard Analysis and Risk Assessment (HARA) — ISO 26262-3
- (C) HARA conducted with cross-functional team (systems, safety, software, hardware, application)
- (C) All operational situations and hazardous events identified and documented
- (C) Severity (S), Exposure (E), and Controllability (C) ratings assigned and justified for each hazardous event
- (C) ASIL determined for each safety goal per ISO 26262-3 Table 4 (or QM if no safety goal applies)
- (C) Safety goals defined with ASIL and safe state specified for each
- (M) HARA reviewed and approved by the functional safety manager
- (m) HARA assumptions about driver behavior and operational design domain (ODD) explicitly stated
ASIL Decomposition — ISO 26262-9
- (C) ASIL decomposition documented with rationale when used to partition safety requirements
- (C) Independence requirement between decomposed channels verified (spatial and temporal)
- (C) Common cause failures between redundant channels analyzed and mitigated
- (M) Decomposition notation follows ISO 26262-9 convention (e.g., ASIL-D → ASIL-B(D) + ASIL-B(D))
- (m) Decomposition alternatives considered and preferred option justified
Functional Safety Concept — ISO 26262-3
- (C) Functional safety requirements (FSRs) derived from each safety goal with ASIL inherited
- (C) FSRs are technology-independent and implementation-free at this level
- (C) Safe states and transition conditions defined for each FSR
- (C) Fault tolerance time interval (FTTI) and diagnostic test interval (DTI) defined where required
- (M) FSRs reviewed for completeness: all safety goals covered, no orphaned FSRs
- (M) FSRs baselined in requirements management tool with ASIL tags
Technical Safety Concept — ISO 26262-4
- (C) Technical safety requirements (TSRs) derived from FSRs, allocated to hardware and software elements
- (C) Hardware-software interface (HSI) definition complete: all safety-relevant signals documented
- (C) Safety mechanisms identified (e.g., CRC, watchdog, plausibility checks, redundancy)
- (C) Diagnostic coverage (DC) estimated for each safety mechanism per ISO 26262-5/6
- (M) TSRs reviewed against system architecture to confirm allocations are implementable
Software Safety Requirements — ISO 26262-6
- (C) Software safety requirements (SSRs) derived from TSRs allocated to software
- (C) ASIL inherited correctly from TSR to SSR; no unauthorized ASIL reduction without documented decomposition
- (C) Freedom from interference (FFI) analysis conducted between ASIL and QM software partitions
- (C) Software tool confidence level (TCL) assessed for all development tools per ISO 26262-8 Clause 11
- (M) Software safety requirements traceable to software design elements (SWE.2/SWE.3)
Verification and Validation
- (C) Verification plan covers all ASIL-rated requirements with method, scope, and pass criteria
- (C) Safety validation plan demonstrates that the top-level safety goals are met in the item's operational context
- (C) Regression strategy defined for ASIL components: any change triggers defined re-verification scope
- (C) Independent review conducted for ASIL-C and ASIL-D components (independence per ISO 26262-2 Clause 6)
- (M) Safety analysis (FMEA, FTA) performed and linked to safety requirements; all unmitigated failure modes resolved
- (M) Functional safety assessment (FSA) scheduled and assessor independence confirmed
Safety Case
- (C) Safety case document exists and is structured as a claim-argument-evidence hierarchy
- (C) All safety goals have supporting evidence chains traceable to verification results
- (C) Outstanding issues list reviewed; no open items rated Critical at release
- (M) Safety case reviewed by the functional safety manager and release authority
- (m) Safety case format compatible with GSN (Goal Structuring Notation) or equivalent for readability
AI Tool Integration Checklist
Severity Classification: Critical (C) = Must resolve before AI tool output enters a baselined artifact | Major (M) = Should resolve before production use | Minor (m) = Best practice for audit readiness
Tool Qualification (ISO 26262-8 Clause 11 / DO-330 / IEC 61508-3 Annex S)
- (C) AI tool classified by Tool Impact (TI) and Tool Error Detection (TD) to determine Tool Confidence Level (TCL)
- (C) TCL-1 tools: no qualification required; rationale documented
- (C) TCL-2 tools: validation measures applied (tool validation report, use case coverage)
- (C) TCL-3 tools: full tool qualification performed with qualification plan, test cases, and report
- (C) Tool version (model version, API version, plugin version) pinned and recorded as a CI in SUP.8
- (M) Tool supplier assessment conducted: vendor safety documentation, known defect list, update policy reviewed
- (M) Tool qualification evidence archived and linked to the project safety case
- (m) Qualification status re-evaluated when tool version changes (including LLM model updates)
Output Verification (Human-in-the-Loop Gates)
- (C) AI-generated artifacts (requirements, code, test cases, design documents) are never committed directly without human review
- (C) Review checklist for the artifact type applied to AI output (e.g., Code Review Checklist for AI-generated code)
- (C) Reviewer competence verified: reviewer has domain expertise to detect AI errors in the artifact type
- (C) AI output reviewed at the same rigor level required for the ASIL of the affected component (independent review for ASIL-C/D)
- (M) Known AI failure modes documented and included in reviewer guidance (hallucinated references, incorrect type widths, plausible-but-wrong logic)
- (m) Statistical sampling of AI output accuracy tracked over time to detect model drift
HITL Gates — Process Integration
- (C) HITL gate defined at every phase transition where AI generates work products: requirements generation, design, code, test cases
- (C) HITL gate record documents: AI tool used, prompt or input summary, output artifact ID, reviewer name, review date, pass/fail
- (C) No phase gate (ASPICE milestone or safety review) bypassed because AI produced the artifact faster
- (M) HITL gate records stored in the CM system as quality records per SUP.1
- (m) HITL gate cycle time tracked; bottlenecks reported to process improvement (PIM.3)
Traceability
- (C) AI-generated artifacts carry explicit provenance metadata: tool name, model version, prompt hash or ID, generation timestamp
- (C) Traceability from AI-generated artifact to source requirement maintained using standard @implements/@verifies tags
- (C) If AI rewrites or refactors an artifact, the traceability links from the prior version are re-verified and updated
- (M) Prompt library version-controlled: prompts that generate safety-critical artifacts are treated as CIs in SUP.8
- (M) Bidirectional traceability matrix includes AI-generated artifacts without special exemptions
- (m) Prompt templates reviewed and approved before use in ASIL-C/D work products
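Provenance metadata is often carried as a structured comment header in the generated artifact itself, alongside the normal traceability tags. A sketch for a C source file (field names and values are illustrative, not a prescribed format):

```c
/**
 * @file  motctrl_diag.c   (illustrative)
 * @brief Diagnostics helpers generated with AI assistance.
 *
 * AI provenance record (illustrative fields):
 *   Generator:   <tool name>, model <model version>
 *   Prompt-ID:   PRMPT-0042 (hash of the version-controlled prompt)
 *   Generated:   2025-01-15T10:32:00Z
 *   Reviewed-by: <reviewer>, HITL gate record HG-117
 *
 * @implements SWE-204-1
 */
```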
Audit Trail
- (C) Audit log records every AI tool invocation that produces an artifact entering the project repository: who, when, tool, artifact
- (C) Audit log is tamper-evident (append-only log, signed entries, or CM-controlled record)
- (C) Audit log retained for the product lifecycle period required by the applicable standards and product liability law (e.g., 10 years or more is common automotive practice)
- (M) Audit log queryable by artifact ID to reconstruct the complete AI involvement history for any work product
- (M) Audit trail reviewed during internal quality audits (SUP.2 / SUP.9) to verify HITL compliance
- (m) Summary metrics from audit log (AI usage rate, review pass rate, rework rate) reported to project management
Determinism and Reproducibility
- (C) Stochastic AI output (temperature > 0) never used for final artifact generation without a deterministic re-generation option or human correction
- (C) For code generation: reproducibility test executed — same prompt + same model version + temperature = 0 produces identical output
- (M) Non-deterministic generation runs documented with "accepted output" snapshot archived as the definitive artifact
- (M) Model update policy defined: process for re-verifying AI-generated artifacts when the underlying model is updated
- (m) Temperature and sampling parameters recorded in the audit log for each generation run