# 24.0: Assessment Preparation
## Introduction
The ASPICE assessment is not an exam to cram for the night before—it's a validation of 12-18 months of disciplined process execution. Teams that follow continuous ASPICE practices (Chapters 19-23) find assessments routine. Teams that scramble to "create evidence" 2 weeks before fail spectacularly. This chapter shows how to prepare for—and pass—an ASPICE assessment without heroic last-minute effort.
## Assessment Overview
### What is an ASPICE Assessment?
Definition: Formal evaluation by a certified ASPICE assessor to determine your organization's process capability level (CL0-5).
**Types of Assessments**:
| Type | Purpose | Duration | Cost | Outcome |
|---|---|---|---|---|
| Pre-Assessment (Mock Audit) | Practice run, identify gaps before formal audit | 2-3 days | $15k-25k | Gap analysis report (not certified) |
| Formal Assessment (CL2/CL3) | Official audit for OEM contracts, certification | 5-10 days | $50k-150k | Official ASPICE rating (CL0-5) |
| Surveillance Assessment (Annual) | Verify ongoing compliance (maintain certification) | 2-3 days | $25k-40k | Re-certification or gap list |
**When to Assess**:
- Pre-Assessment: 6 months after pilot project completion (validate readiness)
- Formal Assessment: 12-18 months after ASPICE program start (after Wave 3 rollout)
- Surveillance: Annually (OEM contract requirement)
## Assessment Process
**Typical 5-Day Formal Assessment Schedule**: The following breakdown walks through the day-by-day assessment timeline, showing how activities progress from opening and document review through work product sampling, evidence verification, and interviews to the final closing meeting with ratings.
**Day 1: Opening + Document Review (8 hours)**
- Opening Meeting: Assessor introduces scope/timeline, organization presents process overview
- Process Document Review: Review process descriptions (SWE.1-6), check ASPICE base practices coverage
- Preliminary Questions: Assessor clarifies ambiguities
**Days 2-3: Work Product Sampling (16 hours)**
- Assessor selects 2-3 projects for deep dive
- Reviews: User Stories, ADRs, Code (MISRA), Test Reports, Traceability Matrix
- Checks: Consistency, completeness, maintenance
**Day 4: Interviews (8 hours)**
- 10-15 people across roles (Developers, QA, Architects, Managers)
- Verifies: Do people KNOW and FOLLOW the processes?
**Day 5: Findings + Closing (8 hours)**
- Internal review, findings presentation, CL ratings, next steps
**Outcome**: Final report issued in 2-4 weeks, with an official CL rating per process.
## Assessment Scope Definition
### Choosing What to Assess
**Scope Dimensions**:
| Dimension | Decision | Recommendation |
|---|---|---|
| Processes | Which ASPICE processes to assess | SWE.1-6, SUP.1/8/9/10, MAN.3 (the core 11 processes for CL2) |
| Projects | Which projects to include | 2-3 representative projects (pilot + Wave 2 projects) |
| Organizational Units | Which teams to assess | 1-2 teams (20-30 people) for first assessment |
| Target CL | CL1, CL2, or CL3 | CL2 (industry standard for OEM contracts) |
**Example Scope Statement**:
```markdown
# ASPICE Assessment Scope
**Organization**: Acme Automotive Software Division
**Assessment Date**: June 15-19, 2026
**Assessor**: TÜV SÜD (VDA QMC certified)
## Scope
### Processes (11 total)
**Software Engineering (SWE)**:
- SWE.1: Software Requirements Analysis
- SWE.2: Software Architectural Design
- SWE.3: Software Detailed Design and Unit Construction
- SWE.4: Software Unit Verification
- SWE.5: Software Integration and Integration Test
- SWE.6: Software Qualification Testing
**Supporting Processes (SUP)**:
- SUP.1: Quality Assurance
- SUP.8: Configuration Management
- SUP.9: Problem Resolution Management
- SUP.10: Change Request Management
**Management Processes (MAN)**:
- MAN.3: Project Management
### Projects in Scope (3 projects)
1. **Parking Assist** (Pilot project, ASIL-A, 5 developers, 4 months)
2. **Lane Departure Warning** (Wave 2 project, ASIL-B, 7 developers, 6 months)
3. **Adaptive Cruise Control** (Wave 2 project, ASIL-B, 8 developers, 6 months)
### Organizational Units
- **ADAS Development Team** (20 people)
  - 15 developers, 3 QA engineers, 2 architects
### Target Capability Level
- **CL2** (Managed Process) across all 11 processes
### Out of Scope
- SYS.2-5 (System processes) - handled by separate systems engineering team
- HWE processes (no hardware development in this unit)
- ACQ, SPL, REU processes (not applicable)
## Assessment Logistics
- **Duration**: 5 days (40 hours)
- **Assessor**: 2 assessors (lead + co-assessor)
- **Cost**: $75,000 (including travel, report)
- **Sponsor**: VP of Engineering
```
## Pre-Assessment Checklist
### 3 Months Before Assessment
**Process Readiness**:
### Process Documentation (MAN.3, SUP.1)
- [ ] Process descriptions documented for all 11 processes
- [ ] Process descriptions stored in Confluence: `ASPICE/Processes/`
- [ ] Each process document includes:
- [ ] Purpose statement
- [ ] Entry/exit criteria
- [ ] Activities (mapped to ASPICE base practices)
- [ ] Roles and responsibilities
- [ ] Work products produced
- [ ] Tools used
**Example**:
```markdown
# SWE.1 Software Requirements Analysis
**Purpose**: Establish software requirements for the system.
**Entry Criteria**:
- System requirements available ([SYS-xxx] Epics in Jira)
- Product Owner assigned
**Activities**:
1. Elicit requirements from stakeholders (BP1: Specify functional/non-functional)
2. Document requirements as User Stories in Jira (BP1)
3. Define acceptance criteria (Given/When/Then) (BP1)
4. Establish traceability to system requirements (BP5)
5. Obtain Product Owner approval (BP4)
6. Manage changes via Change Requests (BP7)
**Roles**:
- Product Owner: Define requirements
- Developers: Refine User Stories
- QA: Review acceptance criteria for testability
**Work Products**:
- User Stories in Jira (with acceptance criteria)
- Traceability matrix (auto-generated from Jira links)
**Tools**:
- Jira (requirement management)
- Confluence (traceability matrix storage)
```
### Work Product Readiness (All SWE Processes)
- [ ] Sample 3 projects with complete work products:
- SWE.1: 20+ User Stories (with acceptance criteria, traceability)
- SWE.2: 3+ ADRs (architecture decisions)
- SWE.3: Source code in Git (with PR reviews, MISRA compliance)
- SWE.4: Unit test coverage reports (≥80%)
- SWE.5: Integration test results (HIL test logs)
- SWE.6: Acceptance test results (Gherkin scenarios executed)
- SUP.8: Traceability matrix (bidirectional)
- SUP.9: Bug reports (Jira bugs with root cause analysis)
- [ ] Organize evidence in folder structure:
```
evidence/
├── project_1_parking_assist/
│   ├── SWE.1_requirements/
│   │   ├── jira_export.pdf
│   │   └── user_stories.json
│   ├── SWE.2_architecture/
│   │   ├── ADR-001-sensor-fusion.md
│   │   ├── ADR-002-control-algorithm.md
│   │   └── architecture_diagram.png
│   ├── SWE.3_design/
│   │   ├── git_log.txt
│   │   └── code_review_records.pdf (PR approvals)
│   ├── SWE.4_unit_tests/
│   │   ├── coverage_report.html
│   │   └── ci_pipeline_logs.txt
│   ├── SWE.5_integration/
│   │   └── hil_test_results.xml
│   ├── SWE.6_qualification/
│   │   └── acceptance_test_report.html
│   └── SUP.8_traceability/
│       └── traceability_matrix.xlsx
├── project_2_lane_departure/
│   └── [same structure]
└── project_3_adaptive_cruise/
    └── [same structure]
```
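A short script can confirm the evidence tree is complete before the assessor arrives. The sketch below is illustrative only: the folder names mirror the layout above, and the coverage gate assumes a Cobertura-style `coverage.xml` whose root element carries a `line-rate` attribute (adapt both to your actual tools and report formats).

```python
# Sketch: pre-assessment evidence completeness check (assumptions noted above).
import xml.etree.ElementTree as ET
from pathlib import Path

# Folder names mirror the evidence tree above -- adjust to your own layout.
REQUIRED_DIRS = [
    "SWE.1_requirements", "SWE.2_architecture", "SWE.3_design",
    "SWE.4_unit_tests", "SWE.5_integration", "SWE.6_qualification",
    "SUP.8_traceability",
]
COVERAGE_THRESHOLD = 0.80  # SWE.4 checklist item: >=80% unit test coverage

def missing_evidence(project_dir: Path) -> list[str]:
    """Return the required evidence subfolders that are absent."""
    return [d for d in REQUIRED_DIRS if not (project_dir / d).is_dir()]

def coverage_ok(coverage_xml: str, threshold: float = COVERAGE_THRESHOLD) -> bool:
    """True if a Cobertura-style report meets the line-coverage threshold."""
    root = ET.fromstring(coverage_xml)
    return float(root.attrib["line-rate"]) >= threshold
```

Run it over each `project_*` folder; an empty `missing_evidence()` result for all three projects means the tree matches what the assessor will look for.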
### Team Readiness (Interviews)
- [ ] Conduct mock interviews (ASPICE Program Manager plays assessor)
- [ ] Train team on common questions:
- "How do you ensure requirements traceability?"
- Answer: "Every commit message references a Jira story ID [SWE-XXX]. We auto-generate traceability matrix monthly."
- "What happens if a MISRA violation is found?"
- Answer: "CI pipeline rejects PR. Developer fixes violation before merge."
- "How do you manage requirement changes?"
- Answer: "Product Owner creates Change Request in Jira (SUP.10). Impact analysis performed. Approved changes update User Story + linked code."
- [ ] Identify interview participants (10-15 people):
- 3 Senior Developers
- 2 Junior Developers
- 2 QA Engineers
- 1 Architect
- 1 Product Owner
- 1 Scrum Master
- 1 Engineering Manager
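Several of the trained answers above rest on the same enforcement point: the pipeline rejects commits whose messages lack a Jira story ID. A minimal sketch of that check, assuming the bracketed `[SWE-xxx]`-style key convention used in this book's examples (in a real setup this would run as a `commit-msg` git hook or CI step, and the accepted project keys are illustrative):

```python
# Sketch: commit-message gate behind the traceability answers above.
# The accepted Jira project keys (SWE, CR, SUP) are illustrative assumptions.
import re

JIRA_ID = re.compile(r"^\[(?:SWE|CR|SUP)-\d+\]\s")

def commit_message_ok(message: str) -> bool:
    """True if the message starts with a bracketed Jira issue ID."""
    return JIRA_ID.match(message) is not None

print(commit_message_ok("[SWE-234] Reduce brake latency to 100ms"))  # True
print(commit_message_ok("quick fix without a story reference"))      # False
```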
### Tool Access for Assessor
- [ ] Create read-only accounts for assessor:
- Jira (access to all 3 projects)
- GitHub (access to source code repositories)
- Confluence (access to process documentation)
- CI/CD pipeline (view build logs, test reports)
- [ ] Prepare demo environment:
- HIL test bench available (for live integration test demo)
- Sample build: Assessor can trigger CI pipeline, observe results
### Logistics (1 Month Before)
- Book conference room (5 days, 8 hours/day)
- Set up projector, whiteboard, refreshments
- Block calendars for interview participants (Day 4)
- Prepare opening presentation (30 slides, 1 hour):
- Company overview
- ASPICE journey (pilot → rollout)
- Process overview (SWE.1-6, SUP, MAN)
- Project scope (3 projects)
---
## Assessment Survival Guide
### Do's and Don'ts
| Do [PASS] | Don't [FAIL] |
|------|---------|
| **Be honest**: "We don't do MC/DC coverage yet" (assessor appreciates transparency) | **Fabricate evidence**: Assessor WILL find inconsistencies (career-ending for assessor to miss fraud) |
| **Show real work products**: Actual Jira tickets, Git commits | **Create fake documents**: Word docs written night before (obvious to experienced assessor) |
| **Admit gaps**: "SWE.5 integration testing is manual, we're automating next quarter" | **Over-promise**: "We always do X" when you don't (one counter-example fails you) |
| **Provide context**: "This project was legacy, retrofitted ASPICE mid-stream" | **Make excuses**: "We were too busy to document" (assessor doesn't care) |
| **Demonstrate tools**: Live Jira demo, show CI pipeline running | **Death by PowerPoint**: 200-slide deck (assessor wants work products, not slides) |
**Golden Rule**: Assessors are looking for **evidence of consistent practice**, not perfection.
---
### Interview Tips for Team Members
**Common Assessor Questions** (with good/bad answers):
**Q: "How do you ensure traceability between requirements and code?"**
- [PASS] **Good Answer**: "Every commit message starts with [SWE-XXX] referencing the Jira story. Our CI pipeline rejects commits without a valid Jira ID. We auto-generate a traceability matrix monthly from Git log + Jira export."
- [FAIL] **Bad Answer**: "Um, we just... know which code goes with which requirement?" (No evidence)
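The monthly auto-generation mentioned in the good answer amounts to a simple join of git commit subjects against exported Jira story IDs. The sketch below is illustrative: the `[SWE-xxx]` prefix convention follows this book's examples, and the plain lists stand in for a real `git log` output and Jira export.

```python
# Sketch: join commit subjects to Jira story IDs to build a traceability
# matrix and surface gaps. Input shapes are illustrative, not a Jira schema.
import re
from collections import defaultdict

ID_PREFIX = re.compile(r"^\[([A-Z]+-\d+)\]")

def trace_matrix(commit_subjects, story_ids):
    """Return (story -> commits, stories with no referencing commit)."""
    matrix = defaultdict(list)
    for subject in commit_subjects:
        m = ID_PREFIX.match(subject)
        if m and m.group(1) in story_ids:
            matrix[m.group(1)].append(subject)
    gaps = [sid for sid in story_ids if sid not in matrix]  # traceability holes
    return dict(matrix), gaps

commits = ["[SWE-234] Lower brake latency", "[SWE-234] Add latency test", "Fix typo"]
stories = ["SWE-234", "SWE-235"]
matrix, gaps = trace_matrix(commits, stories)
print(gaps)  # ['SWE-235'] -- exactly the gap an assessor would flag
```

Stories left in `gaps` are the traceability holes to close before assessment week, not during it.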
**Q: "What happens if a unit test fails in the CI pipeline?"**
- [PASS] **Good Answer**: "The PR is blocked from merging. Developer sees failure in GitHub PR status check, fixes the test or the code, pushes again. PR can only merge after all tests pass."
- [FAIL] **Bad Answer**: "Well, usually we fix it... I think?" (No process)
**Q: "Show me an example of a requirement change you handled."**
- [PASS] **Good Answer**: "Sure, [CR-45] in Jira. Customer wanted to lower brake latency from 150ms to 100ms. We performed impact analysis (3 modules affected), estimated 2 weeks, Product Owner approved. We updated [SWE-234] User Story, modified code, re-ran tests, verified with customer in Sprint Review."
- [FAIL] **Bad Answer**: "Changes just happen, we don't really track them." (SUP.10 fail)
**Q: "How do you know if your architecture is good?"**
- [PASS] **Good Answer**: "We document architecture decisions in ADRs (Architecture Decision Records). Each ADR includes context, decision, rationale, and alternatives considered. We review ADRs in architecture review meetings before major design changes."
- [FAIL] **Bad Answer**: "Our architect is really experienced, so we trust him." (No evidence)
---
## Summary
**Assessment Preparation Timeline**:
- **3 Months Before**: Process documentation complete, evidence organized
- **1 Month Before**: Mock interviews, tool access for assessor, logistics finalized
- **1 Week Before**: Final evidence review, team briefing
- **Assessment Week**: 5 days (document review, work product sampling, interviews, findings)
- **2 Weeks After**: Final report with CL ratings
**Critical Success Factors**:
1. **Continuous Practice**: Teams following ASPICE daily (Ch 19-23) → assessment is routine
2. **Honest Evidence**: Show real work products, admit gaps (transparency builds trust)
3. **Team Preparedness**: Everyone knows their process, can explain it confidently
4. **Organized Evidence**: Clear folder structure, easy for assessor to navigate
**Chapter Structure**:
1. **24.01 Evidence Collection** - How to package work products for assessment
2. **24.02 Common Findings** - Top 10 reasons teams fail, how to avoid
3. **24.03 Continuous Readiness** - Stay assessment-ready year-round (not last-minute scramble)
**Next**: Detailed evidence collection strategies (24.01).
---
**Navigation**: [← 23.04 Metrics and KPIs](23.04_Metrics_and_KPIs.md) | [Contents](../00_Front_Matter/00.06_Table_of_Contents.md) | [24.01 Evidence Collection →](24.01_Evidence_Collection.md)