# 6.3: Training and Competency

## Introduction
ASPICE is not intuitive. Developers won't magically know how to write Architecture Decision Records, trace requirements to code, or structure unit tests for MC/DC coverage. Effective training transforms ASPICE from "compliance burden" to "quality enabler." This section provides a comprehensive training curriculum, competency framework, and certification program.
## Training Philosophy

### Anti-Patterns to Avoid
| Bad Training Approach | Why It Fails | Better Approach |
|---|---|---|
| Death by PowerPoint | 8-hour slide deck, zero hands-on | 80% hands-on workshops, 20% theory |
| Generic ASPICE Course | Teaches PAM theory, not YOUR processes | Custom training using YOUR templates, YOUR tools |
| One-and-Done | 2-day training, then abandoned | Continuous learning: workshops + office hours + refreshers |
| Mandatory Attendance, No Engagement | Developers zone out, check emails | Interactive exercises, pair programming, real code reviews |
| Trainer Has Never Implemented ASPICE | No credibility, can't answer practical questions | Trainers are pilot team members (peer learning) |
**Key Principle**: Training must be practical, tool-specific, and continuous.
## Training Curriculum

### 2-Day ASPICE Practitioner Workshop
Target Audience: Software developers, QA engineers, tech leads
Pre-Requisites:
- Familiarity with Git, Jira (basic usage)
- Active software project (will use real examples)
Learning Objectives: By end of workshop, participants can:
- Write ASPICE-compliant User Stories with acceptance criteria (SWE.1)
- Create Architecture Decision Records (ADRs) (SWE.2)
- Perform code reviews using ASPICE checklist (SWE.3)
- Configure CI/CD pipeline for automated testing (SWE.4-5)
- Collect evidence for ASPICE assessment (all processes)
### Day 1: ASPICE Fundamentals + Requirements

**Morning Session (9:00 AM - 12:00 PM): Theory + Context**

#### 09:00-09:30: Icebreaker + Motivation
Activity: "ASPICE Horror Stories"
- Each participant shares a past project failure (e.g., "Requirements changed 50 times")
- Facilitator shows how ASPICE prevents each failure:
  - "Changing requirements? SWE.1 BP7 (Change Management)"
  - "Code review found a bug after release? SWE.3 BP7 (Verify Design)"

Outcome: Participants see ASPICE as a solution, not a burden.
#### 09:30-10:30: ASPICE Overview
Format: Interactive presentation (30 slides, 60 minutes)
Content:
1. **What is ASPICE?** (10 min)
   - Process Assessment Model, not a standard
   - Capability Levels: CL0-5 (focus on CL2 for this org)
   - Automotive industry context (OEM requirements)
2. **V-Model Walkthrough** (15 min)
   - Left side: Requirements → Design → Implementation
   - Right side: Testing (Unit → Integration → Qualification)
   - Show how each ASPICE process maps to the V-Model
3. **SWE Processes Deep Dive** (25 min)
   - SWE.1: Requirements (User Stories)
   - SWE.2: Architecture (ADRs)
   - SWE.3: Detailed Design (Code + Reviews)
   - SWE.4: Unit Testing (pytest + coverage)
   - SWE.5: Integration (HIL testing)
   - SWE.6: Qualification (Acceptance tests)
4. **Q&A** (10 min)
Materials: Slide deck, V-Model poster (participants keep poster)
#### 10:30-10:45: Break ☕

#### 10:45-12:00: SWE.1 Hands-On (Requirements Analysis)
Activity: Write Your First ASPICE-Compliant User Story
Setup:
- Participants pair up (2 people per laptop)
- Each pair works on a feature from their current project
- Use provided Jira template (loaded in training Jira instance)
Exercise:
```markdown
## Exercise 1: Create a User Story (SWE.1 BP1)
**Scenario**: You're adding "Lane Departure Warning" to an ADAS system.
**Task**:
1. Log in to training Jira: https://training.jira.company.com
2. Create a new User Story using the template:
   - **Epic Link**: [SYS-100] "Driver Assistance System"
   - **Summary**: "[SWE-XXX] Lane Departure Warning - Visual Alert"
   - **Description**: Use "As a / I want / So that" format
   - **Acceptance Criteria**: Write 3 testable criteria (Given/When/Then)
   - **Safety Classification**: ASIL-B
   - **Traceability**: Link to parent requirement [SYS-45]
3. Peer review with another pair:
   - Swap laptops
   - Use the SWE.1 checklist (provided) to review
   - Give feedback
**Deliverable**: 1 ASPICE-compliant User Story per pair
**Time**: 45 minutes (30 min creation + 15 min peer review)
```
Facilitator Support:
- Walk around, answer questions
- Show example on projector if pairs are stuck
- Highlight best example: "Great acceptance criteria from Pair 5—specific and testable!"
Outcome: Participants have created a real User Story they can use in their own project.
#### Lunch Break (12:00 PM - 1:00 PM) 🍕

**Afternoon Session (1:00 PM - 5:00 PM): Architecture + Design**

#### 1:00-2:30: SWE.2 Hands-On (Architecture)
Activity: Write an Architecture Decision Record (ADR)
Theory (15 min):
- What is ADR? Document WHY architectural decisions were made
- Template: Context, Decision, Consequences, Alternatives
- Show 3 real ADR examples from pilot project
Exercise (75 min):
## Exercise 2: Create an ADR (SWE.2 BP1)
**Scenario**: Your team needs to choose a communication protocol for sensor data.
**Options**: CAN, Ethernet, FlexRay
**Task**:
1. Open the ADR template: `docs/architecture/ADR-XXX-template.md`
2. Fill in the sections:
   - **Context**: "We need to transmit 10 MB/s sensor data from LIDAR to ECU"
   - **Decision**: "We will use Automotive Ethernet (100BASE-T1)"
   - **Rationale**: Why Ethernet? (bandwidth, cost, latency)
   - **Consequences**: Pros/cons of this choice
   - **Alternatives Considered**: Why NOT CAN? Why NOT FlexRay?
   - **Traceability**: Links to [SYS-67] "High-bandwidth sensor interface"
3. Commit the ADR to Git:
   ```bash
   git add docs/architecture/ADR-007-automotive-ethernet.md
   git commit -m "[SYS-67] Architectural decision: Ethernet for sensor data"
   git push origin feature/sensor-communication
   ```
4. Create a Pull Request (PR) for architecture review

**Deliverable**: 1 ADR committed to Git, PR opened
**Time**: 75 minutes
**Facilitator Role**:
- Demo Git workflow on projector (first-time Git users may struggle)
- Provide decision criteria table (bandwidth, cost, latency comparison)
**Outcome**: Participants understand ADRs document "why," not just "what."
---
#### 2:30-2:45: Break ☕
---
#### 2:45-4:00: SWE.3 Hands-On (Code Review)
**Activity**: Perform ASPICE-Compliant Code Review
**Theory** (15 min):
- SWE.3 BP7: Verify software detailed design (code review)
- What to check: MISRA compliance, traceability, test coverage
- Show GitHub PR template with ASPICE checklist
**Exercise** (60 min):
```markdown
## Exercise 3: Code Review (SWE.3 BP7)
**Scenario**: Review a Pull Request for emergency braking logic.
**Task**:
1. Open training GitHub: https://github.com/training/emergency-brake
2. Navigate to PR #42: "[SWE-234] Add pedestrian detection"
3. Review using ASPICE checklist (provided):
**SWE.1: Requirements Traceability**
- [ ] PR title references requirement ID (e.g., [SWE-234])
- [ ] Code comments include @implements tags
**SWE.3: Coding Standards**
- [ ] MISRA C compliance (check CI pipeline results)
- [ ] No compiler warnings
- [ ] Functions have Doxygen comments
**SWE.4: Unit Testing**
- [ ] Unit tests written for new functions
- [ ] Code coverage ≥ 80% (check coverage report)
- [ ] All tests pass (CI pipeline green)
4. Leave review comments:
- If issues found: "Request Changes" with specific feedback
- If compliant: "Approve" with summary
5. Discuss findings with your pair
**Deliverable**: 1 code review comment posted to PR
**Time**: 60 minutes
```
Facilitator Role:
- Intentionally include 3-4 ASPICE violations in the training PR (e.g., missing traceability comment)
- After exercise, reveal violations and discuss
Outcome: Participants can perform rigorous code reviews.
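The `@implements` traceability check from the checklist above also lends itself to automation. Below is a minimal sketch of such a check; the helper name, the tag convention, and the 10-line lookahead window are assumptions for illustration, not part of any official toolchain:

```python
import re

# Traceability tags like "@implements [SWE-234]" (assumed convention)
IMPLEMENTS_RE = re.compile(r"@implements\s+\[SWE-\d+\]")
DEF_RE = re.compile(r"^\s*def\s+(\w+)")

def find_untraced_functions(source: str) -> list[str]:
    """Return names of functions with no @implements tag near their def line.

    Naive line-based scan: it only inspects the 10 lines after each `def`,
    so tags buried deep in a long docstring would be missed (a sketch,
    not a production linter).
    """
    lines = source.splitlines()
    untraced = []
    for i, line in enumerate(lines):
        match = DEF_RE.match(line)
        if match and not IMPLEMENTS_RE.search("\n".join(lines[i:i + 10])):
            untraced.append(match.group(1))
    return untraced
```

Wired into CI as a pre-review step, a check like this turns the first checklist item into an automatic gate rather than a manual inspection.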
#### 4:00-5:00: SWE.4 Hands-On (Unit Testing)
Activity: Write Unit Tests with Coverage Metrics
Theory (10 min):
- SWE.4 BP3: Test software units
- Coverage requirements: 80% branch coverage (ASIL-B minimum)
- Show pytest example with coverage report
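The 80% threshold is easiest to keep honest when the pipeline, not a reviewer, enforces it. A sketch of such a gate, assuming pytest-cov is installed (the paths are illustrative):

```ini
# pytest.ini (sketch): fail the test run when branch coverage drops below 80%
[pytest]
addopts = --cov=src/braking --cov-branch --cov-fail-under=80
```

With this in place, the coverage check from the exercise happens on every `pytest` run and in CI.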
Exercise (50 min):
```python
## Exercise 4: Unit Testing (SWE.4 BP3)
# Scenario: Test emergency braking function
# File: src/braking/emergency_brake.py

def calculate_brake_force(distance_m: float, speed_kmh: float) -> float:
    """
    Calculate required brake force for emergency stop.

    Implements: [SWE-234] Emergency braking algorithm

    Args:
        distance_m: Distance to obstacle (meters)
        speed_kmh: Current vehicle speed (km/h)

    Returns:
        Brake force (Newtons)
    """
    if distance_m <= 0:
        raise ValueError("Distance must be positive")
    if speed_kmh < 0:
        raise ValueError("Speed must be non-negative")

    # Simple physics: F = ma, deceleration to stop in distance_m
    speed_ms = speed_kmh / 3.6
    deceleration = (speed_ms ** 2) / (2 * distance_m)
    vehicle_mass_kg = 1500
    brake_force = vehicle_mass_kg * deceleration
    return brake_force

# Task: Write unit tests in tests/unit/test_emergency_brake.py using pytest.
#
# Test cases to cover:
# 1. Normal operation: distance=20m, speed=50km/h
# 2. Edge case: distance=5m, speed=30km/h
# 3. Invalid input: distance=0 (should raise ValueError)
# 4. Invalid input: speed=-10 (should raise ValueError)
# 5. Boundary: distance=1m, speed=10km/h (high deceleration)
#
# Run tests with coverage:
#   pytest tests/unit/test_emergency_brake.py --cov=src/braking --cov-report=html
#
# Deliverable: 100% branch coverage for calculate_brake_force()
```
Solution (facilitator shows after 40 min):
```python
# tests/unit/test_emergency_brake.py
import pytest
from src.braking.emergency_brake import calculate_brake_force

def test_normal_operation():
    """Test nominal case: 20m distance, 50 km/h speed"""
    force = calculate_brake_force(distance_m=20, speed_kmh=50)
    assert force > 0       # Should apply brakes
    assert force < 15000   # Reasonable force for 1500 kg vehicle

def test_edge_case_short_distance():
    """Test edge case: 5m distance, 30 km/h speed (high deceleration)"""
    force = calculate_brake_force(distance_m=5, speed_kmh=30)
    assert force > 0
    # Expect higher force for shorter distance
    normal_force = calculate_brake_force(distance_m=20, speed_kmh=30)
    assert force > normal_force

def test_invalid_distance_zero():
    """Test invalid input: distance=0 should raise ValueError"""
    with pytest.raises(ValueError, match="Distance must be positive"):
        calculate_brake_force(distance_m=0, speed_kmh=50)

def test_invalid_distance_negative():
    """Test invalid input: negative distance"""
    with pytest.raises(ValueError, match="Distance must be positive"):
        calculate_brake_force(distance_m=-10, speed_kmh=50)

def test_invalid_speed_negative():
    """Test invalid input: negative speed"""
    with pytest.raises(ValueError, match="Speed must be non-negative"):
        calculate_brake_force(distance_m=20, speed_kmh=-10)

def test_boundary_minimum_distance():
    """Test boundary: 1m distance, 10 km/h (minimum safe scenario)"""
    force = calculate_brake_force(distance_m=1, speed_kmh=10)
    assert force > 0

# Coverage report shows 100% branch coverage [PASS]
```
Outcome: Participants understand how to achieve high code coverage.
#### 4:50-5:00: Day 1 Wrap-Up
- Recap: "Today you learned SWE.1-4 (Requirements → Unit Testing)"
- Homework: "Review your current project—identify 3 User Stories that need better acceptance criteria"
- Preview Day 2: "Tomorrow: Integration testing, evidence collection, Q&A"
### Day 2: Integration, Qualification, Evidence

**Morning Session (9:00 AM - 12:00 PM): Testing + CI/CD**

#### 9:00-9:15: Day 1 Recap + Q&A
- Participants share overnight insights
- Address questions from Day 1 homework
#### 9:15-10:45: SWE.5 Hands-On (Integration Testing)
Activity: Write and Execute Integration Tests
Theory (15 min):
- SWE.5: Software Integration
- Integration vs Unit Testing (unit=isolated, integration=components interact)
- HIL (Hardware-in-the-Loop) testing for embedded systems
Exercise (75 min):
## Exercise 5: Integration Testing (SWE.5 BP3)
**Scenario**: Test integration between camera driver and pedestrian detection algorithm.
**Setup**:
- HIL test bench available (webcam simulates vehicle camera)
- Test framework: Robot Framework
**Task**:
1. Write integration test spec:
```robot
*** Test Cases ***
Camera Driver Integration
    [Documentation]    SWE.5 BP3: Integration test
    [Tags]    Integration    ASIL-B
    Initialize Camera Driver
    Initialize Pedestrian Detector
    ${image} =    Camera Driver Get Frame
    ${detection} =    Pedestrian Detector Process    ${image}
    Should Not Be Empty    ${detection}    msg=Detector must process camera frames
```
2. Run the test on the HIL bench:
   `robot --variable HIL_BENCH:192.168.1.100 tests/integration/test_camera_integration.robot`
3. Review the test log (`log.html`) for pass/fail results

**Deliverable**: Integration test executed, log reviewed
**Time**: 75 minutes
**Outcome**: Participants understand integration testing verifies component interactions.
---
#### 10:45-11:00: Break ☕
---
#### 11:00-12:00: SWE.6 Hands-On (Qualification Testing)
**Activity**: Write Acceptance Tests (Gherkin)
**Theory** (10 min):
- SWE.6: Software Qualification
- Acceptance tests = customer-facing, black-box
- Gherkin format (Given/When/Then) for clarity
**Exercise** (50 min):
```gherkin
## Exercise 6: Acceptance Test (SWE.6 BP3)
# Scenario: Qualify emergency braking feature
# File: tests/acceptance/emergency_braking.feature
Feature: Emergency Braking - Pedestrian Detection
  As a vehicle safety system
  I want to brake automatically when pedestrians detected
  So that collisions are prevented

  @ASIL-B @SWE-234
  Scenario: Brake activation for pedestrian at 20m
    Given vehicle speed is 50 km/h
    And a pedestrian is 20 meters ahead
    When the camera detects the pedestrian
    Then brakes activate within 100 milliseconds
    And vehicle stops before pedestrian
    And stopping distance is at least 2 meters

# Task: Implement step definitions (Python)
# Run: behave tests/acceptance/
```
Outcome: Participants can write customer-readable acceptance tests.
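With behave, each Gherkin step binds to a Python function via `@given`/`@when`/`@then` decorators. The sketch below shows the step logic as plain functions so the Given/When/Then-to-code mapping is visible; it has no behave dependency, and the 80 ms latency value is a stubbed assumption. In the real suite, each function would carry the matching decorator and read measurements from the HIL bench:

```python
class Context:
    """Shared state across steps, analogous to behave's `context` object."""

# Given: vehicle speed is {speed} km/h
def given_vehicle_speed(ctx: Context, speed_kmh: int) -> None:
    ctx.speed_kmh = speed_kmh

# Given: a pedestrian is {distance} meters ahead
def given_pedestrian_ahead(ctx: Context, distance_m: int) -> None:
    ctx.pedestrian_distance_m = distance_m

# When: the camera detects the pedestrian
def when_camera_detects(ctx: Context) -> None:
    # A real step would trigger the detection pipeline on the HIL bench
    # and measure brake latency; here the measurement is stubbed.
    ctx.brake_latency_ms = 80

# Then: brakes activate within {limit} milliseconds
def then_brakes_activate_within(ctx: Context, limit_ms: int) -> None:
    assert ctx.brake_latency_ms <= limit_ms, (
        f"Brake latency {ctx.brake_latency_ms} ms exceeds {limit_ms} ms"
    )

# Walking through the scenario from the feature file:
ctx = Context()
given_vehicle_speed(ctx, 50)
given_pedestrian_ahead(ctx, 20)
when_camera_detects(ctx)
then_brakes_activate_within(ctx, 100)
```

The point for participants: the Gherkin file stays customer-readable while all measurement detail lives in the step functions.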
#### Lunch Break (12:00 PM - 1:00 PM) 🍕

**Afternoon Session (1:00 PM - 5:00 PM): Evidence + Real-World Practice**

#### 1:00-2:30: Evidence Collection Workshop
Activity: Package ASPICE Evidence for Assessment
Theory (20 min):
- What assessors look for
- Work products vs evidence (code IS evidence for SWE.3)
- Evidence organization structure
Exercise (70 min):
## Exercise 7: Collect Evidence Package (All SWE Processes)
**Scenario**: Prepare for pre-assessment tomorrow.
**Task**: Create evidence package for "Lane Departure Warning" feature.
1. **SWE.1 Evidence** (Requirements):
- Export Jira stories to PDF: [SWE-345], [SWE-346], [SWE-347]
- Generate traceability matrix: `python scripts/generate_traceability.py`
2. **SWE.2 Evidence** (Architecture):
- Collect ADRs: `docs/architecture/ADR-008-lane-detection.md`
- Screenshot architecture diagram from Confluence
3. **SWE.3 Evidence** (Design):
- Git log: `git log --oneline --grep="SWE-345" > evidence/git_log.txt`
- PR approval records: Export from GitHub
4. **SWE.4 Evidence** (Unit Testing):
- CI pipeline logs: Download from GitHub Actions (Run #1234)
- Coverage report: `htmlcov/index.html`
5. **SWE.5 Evidence** (Integration):
- HIL test results: `test_results/integration_report.xml`
6. **SWE.6 Evidence** (Qualification):
- Acceptance test log: `behave_report.html`
- Sprint review recording: Link to video
7. **Package Everything**:
```bash
mkdir evidence_package
cp [all files above] evidence_package/
zip -r evidence_package.zip evidence_package/
```
**Deliverable**: `evidence_package.zip` ready for assessor
**Time**: 70 minutes
**Facilitator Role**:
- Provide evidence collection checklist (participants keep for reference)
- Show real assessor report: "See how they reference our evidence"
**Outcome**: Participants know how to prepare for assessment.
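The `scripts/generate_traceability.py` script referenced in the exercise is organization-specific. As an illustration only, a minimal version might flatten a Jira export into a CSV matrix like this; the `key`/`parent`/`tests` field names and the export format are assumptions:

```python
import csv
import io
from typing import Iterable, TextIO

def write_traceability_matrix(stories: Iterable[dict], out: TextIO) -> None:
    """Write a requirement -> story -> test-case matrix as CSV.

    Each story dict mimics one issue from a Jira export, e.g.
    {"key": "SWE-345", "parent": "SYS-45", "tests": ["TC-12", "TC-13"]}.
    """
    writer = csv.writer(out)
    writer.writerow(["System Requirement", "Software Story", "Test Cases"])
    for story in sorted(stories, key=lambda s: s["key"]):
        writer.writerow([story["parent"], story["key"], "; ".join(story["tests"])])

# Example run with two stories from the Lane Departure Warning feature
stories = [
    {"key": "SWE-346", "parent": "SYS-45", "tests": ["TC-14"]},
    {"key": "SWE-345", "parent": "SYS-45", "tests": ["TC-12", "TC-13"]},
]
buf = io.StringIO()
write_traceability_matrix(stories, buf)
print(buf.getvalue())
```

This is exactly the view assessors ask for: which system requirement each story satisfies, and which test cases verify it.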
---
#### 2:30-2:45: Break ☕
---
#### 2:45-4:00: Real-World Practice (Bring Your Own Project)
**Activity**: Apply ASPICE to Participant's Actual Project
**Format**: Coached Lab Time
**Task**:
```markdown
## Exercise 8: Apply ASPICE to Your Project
**Objective**: Start implementing ASPICE in your real work (not just training exercises).
**Choose One**:
**Option A: Requirements** (if your project lacks clear requirements)
- Take 1 feature from your current backlog
- Convert to ASPICE-compliant User Story (SWE.1)
- Add acceptance criteria, traceability
**Option B: Architecture** (if your project has undocumented design decisions)
- Document 1 recent architecture decision as ADR (SWE.2)
- Example: "Why we chose PostgreSQL over MySQL"
**Option C: Code Review** (if your team does ad-hoc reviews)
- Review 1 open PR using ASPICE checklist (SWE.3)
- Leave review comments, suggest improvements
**Option D: Testing** (if your project has low test coverage)
- Write unit tests for 1 untested module (SWE.4)
- Achieve ≥80% branch coverage
**Support**:
- Facilitators circulate, answer questions
- Pair programming encouraged (work with person next to you)
**Time**: 75 minutes
```
Outcome: Participants leave with ASPICE work started in their project.
#### 4:00-4:45: Q&A + Troubleshooting
Format: Open discussion
Topics Covered (participant-driven):
- "How do I convince my manager to allocate time for ASPICE?"
  - Answer: Show the business case (23.00) and the ROI calculation
- "Our project is already 6 months in—can we retrofit ASPICE?"
  - Answer: Yes; start with requirements traceability (backfill Jira), then improve incrementally
- "What if our OEM customer has different requirements than ASPICE?"
  - Answer: ASPICE is a framework; tailor it to the customer (e.g., customer wants DOORS → use DOORS for SWE.1)
Outcome: Practical concerns addressed.
#### 4:45-5:00: Certification + Next Steps
Certification:
- Participants receive "ASPICE Practitioner Certificate" (signed by trainer)
- Certificate lists competencies: SWE.1-6 processes
Next Steps:
- Office Hours: Weekly 2-hour drop-in sessions (Tuesdays 2-4 PM)
- Follow-Up Workshop (optional): "Advanced ASPICE" in 3 months (SYS processes, MAN.3)
- Peer Learning: Join internal Slack channel #aspice-community
Feedback Survey:
- 5-minute anonymous survey (improve future workshops)
## Competency Framework

### ASPICE Skill Levels
| Level | Title | Criteria | Typical Role |
|---|---|---|---|
| Level 0 | Novice | No ASPICE training | New hire |
| Level 1 | Practitioner | Completed 2-day workshop, can execute ASPICE processes with guidance | Junior Developer |
| Level 2 | Proficient | 6+ months ASPICE experience, can work independently, mentor Level 1 | Senior Developer |
| Level 3 | Expert | 2+ years ASPICE, can tailor processes, train others | Tech Lead, Architect |
| Level 4 | ASPICE Assessor | Certified Provisional/Principal Assessor (VDA QMC or Intacs) | Quality Manager |
Progression Path:
- Level 0 → 1: Complete 2-day workshop
- Level 1 → 2: 6 months hands-on experience + peer endorsement
- Level 2 → 3: Train 2+ teams, contribute to process improvements
- Level 3 → 4: External certification (VDA QMC Provisional Assessor course, €5k, 5 days)
## Train-the-Trainer Program

### Scaling Training Capacity
Problem: 200 developers need training, but only 2 external trainers available (bottleneck).
Solution: Train internal trainers from pilot team.
Train-the-Trainer Workshop (3 days):
Day 1: Master the Content
- Trainers go through full 2-day workshop (as participants)
- Learn exercises, common questions, facilitation techniques
Day 2: Teaching Practice
- Each trainee delivers 30-minute session to peer group
- Receives feedback on presentation style, clarity
- Practices handling difficult questions
Day 3: Logistics + Certification
- Learn workshop logistics (room setup, tool access, scheduling)
- Co-facilitate half-day session with experienced trainer
- Certified as "Internal ASPICE Trainer"
Outcome: 6 internal trainers → capacity to train 120 people/month (6 trainers × 20 people/workshop, one workshop per trainer per month).
## Continuous Learning

### Ongoing Training Beyond Initial Workshop
Monthly Brownbag Lunches (1 hour):
- Format: Informal, optional
- Topics: "Deep Dive into MISRA C," "Advanced Git Workflows," "Test Automation Best Practices"
- Presenters: Internal experts (rotate)
Annual ASPICE Refresher (half-day):
- Audience: All developers (mandatory)
- Content: Process updates, new templates, lessons from past year's assessments
- Format: 4 hours (2 hours theory, 2 hours hands-on)
Lunch-and-Learn Series (bi-weekly):
- Format: 30-minute talks during lunch
- Topics: "How Team X reduced code review time by 50%," "ASPICE horror story: What went wrong"
## Summary
Training Curriculum:
- 2-Day Workshop: Hands-on, tool-specific, covers SWE.1-6
- Competency Framework: 5 levels (Novice → Assessor)
- Train-the-Trainer: Scale to 6 internal trainers (120 people/month capacity)
- Continuous Learning: Brownbags, refreshers, lunch-and-learns
Key Success Factors:
- 80% hands-on exercises (not PowerPoint lectures)
- Real tools (Jira, Git, CI/CD) used in workshop
- Peer learning (pilot team members train others)
- Continuous support (office hours, Slack channel)
Metrics:
- Training completion: 100% of developers within 6 months of hire/rollout
- Satisfaction: ≥80% rate training as "helpful" or "very helpful"
- Competency: ≥70% of developers reach Level 2 (Proficient) within 12 months
Next: Measure ASPICE success with metrics and KPIs (23.04).