6.1: Pilot Project Selection
Introduction
Do NOT roll out ASPICE organization-wide on day one. The right pilot project de-risks ASPICE adoption: it proves processes work, identifies tool gaps, trains early champions, and generates organizational momentum. A bad pilot choice kills ASPICE initiatives. This section shows how to select and execute a winning pilot.
Pilot Project Selection Criteria
The Goldilocks Project (Not Too Hard, Not Too Easy)
Selection Matrix:
| Criterion | Weight | Scoring Guidelines |
|---|---|---|
| Size | 20% | 3-8 developers (not 1, not 50). Team small enough to coordinate, large enough to test collaboration processes. |
| Duration | 15% | 3-6 months (not 2 weeks, not 2 years). Long enough to complete V-cycle, short enough for feedback loop. |
| Complexity | 25% | Moderate complexity (not trivial, not mission-critical). Should exercise SWE.1-6, but failure won't sink company. |
| Team Quality | 25% | Mix of senior + junior (not all newbies, not all cynics). Senior devs become ASPICE champions. |
| Stakeholder Visibility | 10% | Visible to management (not stealth project, not board-level scrutiny). Enough attention to demonstrate value. |
| Safety Level | 5% | ASIL-A or QM preferred for pilot (not ASIL-D first time). Lower risk if processes fail. |
Scoring Example:
```python
# Pilot Project Evaluation Tool

class PilotProjectEvaluator:
    """Score potential pilot projects on suitability for ASPICE."""

    CRITERIA_WEIGHTS = {
        "size": 0.20,
        "duration": 0.15,
        "complexity": 0.25,
        "team_quality": 0.25,
        "visibility": 0.10,
        "safety_level": 0.05,
    }

    def evaluate_project(self, project: dict) -> dict:
        """
        Evaluate project suitability for ASPICE pilot.
        Score: 0-100 (higher is better).
        """
        scores = {
            "size": self._score_size(project["team_size"]),
            "duration": self._score_duration(project["duration_months"]),
            "complexity": self._score_complexity(project["complexity"]),
            "team_quality": self._score_team(project["team"]),
            "visibility": self._score_visibility(project["visibility"]),
            "safety_level": self._score_safety(project["asil_level"]),
        }
        # Calculate weighted score
        total_score = sum(
            scores[criterion] * self.CRITERIA_WEIGHTS[criterion]
            for criterion in scores
        )
        return {
            "project_name": project["name"],
            "scores": scores,
            "total_score": round(total_score, 1),
            "recommendation": self._get_recommendation(total_score),
            "rationale": self._generate_rationale(scores),
        }

    def _score_size(self, team_size: int) -> float:
        """Ideal: 3-8 developers (score 100). <3 or >12 penalized."""
        if 3 <= team_size <= 8:
            return 100
        elif team_size < 3:
            return 30   # Too small, won't test collaboration
        elif team_size <= 12:
            return 70   # Acceptable but coordination harder
        else:
            return 20   # Too large for pilot

    def _score_duration(self, months: int) -> float:
        """Ideal: 3-6 months (score 100)."""
        if 3 <= months <= 6:
            return 100
        elif months < 3:
            return 40   # Too short for full V-cycle
        elif months <= 9:
            return 70   # Acceptable but long feedback loop
        else:
            return 30   # Too long, loses momentum

    def _score_complexity(self, complexity: str) -> float:
        """Ideal: moderate complexity."""
        complexity_scores = {
            "trivial": 20,      # Doesn't exercise ASPICE processes
            "low": 60,
            "moderate": 100,    # Sweet spot
            "high": 70,
            "very_high": 30,    # Too risky for pilot
        }
        return complexity_scores.get(complexity, 50)

    def _score_team(self, team: dict) -> float:
        """
        Ideal: mix of senior (ASPICE champions) + junior (learners).
        Volunteers only (no forced participation).
        """
        score = 0
        # Senior developers (50% weight)
        if team["senior_devs"] >= 2:
            score += 50
        elif team["senior_devs"] == 1:
            score += 30
        else:
            score += 10   # No mentorship
        # Volunteers (30% weight)
        if team["volunteers_percentage"] == 100:
            score += 30
        elif team["volunteers_percentage"] >= 50:
            score += 15
        else:
            score += 0    # Forced participation = resistance
        # ASPICE exposure (20% weight)
        if team["aspice_training_completed"]:
            score += 20
        else:
            score += 10   # Will need training
        return score

    def _score_visibility(self, visibility: str) -> float:
        """Moderate visibility best (not stealth, not C-level scrutiny)."""
        visibility_scores = {
            "stealth": 20,     # No executive awareness
            "low": 50,
            "moderate": 100,   # VP-level visibility
            "high": 70,        # C-level watching (adds pressure)
            "critical": 30,    # Board-level (too much pressure)
        }
        return visibility_scores.get(visibility, 50)

    def _score_safety(self, asil: str) -> float:
        """Lower ASIL preferred for pilot (less risk)."""
        asil_scores = {
            "QM": 100,      # No safety impact
            "ASIL-A": 90,
            "ASIL-B": 70,
            "ASIL-C": 40,
            "ASIL-D": 20,   # Too risky for first project
        }
        return asil_scores.get(asil, 50)

    def _get_recommendation(self, score: float) -> str:
        if score >= 80:
            return "[PASS] HIGHLY RECOMMENDED - Ideal pilot candidate"
        elif score >= 60:
            return "[WARN] ACCEPTABLE - Good pilot with some reservations"
        else:
            return "[FAIL] NOT RECOMMENDED - Choose different project"

    def _generate_rationale(self, scores: dict) -> list:
        issues = []
        for criterion, score in scores.items():
            if score < 60:
                issues.append(f"[WARN] {criterion.replace('_', ' ').title()}: Score {score}/100 (below threshold)")
        return issues if issues else ["All criteria met [PASS]"]


# Example: Evaluate 3 potential pilot projects
evaluator = PilotProjectEvaluator()
projects = [
    {
        "name": "Parking Assist Feature",
        "team_size": 5,
        "duration_months": 4,
        "complexity": "moderate",
        "team": {"senior_devs": 2, "volunteers_percentage": 100,
                 "aspice_training_completed": True},
        "visibility": "moderate",
        "asil_level": "ASIL-A",
    },
    {
        "name": "Autonomous Emergency Braking",
        "team_size": 15,
        "duration_months": 12,
        "complexity": "very_high",
        "team": {"senior_devs": 3, "volunteers_percentage": 40,
                 "aspice_training_completed": False},
        "visibility": "critical",
        "asil_level": "ASIL-D",
    },
    {
        "name": "Dashboard UI Refresh",
        "team_size": 2,
        "duration_months": 2,
        "complexity": "low",
        "team": {"senior_devs": 1, "volunteers_percentage": 100,
                 "aspice_training_completed": False},
        "visibility": "low",
        "asil_level": "QM",
    },
]

for project in projects:
    result = evaluator.evaluate_project(project)
    print(f"\n{result['project_name']}: {result['total_score']}/100")
    print(f"  {result['recommendation']}")
    for issue in result['rationale']:
        print(f"    {issue}")
```
Output:

```
Parking Assist Feature: 99.5/100
  [PASS] HIGHLY RECOMMENDED - Ideal pilot candidate
    All criteria met [PASS]

Autonomous Emergency Braking: 35.0/100
  [FAIL] NOT RECOMMENDED - Choose different project
    [WARN] Size: Score 20/100 (below threshold)
    [WARN] Duration: Score 30/100 (below threshold)
    [WARN] Complexity: Score 30/100 (below threshold)
    [WARN] Visibility: Score 30/100 (below threshold)
    [WARN] Safety Level: Score 20/100 (below threshold)

Dashboard UI Refresh: 54.5/100
  [FAIL] NOT RECOMMENDED - Choose different project
    [WARN] Size: Score 30/100 (below threshold)
    [WARN] Duration: Score 40/100 (below threshold)
    [WARN] Visibility: Score 50/100 (below threshold)
```
Winner: Parking Assist Feature (moderate complexity, volunteer team, manageable scope).
Pilot Project Charter
Formal Pilot Definition Document
# ASPICE Pilot Project Charter
**Project**: Parking Assist Feature (Ultrasonic Sensor-Based)
**Duration**: 4 months (March 1 - June 30, 2025)
**Pilot Goal**: Achieve ASPICE CL2 compliance on one feature to validate processes before organization-wide rollout.
---
## Success Criteria
### Primary Success Criteria (Must Achieve)
1. **ASPICE CL2 Pre-Assessment**: External assessor rates pilot at CL2 (≥85% of base practices)
2. **No Schedule Slip**: Feature delivered on time (June 30)
3. **Quality Maintained**: ≤2 defects/KLOC (comparable to non-ASPICE projects)
### Secondary Success Criteria (Nice to Have)
4. **Team Satisfaction**: ≥70% of team rates ASPICE processes as "helpful" (post-pilot survey)
5. **Productivity**: No more than 10% productivity decrease vs baseline
6. **Evidence Completeness**: All SWE.1-6 work products generated
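The defect-density target in criterion 3 can be tracked with a few lines of Python. A minimal sketch (the figures below are illustrative, not pilot data):

```python
def defect_density(defects: int, lines_of_code: int) -> float:
    """Defects per KLOC (thousand lines of code)."""
    return defects / (lines_of_code / 1000)

# Illustrative numbers: 9 defects found in a 6,000-line feature
density = defect_density(9, 6000)
print(f"{density:.1f} defects/KLOC")  # 1.5 defects/KLOC -> meets the <=2 target
```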
---
## Scope
### In Scope (SWE Processes)
- SWE.1: Software Requirements Analysis (Jira User Stories)
- SWE.2: Software Architectural Design (ADRs)
- SWE.3: Software Detailed Design (C code + reviews)
- SWE.4: Unit Verification (pytest + coverage)
- SWE.5: Integration (HIL testing)
- SWE.6: Qualification Testing (acceptance tests)
### Supporting Processes
- SUP.1: Quality Assurance (sprint reviews)
- SUP.8: Configuration Management (Git branching, tagging)
- SUP.9: Problem Resolution (Jira bugs)
- MAN.3: Project Management (sprint planning, burndown)
### Out of Scope (Future Phases)
- SYS.2-5 (System processes) - handled by separate systems team
- HWE processes (no hardware changes in pilot)
- ACQ, SPL, REU processes (not applicable)
---
## Team
| Role | Name | Time Allocation | Responsibilities |
|------|------|-----------------|-------------------|
| **ASPICE Pilot Lead** | Alice Johnson | 100% (dedicated) | Coordinate pilot, liaise with assessor, collect evidence |
| **Tech Lead** | Bob Smith | 50% dev + 50% mentoring | Architectural decisions, code reviews, ADRs |
| **Senior Developer** | Charlie Davis | 100% | Parking algorithm implementation |
| **Junior Developer** | Diana Lee | 100% | Sensor driver integration |
| **QA Engineer** | Eve Martinez | 50% | Integration/acceptance testing |
| **Scrum Master** | Frank Chen | 25% | Facilitate sprint ceremonies, remove blockers |
**Total**: 4.75 FTE (full-time equivalents) across 6 people
---
## Timeline
### Month 1 (March): Requirements & Architecture
- **Week 1-2**: Requirements elicitation (SWE.1)
- Deliverable: 20 User Stories with acceptance criteria
- Traceability: Stories linked to [SYS-100] "Parking Assist System"
- **Week 3-4**: Architecture design (SWE.2)
- Deliverable: 3 ADRs (sensor fusion, control algorithm, safety monitor)
- Review: Architecture review with external assessor
### Month 2 (April): Implementation & Unit Testing
- **Week 5-6**: Sprint 1 - Sensor driver (SWE.3, SWE.4)
- Deliverable: Ultrasonic sensor driver + unit tests (85% coverage)
- **Week 7-8**: Sprint 2 - Distance calculation (SWE.3, SWE.4)
- Deliverable: Distance algorithm + unit tests (90% coverage)
### Month 3 (May): Integration & Testing
- **Week 9-10**: Sprint 3 - Control logic (SWE.3, SWE.5)
- Deliverable: Steering control + integration tests (HIL bench)
- **Week 11-12**: Sprint 4 - Safety monitor (SWE.3, SWE.5)
- Deliverable: Watchdog + redundancy checks
### Month 4 (June): Qualification & Assessment
- **Week 13-14**: Sprint 5 - Qualification testing (SWE.6)
- Deliverable: 50 acceptance test cases executed (100% pass rate)
- **Week 15**: Pre-assessment preparation
- Deliverable: Evidence package (all work products organized)
- **Week 16**: External pre-assessment
- Deliverable: CL2 rating from assessor
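The sprint coverage targets above (85%, 90%) work best as automated CI gates rather than manual checks. A minimal sketch, assuming a Cobertura-style `coverage.xml` as produced by `pytest --cov --cov-report=xml` (the `line-rate` attribute on the root element holds overall line coverage):

```python
import xml.etree.ElementTree as ET

def coverage_gate(coverage_xml: str, threshold: float) -> bool:
    """Return True if line coverage in a Cobertura report meets the threshold."""
    root = ET.fromstring(coverage_xml)
    line_rate = float(root.attrib["line-rate"])  # fraction between 0.0 and 1.0
    print(f"Line coverage: {line_rate:.0%} (target {threshold:.0%})")
    return line_rate >= threshold

# Illustrative report snippet (real files come from pytest-cov)
sample = '<coverage line-rate="0.87" branch-rate="0.81"></coverage>'
print(coverage_gate(sample, 0.85))  # True  -> Sprint 1 gate (85%) passes
print(coverage_gate(sample, 0.90))  # False -> Sprint 2 gate (90%) would fail
```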
---
## Budget
| Item | Cost | Justification |
|------|------|---------------|
| **External Assessor** | $25,000 | Pre-assessment (3 days) + recommendations |
| **Tools (Jira, SonarQube)** | $5,000 | 5 users × 4 months |
| **Training** | $10,000 | 2-day ASPICE workshop (5 people × $2k) |
| **HIL Test Bench Time** | $8,000 | 40 hours × $200/hour |
| **Contingency (15%)** | $7,200 | Buffer for unexpected needs |
| **TOTAL** | **$55,200** | |
**ROI Calculation**: If the pilot succeeds, it unlocks a $5M OEM contract — roughly 90× the $55.2k pilot budget (~9,000% ROI).
---
## Risks & Mitigation
| Risk | Probability | Impact | Mitigation |
|------|-------------|--------|------------|
| **Team resistance** | Medium | High | Volunteers only, frequent feedback sessions |
| **Tool integration issues** | Medium | Medium | 2-week buffer for tool setup (before Month 1) |
| **Requirements creep** | Low | High | Strict change control (SUP.10), Product Owner gatekeeping |
| **Assessor fails project** | Low | Critical | Weekly check-ins with assessor, mid-pilot review at Month 2 |
---
## Lessons Learned Framework
### Data Collection (Throughout Pilot)
- **Weekly**: Team survey (5 questions, 2 minutes) - track morale, process friction
- **Bi-weekly**: Metrics dashboard (velocity, defect density, review time)
- **Monthly**: Retrospective with Process Owner - identify process gaps
### Post-Pilot Review (Week 17)
- **What Worked**: Document successful practices (e.g., "ADRs reduced architecture discussions by 60%")
- **What Didn't**: Identify bottlenecks (e.g., "Code review approval took 3 days, blocked sprints")
- **Adjustments for Rollout**: Refine processes before organization-wide deployment
**Deliverable**: 20-page "Pilot Lessons Learned" report for executive sponsor.
---
## Governance
### Steering Committee (Monthly)
- **Attendees**: Executive Sponsor, ASPICE Program Manager, Pilot Lead, Tech Lead
- **Agenda**: Progress review, risk mitigation, budget tracking
- **Duration**: 1 hour
### Pilot Team Retrospectives (Bi-weekly)
- **Attendees**: Pilot team (5 people)
- **Agenda**: Process improvements, ASPICE friction points
- **Duration**: 1 hour
### Assessor Check-ins (Bi-weekly)
- **Attendees**: Pilot Lead, Assessor (remote)
- **Agenda**: Evidence review, early feedback
- **Duration**: 30 minutes
---
## Exit Criteria
Pilot is complete when:
1. [PASS] All 20 User Stories delivered (Definition of Done met)
2. [PASS] External pre-assessment completed (CL2 rating achieved)
3. [PASS] Lessons Learned report published
4. [PASS] Team satisfaction ≥70% (post-pilot survey)
5. [PASS] Executive sponsor approves rollout to Phase 2 (next 3 teams)
**Approval Authority**: Executive Sponsor + ASPICE Program Manager
---
**Signed**:
- Executive Sponsor: _______________ Date: ___________
- ASPICE Program Manager: _______________ Date: ___________
- Pilot Lead: _______________ Date: ___________
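The exit criteria in the charter above lend themselves to a simple go/no-go check at Week 17. A sketch of that check (the status field names are invented for illustration):

```python
def check_exit_criteria(status: dict) -> list:
    """Return the list of unmet exit criteria (empty list = pilot complete)."""
    criteria = {
        "All User Stories delivered": status["stories_delivered"] >= status["stories_planned"],
        "CL2 rating achieved": status["cl2_achieved"],
        "Lessons Learned report published": status["report_published"],
        "Team satisfaction >= 70%": status["team_satisfaction_pct"] >= 70,
        "Sponsor approved Phase 2": status["sponsor_approved"],
    }
    return [name for name, met in criteria.items() if not met]

# Illustrative status: everything done except the sponsor's sign-off
unmet = check_exit_criteria({
    "stories_delivered": 20, "stories_planned": 20,
    "cl2_achieved": True, "report_published": True,
    "team_satisfaction_pct": 82, "sponsor_approved": False,
})
print(unmet)  # ['Sponsor approved Phase 2']
```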
Pilot Execution Best Practices
Week 1: Kickoff Meeting
Agenda:
# Pilot Kickoff Meeting (2 hours)
## Part 1: The "Why" (30 minutes)
- Executive sponsor explains strategic importance
- "We lost $12M contract to competitor with ASPICE CL2"
- "This pilot proves we can compete"
- Q&A: Address fears/concerns
## Part 2: The "What" (45 minutes)
- Pilot Lead walks through charter
- Scope, timeline, success criteria
- Show example work products (templates from Ch 20)
- Emphasize: "We're learning together, mistakes are OK"
## Part 3: The "How" (30 minutes)
- Tool onboarding
- Jira: Create first User Story together
- Git: Demonstrate commit message convention
- CI/CD: Show pipeline in action
- Hands-on: Each team member creates a User Story
## Part 4: Commitment (15 minutes)
- Team agrees on working agreement
- Daily standups at 9:30 AM
- Code reviews within 24 hours
- Bi-weekly retrospectives every other Friday
- Team signs charter (symbolic commitment)
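The commit message convention demonstrated in Part 3 is what later makes requirement-to-code traceability auditable. A minimal validation sketch (the `PARK-` Jira project key and the exact format are hypothetical examples, not a fixed ASPICE rule):

```python
import re

# Hypothetical convention: every commit subject starts with a Jira issue key,
# e.g. "PARK-42: add sensor distance filter"
COMMIT_PATTERN = re.compile(r"^[A-Z][A-Z0-9]+-\d+: .+")

def check_commit_message(subject: str) -> bool:
    """Accept only commit subjects that lead with a Jira issue key."""
    return bool(COMMIT_PATTERN.match(subject))

print(check_commit_message("PARK-42: add sensor distance filter"))  # True
print(check_commit_message("quick fix"))                            # False
```

Wired into a Git `commit-msg` hook, a check like this rejects non-traceable commits before they ever reach the repository.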
Mid-Pilot Review (Month 2)
Checkpoint with External Assessor:
# Mid-Pilot Assessor Review (4 hours)
## Objective
Validate we're on track for CL2 before investing 2 more months.
## Agenda
### 1. Work Product Sampling (2 hours)
Assessor reviews:
- 5 User Stories (SWE.1): Check acceptance criteria, traceability
- 2 ADRs (SWE.2): Verify architecture decisions documented
- 3 PRs (SWE.3): Check code review rigor (2 approvals, MISRA compliance)
- CI pipeline logs (SWE.4): Verify unit test execution, coverage
### 2. Interview Team (1 hour)
Assessor asks:
- "How do you ensure traceability between requirements and code?"
- "Walk me through your code review process"
- "What happens if a unit test fails in CI?"
### 3. Feedback Session (1 hour)
Assessor provides:
- [PASS] Strengths: "Traceability is excellent (Jira links in commits)"
- [WARN] Gaps: "ADRs missing rationale section (SWE.2 BP1)"
- [TOOL] Recommendations: "Add 'Alternatives Considered' to ADR template"
## Outcome
- **Go/No-Go decision**: Continue to Month 3, or pause and fix gaps
- **Adjustments**: Update templates, provide additional training
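The gap the assessor flags above (ADRs missing a rationale section) is easy to catch automatically between check-ins. A sketch that scans ADR text for required headings (the heading names assume a common ADR template, not a mandated format):

```python
# Assumed ADR template sections; adapt to your organization's template
REQUIRED_SECTIONS = ["## Context", "## Decision", "## Rationale", "## Alternatives Considered"]

def missing_adr_sections(adr_text: str) -> list:
    """Return required section headings absent from an ADR document."""
    return [s for s in REQUIRED_SECTIONS if s not in adr_text]

adr = """# ADR-001: Sensor Fusion Approach
## Context
Two ultrasonic sensors with overlapping fields of view.
## Decision
Weighted-average fusion with plausibility checks.
"""
print(missing_adr_sections(adr))  # ['## Rationale', '## Alternatives Considered']
```

Run over the ADR directory in CI, this turns the assessor's "add Alternatives Considered" recommendation into a standing check instead of a one-time fix.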
Common Pilot Pitfalls (and How to Avoid Them)
| Pitfall | Symptom | Root Cause | Prevention |
|---|---|---|---|
| Scope Creep | Pilot never finishes | No change control | Strict SUP.10 process, Product Owner says "no" |
| Perfectionism | Team spends 3 days on 1 ADR | Misunderstanding "sufficient documentation" | Show examples, "Done is better than perfect" |
| Tool Obsession | Team customizes Jira for 2 weeks | Thinking tools solve process problems | Use templates, minimal configuration |
| Isolation | Pilot team disconnected from org | Stealth mode, no communication | Monthly demos to broader org |
| Burnout | Team works weekends | Unrealistic timeline | Protected capacity (no other projects during pilot) |
Pilot Success Celebration
Week 17: Demo Day (2 hours, All-Hands)
Agenda:
# Pilot Demo Day (Organization-Wide)
## 1. Executive Introduction (10 minutes)
- Sponsor: "4 months ago, we started ASPICE pilot. Today, we're CL2."
## 2. Pilot Team Presentation (60 minutes)
- **What We Built**: Live demo of Parking Assist feature
- **How We Did It**: Show Jira → Code → Tests → Evidence flow
- **What We Learned**: Top 3 insights
1. "Code reviews found 40% more bugs than we expected"
2. "Traceability saved us when requirements changed"
3. "CI/CD made testing actually fun"
## 3. Assessor Report (20 minutes)
- External assessor presents CL2 rating
- Highlights: "Best traceability I've seen in 50 assessments"
## 4. Q&A (20 minutes)
- Other teams ask: "How much extra work was it?" (Answer: "15% first month, 5% steady-state")
## 5. Next Steps (10 minutes)
- ASPICE Program Manager: "We're rolling out to 3 more teams next quarter"
- Call for volunteers
## 6. Celebration 🎉
- Team lunch, certificate of completion, bonus recognition
Summary
Pilot Project Selection Checklist:
- [PASS] Team: 3-8 developers, volunteers, senior mentorship
- [PASS] Duration: 3-6 months
- [PASS] Complexity: Moderate (exercises all SWE processes, but not mission-critical)
- [PASS] ASIL: QM or ASIL-A (lower risk)
- [PASS] Visibility: Moderate (management aware, not C-level pressure)
- [PASS] Budget: $50-100k (tools, training, assessor)
Pilot Execution:
- Charter: Formal scope, success criteria, timeline (4 months typical)
- Kickoff: Align team on "why" before "how"
- Mid-Pilot Review: Assessor checkpoint at Month 2 (course correction)
- Evidence Collection: Continuous (not scrambling at end)
- Lessons Learned: 20-page report for rollout phase
- Celebration: Org-wide demo, recognize team, build momentum
Next: Scale from pilot to rollout (23.02 Rollout Strategy).