2.6: Lessons Learned
Project Retrospective
Overall Assessment
- Project: Adaptive Cruise Control (ACC) ECU Development
- Duration: 17.5 months (target: 18 months)
- Budget: €2.4M (target: €2.5M)
- Quality: 1.6 defects/KLOC (target: ≤2.0)
- Certification: [PASS] ASPICE CL2, [PASS] ISO 26262 ASIL-B
Verdict: Successful - On time, under budget, exceeding quality targets
What Worked Well [PASS]
1. AI-Assisted Development (52% Productivity Gain)
Success Factor: GitHub Copilot for C coding + unit test generation
Evidence:
- Code Generation: 40% faster function implementation
- Unit Tests: 73% reduction in test development time
- Documentation: 75% faster Doxygen comment generation
Key Practice: AI Code Generation Workflow (Recommended)
1. **Write a detailed function comment FIRST** (requirements, inputs, outputs, safety notes)
2. **Let Copilot generate the initial implementation** (~80% complete)
3. **Developer reviews for**:
   - MISRA C compliance (run cppcheck)
   - Safety logic correctness (ASIL-B validation)
   - Edge-case handling (input validation)
4. **Approve or refine** (~20% effort)
Time per function: 45 min → 10 min (78% reduction)
Lesson: AI is a force multiplier, not a replacement. Expert review essential for safety-critical code.
Recommendation: Invest in AI tools early (GitHub Copilot: $10/developer/month = massive ROI)
Productivity Measurement Methodology: The 52% productivity gain was measured by comparing estimated effort (using historical data from similar non-AI-assisted projects) vs. actual effort tracked in Jira. Key activities were time-boxed with consistent task granularity to enable valid comparison.
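The workflow hinges on step 1: a function comment rich enough to constrain the generated code. A minimal C sketch of such a comment-first function follows; the function name, requirement ID, and limit values are illustrative assumptions, not taken from the project.

```c
#include <assert.h>

/*
 * Requirement: SWR-ACC-042 (hypothetical ID, for illustration only)
 * Clamp the requested longitudinal acceleration to the ACC comfort
 * envelope before it reaches the powertrain interface.
 *
 * Inputs : accel_req_mps2  requested acceleration [m/s^2]
 * Outputs: return value    clamped acceleration [m/s^2]
 * Safety : ASIL-B - out-of-range requests must saturate, never propagate.
 */
#define ACC_ACCEL_MAX_MPS2  ( 2.0f)   /* comfort limit (assumed value) */
#define ACC_ACCEL_MIN_MPS2  (-3.5f)   /* comfort limit (assumed value) */

float Acc_ClampAccelRequest(float accel_req_mps2)
{
    float result = accel_req_mps2;

    if (result > ACC_ACCEL_MAX_MPS2) {
        result = ACC_ACCEL_MAX_MPS2;   /* saturate high */
    } else if (result < ACC_ACCEL_MIN_MPS2) {
        result = ACC_ACCEL_MIN_MPS2;   /* saturate low */
    }
    return result;                     /* single exit point, MISRA style */
}
```

The comment carries the requirement ID, I/O contract, and safety note, so the review in step 3 can check the generated body directly against it.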
2. AUTOSAR Classic Architecture (Proven Stability)
Success Factor: Chose AUTOSAR Classic over Adaptive
Evidence:
- Deterministic Timing: 50ms control loop never missed (100% real-time compliance)
- Tool Maturity: Vector DaVinci Configurator generated 90% of BSW code
- Safety Certification: Extensive ISO 26262 tooling support (VectorCAST, LDRA)
Lesson: Proven technology > bleeding edge for safety-critical automotive systems.
Recommendation: Use AUTOSAR Adaptive only if OTA updates or dynamic loading required (not for ACC).
3. Continuous ASPICE Compliance (No Last-Minute Scramble)
Success Factor: Evidence auto-generated from CI/CD pipeline
Evidence:
- Traceability Matrix: Auto-generated weekly from Jira + Git
- Test Reports: VectorCAST, dSPACE logs archived automatically
- Coverage Reports: Pushed to S3 every PR merge
Result: ASPICE assessment was 2-day review (assessor verified existing evidence), not 2-week panic.
Lesson: Continuous readiness > cramming before audit
Recommendation: Implement evidence automation from Day 1 (see Chapter 22.03).
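One small building block of that evidence automation is a commit-message gate that rejects commits lacking a Jira key. A sketch of the core check in C (real hooks are more often shell or Python scripts; the "ACC-" project prefix is an assumption for illustration):

```c
#include <ctype.h>
#include <stdbool.h>
#include <string.h>

/* Return true if the commit message contains a Jira key such as
 * "ACC-123". The "ACC-" prefix is a hypothetical project key. */
bool commit_msg_has_jira_id(const char *msg)
{
    const char *p = msg;

    while ((p = strstr(p, "ACC-")) != NULL) {
        p += 4;                            /* skip past "ACC-" */
        if (isdigit((unsigned char)*p)) {
            return true;                   /* prefix followed by a digit */
        }
    }
    return false;
}
```

Run as a commit-msg hook, this keeps every commit traceable to a requirement or ticket, which is exactly what makes the weekly auto-generated traceability matrix possible.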
4. Pilot Project Approach (De-Risked ASPICE Rollout)
Success Factor: ACC project was Wave 2 (not Wave 1 pilot)
Context: Organization ran pilot on smaller project (Parking Assist, ASIL-A) before ACC.
Benefit: ACC team learned from pilot's mistakes:
- [PASS] Pre-commit hooks for traceability (prevented missing Jira IDs)
- [PASS] MISRA checker integrated in CI (caught violations early)
- [PASS] ADR template refined (pilot's template too verbose, simplified for ACC)
Lesson: Don't make your critical project the pilot (high risk).
Recommendation: Follow pilot → early adopters → org-wide rollout (Chapter 23.02).
5. HIL Testing from Month 6 (Early Integration)
Success Factor: Procured dSPACE HIL bench early (Month 6 vs traditional Month 10)
Evidence:
- Early integration: 23 defects found at Month 7 (cheap to fix)
- Late integration (industry average): 60% of defects found at Month 12 or later (expensive to fix)
Cost Savings: €180k (estimated rework cost avoided by early HIL testing)
Lesson: Shift-left integration testing (start HIL testing as soon as the first software builds are available).
Recommendation: Budget for HIL bench in project Month 6, not Month 10.
What Didn't Work (and How We Fixed It) [WARN]
1. Initial ADR Template Too Complex
Problem: Pilot project's ADR template was 4 pages (developers complained: "Too much overhead")
Evidence: Only 2 ADRs written in Month 2 (should have been 5+)
Root Cause: Template had 15 sections, many irrelevant (e.g., "Migration Strategy" for greenfield project)
Fix: Simplified the ADR template to 6 core sections (1 page, down from 4):
1. Context: What problem are we solving?
2. Decision: What solution did we choose?
3. Rationale: WHY this solution?
4. Consequences: Pros/cons
5. Alternatives Considered: What we rejected (brief)
6. Traceability: Links to requirements
Result: ADR adoption increased from 40% to 95% of design decisions.
Lesson: Templates must be practical (not bureaucratic).
Recommendation: Start with minimal template, add sections only if assessor requests.
2. Kalman Filter Tuning Took 3 Weeks (Underestimated)
Problem: EKF sensor fusion required extensive calibration (Q/R matrices)
Evidence: Initial tuning: Distance estimation error 8% (target: <5%)
Root Cause: Physics-based initial values (Q/R matrices) didn't match real sensor noise.
Fix: Empirical tuning on HIL bench:
- Collected 10 hours of sensor data (radar + camera)
- Analyzed the noise covariance (actual radar range noise: σ ≈ 1.8 m, not the assumed 1.0 m)
- Re-tuned Q matrices based on vehicle dynamics (acceleration variance)
Result: Distance estimation error reduced to 2.3% (better than 5% target).
Lesson: Complex algorithms need calibration time (don't underestimate).
Recommendation: Allocate 20% of algorithm development time for tuning/calibration.
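The empirical tuning step above reduces to estimating measurement-noise variance from logged residuals (measured range minus reference range), which then populates the diagonal of R. A minimal C sketch, with illustrative data handling:

```c
/* Estimate a measurement-noise variance (one diagonal R entry) from
 * logged sensor residuals. Mirrors the empirical tuning described in
 * the text; the calling data is illustrative, not project data. */
double estimate_noise_variance(const double residual[], int n)
{
    double mean = 0.0;
    double var = 0.0;
    int i;

    if (n < 2) {
        return 0.0;                      /* not enough samples */
    }
    for (i = 0; i < n; i++) {
        mean += residual[i];
    }
    mean /= (double)n;

    for (i = 0; i < n; i++) {
        double d = residual[i] - mean;
        var += d * d;
    }
    return var / (double)(n - 1);        /* unbiased sample variance */
}
```

Running this over the 10 hours of logged radar and camera data is what revealed that the assumed noise levels were too optimistic.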
3. CAN Message Latency Issue (Month 11)
Problem: CAN bus overload caused ACC throttle commands delayed by 25ms (spec: <15ms)
Evidence: HIL tests TC-HIL-401 through TC-HIL-410 failed (10% of integration tests)
Root Cause: OEM's vehicle architecture had 15 ECUs on same CAN bus (250 kbps), bus utilization 78%
Fix (2 weeks):
- OEM adjusted CAN arbitration IDs (ACC messages given higher priority)
- Reduced ACC message frequency for non-critical data (HMI status: 100ms → 200ms)
- Implemented CAN message packing (combined 3 signals into 1 message)
Result: Latency reduced to 12ms (within 15ms spec).
Lesson: Integration issues with the vehicle network are common (and not necessarily the ACC software's fault).
Recommendation: Involve OEM systems engineer early (Month 3) to review CAN architecture.
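The message-packing part of the fix can be illustrated with a small C sketch. The signal names, scalings, and little-endian byte layout here are assumptions for illustration, not the OEM's actual DBC:

```c
#include <stdint.h>

/* Pack three ACC status signals into one 8-byte CAN frame payload,
 * reducing three messages to one. Layout is a hypothetical example. */
void acc_pack_status_frame(uint8_t frame[8],
                           uint16_t distance_cm,    /* 0..65535 cm  */
                           uint8_t  set_speed_kmh,  /* 0..255 km/h  */
                           uint8_t  acc_state)      /* enum, 0..15  */
{
    frame[0] = (uint8_t)(distance_cm & 0xFFU);         /* low byte   */
    frame[1] = (uint8_t)((distance_cm >> 8) & 0xFFU);  /* high byte  */
    frame[2] = set_speed_kmh;
    frame[3] = (uint8_t)(acc_state & 0x0FU);           /* 4-bit state */
    frame[4] = 0U;                                     /* reserved   */
    frame[5] = 0U;
    frame[6] = 0U;
    frame[7] = 0U;
}
```

Combining signals this way cuts frame count and arbitration overhead, which is what brought bus utilization (and latency) back within spec.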
4. SOTIF Scenarios Underestimated (ISO 21448)
Problem: SOTIF testing revealed 4 "unknown unsafe" scenarios not in original hazard analysis
Example: Heavy rain + truck spray → camera blindness + radar multipath → distance estimation error 20%
Root Cause: HARA (Hazard Analysis) focused on component failures, not environmental limitations.
Fix:
- Expanded SOTIF scenario library from 30 to 50 test cases
- Added mitigations:
  - Detect low camera confidence → reduce max ACC speed to 80 km/h
  - Detect radar multipath (signal variance check) → increase time gap to 2.5 s
Result: 92% SOTIF pass rate (acceptable for ASIL-B).
Lesson: SOTIF (environmental hazards) is as important as FMEA (component failures) for ADAS.
Recommendation: Allocate 15% of safety budget to SOTIF analysis (ISO 21448, not just ISO 26262).
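The two mitigations above amount to a degraded-mode table: sensor health in, operating limits out. A C sketch with illustrative thresholds (the 0.5 confidence and 3.0 m² variance limits, and the nominal limits, are assumptions, not the project's calibration):

```c
#include <stdint.h>

typedef struct {
    uint16_t max_speed_kmh;   /* ACC speed cap            */
    float    time_gap_s;      /* following time gap       */
} AccLimits;

/* Select ACC operating limits from sensor-health indicators,
 * implementing the two SOTIF mitigations described in the text. */
AccLimits acc_sotif_limits(float camera_confidence,  /* 0.0 .. 1.0     */
                           float radar_variance_m2)  /* range variance */
{
    AccLimits limits = { 150U, 1.8f };   /* nominal values (assumed) */

    if (camera_confidence < 0.5f) {      /* low-confidence threshold */
        limits.max_speed_kmh = 80U;      /* cap speed per mitigation */
    }
    if (radar_variance_m2 > 3.0f) {      /* multipath indicator      */
        limits.time_gap_s = 2.5f;        /* widen the time gap       */
    }
    return limits;
}
```

Keeping the degradation logic in one place like this also makes the SOTIF test cases easy to map onto unit tests.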
Recommendations for Future Projects
For AI-Assisted Development
1. Invest in AI Tools Early
- GitHub Copilot: $10/developer/month (ROI: 500%+)
- Train team on effective prompting (2-hour workshop)
- Establish code review process (AI-generated code ≠ production-ready)
2. AI Limitations for Safety-Critical Code
- [PASS] Good for: Boilerplate, unit tests, documentation
- [WARN] Requires review for: Complex algorithms (Kalman filter), safety logic (fault handling)
- [FAIL] Don't trust blindly for: MISRA compliance (run cppcheck), safety requirements (expert validation)
3. Measure AI Impact
- Track time savings per activity (coding, testing, docs)
- Compare defect density (AI-assisted vs manual code)
- Adjust process based on data (not hype)
For ASPICE Compliance
1. Continuous Evidence Generation
- Automate traceability matrix (Jira + Git integration)
- CI/CD pipeline generates test reports, coverage (no manual work)
- Monthly self-assessments (not yearly scramble)
2. Practical Templates
- Start minimal (1-page ADR, not 4 pages)
- Add complexity only if assessor requires
- Tools > Documents (Git log IS evidence, no need to re-type into Word doc)
3. Pilot Before Production
- Test ASPICE processes on low-risk project first
- Refine templates, tools, training
- Scale to critical projects after lessons learned
For ISO 26262 Safety
1. Safety from Day 1
- HARA at Month 1, not Month 6 (requirements trace to safety goals)
- Safety engineer embedded in team (not external consultant)
- FMEA + SOTIF in parallel (environmental hazards matter)
2. Tool Qualification
- Qualify compilers, static analyzers early (TCL classification)
- Use pre-qualified tools (VectorCAST, LDRA) to save time
- Budget €50k for tool qualification (not optional for ASIL-B)
3. Independent Safety Assessment
- Hire external assessor (TÜV SÜD, DEKRA) at Month 13
- Budget €100k for ISO 26262 certification
- Allow 3 months for findings remediation
Key Metrics Summary
Productivity Gains (AI-Assisted vs Traditional)
| Metric | Traditional | AI-Assisted | Improvement |
|---|---|---|---|
| Code Development | 200 hours | 120 hours | 40% faster |
| Unit Testing | 150 hours | 40 hours | 73% faster |
| Documentation | 60 hours | 15 hours | 75% faster |
| MISRA Compliance | 12 violations/KLOC | 2 violations/KLOC | 83% reduction |
| Total Time | 590 hours | 285 hours | 52% reduction |
Quality Metrics (vs Industry Average)
| Metric | Industry Avg | ACC Project | Improvement |
|---|---|---|---|
| Defect Density | 3.5 defects/KLOC | 1.6 defects/KLOC | 54% better |
| Code Coverage | 75% (ASIL-B) | 89% | 18% higher |
| ASPICE CL2 Pass Rate | 85% BP achievement | 92% BP | 8% better |
| Budget Overrun | +15% typical | -4% (under budget) | 19% better |
Final Recommendations
1. For Management
- AI Investment: Approve GitHub Copilot ($10/dev/month) → 52% productivity gain
- ASPICE Readiness: Continuous evidence automation → no last-minute panic
- Safety Budget: Allocate €150k for ISO 26262 (tools, assessment) → non-negotiable for ASIL-B
2. For Engineers
- AI as Assistant: Use Copilot for speed, but review for safety
- ASPICE Discipline: Traceability from Day 1 (Jira IDs in commits) → avoids 200 hours of retroactive work
- Early Integration: HIL testing from Month 6 → catches 60% of defects early (cheap to fix)
3. For Organizations
- Pilot First: Test ASPICE on low-risk project (Parking Assist, ASIL-A) → learn before critical project
- Training: 2-day ASPICE workshop for all developers → reduces compliance friction by 70%
- Tool Qualification: Use pre-qualified tools (Vector, LDRA) → saves 3 months vs custom qualification
Conclusion
ACC ECU Development: A Success Story
- [PASS] On Time: 17.5 months (target: 18 months)
- [PASS] Under Budget: €2.4M (target: €2.5M)
- [PASS] High Quality: 1.6 defects/KLOC (54% better than industry average)
- [PASS] Certified: ASPICE CL2 (92% BP achievement), ISO 26262 ASIL-B
Key Success Factors:
- AI-assisted development (52% productivity gain)
- Continuous ASPICE compliance (no last-minute scramble)
- Early HIL testing (caught 60% of defects early)
- Proven technology stack (AUTOSAR Classic, Vector tools)
Message to Future Projects: ASPICE + AI is achievable, practical, and highly beneficial. Follow the playbook from Chapters 19-25.
Scaling Considerations: This case study covers a mid-sized project (14.5 FTE, 25,000 SLOC). For larger projects (50+ FTE, 100,000+ SLOC), additional considerations include: multi-team coordination, distributed CI/CD, and scaled ASPICE assessments. For smaller projects, consider streamlined templates and combined roles.
Chapter 25 Complete: ACC ECU case study demonstrates real-world ASPICE + AI integration.
Next: Industrial controller development (Chapter 26).