7.3: Continuous Readiness

Introduction

The worst ASPICE approach: 11 months of chaos, then 1 month of panic documentation before assessment. The best approach: continuous readiness—processes are ASPICE-compliant every day, so assessments are non-events. This section shows how to maintain permanent assessment readiness without heroic effort.


The Continuous Readiness Mindset

Before vs. After

| Traditional Approach (BAD) | Continuous Readiness (GOOD) |
|----------------------------|-----------------------------|
| Month 1-11: Ignore ASPICE, focus on features | Every Sprint: Follow ASPICE processes daily |
| Month 12: Scramble to create evidence | Every Sprint: Evidence auto-generated by CI/CD |
| Assessment Week: Panic, all-hands effort | Assessment Week: Business as usual |
| Post-Assessment: Abandon ASPICE until next year | Post-Assessment: Continuous improvement (CL3) |
| Result: CL1 (barely pass), team burnout | Result: CL2/CL3, sustainable practice |

Key Shift: ASPICE is not a "project" with start/end dates—it's how we work.


Continuous Readiness Framework

Four Pillars

The following diagram presents the four pillars of continuous ASPICE readiness: automated evidence generation, real-time compliance monitoring, quarterly self-assessments, and continuous process improvement.

[Diagram: the four pillars, anchored by automated evidence collection]


Pillar 1: Automated Evidence Generation

CI/CD as Evidence Factory

Goal: Work products generated as a side effect of normal development (zero manual effort).

Implementation:

# .github/workflows/aspice-evidence-generation.yml
name: ASPICE Evidence Generation (Continuous)

on:
  push:
    branches: [main, develop]
  schedule:
    - cron: '0 2 * * 0'  # Weekly Sunday 2 AM

jobs:
  generate-evidence:
    runs-on: ubuntu-latest

    steps:
      - name: Check out repository
        uses: actions/checkout@v4
        with:
          fetch-depth: 0  # Full history needed for git log and traceability scan

      # SWE.1: Requirements Traceability Matrix
      - name: Generate Traceability Matrix
        run: |
          python scripts/generate_traceability_matrix.py \
            --jira-project PARKING_ASSIST \
            --git-repo . \
            --output evidence/SWE.1/traceability_matrix_$(date +%Y%m%d).xlsx

      # SWE.3: Git Commit Log (with requirement IDs)
      - name: Export Git Log
        run: |
          git log --oneline --decorate --all --since="6 months ago" \
            > evidence/SWE.3/git_log_$(date +%Y%m%d).txt

      # SWE.4: Code Coverage Report
      - name: Generate Coverage Report
        run: |
          pytest --cov=src --cov-report=html --cov-report=json
          cp coverage.json evidence/SWE.4/coverage_$(date +%Y%m%d).json
          zip -r evidence/SWE.4/coverage_html_$(date +%Y%m%d).zip htmlcov/

      # SWE.4: Test Execution Logs
      - name: Export Test Results
        run: |
          cp test_results.xml evidence/SWE.4/test_results_$(date +%Y%m%d).xml

      # SWE.3: MISRA Compliance Report
      - name: Generate MISRA Report
        run: |
          cppcheck --addon=misra --xml --output-file=evidence/SWE.3/misra_$(date +%Y%m%d).xml src/

      # Upload to S3 (long-term storage)
      - name: Archive Evidence to S3
        run: |
          aws s3 sync evidence/ s3://aspice-evidence-prod/parking-assist/$(date +%Y%m%d)/
        env:
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}

      # Slack notification
      - name: Notify Team
        run: |
          SNAPSHOT=$(date +%Y%m%d)
          curl -X POST ${{ secrets.SLACK_WEBHOOK }} \
            -H 'Content-Type: application/json' \
            -d "{\"text\":\"[PASS] ASPICE evidence generated for this week: https://s3.../${SNAPSHOT}/\"}"

Result: Evidence archive grows automatically every week (52 snapshots/year).


Evidence Retention Policy

## ASPICE Evidence Retention (SUP.8 BP4)

**Retention Duration**: 3 years (OEM contract requirement)

**Storage**:
- **Active Projects** (< 6 months old): AWS S3 Standard
- **Completed Projects** (6 months - 3 years): AWS S3 Glacier (lower cost)
- **Archived Projects** (> 3 years): Delete (unless legal hold)

**Folder Structure**:

s3://aspice-evidence-prod/
├── parking_assist/
│   ├── 2025-01-07/   # Weekly snapshot
│   ├── 2025-01-14/
│   ├── 2025-01-21/
│   └── ...
├── lane_departure/
│   ├── 2025-02-01/
│   └── ...
└── adaptive_cruise/
    └── ...


**Access Control**:
- **Read**: ASPICE Program Manager, External Assessor (time-limited credentials)
- **Write**: CI/CD pipeline (service account)
- **Delete**: ASPICE Program Manager only (with approval trail)

**Audit Trail**: S3 versioning enabled (track who accessed/modified evidence).
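The tiering rules above can be encoded directly as an S3 lifecycle configuration so transitions happen without manual intervention. A minimal sketch using boto3 (the day counts approximate the stated 6-month/3-year policy, and the bucket name comes from the folder structure above):

```python
def build_retention_rules() -> dict:
    """S3 lifecycle rules mirroring the retention policy:
    Standard for ~6 months, Glacier until 3 years, then expire."""
    return {
        "Rules": [{
            "ID": "aspice-evidence-retention",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},  # Apply to every project/snapshot prefix
            "Transitions": [
                {"Days": 180, "StorageClass": "GLACIER"}  # ~6 months in Standard
            ],
            "Expiration": {"Days": 1095},  # Delete after 3 years (legal holds override)
        }]
    }

def apply_retention_policy(bucket: str = "aspice-evidence-prod") -> None:
    """Push the lifecycle configuration to the evidence bucket."""
    import boto3  # AWS SDK; credentials come from the environment
    s3 = boto3.client("s3")
    s3.put_bucket_lifecycle_configuration(
        Bucket=bucket,
        LifecycleConfiguration=build_retention_rules(),
    )
```

Applying the policy once per bucket is enough; S3 then enforces the transitions on every new weekly snapshot automatically.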

Pillar 2: Real-Time Compliance Monitoring

ASPICE Compliance Dashboard (Always On)

Tool Stack: Grafana + InfluxDB + Python collectors

Dashboard Sections:

## ASPICE Compliance Dashboard (Grafana)

### Section 1: Certification Status
**Metric**: % of teams CL2 certified
**Update Frequency**: Weekly (after pre-assessments)
**Alert**: If coverage drops below 90% (email ASPICE Program Manager)

**Visualization**: Gauge (0-100%)
```grafana
{
  "type": "gauge",
  "title": "CL2 Certification Coverage",
  "targets": [
    {
      "measurement": "certification_coverage",
      "field": "percent"
    }
  ],
  "thresholds": [
    { "value": 0, "color": "red" },
    { "value": 90, "color": "yellow" },
    { "value": 95, "color": "green" }
  ]
}
```

### Section 2: Traceability Coverage

**Metric**: % of commits with requirement IDs
**Update Frequency**: Daily (automated Git scan)
**Alert**: If below 95% for 3 consecutive days

**Visualization**: Time series graph (last 30 days)

# Daily traceability collector (run via cron)
import re

import git  # GitPython
from influxdb import InfluxDBClient

client = InfluxDBClient(host='localhost', port=8086, database='aspice')

def collect_traceability_metric():
    """Compute % of recent commits carrying a requirement ID."""
    repo = git.Repo('/path/to/repo')
    commits_last_30_days = list(repo.iter_commits(since='30 days ago'))

    total = len(commits_last_30_days)
    if total == 0:
        return  # Nothing to measure this period
    traced = sum(1 for c in commits_last_30_days
                 if re.search(r'\[(SWE|SYS)-\d+\]', c.message))

    coverage = (traced / total) * 100

    # Push to InfluxDB
    client.write_points([{
        "measurement": "traceability_coverage",
        "tags": {"project": "parking_assist"},
        "fields": {"percent": coverage}
    }])

    # Alert if below threshold (send_alert: project-specific notifier, e.g. Slack webhook)
    if coverage < 95:
        send_alert(f"[WARN] Traceability coverage: {coverage:.1f}% (target: 95%)")

### Section 3: Code Coverage Trend

**Metric**: Branch coverage % (6-month trend)
**Update Frequency**: Every PR merge
**Alert**: If coverage decreases >5% in 1 week

**Visualization**: Line chart with threshold line at 80%


### Section 4: Upcoming Assessments

**Metric**: Days until next assessment
**Update Frequency**: Static (manual entry)
**Alert**: 30 days before assessment (reminder to review evidence)

**Visualization**: Countdown timer

┌────────────────────────────┐
│ Next ASPICE Assessment     │
│                            │
│        42 Days             │
│   (June 15, 2026)          │
│                            │
│ Status: [PASS] On Track    │
└────────────────────────────┘
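The countdown and the 30-day reminder are simple date arithmetic; a minimal sketch (the reminder wiring to email/Slack is left out):

```python
from datetime import date

def assessment_countdown(assessment_date: date, today: date) -> tuple[int, bool]:
    """Return (days_remaining, reminder_due).
    reminder_due becomes True once the assessment is 30 days or less away."""
    days_remaining = (assessment_date - today).days
    return days_remaining, 0 <= days_remaining <= 30
```

For example, with the assessment date shown above (June 15, 2026), the panel reads 42 days when run on May 4, 2026, and the reminder fires from May 16 onward.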

Pillar 3: Quarterly Self-Assessments

Internal Audit Program

Frequency: Every 3 months (4 times/year)

Process:

## Quarterly Self-Assessment Process

### Week 1: Preparation
- [ ] Select 1 project for audit (rotate projects quarterly)
- [ ] Assign internal auditor (ASPICE-trained team member, NOT from project team)
- [ ] Schedule 2-day audit (blocked calendars)

---

### Week 2: Audit Execution (2 days)

**Day 1: Document Review (8 hours)**
- Auditor reviews work products:
  - SWE.1: 10 sample User Stories
  - SWE.2: 3 ADRs
  - SWE.3: 5 PRs (code reviews)
  - SWE.4: Coverage report + unit tests
  - SWE.5: Integration test results
  - SWE.6: Acceptance test results
  - SUP.8: Traceability matrix

**Day 2: Interviews (4 hours) + Findings (4 hours)**
- Interview 5 team members:
  - "How do you ensure traceability?"
  - "Walk me through your code review process"
  - "What happens if a test fails in CI?"

- Auditor documents findings:
  - **Strengths**: What's working well
  - **Gaps**: What needs improvement
  - **Rating**: CL0, CL1, CL2, or CL3 (per process)

---

### Week 3: Corrective Actions
- Team Lead reviews findings with team
- Create Jira tickets for gaps (priority: Critical → High → Medium)
- Assign owners, set deadlines (fix before next self-assessment)

**Example Gap**:

**Finding**: 3 out of 10 User Stories missing traceability to parent Epic
**Severity**: High (blocks CL2)
**Corrective Action**: [PROC-55] "Backfill Epic links for 3 stories"
**Owner**: Alice
**Deadline**: 2 weeks


---

### Week 4: Management Review
- ASPICE Program Manager presents findings to leadership
- Metrics: "This quarter: 92% CL2 coverage (up from 85% last quarter)"
- Action plan: "3 critical gaps, all assigned, resolving in 2 weeks"

Outcome: No surprises when formal assessment arrives (practice makes perfect).


Pillar 4: Continuous Process Improvement

Retrospectives as Process Improvement Engine

Frequency: Bi-weekly (at the end of each sprint)

Agenda:

# Sprint Retrospective (ASPICE SUP.1 BP7 Focus)

**Duration**: 60 minutes
**Attendees**: Development Team (6-8 people)

## Part 1: ASPICE Process Review (30 min)

**Question 1**: Which ASPICE processes worked well this sprint?
**Example Answer**:
- [PASS] "Code reviews were fast (18-hour avg turnaround) thanks to 2-approval rule"
- [PASS] "Traceability was perfect (100% commits had Jira IDs) - pre-commit hook works!"

**Question 2**: Which ASPICE processes caused friction?
**Example Answer**:
- [WARN] "ADR writing took 3 hours (too long). Template is verbose."
- [WARN] "Integration tests flaky on HIL bench (failed 2 times, had to re-run)"

---

## Part 2: Process Improvements (20 min)

**Action Items** (specific, measurable):

| Issue | Process | Proposed Improvement | Owner | Deadline |
|-------|---------|---------------------|-------|----------|
| ADR template too long | SWE.2 BP1 | Simplify ADR template (remove "Alternatives" section if only 1 option) | Bob (Architect) | Next sprint |
| Flaky HIL tests | SWE.5 BP3 | Debug HIL bench network timeout issue | Diana (QA) | 2 weeks |

---

## Part 3: Track Previous Actions (10 min)

**Review Last Sprint's Action Items**:
- [PROC-48] "Add Jira link checker to CI" → [PASS] DONE (100% compliance now)
- [PROC-49] "Train team on MISRA rules" → ⏳ IN PROGRESS (50% trained)

**Result**: Continuous refinement (every 2 weeks, processes get better).
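The hook credited in Part 1 can be sketched as a Git `commit-msg` hook; this is an assumed implementation using the same requirement-ID pattern as the daily traceability collector:

```python
#!/usr/bin/env python3
# Sketch of a commit-msg hook that rejects commits lacking a requirement ID.
# Install by writing this file to .git/hooks/commit-msg and making it
# executable; Git passes the message file path as argv[1].
import re
import sys

REQ_ID = re.compile(r"\[(SWE|SYS)-\d+\]")

def check_message(message: str) -> bool:
    """Accept merge commits; otherwise require a tag like [SWE-142]."""
    if message.startswith("Merge"):
        return True
    return bool(REQ_ID.search(message))

if __name__ == "__main__" and len(sys.argv) > 1:
    with open(sys.argv[1], encoding="utf-8") as f:
        if not check_message(f.read()):
            print("[FAIL] Commit message missing requirement ID, e.g. [SWE-142]")
            sys.exit(1)
```

Enforcing the ID at commit time is what keeps the traceability metric at 100% without any backfilling.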

Process Improvement Tracking

# Track process improvements over time
from datetime import datetime

class ProcessImprovementTracker:
    """Monitor process evolution (path to CL3)"""

    def __init__(self, db):
        self.db = db  # Any store exposing insert()/get()

    def record_improvement(self, improvement: dict):
        """Log process change from retrospective"""
        self.db.insert({
            "date": datetime.now(),
            "process": improvement["aspice_process"],  # e.g., "SWE.2 BP1"
            "issue": improvement["problem"],
            "solution": improvement["fix"],
            "metric_before": improvement["baseline"],
            "metric_after": None,  # Measure later
            "status": "Implemented"
        })

    def measure_impact(self, improvement_id: str, metric_after: float):
        """Measure improvement effectiveness"""
        improvement = self.db.get(improvement_id)
        improvement["metric_after"] = metric_after
        improvement["impact_percent"] = (
            (metric_after - improvement["metric_before"]) /
            improvement["metric_before"]
        ) * 100

        print(f"Impact: {improvement['impact_percent']:.1f}% improvement")

# Example
tracker = ProcessImprovementTracker(db)

# Sprint 20: Issue identified
tracker.record_improvement({
    "aspice_process": "SWE.3 BP7",
    "problem": "Code review turnaround time: 3 days avg",
    "fix": "Set expectation: reviews within 24 hours",
    "baseline": 3.0  # days
})

# Sprint 22: Measure impact (negative = reduction, i.e. faster reviews)
tracker.measure_impact(improvement_id="PROC-50", metric_after=1.2)
# Output: "Impact: -60.0% improvement" (3 days → 1.2 days)

Continuous Readiness Checklist

Monthly Health Check

## ASPICE Continuous Readiness Checklist (Monthly)

**Run this checklist on the 1st of each month**

### Evidence Generation
- [ ] CI/CD pipeline generating evidence weekly (check last 4 weeks)
- [ ] S3 evidence archive has 4 new folders (one per week)
- [ ] No broken evidence collection jobs (check CI logs)

### Compliance Metrics
- [ ] Traceability coverage ≥95% (check dashboard)
- [ ] Code coverage ≥80% for all ASIL-B modules (check SonarQube)
- [ ] MISRA compliance: 0 critical violations (check cppcheck reports)

### Team Competency
- [ ] All new hires completed ASPICE training within 1 month
- [ ] No team members overdue for annual refresher (check LMS)

### Process Updates
- [ ] Process documentation reviewed (no outdated procedures >6 months old)
- [ ] Tool versions up-to-date (Jira, GitHub, CI/CD plugins)

### Assessment Readiness
- [ ] Next assessment date confirmed (calendar entry)
- [ ] Assessor contracted (if formal assessment within 6 months)
- [ ] Evidence package from last month reviewed (spot-check 3 work products)

---

**Action**: If ANY item unchecked, create Jira ticket and assign owner.
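The metric checks above can be run programmatically, emitting one ticket stub per failed item. A sketch under the checklist's stated thresholds (the metric sources and ticket format are assumptions):

```python
def run_health_check(metrics: dict) -> list[str]:
    """metrics: {'traceability': percent, 'coverage': percent,
    'misra_critical': violation count}. Returns ticket stubs for failures."""
    checks = [
        ("Traceability coverage >= 95%", metrics["traceability"] >= 95),
        ("Code coverage >= 80% (ASIL-B modules)", metrics["coverage"] >= 80),
        ("MISRA critical violations == 0", metrics["misra_critical"] == 0),
    ]
    # One actionable stub per unchecked item, ready to paste into Jira
    return [f"[TICKET] Fix: {name}" for name, ok in checks if not ok]
```

Wiring this to the dashboard's data sources turns the monthly checklist from a manual ritual into a five-minute review of auto-generated tickets.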

From Continuous Readiness to CL3

The Path to Process Improvement (Capability Level 3)

CL2 vs CL3 Difference:

  • CL2 (Managed Process): Processes are defined, followed, evidence exists
  • CL3 (Established Process): Processes are institutionalized + continuously improved

How to Achieve CL3 (after 12-18 months at CL2):

## CL3 Readiness Criteria

### Institutionalization (Required for CL3)
- [ ] **Process Assets Library**: Centralized repository of all process descriptions, templates, checklists (Confluence)
- [ ] **Standard Process**: Organization-wide ASPICE process (not project-specific)
- [ ] **Tailoring Guidelines**: Documented rules for adapting process to project context (e.g., "ASIL-D projects require MC/DC coverage")

### Continuous Improvement (Required for CL3)
- [ ] **Improvement Database**: Track all process changes from retrospectives (24+ months of data)
- [ ] **Metrics Program**: Measure process effectiveness (e.g., "Code review time decreased 60% since adopting checklist")
- [ ] **Lessons Learned**: Post-project reviews feed into process updates (close the loop)

### Evidence of Improvement
- [ ] **Before/After Metrics**: Show quantitative improvement (e.g., defect density: 3.5 → 1.8 defects/KLOC)
- [ ] **Process Evolution Log**: Changelog of process updates (version control for processes)
- [ ] **Benchmarking**: Compare your processes to industry best practices (e.g., "Our code review turnaround is top quartile")

---

**Timeline**: CL2 → CL3 typically takes 12-18 months of sustained improvement.

**ROI**: CL3 unlocks premium OEM contracts (some OEMs require CL3 from Tier-1 suppliers for safety-critical systems).

Summary

Continuous Readiness = 4 Pillars:

  1. Automated Evidence Generation: CI/CD creates work products weekly (zero manual effort)
  2. Real-Time Monitoring: Dashboards show compliance 24/7 (alerts prevent drift)
  3. Quarterly Self-Assessments: Internal audits every 3 months (practice for real assessment)
  4. Continuous Improvement: Retrospectives drive process refinement (path to CL3)

Benefits:

  • No Assessment Panic: Evidence exists year-round, ready for audit anytime
  • Sustainable Practice: ASPICE is embedded in daily work (not a "project")
  • Continuous Improvement: Metrics show progress (motivates team)
  • CL3 Path: Institutionalized improvement unlocks advanced capability level

Key Metric: Days to Assessment Readiness

  • Traditional approach: 30 days (scramble to create evidence)
  • Continuous readiness: 0 days (always ready)

Part IV Complete: You now have a comprehensive ASPICE implementation playbook (Chapters 19-24).

Next: Part V - Future Trends and AI Integration (where ASPICE meets cutting-edge AI/ML development).