6.4: Metrics and KPIs

Introduction

"You can't improve what you don't measure." ASPICE without metrics is faith-based development. Metrics prove ASPICE delivers value: defects decrease, velocity stabilizes, customer satisfaction improves. This section defines Key Performance Indicators (KPIs) for ASPICE success, automated collection strategies, and dashboard visualization.


ASPICE Metrics Framework

Three Categories of Metrics

| Category | Purpose | Example Metrics | Collection Frequency |
|---|---|---|---|
| Compliance Metrics | Prove ASPICE processes are followed | % teams CL2 certified, traceability coverage % | Monthly |
| Quality Metrics | Prove ASPICE improves product quality | Defect density, code coverage %, MISRA violations | Per sprint |
| Efficiency Metrics | Prove ASPICE doesn't harm productivity | Velocity (story points/sprint), cycle time, review turnaround | Per sprint |

Goal: Balance compliance, quality, AND efficiency (not just compliance).
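
That balance can be made explicit in reporting. A hypothetical sketch that rolls the three categories into one scorecard status (the category floors and the function name are illustrative, not part of any ASPICE standard):

```python
# Hypothetical balanced-scorecard rollup; floor values are illustrative
def scorecard_status(compliance: float, quality: float, efficiency: float) -> str:
    """Return an overall status only if ALL three categories meet their floor.

    Inputs are each category's percentage of KPIs currently on target.
    A program that is 100% compliant but failing efficiency is still at risk.
    """
    floors = {"compliance": 80, "quality": 80, "efficiency": 70}
    scores = {"compliance": compliance, "quality": quality, "efficiency": efficiency}
    failing = [name for name, score in scores.items() if score < floors[name]]
    if not failing:
        return "On Track"
    return "At Risk: " + ", ".join(sorted(failing))
```

For example, `scorecard_status(100, 90, 50)` returns "At Risk: efficiency" even though compliance is perfect.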


Compliance Metrics

KPI 1: ASPICE Certification Coverage

Definition: Percentage of active teams that have achieved CL2 certification.

Formula:

Certification Coverage (%) = (Teams with CL2 / Total Active Teams) × 100

Target:

  • Pilot Phase (Month 1-4): 5% (1 out of 20 teams)
  • Wave 2 (Month 5-10): 20% (4 out of 20 teams)
  • Wave 3 (Month 11-18): ≥95% (19 out of 20 teams)

Data Source: ASPICE assessment database (track pre-assessment results)

Collection:

# Automated Collection from Assessment Database
class ComplianceMetrics:
    def __init__(self, assessment_db, jira_client):
        self.assessments = assessment_db
        self.jira = jira_client

    def get_certification_coverage(self) -> dict:
        """Calculate CL2 certification coverage"""
        all_teams = self.jira.get_all_teams()  # 20 teams

        cl2_teams = [
            team for team in all_teams
            if self.assessments.get_latest_rating(team.id) == "CL2"
        ]

        coverage = (len(cl2_teams) / len(all_teams)) * 100

        return {
            "total_teams": len(all_teams),
            "cl2_teams": len(cl2_teams),
            "coverage_percent": round(coverage, 1),
            "target": 95,
            # 80% is the interim milestone; 95% is the final Wave 3 target
            "status": "[PASS] On Track" if coverage >= 80 else "[WARN] Behind Target"
        }

# Example output
metrics = ComplianceMetrics(assessment_db, jira)
result = metrics.get_certification_coverage()

print(f"""
Certification Coverage: {result['coverage_percent']}%
  CL2 Teams: {result['cl2_teams']}/{result['total_teams']}
  Target: {result['target']}%
  Status: {result['status']}
""")

Output (Month 15):

Certification Coverage: 80.0%
  CL2 Teams: 16/20
  Target: 95%
  Status: [PASS] On Track

KPI 2: Requirements Traceability Coverage

Definition: Percentage of code commits that reference a requirement ID.

Formula:

Traceability Coverage (%) = (Commits with Req ID / Total Commits) × 100

Target: ≥95% (allows 5% for emergency hotfixes)

Data Source: Git log analysis

Collection:

#!/bin/bash
# Script: calculate_traceability_coverage.sh
# ASPICE SUP.8 BP5: Ensure bidirectional traceability

# Get commits from last sprint (last 2 weeks)
COMMITS=$(git log --since="2 weeks ago" --oneline)

# Guard against an empty log (wc -l on an empty echo would report 1)
if [ -z "$COMMITS" ]; then
  echo "No commits in the last 2 weeks - nothing to measure"
  exit 0
fi

# Count total commits
TOTAL_COMMITS=$(echo "$COMMITS" | wc -l)

# Count commits with requirement ID (pattern: [SWE-123] or [SYS-456])
# Note: grep -c prints "0" itself on no match; '|| true' only swallows the
# nonzero exit status (a bare '|| echo 0' would emit a second "0")
TRACED_COMMITS=$(echo "$COMMITS" | grep -cE '\[(SWE|SYS|TC)-[0-9]+\]' || true)

# Calculate coverage
COVERAGE=$(awk "BEGIN {printf \"%.1f\", ($TRACED_COMMITS / $TOTAL_COMMITS) * 100}")

echo "Traceability Coverage: ${COVERAGE}%"
echo "  Traced Commits: $TRACED_COMMITS / $TOTAL_COMMITS"

# Alert if below threshold
if (( $(echo "$COVERAGE < 95" | bc -l) )); then
  echo "[WARN] WARNING: Traceability below 95% threshold"
  echo "Missing requirement IDs in $((TOTAL_COMMITS - TRACED_COMMITS)) commits"
fi

Automated Enforcement: Add pre-commit hook (see 22.03) to reject commits without requirement IDs.
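
One way to sketch that hook: a `commit-msg` hook (the variant that sees the commit message) which rejects any message lacking a requirement ID. The ID pattern mirrors the grep above; the hook path and error text are assumptions, not a prescribed implementation:

```python
#!/usr/bin/env python3
# Sketch of a commit-msg hook (installed as .git/hooks/commit-msg) that
# rejects commits whose message lacks a requirement ID.
import re
import sys

# Same pattern as the traceability script: [SWE-123], [SYS-456], [TC-7]
REQ_ID = re.compile(r"\[(SWE|SYS|TC)-[0-9]+\]")

def has_requirement_id(message: str) -> bool:
    """True if the commit message references at least one requirement ID."""
    return bool(REQ_ID.search(message))

if __name__ == "__main__":
    # Git passes the path to the commit-message file as argv[1]
    msg = open(sys.argv[1], encoding="utf-8").read()
    if not has_requirement_id(msg):
        sys.stderr.write("[FAIL] Commit rejected: no requirement ID "
                         "(expected e.g. [SWE-123])\n")
        sys.exit(1)
```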


KPI 3: Work Product Completeness

Definition: Percentage of required ASPICE work products present for each project.

Formula:

Work Product Completeness (%) = (Work Products Present / Work Products Required) × 100

Required Work Products (per project):

  • SWE.1: ≥10 User Stories (requirements specification)
  • SWE.2: ≥2 ADRs (architecture decisions)
  • SWE.3: Code in Git (source code)
  • SWE.4: Unit test coverage report (≥80%)
  • SWE.5: Integration test results
  • SWE.6: Acceptance test results
  • SUP.8: Traceability matrix

Data Source: Automated scan of project repositories

Collection:

# Automated Work Product Checker
class WorkProductCompleteness:
    """Verify all required ASPICE work products exist"""

    REQUIRED_WORK_PRODUCTS = {
        "SWE.1": ["jira_stories.json"],  # Export from Jira
        "SWE.2": ["docs/architecture/ADR-*.md"],  # At least 2 ADRs
        "SWE.3": ["src/**/*.c", "src/**/*.h"],  # Source code
        "SWE.4": ["coverage_report.html"],  # Coverage ≥80%
        "SWE.5": ["test_results/integration/*.xml"],
        "SWE.6": ["test_results/acceptance/*.html"],
        "SUP.8": ["traceability_matrix.md"]
    }

    def check_project(self, project_path: str) -> dict:
        """Check if project has all required work products.

        Note: this is a presence check only; minimum counts (e.g. >=2 ADRs,
        >=80% coverage) must be verified by the per-KPI checks.
        """
        import glob
        import os

        results = {}
        for process, patterns in self.REQUIRED_WORK_PRODUCTS.items():
            found = any(
                # recursive=True is required for '**' to match nested dirs
                glob.glob(os.path.join(project_path, pattern), recursive=True)
                for pattern in patterns
            )
            results[process] = "[PASS] Present" if found else "[FAIL] Missing"

        total_required = len(self.REQUIRED_WORK_PRODUCTS)
        total_present = sum(1 for status in results.values() if "Present" in status)
        completeness = (total_present / total_required) * 100

        return {
            "project": project_path,
            "work_products": results,
            "completeness_percent": round(completeness, 1),
            "status": "[PASS] Complete" if completeness == 100 else "[WARN] Incomplete"
        }

# Example usage
checker = WorkProductCompleteness()
result = checker.check_project("/projects/parking-assist")

print(f"Work Product Completeness: {result['completeness_percent']}%")
for process, status in result['work_products'].items():
    print(f"  {process}: {status}")

Output:

Work Product Completeness: 85.7%
  SWE.1: [PASS] Present
  SWE.2: [PASS] Present
  SWE.3: [PASS] Present
  SWE.4: [PASS] Present
  SWE.5: [FAIL] Missing
  SWE.6: [PASS] Present
  SUP.8: [PASS] Present

Quality Metrics

KPI 4: Defect Density

Definition: Number of defects per 1000 lines of code (defects/KLOC).

Formula:

Defect Density = (Total Defects Found / Lines of Code) × 1000

Target:

  • Before ASPICE: 3.5 defects/KLOC (industry average for automotive)
  • After ASPICE (6 months): ≤2.0 defects/KLOC (40% reduction)

Data Source: Bug tracking system (Jira) + code metrics (SonarQube)

Collection:

# Defect Density Calculator
class QualityMetrics:
    def __init__(self, jira_client, sonarqube_client):
        self.jira = jira_client
        self.sonarqube = sonarqube_client

    def calculate_defect_density(self, project_key: str, time_period_months: int = 6) -> dict:
        """
        Calculate defect density for project.

        Defects: Bugs found in testing + production (excludes feature requests)
        """
        # Get defects from Jira
        jql = f"""
        project = {project_key}
        AND type = Bug
        AND created >= -{time_period_months}M
        AND status IN (Resolved, Closed)
        """
        defects = self.jira.search_issues(jql)

        # Get lines of code from SonarQube
        metrics = self.sonarqube.get_project_metrics(project_key)
        lines_of_code = metrics["ncloc"]  # Non-comment lines of code

        # Calculate density
        defect_density = (len(defects) / lines_of_code) * 1000

        return {
            "project": project_key,
            "defects": len(defects),
            "lines_of_code": lines_of_code,
            "defect_density": round(defect_density, 2),
            "target": 2.0,
            "status": "[PASS] Good" if defect_density <= 2.0 else "[WARN] Needs Improvement"
        }

# Example
quality = QualityMetrics(jira, sonarqube)
result = quality.calculate_defect_density("PARKING_ASSIST", time_period_months=6)

print(f"""
Defect Density: {result['defect_density']} defects/KLOC
  Defects: {result['defects']}
  Lines of Code: {result['lines_of_code']:,}
  Target: ≤{result['target']} defects/KLOC
  Status: {result['status']}
""")

Output:

Defect Density: 1.8 defects/KLOC
  Defects: 45
  Lines of Code: 25,000
  Target: ≤2.0 defects/KLOC
  Status: [PASS] Good

KPI 5: Code Coverage

Definition: Percentage of code executed by unit tests (branch coverage).

Formula:

Code Coverage (%) = (Branches Tested / Total Branches) × 100

Target:

  • ASIL QM: ≥60%
  • ASIL-A: ≥70%
  • ASIL-B: ≥80%
  • ASIL-C/D: ≥85% (MC/DC coverage)
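
These per-ASIL targets can be encoded once and reused by every CI job. A small sketch (the dict mirrors the list above; the function name is illustrative):

```python
# Minimum branch-coverage targets per ASIL level, as listed above
# (ASIL-C/D additionally require MC/DC coverage, not captured here)
ASIL_COVERAGE_TARGETS = {"QM": 60, "A": 70, "B": 80, "C": 85, "D": 85}

def coverage_threshold(asil: str) -> int:
    """Return the minimum branch-coverage percentage for an ASIL level."""
    try:
        return ASIL_COVERAGE_TARGETS[asil.upper()]
    except KeyError:
        raise ValueError(f"Unknown ASIL level: {asil!r}")
```

A CI job can then compare its measured coverage against `coverage_threshold("B")` instead of hard-coding 80 in each workflow.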

Data Source: CI/CD pipeline coverage reports (pytest-cov, gcov, Codecov)

Collection:

# GitHub Actions: Collect Coverage Metrics
# File: .github/workflows/coverage-metrics.yml

name: Code Coverage Metrics

on:
  pull_request:
  push:
    branches: [main, develop]

jobs:
  coverage:
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v3

      - name: Run Unit Tests with Coverage
        run: |
          pytest tests/unit/ --cov=src --cov-report=json --cov-report=html

      - name: Extract Coverage Percentage
        id: coverage
        run: |
          COVERAGE=$(jq '.totals.percent_covered' coverage.json)
          echo "coverage=$COVERAGE" >> $GITHUB_OUTPUT

      - name: Check Coverage Threshold
        run: |
          COVERAGE=${{ steps.coverage.outputs.coverage }}
          THRESHOLD=80  # ASIL-B requirement

          if (( $(echo "$COVERAGE < $THRESHOLD" | bc -l) )); then
            echo "[FAIL] Coverage $COVERAGE% below threshold $THRESHOLD%"
            exit 1
          else
            echo "[PASS] Coverage $COVERAGE% meets threshold $THRESHOLD%"
          fi

      - name: Upload Coverage Report
        uses: codecov/codecov-action@v3
        with:
          files: ./coverage.json
          flags: unittests
          name: parking-assist-coverage

      - name: Comment PR with Coverage
        if: github.event_name == 'pull_request'
        uses: actions/github-script@v6
        with:
          script: |
            const coverage = ${{ steps.coverage.outputs.coverage }};
            const comment = `
            ## Code Coverage Report

            **Coverage**: ${coverage}%
            **Threshold**: 80% (ASIL-B)
            **Status**: ${coverage >= 80 ? '[PASS] Pass' : '[FAIL] Fail'}

            [View detailed report](https://codecov.io/gh/${{ github.repository }})
            `;

            github.rest.issues.createComment({
              owner: context.repo.owner,
              repo: context.repo.repo,
              issue_number: context.payload.pull_request.number,
              body: comment
            });

Automated Enforcement: Fail CI pipeline if coverage below threshold (prevents merge).


KPI 6: MISRA Compliance Rate

Definition: Percentage of code that passes MISRA C:2012 static analysis (0 violations).

Formula:

MISRA Compliance (%) = (Files with 0 Violations / Total Files) × 100

Target: 100% compliance (0 critical/required rule violations)

Data Source: cppcheck, PC-lint, LDRA

Collection:

#!/bin/bash
# Script: misra_compliance_check.sh
# ASPICE SWE.3 BP5: Ensure coding standards compliance

# Run MISRA C checker on all source files
# (misra.json is a cppcheck addon config file, e.g.
#  {"script": "misra.py", "args": ["--rule-texts=misra_rules.txt"]})
cppcheck --addon=misra.json src/ 2> misra_report.txt

# Count compliant files: extract the file path before the first ':' on each
# violation line, then de-duplicate (grep -c would count violations, not files)
TOTAL_FILES=$(find src/ -name "*.c" | wc -l)
FILES_WITH_VIOLATIONS=$(grep "misra-c2012" misra_report.txt | cut -d: -f1 | sort -u | wc -l)
COMPLIANT_FILES=$((TOTAL_FILES - FILES_WITH_VIOLATIONS))

COMPLIANCE=$(awk "BEGIN {printf \"%.1f\", ($COMPLIANT_FILES / $TOTAL_FILES) * 100}")

echo "MISRA Compliance: ${COMPLIANCE}%"
echo "  Compliant Files: $COMPLIANT_FILES / $TOTAL_FILES"

# Breakdown by severity ('|| true' keeps grep's own "0" as the count when
# nothing matches; '|| echo 0' would emit a second "0")
CRITICAL=$(grep -c "error: \[misra-c2012" misra_report.txt || true)
REQUIRED=$(grep -c "warning: \[misra-c2012" misra_report.txt || true)

echo "  Critical Violations: $CRITICAL (MUST fix)"
echo "  Required Violations: $REQUIRED (should fix)"

# Fail if any critical violations
if [ "$CRITICAL" -gt 0 ]; then
  echo "[FAIL] FAIL: Critical MISRA violations detected"
  exit 1
fi

Efficiency Metrics

KPI 7: Velocity Stability

Definition: Consistency of team velocity (story points per sprint) after ASPICE adoption.

Formula:

Velocity Stability = Standard Deviation of Velocity (last 6 sprints)

Target:

  • Pre-ASPICE: σ = 12 points (high variability)
  • Post-ASPICE (6 months): σ ≤ 8 points (more predictable)

Data Source: Jira sprint reports

Collection:

# Velocity Stability Calculator
import statistics

class EfficiencyMetrics:
    def __init__(self, jira_client):
        self.jira = jira_client

    def calculate_velocity_stability(self, team_id: str, num_sprints: int = 6) -> dict:
        """
        Calculate velocity stability (lower std dev = more stable).
        """
        sprints = self.jira.get_recent_sprints(team_id, limit=num_sprints)

        velocities = [sprint.completed_story_points for sprint in sprints]

        avg_velocity = statistics.mean(velocities)
        std_dev = statistics.stdev(velocities)
        coefficient_of_variation = (std_dev / avg_velocity) * 100  # Normalized metric

        return {
            "team": team_id,
            "avg_velocity": round(avg_velocity, 1),
            "std_dev": round(std_dev, 1),
            "coefficient_of_variation": round(coefficient_of_variation, 1),
            "velocities": velocities,
            "status": "[PASS] Stable" if std_dev <= 8 else "[WARN] Unstable"
        }

# Example
efficiency = EfficiencyMetrics(jira)
result = efficiency.calculate_velocity_stability("team-adas")

print(f"""
Velocity Stability (last {len(result['velocities'])} sprints):
  Average Velocity: {result['avg_velocity']} points/sprint
  Standard Deviation: {result['std_dev']} points
  Coefficient of Variation: {result['coefficient_of_variation']}%
  Status: {result['status']}

  Sprint Velocities: {result['velocities']}
""")

Output:

Velocity Stability (last 6 sprints):
  Average Velocity: 42.5 points/sprint
  Standard Deviation: 3.4 points
  Coefficient of Variation: 8.0%
  Status: [PASS] Stable

  Sprint Velocities: [40, 45, 41, 38, 47, 44]

Interpretation: Lower std dev = ASPICE processes reduce variability (predictable delivery).


KPI 8: Code Review Turnaround Time

Definition: Average time from PR creation to merge (measures process efficiency).

Formula:

Review Turnaround (hours) = Avg(PR Merge Time - PR Creation Time)

Target:

  • Wave 1 (Pilot): ≤48 hours (learning curve)
  • Wave 2-3: ≤24 hours (process matured)

Data Source: GitHub/GitLab PR metadata

Collection:

# Code Review Turnaround Time
import statistics
from datetime import datetime, timedelta

class ReviewMetrics:
    def __init__(self, github_client):
        self.github = github_client

    def calculate_review_turnaround(self, repo: str, time_period_days: int = 30) -> dict:
        """
        Calculate avg time from PR creation to merge.
        ASPICE SWE.3 BP7: Verify design (code review efficiency).
        """
        prs = self.github.get_merged_pull_requests(
            repo=repo,
            since=datetime.now() - timedelta(days=time_period_days)
        )

        turnaround_times = []
        for pr in prs:
            created = pr.created_at
            merged = pr.merged_at
            turnaround_hours = (merged - created).total_seconds() / 3600
            turnaround_times.append(turnaround_hours)

        avg_turnaround = statistics.mean(turnaround_times)
        median_turnaround = statistics.median(turnaround_times)

        return {
            "repo": repo,
            "total_prs": len(prs),
            "avg_turnaround_hours": round(avg_turnaround, 1),
            "median_turnaround_hours": round(median_turnaround, 1),
            "target_hours": 24,
            "status": "[PASS] Good" if avg_turnaround <= 24 else "[WARN] Slow"
        }

# Example
review_metrics = ReviewMetrics(github)
result = review_metrics.calculate_review_turnaround("company/parking-assist", time_period_days=30)

print(f"""
Code Review Turnaround Time (last 30 days):
  Total PRs Merged: {result['total_prs']}
  Average Turnaround: {result['avg_turnaround_hours']} hours
  Median Turnaround: {result['median_turnaround_hours']} hours
  Target: ≤{result['target_hours']} hours
  Status: {result['status']}
""")

Output:

Code Review Turnaround Time (last 30 days):
  Total PRs Merged: 87
  Average Turnaround: 18.3 hours
  Median Turnaround: 14.5 hours
  Target: ≤24 hours
  Status: [PASS] Good

ASPICE Metrics Dashboard

Real-Time Visualization

Tool: Grafana + InfluxDB (store metrics time-series data)

Dashboard Layout (Panel 1 - Compliance & Quality): This panel displays real-time compliance status across ASPICE processes, including traceability coverage percentages, review completion rates, and quality gate pass/fail indicators.


Dashboard Layout (Panel 2 - Efficiency & Alerts): This panel tracks development efficiency metrics such as cycle time, AI tool adoption rates, and automated alert thresholds that flag process deviations before they become assessment findings.

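
Feeding the dashboard means writing each KPI as a time-series point. A minimal sketch that formats a point in InfluxDB line protocol using only the standard library (no client dependency; measurement and field names follow the examples in this section). This simplified version writes all field values as-is; a full implementation would also escape special characters and mark integers with a trailing `i`:

```python
import time
from typing import Dict, Optional

def to_line_protocol(measurement: str, fields: Dict[str, float],
                     tags: Optional[Dict[str, str]] = None,
                     timestamp_ns: Optional[int] = None) -> str:
    """Format one metric point as: measurement[,tag=v...] field=value[,...] ts

    Simplified: no escaping, and numeric fields are emitted as floats.
    """
    tag_part = "".join(f",{k}={v}" for k, v in (tags or {}).items())
    field_part = ",".join(f"{k}={v}" for k, v in fields.items())
    ts = timestamp_ns if timestamp_ns is not None else time.time_ns()
    return f"{measurement}{tag_part} {field_part} {ts}"
```

For example, `to_line_protocol("aspice_certification", {"coverage_percent": 80.0}, {"org": "adas"}, 1700000000000000000)` yields `aspice_certification,org=adas coverage_percent=80.0 1700000000000000000`, which can be POSTed to the InfluxDB write endpoint by any HTTP client.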

Implementation (Grafana JSON):

{
  "dashboard": {
    "title": "ASPICE Metrics Dashboard",
    "panels": [
      {
        "id": 1,
        "title": "Certification Coverage",
        "type": "stat",
        "targets": [
          {
            "query": "SELECT last(\"coverage_percent\") FROM \"aspice_certification\" WHERE time > now() - 1d"
          }
        ],
        "thresholds": {
          "mode": "absolute",
          "steps": [
            { "value": 0, "color": "red" },
            { "value": 80, "color": "yellow" },
            { "value": 95, "color": "green" }
          ]
        }
      },
      {
        "id": 2,
        "title": "Defect Density Trend",
        "type": "graph",
        "targets": [
          {
            "query": "SELECT mean(\"defect_density\") FROM \"quality_metrics\" WHERE time > now() - 180d GROUP BY time(1w)"
          }
        ]
      }
      // ... more panels
    ]
  }
}

Summary

ASPICE Metrics Framework:

| Category | KPI | Target | Data Source |
|---|---|---|---|
| Compliance | Certification Coverage | ≥95% | Assessment DB |
| Compliance | Traceability Coverage | ≥95% | Git log |
| Compliance | Work Product Completeness | 100% | Repository scan |
| Quality | Defect Density | ≤2.0 defects/KLOC | Jira + SonarQube |
| Quality | Code Coverage | ≥80% (ASIL-B) | CI/CD pipeline |
| Quality | MISRA Compliance | 100% | cppcheck |
| Efficiency | Velocity Stability | σ ≤8 points | Jira sprints |
| Efficiency | Review Turnaround | ≤24 hours | GitHub PRs |

Key Success Factors:

  • Automated Collection: Metrics generated by CI/CD (no manual reporting)
  • Real-Time Dashboard: Grafana displays live status (updated every 15 min)
  • Balanced Scorecard: Track compliance, quality, AND efficiency (not just compliance)
  • Actionable Alerts: Notify teams when metrics degrade (e.g., "Review turnaround 36h, fix process bottleneck")
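
An alert of that kind can be generated directly from a metric result. A sketch under the assumption that each KPI declares whether higher values are better (the function name and message text are illustrative; routing to Slack/email is omitted):

```python
from typing import Optional

def build_alert(metric_name: str, value: float, threshold: float,
                higher_is_better: bool = True) -> Optional[str]:
    """Return an alert message if the metric breaches its threshold, else None.

    higher_is_better=True  -> alert when value drops below threshold (coverage)
    higher_is_better=False -> alert when value rises above it (turnaround time)
    """
    breached = value < threshold if higher_is_better else value > threshold
    if not breached:
        return None
    direction = "below" if higher_is_better else "above"
    return (f"[WARN] {metric_name} at {value} is {direction} "
            f"threshold {threshold} - investigate process bottleneck")
```

For example, a 36-hour review turnaround against the 24-hour target produces a warning, while 85% coverage against an 80% floor returns None (no alert noise).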

Continuous Improvement Loop:

  1. Measure: Collect metrics automatically (CI/CD, Git, Jira)
  2. Analyze: Review dashboard monthly (ASPICE steering committee)
  3. Improve: Identify bottlenecks (e.g., "Code reviews taking too long"), adjust processes
  4. Repeat: Track improvement trend (e.g., "Review time decreased 50% in 3 months")
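
The trend claim in step 4 can be checked mechanically rather than eyeballed. A small sketch that computes percentage improvement between two periods (the sign convention and function name are assumptions for illustration):

```python
def percent_improvement(before: float, after: float,
                        lower_is_better: bool = True) -> float:
    """Percentage improvement from `before` to `after`.

    Positive = better. E.g. review turnaround dropping 36h -> 18h
    (lower is better) gives 50.0.
    """
    if before == 0:
        raise ValueError("baseline must be nonzero")
    change = (before - after) / before * 100
    return round(change if lower_is_better else -change, 1)
```

The steering committee can then report, for each KPI, a single signed number per quarter instead of raw before/after pairs.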

Next: Prepare for formal ASPICE assessment (Chapter 24).