# 5.3: Continuous ASPICE Compliance

## Introduction

Traditional ASPICE compliance is assessed annually: six months of frantic evidence collection followed by a two-week audit. Continuous ASPICE inverts this model: every commit, every pull request, and every sprint generates compliance evidence automatically. This section shows how to build a compliance pipeline that makes an ASPICE assessment a non-event.
## The Continuous Compliance Pipeline

### Architecture Overview

The continuous compliance pipeline collects, validates, and stores evidence automatically as a by-product of normal development activities: pre-commit hooks gate code at the developer workstation, pull request checks gate merges, the CI/CD pipeline generates formal work products, and an archival job preserves them for assessment.

**Key principle**: Evidence is generated as a side effect of normal development activities.
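This principle can be made concrete with a small sketch (the names `emits_evidence` and `EVIDENCE_LOG` are illustrative, not part of any tool above): a decorator records an evidence entry every time a build step executes, so compliance data accumulates simply by running the step.

```python
import time
from functools import wraps

EVIDENCE_LOG = []  # in a real pipeline: a file or object store, not a list


def emits_evidence(process: str, practice: str):
    """Append an ASPICE evidence record whenever the wrapped step runs."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            result = fn(*args, **kwargs)
            EVIDENCE_LOG.append({
                "process": process,      # e.g. "SWE.4"
                "practice": practice,    # e.g. "BP3"
                "step": fn.__name__,
                "timestamp": time.time(),
                "outcome": "PASS" if result else "FAIL",
            })
            return result
        return wrapper
    return decorator


@emits_evidence("SWE.4", "BP3")
def run_unit_tests():
    # Stand-in for a real test-runner invocation
    return True


run_unit_tests()
```

The developer only runs the step; the evidence record is a side effect, which is exactly the property the pipeline below scales up.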
### Pre-Commit Hooks (SWE.3 BP5: Coding Standards)

**ASPICE quality gate at the developer workstation:**
```bash
#!/bin/bash
# File: .git/hooks/pre-commit
# ASPICE Process: SWE.3 BP5 (Ensure coding standards compliance)

shopt -s globstar  # allow src/**/* glob patterns below

echo "ASPICE Pre-Commit Checks (SWE.3 BP5)..."

# Check 1: MISRA C:2012 compliance (SWE.3 BP5)
echo "  [1/5] Running MISRA C checker..."
cppcheck --addon=misra src/ 2> misra-report.txt
MISRA_VIOLATIONS=$(grep -c "misra-c2012" misra-report.txt || true)
if [ "$MISRA_VIOLATIONS" -gt 0 ]; then
    echo "  [FAIL] MISRA violations detected: $MISRA_VIOLATIONS"
    echo "         See: misra-report.txt"
    echo "         ASPICE SWE.3 BP5: Coding standards must be met before commit"
    exit 1
fi
echo "  [PASS] MISRA C compliance verified"

# Check 2: Code formatting (consistency)
echo "  [2/5] Checking code formatting..."
if ! clang-format --dry-run --Werror src/**/*.c src/**/*.h; then
    echo "  [FAIL] Code formatting issues detected"
    echo "         Tip: Run 'clang-format -i src/**/*.{c,h}'"
    exit 1
fi
echo "  [PASS] Code formatting verified"

# Check 3: Traceability (SUP.8 BP5)
echo "  [3/5] Verifying requirement traceability..."
# Extract staged C/C++ files
CHANGED_FILES=$(git diff --cached --name-only --diff-filter=ACM | grep -E '\.(c|cpp)$' || true)
if [ -n "$CHANGED_FILES" ]; then
    for FILE in $CHANGED_FILES; do
        # Each changed source file must carry a traceability comment
        if ! grep -q "@implements.*\[SWE-[0-9]\+\]" "$FILE"; then
            echo "  [FAIL] Missing traceability in: $FILE"
            echo "         Tip: Add comment: // @implements [SWE-XXX] Requirement description"
            echo "         ASPICE SUP.8 BP5: Bidirectional traceability required"
            exit 1
        fi
    done
fi
echo "  [PASS] Traceability verified"

# Check 4: Unit test coverage (SWE.4 BP4)
echo "  [4/5] Checking unit test coverage..."
# Run tests only when source files changed (fast feedback)
if [ -n "$CHANGED_FILES" ]; then
    if ! pytest --cov=src --cov-report=term-missing --cov-fail-under=80 -q; then
        echo "  [FAIL] Unit test coverage below 80% threshold"
        echo "         ASPICE SWE.4 BP4: Adequate test coverage required"
        exit 1
    fi
fi
echo "  [PASS] Unit test coverage sufficient"

# Check 5: Static analysis (SWE.3 BP8)
echo "  [5/5] Running static analysis..."
if ! cppcheck --enable=all --error-exitcode=1 --suppress=missingIncludeSystem src/; then
    echo "  [FAIL] Static analysis issues detected"
    echo "         ASPICE SWE.3 BP8: Verify software units (static analysis)"
    exit 1
fi
echo "  [PASS] Static analysis passed"

echo ""
echo "[PASS] All ASPICE pre-commit checks passed!"
echo "ASPICE Evidence: Pre-commit checks enforce quality gates (SWE.3 BP5, SUP.8 BP5)"
exit 0
```
**Installation:**

```bash
# Make the hook executable
chmod +x .git/hooks/pre-commit

# Or use the pre-commit framework for team-wide hooks
pip install pre-commit
cat > .pre-commit-config.yaml <<EOF
repos:
  - repo: local
    hooks:
      - id: aspice-checks
        name: ASPICE Compliance Checks
        entry: .git/hooks/pre-commit
        language: system
        pass_filenames: false
EOF
```
**ASPICE benefit**: Defects are caught before code review (shift-left quality).
### Pull Request ASPICE Checklist

**GitHub PR template with ASPICE compliance:**
````markdown
<!-- File: .github/pull_request_template.md -->
<!-- ASPICE Process: SWE.3 BP7 (Verify software detailed design) -->

## Pull Request: [SWE-XXX] Title

**Requirement**: [SWE-XXX] Brief description
**Parent Requirement**: [SYS-YYY] (if applicable)
**ASPICE Process**: SWE.3 (Software Detailed Design)

---

## ASPICE Compliance Checklist

### SWE.1: Requirements (Traceability)
- [ ] **BP5**: PR title references Jira requirement (e.g., `[SWE-123]`)
- [ ] **BP5**: Code comments include `@implements [SWE-XXX]` tags
- [ ] **BP7**: Requirement changes documented (if applicable)

### SWE.2: Architecture (Design Consistency)
- [ ] **BP1**: Architecture changes documented in ADR (if applicable)
- [ ] **BP7**: Design consistent with existing architecture
- [ ] **BP3**: New interfaces documented (function headers, API docs)

### SWE.3: Detailed Design (Implementation)
- [ ] **BP5**: Code adheres to MISRA C:2012 (0 critical violations)
- [ ] **BP6**: All functions have Doxygen comments
- [ ] **BP7**: Code review completed (2 approvals required)
- [ ] **BP8**: Static analysis passed (cppcheck, SonarQube)

### SWE.4: Unit Testing (Verification)
- [ ] **BP1**: Unit tests written for new functions
- [ ] **BP3**: All unit tests pass (CI pipeline green)
- [ ] **BP4**: Code coverage ≥ 80% branch coverage
- [ ] **BP5**: Test results reviewed (see CI artifacts)

### SWE.5: Integration (if applicable)
- [ ] **BP3**: Integration tests pass (HIL tests executed)
- [ ] **BP5**: Integration test results documented

### SUP.8: Configuration Management
- [ ] **BP3**: Branch naming follows convention (`feature/SWE-XXX-description`)
- [ ] **BP5**: Commit messages reference requirements ([SWE-XXX])
- [ ] **BP6**: No merge conflicts with target branch

### SUP.9: Problem Resolution (if bug fix)
- [ ] **BP3**: Root cause analysis documented in Jira
- [ ] **BP7**: Regression test added to prevent recurrence

---

## Changes Made

### Code Changes
<!-- Describe WHAT changed (file/function level) -->
- Modified `src/braking/emergency_brake.c`:
  - Added `detectPedestrian()` function (implements [SWE-234])
  - Updated `activateEmergencyBrake()` with latency optimization

### Test Changes
<!-- Describe test coverage additions -->
- Added `tests/unit/test_emergency_brake.c`:
  - 12 new unit tests (MC/DC coverage: 95%)
  - Edge case: Camera failure handling

### Documentation Changes
<!-- Describe documentation updates -->
- Updated `docs/architecture/ADR-008-pedestrian-detection.md`
- Added Doxygen comments to all new functions

---

## Test Results

### Unit Tests (SWE.4 BP3)

```text
=========================== test session starts ===========================
platform linux -- Python 3.11.0
collected 47 items

tests/unit/test_emergency_brake.c::test_brake_latency PASSED      [  2%]
tests/unit/test_emergency_brake.c::test_deceleration_rate PASSED  [  4%]
...
tests/unit/test_emergency_brake.c::test_camera_failure PASSED     [100%]

---------- coverage: platform linux, python 3.11.0-final-0 -----------
Name                           Stmts   Miss Branch BrPart  Cover
src/braking/emergency_brake.c    234     12    120      6    94%
TOTAL                            234     12    120      6    94%
```

### Static Analysis (SWE.3 BP8)
- **MISRA C**: 0 critical violations - PASS
- **SonarQube**: Quality Gate PASSED
  - Bugs: 0
  - Vulnerabilities: 0
  - Code Smells: 2 (minor)
  - Technical Debt: 15 minutes

---

## Evidence Artifacts (ASPICE Work Products)

| Work Product | Location | ASPICE Reference |
|--------------|----------|------------------|
| Source Code | `src/braking/emergency_brake.c` | SWE.3 BP6 |
| Unit Tests | `tests/unit/test_emergency_brake.c` | SWE.4 BP1 |
| Test Report | CI artifacts: `test-results.xml` | SWE.4 BP5 |
| Coverage Report | CI artifacts: `coverage.html` | SWE.4 BP4 |
| Static Analysis | CI artifacts: `sonarqube-report.json` | SWE.3 BP8 |

---

## Reviewers

- **Code Review** (SWE.3 BP7): @alice, @bob (2 approvals required)
- **Safety Review** (if ASIL ≥ B): @functional-safety-manager

---

## ASPICE Assessor Notes

This PR demonstrates compliance with:

- SWE.1 BP5 (Traceability): Requirement ID in title
- SWE.3 BP5 (Coding standards): MISRA C verified
- SWE.3 BP7 (Verification): Peer review process
- SWE.4 BP3 (Unit testing): 94% coverage
- SUP.8 BP5 (CM): Version control with traceability
````
**GitHub Actions enforcement:**

```yaml
# .github/workflows/pr-compliance-check.yml
name: ASPICE PR Compliance Gate

on:
  pull_request:
    types: [opened, edited, synchronize]

jobs:
  aspice-compliance:
    runs-on: ubuntu-latest
    steps:
      # SWE.1 BP5: Traceability verification
      # (title is passed via env to avoid shell injection from untrusted input)
      - name: Check Jira Reference in Title
        env:
          PR_TITLE: ${{ github.event.pull_request.title }}
        run: |
          if ! echo "$PR_TITLE" | grep -qE '\[(SWE|SYS)-[0-9]+\]'; then
            echo "[FAIL] PR title must reference a Jira story (e.g., [SWE-123])"
            echo "ASPICE SUP.8 BP5: Bidirectional traceability required"
            exit 1
          fi

      # SWE.3 BP7: Require 2 approvals
      - name: Verify Code Review Approvals
        uses: actions/github-script@v6
        with:
          script: |
            const reviews = await github.rest.pulls.listReviews({
              owner: context.repo.owner,
              repo: context.repo.repo,
              pull_number: context.payload.pull_request.number
            });
            const approvals = reviews.data.filter(r => r.state === 'APPROVED').length;
            if (approvals < 2) {
              core.setFailed(`[FAIL] Only ${approvals}/2 approvals. ASPICE SWE.3 BP7 requires peer review.`);
            }

      # SWE.3 BP5: MISRA C compliance
      - name: Run MISRA C Checker
        run: |
          cppcheck --addon=misra src/ 2> misra-report.txt
          VIOLATIONS=$(grep -c "misra-c2012" misra-report.txt || true)
          if [ "$VIOLATIONS" -gt 0 ]; then
            echo "[FAIL] $VIOLATIONS MISRA violations detected"
            echo "ASPICE SWE.3 BP5: Coding standards compliance required"
            cat misra-report.txt
            exit 1
          fi

      # SWE.4 BP4: Code coverage check
      - name: Verify Unit Test Coverage
        run: |
          pytest --cov=src --cov-report=xml --cov-fail-under=80
          echo "[PASS] Code coverage >= 80% (ASPICE SWE.4 BP4)"
```
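The pre-commit hook, the PR-title gate, and the commit-message convention all hinge on the same requirement-ID pattern. A small self-contained check (the sample titles are hypothetical) is a cheap way to sanity-test the pattern before wiring it into hooks and workflows:

```python
import re

# Same pattern used by the PR-title and commit-message gates above
REQ_ID = re.compile(r'\[(SWE|SYS)-[0-9]+\]')

# Hypothetical titles and the expected gate decision
samples = {
    "[SWE-123] Add pedestrian detection": True,
    "[SYS-45] Update braking requirement": True,
    "Fix typo in README": False,           # no requirement ID
    "SWE-123 missing brackets": False,     # ID present but unbracketed
}

for title, expected in samples.items():
    assert bool(REQ_ID.search(title)) == expected, title
```

Keeping the pattern in one tested place avoids the gates silently drifting apart as they are copied between scripts.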
### CI/CD Evidence Generator

**Automated ASPICE work product creation:**
```python
# File: scripts/generate_aspice_evidence.py
# Purpose: Auto-generate ASPICE work products from CI/CD artifacts

import json
import os
import re
import xml.etree.ElementTree as ET
from datetime import datetime

from jinja2 import Template


class ASPICEEvidenceGenerator:
    """
    Automatically generate ASPICE-compliant work products from CI/CD data.
    ASPICE Processes: SWE.3-6, SUP.8
    """

    def __init__(self, ci_artifacts_dir: str, evidence_output_dir: str):
        self.artifacts_dir = ci_artifacts_dir
        self.output_dir = evidence_output_dir
        os.makedirs(self.output_dir, exist_ok=True)

    def generate_unit_test_report(self, test_results_xml: str) -> str:
        """
        SWE.4 BP5: Review unit test results and achieve consistency.
        Generate a formal test report from pytest XML output.
        """
        tree = ET.parse(test_results_xml)
        root = tree.getroot()

        test_cases = []
        for testcase in root.findall(".//testcase"):
            failure = testcase.find("failure")
            test_cases.append({
                "name": testcase.get("name"),
                "classname": testcase.get("classname"),
                "time": float(testcase.get("time") or 0),
                "status": "FAIL" if failure is not None else "PASS",
                "failure_message": failure.text if failure is not None else None,
            })

        total_tests = len(test_cases)
        passed_tests = sum(1 for tc in test_cases if tc["status"] == "PASS")
        failed_tests = total_tests - passed_tests

        # Generate the formal test report from a template
        report_template = Template('''
# Unit Test Report (ASPICE SWE.4 BP5)

**Generated**: {{ timestamp }}
**Requirement**: {{ requirement_id }}
**ASPICE Process**: SWE.4 (Unit Verification)

---

## Summary

| Metric | Value |
|--------|-------|
| Total Tests | {{ total_tests }} |
| Passed | {{ passed_tests }} ({{ pass_rate }}%) |
| Failed | {{ failed_tests }} |
| Execution Time | {{ total_time }}s |
| **Status** | {{ overall_status }} |

---

## Test Cases
{% for tc in test_cases %}
### {{ tc.name }}
- **Class**: `{{ tc.classname }}`
- **Execution Time**: {{ tc.time }}s
- **Status**: {{ tc.status }}
{% if tc.failure_message %}
- **Failure**:
  {{ tc.failure_message }}
{% endif %}
---
{% endfor %}

## ASPICE Compliance

- **SWE.4 BP1**: Unit test specification defined (test cases listed above)
- **SWE.4 BP3**: Unit tests executed (CI pipeline)
- **SWE.4 BP5**: Test results reviewed (this report)

**Evidence Location**: `{{ evidence_path }}`
**Traceability**: Tests implement requirements [{{ requirement_id }}]
''')

        report_content = report_template.render(
            timestamp=datetime.now().isoformat(),
            requirement_id=os.environ.get("REQUIREMENT_ID", "SWE-234"),  # from CI environment
            total_tests=total_tests,
            passed_tests=passed_tests,
            failed_tests=failed_tests,
            pass_rate=round((passed_tests / total_tests) * 100, 1) if total_tests else 0,
            total_time=round(sum(tc["time"] for tc in test_cases), 2),
            overall_status="PASS" if failed_tests == 0 else "FAIL",
            test_cases=test_cases,
            evidence_path=test_results_xml,
        )

        output_file = os.path.join(self.output_dir, "unit_test_report.md")
        with open(output_file, "w") as f:
            f.write(report_content)
        print(f"[OK] Unit Test Report generated: {output_file}")
        return output_file

    def generate_traceability_matrix(self, git_log_file: str, jira_export_file: str) -> str:
        """
        SUP.8 BP5: Ensure bidirectional traceability.
        Auto-generate a traceability matrix from Git commits and Jira.
        """
        # Parse Git commits for requirement IDs
        with open(git_log_file, "r") as f:
            git_log = f.read()
        commit_pattern = r'\[([A-Z]+-\d+)\]'
        implemented_requirements = set(re.findall(commit_pattern, git_log))

        # Parse the Jira export for requirements
        with open(jira_export_file, "r") as f:
            jira_data = json.load(f)
        requirements = {req["key"]: req for req in jira_data["issues"]}

        # Build the traceability matrix
        matrix_template = Template('''
# Traceability Matrix (ASPICE SUP.8 BP5)

**Generated**: {{ timestamp }}
**Project**: ADAS Emergency Braking System
**ASPICE Process**: SUP.8 (Configuration Management)

---

## Bidirectional Traceability

| Requirement ID | Title | Status | Implementation | Test Cases |
|----------------|-------|--------|----------------|------------|
{% for req_id in requirement_ids %}
{%- set req = requirements[req_id] %}
{%- set impl_status = "Implemented" if req_id in implemented else "Pending" %}
| {{ req_id }} | {{ req.summary }} | {{ req.status }} | {{ impl_status }} | {{ req.test_cases | join(", ") }} |
{% endfor %}

---

## Coverage Statistics

- **Total Requirements**: {{ total_requirements }}
- **Implemented**: {{ implemented_count }} ({{ implementation_rate }}%)
- **Tested**: {{ tested_count }} ({{ test_coverage }}%)

---

## ASPICE Compliance

- **SUP.8 BP5**: Bidirectional traceability established
  - Requirements → Implementation (Git commits)
  - Requirements → Test Cases (Jira links)
- **Evidence**: Git log + Jira export

**Last Updated**: {{ timestamp }}
''')

        tested_count = sum(1 for req in requirements.values() if req.get("test_cases"))
        matrix_content = matrix_template.render(
            timestamp=datetime.now().isoformat(),
            requirement_ids=sorted(requirements.keys()),
            requirements=requirements,
            implemented=implemented_requirements,
            total_requirements=len(requirements),
            implemented_count=len(implemented_requirements),
            implementation_rate=round((len(implemented_requirements) / len(requirements)) * 100, 1) if requirements else 0,
            tested_count=tested_count,
            test_coverage=round((tested_count / len(requirements)) * 100, 1) if requirements else 0,
        )

        output_file = os.path.join(self.output_dir, "traceability_matrix.md")
        with open(output_file, "w") as f:
            f.write(matrix_content)
        print(f"[OK] Traceability Matrix generated: {output_file}")
        return output_file

    def generate_release_notes(self, version: str, git_tag: str) -> str:
        """
        SUP.8 BP3: Establish baselines.
        Generate release notes for an ASPICE baseline.
        """
        release_template = Template('''
# Release Notes - Version {{ version }}

**Release Date**: {{ release_date }}
**Git Tag**: {{ git_tag }}
**ASPICE Process**: SUP.8 BP3 (Establish Baselines)

---

## Features Implemented
{% for story in stories %}
- **[{{ story.id }}]**: {{ story.title }}
  - Requirement: {{ story.parent_req }}
  - Test Coverage: {{ story.coverage }}%
  - Status: {{ story.status }}
{% endfor %}

---

## Quality Metrics

| Metric | Value | Threshold | Status |
|--------|-------|-----------|--------|
| Code Coverage | {{ metrics.coverage }}% | >= 80% | {{ "Pass" if metrics.coverage >= 80 else "Fail" }} |
| MISRA Violations (Critical) | {{ metrics.misra_critical }} | 0 | {{ "Pass" if metrics.misra_critical == 0 else "Fail" }} |
| SonarQube Quality Gate | {{ metrics.sonarqube_status }} | PASSED | {{ "Pass" if metrics.sonarqube_status == "PASSED" else "Fail" }} |
| Integration Tests | {{ metrics.integration_pass }}/{{ metrics.integration_total }} | 100% | {{ "Pass" if metrics.integration_pass == metrics.integration_total else "Fail" }} |

---

## ASPICE Baseline

This release constitutes an ASPICE baseline (SUP.8 BP3):

- All work products consistent and complete
- Configuration items identified and controlled
- Baseline approved for integration testing

**Baseline ID**: {{ version }}
**Approval**: Product Owner + Tech Lead

---

## Traceability

All features trace to system requirements:
{% for story in stories %}
- {{ story.id }} → {{ story.parent_req }} ({{ story.parent_title }})
{% endfor %}

**Traceability Matrix**: see `traceability_matrix.md`
''')

        # In a real implementation, fetch these from the Jira API
        stories = [
            {"id": "SWE-234", "title": "Emergency Braking", "parent_req": "SYS-45",
             "parent_title": "Pedestrian Collision Prevention", "coverage": 94, "status": "Done"}
        ]
        metrics = {
            "coverage": 94,
            "misra_critical": 0,
            "sonarqube_status": "PASSED",
            "integration_pass": 24,
            "integration_total": 24,
        }

        release_content = release_template.render(
            version=version,
            git_tag=git_tag,
            release_date=datetime.now().strftime("%Y-%m-%d"),
            stories=stories,
            metrics=metrics,
        )

        output_file = os.path.join(self.output_dir, f"release_notes_{version}.md")
        with open(output_file, "w") as f:
            f.write(release_content)
        print(f"[OK] Release Notes generated: {output_file}")
        return output_file


# Usage in CI/CD: automatically generate the work products
generator = ASPICEEvidenceGenerator(
    ci_artifacts_dir="build/artifacts",
    evidence_output_dir="evidence/aspice",
)
generator.generate_unit_test_report("build/test-results.xml")
generator.generate_traceability_matrix("build/git-log.txt", "build/jira-export.json")
generator.generate_release_notes("v2.5.0", "v2.5.0")
```
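In CI, a generator script like this would typically run as a post-test step, with its output uploaded as a build artifact. A sketch of the wiring (step names, paths, and the artifact action version are illustrative assumptions, not a prescribed setup):

```yaml
# Sketch: run the evidence generator after tests and keep its output
- name: Generate ASPICE evidence
  if: always()   # produce evidence even when earlier steps fail
  run: |
    git log --oneline -n 200 > build/git-log.txt
    python scripts/generate_aspice_evidence.py

- name: Upload evidence artifacts
  uses: actions/upload-artifact@v4
  with:
    name: aspice-evidence
    path: evidence/aspice/
```

Running the generator unconditionally matters: failed builds also need evidence, since an assessor will ask how failures were detected and handled.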
### Real-Time ASPICE Dashboard

**Compliance monitoring dashboard:**
```python
# File: scripts/aspice_dashboard.py
# Purpose: Real-time ASPICE compliance visualization

import json
import re
import subprocess

from flask import Flask, jsonify, render_template
from jira import JIRA

app = Flask(__name__)


class ASPICEDashboard:
    """
    Real-time ASPICE compliance monitoring.
    Displays the current compliance status for all processes.
    """

    def __init__(self, jira_url: str, git_repo_path: str):
        self.jira = JIRA(jira_url)
        self.git_repo = git_repo_path

    def get_compliance_status(self) -> dict:
        """Calculate the compliance status for all ASPICE processes."""
        return {
            "SWE.1": self._check_swe1_requirements(),
            "SWE.2": self._check_swe2_architecture(),
            "SWE.3": self._check_swe3_implementation(),
            "SWE.4": self._check_swe4_unit_testing(),
            "SWE.5": self._check_swe5_integration(),
            "SWE.6": self._check_swe6_qualification(),
            "SUP.8": self._check_sup8_configuration_mgmt(),
            "MAN.3": self._check_man3_project_mgmt(),
        }

    def _check_swe1_requirements(self) -> dict:
        """SWE.1: Software Requirements Analysis"""
        # Count requirements with acceptance criteria and parent links
        requirements = self.jira.search_issues('project=SWE AND type="User Story"')
        total = len(requirements)
        with_acceptance_criteria = sum(
            1 for r in requirements
            if r.fields.description and "Acceptance Criteria" in r.fields.description
        )
        with_traceability = sum(1 for r in requirements if getattr(r.fields, "parent", None))
        compliant = min(with_acceptance_criteria, with_traceability)
        rate = (compliant / total) if total > 0 else 0
        return {
            "process": "SWE.1",
            "title": "Software Requirements Analysis",
            "total_items": total,
            "compliant_items": compliant,
            "compliance_rate": round(rate * 100, 1),
            "status": "PASS" if rate >= 0.95 else "NEEDS WORK",
            "issues": [
                f"{total - with_acceptance_criteria} stories missing acceptance criteria",
                f"{total - with_traceability} stories missing parent Epic link",
            ],
        }

    def _check_swe3_implementation(self) -> dict:
        """SWE.3: Software Detailed Design"""
        # Run static analysis and check MISRA compliance
        result = subprocess.run(
            ["cppcheck", "--addon=misra", "src/"],
            capture_output=True, text=True, cwd=self.git_repo,
        )
        violations = result.stderr.count("misra-c2012")

        # Sample the last 100 commits as the assessed population
        git_log = subprocess.run(
            ["git", "log", "--oneline", "-n", "100"],
            capture_output=True, text=True, cwd=self.git_repo,
        ).stdout
        total_commits = len([c for c in git_log.split("\n") if c])
        return {
            "process": "SWE.3",
            "title": "Software Detailed Design",
            "total_items": total_commits,
            "compliant_items": total_commits if violations == 0 else 0,
            "compliance_rate": 100 if violations == 0 else 0,
            "status": "PASS" if violations == 0 else "FAIL",
            "issues": [f"{violations} MISRA violations detected"] if violations > 0 else [],
        }

    def _check_swe4_unit_testing(self) -> dict:
        """SWE.4: Unit Verification"""
        # Run the test suite with coverage and read the JSON report
        subprocess.run(
            ["pytest", "--cov=src", "--cov-report=json"],
            capture_output=True, cwd=self.git_repo,
        )
        with open(f"{self.git_repo}/coverage.json", "r") as f:
            coverage_data = json.load(f)
        coverage_percent = coverage_data["totals"]["percent_covered"]
        return {
            "process": "SWE.4",
            "title": "Unit Verification",
            "total_items": 1,
            "compliant_items": 1 if coverage_percent >= 80 else 0,
            "compliance_rate": round(coverage_percent, 1),
            "status": "PASS" if coverage_percent >= 80 else "BELOW THRESHOLD",
            "issues": [f"Coverage {coverage_percent}% (threshold: 80%)"] if coverage_percent < 80 else [],
        }

    def _check_sup8_configuration_mgmt(self) -> dict:
        """SUP.8: Configuration Management"""
        # Check whether commits carry requirement IDs
        git_log = subprocess.run(
            ["git", "log", "--oneline", "-n", "100"],
            capture_output=True, text=True, cwd=self.git_repo,
        ).stdout
        commits = [c for c in git_log.split("\n") if c]
        total_commits = len(commits)
        commits_with_req_id = sum(1 for c in commits if re.search(r'\[(SWE|SYS)-\d+\]', c))
        rate = (commits_with_req_id / total_commits) if total_commits > 0 else 0
        return {
            "process": "SUP.8",
            "title": "Configuration Management",
            "total_items": total_commits,
            "compliant_items": commits_with_req_id,
            "compliance_rate": round(rate * 100, 1),
            "status": "PASS" if rate >= 0.90 else "NEEDS WORK",
            "issues": [f"{total_commits - commits_with_req_id} commits missing requirement IDs"]
                      if commits_with_req_id < total_commits else [],
        }

    # Additional methods (_check_swe2_architecture, _check_swe5_integration,
    # _check_swe6_qualification, _check_man3_project_mgmt) follow the same pattern.


@app.route("/")
def dashboard():
    """Render the ASPICE compliance dashboard"""
    return render_template("aspice_dashboard.html")


@app.route("/api/compliance")
def api_compliance():
    """API endpoint for compliance data"""
    board = ASPICEDashboard(
        jira_url="https://jira.company.com",
        git_repo_path="/path/to/repo",
    )
    return jsonify(board.get_compliance_status())


if __name__ == "__main__":
    app.run(debug=True, port=5000)
```
**Dashboard HTML template** (`templates/aspice_dashboard.html`):

```html
<!DOCTYPE html>
<html>
<head>
  <title>ASPICE Compliance Dashboard</title>
  <script src="https://cdn.jsdelivr.net/npm/chart.js"></script>
  <style>
    body { font-family: Arial, sans-serif; margin: 20px; }
    .process-card {
      border: 1px solid #ddd;
      padding: 15px;
      margin: 10px;
      border-radius: 8px;
    }
    .status-good { border-left: 4px solid #28a745; }
    .status-warning { border-left: 4px solid #ffc107; }
    .status-critical { border-left: 4px solid #dc3545; }
    .metric { font-size: 24px; font-weight: bold; }
  </style>
</head>
<body>
  <h1>ASPICE Compliance Dashboard</h1>
  <p>Real-time ASPICE compliance monitoring - Updated every 5 minutes</p>
  <div id="compliance-summary"></div>

  <script>
    async function fetchCompliance() {
      const response = await fetch('/api/compliance');
      const data = await response.json();
      const summaryDiv = document.getElementById('compliance-summary');
      summaryDiv.innerHTML = '';
      for (const [process, details] of Object.entries(data)) {
        const card = document.createElement('div');
        // Map API statuses (PASS / NEEDS WORK / FAIL / BELOW THRESHOLD) to card styles
        const cls = details.status === 'PASS' ? 'good'
                  : details.status === 'FAIL' ? 'critical' : 'warning';
        card.className = `process-card status-${cls}`;
        card.innerHTML = `
          <h2>${details.process}: ${details.title}</h2>
          <div class="metric">${details.compliance_rate}% Compliant</div>
          <p>${details.status}</p>
          <p>${details.compliant_items} / ${details.total_items} items compliant</p>
          ${details.issues.length > 0 ? `<ul>${details.issues.map(i => `<li>${i}</li>`).join('')}</ul>` : ''}
        `;
        summaryDiv.appendChild(card);
      }
    }

    // Initial load
    fetchCompliance();
    // Refresh every 5 minutes
    setInterval(fetchCompliance, 300000);
  </script>
</body>
</html>
```
### Automated Evidence Archival

**S3 evidence repository:**
```python
# File: scripts/archive_evidence.py
# Purpose: Archive ASPICE evidence to S3 for assessment

import hashlib
import json
import os
from datetime import datetime

import boto3


class ASPICEEvidenceArchiver:
    """
    Archive ASPICE work products to S3 for long-term storage.
    ASPICE SUP.8 BP4: Record configuration management information.
    """

    def __init__(self, s3_bucket: str, project_name: str):
        self.s3 = boto3.client("s3")
        self.bucket = s3_bucket
        self.project = project_name

    def archive_sprint_evidence(self, sprint_id: str, evidence_dir: str) -> str:
        """
        Archive all sprint evidence to S3.
        SUP.8 BP4: Record CM information and history.
        """
        timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
        s3_prefix = f"{self.project}/sprints/{sprint_id}/{timestamp}/"
        evidence_manifest = []

        # Upload all evidence files
        for root, _dirs, files in os.walk(evidence_dir):
            for file in files:
                local_path = os.path.join(root, file)
                relative_path = os.path.relpath(local_path, evidence_dir)
                s3_key = f"{s3_prefix}{relative_path}"

                # Calculate SHA-256 for integrity checking
                with open(local_path, "rb") as f:
                    file_hash = hashlib.sha256(f.read()).hexdigest()

                # Upload to S3
                self.s3.upload_file(local_path, self.bucket, s3_key)
                evidence_manifest.append({
                    "file": relative_path,
                    "s3_key": s3_key,
                    "sha256": file_hash,
                    "size_bytes": os.path.getsize(local_path),
                    "uploaded": datetime.now().isoformat(),
                })
                print(f"[OK] Uploaded: {s3_key}")

        # Upload the manifest alongside the evidence
        manifest_key = f"{s3_prefix}evidence_manifest.json"
        self.s3.put_object(
            Bucket=self.bucket,
            Key=manifest_key,
            Body=json.dumps(evidence_manifest, indent=2),
            ContentType="application/json",
        )

        print(f"[OK] Evidence archived: s3://{self.bucket}/{s3_prefix}")
        print(f"     Total files: {len(evidence_manifest)}")
        return f"s3://{self.bucket}/{s3_prefix}"


# Usage in CI/CD (e.g., a GitHub Actions step)
archiver = ASPICEEvidenceArchiver(
    s3_bucket="aspice-evidence-prod",
    project_name="adas-emergency-braking",
)
archiver.archive_sprint_evidence(
    sprint_id="Sprint-25",
    evidence_dir="evidence/aspice",
)
```
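The manifest's SHA-256 hashes are only useful if something checks them. A minimal verification sketch (the helper name `verify_manifest` is illustrative; here it is demonstrated against a throwaway local directory rather than S3):

```python
import hashlib
import json
import os
import tempfile


def verify_manifest(manifest_path: str, evidence_dir: str) -> list:
    """Recompute SHA-256 for each archived file; return the files that mismatch."""
    with open(manifest_path) as f:
        manifest = json.load(f)
    mismatches = []
    for entry in manifest:
        local_path = os.path.join(evidence_dir, entry["file"])
        with open(local_path, "rb") as f:
            actual = hashlib.sha256(f.read()).hexdigest()
        if actual != entry["sha256"]:
            mismatches.append(entry["file"])
    return mismatches


# Demo against a throwaway evidence directory
with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "report.md")
    with open(path, "wb") as f:
        f.write(b"evidence")
    manifest_file = os.path.join(d, "evidence_manifest.json")
    with open(manifest_file, "w") as f:
        json.dump([{"file": "report.md",
                    "sha256": hashlib.sha256(b"evidence").hexdigest()}], f)
    MISMATCHES = verify_manifest(manifest_file, d)
```

In practice this check could run when evidence is downloaded for an assessment, giving the assessor a tamper-evidence argument on top of S3's own durability guarantees.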
## Summary

**Continuous ASPICE compliance implementation:**
| Automation Point | ASPICE Process | Tool | Evidence Generated |
|---|---|---|---|
| Pre-commit hooks | SWE.3 BP5 | cppcheck, clang-format | MISRA compliance logs |
| Pull Request checks | SWE.3 BP7, SUP.8 BP5 | GitHub Actions | Code review records, traceability |
| CI/CD pipeline | SWE.4-6 | pytest, Robot Framework | Test execution logs, coverage reports |
| Evidence generator | All SWE processes | Python scripts | Formal work products (reports, matrices) |
| Real-time dashboard | All processes | Flask, Chart.js | Compliance metrics visualization |
| Evidence archival | SUP.8 BP4 | AWS S3 | Long-term evidence storage |
**Key benefits:**
- Zero Manual Documentation: Evidence auto-generated from development activities
- Real-Time Visibility: Dashboard shows compliance status continuously
- Assessment Readiness: Evidence always available (no scrambling before audit)
- Quality Gates: Non-compliant code cannot merge (enforced by CI/CD)
**Assessment impact:**
- Before: 6 months evidence collection, 2-week audit
- After: Continuous evidence, 2-day audit review (assessors just verify existing evidence)