7.2: Common Findings
Introduction
ASPICE assessors see the same failures repeatedly: incomplete traceability, missing test coverage, undocumented architecture decisions. This section catalogs the Top 10 most common assessment findings, explains why they occur, and provides concrete corrective actions to avoid them.
Finding #1: Insufficient Requirements Traceability (SWE.1 BP5)
What Assessors Find
[FAIL] Problem: Cannot trace code back to requirements (or vice versa).
Evidence:
- Git commit messages lack requirement IDs:
  "Fixed bug" instead of "[SWE-234] Fix brake latency calculation"
- Jira stories have no parent Epic link (system requirements)
- Test cases don't reference requirements they verify
Assessor Question: "Show me which code implements [SWE-234]." Team Response: "Um... let me search the codebase..." (FAIL)
Impact: SWE.1 rated CL0 or CL1 (critical gap, blocks CL2).
Root Cause
- No enforcement mechanism (pre-commit hooks, CI checks)
- Developers don't understand importance of traceability
- Manual traceability (too tedious, skipped under time pressure)
Corrective Actions
1. Enforce Traceability in a Commit-Msg Hook
The commit message does not exist yet when `pre-commit` runs, so the check belongs in the `commit-msg` hook, which receives the message file as its first argument.
```bash
#!/bin/bash
# .git/hooks/commit-msg
# ASPICE SWE.1 BP5: Enforce requirement traceability
COMMIT_MSG=$(cat "$1")
# Check if the commit message contains a requirement ID
if ! echo "$COMMIT_MSG" | grep -qE '\[(SWE|SYS|TC)-[0-9]+\]'; then
    echo "[FAIL] ERROR: Commit message must reference a requirement ID"
    echo "       Format:  [SWE-XXX] Your commit message"
    echo "       Example: [SWE-234] Implement pedestrian detection algorithm"
    exit 1
fi
echo "[PASS] Traceability check passed"
```
Installation:
```bash
cp scripts/commit-msg .git/hooks/commit-msg
chmod +x .git/hooks/commit-msg
```
Result: commits without a requirement ID are rejected locally. Note that hooks can be bypassed with `git commit --no-verify`, so the same check should also run in CI.
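Since local hooks are easy to bypass, a CI-side sweep of the pull request's commits closes the gap. A minimal sketch — the script path and the `origin/main..HEAD` range are assumptions about your repo layout:

```python
# scripts/check_commit_traceability.py (hypothetical path)
# CI-side mirror of the local hook: flag commits whose message lacks a requirement ID.
import re
import subprocess

REQ_ID = re.compile(r'\[(SWE|SYS|TC)-[0-9]+\]')

def has_requirement_id(message: str) -> bool:
    """True if the message references an ID like [SWE-234]."""
    return bool(REQ_ID.search(message))

def untraced_commits(rev_range: str = "origin/main..HEAD") -> list:
    """Short SHAs of commits in rev_range whose subject has no requirement ID."""
    out = subprocess.run(
        ["git", "log", "--format=%h%x00%s", rev_range],
        capture_output=True, text=True, check=True,
    ).stdout
    return [
        sha
        for sha, _, subject in (line.partition("\x00") for line in out.splitlines())
        if not has_requirement_id(subject)
    ]

# In the CI job: fail the build if untraced_commits() returns a non-empty list.
```

Failing the CI job on a non-empty result catches anything a bypassed hook let through.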
2. Automate Traceability Matrix Generation
```python
# scripts/generate_traceability_matrix.py
"""
Auto-generate traceability matrix from Jira + Git
ASPICE SWE.1 BP5: Ensure bidirectional traceability
"""
import re

import git
import pandas as pd
from jira import JIRA


class TraceabilityMatrix:
    def __init__(self, jira_url: str, git_repo_path: str):
        self.jira = JIRA(jira_url)  # add basic_auth=... or token_auth=... as required
        self.repo = git.Repo(git_repo_path)

    def generate(self, project_key: str) -> list:
        """Generate traceability matrix: Requirement → Code → Test"""
        matrix = []
        # Get all requirements from Jira (maxResults=False fetches every page)
        requirements = self.jira.search_issues(f'project={project_key}', maxResults=False)
        for req in requirements:
            req_id = req.key  # e.g., SWE-234
            # Find commits implementing this requirement
            commits = []
            for commit in self.repo.iter_commits():
                if re.search(rf'\[{re.escape(req_id)}\]', commit.message):
                    commits.append(commit.hexsha[:7])
            # Find test cases for this requirement
            test_cases = req.fields.customfield_10050  # Jira custom field for tests
            matrix.append({
                "Requirement ID": req_id,
                "Summary": req.fields.summary,
                "Implemented By (Commits)": ", ".join(commits) if commits else "[FAIL] NOT IMPLEMENTED",
                "Tested By": test_cases if test_cases else "[FAIL] NO TESTS",
                "Status": "[PASS] Complete" if (commits and test_cases) else "[WARN] Incomplete",
            })
        return matrix

    def export_to_excel(self, matrix: list, output_file: str):
        """Export matrix to Excel for assessor review"""
        df = pd.DataFrame(matrix)
        df.to_excel(output_file, index=False)
        print(f"[PASS] Traceability matrix generated: {output_file}")


# Usage
matrix_gen = TraceabilityMatrix(
    jira_url="https://jira.company.com",
    git_repo_path="/path/to/repo",
)
matrix = matrix_gen.generate("PARKING_ASSIST")
matrix_gen.export_to_excel(matrix, "traceability_matrix.xlsx")
```
Frequency: Run monthly, review with team (identify gaps early).
Finding #2: Incomplete Test Coverage (SWE.4 BP4)
What Assessors Find
[FAIL] Problem: Code coverage below project-defined thresholds (e.g., 60% coverage against an 80% project target for ASIL-B code).
Evidence:
- Coverage report shows critical functions untested:
  File: emergency_brake.c
  Line Coverage: 65%
  Branch Coverage: 58% [FAIL] (project threshold: ≥80%)
- Test cases cover happy path only (no edge cases, error handling)
Assessor Question: "What happens if the camera fails while braking?" Team Response: "We didn't test that scenario." (FAIL)
Impact: SWE.4 rated CL1 (inadequate verification).
Root Cause
- No coverage enforcement in CI/CD
- Developers prioritize features over tests (schedule pressure)
- Legacy code inherited without tests (retrofit is painful)
Corrective Actions
1. Enforce Coverage Threshold in CI Pipeline
```yaml
# .github/workflows/coverage-gate.yml
name: Code Coverage Gate
on: [pull_request]
jobs:
  coverage:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Run Tests with Coverage
        # --cov-fail-under already fails this step below 80%
        run: pytest --cov=src --cov-report=xml --cov-fail-under=80
      - name: Coverage Report
        run: |
          COVERAGE=$(python -c "import xml.etree.ElementTree as ET; print(ET.parse('coverage.xml').getroot().attrib['line-rate'])")
          echo "Coverage: $(echo "$COVERAGE * 100" | bc)%"
          if (( $(echo "$COVERAGE < 0.80" | bc -l) )); then
            echo "[FAIL] Coverage below 80% project threshold"
            exit 1
          fi
```
Result: PR CANNOT merge if coverage <80%.
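The inline Python in the Coverage Report step can be factored into a small, testable script. A sketch — the script path is hypothetical, and it assumes a Cobertura-format `coverage.xml`, which both pytest-cov (Python) and gcovr (for C modules like `emergency_brake.c`) can produce:

```python
# scripts/coverage_gate.py (hypothetical path)
# Parse a Cobertura coverage report and enforce the project threshold.
import xml.etree.ElementTree as ET

def line_rate(xml_text: str) -> float:
    """Extract the overall line-rate (0.0-1.0) from a Cobertura report."""
    return float(ET.fromstring(xml_text).attrib["line-rate"])

def gate(xml_text: str, threshold: float = 0.80) -> bool:
    """True if coverage meets the project threshold; prints the result."""
    rate = line_rate(xml_text)
    print(f"Coverage: {rate * 100:.1f}% (threshold: {threshold * 100:.0f}%)")
    return rate >= threshold

# Synthetic report for illustration:
SAMPLE = '<coverage line-rate="0.85" branch-rate="0.78"></coverage>'
```

Because both toolchains emit the same XML schema, one gate script covers the Python and C parts of the codebase.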
2. Incremental Test Coverage Improvement (for Legacy Code)
## Legacy Code Coverage Improvement Plan
**Current State**: 58% coverage (emergency_brake.c)
**Target**: 80% coverage (ASIL-B requirement)
**Phased Approach** (12 weeks):
### Phase 1 (Weeks 1-4): Test Critical Functions First
- **Priority**: Functions in safety-critical path (ASIL-B)
- **Functions to Test**:
1. `calculate_brake_force()` - Brake force calculation
2. `detect_pedestrian()` - Pedestrian detection logic
3. `activate_emergency_brake()` - Brake actuation
**Effort**: 3 developers × 20 hours = 60 hours
**Expected Coverage**: 58% → 72%
---
### Phase 2 (Weeks 5-8): Edge Cases + Error Handling
- **Scenarios to Test**:
- Camera failure (sensor timeout)
- Invalid inputs (negative distance, speed)
- Boundary conditions (distance=0, speed=max)
**Effort**: 2 developers × 20 hours = 40 hours
**Expected Coverage**: 72% → 80%
---
### Phase 3 (Weeks 9-12): Remaining Code + Refactoring
- **Actions**:
- Test utility functions (logging, diagnostics)
- Refactor untestable code (reduce cyclomatic complexity)
**Effort**: 2 developers × 10 hours = 20 hours
**Expected Coverage**: 80% → 85% (exceeds target)
---
**Total Effort**: 120 hours (≈3 person-weeks)
**ROI**: Achieve CL2 rating (enables $5M OEM contract)
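The Phase 2 scenarios translate directly into table-driven unit tests. A sketch against a hypothetical Python reference model of `calculate_brake_force()` — the signature, limits, and error behaviour below are illustrative assumptions, not the project's actual C interface:

```python
# Hypothetical reference model for illustration only; the real ASIL-B
# implementation lives in emergency_brake.c.
MAX_FORCE_N = 8000.0
MAX_SPEED_KMH = 250.0

def calculate_brake_force(distance_m: float, speed_kmh: float) -> float:
    """Return required brake force; reject invalid sensor inputs."""
    if distance_m < 0 or speed_kmh < 0:
        raise ValueError("negative sensor input")        # invalid-input scenario
    if speed_kmh > MAX_SPEED_KMH:
        raise ValueError("speed out of range")           # boundary: speed=max exceeded
    if distance_m == 0:
        return MAX_FORCE_N                               # boundary: distance=0 -> full braking
    # Toy model: force grows with speed, shrinks with available distance
    return min(MAX_FORCE_N, 50.0 * speed_kmh / distance_m)

# Table-driven edge cases mirroring the Phase 2 plan
EDGE_CASES = [
    (0.0, 100.0, MAX_FORCE_N),   # distance=0 boundary
    (100.0, 0.0, 0.0),           # stationary vehicle
]
```

Each row of the table becomes one parametrized test case, which keeps the edge-case inventory visible to reviewers and assessors.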
Finding #3: Undocumented Architecture Decisions (SWE.2 BP1)
What Assessors Find
[FAIL] Problem: No evidence of architectural design process.
Evidence:
- No Architecture Decision Records (ADRs)
- Architecture exists in architect's head only
- No documentation of WHY design choices were made
Assessor Question: "Why did you choose Automotive Ethernet over FlexRay?" Team Response: "It seemed like a good idea at the time..." (FAIL)
Impact: SWE.2 rated CL0 (no documented architecture).
Corrective Actions
1. Mandate ADRs for Major Design Decisions
# Policy: Architecture Decision Records (ADRs)
**When to Write an ADR**:
- New architectural component added (e.g., sensor fusion module)
- Technology choice (e.g., communication protocol selection)
- Design pattern change (e.g., switch from monolith to microservices)
- Third-party library adoption (e.g., OpenCV for computer vision)
**When NOT to Write an ADR**:
- Minor refactoring (rename variable)
- Bug fixes
- Code formatting changes
---
**ADR Template** (mandatory fields):
```markdown
# ADR-XXX: [Decision Title]
**Status**: [Proposed | Accepted | Rejected | Superseded]
**Date**: YYYY-MM-DD
## Context
[What problem are we solving? What constraints exist?]
## Decision
[What solution did we choose?]
## Rationale
[WHY this solution? What are the benefits?]
## Consequences
[What are the trade-offs? Pros and cons?]
## Alternatives Considered
[What other options did we reject? Why?]
## Traceability
[Links to requirements: Implements [SYS-XXX]]
```

**Review Process**:
- Developer drafts ADR in PR
- Architecture review meeting (1 hour, quorum: 2 architects + tech lead)
- ADR approved → merge to `docs/architecture/`
- ADR rejected → document rejection rationale
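The mandatory template fields can be checked mechanically in CI. A sketch, assuming ADRs live as `ADR-*.md` files under `docs/architecture/`:

```python
# scripts/lint_adrs.py (hypothetical path)
# Verify that each ADR contains the mandatory sections from the template.
import pathlib

REQUIRED_SECTIONS = [
    "## Context", "## Decision", "## Rationale",
    "## Consequences", "## Alternatives Considered", "## Traceability",
]

def missing_sections(adr_text: str) -> list:
    """Return the mandatory section headings absent from one ADR document."""
    return [s for s in REQUIRED_SECTIONS if s not in adr_text]

def lint_tree(root: str = "docs/architecture") -> dict:
    """Map ADR filename -> list of missing sections, for every ADR under root."""
    return {
        p.name: missing_sections(p.read_text())
        for p in pathlib.Path(root).glob("ADR-*.md")
    }
```

Running this in the PR pipeline turns an incomplete ADR into a visible CI failure instead of an assessment finding.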
---
## Finding #4: Code Reviews Not Performed (SWE.3 BP7)
### What Assessors Find
[FAIL] **Problem**: Code merged without peer review.
**Evidence**:
- GitHub shows PRs merged without approvals
- Or: PRs have only 1 approval (project policy recommends ≥2 for safety-critical code; ASPICE requires reviews but does not mandate a specific reviewer count)
- Review comments are superficial: "LGTM 👍" (no actual verification)
**Assessor Question**: "Show me evidence that this code was reviewed for MISRA compliance."
**Team Response**: Blank stares (FAIL)
**Impact**: SWE.3 rated **CL1** (insufficient verification).
---
### Corrective Actions
#### 1. Enforce 2 Approvals in GitHub Branch Protection
```yaml
# GitHub Repository Settings → Branches → Branch Protection Rule
Branch name pattern: main (GitHub allows one pattern per rule; create a second rule for develop)
[PASS] Require pull request reviews before merging
Required number of approvals: 2
[PASS] Dismiss stale pull request approvals when new commits are pushed
[PASS] Require review from Code Owners
[PASS] Require status checks to pass before merging
[PASS] ci/coverage-gate (must pass)
[PASS] ci/misra-check (must pass)
[PASS] Do not allow bypassing the above settings
    (Even admins cannot force-merge)
```

**Result**: IMPOSSIBLE to merge without 2 approvals + passing CI.
2. Code Review Checklist (ASPICE-Compliant)
# Code Review Checklist (PR Template)
**Reviewer**: Before approving, verify:
## SWE.1: Requirements Traceability
- [ ] PR title references requirement ID: `[SWE-XXX]`
- [ ] Code comments include `@implements [SWE-XXX]` tags
- [ ] Changes align with requirement acceptance criteria
## SWE.3: Coding Standards (BP5)
- [ ] MISRA C compliance: CI check passed (0 critical violations)
- [ ] No compiler warnings (`-Wall -Wextra`)
- [ ] Functions have Doxygen comments (purpose, params, return)
- [ ] Cyclomatic complexity ≤15 (SonarQube check)
## SWE.4: Unit Testing
- [ ] Unit tests written for new functions
- [ ] Code coverage ≥80% (check coverage report)
- [ ] All tests pass (CI pipeline green)
- [ ] Edge cases tested (boundary values, error conditions)
## SWE.3: Design Quality
- [ ] No code duplication (DRY principle)
- [ ] Clear variable names (no single-letter variables except loop counters)
- [ ] Function length ≤50 lines (if longer, refactor)
## Security
- [ ] No hardcoded secrets (API keys, passwords)
- [ ] Input validation for external data
- [ ] No SQL injection vulnerabilities (use parameterized queries)
---
**Approval**: I verify this PR meets all ASPICE quality criteria.
Finding #5: No Change Request Management (SUP.10)
What Assessors Find
[FAIL] Problem: Requirements change frequently with no impact analysis or approval process.
Evidence:
- Jira shows requirements modified directly (no Change Request workflow)
- No record of who approved changes
- No impact analysis (effort estimate, affected modules)
Assessor Question: "How do you manage changes to requirements?" Team Response: "Product Owner just updates the Jira ticket..." (FAIL - no control)
Impact: SUP.10 rated CL0 (no change management process).
Corrective Actions
1. Implement Change Request Workflow in Jira
```yaml
# Jira Workflow: Change Request (SUP.10)
name: Change Request Workflow
issue_type: Change Request
states:
  - name: Submitted
    description: "Requester submits change proposal"
    required_fields:
      - original_requirement: "Which requirement needs changing?"
      - proposed_change: "What is the new requirement?"
      - justification: "Why is this change needed?"
    transitions:
      - to: Under Review
  - name: Under Review
    description: "Impact analysis performed"
    required_fields:
      - affected_modules: "Which code modules are impacted?"
      - effort_estimate: "Story points for implementation"
      - risk_assessment: "Low/Medium/High risk"
    approval_required:
      - Product Owner
      - Tech Lead
    transitions:
      - to: Approved
      - to: Rejected
  - name: Approved
    description: "Change authorized, implementation can proceed"
    automated_actions:
      - create_user_story: true  # Auto-create Jira story for implementation
      - update_traceability: true
      - notify_team: true
    transitions:
      - to: Implemented
  - name: Rejected
    description: "Change not approved"
    required_fields:
      - rejection_reason: "Why was change rejected?"
  - name: Implemented
    description: "Change completed and verified"
    required_fields:
      - implementation_commit: "Git commit SHA implementing change"
      - verification_test: "Test case verifying change"
```
Enforcement: CANNOT modify requirements without Change Request approval.
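Enforcement can also be audited after the fact by scanning requirement issues for edits that lack an approved Change Request. A sketch over pre-extracted change records — the record shape below is an assumption; in practice you would populate it from Jira's changelog API:

```python
# scripts/audit_requirement_changes.py (hypothetical path)
# Flag requirement issues whose description changed without an approved CR.
def unauthorized_changes(records: list) -> list:
    """records: [{"key": str, "description_changed": bool, "linked_cr_status": str|None}]
    Return the keys of requirements edited without an approved Change Request."""
    return [
        r["key"]
        for r in records
        if r["description_changed"] and r.get("linked_cr_status") != "Approved"
    ]

# Example audit input (synthetic data for illustration):
records = [
    {"key": "SWE-234", "description_changed": True, "linked_cr_status": "Approved"},
    {"key": "SWE-240", "description_changed": True, "linked_cr_status": None},
    {"key": "SWE-241", "description_changed": False, "linked_cr_status": None},
]
```

Run it periodically and review the output with the Product Owner, so workflow bypasses surface long before an assessor finds them.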
Finding #6: Missing Integration Tests (SWE.5 BP3)
What Assessors Find
[FAIL] Problem: Components tested in isolation, but not together.
Evidence:
- Unit tests exist (SWE.4 [PASS])
- Acceptance tests exist (SWE.6 [PASS])
- NO integration tests (SWE.5 [FAIL])
Assessor Question: "How do you test that the camera driver and pedestrian detector work together?" Team Response: "We test each separately... they should work together?" (FAIL)
Impact: SWE.5 rated CL0 (no integration verification).
Corrective Actions
1. Define Integration Test Scope
## Integration Testing Strategy (SWE.5)
**What to Test** (component interactions):
1. **Camera Driver ↔ Pedestrian Detector**
- Test: Camera provides frames, detector processes frames
- Verification: Detector receives valid image data (1920x1080, 30fps)
2. **Pedestrian Detector ↔ Sensor Fusion**
- Test: Detector sends bounding boxes, fusion combines with LIDAR
- Verification: Fusion algorithm receives detections within 50ms
3. **Sensor Fusion ↔ Brake Controller**
- Test: Fusion triggers emergency brake signal
- Verification: Brake actuator receives command within 100ms
**Test Environment**: HIL (Hardware-in-the-Loop) test bench
**Test Framework**: Robot Framework
**Frequency**: Every sprint (before Sprint Review)
2. Example Integration Test (Robot Framework)
```robotframework
*** Settings ***
Library    HILLibrary.py

*** Test Cases ***
Camera to Pedestrian Detector Integration
    [Documentation]    SWE.5 BP3: Integration test
    [Tags]    Integration    ASIL-B
    # Setup
    Initialize Camera Driver    config=camera_hil.yaml
    Initialize Pedestrian Detector
    # Execute
    ${frame} =    Camera Driver Capture Frame
    ${detections} =    Pedestrian Detector Process    ${frame}
    # Verify
    Should Not Be Empty    ${detections}    msg=Detector must process camera frames
    ${latency_ms} =    Get Processing Latency
    Should Be True    ${latency_ms} < 50    msg=Detection latency must be <50ms
```
Top 10 Common Findings Summary
| # | Finding | Affected Process | Corrective Action | Effort |
|---|---|---|---|---|
| 1 | Insufficient traceability | SWE.1 BP5 | Pre-commit hook + auto-traceability matrix | 1 week |
| 2 | Incomplete test coverage | SWE.4 BP4 | CI coverage gate + legacy test plan | 3 months |
| 3 | Undocumented architecture | SWE.2 BP1 | Mandate ADRs for design decisions | 2 weeks |
| 4 | No code reviews | SWE.3 BP7 | Enforce 2 approvals + review checklist | 1 day |
| 5 | No change management | SUP.10 | Jira Change Request workflow | 1 week |
| 6 | Missing integration tests | SWE.5 BP3 | HIL test bench + Robot Framework | 1 month |
| 7 | No retrospectives | SUP.1 BP7 | Mandatory bi-weekly retrospectives | Ongoing |
| 8 | Inconsistent process | MAN.3 | Document process in Confluence | 2 weeks |
| 9 | No qualification tests | SWE.6 BP3 | Gherkin acceptance tests + Sprint Review demos | 2 weeks |
| 10 | Poor CM practices | SUP.8 | Git branching strategy + semantic versioning | 1 week |
Pre-Assessment Gap Analysis
Self-Assessment Tool
```python
# ASPICE Self-Assessment Checklist
class GapAnalysis:
    """Identify gaps before formal assessment.

    The data-collection helpers (count_commits_with_req_id, count_total_commits,
    get_code_coverage, count_adrs) are project-specific and must be implemented
    against your own Git history, coverage reports, and docs tree.
    """

    def check_compliance(self) -> dict:
        """Run compliance checks, return gap list"""
        gaps = []
        # Check 1: Traceability
        traced_commits = self.count_commits_with_req_id()
        total_commits = max(self.count_total_commits(), 1)  # guard against empty repo
        if (traced_commits / total_commits) < 0.95:
            gaps.append({
                "process": "SWE.1 BP5",
                "finding": "Insufficient traceability",
                "severity": "Critical",
                "metric": f"{(traced_commits / total_commits) * 100:.1f}% commits traced (target: 95%)",
            })
        # Check 2: Code Coverage
        coverage = self.get_code_coverage()
        if coverage < 80:
            gaps.append({
                "process": "SWE.4 BP4",
                "finding": "Code coverage below ASIL-B threshold",
                "severity": "Critical",
                "metric": f"{coverage}% coverage (target: 80%)",
            })
        # Check 3: ADRs
        adr_count = self.count_adrs()
        if adr_count < 3:
            gaps.append({
                "process": "SWE.2 BP1",
                "finding": "Insufficient architecture documentation",
                "severity": "High",
                "metric": f"{adr_count} ADRs found (min: 3 for typical project)",
            })
        return {
            "total_gaps": len(gaps),
            "gaps": gaps,
            "assessment_readiness": "[PASS] Ready" if not gaps else "[FAIL] Not Ready",
        }


# Run self-assessment
checker = GapAnalysis()
result = checker.check_compliance()
if result["total_gaps"] > 0:
    print(f"[WARN] Found {result['total_gaps']} gaps:")
    for gap in result["gaps"]:
        print(f"  • {gap['process']}: {gap['finding']} ({gap['severity']})")
        print(f"    Metric: {gap['metric']}")
else:
    print("[PASS] No gaps found - ready for assessment!")
```
Summary
Top 3 Critical Findings (cause most failures):
- Insufficient Traceability (SWE.1 BP5) - Fix with pre-commit hooks
- Incomplete Test Coverage (SWE.4 BP4) - Fix with CI coverage gates
- No Code Reviews (SWE.3 BP7) - Fix with branch protection rules
Prevention Strategy:
- Run self-assessment 3 months before formal assessment
- Fix critical gaps first (traceability, coverage, reviews)
- Address medium gaps next (ADRs, integration tests)
- Low-priority gaps can be addressed post-assessment (if CL2 still achieved)
Next: Maintain continuous assessment readiness (24.03).