# 7.1: Evidence Collection

## Introduction

ASPICE assessors don't take your word for it—they need objective evidence. "We do code reviews" means nothing without PR approval records. "We track requirements" means nothing without a traceability matrix. This section shows exactly what evidence to collect, where to find it, and how to package it for assessment.


## Evidence vs. Documentation

### What Counts as Evidence?

| Work Product Type | Evidence Example | NOT Evidence |
|---|---|---|
| Requirements (SWE.1) | Jira User Story export (PDF) with acceptance criteria + traceability links | PowerPoint slide saying "We write requirements" |
| Architecture (SWE.2) | Architecture Decision Record (ADR-007.md) in Git repository | Verbal explanation "We chose Ethernet for sensors" |
| Source Code (SWE.3) | Git commit log showing [SWE-234] in commit messages | Screenshot of IDE showing code |
| Code Review (SWE.3) | GitHub PR with 2 approval comments + MISRA check passed | Email saying "Bob reviewed my code" |
| Unit Tests (SWE.4) | CI pipeline log showing 94% coverage + all tests passed | Test plan document (not execution proof) |
| Integration Tests (SWE.5) | HIL test bench log (XML) with pass/fail results | Manual test case list (no proof tests ran) |
| Traceability (SUP.8) | Traceability matrix (Excel/Markdown) linking [SYS-45] → [SWE-234] → TC-001 | Claim "All requirements are implemented" |

**Key Principle**: Evidence must be verifiable (the assessor can independently confirm it exists and is correct).
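"Verifiable" is concrete: the assessor can rerun the same query you did. A minimal sketch of that kind of spot-check, where `find_requirement_commits` is a hypothetical helper operating on commit subjects as produced by `git log --format=%s`:

```python
import re

def find_requirement_commits(req_id: str, commit_messages: list[str]) -> list[str]:
    """Return the commit subjects that reference the given requirement ID."""
    pattern = re.compile(re.escape(f"[{req_id}]"))
    return [msg for msg in commit_messages if pattern.search(msg)]

# Subjects as produced by `git log --format=%s`
log = [
    "[SWE-234] Add emergency braking algorithm",
    "Fix typo in README",
    "[SWE-234] Tune brake latency to 87ms",
]
print(find_requirement_commits("SWE-234", log))  # the two [SWE-234] commits
```

If a claim cannot be replayed like this (from Git, Jira, or the CI server), treat it as documentation, not evidence.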


## Evidence Collection Matrix

### Per-Process Evidence Checklist

#### SWE.1: Software Requirements Analysis

**Base Practices (BP) to Prove**:
- BP1: Specify software requirements (functional + non-functional)
- BP5: Ensure bidirectional traceability
- BP7: Manage changes to requirements

**Required Evidence**:

## SWE.1 Evidence Package

### 1. Requirements Specification (BP1)
**Format**: Jira export (JSON or PDF)

**Contents**:
- 20+ User Stories from project scope
- Each story must include:
  - Summary (clear, concise title)
  - Description ("As a / I want / So that" format)
  - Acceptance Criteria (Given/When/Then, testable)
  - Priority, Story Points
  - Safety Classification (ASIL level, if applicable)

**Location**: `evidence/SWE.1/jira_export_parking_assist.json`

**How to Generate**:
```bash
# Export Jira stories via REST API
curl -u user:token "https://jira.company.com/rest/api/2/search?jql=project=PARKING_ASSIST&fields=summary,description,acceptance_criteria" > jira_export.json
```
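Note that a single call to Jira's search endpoint returns only the first page of results (capped by `maxResults`, 50 by default), so a large backlog needs a paging loop. A minimal sketch of that loop, with `fake_fetch` standing in for the real `GET /rest/api/2/search?startAt=N` request:

```python
def paginate(fetch_page):
    """Collect all issues by calling fetch_page(start_at) until the total is reached."""
    issues, start = [], 0
    while True:
        page = fetch_page(start)
        issues.extend(page["issues"])
        start += len(page["issues"])
        if start >= page["total"] or not page["issues"]:
            break
    return issues

# Simulated two-page response (75 issues, 50 per page), standing in for the REST call
fake_data = [{"key": f"PA-{i}"} for i in range(75)]
def fake_fetch(start_at, page_size=50):
    return {"total": len(fake_data), "issues": fake_data[start_at:start_at + page_size]}

print(len(paginate(fake_fetch)))  # 75
```

In the real script, `fetch_page` would wrap the authenticated HTTP request shown in the curl example.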

### 2. Traceability Matrix (BP5)
**Format**: Excel or Markdown

**Contents**:
- Bidirectional links: System Req → Software Req → Test Case
- Example row:

| System Req | Software Req | Implemented By | Tested By | Status |
|---|---|---|---|---|
| [SYS-45] Pedestrian collision prevention | [SWE-234] Emergency braking | commit abc123 | TC-SWE-234-1 | [PASS] Complete |

**Location**: `evidence/SWE.1/traceability_matrix.xlsx`

**How to Generate**:
```bash
# Auto-generate from Jira + Git
python scripts/generate_traceability_matrix.py \
  --jira-project PARKING_ASSIST \
  --git-repo /path/to/repo \
  --output evidence/SWE.1/traceability_matrix.xlsx
```
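The generator script itself is project-specific, but its core is a join across three exports. A simplified sketch under assumed input shapes (`stories` from the Jira export, `commits` from `git log`, `test_results` from the JUnit XML; all names hypothetical):

```python
def build_traceability_rows(stories, commits, test_results):
    """Join requirement IDs to implementing commits and test outcomes.

    stories: {"SWE-234": "Emergency braking", ...}
    commits: list of (sha, message) tuples
    test_results: {"TC-SWE-234-1": "PASS", ...}
    """
    rows = []
    for req_id, title in stories.items():
        impl = [sha for sha, msg in commits if f"[{req_id}]" in msg]
        tests = {tc: r for tc, r in test_results.items() if req_id in tc}
        complete = bool(impl) and bool(tests) and all(r == "PASS" for r in tests.values())
        rows.append({"req": req_id, "title": title, "commits": impl,
                     "tests": sorted(tests), "status": "Complete" if complete else "Open"})
    return rows

rows = build_traceability_rows(
    {"SWE-234": "Emergency braking"},
    [("abc123", "[SWE-234] Add braking algorithm")],
    {"TC-SWE-234-1": "PASS"},
)
print(rows[0]["status"])  # Complete
```

A row with no implementing commit or no passing test shows up as "Open", which is exactly the gap an assessor would find.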

### 3. Change Request Records (BP7)
**Format**: Jira Change Request export

**Contents**:
- 5-10 Change Requests showing:
  - Original requirement
  - Requested change
  - Impact analysis (affected modules, effort estimate)
  - Approval/rejection decision
  - Implementation tracking

**Location**: `evidence/SWE.1/change_requests.pdf`

**Example**:
```markdown
# Change Request: [CR-045]

**Date**: 2025-03-15
**Requester**: Product Owner (customer feedback)

**Original Requirement**: [SWE-234] "Brake latency ≤150ms"
**Requested Change**: "Reduce brake latency to ≤100ms"

**Impact Analysis**:
- Affected Modules: Pedestrian detection algorithm, brake controller
- Effort Estimate: 2 weeks (40 hours)
- Risk: Medium (algorithm optimization required)

**Decision**: [PASS] APPROVED (safety-critical improvement)

**Implementation**: Completed in Sprint 12 (commit def456)
**Verification**: TC-SWE-234-1 updated, passed with 87ms latency
```

---

#### SWE.2: Software Architectural Design

**Base Practices to Prove**:
- BP1: Define software architecture
- BP3: Define interfaces
- BP7: Ensure consistency with system architecture

**Required Evidence**:

## SWE.2 Evidence Package

### 1. Architecture Decision Records (BP1)
**Format**: Markdown files in Git (`docs/architecture/ADR-*.md`)

**Contents**:
- 3-5 ADRs documenting major architectural decisions
- Each ADR must include:
  - Context (problem statement)
  - Decision (solution chosen)
  - Rationale (why this solution)
  - Consequences (pros/cons)
  - Alternatives Considered (what was rejected, why)
  - Traceability (links to requirements)

**Location**: `evidence/SWE.2/ADRs/`

**Example ADR** (excerpt):
```markdown
# ADR-007: Use Automotive Ethernet for Sensor Communication

**Status**: Accepted
**Date**: 2025-02-10

**Context**: We need to transmit 10 MB/s LIDAR data from sensor to ECU.

**Decision**: Use Automotive Ethernet (100BASE-T1)

**Rationale**:
- Bandwidth: 100 Mbps (sufficient for 10 MB/s)
- Latency: <10ms (meets real-time requirement)
- Cost: $5/connector (vs $15 for FlexRay)
- Industry trend: Automotive Ethernet is standard for ADAS

**Alternatives Considered**:
1. CAN: Rejected (bandwidth too low, 1 Mbps max)
2. FlexRay: Rejected (overkill, 3x cost of Ethernet)

**Traceability**: Implements [SYS-67] "High-bandwidth sensor interface"
```

### 2. Interface Specification (BP3)
**Format**: Doxygen-generated API documentation (HTML)

**Contents**:
- Function headers with parameters, return values, preconditions
- Example:

```c
/**
 * @brief Activates emergency braking
 *
 * Implements: [SWE-234] Emergency braking algorithm
 *
 * @param[in] distance_m Distance to obstacle (meters), range: [0.1, 100]
 * @param[in] speed_kmh Current vehicle speed (km/h), range: [0, 200]
 * @param[out] brake_force_n Calculated brake force (Newtons)
 *
 * @return 0 on success, -1 on invalid input
 *
 * @pre Vehicle speed sensor calibrated
 * @post Brake actuator receives force command within 100ms
 *
 * @safety ASIL-D (critical safety function)
 */
int activate_emergency_brake(float distance_m, float speed_kmh, float* brake_force_n);
```

**Location**: `evidence/SWE.2/doxygen_html/index.html`

**How to Generate**:
```bash
# Generate Doxygen documentation
doxygen Doxyfile
zip -r evidence/SWE.2/doxygen_api_docs.zip doxygen_html/
```

### 3. Architecture Diagram (BP7)
**Format**: PNG/PDF diagram showing component structure

**Contents**:
- System architecture overview (block diagram)
- Component interactions (data flow)
- Mapping to AUTOSAR layers (if applicable)

**Location**: `evidence/SWE.2/architecture_diagram.png`

**Tool**: draw.io, Lucidchart, or PlantUML

**Example (PlantUML)**:
```plantuml
@startuml
package "ADAS System" {
  [Camera Driver] --> [Pedestrian Detector]
  [LIDAR Driver] --> [Sensor Fusion]
  [Pedestrian Detector] --> [Sensor Fusion]
  [Sensor Fusion] --> [Emergency Brake Controller]
  [Emergency Brake Controller] --> [Brake Actuator]
}

note right of [Sensor Fusion]
  Implements: [SWE-235]
  ADR-007: Automotive Ethernet
end note
@enduml
```

---

#### SWE.3: Software Detailed Design

**Required Evidence**:

## SWE.3 Evidence Package

### 1. Source Code (BP6: Develop software units)
**Format**: Git repository export (ZIP)

**Contents**:
- All source files (`.c`, `.h`, `.cpp`, `.py`)
- Organized in project structure (`src/`, `include/`, `tests/`)

**Location**: `evidence/SWE.3/source_code.zip`

**How to Generate**:
```bash
git archive --format=zip --prefix=parking_assist/ HEAD > evidence/SWE.3/source_code.zip
```

### 2. Code Review Records (BP7: Verify design)
**Format**: GitHub PR export (PDF)

**Contents**:
- 10-15 sample Pull Requests showing:
  - PR title with requirement ID: "[SWE-234] Add pedestrian detection"
  - Code changes (diff)
  - Reviewer comments
  - 2+ approvals before merge
  - CI pipeline status (all checks passed)

**Location**: `evidence/SWE.3/code_reviews/`

**How to Generate**:
```bash
# Export PRs to PDF with headless Chrome
# (or open each with `gh pr view <N> --web` and print to PDF manually)
for PR_NUM in 42 43 44 45; do
  chromium --headless --print-to-pdf=evidence/SWE.3/PR-$PR_NUM.pdf \
    "https://github.com/company/parking-assist/pull/$PR_NUM"
done
```

### 3. MISRA Compliance Report (BP5: Ensure coding standards)
**Format**: cppcheck output (TXT or XML)

**Contents**:
- Static analysis results showing 0 critical MISRA violations
- Summary: total violations by severity

**Location**: `evidence/SWE.3/misra_report.xml`

**How to Generate**:
```bash
cppcheck --addon=misra --xml --output-file=evidence/SWE.3/misra_report.xml src/
```
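Assessors usually ask for the severity summary, not the raw XML. A sketch that derives it from cppcheck's version-2 XML layout (the sample below is a trimmed, hypothetical excerpt of a real report):

```python
import xml.etree.ElementTree as ET
from collections import Counter

def misra_summary(report_xml: str) -> Counter:
    """Count cppcheck findings by severity from a v2 XML report string."""
    root = ET.fromstring(report_xml)
    return Counter(err.get("severity") for err in root.iter("error"))

# Trimmed sample in cppcheck's --xml (version 2) layout
sample = """<results version="2">
  <errors>
    <error id="misra-c2012-10.4" severity="style" msg="Essential type mismatch"/>
    <error id="misra-c2012-17.7" severity="style" msg="Return value discarded"/>
    <error id="nullPointer" severity="error" msg="Null pointer dereference"/>
  </errors>
</results>"""
print(misra_summary(sample))  # Counter({'style': 2, 'error': 1})
```

Attach the summary table next to the raw XML so the "0 critical violations" claim is checkable at a glance.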

---

#### SWE.4: Software Unit Verification

**Required Evidence**:

## SWE.4 Evidence Package

### 1. Unit Test Source Code (BP1: Develop unit test specification)
**Format**: Test files in Git

**Contents**:
- All unit test files (`tests/unit/test_*.py`, `test_*.c`)
- Test cases cover:
  - Nominal cases (happy path)
  - Boundary cases (edge values)
  - Error cases (invalid inputs)

**Location**: `evidence/SWE.4/unit_tests.zip`

---

### 2. Test Execution Logs (BP3: Test software units)
**Format**: CI pipeline logs (TXT)

**Contents**:
- Pytest output showing all tests passed
- Example:

```
========================= test session starts ==========================
collected 87 items

tests/unit/test_emergency_brake.py::test_normal_operation PASSED [  1%]
tests/unit/test_emergency_brake.py::test_edge_case_short_distance PASSED [  2%]
...
tests/unit/test_emergency_brake.py::test_invalid_speed_negative PASSED [100%]

========================= 87 passed in 12.34s ==========================
```


**Location**: `evidence/SWE.4/ci_pipeline_log.txt`

**How to Generate**:
```bash
# Download latest CI run log from GitHub Actions
gh run view 1234567890 --log > evidence/SWE.4/ci_pipeline_log.txt
```

### 3. Code Coverage Report (BP4: Achieve test coverage)
**Format**: HTML coverage report

**Contents**:
- Branch coverage ≥80% (project-defined threshold; ISO 26262-6 Table 9 recommends branch coverage as the method for ASIL-B but does not mandate a specific percentage)
- Per-file coverage breakdown
- Uncovered lines highlighted

**Location**: `evidence/SWE.4/coverage_report.html`

**How to Generate**:
```bash
# --cov-branch enables branch coverage; without it pytest-cov measures line coverage only
pytest --cov=src --cov-branch --cov-report=html --cov-report=term
cp htmlcov/index.html evidence/SWE.4/coverage_report.html
```
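The threshold can also be gated automatically: pytest-cov's `--cov-fail-under=80` fails the build directly, or you can check the rates recorded in an XML report (`--cov-report=xml`). A sketch over the Cobertura-style root attributes that coverage.py emits (the sample is a trimmed, hypothetical excerpt):

```python
import xml.etree.ElementTree as ET

def branch_coverage_ok(coverage_xml: str, threshold: float = 0.80) -> bool:
    """Gate on the branch-rate attribute of a coverage.py XML report."""
    rate = float(ET.fromstring(coverage_xml).get("branch-rate"))
    return rate >= threshold

sample = '<coverage line-rate="0.94" branch-rate="0.86" version="7.4"></coverage>'
print(branch_coverage_ok(sample))  # True
```

Running this check in CI turns the ≥80% claim into continuously collected evidence rather than a one-off measurement.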

---

#### SWE.5: Software Integration

**Required Evidence**:

## SWE.5 Evidence Package

### 1. Integration Test Specification (BP1)
**Format**: Robot Framework test files (`.robot`)

**Location**: `evidence/SWE.5/integration_tests.zip`

---

### 2. Integration Test Results (BP3: Perform integration test)
**Format**: XML test results (JUnit format)

**Contents**:
- Test execution results from HIL test bench
- Example:
```xml
<testsuite name="Camera Integration Tests" tests="12" failures="0">
  <testcase classname="integration.camera" name="test_camera_frame_acquisition" time="2.45">
    <system-out>Frame acquired: 1920x1080, 30fps</system-out>
  </testcase>
  <testcase classname="integration.camera" name="test_pedestrian_detection_integration" time="5.12">
    <system-out>Detection latency: 87ms (threshold: 100ms) PASS</system-out>
  </testcase>
</testsuite>
```

**Location**: `evidence/SWE.5/integration_test_results.xml`


---

#### SWE.6: Software Qualification Testing

**Required Evidence**:

## SWE.6 Evidence Package

### 1. Acceptance Test Cases (BP2: Develop qualification test)
**Format**: Gherkin feature files (`.feature`)

**Location**: `evidence/SWE.6/acceptance_tests/`

---

### 2. Qualification Test Results (BP3: Perform qualification test)
**Format**: Behave HTML report

**Contents**:
- All acceptance test scenarios executed
- Pass/fail status for each scenario
- Product Owner sign-off (comment in Jira)

**Location**: `evidence/SWE.6/qualification_test_report.html`

---

### 3. Sprint Review Recording (BP3: Demonstrate to stakeholders)
**Format**: Video recording (MP4)

**Contents**:
- Live demo of feature to Product Owner
- Product Owner acceptance statement

**Location**: `evidence/SWE.6/sprint_review_demo.mp4`

**How to Capture**:
- Record Zoom/Teams meeting
- Or screen recording with OBS Studio

---

#### SUP.8: Configuration Management

**Required Evidence**:

## SUP.8 Evidence Package

### 1. Version Control Log (BP3: Establish baselines)
**Format**: Git log export

**Contents**:
- Commit history showing:
  - Requirement IDs in commit messages
  - Timestamps, authors
  - Baseline tags (e.g., `v1.0.0`, `v1.1.0`)

**Location**: `evidence/SUP.8/git_log.txt`

**How to Generate**:
```bash
git log --oneline --decorate --all > evidence/SUP.8/git_log.txt
```
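The "requirement IDs in commit messages" item is easy to enforce rather than merely document. A sketch of a CI or commit-msg hook check; the `[SWE-…]`/`[SYS-…]` prefix convention is an assumption taken from this guide's examples:

```python
import re

# Convention assumed from this guide's examples: subject starts with [SWE-n] or [SYS-n]
REQ_ID = re.compile(r"^\[(SWE|SYS)-\d+\]")

def check_commit_messages(messages: list[str]) -> list[str]:
    """Return commit subjects that violate the requirement-ID convention."""
    exempt = ("Merge ", "Revert ")  # merge/revert commits are generated, not authored
    return [m for m in messages if not m.startswith(exempt) and not REQ_ID.match(m)]

bad = check_commit_messages([
    "[SWE-234] Add pedestrian detection",
    "fix stuff",
    "Merge branch 'feature/braking'",
])
print(bad)  # ['fix stuff']
```

Run it over `git log --format=%s` in CI and the exported log becomes self-verifying evidence for BP3.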

### 2. Branch Strategy Documentation (BP1: Develop CM strategy)
**Format**: Markdown documentation

**Contents**:
- Branching model (Gitflow, trunk-based, etc.)
- Merge policies (require PR, 2 approvals, CI green)
- Release tagging convention (semantic versioning)

**Location**: `evidence/SUP.8/CM_strategy.md`

---

### 3. Traceability Matrix (BP5: Ensure traceability)
**Format**: Excel or Markdown

**Contents**: Same as the SWE.1 traceability matrix (stored here as well)

**Location**: `evidence/SUP.8/traceability_matrix.xlsx`


---

## Evidence Package Template

### Final Folder Structure

```
evidence_package_parking_assist/
├── 00_OVERVIEW.md (Assessment scope, project summary)
├── SWE.1_Requirements/
│   ├── jira_export.json
│   ├── traceability_matrix.xlsx
│   └── change_requests.pdf
├── SWE.2_Architecture/
│   ├── ADR-001-sensor-fusion.md
│   ├── ADR-002-control-algorithm.md
│   ├── ADR-007-automotive-ethernet.md
│   ├── architecture_diagram.png
│   └── doxygen_api_docs.zip
├── SWE.3_Design/
│   ├── source_code.zip
│   ├── code_reviews/ (PR-042.pdf, PR-043.pdf, ...)
│   └── misra_report.xml
├── SWE.4_Unit_Verification/
│   ├── unit_tests.zip
│   ├── ci_pipeline_log.txt
│   └── coverage_report.html
├── SWE.5_Integration/
│   ├── integration_tests.zip
│   └── integration_test_results.xml
├── SWE.6_Qualification/
│   ├── acceptance_tests/ (emergency_braking.feature)
│   ├── qualification_test_report.html
│   └── sprint_review_demo.mp4
├── SUP.1_Quality_Assurance/
│   ├── retrospective_notes.pdf
│   └── audit_checklist.xlsx
├── SUP.8_Configuration_Management/
│   ├── git_log.txt
│   ├── CM_strategy.md
│   └── traceability_matrix.xlsx
├── SUP.9_Problem_Resolution/
│   └── bug_reports.pdf (Jira bugs with root cause analysis)
├── SUP.10_Change_Management/
│   └── change_requests.pdf (overlap with SWE.1)
└── MAN.3_Project_Management/
    ├── sprint_planning_notes.pdf
    ├── burndown_charts.png
    └── project_status_reports.pdf
```


---

## Automated Evidence Collection Script

```python
#!/usr/bin/env python3
"""
Automated ASPICE Evidence Collection Script
Generates complete evidence package for assessment
"""

import os
import subprocess
import json
from pathlib import Path

class ASPICEEvidenceCollector:
    def __init__(self, project_name: str, output_dir: str):
        self.project = project_name
        self.output = Path(output_dir)
        self.output.mkdir(exist_ok=True)

    def collect_all_evidence(self):
        """Run all evidence collection tasks"""
        print(f"🔍 Collecting ASPICE evidence for {self.project}...")

        self.collect_swe1_requirements()
        self.collect_swe2_architecture()
        self.collect_swe3_design()
        self.collect_swe4_unit_tests()
        self.collect_swe5_integration()
        self.collect_swe6_qualification()
        self.collect_sup8_configuration()

        print(f"[PASS] Evidence package created: {self.output}")

    def collect_swe1_requirements(self):
        """SWE.1: Export Jira stories + traceability matrix"""
        swe1_dir = self.output / "SWE.1_Requirements"
        swe1_dir.mkdir(exist_ok=True)

        # Export Jira stories
        subprocess.run([
            "curl", "-u", f"{os.getenv('JIRA_USER')}:{os.getenv('JIRA_TOKEN')}",
            f"https://jira.company.com/rest/api/2/search?jql=project={self.project}",
            "-o", swe1_dir / "jira_export.json"
        ])

        # Generate traceability matrix
        subprocess.run([
            "python", "scripts/generate_traceability_matrix.py",
            "--project", self.project,
            "--output", swe1_dir / "traceability_matrix.xlsx"
        ])

        print("[PASS] SWE.1 evidence collected")

    def collect_swe2_architecture(self):
        """SWE.2: Copy ADRs + generate API docs"""
        swe2_dir = self.output / "SWE.2_Architecture"
        swe2_dir.mkdir(exist_ok=True)

        # Copy ADRs (run through the shell so the glob expands; a list-form
        # command would pass "ADR-*.md" to cp literally)
        subprocess.run(f"cp docs/architecture/ADR-*.md {swe2_dir}", shell=True)

        # Generate Doxygen docs
        subprocess.run(["doxygen", "Doxyfile"])
        subprocess.run([
            "zip", "-r", swe2_dir / "doxygen_api_docs.zip", "doxygen_html/"
        ])

        print("[PASS] SWE.2 evidence collected")

    def collect_swe3_design(self):
        """SWE.3: Export source code + code reviews + MISRA report"""
        swe3_dir = self.output / "SWE.3_Design"
        swe3_dir.mkdir(exist_ok=True)

        # Export source code
        subprocess.run([
            "git", "archive", "--format=zip", "HEAD",
            "-o", swe3_dir / "source_code.zip"
        ])

        # Export code reviews (PRs 1-15; adjust the range to the PRs you want).
        # Shell-style ">" redirection does not work inside an argument list,
        # so capture stdout explicitly and write it to a file.
        reviews_dir = swe3_dir / "code_reviews"
        reviews_dir.mkdir(exist_ok=True)
        for pr_num in range(1, 16):
            result = subprocess.run(
                ["gh", "pr", "view", str(pr_num), "--json", "title,body,reviews"],
                capture_output=True, text=True
            )
            (reviews_dir / f"PR-{pr_num:03d}.json").write_text(result.stdout)

        # Run MISRA check
        subprocess.run([
            "cppcheck", "--addon=misra", "--xml",
            "--output-file=" + str(swe3_dir / "misra_report.xml"),
            "src/"
        ])

        print("[PASS] SWE.3 evidence collected")

    def collect_swe4_unit_tests(self):
        """SWE.4: Copy unit tests + run coverage + export CI logs"""
        swe4_dir = self.output / "SWE.4_Unit_Verification"
        swe4_dir.mkdir(exist_ok=True)

        # Copy unit tests
        subprocess.run([
            "zip", "-r", swe4_dir / "unit_tests.zip", "tests/unit/"
        ])

        # Run tests with coverage
        subprocess.run([
            "pytest", "--cov=src", "--cov-report=html",
            "--cov-report=term", "--junitxml=test_results.xml"
        ])
        subprocess.run([
            "cp", "-r", "htmlcov/", swe4_dir / "coverage_report_html/"
        ])

        # Export CI logs (latest run) -- capture stdout instead of shell redirection
        result = subprocess.run(
            ["gh", "run", "view", "--log"], capture_output=True, text=True
        )
        (swe4_dir / "ci_pipeline_log.txt").write_text(result.stdout)

        print("[PASS] SWE.4 evidence collected")

    def collect_swe5_integration(self):
        """SWE.5: Copy integration tests + results"""
        swe5_dir = self.output / "SWE.5_Integration"
        swe5_dir.mkdir(exist_ok=True)

        subprocess.run([
            "zip", "-r", swe5_dir / "integration_tests.zip", "tests/integration/"
        ])

        subprocess.run([
            "cp", "test_results/integration_results.xml", swe5_dir
        ])

        print("[PASS] SWE.5 evidence collected")

    def collect_swe6_qualification(self):
        """SWE.6: Copy acceptance tests + generate report"""
        swe6_dir = self.output / "SWE.6_Qualification"
        swe6_dir.mkdir(exist_ok=True)

        subprocess.run([
            "zip", "-r", swe6_dir / "acceptance_tests.zip", "tests/acceptance/"
        ])

        # Run acceptance tests
        subprocess.run([
            "behave", "tests/acceptance/",
            "--format=html", f"--outfile={swe6_dir}/qualification_report.html"
        ])

        print("[PASS] SWE.6 evidence collected")

    def collect_sup8_configuration(self):
        """SUP.8: Export Git log + traceability matrix"""
        sup8_dir = self.output / "SUP.8_Configuration_Management"
        sup8_dir.mkdir(exist_ok=True)

        # Capture stdout instead of shell redirection
        result = subprocess.run(
            ["git", "log", "--oneline", "--decorate", "--all"],
            capture_output=True, text=True
        )
        (sup8_dir / "git_log.txt").write_text(result.stdout)

        subprocess.run([
            "cp", "CONTRIBUTING.md", sup8_dir / "CM_strategy.md"
        ])

        print("[PASS] SUP.8 evidence collected")

# Usage
if __name__ == "__main__":
    collector = ASPICEEvidenceCollector(
        project_name="PARKING_ASSIST",
        output_dir="evidence_package_parking_assist"
    )
    collector.collect_all_evidence()

    print("\n📦 Evidence package ready for assessment!")
    print(f"   Location: {collector.output}")
    print(f"   Size: {sum(f.stat().st_size for f in collector.output.rglob('*') if f.is_file()) / (1024*1024):.1f} MB")
```

**Run Collection**:
```bash
python scripts/collect_aspice_evidence.py
# Output: evidence_package_parking_assist/ (ready to share with assessor)
```

---

## Summary

**Evidence Collection Best Practices**:

1. **Automate**: Use scripts to collect evidence (reduces human error)
2. **Organize**: Folder structure mirrors ASPICE processes (easy for the assessor to navigate)
3. **Real Artifacts**: Show actual work products (Jira exports, Git logs, CI reports)
4. **Traceability**: Every piece of evidence links to specific requirements/processes
5. **Completeness**: Include all required work products (checklist in 24.00)

**Time Investment**:

- Continuous Collection: 0 hours (automated via CI/CD)
- Final Packaging: 4-8 hours (run collection script, review for completeness)
- Scrambling Last-Minute: 200+ hours (if you didn't follow continuous practices)

**Next**: Avoid common assessment failures (24.02 Common Findings).