6.3: SEC.3 Risk Treatment Verification
Process Definition
Purpose
SEC.3 Purpose (ASPICE-CS-PAM-v2.0): Confirm that the implementation of the design and the integration of the components comply with the cybersecurity requirements, the refined architectural design, and the detailed design.
Outcomes
Official ASPICE-CS-PAM-v2.0 SEC.3 outcomes:
| Outcome | Description |
|---|---|
| O1 | Risk treatment verification measures are developed |
| O2 | Verification measures are selected according to the release scope |
| O3 | The implementation of the design and the integration of the components are verified. Verification results are recorded. |
| O4 | Consistency and bidirectional traceability are established between the risk treatment verification measures and the cybersecurity requirements, as well as between the risk treatment verification measures and the refined architectural design, detailed design and software units. Bidirectional traceability is established between the verification results and the risk treatment verification measures. |
| O5 | The results of the risk treatment verification are summarized and communicated to all affected parties. |
Base Practices with AI Integration
Official ASPICE-CS-PAM-v2.0 SEC.3 base practices:
| BP ID | Base Practice | AI Level | AI Application |
|---|---|---|---|
| SEC.3.BP1 | Specify risk treatment verification measures | L1-L2 | Generate verification strategy from CSRs, threat model |
| SEC.3.BP2 | Select verification measures | L1 | Suggest verification methods based on release scope, CAL |
| SEC.3.BP3 | Perform risk treatment verification activities | L2-L3 | Automated SAST/DAST/fuzzing, AI-assisted penetration testing |
| SEC.3.BP4 | Ensure consistency and establish bidirectional traceability | L2 | Automated traceability matrix generation and verification |
| SEC.3.BP5 | Summarize and communicate results | L2 | Automated report generation, dashboard updates |
AI Automation Levels:
- L1: AI provides templates, suggestions, prompts
- L2: AI executes with human review (AI-assisted)
- L3: AI fully automated execution with human oversight
ASPICE Base Practice Mapping
This section maps official SEC.3 base practices to practical implementation:
SEC.3.BP1: Specify Risk Treatment Verification Measures
ASPICE Requirement: Develop risk treatment verification measures based on cybersecurity requirements, refined architectural design, and detailed design.
AI-Assisted Implementation:
- Input Work Products: Cybersecurity requirements (CSRs from SEC.2), refined architecture, detailed design, threat model
- AI Role (L1-L2): Generate verification strategy from CSRs, suggest appropriate verification methods (SAST, DAST, fuzzing, penetration testing) based on threat severity and CAL level
- Output: Security test specification (WP 08-56) with test cases mapped to CSRs
Practical Realization: Security verification pyramid (see framework below) defines layered verification approach aligned with CAL requirements.
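The CSR-to-method derivation in BP1 can be sketched as follows. The `CybersecurityRequirement` fields and the `CAL_METHODS` table are illustrative assumptions for this sketch, not PAM-mandated values; the actual minimum rigor per CAL must come from the project's verification strategy.

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class CybersecurityRequirement:
    """Illustrative CSR record; the field names are assumptions, not a fixed schema."""
    csr_id: str
    threat_id: str
    cal: int  # Cybersecurity Assurance Level (1-4)

# Assumed CAL-to-method mapping for illustration only.
CAL_METHODS: Dict[int, List[str]] = {
    1: ["SAST"],
    2: ["SAST", "DAST"],
    3: ["SAST", "DAST", "fuzzing", "penetration_test"],
    4: ["SAST", "DAST", "fuzzing", "penetration_test", "formal_review"],
}

def derive_verification_measures(
        csrs: List[CybersecurityRequirement]) -> Dict[str, List[str]]:
    """Map each CSR to candidate verification methods by its CAL (SEC.3.BP1)."""
    return {csr.csr_id: CAL_METHODS[csr.cal] for csr in csrs}

measures = derive_verification_measures([
    CybersecurityRequirement("CSR-BCM-001", "THR-CAN-S-001", cal=3),
    CybersecurityRequirement("CSR-BCM-006", "THR-CAN-D-001", cal=2),
])
print(measures["CSR-BCM-001"])  # ['SAST', 'DAST', 'fuzzing', 'penetration_test']
```

The output of this derivation seeds the security test specification (WP 08-56), where each method is refined into concrete test cases.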
SEC.3.BP2: Select Verification Measures
ASPICE Requirement: Select verification measures appropriate to the release scope and criticality.
AI-Assisted Implementation:
- Input: Release scope definition, CAL level, CSR priorities
- AI Role (L1): Recommend verification method selection based on CAL (e.g., CAL 3 requires penetration testing)
- Output: Selected verification activities for current release
Practical Realization: GitLab CI pipeline stages (SAST, DAST, fuzz, pentest) configured according to release scope and CAL level.
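The scope-based selection in BP2 can be sketched as a filter over the per-CSR measures. The `SCOPE_METHODS` table and scope names here are assumptions for illustration; real scope definitions belong to the release plan.

```python
from typing import Dict, List, Set

# Assumed release scopes and the verification methods applicable to each.
SCOPE_METHODS: Dict[str, Set[str]] = {
    "internal": {"SAST"},
    "customer_sample": {"SAST", "DAST", "fuzzing"},
    "production": {"SAST", "DAST", "fuzzing", "penetration_test"},
}

def select_for_release(measures: Dict[str, List[str]],
                       release_scope: str) -> Dict[str, List[str]]:
    """Filter per-CSR verification methods down to the release scope (SEC.3.BP2)."""
    allowed = SCOPE_METHODS[release_scope]
    return {csr: [m for m in methods if m in allowed]
            for csr, methods in measures.items()}

measures = {"CSR-BCM-001": ["SAST", "DAST", "fuzzing", "penetration_test"]}
print(select_for_release(measures, "customer_sample"))
# {'CSR-BCM-001': ['SAST', 'DAST', 'fuzzing']}
```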
SEC.3.BP3: Perform Risk Treatment Verification Activities
ASPICE Requirement: Execute verification activities and record results.
AI-Assisted Implementation:
- Static Analysis (L3): Automated SAST with Cppcheck, Semgrep, SCA with Trivy, secrets scanning with TruffleHog
- Dynamic Testing (L2-L3): Automated fuzzing with AFL++/libFuzzer, DAST with OWASP ZAP
- Penetration Testing (L2): AI-assisted automated scanning with expert manual review
- Output: Verification results, vulnerability reports
Practical Realization: See CI/CD pipeline configuration and CAN bus security testing framework below.
SEC.3.BP4: Ensure Consistency and Establish Bidirectional Traceability
ASPICE Requirement: Establish bidirectional traceability between:
- Risk treatment verification measures ↔ Cybersecurity requirements
- Risk treatment verification measures ↔ Refined architectural design / Detailed design / Software units
- Verification results ↔ Risk treatment verification measures
AI-Assisted Implementation:
- AI Role (L2): Automated traceability matrix generation, consistency checking
- Tooling: Link CSRs to test cases, test cases to code/design elements, test results to test cases
- Output: Traceability matrix (WP 08-56, 13-52)
Practical Realization: Test case structure includes csr_reference and threat_reference fields; automated report links results to CSRs.
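The traceability check in BP4 reduces to linking three sets and flagging gaps. A minimal sketch, assuming test cases carry a `csr_reference` as described above (the dictionary shapes are illustrative):

```python
from typing import Dict, List, Tuple

def build_traceability_matrix(
        csr_ids: List[str],
        test_cases: Dict[str, str],   # test_id -> csr_reference
        results: Dict[str, str],      # test_id -> "pass" / "fail"
) -> Tuple[Dict[str, List[Tuple[str, str]]], List[str]]:
    """Link CSRs to test cases and results, flagging uncovered CSRs (SEC.3.BP4)."""
    matrix: Dict[str, List[Tuple[str, str]]] = {csr: [] for csr in csr_ids}
    for test_id, csr in test_cases.items():
        matrix.setdefault(csr, []).append((test_id, results.get(test_id, "not run")))
    uncovered = [csr for csr in csr_ids if not matrix[csr]]
    return matrix, uncovered

matrix, uncovered = build_traceability_matrix(
    csr_ids=["CSR-BCM-001", "CSR-BCM-002"],
    test_cases={"SEC-TEST-001": "CSR-BCM-001"},
    results={"SEC-TEST-001": "pass"},
)
print(uncovered)  # ['CSR-BCM-002'] -- a consistency gap to resolve
```

An uncovered CSR is exactly the kind of inconsistency BP4 requires the team to resolve before the verification summary is released.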
SEC.3.BP5: Summarize and Communicate Results
ASPICE Requirement: Summarize verification results and communicate to all affected parties.
AI-Assisted Implementation:
- AI Role (L2): Generate executive summary from SARIF reports, consolidate findings across tools, produce security dashboard
- Output: Security verification report (WP 13-52), vulnerability assessment (WP 08-54), penetration test report (WP 13-54)
Practical Realization: See security report generation in CI pipeline and verification summary diagram below.
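Because the SAST tools in the pipeline emit SARIF, a BP5 summary can start from per-severity counts. A sketch over the standard SARIF 2.1.0 `runs[].results[].level` property (the sample log is fabricated for illustration):

```python
import json
from collections import Counter

def summarize_sarif(sarif_text: str) -> Counter:
    """Count findings per SARIF 'level' (error/warning/note) in one log (SEC.3.BP5)."""
    doc = json.loads(sarif_text)
    counts: Counter = Counter()
    for run in doc.get("runs", []):
        for finding in run.get("results", []):
            # SARIF defines "warning" as the default level when absent.
            counts[finding.get("level", "warning")] += 1
    return counts

sample = json.dumps({"runs": [{"results": [
    {"level": "error", "ruleId": "misra-c2012-21.3"},
    {"level": "warning", "ruleId": "cert-str31-c"},
]}]})
print(summarize_sarif(sample))  # Counter({'error': 1, 'warning': 1})
```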
Security Verification Framework
The following diagram outlines the security verification framework, showing the verification activities (static analysis, dynamic testing, penetration testing, fuzzing) mapped to each SEC process base practice.
Practical Implementation: SEC.3.BP3 Verification Activities
The following sections demonstrate practical implementation of SEC.3.BP3 (Perform risk treatment verification activities) using automated AI-driven tools and frameworks.
ASPICE Alignment:
- SEC.3.BP1: Verification measures specified in security test specification
- SEC.3.BP2: Measures selected based on CAL level and release scope
- SEC.3.BP3: Verification activities executed (SAST, DAST, fuzzing, penetration testing)
- SEC.3.BP4: Traceability established via CSR references in test cases
- SEC.3.BP5: Results communicated via consolidated security reports
Static Application Security Testing (SAST)
SAST Pipeline Integration
Note: GitLab CI configuration requires customization for specific tool versions and infrastructure. For safety-critical projects, verify tool qualification requirements (TCL levels).
# .gitlab-ci.yml - Security Testing Pipeline
stages:
- sast
- dast
- fuzz
- pentest
- report
variables:
SAST_DISABLED: "false"
SECURITY_DASHBOARD_ENABLED: "true"
# ============================================================
# Static Analysis Security Testing
# ============================================================
sast_cppcheck:
stage: sast
image: cppcheck/cppcheck:latest
script:
# MISRA + Security rules
- cppcheck --enable=all \
--addon=misra.json \
--addon=cert.json \
--addon=y2038.json \
--xml \
--output-file=cppcheck-report.xml \
src/
# Convert to SARIF format
- python scripts/cppcheck_to_sarif.py cppcheck-report.xml > cppcheck.sarif
artifacts:
paths:
- cppcheck-report.xml
- cppcheck.sarif
reports:
sast: cppcheck.sarif
sast_semgrep:
stage: sast
image: returntocorp/semgrep:latest
script:
# Run Semgrep with embedded-specific rules
- semgrep scan \
--config=auto \
--config=p/c-audit \
--config=p/security-audit \
--sarif \
--output=semgrep.sarif \
src/
artifacts:
paths:
- semgrep.sarif
reports:
sast: semgrep.sarif
sast_secrets:
stage: sast
image: trufflesecurity/trufflehog:latest
script:
# Scan for secrets/credentials
- trufflehog filesystem . \
--exclude-paths=.trufflehog-ignore \
--json > secrets-report.json
# Fail if secrets found
- |
SECRETS_COUNT=$(cat secrets-report.json | wc -l)
if [ "$SECRETS_COUNT" -gt 0 ]; then
echo "ERROR: $SECRETS_COUNT potential secrets found!"
cat secrets-report.json
exit 1
fi
artifacts:
paths:
- secrets-report.json
sca_dependencies:
stage: sast
image: aquasec/trivy:latest
script:
# Scan dependencies for known vulnerabilities
- trivy fs . \
--scanners vuln \
--severity HIGH,CRITICAL \
--format sarif \
--output trivy.sarif
artifacts:
paths:
- trivy.sarif
reports:
dependency_scanning: trivy.sarif
# ============================================================
# Dynamic Analysis Security Testing
# ============================================================
dast_api:
stage: dast
image: owasp/zap2docker-stable
script:
# Start target application
- ./scripts/start_test_server.sh &
- sleep 10
# ZAP API scan
- zap-api-scan.py \
-t http://localhost:8080/api/openapi.json \
-f openapi \
-r zap-report.html \
-x zap-report.xml
artifacts:
paths:
- zap-report.html
- zap-report.xml
rules:
- if: $CI_PIPELINE_SOURCE == "schedule"
# ============================================================
# Fuzz Testing
# ============================================================
fuzz_protocol:
stage: fuzz
image: aflplusplus/aflplusplus
script:
# Build with AFL instrumentation
- mkdir -p build_fuzz && cd build_fuzz
- CC=afl-clang-fast cmake -DFUZZ_TESTING=ON ..
- make can_parser_fuzz
# Run fuzzer
- timeout 1h afl-fuzz \
-i ../test/fuzz_seeds \
-o fuzz_output \
./can_parser_fuzz @@
artifacts:
paths:
- build_fuzz/fuzz_output/
when: always
rules:
- if: $CI_PIPELINE_SOURCE == "schedule"
fuzz_libfuzzer:
stage: fuzz
script:
# Build with libFuzzer
- mkdir -p build_libfuzz && cd build_libfuzz
- CC=clang cmake -DLIBFUZZER=ON ..
- make message_parser_fuzz
# Run fuzzer
- ./message_parser_fuzz \
-max_total_time=3600 \
-artifact_prefix=crash_ \
../test/fuzz_corpus/
artifacts:
paths:
- build_libfuzz/crash_*
when: always
# ============================================================
# Penetration Testing Support
# ============================================================
pentest_setup:
stage: pentest
script:
# Generate test environment for manual pentest
- docker-compose -f docker/pentest-env.yml up -d
# Create pentest report template
- python scripts/generate_pentest_template.py
# Export attack surface documentation
- python scripts/export_attack_surface.py > attack_surface.md
artifacts:
paths:
- attack_surface.md
- pentest_template.docx
rules:
- if: $PENTEST_ENABLED == "true"
when: manual
# ============================================================
# Security Report
# ============================================================
security_report:
stage: report
script:
# Consolidate all security findings
- python scripts/consolidate_security_findings.py \
--sast cppcheck.sarif semgrep.sarif \
--sca trivy.sarif \
--dast zap-report.xml \
--output security_report.json
# Generate executive summary
- python scripts/generate_security_summary.py \
security_report.json > security_summary.md
# Check against security gates
- python scripts/check_security_gates.py security_report.json
artifacts:
paths:
- security_report.json
- security_summary.md
reports:
security: security_report.json
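The pipeline above calls `scripts/check_security_gates.py` without showing its contents. A minimal sketch of such a gate check, assuming the consolidated report stores findings as a list of objects with a `severity` field (both the report shape and the thresholds are assumptions; real limits belong in project configuration):

```python
import json
import sys
from collections import Counter

# Example thresholds: any critical or high finding fails the gate.
GATE_LIMITS = {"critical": 0, "high": 0, "medium": 10}

def gate_violations(report: dict) -> list:
    """Return human-readable gate violations; an empty list lets the job pass."""
    counts = Counter(f.get("severity", "low") for f in report.get("findings", []))
    return [f"{sev}: {counts[sev]} finding(s) exceed limit {limit}"
            for sev, limit in GATE_LIMITS.items() if counts[sev] > limit]

if __name__ == "__main__":
    with open(sys.argv[1]) as fh:
        violations = gate_violations(json.load(fh))
    for line in violations:
        print(f"GATE FAILED - {line}")
    sys.exit(1 if violations else 0)
```

Exiting nonzero makes the `report` stage fail the pipeline, which is how the gate becomes enforceable rather than advisory.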
Automotive-Specific Security Testing
CAN Bus Security Testing
"""
CAN bus security testing framework for automotive ECUs.
"""
import can
import time
from dataclasses import dataclass
from typing import List, Optional, Dict
from enum import Enum
class TestResult(Enum):
PASS = "pass"
FAIL = "fail"
INCONCLUSIVE = "inconclusive"
@dataclass
class SecurityTestCase:
"""Security test case definition."""
id: str
name: str
category: str
description: str
csr_reference: str # Cybersecurity requirement
threat_reference: str
test_steps: List[str]
expected_result: str
pass_criteria: str
@dataclass
class SecurityTestResult:
"""Security test execution result."""
test_case: SecurityTestCase
result: TestResult
actual_result: str
evidence: str
execution_time: float
tester: str
class CANSecurityTester:
"""CAN bus security testing for automotive ECUs.
Note: Uses socketcan interface (Linux-specific); adapt for other platforms.
"""
def __init__(self, interface: str = 'can0'):
self.bus = can.interface.Bus(interface, bustype='socketcan')
self.test_results: List[SecurityTestResult] = []
def test_message_authentication(self, target_id: int,
valid_mac: bytes) -> SecurityTestResult:
"""Test SecOC message authentication implementation.
CSR-BCM-001: CAN Message Authentication
"""
test_case = SecurityTestCase(
id="SEC-TEST-001",
name="SecOC Message Authentication",
category="authentication",
description="Verify that invalid MAC causes message rejection",
csr_reference="CSR-BCM-001",
threat_reference="THR-CAN-S-001",
test_steps=[
"Send message with valid MAC",
"Verify message accepted",
"Send message with invalid MAC",
"Verify message rejected"
],
expected_result="Invalid MAC messages are rejected",
pass_criteria="Zero acceptance of invalid MAC messages"
)
start_time = time.time()
# Step 1: Send valid message
valid_msg = can.Message(
arbitration_id=target_id,
data=b'\x01\x02\x03\x04' + valid_mac,
is_extended_id=False
)
self.bus.send(valid_msg)
# Check for response (valid should be processed)
valid_response = self._wait_for_response(target_id + 1, timeout=0.1)
# Step 2: Send invalid MAC
invalid_mac = bytes([b ^ 0xFF for b in valid_mac])
invalid_msg = can.Message(
arbitration_id=target_id,
data=b'\x01\x02\x03\x04' + invalid_mac,
is_extended_id=False
)
self.bus.send(invalid_msg)
# Check for response (invalid should be rejected)
invalid_response = self._wait_for_response(target_id + 1, timeout=0.1)
execution_time = time.time() - start_time
# Evaluate result
if valid_response and not invalid_response:
result = TestResult.PASS
actual = "Valid MAC accepted, invalid MAC rejected"
elif not valid_response:
result = TestResult.FAIL
actual = "Valid MAC also rejected - implementation issue"
else:
result = TestResult.FAIL
actual = "Invalid MAC was accepted - security vulnerability!"
        test_result = SecurityTestResult(
            test_case=test_case,
            result=result,
            actual_result=actual,
            evidence=f"Valid response: {valid_response}, Invalid response: {invalid_response}",
            execution_time=execution_time,
            tester="automated"
        )
        self.test_results.append(test_result)  # collected for generate_report()
        return test_result
def test_replay_protection(self, target_id: int,
message_data: bytes) -> SecurityTestResult:
"""Test replay attack protection.
CSR-BCM-002: Message Freshness
"""
test_case = SecurityTestCase(
id="SEC-TEST-002",
name="Replay Attack Protection",
category="freshness",
description="Verify that replayed messages are rejected",
csr_reference="CSR-BCM-002",
threat_reference="THR-CAN-S-002",
test_steps=[
"Capture valid message",
"Wait for freshness counter to advance",
"Replay captured message",
"Verify replay is rejected"
],
expected_result="Replayed messages are rejected",
pass_criteria="Zero acceptance of replayed messages"
)
start_time = time.time()
# Step 1: Send original message (with current counter)
original_msg = can.Message(
arbitration_id=target_id,
data=message_data,
is_extended_id=False
)
self.bus.send(original_msg)
original_response = self._wait_for_response(target_id + 1, timeout=0.1)
# Step 2: Wait for counter to advance
time.sleep(0.5)
# Step 3: Replay the same message
self.bus.send(original_msg) # Same message = old counter
replay_response = self._wait_for_response(target_id + 1, timeout=0.1)
execution_time = time.time() - start_time
if original_response and not replay_response:
result = TestResult.PASS
actual = "Original accepted, replay rejected"
else:
result = TestResult.FAIL
actual = f"Replay protection failed. Original: {original_response}, Replay: {replay_response}"
        test_result = SecurityTestResult(
            test_case=test_case,
            result=result,
            actual_result=actual,
            evidence="Captured message trace in evidence log",
            execution_time=execution_time,
            tester="automated"
        )
        self.test_results.append(test_result)  # collected for generate_report()
        return test_result
def test_rate_limiting(self, target_id: int,
rate_limit: int) -> SecurityTestResult:
"""Test message rate limiting implementation.
CSR-BCM-006: Rate Limiting
"""
test_case = SecurityTestCase(
id="SEC-TEST-006",
name="Rate Limiting",
category="availability",
description="Verify message rate limiting prevents flooding",
csr_reference="CSR-BCM-006",
threat_reference="THR-CAN-D-001",
test_steps=[
f"Send messages at normal rate (<{rate_limit}/s)",
"Verify all processed",
f"Send messages above limit (>{rate_limit * 2}/s)",
"Verify excess messages dropped"
],
expected_result="Excess messages dropped",
pass_criteria=f"Message processing limited to ~{rate_limit}/s"
)
start_time = time.time()
# Step 1: Normal rate test
normal_rate = rate_limit // 2
normal_accepted = self._send_burst(target_id, normal_rate, duration=1.0)
# Step 2: Flood rate test
flood_rate = rate_limit * 3
flood_accepted = self._send_burst(target_id, flood_rate, duration=1.0)
execution_time = time.time() - start_time
# Evaluate: flood acceptance should be limited
expected_max = rate_limit * 1.1 # 10% tolerance
if normal_accepted >= normal_rate * 0.9 and flood_accepted <= expected_max:
result = TestResult.PASS
actual = f"Normal: {normal_accepted}/{normal_rate}, Flood: {flood_accepted} (limited to ~{rate_limit})"
else:
result = TestResult.FAIL
actual = f"Rate limiting ineffective. Flood accepted: {flood_accepted}"
        test_result = SecurityTestResult(
            test_case=test_case,
            result=result,
            actual_result=actual,
            evidence=f"Normal rate: {normal_accepted}, Flood rate: {flood_accepted}",
            execution_time=execution_time,
            tester="automated"
        )
        self.test_results.append(test_result)  # collected for generate_report()
        return test_result
def _wait_for_response(self, response_id: int,
timeout: float) -> Optional[can.Message]:
"""Wait for CAN response message."""
end_time = time.time() + timeout
while time.time() < end_time:
msg = self.bus.recv(timeout=0.01)
if msg and msg.arbitration_id == response_id:
return msg
return None
def _send_burst(self, target_id: int, rate: int,
duration: float) -> int:
"""Send burst of messages and count accepted."""
accepted = 0
interval = 1.0 / rate
end_time = time.time() + duration
while time.time() < end_time:
msg = can.Message(
arbitration_id=target_id,
data=b'\x00\x00\x00\x00\x00\x00\x00\x00',
is_extended_id=False
)
self.bus.send(msg)
response = self._wait_for_response(target_id + 1, timeout=interval)
if response:
accepted += 1
return accepted
def generate_report(self) -> str:
"""Generate security test report (SEC.3.BP5).
ASPICE Alignment: SEC.3.BP5 - Summarize and communicate results
Work Product: WP 13-52 Security Test Report
"""
report = ["# Risk Treatment Verification Report (SEC.3)\n"]
        report.append("**Report Type**: SEC.3 Risk Treatment Verification (ASPICE-CS-PAM-v2.0)\n")
report.append(f"**Date**: {time.strftime('%Y-%m-%d %H:%M:%S')}\n")
# Summary
        total = len(self.test_results)
        passed = len([r for r in self.test_results if r.result == TestResult.PASS])
        failed = len([r for r in self.test_results if r.result == TestResult.FAIL])
        pass_rate = (passed / total * 100) if total else 0.0  # guard empty run
        report.append("## Summary\n")
        report.append("| Metric | Value |")
        report.append("|--------|-------|")
        report.append(f"| Total Tests | {total} |")
        report.append(f"| Passed | {passed} |")
        report.append(f"| Failed | {failed} |")
        report.append(f"| Pass Rate | {pass_rate:.1f}% |\n")
# Detailed results
report.append("## Detailed Results\n")
for result in self.test_results:
status_icon = "[PASS]" if result.result == TestResult.PASS else "[FAIL]"
report.append(f"### {status_icon} {result.test_case.id}: {result.test_case.name}\n")
report.append(f"**Category**: {result.test_case.category}")
report.append(f"**CSR Reference**: {result.test_case.csr_reference}")
report.append(f"**Result**: {result.result.value}")
report.append(f"**Actual**: {result.actual_result}")
report.append(f"**Evidence**: {result.evidence}\n")
return "\n".join(report)
Penetration Testing Guidance
Pentest Scope for Automotive ECU
The diagram below defines the penetration testing scope for the BCM ECU, identifying the attack surfaces, communication interfaces, and diagnostic channels to be tested.
Test Cases (CAL 3 Required):
| ID | Test | Tool | Status |
|---|---|---|---|
| PT-001 | SecOC MAC validation | Manual/CANoe | Pending |
| PT-002 | Replay attack resistance | Custom script | Pending |
| PT-003 | UDS security access bypass | Caring Caribou | Pending |
| PT-004 | Firmware extraction via debug | OpenOCD | Pending |
| PT-005 | CAN bus flooding resilience | CANalyst | Pending |
| PT-006 | Side-channel key recovery | ChipWhisperer | Pending |
| PT-007 | Memory corruption (fuzzing) | AFL++ | Completed |
| PT-008 | Diagnostic protocol fuzzing | boofuzz | Completed |
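For PT-003, the request that starts a SecurityAccess exchange (UDS service 0x27, 'requestSeed') is small enough to build by hand. A hedged sketch of the ISO-TP single frame on classic CAN; the 0x55 padding value is a common but project-specific convention, and the actual seed/key exchange depends on the ECU's diagnostic stack:

```python
def uds_single_frame(service: int, subfunction: int,
                     dlc: int = 8, pad: int = 0x55) -> bytes:
    """Build an ISO-TP single frame carrying a two-byte UDS request.

    The first byte is the ISO-TP PCI (0x0N = single frame, N = payload length),
    followed by the UDS service and subfunction, padded to the CAN DLC.
    """
    payload = bytes([service, subfunction])
    frame = bytes([len(payload)]) + payload
    return frame + bytes([pad]) * (dlc - len(frame))

# SecurityAccess requestSeed, level 0x01 -> 02 27 01 55 55 55 55 55
seed_request = uds_single_frame(0x27, 0x01)
print(seed_request.hex())  # 0227015555555555
```

The resulting bytes would be sent as the data field of a `can.Message` on the diagnostic request ID, analogous to the `CANSecurityTester` methods above.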
Vulnerability Assessment
Vulnerability Report Template
Note: Vulnerability IDs and dates are illustrative; use project-specific conventions.
# Security Vulnerability Report (illustrative example)
vulnerability:
id: VUL-BCM-(year)-(number)
title: "Insufficient rate limiting on diagnostic interface"
discovered_date: (discovery date)
discovered_by: "Security Team"
discovery_method: penetration_test
classification:
cvss_score: 6.5
cvss_vector: "AV:A/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H"
severity: medium
cwe: "CWE-400: Uncontrolled Resource Consumption"
affected_component:
name: BCM_Diagnostics
version: "1.2.0"
file: "src/service/diag_handler.c"
function: "DiagRequestHandler()"
description: |
The diagnostic request handler does not implement proper
rate limiting, allowing an attacker with physical OBD-II
access to flood the diagnostic interface and cause
temporary denial of service.
technical_details: |
- No rate limiting on UDS service 0x22 (Read Data By Identifier)
- Sending >500 requests/second causes queue overflow
- ECU becomes unresponsive for ~5 seconds
- No safety impact (door lock functions remain operational)
proof_of_concept: |
Using CANalyst tool:
1. Connect to OBD-II port
2. Send rapid 0x22 F190 requests (VIN read)
3. Observe diagnostic timeout after ~1000 requests
4. ECU recovers after stopping attack
threat_reference: THR-DIAG-D-001
csr_reference: CSR-BCM-006
remediation:
recommendation: |
Implement rate limiting on diagnostic interface:
- Limit to 100 requests/second per client
- Implement request queue with bounded size
- Add cooldown period after threshold exceeded
fix_reference: CR-2025-016
fix_version: "1.3.0"
verification_test: SEC-TEST-006
risk_assessment:
exploitability: medium
impact: low
residual_risk: low
risk_accepted: true
accepted_by: "Security Architect"
timeline:
discovered: 2025-01-20
reported: 2025-01-20
fix_planned: 2025-01-25
fix_implemented: 2025-01-28
verified: 2025-01-30
closed: 2025-02-01
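The 6.5 score in the report follows from the CVSS v3.1 base equations. A sketch of the scope-unchanged calculation using the published v3.1 metric weights (a faithful implementation should use the spec's Roundup helper to avoid floating-point edge cases near a .x5 boundary):

```python
import math

# CVSS v3.1 metric weights. PR values differ when scope is changed;
# this sketch deliberately handles only the scope-unchanged case.
AV = {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.2}
AC = {"L": 0.77, "H": 0.44}
PR = {"N": 0.85, "L": 0.62, "H": 0.27}
UI = {"N": 0.85, "R": 0.62}
CIA = {"N": 0.0, "L": 0.22, "H": 0.56}

def cvss31_base(av: str, ac: str, pr: str, ui: str,
                c: str, i: str, a: str) -> float:
    """CVSS v3.1 base score for scope-unchanged vectors (rounded up to 0.1)."""
    iss = 1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a])
    impact = 6.42 * iss
    exploitability = 8.22 * AV[av] * AC[ac] * PR[pr] * UI[ui]
    if impact <= 0:
        return 0.0
    return math.ceil(min(impact + exploitability, 10) * 10) / 10

# AV:A/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H from the report above
print(cvss31_base("A", "L", "N", "N", "N", "N", "H"))  # 6.5
```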
Security Verification Report
The following diagram summarizes the security verification results, presenting overall pass/fail status, coverage metrics, and outstanding findings requiring remediation.
Work Products
| WP ID | Work Product | AI Role |
|---|---|---|
| 08-56 | Security test specification | Test planning |
| 13-52 | Security test report | Result documentation |
| 08-54 | Vulnerability assessment | CVSS scoring |
| 13-54 | Penetration test report | Finding analysis |
Summary
SEC.3 Risk Treatment Verification:
- Official Purpose: Confirm that implementation of design and integration comply with cybersecurity requirements, refined architectural design, and detailed design (ASPICE-CS-PAM-v2.0)
- AI Automation Level: L2-L3 (high automation for SAST/DAST/fuzzing, AI-assisted penetration testing)
- Primary AI Value: Automated verification execution (SEC.3.BP3), traceability management (SEC.3.BP4), report generation (SEC.3.BP5)
- Human Essential: Verification strategy definition (SEC.3.BP1), verification measure selection (SEC.3.BP2), penetration testing expert review, vulnerability risk acceptance
- Key Outputs: Security test specification (WP 08-56), security test report (WP 13-52), vulnerability assessment (WP 08-54), penetration test report (WP 13-54)
- CAL Alignment: Verification rigor scales with CAL level (CAL 3 mandates penetration testing)