10.0: Security Processes (SEC)


Chapter Overview

Standards Note: SEC processes are defined in the ASPICE Cybersecurity Supplement (CS-PAM), not the core ASPICE 4.0 PAM. They are designed to align with ISO/SAE 21434 (Road Vehicles - Cybersecurity Engineering) and UNECE WP.29 R155/R156 regulations. This chapter covers the CS-PAM processes as commonly applied in automotive assessments.

Security processes address the growing need for cybersecurity in automotive and embedded systems.

Chapter Contents

Section Title Focus
10.01 SEC.1 Cybersecurity Requirements Security requirements engineering
10.02 SEC.2 Cybersecurity Implementation Security measure implementation
10.03 SEC.3 Cybersecurity Verification Security testing and validation
10.04 ISO/SAE 21434 Integration Regulatory compliance mapping

Security in ASPICE 4.0

The SEC Process Group

Context: With the rise of connected vehicles, over-the-air (OTA) updates, and V2X communication, cybersecurity is no longer optional. UNECE Regulation R155 mandates a Cybersecurity Management System (CSMS) for vehicle type approval in the EU, Japan, and South Korea. The ASPICE SEC process group provides the assessment framework to evaluate whether an organization's cybersecurity engineering practices meet the required capability levels.

The SEC process group was introduced in the ASPICE Cybersecurity Supplement (CS-PAM) to address the automotive industry's need for a structured, assessable approach to cybersecurity engineering. Unlike SYS or SWE processes that exist in the core ASPICE 4.0 PAM, SEC processes are maintained in a separate supplement that is regularly updated to reflect the evolving threat landscape.

Key characteristics of the SEC process group:

Characteristic Description
Supplement-based SEC processes reside in CS-PAM, not the core PAM. This allows faster update cycles independent of the core standard.
Lifecycle coverage SEC spans concept phase (threat analysis) through post-production (incident response and monitoring).
CAL-driven rigor Cybersecurity Assurance Levels (CAL 1-4) determine the depth and rigor of activities, analogous to ASIL in ISO 26262.
Assessment-focused SEC processes define what to assess, not how to implement. Organizations choose their own implementation methods.
Interoperable SEC processes are designed to integrate with SYS, SWE, and SUP processes without duplication of effort.

Important: CAL levels are assigned per threat scenario based on the TARA (Threat Analysis and Risk Assessment) results. A single ECU may have components at different CAL levels depending on their exposure to threats.

SEC Process Capability Levels

Like all ASPICE processes, SEC processes are assessed against the capability levels defined in the ISO/IEC 33020 measurement framework (CL0 through CL5). Automotive assessments typically target CL2 or CL3, so only those levels are summarized here:

Capability Level Meaning Practical Implication
CL0 Incomplete Security activities are ad hoc or absent
CL1 Performed Security outcomes are achieved but not managed
CL2 Managed Security processes are planned, tracked, and work products are managed
CL3 Established Organization-wide standard security processes are defined and tailored per project

Security Process Framework

The following diagram places the ASPICE SEC processes within the broader context of automotive cybersecurity standards and regulations, including ISO/SAE 21434, UNECE WP.29, and their relationship to safety standards.

Standards and Regulations Context


SEC Process Definitions

SEC.1 Cybersecurity Requirements

Purpose: Define cybersecurity requirements that address identified risks.

Outcome Description
O1 Cybersecurity goals are defined
O2 Cybersecurity requirements are specified
O3 Requirements are communicated
O4 Requirements are consistent with goals

SEC.2 Cybersecurity Implementation

Purpose: Design and implement cybersecurity measures defined in requirements.

Outcome Description
O1 Security architecture is designed
O2 Security measures are implemented
O3 Implementation is consistent with requirements
O4 Implementation is communicated to affected parties

SEC.3 Cybersecurity Verification

Purpose: Verify that cybersecurity requirements are fulfilled.

Outcome Description
O1 A verification strategy is defined
O2 Verification activities are performed
O3 Vulnerabilities are identified
O4 Verification results are documented

ISO/SAE 21434 Alignment

Mapping SEC Processes to ISO/SAE 21434

Standards Relationship: ASPICE SEC processes and ISO/SAE 21434 serve complementary purposes. ISO/SAE 21434 defines what cybersecurity engineering activities must be performed throughout the vehicle lifecycle. ASPICE SEC processes provide a capability assessment model to evaluate how well an organization performs those activities. An organization can be ISO/SAE 21434-compliant but still score low on ASPICE SEC capability if their processes lack repeatability, traceability, or management discipline.

The following table maps each SEC process to the corresponding ISO/SAE 21434 clauses and lifecycle phases:

SEC Process ISO/SAE 21434 Clause Lifecycle Phase Key Activities
SEC.1 Clause 15 (TARA methods), Clause 9 (Concept) Concept Threat scenarios, cybersecurity goals, cybersecurity requirements derivation
SEC.2 Clause 10 (Product development) Development Architecture refinement, control selection, secure design patterns
SEC.3 Clause 10 (Integration and verification), Clause 11 (Cybersecurity validation) Verification & Validation Security testing, vulnerability scanning, penetration testing
All SEC Clause 8 (Continual cybersecurity activities) Continuous Risk monitoring, vulnerability management, incident response

CAL to ISO/SAE 21434 Mapping

Note: CAL (Cybersecurity Assurance Level) is defined in Annex E of ISO/SAE 21434. It determines the rigor of verification activities and the depth of evidence required. The CAL-to-risk mapping below reflects common industry practice; the standard itself does not mandate a fixed correspondence.

CAL Level Risk Level Required Verification Activities Typical Targets
CAL 1 R1 (Low) SAST, code review Low-risk interior functions
CAL 2 R2-R3 (Medium) SAST, DAST, fuzz testing Body electronics, comfort systems
CAL 3 R4 (High) SAST, DAST, fuzzing, penetration testing Gateways, telematics, ADAS interfaces
CAL 4 R5 (Critical) Full verification suite + independent assessment Safety-relevant ECUs, V2X, autonomous driving

UNECE R155/R156 Regulatory Bridge

Organizations seeking vehicle type approval under UNECE R155 must demonstrate a Cybersecurity Management System (CSMS). The SEC processes provide the internal assessment capability that supports CSMS certification:

UNECE R155 Requirement SEC Process Support Evidence Artifact
Risk assessment methods SEC.1 (TARA) Threat analysis report (WP 08-52)
Security-by-design SEC.2 (Architecture) Security architecture document
Testing and validation SEC.3 (Verification) Security test report (WP 13-52)
Vulnerability monitoring SEC.3 + SUP.9 Vulnerability assessment (WP 08-54)
Incident response SEC.3 + MAN.6 Incident response records
Software update management SEC.3 + UNECE R156 Update verification records

AI in Cybersecurity Engineering

AI Capabilities Across the Security Lifecycle

Principle: AI excels at pattern recognition, exhaustive analysis, and continuous monitoring -- activities that are difficult for humans to perform at scale. However, risk acceptance decisions, security architecture trade-offs, and threat model completeness validation remain fundamentally human responsibilities.

AI transforms cybersecurity engineering in three primary dimensions:

1. Threat Intelligence and Analysis

AI models trained on automotive threat databases (MITRE ATT&CK for ICS, AutoISAC feeds, CVE databases) can identify relevant threats faster than manual analysis. An LLM can ingest a system architecture description and generate a preliminary STRIDE analysis covering spoofing, tampering, repudiation, information disclosure, denial of service, and elevation of privilege for each interface.
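Before any LLM is involved, the per-interface STRIDE enumeration can be bootstrapped with simple rules over interface properties. The sketch below is illustrative only: the rule table, the `Interface` fields, and the idea that one missing protection maps to one STRIDE category are assumptions for demonstration, not part of any standard or the CS-PAM.

```python
from dataclasses import dataclass

@dataclass
class Interface:
    name: str
    exposed: bool = True
    authenticated: bool = False
    integrity_protected: bool = False
    encrypted: bool = False
    logged: bool = False
    crosses_trust_boundary: bool = False

# Illustrative rule table: which missing protection makes each STRIDE
# category applicable to an interface. A real TARA refines this per asset.
STRIDE_RULES = {
    "Spoofing":               lambda i: not i.authenticated,
    "Tampering":              lambda i: not i.integrity_protected,
    "Repudiation":            lambda i: not i.logged,
    "Information disclosure": lambda i: not i.encrypted,
    "Denial of service":      lambda i: i.exposed,
    "Elevation of privilege": lambda i: i.crosses_trust_boundary,
}

def stride_scan(interfaces):
    """Return a preliminary threat list as (interface, category) pairs."""
    return [(i.name, cat)
            for i in interfaces
            for cat, applies in STRIDE_RULES.items()
            if applies(i)]

can_bus = Interface("Vehicle CAN", crosses_trust_boundary=True)
threats = stride_scan([can_bus])  # unprotected bus: all six categories apply
```

An LLM-based pass would then enrich each generated pair with concrete threat scenarios and feasibility rationale; the rule-based pass guarantees no interface is silently skipped.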

2. Automated Vulnerability Discovery

AI-powered static analysis tools (e.g., Semgrep with custom automotive rules, CodeQL with CWE queries) can detect vulnerability patterns that traditional rule-based scanners miss. Machine-learning-guided fuzzing (e.g., AFL++ with neural network feedback) achieves higher code coverage by learning from previous test executions.

3. Continuous Security Monitoring

Post-deployment, AI-driven anomaly detection on CAN bus traffic, Ethernet backbone, or diagnostic interfaces identifies zero-day attacks and novel threat vectors. These systems establish baseline communication patterns and flag deviations for human investigation.
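The baseline-and-deviation idea can be sketched in a few lines. Assumptions made here for illustration: frames are periodic, inter-arrival time per CAN ID is the only feature, and a k-sigma threshold flags deviations; the class and method names are hypothetical, and production IDS products use far richer features (payload entropy, ID sequences, physical-layer fingerprints).

```python
from collections import defaultdict
from statistics import mean, pstdev

class CanAnomalyDetector:
    """Toy baseline/deviation detector for periodic CAN frames."""

    def __init__(self, k=3.0):
        self.k = k
        self.history = defaultdict(list)   # can_id -> inter-arrival samples
        self.last_seen = {}                # can_id -> last timestamp
        self.baseline = {}                 # can_id -> (mean, stdev)

    def train(self, frames):
        """Learn per-ID inter-arrival statistics; frames: [(ts, can_id), ...]."""
        for ts, cid in frames:
            if cid in self.last_seen:
                self.history[cid].append(ts - self.last_seen[cid])
            self.last_seen[cid] = ts
        self.baseline = {cid: (mean(g), pstdev(g))
                         for cid, g in self.history.items() if len(g) >= 2}

    def check(self, ts, cid):
        """Return True if this frame's inter-arrival time is anomalous."""
        prev = self.last_seen.get(cid)
        self.last_seen[cid] = ts
        if prev is None or cid not in self.baseline:
            return False                   # unknown ID: escalate separately
        mu, sigma = self.baseline[cid]
        return abs((ts - prev) - mu) > self.k * max(sigma, 1e-6)
```

An injection attack that floods a normally 10 ms-periodic ID shows up immediately as a collapsed inter-arrival gap; the flagged frames then go to a human analyst for incident classification, matching the HITL pattern described later in this chapter.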

AI Tool Integration by Security Activity

Security Activity AI Technique Input Output Human Review Required
Asset identification NLP extraction from architecture docs SysML models, interface specs Asset catalog with trust boundaries Yes -- completeness check
STRIDE analysis LLM-based threat enumeration Architecture + asset catalog Threat scenario list per interface Yes -- relevance validation
Attack tree generation Pattern matching + knowledge base Threat scenarios + attack DB Attack trees with feasibility ratings Yes -- feasibility calibration
CVSS scoring Rule-based + ML classification Vulnerability description CVSS vector string + base score Yes -- context adjustment
SAST scanning Pattern matching + data flow analysis Source code Vulnerability findings (SARIF) Yes -- false positive triage
Fuzz testing ML-guided mutation Binary/protocol under test Crash reports, coverage data Yes -- root cause analysis
Penetration testing Automated scanning + exploit DB Running system Finding reports Yes -- severity assessment
Anomaly detection Unsupervised ML on traffic patterns Network/CAN bus traffic Anomaly alerts Yes -- incident classification
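Of the activities above, CVSS scoring is the most mechanical: once the vector is fixed, the base score follows the published CVSS v3.1 equations exactly, which is why it tolerates a high automation level while the human reviewer focuses on context adjustment. A minimal sketch, restricted to scope-unchanged vectors (the changed-scope branch uses different PR weights and formulas):

```python
# Metric weights from the CVSS v3.1 specification (scope unchanged only).
AV  = {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.20}
AC  = {"L": 0.77, "H": 0.44}
PR  = {"N": 0.85, "L": 0.62, "H": 0.27}
UI  = {"N": 0.85, "R": 0.62}
CIA = {"H": 0.56, "L": 0.22, "N": 0.0}

def roundup(x):
    """Round up to one decimal, as defined in CVSS v3.1 Appendix A."""
    i = round(x * 100000)
    return i / 100000 if i % 10000 == 0 else (i // 10000 + 1) / 10

def base_score(av, ac, pr, ui, c, i, a):
    iss = 1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a])
    impact = 6.42 * iss
    exploitability = 8.22 * AV[av] * AC[ac] * PR[pr] * UI[ui]
    if impact <= 0:
        return 0.0
    return roundup(min(impact + exploitability, 10))

# AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H scores 9.8
score = base_score("N", "L", "N", "N", "H", "H", "H")
```

The human "context adjustment" step then applies temporal and environmental metrics, which the base formula deliberately excludes.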

AI Integration in Security Processes

AI Automation Levels

The following diagram maps each SEC process activity to its achievable AI automation level, illustrating which security tasks can be fully automated and which require mandatory human oversight.

Legend: L0 = No automation (human-only), L1 = AI assists, L2 = AI proposes + human approves, L3 = Fully automated

SEC Process Levels

AI-Powered Security Tools

Category AI Application Automation Level
Threat Modeling STRIDE analysis L1-L2
Risk Assessment CVSS scoring L2
Static Analysis Vulnerability detection L2-L3
Dynamic Testing Fuzzing L3
Penetration Testing Automated scanning (supplements manual testing) L2-L3
Monitoring Anomaly detection L3

SEC Process Summary with AI Integration

Detailed AI Integration per Process

Guidance: The following table provides a comprehensive view of AI integration opportunities for each SEC base practice. Organizations should adopt AI incrementally, starting with L1 (AI assists) and progressing toward L2 (AI proposes) as confidence and tooling maturity grow. L3 (fully automated) should only be used for well-defined, repeatable tasks where false-positive rates are understood.

Process Base Practice AI Integration Automation Level Human Accountability
SEC.1 BP1: Identify cybersecurity-relevant items AI scans architecture models to identify assets, interfaces, and trust boundaries L1-L2 Engineer validates asset completeness
SEC.1 BP2: Perform TARA AI generates STRIDE threat scenarios, proposes attack feasibility ratings using historical data L2 Security expert reviews threat completeness and calibrates feasibility
SEC.1 BP3: Define cybersecurity goals AI proposes cybersecurity goals from threat analysis results, maps to damage scenarios L1 Architect defines and approves goals
SEC.1 BP4: Derive cybersecurity requirements AI generates requirement text from goals, checks for consistency and completeness against known patterns L2 Requirements engineer reviews, refines, and approves
SEC.1 BP5: Ensure consistency and traceability AI auto-generates bidirectional trace links between goals, requirements, and threats L2-L3 Engineer verifies trace completeness
SEC.2 BP1: Refine architecture details AI suggests secure architecture patterns (defense-in-depth, zero-trust zones) based on requirements L1-L2 Architect selects and approves patterns
SEC.2 BP3: Select cybersecurity controls AI recommends controls from a catalog (SecOC, TLS, HSM) matched to requirements and CAL L1-L2 Architect evaluates trade-offs and selects
SEC.2 BP4: Analyze architecture for weaknesses AI performs automated architecture analysis against known weakness patterns L2 Security analyst reviews findings
SEC.2 BP7: Communicate implementation results AI generates implementation summary reports from design artifacts L2 Lead engineer reviews and distributes
SEC.3 BP1: Specify verification measures AI generates test specifications from cybersecurity requirements and threat model L1-L2 Test lead reviews and approves strategy
SEC.3 BP3: Perform verification activities AI executes SAST, DAST, fuzzing; ML-guided test case generation L2-L3 Analyst triages results, validates findings
SEC.3 BP4: Establish traceability AI auto-links test cases to requirements, results to test cases L2-L3 Engineer verifies matrix completeness
SEC.3 BP5: Summarize and communicate results AI consolidates multi-tool findings into executive dashboards L2 Manager reviews and signs off

Security vs Safety

Intersection and Differences

Key Insight: Safety (ISO 26262) protects people from the system failing. Security (ISO/SAE 21434) protects the system from people attacking it. When an attacker exploits a security vulnerability to cause a safety-relevant malfunction, both domains intersect. This intersection demands coordinated analysis and joint mitigation strategies.

Dimension Safety (ISO 26262) Security (ISO/SAE 21434)
Objective Prevent harm from system malfunctions Prevent harm from intentional attacks
Threat source Random hardware faults, systematic design errors Malicious actors (external and internal)
Risk metric ASIL (A-D) based on severity, exposure, controllability CAL (1-4) based on impact and attack feasibility
Lifecycle Concept through decommissioning Concept through decommissioning (with ongoing monitoring)
Analysis method HARA (Hazard Analysis and Risk Assessment) TARA (Threat Analysis and Risk Assessment)
Failure mode Unintended behavior due to faults Intended subversion by an attacker
Update response Safety anomaly investigation (ISO 26262 Part 7) Vulnerability management and patching
V-Model side Both (requirements and verification) Both (requirements and verification)
Regulatory ISO 26262, IEC 61508 ISO/SAE 21434, UNECE R155/R156

Where Safety and Security Converge

The most critical intersection occurs when a cybersecurity attack can cause a safety-relevant failure. Consider the following scenarios:

Scenario Safety Impact Security Vector Joint Mitigation
Spoofed braking command on CAN bus ASIL D -- unintended deceleration CAN message injection via compromised gateway SecOC authentication + safety plausibility checks
Malicious OTA firmware update ASIL B-D -- corrupted safety function Compromised update server or man-in-the-middle Secure boot + code signing + safety-validated rollback
Sensor data manipulation (LiDAR/radar) ASIL C-D -- incorrect environment model Adversarial input injection Sensor fusion cross-validation + anomaly detection
Diagnostic service abuse ASIL B -- disabling safety monitors Unauthorized UDS access via OBD-II Authentication + rate limiting + session management

Process Implication: When safety and security requirements conflict (e.g., security requires encrypted communication but safety requires deterministic timing), a joint safety-security analysis must resolve the conflict. Document the resolution rationale as evidence for both ISO 26262 and ISO/SAE 21434 assessments.
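The SecOC mitigation named for the spoofed-CAN scenario pairs message authentication with a freshness counter so that replayed frames fail verification. A minimal sketch of that idea, with two loud assumptions: HMAC-SHA256 stands in for the AES-CMAC that AUTOSAR SecOC actually specifies (the Python standard library has no CMAC), and the 24-bit MAC / 8-bit truncated-freshness layout mirrors a common SecOC profile rather than any particular project.

```python
import hmac, hashlib, struct

MAC_LEN = 3          # 24-bit truncated MAC carried in the secured PDU
                     # (a common SecOC profile; project-specific in practice)

def secure_pdu(key, data_id, payload, freshness):
    """Build a secured PDU: payload || truncated freshness || truncated MAC.

    The MAC covers the data ID, payload, and the FULL freshness counter,
    so a replayed frame with a stale counter fails verification even
    though only the low 8 bits of freshness travel on the bus.
    """
    msg = struct.pack(">H", data_id) + payload + struct.pack(">Q", freshness)
    mac = hmac.new(key, msg, hashlib.sha256).digest()[:MAC_LEN]
    return payload + bytes([freshness & 0xFF]) + mac

def verify_pdu(key, data_id, pdu, expected_freshness):
    payload = pdu[:-MAC_LEN - 1]
    fv, mac = pdu[-MAC_LEN - 1], pdu[-MAC_LEN:]
    if fv != (expected_freshness & 0xFF):
        return False
    msg = struct.pack(">H", data_id) + payload + struct.pack(">Q", expected_freshness)
    expected = hmac.new(key, msg, hashlib.sha256).digest()[:MAC_LEN]
    return hmac.compare_digest(mac, expected)
```

Note how the joint mitigation works: SecOC rejects the spoofed frame (security), while the safety plausibility check independently bounds the physical effect if authentication is ever bypassed.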


Attack Surface Analysis

AI-Powered Attack Surface Identification

Definition: The attack surface of an automotive ECU is the sum of all interfaces, communication channels, data stores, and processing elements that can be targeted by an attacker. Reducing the attack surface is a foundational cybersecurity principle.

AI assists attack surface analysis by systematically enumerating all entry points from architecture models and interface specifications. Traditional manual methods often miss indirect attack vectors (e.g., a diagnostic interface reachable through a gateway chain).

Attack Surface Categories

Category Examples AI Detection Method Typical CAL
Physical interfaces OBD-II, JTAG, debug headers, USB Hardware BOM analysis, PCB layout parsing CAL 2-3
Wired network interfaces CAN, CAN-FD, LIN, FlexRay, Automotive Ethernet Architecture model analysis, DBC/ARXML parsing CAL 2-4
Wireless interfaces Bluetooth, Wi-Fi, cellular (4G/5G), V2X, NFC Communication stack enumeration, protocol analysis CAL 3-4
Software interfaces APIs, diagnostic services (UDS), OTA update endpoints Code analysis, interface specification parsing CAL 2-4
Data stores Flash memory, EEPROM, secure storage, key material Memory map analysis, firmware structure parsing CAL 3-4
Supply chain Third-party libraries, COTS components, open-source code SCA (Software Composition Analysis), SBOM generation CAL 1-3

AI-Driven Attack Surface Reduction Workflow

  1. Enumerate: AI parses architecture models (SysML, ARXML) and source code to produce a complete interface catalog
  2. Classify: Each interface is classified by exposure level (remote, adjacent, local, physical) and trust boundary crossing
  3. Prioritize: AI ranks interfaces by risk using historical attack data and CVSS environmental metrics
  4. Recommend: AI suggests attack surface reduction measures (disable unused interfaces, add authentication, isolate via firewall rules)
  5. Verify: Automated scanning confirms that reduction measures are effective (e.g., port scanning, interface probing)

Workflow Step Input AI Contribution Output
Enumerate Architecture models, source code NLP extraction of interfaces and data flows Interface catalog
Classify Interface catalog Rule-based classification + ML-based exposure scoring Classified interface list
Prioritize Classified list + threat intelligence Risk ranking using CVSS + historical attack frequency Prioritized risk list
Recommend Prioritized list + control catalog Pattern-matched mitigation recommendations Reduction plan
Verify Reduction plan + running system Automated scanning and probing Verification report
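The Classify and Prioritize steps can be sketched as a small scoring function. Everything here is an illustrative assumption: the exposure weights are loosely modeled on the CVSS Attack Vector metric, the trust-boundary multiplier and the threat-intelligence term are invented for demonstration, and the interface catalog is hypothetical.

```python
from dataclasses import dataclass

# Exposure weights loosely modeled on the CVSS Attack Vector metric.
EXPOSURE_WEIGHT = {"remote": 0.85, "adjacent": 0.62, "local": 0.55, "physical": 0.20}

@dataclass
class Iface:
    name: str
    exposure: str               # remote / adjacent / local / physical
    crosses_trust_boundary: bool
    historical_attacks: int     # e.g. matching entries in a threat-intel feed

def prioritize(interfaces):
    """Rank interfaces for attack surface reduction, highest risk first."""
    def score(i):
        s = EXPOSURE_WEIGHT[i.exposure]
        if i.crosses_trust_boundary:
            s *= 1.5            # boundary-crossing interfaces rank higher
        return s * (1 + min(i.historical_attacks, 10) / 10)
    return sorted(interfaces, key=score, reverse=True)

catalog = [
    Iface("OBD-II (UDS)", "physical", True, 8),
    Iface("Cellular TCU", "remote", True, 5),
    Iface("Debug UART", "physical", False, 1),
]
ranked = prioritize(catalog)    # Cellular TCU outranks the physical interfaces
```

The output of this step feeds the Recommend stage; the human security analyst still calibrates the ranking before any reduction measure is committed.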

Security Requirements Engineering

AI-Assisted Security Requirements Derivation

Process Link: Security requirements engineering (SEC.1) takes inputs from threat analysis (TARA) and produces cybersecurity requirements that flow into SEC.2 for implementation and SEC.3 for verification. AI accelerates this pipeline by automating the derivation of requirements from threat scenarios.

Requirements Derivation Pipeline

The following table illustrates how AI transforms TARA outputs into structured cybersecurity requirements:

TARA Output AI Derivation Step Generated Requirement Type Example
Threat scenario (CAN injection) Map threat to mitigation pattern Authentication requirement "The ECU shall authenticate all safety-relevant CAN messages using SecOC with CMAC-128"
Attack feasibility (High) Determine required control strength Strength requirement "The authentication mechanism shall resist attacks with feasibility rating Medium or higher"
Damage scenario (unauthorized access) Map damage to prevention mechanism Access control requirement "The diagnostic interface shall require Security Access (UDS 0x27) before executing write services"
Risk level (R4, CAL 4) Determine verification depth Verification requirement "The authentication implementation shall be validated through independent penetration testing"
Attack path (OBD-II to CAN) Identify isolation needs Segmentation requirement "The gateway shall enforce message filtering between diagnostic and vehicle CAN domains"
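Mechanically, the "map threat to mitigation pattern" step is template instantiation. The pattern table and threat record below are hypothetical; a real pipeline would cover far more threat classes and route every generated requirement through SEC.1 review before baselining.

```python
# Hypothetical mitigation-pattern table: threat class -> requirement template.
PATTERNS = {
    "message_injection":
        "The {item} shall authenticate all {asset} messages using {control}.",
    "unauthorized_access":
        "The {item} shall require {control} before executing {asset} services.",
}

def derive_requirement(threat):
    """Instantiate the requirement template for one threat record."""
    template = PATTERNS.get(threat["class"])
    if template is None:
        raise ValueError(f"no pattern for threat class {threat['class']!r}")
    return template.format(**threat["params"])

req = derive_requirement({
    "class": "message_injection",
    "params": {"item": "ECU", "asset": "safety-relevant CAN",
               "control": "SecOC with CMAC-128"},
})
```

The value of the AI layer is in choosing the pattern and parameters from the TARA record; the instantiation itself stays deterministic and auditable.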

Quality Attributes for Security Requirements

AI-generated security requirements must satisfy the same quality attributes as any ASPICE-compliant requirement. AI can self-check against these criteria before presenting to human reviewers:

Quality Attribute Description AI Check Method
Unambiguous Single interpretation possible NLP ambiguity detection (modal verbs, vague quantifiers)
Testable Verification criteria are clear Check for measurable acceptance criteria
Traceable Linked to threat and goal Verify trace links to TARA artifacts exist
Consistent No conflicts with other requirements Cross-reference check against existing requirement set
Feasible Implementable with available technology Match against known implementation patterns
Atomic Addresses a single concern Sentence structure analysis (multiple clauses = split candidate)
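The simplest of these checks reduce to pattern matching and can run as a pre-review filter. The word lists and heuristics below are illustrative assumptions; real NLP-based requirement checkers are considerably more sophisticated and still need the human reviewer named in the table.

```python
import re

# Heuristic vague-wording list; extend per project glossary.
VAGUE_TERMS = re.compile(
    r"\b(may|might|could|should|appropriate|adequate|sufficient|as needed|etc)\b",
    re.IGNORECASE)

def quality_check(requirement):
    """Return a list of heuristic quality findings for one requirement."""
    findings = []
    if VAGUE_TERMS.search(requirement):
        findings.append("ambiguous: vague or weak modal wording")
    if " and " in requirement and requirement.count(" shall ") == 1:
        findings.append("possibly non-atomic: conjunction may hide two concerns")
    if "shall" not in requirement:
        findings.append("untestable: no binding 'shall' statement")
    return findings
```

A clean requirement passes silently; a weak one accumulates findings that the requirements engineer resolves before the requirement enters the baseline.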

Security Architecture Patterns

Defense in Depth

The diagram below illustrates the defense-in-depth architecture pattern for automotive ECUs, showing multiple layered security controls from network perimeter to application-level protection.

Defense In Depth


Secure Development Lifecycle

Integrating AI into the SDL

Definition: The Secure Development Lifecycle (SDL) is the integration of security practices into every phase of the software development process. In the context of ASPICE, the SDL is realized through the coordinated execution of SEC.1, SEC.2, and SEC.3 alongside SWE processes.

AI integration into the SDL creates a continuous security feedback loop. Rather than treating security as a gate at the end of development, AI enables security checks at every stage:

SDL Phase ASPICE Process AI-Enabled Activity Trigger
Requirements SEC.1 + SYS.2 AI-assisted TARA, automated requirement derivation from threats Architecture change, new interface added
Design SEC.2 + SWE.2/SWE.3 AI architecture review for security weaknesses, secure pattern recommendation Design review milestone
Implementation SWE.3 + SEC.2 Real-time SAST in IDE, AI-powered code review for security flaws Every commit (pre-commit hook)
Unit Testing SWE.4 + SEC.3 AI-generated security-focused unit tests, boundary value analysis Every build
Integration SWE.5 + SEC.3 Automated DAST, protocol fuzzing, API security testing Integration build
Qualification SWE.6 + SEC.3 Penetration testing (AI-assisted + manual), vulnerability assessment Release candidate
Post-Release SEC.3 + SUP.9 Continuous vulnerability monitoring, SBOM-based CVE tracking Ongoing

Security Quality Gates

Each development phase should include a security quality gate that must pass before proceeding. AI automates gate evaluation:

Gate Criteria AI Automation Pass/Fail Decision
G1: Requirements Complete All threats have derived requirements; traceability complete L2 -- AI checks coverage, flags gaps Human approves
G2: Design Secure Architecture analysis shows no unmitigated high-risk weaknesses L2 -- AI runs weakness analysis Human reviews findings
G3: Code Clean Zero critical/high SAST findings; all security requirements addressed in code L3 -- CI pipeline enforces Automated (with override)
G4: Tests Pass Security test suite passes; fuzz testing achieves target coverage L3 -- CI pipeline reports Automated (with override)
G5: Release Ready Penetration test complete; residual risk accepted; SBOM clean L2 -- AI consolidates evidence Human signs off
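Gate G3 is the easiest to express in a CI pipeline. The sketch below is a minimal illustration: the finding structure (a SARIF-derived severity field) and the override-ticket mechanism are assumptions, not prescribed by ASPICE or ISO/SAE 21434.

```python
# Sketch of the G3 "Code Clean" gate as a CI step: fail the build on any
# critical/high SAST finding unless an authorized override is recorded.
def evaluate_g3(findings, override_ticket=None):
    """findings: list of dicts with a 'severity' key (e.g. parsed from SARIF)."""
    blocking = [f for f in findings if f["severity"] in ("critical", "high")]
    if not blocking:
        return True, "gate passed: no critical/high findings"
    if override_ticket:
        # The override itself is the audit trail for the human decision.
        return True, f"gate overridden by {override_ticket} ({len(blocking)} findings waived)"
    return False, f"gate failed: {len(blocking)} critical/high findings"
```

Keeping the override as an explicit, ticketed input preserves human accountability even at automation level L3.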

SEC Process Workflow

Integrated Security Development

The following diagram shows how the three SEC processes integrate into the overall development lifecycle, with security activities running in parallel with engineering processes from concept through production.

Security Lifecycle


HITL Patterns for Security

Pattern SEC Application Human Role
Reviewer AI generates threat model Expert reviews completeness
Approver AI recommends mitigations Architect approves design
Validator AI runs security scans Analyst validates findings
Decision Maker AI calculates risk scores Manager accepts residual risk
Escalation AI detects vulnerability Expert assesses severity

Process Interactions

SEC with SYS, SWE, SUP, and MAN

Integration Principle: SEC processes do not operate in isolation. Cybersecurity requirements flow into system and software engineering processes (SYS, SWE), while support processes (SUP) provide the infrastructure for configuration management, change control, and quality assurance of security artifacts. Management processes (MAN) ensure that security work is planned, resourced, and tracked.

Interaction Direction Data Flow Example
SEC.1 to SYS.2 SEC produces, SYS consumes Cybersecurity requirements are allocated to system elements CSR-BCM-001 allocated to Body Control Module
SEC.1 to SWE.1 SEC produces, SWE consumes Cybersecurity requirements refine into software requirements CSR-BCM-001 becomes SWR-SecOC-001
SEC.2 to SWE.2 SEC informs, SWE implements Security architecture constraints flow into software architecture Zero-trust zone boundaries in SW architecture
SEC.2 to SWE.3 SEC constrains, SWE implements Secure coding standards and control implementations SecOC library integration in detailed design
SEC.3 to SWE.4/5/6 SEC defines, SWE executes Security test cases integrated into unit, integration, and qualification testing Fuzz tests in SWE.5 integration test suite
SUP.1 to SEC SUP supports SEC Quality assurance of security work products QA review of threat analysis report
SUP.8 to SEC SUP supports SEC Configuration management of security artifacts Version control of cybersecurity requirements
SUP.9 to SEC.3 SUP triggers SEC Problem resolution triggers re-verification CVE disclosure triggers SEC.3 re-assessment
SUP.10 to SEC SUP supports SEC Change request management for security changes Security patch change request workflow
MAN.3 to SEC MAN plans SEC Project management includes security activities in plan TARA scheduled in project plan

Bidirectional Traceability Across Processes

Maintaining bidirectional traceability between SEC and other process groups is essential for ASPICE capability level 2 and above:

Trace Link From To Purpose
Threat to Goal Threat scenario (SEC.1) Cybersecurity goal (SEC.1) Every threat has a mitigating goal
Goal to Requirement Cybersecurity goal (SEC.1) Cybersecurity requirement (SEC.1) Every goal has derived requirements
Requirement to Architecture Cybersecurity requirement (SEC.1) Architecture element (SEC.2/SWE.2) Every requirement is addressed in architecture
Requirement to Test Cybersecurity requirement (SEC.1) Verification measure (SEC.3) Every requirement has verification evidence
Architecture to Code Architecture element (SEC.2) Software unit (SWE.3) Every security control is implemented
Test to Result Verification measure (SEC.3) Test result (SEC.3) Every test case has execution evidence
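Bidirectional completeness for any one trace-link type reduces to two set differences, which is why the table's AI automation levels reach L2-L3 for trace verification. The artifact IDs and link representation below are illustrative.

```python
# Minimal bidirectional-trace check: every artifact on the "from" side must
# link forward, and every artifact on the "to" side must be reachable.
def trace_gaps(sources, targets, links):
    """links: set of (source_id, target_id) pairs. Returns (untraced, orphaned)."""
    linked_sources = {s for s, _ in links}
    linked_targets = {t for _, t in links}
    untraced = sorted(set(sources) - linked_sources)  # e.g. threat without goal
    orphaned = sorted(set(targets) - linked_targets)  # e.g. goal without threat
    return untraced, orphaned

threats = ["TS-001", "TS-002"]
goals = ["CG-001", "CG-002"]
links = {("TS-001", "CG-001")}
untraced, orphaned = trace_gaps(threats, goals, links)
```

Running this check per row of the trace-link table gives the CL2 evidence that every threat, goal, requirement, and test participates in the chain; the engineer then judges whether each link is also semantically correct.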

Regulatory Compliance

Applicable Standards

Standards Mapping: ASPICE SEC processes map to ISO/SAE 21434 lifecycle phases (Concept, Development, Validation) but serve as assessment criteria, not implementation requirements. Organizations must implement ISO/SAE 21434 activities; ASPICE SEC processes assess the capability of those activities.

Note: Prioritize ISO/SAE 21434 and UNECE R155/R156 for automotive projects; ISO 27001 and IEC 62443 for cross-domain applications.

Standard Scope SEC Integration
ISO/SAE 21434 Automotive cybersecurity Primary alignment
UNECE R155 CSMS type approval Compliance evidence
UNECE R156 Software updates SEC.3 verification
ISO 27001 Information security Process framework
IEC 62443 Industrial security Cross-domain

Work Products Overview

Note: Work product IDs are from CS-PAM. For official WP definitions, refer to the ASPICE Cybersecurity Supplement documentation.

WP ID Work Product SEC Process
17-51 Cybersecurity goals SEC.1
17-55 Cybersecurity requirements SEC.1
08-52 Threat analysis report SEC.1/SEC.2
08-53 Risk treatment plan SEC.2
13-52 Security test report SEC.3
08-54 Vulnerability assessment SEC.3

Implementation Roadmap

Phased Adoption of SEC Processes with AI

Approach: Organizations new to ASPICE SEC should adopt a phased approach. Start with establishing basic SEC.1 capability (TARA and requirements), then build SEC.2 (implementation controls), and finally mature SEC.3 (verification). AI integration should follow the same progression, starting with AI-assisted activities (L1) and progressing to AI-proposed activities (L2) as trust in tooling grows.

Phase 1: Foundation (Months 1-3)

Activity Goal AI Level Deliverable
Establish TARA methodology Repeatable threat analysis process L0-L1 TARA procedure document
Define cybersecurity goals template Standardized goal format L0 Goal template with examples
Set up SAST toolchain Automated static analysis in CI L3 CI pipeline with SAST stage
Train team on ISO/SAE 21434 basics Baseline knowledge N/A Training records
Create cybersecurity requirements template Structured requirement format L0 Template with quality criteria

Phase 2: Integration (Months 4-6)

Activity Goal AI Level Deliverable
AI-assisted TARA LLM generates initial threat scenarios L1-L2 AI-augmented TARA report
Automated requirement derivation AI proposes requirements from threats L2 Draft requirements for review
DAST and fuzz testing integration Dynamic testing in CI pipeline L2-L3 Extended CI pipeline
Traceability tooling Automated trace link management L2 Traceability matrix
Security quality gates Defined gate criteria per phase L2 Gate checklist and automation

Phase 3: Maturation (Months 7-12)

Activity Goal AI Level Deliverable
Full AI-assisted security lifecycle AI integrated at every SDL phase L2 Continuous security feedback
Penetration testing program Regular pentest schedule per CAL L2 (AI-assisted) Pentest reports
Vulnerability management process CVE monitoring + automated SBOM checks L2-L3 Vulnerability dashboard
ASPICE SEC assessment readiness CL2 capability for SEC.1, SEC.2, SEC.3 N/A Assessment preparation package
Continuous improvement Metrics-driven process refinement L2 Security metrics dashboard

Maturity Indicators

Indicator CL1 (Performed) CL2 (Managed) CL3 (Established)
TARA execution Done per project, ad hoc Planned, tracked, reviewed Standard process, tailored per project
Requirements traceability Partial, manual Complete, tool-supported Automated, continuously verified
SAST integration Manual runs CI-integrated, results tracked Organization-wide rules, metrics collected
Penetration testing On request Scheduled per CAL level Standard program with defined scope per CAL
Vulnerability management Reactive Monitored, tracked Proactive, SBOM-based, automated alerts

Summary

SEC Process Group in ASPICE 4.0:

  • Supplement-based: SEC processes are defined in the Cybersecurity Supplement (CS-PAM), aligned with ASPICE 4.0
  • ISO/SAE 21434 Aligned: Direct mapping to automotive security standard
  • AI Integration: High potential for automation in threat analysis and testing
  • Human Essential: Risk acceptance, security architecture decisions
  • Key Focus: Defense in depth, continuous security validation

Sub-Chapter Navigation

Chapter Title Description
10.01 SEC.1 Cybersecurity Requirements Threat analysis, cybersecurity goals, requirements derivation, AI-assisted TARA
10.02 SEC.2 Cybersecurity Implementation Attack tree generation, CVSS scoring, risk treatment records, security controls
10.03 SEC.3 Cybersecurity Verification SAST/DAST pipelines, CAN bus security testing, penetration testing, vulnerability assessment
10.04 ISO/SAE 21434 Integration Regulatory compliance mapping, UNECE R155/R156 alignment, cross-standard references