# 3.2: Code Quality and Standards

## Overview

Code quality tools enforce coding standards (MISRA C, AUTOSAR C++), detect bugs, and ensure consistent formatting. Automated enforcement in the IDE and in CI/CD prevents defects and maintains ASPICE compliance.

## Coding Standards for Automotive

> **Note**: MISRA C++:2023 is the latest revision (published October 2023). Verify tool support before adoption, as not all analyzers support the newest rules yet.
| Standard | Language | Focus | Safety Levels | Adoption |
|---|---|---|---|---|
| MISRA C:2012 | C | Safety, portability | All ASIL | Very High |
| MISRA C++:2023 | C++ | Safety, C++ specifics | All ASIL | High |
| AUTOSAR C++14 | C++14 | Modern C++, safety | ASIL B-D | Growing |
| CERT C | C | Security | Security-critical | Medium |
| JPL C | C | Mission-critical | Space/aerospace | Low |
## MISRA C Compliance Tools

### Cppcheck with MISRA Addon

```bash
# Install cppcheck with the MISRA addon
sudo apt-get install cppcheck

# Download the MISRA addon rule texts (requires the MISRA PDF)

# Run the MISRA check
cppcheck --addon=misra \
    --enable=all \
    --std=c11 \
    --platform=unix32 \
    --suppress=missingIncludeSystem \
    src/

# Example output:
# [src/door_lock.c:42]: (style) misra violation (rule 10.4)
# [src/door_lock.c:67]: (error) misra violation (rule 11.3)
```
### Configuration File

> **Note**: File patterns in the deviation configuration are illustrative. Adjust patterns to match your project's directory structure and naming conventions.

```json
{
    "script": "misra.py",
    "args": [
        "--rule-texts=misra_rules.txt",
        "--no-summary"
    ],
    "misra-config": {
        "compliance": "mandatory",
        "deviations": [
            {
                "rule": "2.3",
                "reason": "Unused types acceptable in header files",
                "files": ["*.h"]
            },
            {
                "rule": "11.3",
                "reason": "Hardware register access requires casts",
                "files": ["hal/*.c"]
            }
        ]
    }
}
```
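The deviation records above can be converted mechanically into tool-readable suppressions so the documented deviations and the analyzer configuration never drift apart. A minimal sketch, assuming cppcheck's MISRA addon error-id scheme (`misra-c2012-<rule>`) — verify the exact ids against your cppcheck version:

```python
import json


def deviations_to_suppressions(config_path: str) -> list:
    """Convert MISRA deviation records into cppcheck suppression lines.

    Assumes cppcheck's MISRA addon id format (misra-c2012-<rule>); the
    documented reasons stay in the JSON config, which remains the
    authoritative audit record.
    """
    with open(config_path) as f:
        config = json.load(f)

    lines = []
    for dev in config["misra-config"]["deviations"]:
        for pattern in dev["files"]:
            # One suppression per (rule, file-pattern) pair
            lines.append(f"misra-c2012-{dev['rule']}:{pattern}")
    return lines
```

The resulting lines can be written to a `suppressions.txt` file and passed to cppcheck via `--suppressions-list=suppressions.txt`.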
### PC-lint Plus Configuration

```
// PC-lint Plus configuration for AUTOSAR C++
+autosar            // Enable AUTOSAR C++14 rules
-strong(AJX)        // Strong type checking
+fva                // Enable all checks
-passes=2           // Two-pass analysis

// Suppress specific rules with justification
-e9003              // Rule 7-3-1: Global declarations (hardware access)
-e9026              // Rule 9-3-3: Function-like macro (legacy code)

// Custom severity
-esym(769,*)        // Warning for all unused enums
-esym(714,ISR_*)    // Info for interrupt handlers
```
## Automated Code Formatting

### clang-format for MISRA/AUTOSAR

```yaml
# .clang-format (MISRA C style)
---
Language: Cpp
Standard: c++14
ColumnLimit: 100

# Indentation
IndentWidth: 4
UseTab: Never
TabWidth: 4

# Braces
BreakBeforeBraces: Allman
AllowShortFunctionsOnASingleLine: None
AllowShortIfStatementsOnASingleLine: Never
AllowShortLoopsOnASingleLine: false

# Pointers and references
PointerAlignment: Right
DerivePointerAlignment: false

# Spacing
SpaceAfterCStyleCast: true
SpacesInParentheses: false
SpacesInSquareBrackets: false

# Line breaks
AlwaysBreakAfterReturnType: None
BreakBeforeBinaryOperators: NonAssignment
BreakBeforeTernaryOperators: true

# Alignment
AlignConsecutiveAssignments: false
AlignConsecutiveDeclarations: true
AlignEscapedNewlines: Right
AlignOperands: true
AlignTrailingComments: true

# Includes
IncludeBlocks: Regroup
SortIncludes: true
...
```
### Pre-commit Hook

```bash
#!/bin/bash
# .git/hooks/pre-commit
# Format staged C/C++ files

STAGED_FILES=$(git diff --cached --name-only --diff-filter=ACM | grep -E '\.(c|cpp|h|hpp)$')

if [ -n "$STAGED_FILES" ]; then
    echo "Running clang-format on staged files..."
    clang-format -i $STAGED_FILES
    git add $STAGED_FILES

    echo "Running cppcheck..."
    # --error-exitcode=1 makes cppcheck return non-zero when it finds
    # issues; without it the exit-status check below never fires.
    cppcheck --addon=misra --quiet --error-exitcode=1 $STAGED_FILES
    if [ $? -ne 0 ]; then
        echo "Cppcheck found issues. Commit aborted."
        exit 1
    fi
fi

exit 0
```
## Linters and Analyzers

### cpplint (Google C++ Style)

```bash
# Install
pip install cpplint

# Run on the source tree
cpplint --recursive \
    --filter=-whitespace/tab,-build/header_guard \
    --linelength=100 \
    src/

# CI integration
cpplint --output=junit src/**/*.{c,cpp,h} > cpplint-report.xml
```
### pylint for Python Tools

```ini
# .pylintrc
[MASTER]
jobs=4
persistent=yes
extension-pkg-whitelist=numpy

[MESSAGES CONTROL]
disable=C0111,  # missing-docstring (for obvious functions)
        C0103,  # invalid-name (allow single-letter vars in math)
        R0913   # too-many-arguments (acceptable for config)

[FORMAT]
max-line-length=100
indent-string='    '
indent-after-paren=4

[DESIGN]
max-args=8
max-locals=20
max-returns=6
max-branches=15
```
## AI-Powered Static Analysis

Traditional static analysis tools operate on fixed rule sets and pattern matching. AI-powered analysis adds contextual understanding, semantic reasoning, and the ability to detect defect patterns that rule-based systems miss.

> **Key Insight**: AI does not replace MISRA/CERT checkers. It augments them by reducing false positives, prioritizing true defects, and finding logic errors that no rule can express.

### AI Analysis Capabilities Beyond Rule-Based Checking
| Capability | Traditional Tools | AI-Enhanced Tools |
|---|---|---|
| Rule violation detection | Exact match | Exact match + intent inference |
| False positive rate | 20-40% typical | 5-15% with AI triage |
| Semantic bug detection | Limited | Context-aware reasoning |
| Cross-function analysis | Shallow | Deep call-graph traversal |
| Natural language findings | Coded messages | Plain-language explanations |
| Fix suggestions | Template-based | Context-specific code patches |
| Priority ranking | Severity only | Risk-weighted with business context |
### AI-Assisted Finding Triage

```python
#!/usr/bin/env python3
"""
AI triage of static analysis findings.
Classifies each finding as true/false positive and
suggests targeted remediation.
"""
import json

from anthropic import Anthropic

TRIAGE_PROMPT = """You are a senior embedded safety engineer.
Analyze this static analysis finding and respond in JSON:

Checker: {checker}
Rule: {rule}
File: {file}:{line}
Message: {message}

Code context:
{context}

Respond with:
{{
  "classification": "TRUE_POSITIVE" | "FALSE_POSITIVE" | "NEEDS_REVIEW",
  "confidence": 0-100,
  "safety_impact": "CRITICAL" | "HIGH" | "MEDIUM" | "LOW" | "NONE",
  "explanation": "...",
  "suggested_fix": "...",
  "related_rules": ["MISRA-...", "CERT-..."]
}}
"""


def triage_findings(findings_path: str) -> list:
    """Triage static analysis findings using AI."""
    client = Anthropic()

    with open(findings_path) as f:
        findings = json.load(f)

    triaged = []
    for finding in findings:
        response = client.messages.create(
            model="claude-sonnet-4-6",
            max_tokens=512,
            messages=[{
                "role": "user",
                "content": TRIAGE_PROMPT.format(**finding)
            }]
        )
        result = json.loads(response.content[0].text)
        result["original"] = finding
        triaged.append(result)

    # Sort by safety impact, then by confidence
    impact_order = {"CRITICAL": 0, "HIGH": 1, "MEDIUM": 2, "LOW": 3, "NONE": 4}
    triaged.sort(key=lambda x: (impact_order.get(x["safety_impact"], 5), -x["confidence"]))
    return triaged
```
---
## Coding Standards Enforcement
AI tools accelerate adoption and enforcement of coding standards by providing real-time guidance, automated deviation documentation, and intelligent rule mapping across multiple standards.
### AI-Assisted MISRA C/C++ Enforcement
> **Workflow**: Developer writes code, AI identifies violations in real time, suggests compliant alternatives, and drafts deviation records where suppression is justified.
| Enforcement Stage | Manual Approach | AI-Assisted Approach |
|-------------------|-----------------|----------------------|
| Violation detection | Batch run after commit | Real-time in IDE |
| Fix guidance | Look up rule text manually | AI explains rule + shows compliant code |
| Deviation documentation | Engineer writes justification | AI drafts justification for review |
| Cross-standard mapping | Manual cross-reference | AI maps MISRA to CERT to AUTOSAR |
| New rule adoption | Full codebase audit | AI prioritizes highest-risk violations |
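The cross-standard mapping row above can be backed by a simple lookup table that the AI assistant populates and engineers review. A sketch with placeholder entries — the rule IDs in `RULE_MAP` are illustrative, not authoritative cross-references; real mappings come from the licensed MISRA/CERT/AUTOSAR cross-reference tables:

```python
# Placeholder cross-references -- populate from the official MISRA/CERT/
# AUTOSAR mapping tables, which are licensed documents.
RULE_MAP = {
    "MISRA-C:2012-11.3": {"cert": ["CERT-EXAMPLE-1"], "autosar": ["AUTOSAR-EXAMPLE-1"]},
    "MISRA-C:2012-21.3": {"cert": ["CERT-EXAMPLE-2"], "autosar": []},
}


def related_rules(rule_id: str) -> dict:
    """Return related rule IDs in the other standards (empty lists if unmapped)."""
    return RULE_MAP.get(rule_id, {"cert": [], "autosar": []})
```

Keeping the table as reviewable data rather than buried tool logic makes it easy to audit during an assessment.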
### AUTOSAR C++14 Compliance Checking
```yaml
# autosar-checker-config.yaml
standard: AUTOSAR-CPP14
severity_mapping:
mandatory: error
required: error
advisory: warning
ai_features:
auto_fix_suggestions: true
deviation_draft: true
cross_reference:
- MISRA-CPP-2023
- CERT-CPP
rule_categories:
- name: "Language independent issues"
rules: [A0-1-1, A0-1-2, A0-1-3, A0-1-4, A0-1-5, A0-1-6]
ai_priority: high
- name: "Initialization"
rules: [A8-5-0, A8-5-1, A8-5-2, A8-5-3]
ai_priority: critical
- name: "Exception handling"
rules: [A15-0-1, A15-0-2, A15-1-1, A15-1-2]
ai_priority: critical
```

### CERT C Secure Coding Integration
```bash
# Run CERT C checks via clang-tidy with AI post-processing
clang-tidy -checks='cert-*' \
    -export-fixes=cert-fixes.yaml \
    src/**/*.c -- \
    -I include/ -DSTM32F4xx

# AI post-processing: classify findings by exploitability
python3 ai_cert_triage.py \
    --input cert-fixes.yaml \
    --output cert-prioritized.json \
    --model claude-sonnet-4-6
```
## Code Metrics and Quality Gates

AI-driven quality scoring goes beyond simple threshold checks. It provides trend prediction, technical debt forecasting, and risk-weighted scoring that accounts for the safety criticality of each module.

### Key Quality Metrics
| Metric | ASIL A-B Target | ASIL C-D Target | AI Enhancement |
|---|---|---|---|
| Cyclomatic complexity | <= 15 per function | <= 10 per function | Predicts defect-prone functions |
| MISRA compliance | >= 95% rules | 100% mandatory + required | Prioritizes highest-risk violations |
| Code duplication | < 5% | < 3% | Identifies semantic clones |
| Comment ratio | >= 20% | >= 30% | Assesses comment quality, not just ratio |
| Function length | <= 100 lines | <= 60 lines | Suggests decomposition strategies |
| Nesting depth | <= 5 levels | <= 4 levels | Recommends flattening patterns |
| Unit test coverage | >= 80% | >= 95% (MC/DC) | Identifies untested safety paths |
| Technical debt ratio | < 5% | < 2% | Forecasts debt growth trajectory |
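The per-function limits in the table can be enforced directly in a gate script. A minimal sketch using the complexity, length, and nesting targets above — the metric key names are assumptions about your metrics-extraction tooling:

```python
# ASIL-banded per-function limits taken from the table above.
FUNCTION_LIMITS = {
    "ASIL_AB": {"cyclomatic_complexity": 15, "function_length": 100, "nesting_depth": 5},
    "ASIL_CD": {"cyclomatic_complexity": 10, "function_length": 60, "nesting_depth": 4},
}


def check_function_metrics(metrics: dict, asil_band: str) -> list:
    """Return human-readable limit violations for one function's metrics."""
    return [
        f"{name} = {metrics[name]} exceeds limit {limit}"
        for name, limit in FUNCTION_LIMITS[asil_band].items()
        if metrics.get(name, 0) > limit
    ]
```

A function that passes the ASIL A-B band can still fail the stricter ASIL C-D band, which is exactly the graduated enforcement the table describes.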
### AI-Driven Quality Scoring

```python
"""
AI-enhanced quality scoring for embedded safety modules.
Weights metrics by ASIL level and module criticality.
"""

ASIL_WEIGHTS = {
    "ASIL_D": {
        "misra_compliance": 0.25,
        "cyclomatic_complexity": 0.15,
        "test_coverage": 0.25,
        "defect_density": 0.15,
        "technical_debt": 0.10,
        "review_coverage": 0.10,
    },
    "ASIL_B": {
        "misra_compliance": 0.20,
        "cyclomatic_complexity": 0.15,
        "test_coverage": 0.20,
        "defect_density": 0.15,
        "technical_debt": 0.15,
        "review_coverage": 0.15,
    },
    "QM": {
        "misra_compliance": 0.10,
        "cyclomatic_complexity": 0.15,
        "test_coverage": 0.15,
        "defect_density": 0.20,
        "technical_debt": 0.20,
        "review_coverage": 0.20,
    },
}


def compute_quality_score(metrics: dict, asil_level: str) -> float:
    """Compute the weighted quality score for a module.

    Each metric is expected on a 0-100 scale; the result is 0-100.
    """
    weights = ASIL_WEIGHTS[asil_level]
    score = 0.0
    for metric, weight in weights.items():
        normalized = min(metrics.get(metric, 0) / 100.0, 1.0)
        score += normalized * weight
    return round(score * 100, 1)
```
### Technical Debt Analysis

> **AI Advantage**: Traditional technical debt tools count violations. AI-enhanced analysis estimates the actual effort to remediate, predicts which debt items will cause future defects, and recommends an optimal fix ordering that maximizes safety improvement per engineering hour.
| Debt Category | Example | AI Remediation Estimate |
|---|---|---|
| MISRA violations | Rule 11.3 casts in HAL layer | 2h per module (batch refactor) |
| Dead code | Unreachable branches in legacy FSM | 4h analysis + 1h removal |
| Complex functions | 150-line ISR handler | 8h decomposition + retest |
| Missing tests | Safety function without MC/DC | 16h per function |
| Outdated comments | Stale doxygen in driver layer | 1h AI-assisted regeneration |
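The "safety improvement per engineering hour" ordering reduces to a ratio sort once the AI has estimated effort and safety gain per debt item. A sketch — the `effort_hours` and `safety_gain` field names are illustrative, not a specific tool's schema:

```python
def prioritize_debt(items: list) -> list:
    """Order debt items by estimated safety gain per engineering hour,
    highest payoff first. Each item carries AI-estimated `safety_gain`
    and `effort_hours` fields (assumed schema)."""
    return sorted(
        items,
        key=lambda item: item["safety_gain"] / item["effort_hours"],
        reverse=True,
    )
```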
## Automated Code Review

AI code reviewers provide continuous, consistent review coverage for safety-critical codebases. They complement human reviewers by catching mechanical issues so that engineers can focus on design and safety reasoning.

### AI Review Capabilities
| Review Aspect | Human Reviewer | AI Reviewer | Combined |
|---|---|---|---|
| MISRA rule checking | Slow, inconsistent | Fast, 100% coverage | AI checks, human verifies |
| Logic errors | Excellent | Good for common patterns | AI flags, human confirms |
| Safety pattern violations | Requires expertise | Learns from codebase | AI pre-screens, expert decides |
| Naming conventions | Tedious but thorough | Instant, no fatigue | AI enforces, human overrides |
| Documentation completeness | Subjective | Checklist-based | AI verifies completeness, human judges quality |
| Cross-module impact | Requires system knowledge | Analyzes call graphs | AI surfaces dependencies, human assesses risk |
### AI Review Integration

```yaml
# .ai-review.yaml - AI code review configuration
review_rules:
  safety_critical:
    enabled: true
    checks:
      - misra_compliance
      - null_pointer_checks
      - buffer_bounds_validation
      - error_return_handling
      - interrupt_safety
    fail_on: [misra_mandatory, null_deref, buffer_overflow]

  automotive_patterns:
    enabled: true
    checks:
      - autosar_swc_patterns
      - rte_api_usage
      - dem_event_reporting
      - nvm_block_access
    severity: warning

  documentation:
    enabled: true
    checks:
      - function_header_complete
      - safety_classification_present
      - requirement_traceability_tag
    fail_on: [missing_safety_class]

reporting:
  format: [github_pr_comment, json, html]
  include_fix_suggestions: true
  group_by: [file, severity, rule]
```
### Review Workflow for Safety-Critical Code

> **Human-in-the-Loop**: AI review findings are recommendations. For ASIL C/D code, a qualified safety engineer must approve or reject every AI finding before it influences the codebase.

1. Developer pushes code to a feature branch
2. CI triggers AI review alongside static analysis
3. AI reviewer posts findings as PR comments with severity and fix suggestions
4. Human reviewer receives the AI-triaged finding list (critical items first)
5. Human reviewer approves/rejects AI suggestions and adds design-level feedback
6. Developer addresses all findings; AI verifies fixes in a follow-up commit
7. Final approval from a qualified reviewer (ASPICE SWE.5 requirement)
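The human disposition requirement can be enforced mechanically before final approval. A sketch of such a merge gate — the `disposition` and `safety_impact` field names are assumptions about the review tool's export format:

```python
def can_merge(findings: list, asil_level: str) -> bool:
    """Block the merge until AI findings carry a human disposition.

    ASIL C/D: every finding needs an explicit approve/reject decision.
    Lower levels: only CRITICAL findings block while undispositioned.
    """
    def dispositioned(finding):
        return finding.get("disposition") in ("approved", "rejected")

    if asil_level in ("ASIL_C", "ASIL_D"):
        return all(dispositioned(f) for f in findings)
    return all(dispositioned(f) for f in findings
               if f.get("safety_impact") == "CRITICAL")
```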
## Refactoring Assistance

AI-assisted refactoring helps modernize legacy embedded codebases while maintaining compliance with safety standards. The AI understands both the target coding standard and the existing code semantics, enabling safe transformations.

### AI Refactoring Strategies
| Refactoring Type | Risk Level | AI Role | Compliance Concern |
|---|---|---|---|
| Extract function | Low | Suggests split points, generates signature | Must maintain MISRA compliance in new function |
| Replace magic numbers | Low | Identifies constants, proposes enum/define | MISRA Rule 7.2 (unsigned suffix) |
| Flatten nested logic | Medium | Proposes guard-clause patterns | Must preserve all execution paths |
| Modernize C to C++14 | High | Maps C patterns to AUTOSAR-compliant C++ | Full AUTOSAR C++14 rule re-check required |
| Remove dead code | Medium | Identifies unreachable paths via analysis | Safety argument: dead code may be defensive |
| Consolidate duplicates | Medium | Detects semantic clones, proposes shared function | Shared code increases coupling; ASIL decomposition impact |
### Example: AI-Suggested Function Extraction

```c
/* BEFORE: Monolithic function flagged for high complexity */
void ProcessSensorData(void)
{
    /* 120 lines: read sensors, validate, filter, transform, output */
}

/* AFTER: AI-suggested decomposition */
static SensorRawData_t  ReadSensorInputs(void);
static bool             ValidateSensorRange(const SensorRawData_t *raw);
static SensorFiltered_t ApplyKalmanFilter(const SensorRawData_t *raw);
static void             OutputProcessedData(const SensorFiltered_t *filtered);

void ProcessSensorData(void)
{
    SensorRawData_t raw = ReadSensorInputs();

    if (ValidateSensorRange(&raw))
    {
        SensorFiltered_t filtered = ApplyKalmanFilter(&raw);
        OutputProcessedData(&filtered);
    }
    else
    {
        ReportSensorFault(DEM_EVENT_SENSOR_RANGE);
    }
}
```

> **Safety Note**: After any AI-suggested refactoring, re-run the full static analysis suite and regression tests. For ASIL C/D modules, a formal impact analysis is required before accepting structural changes.
## Tool Comparison

The following table compares code quality tools with respect to their AI-enhanced features for automotive embedded development.

> **Note**: AI feature availability changes rapidly. Verify current capabilities with vendors before making procurement decisions.
| Tool | Standards Supported | AI Features | IDE Integration | CI/CD Support | Cost | Best For |
|---|---|---|---|---|---|---|
| Coverity | MISRA, CERT, AUTOSAR | AI-assisted triage, defect prediction | VS Code, Eclipse | Jenkins, GitLab, GitHub | $$$ | ASIL D projects |
| Polyspace | MISRA, CERT | Formal verification, AI triage | MATLAB/Simulink | Jenkins, Azure | $$$ | Formal methods, model-based |
| Klocwork | MISRA, CERT, AUTOSAR | AI prioritization, incremental analysis | VS Code, Eclipse, CLion | Jenkins, GitHub | $$$ | Large codebases |
| Parasoft C/C++test | MISRA, CERT, AUTOSAR | AI fix suggestions, compliance reporting | VS Code, Eclipse | Jenkins, GitLab | $$$ | DO-178C, automotive |
| SonarQube | Partial MISRA (plugin) | AI code smell detection, debt estimation | VS Code (SonarLint) | All major CI | Free-$$$ | Quality dashboards |
| PC-lint Plus | MISRA, AUTOSAR | Rule correlation | VS Code, CLI | Any CI | $$ | Legacy migration |
| Cppcheck | MISRA (addon) | None (open-source) | VS Code, Eclipse | All major CI | Free | Lightweight checking |
| Clang-Tidy | CERT, partial MISRA | None (open-source) | VS Code, CLion | All major CI | Free | LLVM ecosystem |
| Helix QAC | MISRA, AUTOSAR, CERT | AI dashboard analytics | Eclipse | Jenkins | $$$ | Full MISRA certification |
## Integration with CI/CD

Automated quality gates in CI/CD pipelines ensure that no code reaches production without passing all quality and compliance checks. AI enhances these pipelines by providing intelligent gate decisions that go beyond simple threshold checks.

### Multi-Stage Quality Pipeline
```yaml
# .gitlab-ci.yml - AI-enhanced quality pipeline
stages:
  - format
  - analyze
  - ai-review
  - quality-gate

formatting-check:
  stage: format
  script:
    - clang-format --dry-run --Werror src/**/*.{c,cpp,h,hpp}
  allow_failure: false

static-analysis:
  stage: analyze
  script:
    - cppcheck --addon=misra --xml --xml-version=2 src/ 2> cppcheck-report.xml
    - clang-tidy -checks='cert-*,bugprone-*' src/**/*.c -- -Iinclude/ > clang-tidy.log
  artifacts:
    paths:
      - cppcheck-report.xml
      - clang-tidy.log

ai-code-review:
  stage: ai-review
  script:
    - python3 scripts/ai_review.py
        --findings cppcheck-report.xml
        --source src/
        --output ai-review-report.json
        --model claude-sonnet-4-6
  artifacts:
    paths:
      - ai-review-report.json

quality-gate:
  stage: quality-gate
  script:
    - python3 scripts/quality_gate.py
        --misra-report cppcheck-report.xml
        --ai-report ai-review-report.json
        --asil-level $ASIL_LEVEL
        --fail-on-critical
  dependencies:
    - static-analysis
    - ai-code-review
  allow_failure: false
```
### Quality Gate Decision Logic

```python
"""
AI-enhanced quality gate: combines static analysis results
with AI triage to make pass/fail decisions.
"""
import json


def evaluate_gate(misra_report: str, ai_report: str, asil_level: str) -> bool:
    """Evaluate the quality gate with ASIL-dependent thresholds.

    Only the AI-confirmed critical-findings check is shown here; the
    MISRA violation counts and minimum quality score are checked the
    same way against the thresholds below.
    """
    thresholds = {
        "ASIL_D": {"mandatory_violations": 0, "critical_findings": 0, "min_score": 95},
        "ASIL_C": {"mandatory_violations": 0, "critical_findings": 0, "min_score": 90},
        "ASIL_B": {"mandatory_violations": 0, "critical_findings": 2, "min_score": 85},
        "ASIL_A": {"mandatory_violations": 0, "critical_findings": 5, "min_score": 80},
        "QM": {"mandatory_violations": 5, "critical_findings": 10, "min_score": 70},
    }
    gate = thresholds[asil_level]

    with open(ai_report) as f:
        ai_results = json.load(f)

    # Count AI-confirmed critical findings (true positives only)
    critical = sum(
        1 for r in ai_results
        if r["classification"] == "TRUE_POSITIVE"
        and r["safety_impact"] == "CRITICAL"
    )

    if critical > gate["critical_findings"]:
        print(f"GATE FAILED: {critical} critical findings (max {gate['critical_findings']})")
        return False

    print("GATE PASSED")
    return True
```
## ASPICE Compliance

Code quality and standards enforcement maps directly to ASPICE process requirements. AI tools generate evidence artifacts that satisfy assessor expectations for SWE.3 and SWE.4.

### ASPICE Process Mapping
| ASPICE Process | Requirement | Code Quality Activity | AI Contribution |
|---|---|---|---|
| SWE.3 Software Detailed Design and Unit Construction | BP5: Ensure consistency | Coding standards enforcement | AI verifies design-to-code consistency |
| SWE.3 | BP6: Communicate results | Quality reports, deviation records | AI generates summary reports |
| SWE.4 Software Unit Verification | BP1: Develop unit verification strategy | Static analysis tool selection | AI recommends tool configuration per ASIL |
| SWE.4 | BP2: Develop unit verification criteria | Quality gate thresholds | AI calibrates thresholds from historical data |
| SWE.4 | BP3: Perform unit verification | Static analysis execution | AI triages findings, reduces false positives |
| SWE.4 | BP4: Ensure consistency | Bidirectional traceability | AI maps findings to requirements |
| SWE.4 | BP5: Summarize and communicate | Verification reports | AI generates assessment-ready reports |
| SUP.1 Quality Assurance | BP2: Assure work products | Quality gate evidence | AI provides trend analysis dashboards |
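The SWE.4 BP4 traceability mapping starts with extracting requirement tags from source comments. A minimal sketch — the `@req: <ID>` comment convention is an assumption, so substitute your project's tagging scheme:

```python
import re

# Assumed convention: /* @req: SWR-123 */ or // @req: SWR-123
REQ_TAG = re.compile(r"@req:\s*([A-Z]+-\d+)")


def extract_requirement_tags(source: str) -> dict:
    """Map line numbers to requirement IDs found in source comments."""
    tags = {}
    for lineno, line in enumerate(source.splitlines(), start=1):
        match = REQ_TAG.search(line)
        if match:
            tags[lineno] = match.group(1)
    return tags
```

Joining these tags with static analysis finding locations gives the finding-to-requirement mapping that an AI assistant can then summarize for the verification report.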
### Evidence Artifact Generation

> **Assessor Expectation**: ASPICE assessors look for objective evidence that coding standards are defined and enforced, and that deviations are documented. AI tools can generate this evidence automatically.
| Evidence Artifact | ASPICE Work Product | AI Generation Method |
|---|---|---|
| Coding standard configuration | SWE.3 WP | Export tool configuration as versioned artifact |
| Static analysis report | SWE.4 WP | Automated report from CI pipeline |
| Deviation record | SWE.3 WP | AI-drafted justification with engineer approval |
| Quality trend report | SUP.1 WP | AI dashboard with historical trend analysis |
| Tool qualification record | SUP.8 WP | AI-compiled tool validation evidence package |
## Safety-Critical Code Standards

ISO 26262 imposes ASIL-dependent requirements on coding standards enforcement. Higher ASIL levels demand stricter compliance, more rigorous verification, and greater tool confidence. AI helps enforce these graduated requirements consistently across large codebases.

### ASIL-Dependent Enforcement Rules
| Aspect | QM | ASIL A | ASIL B | ASIL C | ASIL D |
|---|---|---|---|---|---|
| Coding standard | Recommended | Required | Required | Required | Required |
| MISRA mandatory rules | Optional | Required | Required | Required | Required |
| MISRA required rules | Optional | Recommended | Required | Required | Required |
| MISRA advisory rules | Optional | Optional | Recommended | Recommended | Required |
| Static analysis tools | 1 tool | 1 tool | 1+ tools | 2+ tools | 2+ independent tools |
| Tool qualification | Not required | TCL 3 | TCL 2 | TCL 2 | TCL 1 |
| Deviation approval | Informal | Team lead | Safety engineer | Safety manager | Independent assessor |
| AI tool usage | Unrestricted | Supervised | Supervised + validated | Supervised + qualified | Qualified + independent verification |
### AI Enforcement for ASIL D

```yaml
# asil-d-enforcement.yaml
# Strictest enforcement profile for ASIL D safety functions
enforcement:
  misra_c_2012:
    mandatory_rules: error        # Zero tolerance
    required_rules: error         # Zero tolerance
    advisory_rules: warning       # Must review; may deviate with approval

  independent_tools:
    primary: coverity             # TUV-qualified
    secondary: polyspace          # TUV-qualified
    agreement_required: true      # Both tools must agree on a clean result

  ai_assistance:
    triage: true                  # AI triage permitted
    auto_fix: false               # No automatic fixes for ASIL D
    deviation_draft: true         # AI drafts, safety engineer approves
    independent_review: true      # AI findings reviewed by an independent engineer

  quality_gate:
    mandatory_violations: 0
    required_violations: 0
    advisory_violations_max: 10   # Each must have an approved deviation
    cyclomatic_complexity_max: 10
    function_length_max: 60
    nesting_depth_max: 4
    unit_test_coverage_min: 95    # MC/DC coverage
```
> **ISO 26262 Part 6, Table 1**: Methods for the design of the software unit are graded by ASIL. AI tools can enforce these method requirements automatically but must themselves be qualified to the appropriate Tool Confidence Level (TCL).
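The two-tool agreement policy for ASIL D needs a way to compare findings from the independent analyzers. A sketch of a location-based agreement filter — finding dicts with `file`/`line` keys are an assumed normalized export, and unconfirmed findings from either tool still require manual review rather than silent discard:

```python
def cross_confirmed(primary: list, secondary: list) -> list:
    """Return primary-tool findings that the independent secondary tool
    also reports at the same file and line. Assumes both tools' exports
    have been normalized to dicts with `file` and `line` keys."""
    confirmed_at = {(f["file"], f["line"]) for f in secondary}
    return [f for f in primary if (f["file"], f["line"]) in confirmed_at]
```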
## CI/CD Quality Gates

### SonarQube Quality Gate

> **Note**: The project configuration shown uses example project names and paths. Customize for your project structure.
```properties
# sonar-project.properties
sonar.projectKey=door_lock_controller
sonar.projectName=Door Lock Controller ECU
sonar.projectVersion=2.0.0
sonar.sources=src
sonar.tests=tests
sonar.language=c,cpp

# Quality gate thresholds
sonar.qualitygate.wait=true
sonar.qualitygate.timeout=300

# Coverage requirements
sonar.coverage.exclusions=**/test/**,**/mock/**
sonar.cpd.exclusions=**/generated/**

# MISRA rules
sonar.cxx.misra.reportPath=misra-report.xml

# Custom metrics
sonar.cxx.metrics.cyclomatic_complexity=10
sonar.cxx.metrics.function_complexity=15
```
### GitLab CI Quality Job

```yaml
# .gitlab-ci.yml
code_quality:
  stage: test
  image: sonarsource/sonar-scanner-cli:latest
  variables:
    SONAR_HOST_URL: "https://sonarqube.company.com"
  script:
    - sonar-scanner
        -Dsonar.qualitygate.wait=true
        -Dsonar.login=$SONAR_TOKEN
  only:
    - merge_requests
    - develop
    - main
  artifacts:
    reports:
      codequality: gl-code-quality-report.json
```
## Implementation Checklist

Use this checklist to plan and track the adoption of AI-enhanced code quality and standards enforcement in your project.

### Phase 1: Foundation

| Item | Owner | Status |
|---|---|---|
| Select coding standard (MISRA C:2012, AUTOSAR C++14, CERT C) based on ASIL level | Safety Engineer | |
| Configure primary static analysis tool (Coverity, Polyspace, or Cppcheck) | Build Engineer | |
| Define quality gate thresholds per ASIL level | Quality Manager | |
| Set up clang-format configuration aligned with chosen standard | Development Lead | |
| Implement pre-commit hooks for formatting and basic checks | Build Engineer | |
| Document deviation process and approval workflow | Safety Manager | |
### Phase 2: CI/CD Integration

| Item | Owner | Status |
|---|---|---|
| Integrate static analysis into CI pipeline | DevOps Engineer | |
| Configure SonarQube or equivalent quality dashboard | DevOps Engineer | |
| Set up quality gates that block merges on violations | Build Engineer | |
| Generate ASPICE-compliant evidence artifacts from pipeline | Quality Manager | |
| Validate tool qualification for ISO 26262 TCL requirements | Safety Engineer | |
### Phase 3: AI Enhancement

| Item | Owner | Status |
|---|---|---|
| Deploy AI triage for static analysis findings | AI/ML Engineer | |
| Configure AI code review for safety-critical modules | Development Lead | |
| Train AI on project-specific deviation patterns | AI/ML Engineer | |
| Integrate AI quality scoring with ASIL-weighted metrics | Quality Manager | |
| Validate AI tool outputs against known-good baselines | Verification Engineer | |
| Document AI tool qualification evidence per TCL | Safety Engineer | |
### Phase 4: Continuous Improvement

| Item | Owner | Status |
|---|---|---|
| Monitor false positive rates and tune AI models | AI/ML Engineer | |
| Track quality trends and adjust thresholds quarterly | Quality Manager | |
| Conduct retrospectives on AI-flagged vs. human-flagged defects | Development Lead | |
| Update coding standard configuration for new rule revisions | Safety Engineer | |
| Audit deviation database for completeness and currency | Safety Manager | |
## Summary

Code quality and standards tools ensure ASPICE compliance:

- **MISRA C/C++**: Mandatory for ASIL B-D automotive software
- **Automated Enforcement**: IDE linting + CI/CD quality gates
- **Formatting**: 100% automated via clang-format
- **Quality Metrics**: Complexity, coverage, and duplication tracked
- **Deviations**: Documented and justified for audit trails
- **AI Enhancement**: Reduces false positives, prioritizes safety-critical findings, generates compliance evidence
- **ASPICE Mapping**: Direct traceability to SWE.3, SWE.4, and SUP.1 work products
- **Safety Scaling**: ASIL-dependent enforcement ensures proportionate rigor

**Best Practices**:
- Enforce formatting automatically (pre-commit hooks)
- Run static analysis in IDE and CI/CD
- Document all MISRA deviations with justification
- Use SonarQube or equivalent for trend tracking
- Fail builds on quality gate violations
- Deploy AI triage to reduce false positive noise by 50% or more
- Generate ASPICE evidence artifacts automatically from CI pipelines
- Scale enforcement rigor proportionally to ASIL level
- Qualify AI tools to the appropriate TCL before relying on their output
- Maintain human-in-the-loop for all safety-critical decisions