# 2.4: MBSE and Architecture Tools

## What You'll Learn

- Understand Model-Based Systems Engineering (MBSE) platforms
- Learn how to configure MBSE tools for ASPICE compliance
- Master model automation and CI/CD integration
- Apply AI-assisted architecture analysis

## Overview

Model-Based Systems Engineering (MBSE) tools provide comprehensive modeling capabilities for system and software architecture. This section covers leading MBSE platforms, their configuration, and AI integration.


## MBSE Platform Comparison

Note: Platform ratings reflect capabilities as of Q4 2024. Tool capabilities and pricing change frequently; conduct formal evaluations for procurement decisions.

| Feature | EA | Rhapsody | Cameo | Capella | YAKINDU |
|---------|----|----------|-------|---------|---------|
| SysML support | Good | Excellent | Excellent | Excellent | Limited |
| UML support | Excellent | Excellent | Excellent | Fair | Fair |
| AUTOSAR integration | Fair | Excellent | Good | Limited | Good |
| Code generation | Good | Excellent | Good | Fair | Excellent |
| Simulation | Fair | Excellent | Good | Excellent | Excellent |
| Requirements trace | Excellent | Excellent | Excellent | Excellent | Fair |
| Team collaboration | Excellent | Good | Excellent | Good | Fair |
| Cost (lower is better) | Excellent | Limited | Fair | Excellent | Excellent |
| Learning curve (easier is better) | Good | Fair | Fair | Fair | Excellent |

Legend: Excellent = Full support, Good = Strong support, Fair = Basic support, Limited = Minimal support


## Enterprise Architect

### Configuration

```xml
<!-- Enterprise Architect Project Configuration -->
<EAProject>
    <ProjectSettings>
        <Name>BCM_Door_Lock_Architecture</Name>
        <Version>1.0.0</Version>
        <MDG>
            <Technology>SysML 1.6</Technology>
            <Technology>UML 2.5</Technology>
            <Technology>AUTOSAR</Technology>
        </MDG>
    </ProjectSettings>

    <ModelStructure>
        <Package name="01_Requirements" stereotype="requirements">
            <Package name="Stakeholder_Requirements"/>
            <Package name="System_Requirements"/>
            <Package name="Software_Requirements"/>
        </Package>

        <Package name="02_System_Architecture" stereotype="model">
            <Package name="Block_Definition_Diagrams"/>
            <Package name="Internal_Block_Diagrams"/>
            <Package name="Activity_Diagrams"/>
        </Package>

        <Package name="03_Software_Architecture" stereotype="model">
            <Package name="Component_Diagrams"/>
            <Package name="Class_Diagrams"/>
            <Package name="Sequence_Diagrams"/>
            <Package name="State_Machines"/>
        </Package>

        <Package name="04_Test_Architecture" stereotype="testing"/>

        <Package name="05_Traceability" stereotype="analysis"/>
    </ModelStructure>

    <Connectors>
        <ConnectorType name="Satisfies" direction="source-to-target"/>
        <ConnectorType name="Derives" direction="source-to-target"/>
        <ConnectorType name="Realizes" direction="source-to-target"/>
        <ConnectorType name="Verifies" direction="source-to-target"/>
    </Connectors>

    <TaggedValues>
        <Tag name="ASIL" values="QM,ASIL-A,ASIL-B,ASIL-C,ASIL-D"/>
        <Tag name="Status" values="Draft,Review,Approved,Implemented"/>
        <Tag name="Priority" values="Must,Should,Could,Won't"/>
    </TaggedValues>
</EAProject>
```
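
Before importing such a configuration into a project, its structure can be sanity-checked in CI. A minimal sketch using Python's standard library, assuming the XML layout shown above (element and attribute names are taken from the example, not from any EA export format):

```python
# Minimal sketch: sanity-check an EA project configuration file.
# The XML layout mirrors the example above; adjust for real exports.
import xml.etree.ElementTree as ET

CONFIG = """<EAProject>
    <ProjectSettings>
        <Name>BCM_Door_Lock_Architecture</Name>
        <Version>1.0.0</Version>
    </ProjectSettings>
    <ModelStructure>
        <Package name="01_Requirements" stereotype="requirements"/>
        <Package name="02_System_Architecture" stereotype="model"/>
    </ModelStructure>
    <TaggedValues>
        <Tag name="ASIL" values="QM,ASIL-A,ASIL-B,ASIL-C,ASIL-D"/>
    </TaggedValues>
</EAProject>"""


def load_packages(xml_text: str) -> list[str]:
    """Return top-level package names in model order."""
    root = ET.fromstring(xml_text)
    return [p.get("name", "") for p in root.findall("./ModelStructure/Package")]


def load_tag_values(xml_text: str) -> dict[str, list[str]]:
    """Return the allowed values for each tagged-value definition."""
    root = ET.fromstring(xml_text)
    return {t.get("name"): t.get("values", "").split(",")
            for t in root.findall("./TaggedValues/Tag")}


if __name__ == "__main__":
    print(load_packages(CONFIG))
    print(load_tag_values(CONFIG)["ASIL"])
```

The same pattern extends to enforcing numbering conventions on package names or verifying that every ASIL value used in the model is one of the declared tagged values.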

The following diagram shows how AI supports architecture evaluation by analyzing model elements against ASPICE compliance rules, design patterns, and consistency checks.

![AI Support for Architecture Evaluation](../diagrams/Part_III/13.04_AI_Architecture_Eval_1.drawio.svg)

### AI-Assisted Architecture Review

AI can automate several aspects of architecture review that traditionally require extensive manual effort. The following diagram shows how AI review agents analyze architecture models for compliance violations, anti-patterns, and consistency issues, providing reviewers with prioritized findings rather than requiring line-by-line inspection.

![AI Architecture Review Workflow](../diagrams/Part_III/13.04_AI_Arch_Review_1.drawio.svg)
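
The "prioritized findings" step above amounts to a simple triage order before human review. A minimal sketch, where the finding shape and severity scale are assumptions rather than any specific tool's output schema:

```python
# Minimal sketch: order AI review findings so human reviewers see the
# highest-impact items first. The Finding shape is an assumption.
from dataclasses import dataclass

SEVERITY_RANK = {"blocker": 0, "critical": 1, "major": 2, "minor": 3, "info": 4}


@dataclass
class Finding:
    rule_id: str
    severity: str
    title: str


def triage(findings: list[Finding]) -> list[Finding]:
    """Most severe first; sort is stable within a severity level."""
    return sorted(findings, key=lambda f: SEVERITY_RANK.get(f.severity, 99))


findings = [
    Finding("CONS-010", "minor", "Naming convention violation"),
    Finding("CONS-002", "blocker", "Missing interface provider"),
    Finding("COMP-002", "critical", "Requirement not covered by design"),
]
for f in triage(findings):
    print(f.severity, f.rule_id)
```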

---

## CI/CD Integration

### GitHub Actions Workflow

```yaml
# .github/workflows/architecture-review.yml
name: Architecture Review

on:
  pull_request:
    paths:
      - 'architecture/**'
      - 'design/**/*.uml'
      - 'config/autosar/**/*.arxml'

jobs:
  architecture-review:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Set up Python
        uses: actions/setup-python@v5
        with:
          python-version: '3.11'

      - name: Install dependencies
        run: |
          pip install networkx matplotlib lxml pyyaml

      - name: Extract architecture model
        run: |
          python scripts/extract_architecture.py \
            --source architecture/ \
            --output build/architecture_model.json

      - name: Run AI architecture review
        run: |
          python scripts/architecture_review.py \
            --model build/architecture_model.json \
            --thresholds config/architecture_thresholds.yaml \
            --output build/review_report.json

      - name: Generate review report
        run: |
          python scripts/generate_report.py \
            --input build/review_report.json \
            --format markdown \
            --output build/ARCHITECTURE_REVIEW.md

      - name: Check quality gates
        id: quality_gates
        run: |
          python scripts/check_quality_gates.py \
            --report build/review_report.json \
            --gates config/quality_gates.yaml

      - name: Comment on PR
        uses: actions/github-script@v7
        if: github.event_name == 'pull_request'
        with:
          script: |
            const fs = require('fs');
            const report = fs.readFileSync('build/ARCHITECTURE_REVIEW.md', 'utf8');
            const score = JSON.parse(fs.readFileSync('build/review_report.json', 'utf8'))
                              .scores.overall;

            const emoji = score >= 80 ? ':white_check_mark:' :
                         score >= 60 ? ':warning:' : ':x:';

            github.rest.issues.createComment({
              issue_number: context.issue.number,
              owner: context.repo.owner,
              repo: context.repo.repo,
              body: `## ${emoji} Architecture Review Score: ${score}/100\n\n${report}`
            });

      - name: Upload artifacts
        uses: actions/upload-artifact@v4
        with:
          name: architecture-review
          path: |
            build/review_report.json
            build/ARCHITECTURE_REVIEW.md

      - name: Fail if below threshold
        if: steps.quality_gates.outputs.passed == 'false'
        run: |
          echo "Architecture review failed quality gates"
          exit 1
```

The following diagram illustrates the end-to-end AI automation pipeline for architecture review, from model extraction through automated analysis to quality gate enforcement in CI/CD.

![AI Automation in Architecture Review](../diagrams/Part_III/13.04_AI_Architecture_Review_2.drawio.svg)
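
The quality-gate stage of this pipeline can be sketched as follows. The report and gate formats here are assumptions chosen to match the workflow's file names (`review_report.json`, `quality_gates.yaml`); align them with your actual scripts:

```python
# Minimal sketch of a quality-gate check like the one the workflow invokes.
# Report/gate field names are assumptions, not a defined schema.

def check_gates(report: dict, gates: dict) -> tuple[bool, list[str]]:
    """Return (passed, reasons) for a review report against gate thresholds."""
    failures = []

    # Gate 1: overall score must meet the configured minimum.
    score = report.get("scores", {}).get("overall", 0)
    if score < gates.get("min_overall_score", 0):
        failures.append(f"overall score {score} < {gates['min_overall_score']}")

    # Gate 2: finding counts per severity must not exceed their limits.
    sev = report.get("summary", {}).get("by_severity", {})
    for level, limit in gates.get("max_findings", {}).items():
        if sev.get(level, 0) > limit:
            failures.append(
                f"{sev.get(level, 0)} {level} findings exceed limit {limit}")

    return (not failures, failures)


report = {"scores": {"overall": 72},
          "summary": {"by_severity": {"critical": 1, "major": 4}}}
gates = {"min_overall_score": 60, "max_findings": {"critical": 0, "major": 3}}
passed, reasons = check_gates(report, gates)
print(passed, reasons)
```

In the workflow, a script like this would also write `passed=true`/`passed=false` to `GITHUB_OUTPUT` so the final "Fail if below threshold" step can read `steps.quality_gates.outputs.passed`.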

---

## HITL Review Process

| Phase | Human Role | AI Role |
|-------|------------|---------|
| **Model Extraction** | Validate extraction | Parse design artifacts |
| **Metric Analysis** | Interpret results | Calculate all metrics |
| **Pattern Review** | Confirm patterns | Detect and classify |
| **Finding Triage** | Prioritize fixes | Generate findings |
| **Recommendation** | Approve changes | Suggest improvements |
| **Documentation** | Final review | Generate reports |
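
One way to make the human side of this table auditable is to record an explicit disposition for every AI finding, so nothing is acted on without review. A minimal sketch; the field names are illustrative assumptions:

```python
# Minimal sketch: pair each AI finding with an explicit human disposition.
# Field names are assumptions for illustration.
from dataclasses import dataclass, field
from datetime import date


@dataclass
class Disposition:
    finding_id: str
    decision: str          # "accept", "reject", or "defer"
    reviewer: str
    rationale: str
    decided_on: date = field(default_factory=date.today)


def open_findings(finding_ids: list[str],
                  dispositions: list[Disposition]) -> list[str]:
    """Findings still awaiting a human decision."""
    decided = {d.finding_id for d in dispositions}
    return [f for f in finding_ids if f not in decided]


dispositions = [
    Disposition("CONS-002-0001", "accept", "jdoe",
                "Provider genuinely missing; fix scheduled"),
]
print(open_findings(["CONS-002-0001", "CONS-010-0002"], dispositions))
```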

---

## Summary

AI Architecture Review Key Points:

- **Automated Analysis**: Coupling, cohesion, complexity metrics
- **Pattern Detection**: Layered, event-driven, microservices patterns
- **Anti-Pattern Detection**: God component, cyclic dependencies, tight coupling
- **Quality Gates**: Automated pass/fail decisions in CI/CD
- **ASPICE Support**: SWE.2.BP6 evaluation criteria automation
- **Human Oversight**: All findings require human review and approval

---

## Architecture Design Verification

### Introduction

Design verification ensures architecture and detailed design artifacts are consistent, complete, and compliant with requirements and standards. This section covers AI-powered techniques for automated design verification supporting ASPICE SUP.2 (Verification) activities at the design level.

---

## Design Verification Framework

The following diagram presents the design verification framework, showing how architectural design artifacts are verified against requirements through consistency checks, interface analysis, and traceability validation.

![Design Verification Framework](../diagrams/Part_III/13.04_Design_Verification_Framework_3.drawio.svg)

---

## Consistency Verification

### Interface Consistency Checker

```python
#!/usr/bin/env python3
"""
Design Verification System with AI-powered analysis.
Performs consistency, completeness, and compliance checks on design artifacts.
"""

from dataclasses import dataclass, field
from typing import List, Dict, Set, Optional, Any, Tuple
from enum import Enum
from collections import defaultdict
import json
import re
from abc import ABC, abstractmethod


class VerificationSeverity(Enum):
    """Severity levels for verification findings."""
    BLOCKER = "blocker"
    CRITICAL = "critical"
    MAJOR = "major"
    MINOR = "minor"
    INFO = "info"


class VerificationType(Enum):
    """Types of verification checks."""
    CONSISTENCY = "consistency"
    COMPLETENESS = "completeness"
    COMPLIANCE = "compliance"
    CORRECTNESS = "correctness"


@dataclass
class DataType:
    """Represents a data type definition."""
    name: str
    base_type: str
    size_bits: int = 0
    min_value: Optional[float] = None
    max_value: Optional[float] = None
    unit: str = ""
    constraints: List[str] = field(default_factory=list)


@dataclass
class InterfaceElement:
    """Represents an interface data element or operation."""
    name: str
    element_type: str  # data, operation, event
    data_type: str
    direction: str = ""  # in, out, inout
    description: str = ""
    constraints: Dict[str, Any] = field(default_factory=dict)


@dataclass
class InterfaceDefinition:
    """Represents a complete interface definition."""
    id: str
    name: str
    interface_type: str  # sender_receiver, client_server, etc.
    elements: List[InterfaceElement] = field(default_factory=list)
    version: str = "1.0"
    provider: str = ""
    consumers: List[str] = field(default_factory=list)


@dataclass
class DesignElement:
    """Represents a design element (component, module, etc.)."""
    id: str
    name: str
    element_type: str
    provided_interfaces: List[str] = field(default_factory=list)
    required_interfaces: List[str] = field(default_factory=list)
    properties: Dict[str, Any] = field(default_factory=dict)


@dataclass
class VerificationFinding:
    """Represents a verification finding."""
    id: str
    verification_type: VerificationType
    severity: VerificationSeverity
    title: str
    description: str
    location: str
    affected_elements: List[str]
    rule_id: str
    recommendation: str
    evidence: Dict[str, Any] = field(default_factory=dict)


@dataclass
class DesignModel:
    """Complete design model for verification."""
    name: str
    data_types: Dict[str, DataType] = field(default_factory=dict)
    interfaces: Dict[str, InterfaceDefinition] = field(default_factory=dict)
    components: Dict[str, DesignElement] = field(default_factory=dict)
    requirements: Dict[str, str] = field(default_factory=dict)  # id -> text
    trace_links: Dict[str, List[str]] = field(default_factory=dict)  # design -> reqs


class ConsistencyChecker:
    """
    Checks consistency between design elements.
    Ensures interfaces match, types align, and connections are valid.
    """

    def __init__(self, model: DesignModel):
        self.model = model
        self.findings: List[VerificationFinding] = []

    def check_all(self) -> List[VerificationFinding]:
        """Run all consistency checks."""
        self.findings = []

        self._check_interface_consistency()
        self._check_type_consistency()
        self._check_connection_consistency()
        self._check_naming_consistency()

        return self.findings

    def _check_interface_consistency(self) -> None:
        """Check that interface providers and consumers match."""
        # Build usage map
        interface_providers: Dict[str, str] = {}
        interface_consumers: Dict[str, List[str]] = defaultdict(list)

        for comp_id, comp in self.model.components.items():
            for iface_id in comp.provided_interfaces:
                if iface_id in interface_providers:
                    # Duplicate provider
                    self.findings.append(VerificationFinding(
                        id=f"CONS-IFACE-{len(self.findings):04d}",
                        verification_type=VerificationType.CONSISTENCY,
                        severity=VerificationSeverity.CRITICAL,
                        title="Duplicate Interface Provider",
                        description=f"Interface '{iface_id}' is provided by both "
                                   f"'{interface_providers[iface_id]}' and '{comp_id}'",
                        location=f"component:{comp_id}",
                        affected_elements=[comp_id, interface_providers[iface_id]],
                        rule_id="CONS-001",
                        recommendation="Ensure each interface has exactly one provider"
                    ))
                else:
                    interface_providers[iface_id] = comp_id

            for iface_id in comp.required_interfaces:
                interface_consumers[iface_id].append(comp_id)

        # Check for required interfaces without providers
        for iface_id, consumers in interface_consumers.items():
            if iface_id not in interface_providers:
                self.findings.append(VerificationFinding(
                    id=f"CONS-IFACE-{len(self.findings):04d}",
                    verification_type=VerificationType.CONSISTENCY,
                    severity=VerificationSeverity.BLOCKER,
                    title="Missing Interface Provider",
                    description=f"Interface '{iface_id}' is required by "
                               f"{consumers} but has no provider",
                    location=f"interface:{iface_id}",
                    affected_elements=consumers,
                    rule_id="CONS-002",
                    recommendation="Add a component that provides this interface"
                ))

    def _check_type_consistency(self) -> None:
        """Check that data types are consistently used."""
        # Collect all type references
        type_refs: Dict[str, List[Tuple[str, str]]] = defaultdict(list)

        for iface_id, iface in self.model.interfaces.items():
            for elem in iface.elements:
                type_refs[elem.data_type].append((iface_id, elem.name))

        # Check for undefined types
        for type_name, refs in type_refs.items():
            if type_name not in self.model.data_types:
                self.findings.append(VerificationFinding(
                    id=f"CONS-TYPE-{len(self.findings):04d}",
                    verification_type=VerificationType.CONSISTENCY,
                    severity=VerificationSeverity.MAJOR,
                    title="Undefined Data Type",
                    description=f"Data type '{type_name}' is used but not defined",
                    location=f"datatype:{type_name}",
                    affected_elements=[r[0] for r in refs],
                    rule_id="CONS-003",
                    recommendation=f"Define data type '{type_name}' or use existing type",
                    evidence={'references': refs}
                ))

    def _check_connection_consistency(self) -> None:
        """Check that interface connections are type-compatible."""
        # For each required interface, check compatibility with provider
        for comp_id, comp in self.model.components.items():
            for req_iface_id in comp.required_interfaces:
                # Find provider
                provider_comp = None
                for other_id, other_comp in self.model.components.items():
                    if req_iface_id in other_comp.provided_interfaces:
                        provider_comp = other_id
                        break

                if provider_comp:
                    req_iface = self.model.interfaces.get(req_iface_id)
                    if req_iface:
                        # Check element compatibility
                        self._check_interface_compatibility(
                            comp_id, provider_comp, req_iface
                        )

    def _check_interface_compatibility(
        self,
        consumer: str,
        provider: str,
        interface: InterfaceDefinition
    ) -> None:
        """Check interface element compatibility."""
        for elem in interface.elements:
            if elem.direction == "in" and elem.element_type == "data":
                # Check type constraints
                data_type = self.model.data_types.get(elem.data_type)
                if data_type and data_type.constraints:
                    # Record constraint for verification
                    pass  # Extended constraint checking would go here

    def _check_naming_consistency(self) -> None:
        """Check naming conventions consistency."""
        naming_patterns = {
            'interface': r'^IF_[A-Z][a-zA-Z0-9_]*$',
            'component': r'^[A-Z][a-zA-Z0-9_]*$',
            'datatype': r'^[A-Z][a-zA-Z0-9_]*(_t)?$'
        }

        # Check interface names
        for iface_id, iface in self.model.interfaces.items():
            if not re.match(naming_patterns['interface'], iface.name):
                self.findings.append(VerificationFinding(
                    id=f"CONS-NAME-{len(self.findings):04d}",
                    verification_type=VerificationType.CONSISTENCY,
                    severity=VerificationSeverity.MINOR,
                    title="Interface Naming Convention Violation",
                    description=f"Interface '{iface.name}' doesn't follow "
                               f"naming convention (expected: IF_*)",
                    location=f"interface:{iface_id}",
                    affected_elements=[iface_id],
                    rule_id="CONS-010",
                    recommendation="Rename interface to follow IF_* pattern"
                ))


class CompletenessChecker:
    """
    Checks completeness of design artifacts.
    Ensures all required elements are present and documented.
    """

    def __init__(self, model: DesignModel):
        self.model = model
        self.findings: List[VerificationFinding] = []

    def check_all(self) -> List[VerificationFinding]:
        """Run all completeness checks."""
        self.findings = []

        self._check_traceability_completeness()
        self._check_interface_completeness()
        self._check_component_completeness()
        self._check_documentation_completeness()

        return self.findings

    def _check_traceability_completeness(self) -> None:
        """Check that all design elements trace to requirements."""
        # Check components
        for comp_id, comp in self.model.components.items():
            if comp_id not in self.model.trace_links:
                self.findings.append(VerificationFinding(
                    id=f"COMP-TRACE-{len(self.findings):04d}",
                    verification_type=VerificationType.COMPLETENESS,
                    severity=VerificationSeverity.MAJOR,
                    title="Missing Requirement Traceability",
                    description=f"Component '{comp_id}' has no traced requirements",
                    location=f"component:{comp_id}",
                    affected_elements=[comp_id],
                    rule_id="COMP-001",
                    recommendation="Add trace links to source requirements"
                ))

        # Check for orphan requirements (no design coverage)
        traced_reqs: Set[str] = set()
        for links in self.model.trace_links.values():
            traced_reqs.update(links)

        for req_id in self.model.requirements:
            if req_id not in traced_reqs:
                self.findings.append(VerificationFinding(
                    id=f"COMP-TRACE-{len(self.findings):04d}",
                    verification_type=VerificationType.COMPLETENESS,
                    severity=VerificationSeverity.CRITICAL,
                    title="Requirement Not Covered by Design",
                    description=f"Requirement '{req_id}' is not traced to any "
                               f"design element",
                    location=f"requirement:{req_id}",
                    affected_elements=[req_id],
                    rule_id="COMP-002",
                    recommendation="Create design element to address requirement"
                ))

    def _check_interface_completeness(self) -> None:
        """Check that interfaces are fully specified."""
        for iface_id, iface in self.model.interfaces.items():
            # Check for empty interfaces
            if not iface.elements:
                self.findings.append(VerificationFinding(
                    id=f"COMP-IFACE-{len(self.findings):04d}",
                    verification_type=VerificationType.COMPLETENESS,
                    severity=VerificationSeverity.MAJOR,
                    title="Empty Interface Definition",
                    description=f"Interface '{iface_id}' has no elements defined",
                    location=f"interface:{iface_id}",
                    affected_elements=[iface_id],
                    rule_id="COMP-010",
                    recommendation="Add data elements or operations to interface"
                ))

            # Check for missing descriptions
            for elem in iface.elements:
                if not elem.description:
                    self.findings.append(VerificationFinding(
                        id=f"COMP-IFACE-{len(self.findings):04d}",
                        verification_type=VerificationType.COMPLETENESS,
                        severity=VerificationSeverity.MINOR,
                        title="Missing Element Description",
                        description=f"Element '{elem.name}' in interface "
                                   f"'{iface_id}' has no description",
                        location=f"interface:{iface_id}.{elem.name}",
                        affected_elements=[iface_id],
                        rule_id="COMP-011",
                        recommendation="Add description for interface element"
                    ))

    def _check_component_completeness(self) -> None:
        """Check that components are fully specified."""
        for comp_id, comp in self.model.components.items():
            # Check for isolated components
            if not comp.provided_interfaces and not comp.required_interfaces:
                self.findings.append(VerificationFinding(
                    id=f"COMP-COMP-{len(self.findings):04d}",
                    verification_type=VerificationType.COMPLETENESS,
                    severity=VerificationSeverity.MAJOR,
                    title="Isolated Component",
                    description=f"Component '{comp_id}' has no interfaces",
                    location=f"component:{comp_id}",
                    affected_elements=[comp_id],
                    rule_id="COMP-020",
                    recommendation="Add required or provided interfaces"
                ))

    def _check_documentation_completeness(self) -> None:
        """Check that design documentation is complete."""
        required_properties = ['description', 'author', 'version']

        for comp_id, comp in self.model.components.items():
            for prop in required_properties:
                if prop not in comp.properties or not comp.properties[prop]:
                    self.findings.append(VerificationFinding(
                        id=f"COMP-DOC-{len(self.findings):04d}",
                        verification_type=VerificationType.COMPLETENESS,
                        severity=VerificationSeverity.MINOR,
                        title=f"Missing {prop.title()} Documentation",
                        description=f"Component '{comp_id}' is missing "
                                   f"'{prop}' property",
                        location=f"component:{comp_id}",
                        affected_elements=[comp_id],
                        rule_id="COMP-030",
                        recommendation=f"Add '{prop}' to component properties"
                    ))


class ComplianceChecker:
    """
    Checks compliance with design standards and guidelines.
    Supports MISRA, AUTOSAR, custom rules.
    """

    def __init__(self, model: DesignModel):
        self.model = model
        self.findings: List[VerificationFinding] = []
        self.rules: Dict[str, Dict[str, Any]] = {}
        self._load_default_rules()

    def _load_default_rules(self) -> None:
        """Load default compliance rules."""
        self.rules = {
            # Interface rules
            'AUTOSAR-IF-001': {
                'description': 'Interface must have version',
                'category': 'interface',
                'check': lambda iface: bool(iface.version)
            },
            'AUTOSAR-IF-002': {
                'description': 'Sender-Receiver interface data elements must have defined type',
                'category': 'interface',
                # Checked per interface: every element's type must be defined.
                # (A per-element lambda would not match the single-argument
                # call signature used by _check_interface_rule.)
                'check': lambda iface: all(
                    elem.data_type in self.model.data_types
                    for elem in iface.elements
                )
            },

            # Data type rules
            'MISRA-DT-001': {
                'description': 'Integer types must have defined range',
                'category': 'datatype',
                # Parenthesized so the conditional applies to the whole range
                # check; without parentheses, `and` binds tighter than
                # `if/else` and non-integer types would still require min_value.
                'check': lambda dt: (dt.min_value is not None and
                                     dt.max_value is not None)
                                    if dt.base_type in ['int', 'uint'] else True
            },

            # Component rules
            'DESIGN-COMP-001': {
                'description': 'Component must provide at least one interface',
                'category': 'component',
                'check': lambda comp: len(comp.provided_interfaces) > 0
            },

            # Safety rules
            'ISO26262-ASIL-001': {
                'description': 'Safety-critical components must have ASIL rating',
                'category': 'safety',
                'check': lambda comp: 'asil' in comp.properties
                                     if comp.properties.get('safety_critical') else True
            }
        }

    def check_all(self, rule_set: Optional[List[str]] = None) -> List[VerificationFinding]:
        """
        Run compliance checks.

        Args:
            rule_set: Optional list of rule IDs to check, None for all

        Returns:
            List of compliance findings
        """
        self.findings = []

        rules_to_check = rule_set or list(self.rules.keys())

        for rule_id in rules_to_check:
            if rule_id in self.rules:
                self._check_rule(rule_id)

        return self.findings

    def _check_rule(self, rule_id: str) -> None:
        """Check a single compliance rule."""
        rule = self.rules[rule_id]
        category = rule['category']

        if category == 'interface':
            self._check_interface_rule(rule_id, rule)
        elif category == 'datatype':
            self._check_datatype_rule(rule_id, rule)
        elif category == 'component':
            self._check_component_rule(rule_id, rule)
        elif category == 'safety':
            self._check_safety_rule(rule_id, rule)

    def _check_interface_rule(self, rule_id: str, rule: Dict) -> None:
        """Check interface compliance rule."""
        for iface_id, iface in self.model.interfaces.items():
            try:
                if not rule['check'](iface):
                    self.findings.append(VerificationFinding(
                        id=f"CMPL-{rule_id}-{len(self.findings):04d}",
                        verification_type=VerificationType.COMPLIANCE,
                        severity=VerificationSeverity.MAJOR,
                        title=f"Compliance Violation: {rule_id}",
                        description=rule['description'],
                        location=f"interface:{iface_id}",
                        affected_elements=[iface_id],
                        rule_id=rule_id,
                        recommendation=f"Fix compliance issue per {rule_id}"
                    ))
            except Exception:
                pass

    def _check_datatype_rule(self, rule_id: str, rule: Dict) -> None:
        """Check data type compliance rule."""
        for dt_id, dt in self.model.data_types.items():
            try:
                if not rule['check'](dt):
                    self.findings.append(VerificationFinding(
                        id=f"CMPL-{rule_id}-{len(self.findings):04d}",
                        verification_type=VerificationType.COMPLIANCE,
                        severity=VerificationSeverity.MAJOR,
                        title=f"Compliance Violation: {rule_id}",
                        description=rule['description'],
                        location=f"datatype:{dt_id}",
                        affected_elements=[dt_id],
                        rule_id=rule_id,
                        recommendation=f"Fix compliance issue per {rule_id}"
                    ))
            except Exception:
                pass

    def _check_component_rule(self, rule_id: str, rule: Dict) -> None:
        """Check component compliance rule."""
        for comp_id, comp in self.model.components.items():
            try:
                if not rule['check'](comp):
                    self.findings.append(VerificationFinding(
                        id=f"CMPL-{rule_id}-{len(self.findings):04d}",
                        verification_type=VerificationType.COMPLIANCE,
                        severity=VerificationSeverity.MAJOR,
                        title=f"Compliance Violation: {rule_id}",
                        description=rule['description'],
                        location=f"component:{comp_id}",
                        affected_elements=[comp_id],
                        rule_id=rule_id,
                        recommendation=f"Fix compliance issue per {rule_id}"
                    ))
            except Exception:
                pass

    def _check_safety_rule(self, rule_id: str, rule: Dict) -> None:
        """Check safety compliance rule."""
        for comp_id, comp in self.model.components.items():
            try:
                if not rule['check'](comp):
                    self.findings.append(VerificationFinding(
                        id=f"CMPL-{rule_id}-{len(self.findings):04d}",
                        verification_type=VerificationType.COMPLIANCE,
                        severity=VerificationSeverity.CRITICAL,
                        title=f"Safety Compliance Violation: {rule_id}",
                        description=rule['description'],
                        location=f"component:{comp_id}",
                        affected_elements=[comp_id],
                        rule_id=rule_id,
                        recommendation=f"Address safety requirement per {rule_id}"
                    ))
            except Exception:
                pass


class DesignVerificationSystem:
    """
    Integrated design verification system.
    Orchestrates all verification activities.
    """

    def __init__(self, model: DesignModel):
        self.model = model
        self.consistency_checker = ConsistencyChecker(model)
        self.completeness_checker = CompletenessChecker(model)
        self.compliance_checker = ComplianceChecker(model)

    def verify(
        self,
        checks: Optional[List[VerificationType]] = None,
        compliance_rules: Optional[List[str]] = None
    ) -> Dict[str, Any]:
        """
        Run design verification.

        Args:
            checks: Types of checks to run, None for all
            compliance_rules: Specific compliance rules to check

        Returns:
            Verification report dictionary
        """
        all_findings: List[VerificationFinding] = []

        checks_to_run = checks or list(VerificationType)

        if VerificationType.CONSISTENCY in checks_to_run:
            all_findings.extend(self.consistency_checker.check_all())

        if VerificationType.COMPLETENESS in checks_to_run:
            all_findings.extend(self.completeness_checker.check_all())

        if VerificationType.COMPLIANCE in checks_to_run:
            all_findings.extend(
                self.compliance_checker.check_all(compliance_rules)
            )

        # Generate report
        return self._generate_report(all_findings)

    def _generate_report(
        self,
        findings: List[VerificationFinding]
    ) -> Dict[str, Any]:
        """Generate verification report."""
        # Count by severity
        severity_counts = defaultdict(int)
        for f in findings:
            severity_counts[f.severity.value] += 1

        # Count by type
        type_counts = defaultdict(int)
        for f in findings:
            type_counts[f.verification_type.value] += 1

        # Determine overall status
        if (severity_counts.get('blocker', 0) > 0
                or severity_counts.get('critical', 0) > 0):
            status = 'FAILED'
        elif severity_counts.get('major', 0) > 3:
            status = 'FAILED'
        elif severity_counts.get('major', 0) > 0:
            status = 'WARNING'
        else:
            status = 'PASSED'

        return {
            'model_name': self.model.name,
            'status': status,
            'summary': {
                'total_findings': len(findings),
                'by_severity': dict(severity_counts),
                'by_type': dict(type_counts)
            },
            'findings': [self._finding_to_dict(f) for f in findings],
            'metrics': self._calculate_metrics(findings)
        }

    def _finding_to_dict(self, f: VerificationFinding) -> Dict:
        """Convert finding to dictionary."""
        return {
            'id': f.id,
            'type': f.verification_type.value,
            'severity': f.severity.value,
            'title': f.title,
            'description': f.description,
            'location': f.location,
            'affected_elements': f.affected_elements,
            'rule_id': f.rule_id,
            'recommendation': f.recommendation,
            'evidence': f.evidence
        }

    def _calculate_metrics(self, findings: List[VerificationFinding]) -> Dict:
        """Calculate verification metrics."""
        total_components = len(self.model.components)
        total_interfaces = len(self.model.interfaces)
        total_requirements = len(self.model.requirements)

        # Calculate coverage
        covered_reqs: Set[str] = set()
        for links in self.model.trace_links.values():
            covered_reqs.update(links)

        req_coverage = len(covered_reqs) / max(total_requirements, 1) * 100

        # Calculate verification quality score
        blocker_weight = 10
        critical_weight = 5
        major_weight = 2
        minor_weight = 0.5

        weighted_issues = sum([
            sum(1 for f in findings if f.severity == VerificationSeverity.BLOCKER) * blocker_weight,
            sum(1 for f in findings if f.severity == VerificationSeverity.CRITICAL) * critical_weight,
            sum(1 for f in findings if f.severity == VerificationSeverity.MAJOR) * major_weight,
            sum(1 for f in findings if f.severity == VerificationSeverity.MINOR) * minor_weight
        ])

        # Score from 0-100, where 100 is perfect
        max_expected_issues = (total_components + total_interfaces) * 0.5
        quality_score = max(0, 100 - (weighted_issues / max(max_expected_issues, 1) * 100))

        return {
            'total_components': total_components,
            'total_interfaces': total_interfaces,
            'total_requirements': total_requirements,
            'requirements_coverage': round(req_coverage, 1),
            'quality_score': round(quality_score, 1)
        }


# Example: BCM Door Lock Design Verification
def create_door_lock_design_model() -> DesignModel:
    """Create sample BCM door lock design model for verification."""
    model = DesignModel(name="BCM_DoorLock_Design")

    # Data types
    model.data_types = {
        'LockCommand_t': DataType(
            name='LockCommand_t',
            base_type='uint',
            size_bits=8,
            min_value=0,
            max_value=2
        ),
        'LockStatus_t': DataType(
            name='LockStatus_t',
            base_type='uint',
            size_bits=8,
            min_value=0,
            max_value=255
        ),
        'DoorId_t': DataType(
            name='DoorId_t',
            base_type='uint',
            size_bits=8,
            min_value=0,
            max_value=3
        )
    }

    # Interfaces
    model.interfaces = {
        'IF_LockCommand': InterfaceDefinition(
            id='IF_LockCommand',
            name='IF_LockCommand',
            interface_type='sender_receiver',
            version='1.0',
            elements=[
                InterfaceElement(
                    name='Command',
                    element_type='data',
                    data_type='LockCommand_t',
                    direction='in',
                    description='Lock command from user input'
                ),
                InterfaceElement(
                    name='DoorId',
                    element_type='data',
                    data_type='DoorId_t',
                    direction='in',
                    description='Target door identifier'
                )
            ],
            provider='CentralLockService'
        ),
        'IF_LockStatus': InterfaceDefinition(
            id='IF_LockStatus',
            name='IF_LockStatus',
            interface_type='sender_receiver',
            version='1.0',
            elements=[
                InterfaceElement(
                    name='Status',
                    element_type='data',
                    data_type='LockStatus_t',
                    direction='out',
                    description='Current lock status'
                )
            ],
            provider='DoorLockController'
        ),
        'IF_MotorControl': InterfaceDefinition(
            id='IF_MotorControl',
            name='IF_MotorControl',
            interface_type='client_server',
            version='1.0',
            elements=[],  # Intentionally empty to trigger finding
            provider='MotorDriver'
        )
    }

    # Components
    model.components = {
        'DoorLockController': DesignElement(
            id='DoorLockController',
            name='Door Lock Controller',
            element_type='swc',
            provided_interfaces=['IF_LockStatus'],
            required_interfaces=['IF_LockCommand', 'IF_DiagService'],  # IF_DiagService is undefined; triggers a consistency finding
            properties={
                'description': 'Main door lock control logic',
                'author': 'Antonio Stepien',
                'version': '1.0',
                'safety_critical': True,
                'asil': 'QM'  # QM on a safety-critical component; seeds a compliance finding
            }
        ),
        'CentralLockService': DesignElement(
            id='CentralLockService',
            name='Central Lock Service',
            element_type='service',
            provided_interfaces=['IF_LockCommand'],
            required_interfaces=['IF_MotorControl'],
            properties={
                'description': 'Central locking coordination service',
                'author': 'Antonio Stepien',
                'version': '1.0'
            }
        ),
        'MotorDriver': DesignElement(
            id='MotorDriver',
            name='Motor Driver',
            element_type='driver',
            provided_interfaces=['IF_MotorControl'],
            required_interfaces=[],
            properties={
                'description': 'Lock motor hardware driver',
                # Missing author and version
            }
        )
    }

    # Requirements
    model.requirements = {
        'REQ-LOCK-001': 'System shall lock all doors within 500ms of lock command',
        'REQ-LOCK-002': 'System shall unlock specific door within 200ms of unlock command',
        'REQ-LOCK-003': 'System shall report lock status within 100ms of status change',
        'REQ-LOCK-004': 'System shall support child lock function'  # No design coverage
    }

    # Trace links
    model.trace_links = {
        'DoorLockController': ['REQ-LOCK-001', 'REQ-LOCK-002', 'REQ-LOCK-003'],
        'CentralLockService': ['REQ-LOCK-001', 'REQ-LOCK-002'],
        'MotorDriver': ['REQ-LOCK-001']
    }

    return model


# Example usage
if __name__ == "__main__":
    # Create sample design model
    model = create_door_lock_design_model()

    # Create verification system
    verifier = DesignVerificationSystem(model)

    # Run verification
    report = verifier.verify()

    # Print report
    print(f"Design Verification Report: {report['model_name']}")
    print(f"Status: {report['status']}")
    print("\nSummary:")
    print(f"  Total findings: {report['summary']['total_findings']}")
    print(f"  By severity: {report['summary']['by_severity']}")
    print(f"  By type: {report['summary']['by_type']}")
    print("\nMetrics:")
    print(f"  Requirements coverage: {report['metrics']['requirements_coverage']}%")
    print(f"  Quality score: {report['metrics']['quality_score']}")

    print("\nFindings:")
    for finding in report['findings']:
        print(f"  [{finding['severity'].upper()}] {finding['id']}: {finding['title']}")
        print(f"    Location: {finding['location']}")
        print(f"    {finding['description']}")
        print()
```

The following diagram shows how AI agents support the design verification process by automating consistency checks, detecting design rule violations, and generating verification evidence.

![AI Support for Verification](../diagrams/Part_III/13.04_AI_Verification_Support_4.drawio.svg)
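
The severity-weighted quality score computed in `_calculate_metrics` can be distilled into a standalone helper. The sketch below mirrors the weights and the half-an-issue-per-element budget from the code above; the `Severity` enum and function name are local to this example:

```python
from enum import Enum

class Severity(Enum):
    BLOCKER = 'blocker'
    CRITICAL = 'critical'
    MAJOR = 'major'
    MINOR = 'minor'

# Same weights as _calculate_metrics above
WEIGHTS = {Severity.BLOCKER: 10, Severity.CRITICAL: 5,
           Severity.MAJOR: 2, Severity.MINOR: 0.5}

def quality_score(finding_severities, n_components, n_interfaces):
    """0-100 score; 100 means no weighted findings against the issue budget."""
    weighted = sum(WEIGHTS[s] for s in finding_severities)
    # Budget: half an expected issue per design element, floor of 1
    budget = max((n_components + n_interfaces) * 0.5, 1)
    return max(0.0, 100 - weighted / budget * 100)

print(quality_score([], 3, 3))                    # clean model scores 100
print(quality_score([Severity.MAJOR], 3, 3))      # one major finding against a budget of 3
```

Because the score is normalized by model size, a single major finding penalizes a three-component model far more than a thirty-component one, which keeps the gate meaningful across projects of different scale.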

---

## CI/CD Integration

### GitHub Actions Workflow

```yaml
# .github/workflows/design-verification.yml
name: Design Verification

on:
  push:
    paths:
      - 'design/**'
      - 'architecture/**'
  pull_request:
    paths:
      - 'design/**'
      - 'architecture/**'

jobs:
  design-verification:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Set up Python
        uses: actions/setup-python@v5
        with:
          python-version: '3.11'

      - name: Install dependencies
        run: |
          pip install lxml pyyaml jsonschema

      - name: Extract design model
        run: |
          python scripts/extract_design_model.py \
            --source design/ \
            --output build/design_model.json

      - name: Run consistency checks
        id: consistency
        run: |
          python scripts/design_verification.py \
            --model build/design_model.json \
            --checks consistency \
            --output build/consistency_report.json

      - name: Run completeness checks
        id: completeness
        run: |
          python scripts/design_verification.py \
            --model build/design_model.json \
            --checks completeness \
            --output build/completeness_report.json

      - name: Run compliance checks
        id: compliance
        run: |
          python scripts/design_verification.py \
            --model build/design_model.json \
            --checks compliance \
            --rules config/compliance_rules.yaml \
            --output build/compliance_report.json

      - name: Merge reports
        run: |
          python scripts/merge_verification_reports.py \
            --reports build/*_report.json \
            --output build/verification_report.json

      - name: Generate markdown report
        run: |
          python scripts/generate_verification_report.py \
            --input build/verification_report.json \
            --format markdown \
            --output build/DESIGN_VERIFICATION.md

      - name: Check quality gates
        id: quality_gates
        run: |
          python scripts/check_quality_gates.py \
            --report build/verification_report.json \
            --gates config/design_quality_gates.yaml

      - name: Comment on PR
        if: github.event_name == 'pull_request'
        uses: actions/github-script@v7
        with:
          script: |
            const fs = require('fs');
            const report = JSON.parse(fs.readFileSync('build/verification_report.json', 'utf8'));

            let emoji = ':white_check_mark:';
            if (report.status === 'FAILED') emoji = ':x:';
            else if (report.status === 'WARNING') emoji = ':warning:';

            let body = `## ${emoji} Design Verification: ${report.status}\n\n`;
            body += `| Metric | Value |\n|--------|-------|\n`;
            body += `| Total Findings | ${report.summary.total_findings} |\n`;
            body += `| Requirements Coverage | ${report.metrics.requirements_coverage}% |\n`;
            body += `| Quality Score | ${report.metrics.quality_score} |\n`;

            if (report.summary.by_severity.blocker > 0) {
              body += `\n:rotating_light: **${report.summary.by_severity.blocker} Blocker(s) found!**\n`;
            }

            github.rest.issues.createComment({
# The workflow's inline script runs inside github-script's async wrapper;
# awaiting the REST call ensures the comment is posted before the step ends.
              issue_number: context.issue.number,
              owner: context.repo.owner,
              repo: context.repo.repo,
              body: body
            });

      - name: Upload verification reports
        uses: actions/upload-artifact@v4
        with:
          name: design-verification-reports
          path: |
            build/verification_report.json
            build/DESIGN_VERIFICATION.md

      - name: Fail on quality gate violation
        if: steps.quality_gates.outputs.passed == 'false'
        run: |
          echo "Design verification failed quality gates"
          exit 1
```

The following diagram illustrates the CI/CD-integrated design verification pipeline, showing how AI-driven checks are automated as part of the continuous integration workflow with quality gate enforcement.

![AI Automation in Design Verification](../diagrams/Part_III/13.04_AI_Design_Verification_5.drawio.svg)
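
The quality-gate step above reads `config/design_quality_gates.yaml`. The exact schema is defined by the project's `check_quality_gates.py` script; the field names below are illustrative assumptions, not a fixed format:

```yaml
# config/design_quality_gates.yaml - illustrative only; match your gate checker's schema
gates:
  - name: no_blockers
    metric: summary.by_severity.blocker
    max: 0
  - name: no_criticals
    metric: summary.by_severity.critical
    max: 0
  - name: requirements_coverage
    metric: metrics.requirements_coverage
    min: 90
  - name: quality_score
    metric: metrics.quality_score
    min: 70
```

Keeping gate thresholds in a versioned config file, rather than hard-coding them in the script, lets the project tighten gates incrementally via reviewed pull requests.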

---

## HITL Verification Process

| Phase | Human Role | AI Role |
|-------|------------|---------|
| **Criteria Definition** | Define pass/fail criteria | Suggest based on standards |
| **Automated Checks** | Configure rules | Execute verification |
| **Finding Review** | Validate findings | Detect and classify |
| **Risk Assessment** | Determine impact | Calculate risk scores |
| **Remediation Planning** | Approve fixes | Suggest solutions |
| **Verification Closure** | Sign-off | Generate evidence |
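
The role split above can be enforced at the tooling level by triaging AI-detected findings: high-severity items are routed for individual human sign-off, while the rest are queued for batch review. A minimal sketch, where the finding dicts follow the report format from the verification system and the routing thresholds are an assumption:

```python
def triage_findings(findings, human_review_severities=('blocker', 'critical')):
    """Route findings per the HITL process: high-severity items require
    individual human sign-off; the remainder go to batch review with
    AI-suggested dispositions."""
    needs_signoff = [f for f in findings
                     if f['severity'] in human_review_severities]
    batch_review = [f for f in findings
                    if f['severity'] not in human_review_severities]
    return needs_signoff, batch_review

findings = [
    {'id': 'CONS-0001', 'severity': 'critical', 'title': 'Undefined interface reference'},
    {'id': 'COMP-0002', 'severity': 'minor', 'title': 'Missing component description'},
]
signoff, batch = triage_findings(findings)
```

The thresholds should come from the same quality-gate configuration used in CI so that automated gating and human review apply one consistent policy.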

---

## Summary

MBSE and Architecture Tools:

- **MBSE Platforms**: Enterprise Architect, IBM Rhapsody, Cameo, and Capella provide comprehensive SysML/UML modeling
- **Model Automation**: Python/Java APIs for CI/CD integration
- **AI Architecture Review**: Automated pattern detection and anti-pattern identification
- **Design Verification**: Consistency checking (interfaces, types), completeness checking (traceability), compliance checking (AUTOSAR, MISRA)
- **CI/CD Integration**: Automated verification in Git workflows
- **Human Oversight**: All verification results and findings require human review

---

**Navigation**: [← 13.03 Traceability Automation](13.03_Traceability_Automation.md) | [Contents](../00_Front_Matter/00.06_Table_of_Contents.md) | [14.00 Implementation Tools →](14.00_Implementation_Tools.md)