7.3: MAN.6 Measurement


Process Definition

Purpose

MAN.6 Purpose: To collect, analyze, and report data relating to products and processes to support effective management and demonstrate quality.

Outcomes

| Outcome | Description |
|---------|-------------|
| O1 | Information needs identified |
| O2 | Measures established |
| O3 | Data collected |
| O4 | Data analyzed |
| O5 | Results communicated |

Base Practices with AI Integration

| BP | Base Practice | AI Level | AI Application |
|----|---------------|----------|----------------|
| BP1 | Identify information needs | L1 | Need analysis |
| BP2 | Define measures | L1 | Metric recommendations |
| BP3 | Establish collection procedures | L2 | Automated collection |
| BP4 | Collect data | L3 | Full automation |
| BP5 | Analyze data | L2-L3 | AI analytics |
| BP6 | Communicate results | L2-L3 | Dashboard generation |

Measurement Framework

GQM Approach (Goal-Question-Metric)

The diagram below illustrates the Goal-Question-Metric framework, showing how project goals decompose into measurable questions and their corresponding metrics for data-driven decision making.

[Diagram: GQM Goals]
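As an illustration, one GQM decomposition for a product-quality goal can be sketched in the same YAML style used for the metric definitions below. The goal and question wording here is invented for illustration; only the metric names correspond to metrics defined in this section.

```yaml
# Illustrative GQM decomposition (goal and questions are examples only)
gqm_example:
  goal: "Improve delivered product quality"
  questions:
    - question: "How thoroughly is the code tested?"
      metrics: [code_coverage, branch_coverage]
    - question: "How many defects reach the customer?"
      metrics: [defect_density, defect_escape_rate]
    - question: "How quickly are defects resolved?"
      metrics: [mttr]
```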


Metric Definitions

Product Metrics

Note: Tool references (gcov, Polarion, etc.) are examples; adapt to your infrastructure.

# Product Quality Metrics
product_metrics:
  # Code Quality
  code_coverage:
    name: "Code Coverage (Statement)"
    description: "Percentage of code statements executed by tests"
    formula: "(Executed statements / Total statements) × 100"
    target: "> 80%"
    threshold_warning: "< 80%"
    threshold_critical: "< 60%"
    collection: automated
    frequency: per_build
    tool: "gcov/lcov"

  branch_coverage:
    name: "Code Coverage (Branch)"
    description: "Percentage of branches executed by tests"
    formula: "(Executed branches / Total branches) × 100"
    target: "> 70%"
    threshold_warning: "< 70%"
    threshold_critical: "< 50%"
    collection: automated
    frequency: per_build
    tool: "gcov/lcov"

  mcdc_coverage:
    name: "MC/DC Coverage"
    description: "Modified Condition/Decision Coverage"
    formula: "Per ISO 26262 / DO-178C definition"
    target: "100% (ASIL-D)"
    applicability: "Safety-critical code only"
    collection: automated
    frequency: per_release
    tool: "Testwell CTC++"

  # Defect Metrics
  defect_density:
    name: "Defect Density"
    description: "Number of defects per thousand lines of code"
    formula: "(Total defects / KLOC)"
    target: "< 1.0"
    threshold_warning: "> 1.0"
    threshold_critical: "> 2.0"
    collection: automated
    frequency: weekly
    tool: "Jira + SonarQube"

  defect_escape_rate:
    name: "Defect Escape Rate"
    description: "Defects found after release vs total defects"
    formula: "(Post-release defects / Total defects) × 100"
    target: "< 5%"
    threshold_warning: "> 5%"
    threshold_critical: "> 10%"
    collection: manual
    frequency: per_release

  mttr:
    name: "Mean Time to Resolution"
    description: "Average time to resolve defects"
    formula: "Σ(resolution time) / count(defects)"
    target: "< 5 days (high), < 10 days (medium)"
    collection: automated
    frequency: weekly
    tool: "Jira"

  # Requirements Metrics
  requirements_coverage:
    name: "Requirements Coverage"
    description: "Requirements traced to tests"
    formula: "(Requirements with tests / Total requirements) × 100"
    target: "100%"
    threshold_warning: "< 100%"
    threshold_critical: "< 90%"
    collection: automated
    frequency: weekly
    tool: "Polarion"

  traceability_completeness:
    name: "Traceability Completeness"
    description: "Bidirectional traceability coverage"
    formula: "(Traced items / Total items) × 100"
    target: "100%"
    threshold_warning: "< 100%"
    collection: automated
    frequency: weekly
    tool: "Polarion"
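To make the defect-density definition concrete, the following minimal calculation applies the formula and the warning/critical bands from the YAML above. The defect counts and KLOC figures are invented for illustration.

```python
# Minimal defect-density check mirroring the YAML definition above.
# The input figures are invented for illustration.

def defect_density(total_defects: int, kloc: float) -> float:
    """Defects per thousand lines of code: total defects / KLOC."""
    return total_defects / kloc

def classify(value: float, warning: float, critical: float) -> str:
    """Lower is better for defect density: exceed a band, raise the status."""
    if value > critical:
        return "critical"
    if value > warning:
        return "warning"
    return "ok"

density = defect_density(total_defects=18, kloc=12.5)
status = classify(density, warning=1.0, critical=2.0)
print(f"defect_density={density:.2f} ({status})")  # 1.44 -> warning
```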

Process Metrics

# Process Performance Metrics
process_metrics:
  # Schedule Metrics
  spi:
    name: "Schedule Performance Index"
    description: "Earned value vs planned value"
    formula: "EV / PV"
    interpretation: "> 1.0 ahead, < 1.0 behind"
    target: "> 0.95"
    threshold_warning: "< 0.95"
    threshold_critical: "< 0.85"
    collection: semi_automated
    frequency: weekly

  cpi:
    name: "Cost Performance Index"
    description: "Earned value vs actual cost"
    formula: "EV / AC"
    interpretation: "> 1.0 under budget, < 1.0 over budget"
    target: "> 0.95"
    threshold_warning: "< 0.95"
    threshold_critical: "< 0.85"
    collection: semi_automated
    frequency: weekly

  velocity:
    name: "Team Velocity"
    description: "Story points completed per sprint"
    formula: "Σ(completed story points)"
    target: "Stable (±10% variance)"
    collection: automated
    frequency: per_sprint
    tool: "Jira"

  # Review Metrics
  review_coverage:
    name: "Review Coverage"
    description: "Work products reviewed vs total"
    formula: "(Reviewed WPs / Total WPs) × 100"
    target: "100%"
    collection: automated
    frequency: weekly
    tool: "Crucible/GitLab"

  finding_density:
    name: "Review Finding Density"
    description: "Findings per unit reviewed"
    formula: "(Total findings / Units reviewed)"
    target: "0.1-0.3 per page (requirements)"
    collection: automated
    frequency: per_review

  # Build Metrics
  build_success_rate:
    name: "Build Success Rate"
    description: "Successful builds vs total builds"
    formula: "(Successful builds / Total builds) × 100"
    target: "> 95%"
    threshold_warning: "< 95%"
    threshold_critical: "< 80%"
    collection: automated
    frequency: daily
    tool: "Jenkins/GitLab CI"

  pipeline_duration:
    name: "Pipeline Duration"
    description: "Average CI/CD pipeline execution time"
    formula: "Avg(pipeline completion time)"
    target: "< 30 minutes"
    collection: automated
    frequency: per_build
    tool: "Jenkins/GitLab CI"
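The SPI and CPI formulas can be checked with a small worked example. All monetary figures below are invented; EV, PV, and AC stand for earned value, planned value, and actual cost as in the definitions above.

```python
# Earned-value indices per the SPI/CPI definitions above (figures invented).

def spi(earned_value: float, planned_value: float) -> float:
    """Schedule Performance Index: EV / PV; > 1.0 means ahead of schedule."""
    return earned_value / planned_value

def cpi(earned_value: float, actual_cost: float) -> float:
    """Cost Performance Index: EV / AC; > 1.0 means under budget."""
    return earned_value / actual_cost

ev, pv, ac = 90_000.0, 100_000.0, 95_000.0
print(f"SPI = {spi(ev, pv):.2f}")  # 0.90 -> behind schedule (warning band, < 0.95)
print(f"CPI = {cpi(ev, ac):.2f}")  # 0.95 -> roughly on budget
```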

Automated Data Collection

Metrics Collection Pipeline

Note: Script paths (scripts/*.py) require project setup; adapt for your CI/CD infrastructure.

# GitLab CI Metrics Collection
stages:
  - build
  - test
  - analyze
  - report

variables:
  METRICS_DB: "influxdb://metrics.example.com:8086"

collect_build_metrics:
  stage: build
  script:
    - mkdir -p metrics
    - echo "build_start=$(date +%s)" >> metrics/build_metrics.txt

    # Build project
    - cmake -B build -DCMAKE_BUILD_TYPE=Release
    - cmake --build build

    - echo "build_end=$(date +%s)" >> metrics/build_metrics.txt

    # Count lines of code
    - cloc src/ --json > metrics/loc_metrics.json

    # Record build status
    - echo "build_status=success" >> metrics/build_metrics.txt
  artifacts:
    paths:
      - metrics/

collect_test_metrics:
  stage: test
  script:
    # Run tests with coverage instrumentation enabled
    - cd build && ctest --output-junit test_results.xml && cd ..
    - gcovr --xml -o build/coverage.xml

    # Extract metrics (kept on one line; "\" continuations do not survive YAML line folding)
    - python scripts/extract_test_metrics.py build/test_results.xml build/coverage.xml > metrics/test_metrics.json
  artifacts:
    paths:
      - metrics/
      - build/coverage.xml
    reports:
      junit: build/test_results.xml
      coverage_report:
        coverage_format: cobertura
        path: build/coverage.xml

collect_code_quality_metrics:
  stage: analyze
  script:
    # SonarQube analysis
    - sonar-scanner

    # Extract quality metrics from SonarQube
    - python scripts/extract_sonar_metrics.py > metrics/quality_metrics.json

    # Static analysis metrics (cppcheck writes its XML report to stderr)
    - cppcheck --enable=all --xml src/ 2> cppcheck_report.xml
    - python scripts/parse_cppcheck.py cppcheck_report.xml > metrics/static_analysis.json
  artifacts:
    paths:
      - metrics/

push_metrics:
  stage: report
  script:
    # Aggregate all metrics
    - python scripts/aggregate_metrics.py metrics/ > aggregated_metrics.json

    # Push to time-series database
    - python scripts/push_to_influxdb.py aggregated_metrics.json

    # Generate dashboard data
    - python scripts/generate_dashboard.py aggregated_metrics.json

    # Send alerts if thresholds exceeded
    - python scripts/check_thresholds.py aggregated_metrics.json
  artifacts:
    paths:
      - aggregated_metrics.json
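The `check_thresholds.py` step referenced above is project-specific. A minimal sketch of what it might do is shown below; it assumes the aggregated report contains a `metrics` list whose entries carry a `threshold_status` field, matching the JSON shape produced by the extraction script in this section. The file layout and exit-code convention are assumptions, not a prescribed interface.

```python
#!/usr/bin/env python3
"""Sketch of a threshold-alert step (hypothetical helper for the pipeline above)."""

import json
import sys

def find_violations(report: dict) -> list:
    """Return metrics whose threshold_status is warning or critical."""
    return [m for m in report.get("metrics", [])
            if m.get("threshold_status") in ("warning", "critical")]

if __name__ == "__main__" and len(sys.argv) > 1:
    with open(sys.argv[1]) as f:
        report = json.load(f)
    violations = find_violations(report)
    for m in violations:
        print(f"{m['threshold_status'].upper()}: {m['name']} = {m['value']} {m.get('unit', '')}")
    # Non-zero exit on critical violations so the CI job fails visibly
    sys.exit(1 if any(m["threshold_status"] == "critical" for m in violations) else 0)
```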

Metrics Extraction Script

#!/usr/bin/env python3
"""
Metrics extraction and aggregation for ASPICE MAN.6.
"""

import json
import xml.etree.ElementTree as ET
from dataclasses import dataclass, asdict
from typing import Dict, List, Optional
from datetime import datetime
import requests

@dataclass
class Metric:
    """Single metric measurement."""
    name: str
    value: float
    unit: str
    timestamp: str
    source: str
    threshold_status: str  # ok, warning, critical

@dataclass
class MetricsReport:
    """Aggregated metrics report."""
    project: str
    version: str
    timestamp: str
    build_number: str
    metrics: List[Metric]


class MetricsCollector:
    """Collect metrics from various sources.

    Note: External API calls (SonarQube, Jira) require proper error
    handling and authentication for production use.
    """

    def __init__(self, config: Dict):
        self.config = config
        self.metrics: List[Metric] = []

    def collect_from_junit(self, junit_path: str) -> None:
        """Extract test metrics from JUnit XML."""

        tree = ET.parse(junit_path)
        root = tree.getroot()

        # Count test results
        total_tests = int(root.attrib.get('tests', 0))
        failures = int(root.attrib.get('failures', 0))
        errors = int(root.attrib.get('errors', 0))
        skipped = int(root.attrib.get('skipped', 0))
        time = float(root.attrib.get('time', 0))

        passed = total_tests - failures - errors - skipped
        pass_rate = (passed / total_tests * 100) if total_tests > 0 else 0

        self.metrics.append(Metric(
            name="test_total",
            value=total_tests,
            unit="count",
            timestamp=datetime.now().isoformat(),
            source="junit",
            threshold_status="ok"
        ))

        self.metrics.append(Metric(
            name="test_pass_rate",
            value=round(pass_rate, 2),
            unit="percent",
            timestamp=datetime.now().isoformat(),
            source="junit",
            threshold_status=self._check_threshold("test_pass_rate", pass_rate)
        ))

        self.metrics.append(Metric(
            name="test_duration",
            value=round(time, 2),
            unit="seconds",
            timestamp=datetime.now().isoformat(),
            source="junit",
            threshold_status="ok"
        ))

    def collect_from_coverage(self, coverage_path: str) -> None:
        """Extract coverage metrics from Cobertura XML."""

        tree = ET.parse(coverage_path)
        root = tree.getroot()

        line_rate = float(root.attrib.get('line-rate', 0)) * 100
        branch_rate = float(root.attrib.get('branch-rate', 0)) * 100

        self.metrics.append(Metric(
            name="coverage_statement",
            value=round(line_rate, 2),
            unit="percent",
            timestamp=datetime.now().isoformat(),
            source="gcov",
            threshold_status=self._check_threshold("coverage_statement", line_rate)
        ))

        self.metrics.append(Metric(
            name="coverage_branch",
            value=round(branch_rate, 2),
            unit="percent",
            timestamp=datetime.now().isoformat(),
            source="gcov",
            threshold_status=self._check_threshold("coverage_branch", branch_rate)
        ))

    def collect_from_sonarqube(self, project_key: str) -> None:
        """Fetch metrics from SonarQube API."""

        sonar_url = self.config.get('sonarqube_url', 'http://localhost:9000')
        api_url = f"{sonar_url}/api/measures/component"

        params = {
            'component': project_key,
            'metricKeys': 'bugs,vulnerabilities,code_smells,coverage,duplicated_lines_density'
        }

        try:
            response = requests.get(api_url, params=params, timeout=10)
            data = response.json()

            for measure in data.get('component', {}).get('measures', []):
                metric_name = f"sonar_{measure['metric']}"
                value = float(measure['value'])

                self.metrics.append(Metric(
                    name=metric_name,
                    value=value,
                    unit="count" if measure['metric'] in ['bugs', 'vulnerabilities', 'code_smells'] else "percent",
                    timestamp=datetime.now().isoformat(),
                    source="sonarqube",
                    threshold_status=self._check_threshold(metric_name, value)
                ))
        except Exception as e:
            print(f"Warning: Could not fetch SonarQube metrics: {e}")

    def collect_from_jira(self, project_key: str) -> None:
        """Fetch defect metrics from Jira."""

        jira_url = self.config.get('jira_url')
        auth = self.config.get('jira_auth')

        if not jira_url:
            return

        # Query for defects
        jql = f'project = {project_key} AND type = Bug'
        api_url = f"{jira_url}/rest/api/2/search"

        try:
            response = requests.get(
                api_url,
                params={'jql': jql, 'maxResults': 0},
                auth=auth,
                timeout=10
            )
            data = response.json()
            total_defects = data.get('total', 0)

            # Query for open defects
            jql_open = f'{jql} AND status NOT IN (Done, Closed)'
            response = requests.get(
                api_url,
                params={'jql': jql_open, 'maxResults': 0},
                auth=auth,
                timeout=10
            )
            open_defects = response.json().get('total', 0)

            self.metrics.append(Metric(
                name="defects_total",
                value=total_defects,
                unit="count",
                timestamp=datetime.now().isoformat(),
                source="jira",
                threshold_status="ok"
            ))

            self.metrics.append(Metric(
                name="defects_open",
                value=open_defects,
                unit="count",
                timestamp=datetime.now().isoformat(),
                source="jira",
                threshold_status=self._check_threshold("defects_open", open_defects)
            ))
        except Exception as e:
            print(f"Warning: Could not fetch Jira metrics: {e}")

    def _check_threshold(self, metric_name: str, value: float) -> str:
        """Check metric value against thresholds."""

        thresholds = self.config.get('thresholds', {})
        metric_thresholds = thresholds.get(metric_name, {})

        critical = metric_thresholds.get('critical')
        warning = metric_thresholds.get('warning')
        direction = metric_thresholds.get('direction', 'higher_better')

        if critical is not None:
            if direction == 'higher_better' and value < critical:
                return 'critical'
            elif direction == 'lower_better' and value > critical:
                return 'critical'

        if warning is not None:
            if direction == 'higher_better' and value < warning:
                return 'warning'
            elif direction == 'lower_better' and value > warning:
                return 'warning'

        return 'ok'

    def generate_report(self, project: str, version: str, build: str) -> MetricsReport:
        """Generate metrics report."""

        return MetricsReport(
            project=project,
            version=version,
            timestamp=datetime.now().isoformat(),
            build_number=build,
            metrics=self.metrics
        )

    def to_json(self) -> str:
        """Export metrics as JSON."""

        report = self.generate_report(
            self.config.get('project', 'unknown'),
            self.config.get('version', '0.0.0'),
            self.config.get('build', 'unknown')
        )
        return json.dumps(asdict(report), indent=2)
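The threshold configuration consumed by `MetricsCollector._check_threshold` can be tried in isolation. The snippet below is a standalone replica of that method's logic (so it runs without the full collector); the metric names and limits are invented for illustration.

```python
# Standalone replica of MetricsCollector._check_threshold, illustrating the
# threshold configuration format (metric names and limits are invented).

def check_threshold(thresholds: dict, metric_name: str, value: float) -> str:
    """Classify a value as ok / warning / critical per configured direction."""
    t = thresholds.get(metric_name, {})
    critical, warning = t.get("critical"), t.get("warning")
    direction = t.get("direction", "higher_better")
    # "Worse" means below the limit when higher is better, above it otherwise.
    worse = (lambda v, lim: v < lim) if direction == "higher_better" else (lambda v, lim: v > lim)
    if critical is not None and worse(value, critical):
        return "critical"
    if warning is not None and worse(value, warning):
        return "warning"
    return "ok"

thresholds = {
    "coverage_statement": {"warning": 80, "critical": 60, "direction": "higher_better"},
    "defects_open":       {"warning": 10, "critical": 20, "direction": "lower_better"},
}

print(check_threshold(thresholds, "coverage_statement", 72.5))  # warning
print(check_threshold(thresholds, "defects_open", 25))          # critical
print(check_threshold(thresholds, "unknown_metric", 1.0))       # ok
```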

Metrics Dashboard

The following diagram shows the project metrics dashboard, consolidating product quality, process performance, and AI tool effectiveness metrics into a unified reporting view.

[Diagram: Project Metrics Dashboard]


Work Products

| WP ID | Work Product | AI Role |
|-------|--------------|---------|
| 08-29 | Measurement plan | Template generation |
| 13-24 | Measurement report | Automated generation |
| 14-08 | Quality dashboard | Automated visualization |

Summary

MAN.6 Measurement:

  • AI Level: L2-L3 (High automation potential)
  • Primary AI Value: Automated collection, analysis, dashboards
  • Human Essential: Metric selection, interpretation
  • Key Outputs: Metrics reports, dashboards
  • Integration: Feeds MAN.3, MAN.5, SUP.1