4.1: Source Code as Single Source of Truth


What You'll Learn

By the end of this chapter, you will be able to:

  • Explain the "source code as truth" principle
  • Design editor-agnostic development workflows
  • Configure CI/CD pipelines as the authoritative processing layer
  • Avoid editor/IDE lock-in
  • Apply source-code-truth to documentation, traceability, and configuration management
  • Recognize anti-patterns that erode a single source of truth
  • Implement practical workflows for safety-critical projects

The Core Principle

The diagram below shows how source code flows from any editor through the authoritative pipeline, which enforces quality gates regardless of which tool produced the code.

Source Code Pipeline

Any editor works. The pipeline is the authority.


What This Means

Source Code Repository

| Aspect | Principle |
|--------|-----------|
| Authority | Repository is the authoritative source |
| Versioning | All history tracked |
| Collaboration | Team works on same codebase |
| Traceability | Commits link to requirements/issues |

CI/CD Pipeline

| Aspect | Principle |
|--------|-----------|
| Processing | Pipeline processes source |
| Quality | Quality gates enforce standards |
| Build | Reproducible builds from source |
| Verification | Automated testing and analysis |

Editors/IDEs

| Aspect | Principle |
|--------|-----------|
| Role | Editing interface only |
| Interchangeable | Any editor works |
| Local enhancements | AI assistance and linting are local conveniences |
| No lock-in | No required editor |

Single Source of Truth Principle

In safety-critical systems governed by ASPICE, ISO 26262, or DO-178C, the single source of truth principle is not merely a convenience — it is a compliance requirement. When multiple representations of the same information exist without a clear authoritative source, inconsistencies become inevitable, and inconsistencies in safety-critical systems can lead to hazardous failures.

Principle: Every piece of project information — requirements, design, code, tests, configuration — must have exactly one authoritative location. All other representations are derived from that location and verified against it.

Why Safety-Critical Systems Demand This

| Concern | Consequence of Multiple Sources | Single-Source Mitigation |
|---------|--------------------------------|--------------------------|
| Requirements ambiguity | Conflicting interpretations lead to wrong implementation | One canonical requirements store, all views derived |
| Design inconsistency | Architecture documents diverge from actual code | Code is truth; design docs generated or verified from code |
| Test coverage gaps | Tests written against stale requirements | Tests linked to requirements in one traceable chain |
| Audit failure | Assessors find contradictions across work products | Single source ensures consistency across all artifacts |
| Safety argument collapse | Safety case references outdated evidence | Evidence generated from source, always current |

The Cost of Duplication

When information lives in two places, one of them is always wrong. In regulated environments, this creates a specific and measurable risk:

  • ASPICE SUP.10 (Change Request Management) requires that changes propagate to all affected work products. Duplication makes this exponentially harder.
  • ISO 26262 Part 8 Clause 6 requires configuration management of all safety-related work products. Duplicate sources create uncontrolled artifacts.
  • DO-178C Section 7 requires that software lifecycle data be controlled. Shadow copies violate this requirement.

Documentation as Code

The documentation-as-code approach treats documentation with the same rigor applied to source code: it is version-controlled, reviewed, tested, and deployed through the same pipeline.

Key Insight: If documentation does not live in the repository, it will drift from the code it describes. If it does not pass through the pipeline, its quality is unverified.

Core Practices

| Practice | Description | Benefit |
|----------|-------------|---------|
| Docs in repo | Documentation files stored alongside source code | Same versioning, same branching, same reviews |
| Markdown format | Plain-text format readable by humans and machines | No proprietary tooling required |
| Pipeline validation | CI/CD checks doc structure, links, and freshness | Broken references caught automatically |
| Review process | Documentation changes go through pull requests | Peer review ensures accuracy |
| Generated artifacts | PDFs, HTML, and other formats built from source | Single source, multiple outputs |

CI/CD for Documentation

A documentation pipeline should enforce the same quality gates as a code pipeline:

# Example documentation pipeline stage
doc_quality:
  stage: verify
  script:
    - markdownlint docs/**/*.md
    - check-links docs/**/*.md
    - verify-traceability --requirements docs/requirements/ --tests test/
    - build-docs --format pdf --output output/
  artifacts:
    paths:
      - output/*.pdf

What Belongs in the Repository

| Artifact | In Repository | Rationale |
|----------|---------------|-----------|
| Requirements (Markdown/ReqIF) | Yes | Version-controlled, diffable, reviewable |
| Architecture descriptions | Yes | Co-evolves with code |
| API documentation source | Yes | Generated from code annotations |
| Test plans and procedures | Yes | Linked to requirements and code |
| Build and deployment scripts | Yes | Reproducible pipeline |
| Generated PDFs/HTML | No | Built by pipeline, stored as artifacts |
| Presentation slides | No | Derived work, not source of truth |
| Meeting minutes | No | Administrative, not engineering truth |

AI and Documentation Drift

AI-generated content introduces a new category of documentation drift. When an AI tool generates or updates documentation, the output may be plausible but subtly wrong — and because it reads well, human reviewers may not catch the inaccuracies.

Warning: AI-generated documentation that is not verified against the actual source code is a liability, not an asset. Plausible-sounding text that contradicts the implementation is worse than no documentation at all.

How Drift Occurs

| Drift Mechanism | Description | Example |
|-----------------|-------------|---------|
| Stale context | AI generates docs based on outdated code snapshots | Function signature changed but AI-generated docstring reflects old parameters |
| Hallucinated behavior | AI infers behavior that does not exist | AI documents a retry mechanism that was never implemented |
| Assumption propagation | AI assumes standard patterns that do not apply | AI documents thread-safety for a single-threaded module |
| Version mismatch | AI trained on older API versions | AI references deprecated functions as current |
| Partial understanding | AI documents only what it can see in a prompt window | AI misses critical edge cases handled elsewhere in the codebase |

Mitigation Strategies

| Strategy | Implementation | Effectiveness |
|----------|----------------|---------------|
| Source-code verification | Pipeline compares AI-generated docs against code structure | High — catches structural mismatches |
| Human review mandate | All AI-generated documentation requires human sign-off | High — but depends on reviewer diligence |
| Freshness timestamps | Every generated doc carries a timestamp and source commit hash | Medium — enables staleness detection |
| Automated consistency checks | CI job validates that documented interfaces match actual interfaces | High — catches drift automatically |
| Regeneration over patching | Regenerate docs from source rather than patch existing AI output | High — eliminates accumulated drift |
| Diff-based review | Show only what changed between AI regeneration cycles | Medium — reduces review burden |

The HITL Requirement for AI Documentation

In safety-critical contexts, AI-generated documentation must follow the same HITL (Human-in-the-Loop) pattern as AI-generated code:

  1. AI generates or updates documentation
  2. Pipeline validates structural correctness
  3. Human reviews content accuracy
  4. Approved documentation is merged to the authoritative branch
  5. Pipeline builds and publishes derived artifacts

Skipping step 3 is never acceptable for safety-related work products.
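Step 2 of this workflow, structural validation, is straightforward to automate. The sketch below is illustrative only: the required section names and the `SWR-nnn` tag format are assumptions drawn from this chapter's examples, not mandated by any standard:

```python
import re

# Assumed project conventions: these sections must exist, and at least
# one requirement tag of the form SWR-nnn must appear.
REQUIRED_SECTIONS = ("Description", "Rationale", "Acceptance Criteria")
REQ_TAG = re.compile(r"\bSWR-\d{3}\b")

def validate_structure(doc_text: str) -> list[str]:
    """Return a list of structural findings; an empty list passes the gate."""
    findings = []
    for section in REQUIRED_SECTIONS:
        if section not in doc_text:
            findings.append(f"missing section: {section}")
    if not REQ_TAG.search(doc_text):
        findings.append("no requirement tag (SWR-nnn) found")
    return findings

ai_draft = "Description: ...\nRationale: ...\nTraces to SWR-042."
print(validate_structure(ai_draft))  # ['missing section: Acceptance Criteria']
```

The pipeline runs this before a human ever sees the draft, so reviewers in step 3 spend their attention on accuracy rather than formatting.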


Traceability from Source

ASPICE requires bidirectional traceability across all engineering levels. The single source of truth principle enables this by ensuring that every traceable link points to exactly one canonical location.

Traceability Chain

| Level | Artifact | Links To |
|-------|----------|----------|
| Stakeholder needs | Stakeholder requirements document | System requirements |
| System requirements | SyRS (System Requirements Specification) | Software requirements, Hardware requirements |
| Software requirements | SRS (Software Requirements Specification) | Architecture, Detailed design |
| Architecture | SAD (Software Architecture Document) | Detailed design, Components |
| Detailed design | Source code modules | Unit tests |
| Unit tests | Test results | Requirements (closing the loop) |

Implementing Traceability in Source

Traceability identifiers can be embedded directly in source code, keeping the link alive at the point of truth:

/**
 * @requirement SWR-042
 * @design SDD-017
 * @safety ASIL-B
 *
 * Calculate braking distance based on current velocity and
 * road surface coefficient.
 */
float calculate_braking_distance(float velocity, float surface_coeff)
{
    /* Implementation */
}

# test_braking.py
class TestBrakingDistance:
    """
    Verifies: SWR-042
    Design Ref: SDD-017
    """
    def test_dry_road(self):
        assert calculate_braking_distance(30.0, 0.8) < 45.0

Pipeline-Enforced Traceability

The CI/CD pipeline can verify that traceability is complete:

traceability_check:
  - Parse all @requirement tags from source code
  - Parse all Verifies: tags from test code
  - Cross-reference against requirements database
  - Report: uncovered requirements, orphaned tests, broken links
  - Quality gate: FAIL if any requirement lacks both implementation and test
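The steps above can be sketched in Python. This is a minimal illustration of the cross-referencing logic, assuming the `@requirement` and `Verifies:` tag formats shown earlier; a real check would walk the file tree and query the requirements database:

```python
import re

# Tag formats follow the annotation examples in this chapter.
REQ_TAG = re.compile(r"@requirement\s+(SWR-\d+)")
VERIFIES_TAG = re.compile(r"Verifies:\s*(SWR-\d+)")

def traceability_report(source_files, test_files, requirements):
    """Cross-reference tags in code and tests against the requirements
    list; the quality gate fails on any non-empty gap."""
    implemented = {m for text in source_files for m in REQ_TAG.findall(text)}
    verified = {m for text in test_files for m in VERIFIES_TAG.findall(text)}
    return {
        "unimplemented": sorted(set(requirements) - implemented),
        "untested": sorted(set(requirements) - verified),
        "orphaned_tests": sorted(verified - set(requirements)),
    }

src = ["/* @requirement SWR-042 */ float calculate_braking_distance(...);"]
tests = ['"""Verifies: SWR-042"""', '"""Verifies: SWR-099"""']
print(traceability_report(src, tests, ["SWR-042", "SWR-043"]))
# {'unimplemented': ['SWR-043'], 'untested': ['SWR-043'], 'orphaned_tests': ['SWR-099']}
```

The `orphaned_tests` output is as valuable as the coverage gaps: a test that verifies a nonexistent requirement usually signals a stale requirement ID or an untracked change.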

Configuration Management

The single source of truth principle is the foundation of ASPICE SUP.8 (Configuration Management). When all project artifacts are derived from or controlled within the repository, configuration management becomes tractable rather than heroic.

SUP.8 Base Practices and Source Code Truth

| SUP.8 Base Practice | How Source Code Truth Supports It |
|---------------------|-----------------------------------|
| BP1: Develop a CM strategy | Strategy is straightforward: repository is the authority |
| BP2: Identify configuration items | Every file in the repository is a configuration item with full history |
| BP3: Establish baselines | Git tags and branches define baselines precisely |
| BP4: Manage change requests | Pull requests with linked issues constitute change control |
| BP5: Manage changes to CIs | Commits with review and approval manage every change |
| BP6: Ensure completeness | Pipeline verifies all required artifacts are present and consistent |
| BP7: Manage storage and delivery | Repository and artifact store provide controlled storage |

Configuration Items in the Repository

| Configuration Item | Repository Location | Baseline Mechanism |
|--------------------|---------------------|--------------------|
| Source code | src/, include/ | Git tags on release branches |
| Test code | test/ | Same tag as source |
| Build configuration | ci/, Makefile | Same tag as source |
| Documentation source | docs/ | Same tag as source |
| Tool configuration | config/ | Same tag as source |
| Requirements (if text-based) | docs/requirements/ | Same tag as source |
| Pipeline definition | Jenkinsfile, .gitlab-ci.yml | Same tag as source |

Principle: If an artifact cannot be reconstructed from a tagged commit in the repository, it is not under proper configuration management.


Editor-Agnostic Workflow

Recommended Architecture

The following diagram shows the recommended separation between developer workstations (where editing happens) and the CI/CD pipeline (where enforcement happens), with the repository as the handoff point.

Developer to Pipeline Flow

What Goes Where

| Element | Location | Rationale |
|---------|----------|-----------|
| Code style rules | Repository | Enforced by pipeline |
| Linting config | Repository | Consistent across editors |
| AI prompts | Repository | Standardized usage |
| Build scripts | Repository | Reproducible builds |
| Quality gates | Pipeline | Authoritative enforcement |
| Editor settings | Local | Personal preference |

Practical Implementation

Repository Structure

project/
├── src/                     # Source code
├── include/                 # Headers
├── test/                    # Test code
├── docs/                    # Documentation
├── config/
│   ├── .clang-format       # Code style (used by pipeline)
│   ├── .clang-tidy         # Static analysis rules
│   ├── sonar-project.properties  # SonarQube config
│   └── misra.rules         # MISRA configuration
├── ci/
│   ├── Jenkinsfile         # Pipeline definition
│   ├── build.sh            # Build script
│   └── quality-gate.sh     # Quality enforcement
├── .editorconfig           # Cross-editor formatting
└── README.md

.editorconfig Example

# Cross-editor settings
root = true

[*]
indent_style = space
indent_size = 4
end_of_line = lf
charset = utf-8
trim_trailing_whitespace = true
insert_final_newline = true

[*.c]
indent_size = 4

[*.py]
indent_size = 4

[Makefile]
indent_style = tab
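Editors apply these rules locally, but the pipeline should not trust them to. A minimal sketch of enforcing two of the rules above (`trim_trailing_whitespace` and `insert_final_newline`) as a pipeline check; a real job would read the `.editorconfig` file rather than hard-code the rules:

```python
def editorconfig_violations(text: str) -> list[str]:
    """Report violations of the trailing-whitespace and final-newline
    rules, mirroring the .editorconfig settings above."""
    findings = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        if line != line.rstrip():
            findings.append(f"line {lineno}: trailing whitespace")
    if text and not text.endswith("\n"):
        findings.append("missing final newline")
    return findings

print(editorconfig_violations("int x = 1;  \nint y = 2;"))
# ['line 1: trailing whitespace', 'missing final newline']
```

Because the check lives in the pipeline, a developer whose editor ignores `.editorconfig` still cannot merge non-conforming files.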

AI Integration Pattern

Editor-Local AI

| Aspect | Implementation |
|--------|----------------|
| AI Assistant | In-editor (Copilot, Claude, etc.) |
| Role | Suggestion, generation |
| Authority | None (suggestions only) |
| Verification | Pipeline (not editor) |

Pipeline AI

| Aspect | Implementation |
|--------|----------------|
| AI Analysis | In pipeline (code review, analysis) |
| Role | Quality gate component |
| Authority | Block/pass (with human review) |
| Verification | Part of authoritative process |

The following diagram contrasts the boundary between local AI assistance (advisory, no authority) and pipeline AI analysis (authoritative, with enforcement power).

AI Authority Split - Local vs Pipeline


Living Documentation Architecture

Keeping documentation in sync with source code requires more than good intentions. It requires a technical architecture that makes drift detectable and correctness verifiable.

Architecture Components

| Component | Role | Implementation |
|-----------|------|----------------|
| Source code annotations | Embed documentation hooks in code | Doxygen, Javadoc, docstrings, custom tags |
| Documentation source files | Human-authored explanatory content | Markdown files in docs/ directory |
| Generation engine | Produce derived documentation from source | Doxygen, Sphinx, MkDocs, Pandoc |
| Validation layer | Verify consistency between code and docs | Custom CI scripts, link checkers, schema validators |
| Publishing pipeline | Build and distribute final artifacts | CI/CD stage producing PDF, HTML, or hosted site |

Sync Verification Approach

The pipeline should verify documentation freshness on every commit:

doc_sync_check:
  steps:
    - Extract public API signatures from source code
    - Extract documented API signatures from docs
    - Compare: flag any undocumented public APIs
    - Compare: flag any documented APIs that no longer exist
    - Verify all requirement tags in code have matching entries in requirements docs
    - Verify all test tags reference valid requirements
    - FAIL if discrepancies exceed threshold
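The signature-comparison steps above can be sketched with Python's `ast` module. This is an illustrative fragment, not a complete checker: it assumes (as a project convention) that docs reference public APIs as `` `name()` `` in backticks, and it only inspects top-level functions:

```python
import ast
import re

def public_functions(source: str) -> set[str]:
    """Extract public (non-underscore) top-level function names via ast."""
    tree = ast.parse(source)
    return {node.name for node in tree.body
            if isinstance(node, ast.FunctionDef)
            and not node.name.startswith("_")}

def documented_functions(markdown: str) -> set[str]:
    """Assumed convention: docs cite APIs as `name()` in backticks."""
    return set(re.findall(r"`(\w+)\(\)`", markdown))

code = "def brake(v, mu):\n    return v * mu\n\ndef _helper():\n    pass\n"
docs = "The `brake()` and `accelerate()` APIs are described below."
print(public_functions(code) - documented_functions(docs))  # undocumented APIs
print(documented_functions(docs) - public_functions(code))  # documented ghosts
```

The second set is the dangerous one: a "documented ghost" is an API the docs describe but the code no longer provides, exactly the drift pattern AI-generated documentation tends to introduce.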

Freshness Policy

| Documentation Type | Maximum Staleness | Verification Method |
|--------------------|-------------------|---------------------|
| API reference | 0 commits (always current) | Generated from source on every build |
| Architecture overview | 1 release cycle | Manual review at release gate |
| Requirements specification | 0 commits (always current) | Traceability check in pipeline |
| User guides | 1 release cycle | Manual review triggered by feature changes |
| Safety documentation | 0 commits (always current) | Automated consistency checks |

AI Documentation Agents

AI agents can be deployed to monitor and maintain documentation quality, operating within the HITL framework required by safety standards.

Principle: AI documentation agents propose changes; humans approve them. The agent never writes directly to the authoritative branch.

Agent Roles

| Agent Type | Responsibility | Trigger |
|------------|----------------|---------|
| Drift detector | Compares documentation against current code and flags inconsistencies | Every commit or nightly schedule |
| Coverage reporter | Identifies undocumented modules, functions, or requirements | Pull request gate |
| Freshness monitor | Flags documentation that has not been updated since related code changed | Nightly or per-release |
| Style enforcer | Checks documentation against style guide and formatting rules | Every commit |
| Link validator | Verifies internal and external links are not broken | Nightly schedule |

Agent Workflow

  1. Trigger: Code change is pushed or scheduled job fires
  2. Analysis: Agent compares code state against documentation state
  3. Report: Agent generates a structured report of discrepancies
  4. Proposal: Agent opens a draft pull request with suggested documentation updates
  5. Review: Human reviewer evaluates proposals for accuracy
  6. Merge: Approved changes are merged to the authoritative branch

Guardrails for AI Documentation Agents

| Guardrail | Purpose |
|-----------|---------|
| Draft-only PRs | Agent cannot merge its own changes |
| Scope limitation | Agent operates only on documentation files, never on source code |
| Confidence scoring | Agent labels each suggestion with a confidence level |
| Audit log | All agent actions are logged for traceability |
| Human escalation | Low-confidence suggestions are flagged for senior review |
| Rollback capability | Any agent-generated change can be reverted to the previous human-approved state |
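The confidence-scoring and escalation guardrails combine naturally. A minimal sketch, in which the 0.8 threshold and the suggestion fields are illustrative assumptions rather than values from any standard:

```python
def route_suggestions(suggestions, threshold=0.8):
    """Split agent suggestions into routine review and senior escalation
    based on the agent's own confidence label."""
    routine = [s for s in suggestions if s["confidence"] >= threshold]
    escalate = [s for s in suggestions if s["confidence"] < threshold]
    return routine, escalate

suggestions = [
    {"file": "docs/api.md", "confidence": 0.95},
    {"file": "docs/safety.md", "confidence": 0.40},
]
routine, escalate = route_suggestions(suggestions)
print([s["file"] for s in escalate])  # ['docs/safety.md']
```

Note that both paths still end in human review; the routing only determines who reviews, never whether review happens.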

Markdown as Standard

For safety-critical documentation managed as code, Markdown has emerged as the practical standard. Its simplicity is its strength.

Why Markdown Works

| Property | Benefit for Safety-Critical Projects |
|----------|--------------------------------------|
| Plain text | Diffable in version control; no binary lock-in |
| Human readable | Reviewable without specialized tools |
| Machine parseable | Automated validation, extraction, and transformation |
| Tool ecosystem | Pandoc, MkDocs, Sphinx, Hugo, and hundreds of others |
| Lightweight | No heavy IDE or proprietary license required |
| Convertible | Transforms to PDF, HTML, DOCX, LaTeX, ReqIF |

Markdown Limitations and Mitigations

| Limitation | Mitigation |
|------------|------------|
| No native requirements management | Use structured YAML frontmatter or link to external RMS |
| Limited table formatting | Use HTML tables for complex layouts; keep Markdown tables simple |
| No built-in cross-referencing | Use consistent ID schemes and automated link validation |
| No access control on sections | Manage access at the repository/branch level |
| No built-in review workflow | Use pull request reviews (standard Git workflow) |

Structured Markdown for Requirements

Requirements can be captured in Markdown with structured metadata:

### SWR-042: Braking Distance Calculation

| Field | Value |
|-------|-------|
| ID | SWR-042 |
| Priority | Mandatory |
| ASIL | B |
| Parent | SYS-REQ-018 |
| Status | Approved |
| Version | 1.2 |

**Description**: The system shall calculate braking distance based on
current vehicle velocity and road surface friction coefficient.

**Rationale**: Required for forward collision warning activation threshold.

**Acceptance Criteria**:
- Braking distance is computed within 10ms of velocity update
- Accuracy within 5% of physics-based reference model
- Handles surface coefficients in range [0.1, 1.0]
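Because the metadata lives in a plain Markdown table, the pipeline can extract it mechanically. A sketch of a parser for the `| Field | Value |` form used above; the field names follow that example and a real implementation would use a proper Markdown parser:

```python
import re

# Matches one two-column pipe-table row: | key | value |
ROW = re.compile(r"^\|\s*([^|]+?)\s*\|\s*([^|]+?)\s*\|$", re.MULTILINE)

def parse_requirement(markdown: str) -> dict[str, str]:
    """Turn | Field | Value | rows into a dict, skipping the header row
    and the |-------| separator."""
    fields = {}
    for key, value in ROW.findall(markdown):
        if set(key) <= {"-"}:   # separator row like |-------|-------|
            continue
        if key != "Field":      # skip the header row itself
            fields[key] = value
    return fields

req = "| Field | Value |\n|-------|-------|\n| ID | SWR-042 |\n| ASIL | B |"
print(parse_requirement(req))  # {'ID': 'SWR-042', 'ASIL': 'B'}
```

With requirements parseable this way, the traceability and coverage checks described earlier can consume them directly from the repository, with no separate requirements export step.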

Practical Workflow

The following step-by-step workflow illustrates how to maintain source code truth on a daily basis in a safety-critical project.

Step 1: All Changes Start in the Repository

Every modification — code, documentation, configuration, test — begins as a change in the version-controlled repository. No changes are made in external tools and then "synced back."

Step 2: Branch and Link

Create a feature branch linked to a requirement or change request:

git checkout -b feature/SWR-042-braking-distance

The branch name encodes traceability. The pull request description references the requirement ID.

Step 3: Implement with Embedded Traceability

Write code with requirement and design tags embedded. Write or update documentation in the same branch. Write or update tests that reference the same requirement.

Step 4: Pipeline Validates

On push, the CI/CD pipeline runs the following checks:

| Check | Purpose |
|-------|---------|
| Build | Code compiles and links |
| Unit tests | Functional correctness verified |
| Static analysis | MISRA, coding standards enforced |
| Traceability check | All modified requirements have tests |
| Documentation check | No broken links, no stale API docs |
| Coverage check | Code coverage meets threshold |
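These checks combine into a single gate: the push is accepted for review only if every check passes. A sketch of the aggregation logic, with stubbed checks standing in for the real build, test, and analysis jobs:

```python
def run_quality_gate(checks: dict) -> tuple[bool, list[str]]:
    """Run every named check; the gate passes only if all of them do."""
    failures = [name for name, check in checks.items() if not check()]
    return (len(failures) == 0, failures)

# Stub results standing in for the real pipeline jobs.
checks = {
    "build": lambda: True,
    "unit_tests": lambda: True,
    "traceability": lambda: False,  # e.g. a modified requirement lacks a test
}
passed, failures = run_quality_gate(checks)
print(passed, failures)  # False ['traceability']
```

Reporting every failure at once, rather than stopping at the first, gives developers a complete picture per push and shortens the fix-and-retry loop.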

Step 5: Review

Human reviewers verify:

  • Code correctness and safety implications
  • Documentation accuracy
  • Traceability completeness
  • AI-generated content (if any) is factually correct

Step 6: Merge to Authoritative Branch

After approval, the branch is merged. The authoritative branch now contains the updated code, documentation, tests, and traceability — all in one atomic change.

Step 7: Baseline

At release milestones, the authoritative branch is tagged. This tag constitutes a baseline: a complete, consistent, traceable snapshot of the entire project state.


Benefits

Team Flexibility

  • Developers choose preferred editors
  • New team members use familiar tools
  • No training on specific IDE

Vendor Independence

  • No lock-in to specific IDE vendor
  • Switch AI tools without disruption
  • Upgrade/replace tools independently

Process Stability

  • Pipeline changes independent of editors
  • Quality standards enforced consistently
  • Reproducible results

AI Portability

  • AI tools can be changed
  • Compare AI assistants easily
  • Future AI tools integrate simply

Anti-Patterns

The following anti-patterns undermine the single source of truth and create risk in safety-critical projects.

| Anti-Pattern | Problem | Better Approach |
|--------------|---------|-----------------|
| IDE-specific build | Only works in one IDE | Use build scripts |
| Editor-only linting | Inconsistent enforcement | Pipeline linting |
| AI-required editor | Limits tool choice | AI in pipeline + any editor |
| Local-only quality | Different standards per dev | Pipeline gates |
| Wiki documentation | Docs live outside the repo, diverge from code | Docs-as-code in the repository |
| Email-based requirements | Requirements not version-controlled | Requirements in repo or linked RMS |
| Manual traceability matrix | Spreadsheet maintained by hand, always stale | Automated traceability from tagged source |
| Copy-paste configuration | Config files duplicated across projects with manual edits | Shared config with project-specific overrides in repo |
| Shadow repositories | Teams maintain private repos alongside the official one | One repository, branching strategy for isolation |
| Unreviewed AI output | AI-generated docs merged without human verification | HITL review for all AI-generated content |
| Tool-specific artifacts as source | Proprietary tool database treated as source of truth | Export to open format in repo; tool is a view, not the source |
| Post-hoc documentation | Docs written after release instead of alongside development | Documentation updated in the same commit as code changes |

Implementation Checklist

Use this checklist when establishing or auditing source-code-truth practices on a project.

| # | Item | Status |
|---|------|--------|
| 1 | All source code is in a single version-controlled repository | |
| 2 | All documentation source files are in the same repository | |
| 3 | All build and pipeline definitions are in the repository | |
| 4 | All tool configurations (linting, formatting, analysis) are in the repository | |
| 5 | .editorconfig is present for cross-editor consistency | |
| 6 | CI/CD pipeline builds from repository without manual steps | |
| 7 | Pipeline enforces code style, static analysis, and test execution | |
| 8 | Pipeline validates documentation links and structure | |
| 9 | Traceability tags are embedded in source code and test code | |
| 10 | Pipeline verifies traceability completeness | |
| 11 | Requirements are stored in or linked from the repository | |
| 12 | Baselines are established via Git tags at release milestones | |
| 13 | No build depends on a specific IDE or editor | |
| 14 | AI-generated content is reviewed by humans before merge | |
| 15 | Documentation freshness is monitored automatically | |
| 16 | No shadow copies of controlled artifacts exist outside the repository | |
| 17 | Change requests are linked to commits via branch names or PR metadata | |
| 18 | Generated artifacts (PDFs, HTML) are built by pipeline, not committed | |

Summary

The "source code as single source of truth" principle:

  1. Repository is authority: Source code is the definitive artifact
  2. Pipeline processes: CI/CD is the authoritative processing layer
  3. Editors are interfaces: Any editor can be used
  4. AI serves code: AI tools assist with source, not replace it
  5. No lock-in: Technology can change without process disruption
  6. Documentation lives with code: Docs-as-code prevents drift
  7. Traceability is embedded: Links from requirements to tests live in source
  8. Configuration management follows naturally: SUP.8 compliance is built-in when the repo is the single source
  9. AI agents assist, humans decide: Documentation agents propose; humans approve
  10. Anti-patterns are identifiable: Knowing what breaks the principle helps prevent it