# 8.2: Assessment Preparation

## Assessment Overview

### Assessment Types
| Type | Purpose | Duration | Assessor |
|---|---|---|---|
| Self-Assessment | Internal improvement | 1-2 weeks | Internal team |
| Mini-Assessment | Quick health check | 2-3 days | Qualified assessor |
| Full Assessment | Certification | 1-2 weeks | Accredited assessor |
| Surveillance | Maintain certification | 2-3 days | Accredited assessor |
### Assessment Phases

The diagram below shows the assessment phases from planning through execution and follow-up, indicating the activities, participants, and deliverables at each stage.
## Evidence Preparation

### Evidence Categories

```yaml
# Evidence Categories for ASPICE Assessment
evidence_categories:
  work_products:
    description: "Documents and artifacts produced by processes"
    examples:
      - "Requirements specifications"
      - "Architecture documents"
      - "Test reports"
      - "Review records"
    ai_support: "Automated collection and cataloging"

  process_records:
    description: "Records showing process execution"
    examples:
      - "Meeting minutes"
      - "Progress reports"
      - "Approval records"
      - "Change records"
    ai_support: "Automated extraction from tools"

  tools_infrastructure:
    description: "Tools used to support processes"
    examples:
      - "Requirements management tool"
      - "Version control system"
      - "CI/CD pipeline"
      - "Test automation"
    ai_support: "Tool configuration export"

  interviews:
    description: "Information from practitioners"
    examples:
      - "Process understanding"
      - "Role responsibilities"
      - "Problem resolution"
      - "Improvement activities"
    ai_support: "Interview preparation assistance"
```
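A collection pipeline needs to sort discovered artifacts into these categories before cataloging them. A minimal sketch of that first step, classifying by filename keywords (the keyword lists here are illustrative assumptions, not part of any standard):

```python
# Minimal sketch: map an artifact to an evidence category by filename
# keywords. Keyword lists are illustrative assumptions.
CATEGORY_KEYWORDS = {
    "work_products": ["srs", "architecture", "test_report", "review"],
    "process_records": ["minutes", "status", "approval", "change"],
    "tools_infrastructure": ["pipeline", "config", "jenkins"],
}

def categorize(filename: str) -> str:
    """Return the first matching evidence category, or 'unclassified'."""
    name = filename.lower()
    for category, keywords in CATEGORY_KEYWORDS.items():
        if any(kw in name for kw in keywords):
            return category
    return "unclassified"

print(categorize("SRS_v1.3.docx"))          # work_products
print(categorize("meeting_minutes_Jan.md"))  # process_records
```

In practice the interviews category has no file artifacts, so a classifier like this only covers the first three categories; interview evidence is gathered during the assessment itself.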
### Evidence Matrix

The following diagram maps evidence types to ASPICE processes, showing which work products, records, and tool outputs serve as assessment evidence for each process area.
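Such a matrix can also be kept in machine-readable form so gaps are checkable automatically. A sketch, using work product IDs that appear in the readiness checklist below (the process-to-ID mapping is illustrative):

```python
# Sketch of an evidence matrix: ASPICE process -> expected work product
# IDs. IDs are drawn from the readiness checklist; mapping is illustrative.
EVIDENCE_MATRIX = {
    "SWE.1": ["17-08", "17-11", "17-12"],
    "SWE.2": ["17-04", "17-05"],
    "SWE.3": ["04-04", "11-05"],
}

def missing_evidence(process: str, collected: set) -> list:
    """Return expected work product IDs not yet collected for a process."""
    return [wp for wp in EVIDENCE_MATRIX.get(process, []) if wp not in collected]

print(missing_evidence("SWE.1", {"17-08", "17-11"}))  # ['17-12']
```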
## Readiness Checklist

### Pre-Assessment Checklist

Note: Project names and dates are illustrative; customize for the actual project.

```yaml
# Assessment Readiness Checklist (template)
readiness_checklist:
  project: (Project Name)
  target_level: 2
  assessment_date: (assessment date)

  organizational_readiness:
    - item: "Assessment sponsor identified"
      status: complete
      responsible: Project Manager
    - item: "Assessment scope defined"
      status: complete
      responsible: Process Owner
    - item: "Resources allocated"
      status: complete
      responsible: Management
    - item: "Schedule agreed"
      status: complete
      responsible: Assessment Coordinator

  process_readiness:
    swe1:
      - item: "All outcomes achieved"
        status: complete
        evidence: "SRS_v1.3, Trace_Matrix_v1.2"
      - item: "Work products complete"
        status: complete
        evidence: "17-08, 17-11, 17-12"
      - item: "Reviews conducted"
        status: complete
        evidence: "RR-SWE1-001, RR-SWE1-002"
      - item: "Performance objectives defined"
        status: complete
        evidence: "Project Plan Section 4.2"
      - item: "Progress monitored"
        status: complete
        evidence: "Weekly Status Reports"
    swe2:
      - item: "All outcomes achieved"
        status: complete
        evidence: "Architecture_v2.0"
      - item: "Work products complete"
        status: complete
        evidence: "17-04, 17-05"
      - item: "Reviews conducted"
        status: complete
        evidence: "RR-SWE2-001"
    swe3:
      - item: "All outcomes achieved"
        status: complete
        evidence: "Source code, Unit tests"
      - item: "Work products complete"
        status: complete
        evidence: "04-04, 11-05"
      - item: "Code reviews conducted"
        status: complete
        evidence: "GitLab MR reviews"
    swe4:
      - item: "All outcomes achieved"
        status: complete
        evidence: "Unit test results"
      - item: "Coverage targets met"
        status: complete
        evidence: "Coverage report 87%"
      - item: "Defects tracked"
        status: complete
        evidence: "Jira defect records"
    swe5:
      - item: "All outcomes achieved"
        status: in_progress
        evidence: "Integration ongoing"
        gap: "2 integration issues open"
    swe6:
      - item: "All outcomes achieved"
        status: not_started
        evidence: "Scheduled for Feb"

  evidence_readiness:
    - item: "All work products under version control"
      status: complete
      location: "GitLab repository"
    - item: "Traceability complete"
      status: complete
      location: "Polarion"
    - item: "Review records available"
      status: complete
      location: "Confluence"
    - item: "Status reports archived"
      status: complete
      location: "SharePoint"
    - item: "Tool configurations documented"
      status: complete
      location: "Git repo /tools/"

  interview_readiness:
    - role: "Project Manager"
      available: true
      topics: ["Planning", "Monitoring", "Risk"]
    - role: "SW Architect"
      available: true
      topics: ["Architecture", "Design decisions"]
    - role: "SW Developer"
      available: true
      topics: ["Implementation", "Unit testing"]
    - role: "Test Engineer"
      available: true
      topics: ["Integration testing", "Qualification"]
    - role: "QA Engineer"
      available: true
      topics: ["Reviews", "CM", "QA"]

  overall_readiness:
    swe1: ready
    swe2: ready
    swe3: ready
    swe4: ready
    swe5: partial    # Integration not complete
    swe6: not_ready  # Not started yet

  recommendation: |
    Recommend postponing the assessment until SWE.5 and SWE.6 are complete.
    Expected readiness: 2025-03-01
```
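The per-process statuses in `overall_readiness` drive the recommendation at the end of the checklist. A hedged sketch of that roll-up logic (the rule that any non-ready process blocks the assessment is an assumption; real readiness decisions remain a human judgment call):

```python
# Sketch: derive an overall recommendation from per-process readiness.
# Assumption: any process that is not "ready" blocks the assessment.
def overall_recommendation(readiness: dict) -> str:
    """Return 'proceed' or a postpone message listing blocking processes."""
    not_ready = [p for p, s in readiness.items() if s != "ready"]
    if not not_ready:
        return "proceed"
    return "postpone until ready: " + ", ".join(sorted(not_ready))

statuses = {"swe1": "ready", "swe2": "ready", "swe3": "ready",
            "swe4": "ready", "swe5": "partial", "swe6": "not_ready"}
print(overall_recommendation(statuses))  # postpone until ready: swe5, swe6
```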
## AI-Assisted Evidence Collection

### Automated Evidence Gatherer

```python
"""
AI-assisted evidence collection for ASPICE assessment.
"""
from dataclasses import dataclass
from typing import Dict, List
from datetime import datetime
from pathlib import Path


@dataclass
class Evidence:
    """Evidence item for assessment."""
    id: str
    process: str
    attribute: str
    type: str  # work_product, record, tool
    title: str
    location: str
    version: str
    date: str
    content_summary: str
    relevance_score: float


@dataclass
class EvidencePackage:
    """Complete evidence package for a process."""
    process_id: str
    process_name: str
    target_level: int
    evidence_items: List[Evidence]
    coverage_summary: Dict[str, float]
    gaps: List[str]
    collection_date: str


class EvidenceCollector:
    """AI-assisted evidence collection for ASPICE.

    Note: Organizations may need tool-specific integrations (e.g.,
    Polarion, Jama, DOORS) beyond basic file pattern matching.
    """

    def __init__(self, project_root: str, config: Dict):
        self.project_root = Path(project_root)
        self.config = config
        self.evidence_map = self._load_evidence_map()

    def _load_evidence_map(self) -> Dict:
        """Load evidence requirements per process."""
        return {
            'SWE.1': {
                'PA1.1': {
                    'work_products': [
                        ('17-08', 'Software Requirements Specification',
                         ['SRS', 'requirements', '.md', '.docx']),
                        ('17-11', 'Traceability Record',
                         ['trace', 'matrix', '.xlsx']),
                        ('17-12', 'Verification Criteria',
                         ['verification', 'criteria'])
                    ],
                    'records': [
                        'Review records',
                        'Approval records'
                    ]
                },
                'PA2.1': {
                    'records': [
                        'Requirements plan',
                        'Progress reports',
                        'Quality metrics'
                    ]
                },
                'PA2.2': {
                    'records': [
                        'Work product list',
                        'CM records',
                        'Review records'
                    ]
                }
            }
        }

    def collect_evidence(self, process_id: str,
                         target_level: int) -> EvidencePackage:
        """Collect evidence for a process."""
        evidence_items = []
        coverage = {}

        # Collect PA 1.1 evidence
        pa11_evidence = self._collect_pa11_evidence(process_id)
        evidence_items.extend(pa11_evidence)
        coverage['PA1.1'] = self._calculate_coverage('PA1.1', pa11_evidence, process_id)

        # Collect PA 2.1/2.2 evidence if targeting Level 2+
        if target_level >= 2:
            pa21_evidence = self._collect_pa21_evidence(process_id)
            pa22_evidence = self._collect_pa22_evidence(process_id)
            evidence_items.extend(pa21_evidence)
            evidence_items.extend(pa22_evidence)
            coverage['PA2.1'] = self._calculate_coverage('PA2.1', pa21_evidence, process_id)
            coverage['PA2.2'] = self._calculate_coverage('PA2.2', pa22_evidence, process_id)

        # Identify gaps
        gaps = self._identify_gaps(process_id, target_level, coverage)

        return EvidencePackage(
            process_id=process_id,
            process_name=self._get_process_name(process_id),
            target_level=target_level,
            evidence_items=evidence_items,
            coverage_summary=coverage,
            gaps=gaps,
            collection_date=datetime.now().isoformat()
        )

    def _collect_pa11_evidence(self, process_id: str) -> List[Evidence]:
        """Collect PA 1.1 evidence - work products."""
        evidence = []
        process_map = self.evidence_map.get(process_id, {}).get('PA1.1', {})

        # Search for work products
        for wp_id, wp_name, patterns in process_map.get('work_products', []):
            found = self._search_for_evidence(patterns)
            if found:
                evidence.append(Evidence(
                    id=f"{process_id}-PA11-{wp_id}",
                    process=process_id,
                    attribute='PA1.1',
                    type='work_product',
                    title=wp_name,
                    location=str(found[0]),
                    version=self._get_version(found[0]),
                    date=self._get_date(found[0]),
                    content_summary=self._summarize_content(found[0]),
                    relevance_score=0.9
                ))
        return evidence

    def _collect_pa21_evidence(self, process_id: str) -> List[Evidence]:
        """Collect PA 2.1 evidence - performance management."""
        evidence = []

        # Search for planning documents
        plan_files = self._search_for_evidence(['plan', 'schedule', '.md', '.xlsx'])
        for f in plan_files[:3]:  # Limit to top 3
            evidence.append(Evidence(
                id=f"{process_id}-PA21-{len(evidence)}",
                process=process_id,
                attribute='PA2.1',
                type='record',
                title=f.stem,
                location=str(f),
                version='1.0',
                date=self._get_date(f),
                content_summary=self._summarize_content(f),
                relevance_score=0.7
            ))

        # Search for status reports
        status_files = self._search_for_evidence(['status', 'report', 'progress'])
        for f in status_files[:3]:
            evidence.append(Evidence(
                id=f"{process_id}-PA21-{len(evidence)}",
                process=process_id,
                attribute='PA2.1',
                type='record',
                title=f.stem,
                location=str(f),
                version='1.0',
                date=self._get_date(f),
                content_summary=self._summarize_content(f),
                relevance_score=0.7
            ))
        return evidence

    def _collect_pa22_evidence(self, process_id: str) -> List[Evidence]:
        """Collect PA 2.2 evidence - work product management."""
        evidence = []

        # Search for CM records
        cm_files = self._search_for_evidence(['version', 'baseline', 'changelog'])
        for f in cm_files[:3]:
            evidence.append(Evidence(
                id=f"{process_id}-PA22-{len(evidence)}",
                process=process_id,
                attribute='PA2.2',
                type='record',
                title=f.stem,
                location=str(f),
                version='1.0',
                date=self._get_date(f),
                content_summary=self._summarize_content(f),
                relevance_score=0.7
            ))

        # Search for review records
        review_files = self._search_for_evidence(['review', 'approval', 'sign'])
        for f in review_files[:3]:
            evidence.append(Evidence(
                id=f"{process_id}-PA22-{len(evidence)}",
                process=process_id,
                attribute='PA2.2',
                type='record',
                title=f.stem,
                location=str(f),
                version='1.0',
                date=self._get_date(f),
                content_summary=self._summarize_content(f),
                relevance_score=0.7
            ))
        return evidence

    def _search_for_evidence(self, patterns: List[str]) -> List[Path]:
        """Search project for files matching patterns.

        Note: Basic pattern matching; production use would benefit from
        more sophisticated content analysis and tool-specific APIs.
        """
        found = []
        for pattern in patterns:
            if pattern.startswith('.'):
                # Extension search
                found.extend(self.project_root.rglob(f'*{pattern}'))
            else:
                # Filename search
                found.extend(self.project_root.rglob(f'*{pattern}*'))

        # Deduplicate and sort by modification time, newest first
        unique = list(set(found))
        unique.sort(key=lambda x: x.stat().st_mtime, reverse=True)
        return unique[:10]  # Return top 10

    def _calculate_coverage(self, attribute: str,
                            evidence: List[Evidence],
                            process_id: str) -> float:
        """Calculate evidence coverage for an attribute."""
        if not evidence:
            return 0.0
        attr_map = self.evidence_map.get(process_id, {}).get(attribute, {})
        expected = (len(attr_map.get('work_products', [])) +
                    len(attr_map.get('records', [])))
        if expected == 0:
            return 1.0
        return min(1.0, len(evidence) / expected)

    def _identify_gaps(self, process_id: str, target_level: int,
                       coverage: Dict[str, float]) -> List[str]:
        """Identify evidence gaps.

        Thresholds mirror the ASPICE rating scale: >85% fully achieved,
        >50% largely achieved.
        """
        gaps = []
        if coverage.get('PA1.1', 0) < 0.86:
            gaps.append(f"PA1.1 coverage {coverage.get('PA1.1', 0)*100:.0f}% - need work products")
        if target_level >= 2:
            if coverage.get('PA2.1', 0) < 0.51:
                gaps.append(f"PA2.1 coverage {coverage.get('PA2.1', 0)*100:.0f}% - need performance management records")
            if coverage.get('PA2.2', 0) < 0.51:
                gaps.append(f"PA2.2 coverage {coverage.get('PA2.2', 0)*100:.0f}% - need work product management records")
        return gaps

    def _get_version(self, file_path: Path) -> str:
        """Get version from file metadata or content."""
        return "1.0"  # Simplified

    def _get_date(self, file_path: Path) -> str:
        """Get modification date of file."""
        return datetime.fromtimestamp(file_path.stat().st_mtime).isoformat()

    def _summarize_content(self, file_path: Path) -> str:
        """Generate content summary for file."""
        return f"Content from {file_path.name}"  # Simplified

    def _get_process_name(self, process_id: str) -> str:
        """Get process name from ID."""
        names = {
            'SWE.1': 'Software Requirements Analysis',
            'SWE.2': 'Software Architectural Design'
        }
        return names.get(process_id, process_id)

    def generate_report(self, package: EvidencePackage) -> str:
        """Generate evidence collection report."""
        report = [f"# Evidence Collection Report\n"]
        report.append(f"**Process**: {package.process_id} - {package.process_name}")
        report.append(f"**Target Level**: {package.target_level}")
        report.append(f"**Collection Date**: {package.collection_date}\n")

        # Coverage summary
        report.append("## Coverage Summary\n")
        for attr, cov in package.coverage_summary.items():
            status = "[OK]" if cov >= 0.86 else "[WARN]" if cov >= 0.51 else "[X]"
            report.append(f"- {attr}: {cov*100:.0f}% {status}")

        # Evidence list
        report.append("\n## Evidence Items\n")
        for evidence in package.evidence_items:
            report.append(f"### {evidence.title}")
            report.append(f"- **ID**: {evidence.id}")
            report.append(f"- **Type**: {evidence.type}")
            report.append(f"- **Location**: {evidence.location}")
            report.append(f"- **Relevance**: {evidence.relevance_score*100:.0f}%\n")

        # Gaps
        if package.gaps:
            report.append("\n## Gaps Identified\n")
            for gap in package.gaps:
                report.append(f"- {gap}")

        return "\n".join(report)
```
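The 0.86 and 0.51 cutoffs used in `_identify_gaps` and `generate_report` correspond to the ASPICE rating scale for process attribute achievement (N, P, L, F). Making that mapping explicit, as a small sketch:

```python
# The coverage thresholds above follow the ASPICE rating scale:
# N (0-15%), P (>15-50%), L (>50-85%), F (>85%).
def aspice_rating(achievement: float) -> str:
    """Map an achievement fraction (0.0-1.0) to an ASPICE rating letter."""
    if achievement > 0.85:
        return "F"  # Fully achieved
    if achievement > 0.50:
        return "L"  # Largely achieved
    if achievement > 0.15:
        return "P"  # Partially achieved
    return "N"      # Not achieved

print(aspice_rating(0.87))  # F
print(aspice_rating(0.60))  # L
```

Note that these ratings apply to an assessor's judgment of attribute achievement; the collector's file-count coverage is only a preparation-time proxy for that judgment.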
## Interview Preparation

### Interview Guide

The diagram below presents the interview preparation guide, outlining the key topics, sample questions, and evidence to gather for each role during an ASPICE assessment interview.
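The role and topic lists from the readiness checklist can also seed a draft interview schedule. A sketch, assuming fixed consecutive slots (slot length and ordering are assumptions; real scheduling follows the assessor's plan):

```python
# Sketch: lay out interview slots from the role/topic list in the
# readiness checklist. Slot length and ordering are assumptions.
from datetime import datetime, timedelta

ROLES = [
    ("Project Manager", ["Planning", "Monitoring", "Risk"]),
    ("SW Architect", ["Architecture", "Design decisions"]),
    ("SW Developer", ["Implementation", "Unit testing"]),
]

def interview_schedule(start: datetime, slot_minutes: int = 60) -> list:
    """Assign each role a consecutive interview slot."""
    schedule = []
    for i, (role, topics) in enumerate(ROLES):
        slot = start + timedelta(minutes=i * slot_minutes)
        schedule.append((slot.strftime("%H:%M"), role, topics))
    return schedule

for time, role, topics in interview_schedule(datetime(2025, 3, 3, 9, 0)):
    print(time, role, "-", ", ".join(topics))
```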
## Work Products
| WP ID | Work Product | Purpose |
|---|---|---|
| 15-10 | Assessment plan | Assessment scope and schedule |
| 15-11 | Evidence catalog | Collected evidence list |
| 15-12 | Interview schedule | Interview logistics |
## Summary

Assessment Preparation:

- **Evidence First**: Collect and organize evidence before the assessment
- **Gap Analysis**: Identify and address gaps early
- **AI Support**: Automated evidence collection and gap detection
- **Human Essential**: Interview preparation and the readiness decision stay with people
- **Success Factor**: Thorough preparation reduces assessment risk