2.4: Generic Practices


What You'll Learn

By the end of this section, you will be able to:

  • Explain how generic practices support process attributes
  • Describe generic practices for each capability level
  • Apply generic practices across all ASPICE processes
  • Use generic practices to boost your process capability
  • Understand how AI transforms the implementation of generic practices
  • Prepare assessment-ready evidence using AI-assisted techniques
  • Map AI capabilities to specific generic practices for targeted automation

What Are Generic Practices?

While base practices are process-specific, Generic Practices (GP) apply to any process to achieve a specific process attribute. They are universal.

Base vs Generic Practices

Key distinction: Base practices answer "What does this process do?" while generic practices answer "How well is this process managed?" A process can produce correct outputs (base practices satisfied) yet still be poorly managed, unplanned, and unrepeatable (generic practices not satisfied).


Generic Practice Groups

In ASPICE 4.0, generic practices are organized into groups that correspond directly to process attributes. Each capability level introduces new process attributes, and each process attribute is supported by a defined set of generic practices. Understanding these groups is essential for systematic capability improvement.

GP Group Structure in ASPICE 4.0

GP Group Process Attribute Capability Level Focus Area
GP 2.1.x PA 2.1 Performance Management Level 2 Planning, monitoring, adjusting process execution
GP 2.2.x PA 2.2 Work Product Management Level 2 Controlling and reviewing work products
GP 3.1.x PA 3.1 Process Definition Level 3 Establishing organizational standard processes
GP 3.2.x PA 3.2 Process Deployment Level 3 Deploying and monitoring defined processes

ASPICE 4.0 note: Levels 4 and 5 are defined through process attributes PA 4.1/4.2 and PA 5.1/5.2, but ASPICE 4.0 does not prescribe explicit generic practices at those levels. Instead, organizations demonstrate quantitative management and continuous innovation through evidence of statistical process control and systematic improvement programs.

How GP Groups Relate to Each Other

The GP groups build on one another in a cumulative fashion. You cannot meaningfully implement Level 3 generic practices without first having Level 2 generic practices in place:

Progression Relationship
GP 2.1.x before GP 3.1.x You must manage process performance before you can define an organizational standard for it
GP 2.2.x before GP 3.2.x You must control work products before you can deploy standardized work product management
GP 3.1.x enables GP 3.2.x A defined standard process is a prerequisite for deploying that process across projects

GP Group Scope and Boundaries

Each GP group has a distinct scope of concern:

GP Group Scope Who Is Responsible Typical Artifacts
GP 2.1.x Project-level process execution Project manager, process owner Plans, status reports, resource records
GP 2.2.x Project-level work product control Configuration manager, quality lead CM records, review protocols, baselines
GP 3.1.x Organization-level process standardization Process engineering group, SEPG Process handbook, tailoring guidelines, competency models
GP 3.2.x Project-level deployment of standards Project manager, process coach Tailored process descriptions, training records, deployment metrics

Generic Practices by Level

Level 2 Generic Practices

GP 2.1: Performance Management (PA 2.1)

GP What You Do Evidence
GP 2.1.1 Identify objectives and define strategy for process performance Documented objectives, strategy
GP 2.1.2 Plan the performance of the process Process/project plans
GP 2.1.3 Determine resource needs Resource requirements
GP 2.1.4 Identify and make available resources Resource allocation records
GP 2.1.5 Monitor and adjust the performance of the process Status reports, change records
GP 2.1.6 Manage interfaces between involved parties Interface agreements

GP 2.1.1 — Identifying Objectives and Strategy

This is the starting point for managed process execution. Without clear objectives, monitoring becomes meaningless because there is nothing to measure against. The strategy defines how the process will be executed to meet those objectives.

Aspect What Assessors Look For
Objectives Specific, measurable goals tied to process purpose (e.g., "complete software requirements specification with fewer than 3% TBDs by milestone M2")
Strategy Documented approach to achieving objectives, including methods, standards, and constraints
Traceability Clear link between process objectives and project/business goals
Communication Evidence that objectives are communicated to all involved parties

AI opportunity: AI can analyze historical project data to suggest realistic objectives based on past performance. For example, an AI assistant can review defect rates from previous projects and recommend achievable quality targets for the current iteration.
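The idea can be sketched in a few lines of Python. This is a minimal illustration, not a real AI tool: it assumes historical defect densities (defects per KLOC) are available as a plain list and derives an achievable target as the historical mean plus a safety margin.

```python
from statistics import mean, stdev

def suggest_defect_target(historical_densities: list[float], margin: float = 0.5) -> float:
    """Suggest an achievable defect-density objective (defects/KLOC):
    the historical mean plus `margin` standard deviations, so the
    target is realistic rather than aspirational."""
    mu = mean(historical_densities)
    sigma = stdev(historical_densities) if len(historical_densities) > 1 else 0.0
    return round(mu + margin * sigma, 2)

# Past projects delivered 2.1, 2.8, 2.4 and 3.0 defects per KLOC.
target = suggest_defect_target([2.1, 2.8, 2.4, 3.0])  # -> 2.78
```

A process engineer would still review the suggested value against project context (GP 2.1.1 remains a human decision); the AI merely anchors it in data.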

GP 2.1.2 — Performance Planning

Planning the process means defining activities, sequences, dependencies, timelines, and milestones. This is where many organizations struggle because plans are often too generic or disconnected from actual execution.

Planning Element Description AI Support Potential
Activity sequencing Ordering process activities with dependencies AI can suggest optimal sequences based on historical project patterns
Timeline estimation Estimating duration for each activity AI can provide effort estimates calibrated against past projects
Milestone definition Defining checkpoints for progress verification AI can recommend milestone placement based on risk analysis
Contingency planning Identifying risks and mitigation strategies AI can analyze risk databases and suggest mitigations

GP 2.1.3 and GP 2.1.4 — Resource Determination and Availability

These two practices are closely related. GP 2.1.3 identifies what resources are needed, while GP 2.1.4 ensures those resources are actually available. Resources include personnel, tools, infrastructure, and budget.

Resource Category GP 2.1.3 (Determine) GP 2.1.4 (Make Available)
Personnel Define required roles, skills, and effort Assign named individuals, confirm availability
Tools Identify required tools and licenses Procure, install, and verify tool readiness
Infrastructure Specify computing, lab, and test environment needs Provision environments, confirm access
Budget Estimate costs for all resource categories Secure budget approval, track expenditure

GP 2.1.5 — Monitoring and Adjustment

This practice closes the plan-do-check-act cycle. Without active monitoring, plans become stale documents rather than living management instruments.

Monitoring Activity Frequency Typical Evidence
Progress tracking against plan Weekly or bi-weekly Status reports, burndown charts
Resource utilization review Monthly Resource utilization reports
Risk reassessment At each milestone or when triggered by events Updated risk registers
Plan adjustment As needed when deviations are detected Change records, revised plans with rationale

AI opportunity: AI dashboards can provide real-time process monitoring by aggregating data from project management tools, version control systems, and CI/CD pipelines. Anomaly detection algorithms can flag deviations from plan before they become critical.
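A minimal sketch of such a deviation alert, assuming planned and actual cumulative progress can be exported from the project management tool as simple fractions per reporting period (the 10% tolerance is an illustrative assumption):

```python
def flag_deviations(planned: list[float], actual: list[float], tolerance: float = 0.1) -> list[int]:
    """Return indices of reporting periods where actual progress lags
    the plan by more than `tolerance` (relative), providing an
    early-warning signal for GP 2.1.5 monitoring."""
    flagged = []
    for i, (p, a) in enumerate(zip(planned, actual)):
        if p > 0 and (p - a) / p > tolerance:
            flagged.append(i)
    return flagged

# Planned vs. actual cumulative completion per week (fractions of scope).
alerts = flag_deviations([0.2, 0.4, 0.6, 0.8], [0.19, 0.33, 0.5, 0.78])  # -> [1, 2]
```

Real anomaly-detection pipelines use statistical baselines rather than a fixed tolerance, but the principle — continuous comparison of telemetry against plan — is the same.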

GP 2.1.6 — Interface Management

Interface management addresses the coordination between all parties involved in or affected by the process. This includes internal stakeholders, suppliers, customers, and cross-functional teams.

Interface Type Examples Management Approach
Customer-supplier OEM to Tier-1 requirements handoff Formal interface agreements, regular sync meetings
Cross-functional Software team to hardware team Joint review sessions, shared issue trackers
Tool-based Between development and CI/CD environments Integration specifications, API contracts
Organizational Between project team and quality department RACI matrices, escalation procedures

GP 2.2: Work Product Management (PA 2.2)

GP What You Do Evidence
GP 2.2.1 Define the requirements for work products WP specifications
GP 2.2.2 Define requirements for storage and control of work products CM procedures
GP 2.2.3 Identify, store and control work products CM records, baselines
GP 2.2.4 Review and adjust work products Review records, change records

GP 2.2.1 — Defining Work Product Requirements

Every significant work product should have defined requirements that specify its content, structure, and quality criteria. These requirements serve as the basis for reviews and acceptance decisions.

Work Product Aspect What to Define Example
Content requirements Required sections, information elements SRS template with mandatory sections: scope, definitions, functional requirements, non-functional requirements
Quality criteria Measurable quality characteristics Completeness (< 3% TBDs), consistency (zero contradictions), testability (every requirement verifiable)
Format standards Templates, notation standards IEEE 830 format, natural language with SHALL/SHOULD/MAY keywords
Relationships Dependencies on and from other work products SRS traces to stakeholder requirements and architecture

AI verification: AI tools can automatically check work products against their defined requirements. For example, an NLP-based tool can scan a requirements specification to flag ambiguous language, missing SHALL statements, or sections that fall below minimum content thresholds.
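A first-pass checker of this kind can be approximated with simple pattern matching. The weak-word list below is an illustrative assumption; production tools use richer NLP models:

```python
import re

# Hypothetical weak-word list; real tools maintain curated glossaries.
WEAK_WORDS = {"appropriate", "adequate", "fast", "user-friendly"}
REQ_KEYWORDS = re.compile(r"\b(shall|should|may)\b", re.IGNORECASE)

def check_requirement(text: str) -> list[str]:
    """First-pass quality check for one requirement statement:
    flags a missing SHALL/SHOULD/MAY keyword and ambiguous wording."""
    findings = []
    if not REQ_KEYWORDS.search(text):
        findings.append("missing SHALL/SHOULD/MAY keyword")
    for word in sorted(WEAK_WORDS):
        if re.search(rf"\b{re.escape(word)}\b", text, re.IGNORECASE):
            findings.append(f"ambiguous term: '{word}'")
    return findings

issues = check_requirement("The system responds fast to user input.")
```

Here `issues` flags both the missing keyword and the ambiguous term "fast", while a well-formed statement such as "The system shall log each access attempt." passes cleanly.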

GP 2.2.2 — Storage and Control Requirements

This practice defines the configuration management regime for work products: where they are stored, how versions are managed, who can modify them, and how changes are tracked.

Control Aspect Requirement Implementation
Storage location Defined, accessible, backed up Git repository, document management system
Version control Every change traceable to author, date, rationale Git commits with meaningful messages, branch policies
Access control Appropriate read/write permissions Repository permissions, role-based access
Baseline management Defined baseline points for controlled release Tagged releases, baseline labels at milestones
Change authorization Defined approval workflow for changes Pull request reviews, change control board
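The change-authorization row above can be partly automated. The sketch below, with an assumed ticket-ID pattern, flags commits whose message lacks a change-ticket reference — the kind of check a repository hook could run on every push:

```python
import re

# Assumed ticket format, e.g. "SWE-142"; adapt to your tracker's scheme.
TICKET_PATTERN = re.compile(r"\b[A-Z]{2,}-\d+\b")

def unauthorized_commits(commits: list[dict]) -> list[str]:
    """Return SHAs of commits whose message lacks a change-ticket
    reference: a simple automated enforcement of the change-
    authorization workflow defined under GP 2.2.2."""
    return [c["sha"] for c in commits if not TICKET_PATTERN.search(c["message"])]

history = [
    {"sha": "a1b2c3", "message": "SWE-142: update SRS section 3.2"},
    {"sha": "d4e5f6", "message": "quick fix"},
]
violations = unauthorized_commits(history)  # -> ["d4e5f6"]
```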

GP 2.2.3 — Identifying, Storing, and Controlling Work Products

This is the execution of the regime defined in GP 2.2.2. It ensures that work products are actually placed under configuration management and that the defined controls are followed.

Activity Evidence Common Finding When Missing
Work product identification List of controlled items with unique identifiers Work products exist but cannot be uniquely referenced
Version management Version history for each controlled item Multiple versions exist without clear lineage
Baseline creation Baseline records at defined milestones No stable reference points; teams work against moving targets
Change tracking Change logs with rationale for each modification Changes made without documentation; traceability lost

GP 2.2.4 — Reviewing and Adjusting Work Products

Reviews ensure that work products meet their defined requirements. Adjustment ensures that issues found during reviews are resolved.

Review Type Purpose Typical Evidence
Peer review Technical quality verification Review checklists, annotated documents, issue lists
Quality gate review Milestone readiness verification Gate criteria checklists, sign-off records
Customer review External acceptance Meeting minutes, approval records
Automated review Consistency and compliance checking Tool-generated reports, static analysis results

AI verification: AI-powered review tools can perform first-pass reviews of work products, checking for completeness, consistency, and compliance with templates. This allows human reviewers to focus on semantic correctness and domain-specific concerns rather than structural compliance.
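A structural first-pass review can be as simple as checking a work product against its mandatory template sections. The section names below reuse the SRS example from GP 2.2.1 and are illustrative assumptions:

```python
# Mandatory sections assumed from the SRS template example above.
MANDATORY_SECTIONS = [
    "Scope",
    "Definitions",
    "Functional Requirements",
    "Non-Functional Requirements",
]

def missing_sections(document_text: str) -> list[str]:
    """Report mandatory template sections that do not appear
    as headings (standalone lines) in the work product."""
    lines = {line.strip() for line in document_text.splitlines()}
    return [s for s in MANDATORY_SECTIONS if s not in lines]

doc = "Scope\n...\nDefinitions\n...\nFunctional Requirements\n..."
gaps = missing_sections(doc)  # -> ["Non-Functional Requirements"]
```

The tool reports the structural gap; a human reviewer then judges whether the gap matters and what the content of the missing section should be.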

Level 3 Generic Practices

GP 3.1: Process Definition (PA 3.1)

GP What You Do Evidence
GP 3.1.1 Establish and maintain the standard process Organizational process documentation
GP 3.1.2 Determine required competencies Competency definitions
GP 3.1.3 Determine required resources Resource requirements
GP 3.1.4 Determine suitable methods to monitor the standard process Process metrics, monitoring criteria

GP 3.1.1 — Establishing and Maintaining the Standard Process

The standard process is the organization's documented, approved, and maintained description of how a particular process should be performed. It serves as the baseline from which project-specific processes are tailored.

Standard Process Element Description AI Adaptation
Process description Activities, inputs, outputs, roles, entry/exit criteria AI can help draft process descriptions by analyzing existing project documentation and extracting common patterns
Tailoring guidelines Rules for adapting the standard process to project context AI can recommend tailoring decisions based on project characteristics (size, safety level, domain)
Process assets Templates, checklists, tool configurations, examples AI can generate draft templates and populate them with domain-appropriate content
Measurement framework Metrics to assess process effectiveness AI can suggest relevant metrics by analyzing industry benchmarks and organizational goals

Tailoring with AI: When a new project begins, an AI assistant can analyze the project's characteristics (ASIL level, team size, technology stack, customer requirements) and recommend a tailored process derived from the organizational standard. The process engineer reviews and approves the recommendation, maintaining human accountability while saving significant setup time.
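Such a tailoring recommender can start as a transparent rule base before any machine learning is involved. The rules and option names below are purely illustrative assumptions, not an actual tailoring guideline:

```python
def recommend_tailoring(asil: str, team_size: int) -> list[str]:
    """Rule-based sketch of AI-assisted tailoring: map project
    characteristics to tailoring options drawn from the
    organizational standard process."""
    options = []
    if asil in ("C", "D"):  # higher safety integrity -> stricter rigor
        options.append("full formal reviews for all safety work products")
    else:
        options.append("peer reviews with sampling allowed")
    if team_size < 5:  # small teams may combine roles
        options.append("combine planning and monitoring roles")
    return options

plan = recommend_tailoring(asil="D", team_size=4)
```

A rule base like this has the advantage that every recommendation comes with a traceable justification, which is exactly what assessors expect for documented tailoring decisions.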

GP 3.1.2 — Determining Required Competencies

This practice requires that the organization identifies the competencies (knowledge, skills, experience) needed to perform the standard process effectively.

Competency Dimension Examples How AI Helps
Technical knowledge Programming languages, safety standards, domain expertise AI can analyze role descriptions and job postings to identify competency gaps
Process knowledge ASPICE practices, review techniques, configuration management AI can recommend training programs based on identified gaps
Tool proficiency IDE usage, test frameworks, CI/CD tools AI can assess tool proficiency through automated skill assessments
Soft skills Communication, collaboration, stakeholder management AI-based assessment tools can evaluate written communication quality

GP 3.1.3 — Determining Required Resources

At Level 3, resource requirements are defined at the organizational level as part of the standard process, not just per project.

Resource Category Organizational Standard Project Tailoring
Development tools Approved tool chain list with qualification status Project selects from approved list based on needs
Test environments Reference test environment configurations Project scales and configures for specific targets
Infrastructure Standard CI/CD pipeline architecture Project customizes build and deployment stages
Reference materials Organizational knowledge base, lessons learned Project-specific domain references added

GP 3.1.4 — Monitoring Methods for the Standard Process

The organization must define how it monitors whether the standard process is effective and followed.

Monitoring Method What It Measures Frequency
Process audits Conformance to standard process Quarterly or per milestone
Process metrics Effectiveness indicators (defect density, cycle time, rework rate) Continuous collection, periodic analysis
Lessons learned Improvement opportunities from project experience At project milestones and completion
Benchmarking Comparison against industry standards or internal targets Annually

GP 3.2: Process Deployment (PA 3.2)

GP What You Do Evidence
GP 3.2.1 Deploy a defined process satisfying context-specific requirements Tailored process description
GP 3.2.2 Ensure required competencies for defined roles Training records, competency assessments
GP 3.2.3 Ensure required resources to support the defined process Resource allocation, tool availability
GP 3.2.4 Monitor the performance of the defined process Process measurements, audit records

GP 3.2.1 — Deploying the Defined Process

Deployment means instantiating the standard process for a specific project context. The tailoring must follow the guidelines defined in GP 3.1.1 and be documented with justifications.

Deployment Activity Description AI Support
Context analysis Assess project characteristics that influence tailoring AI analyzes project parameters and suggests applicable tailoring options
Tailoring decisions Select and justify deviations from standard process AI provides rationale templates and risk assessments for tailoring choices
Process documentation Create project-specific process description AI drafts the tailored process document from standard process and tailoring decisions
Team communication Ensure all team members understand the defined process AI can generate process summaries, quick-reference guides, and onboarding materials

GP 3.2.2 — Ensuring Competencies

Once the defined process is deployed, the organization must verify that the people performing the process have the required competencies.

Competency Assurance Activity Evidence AI Role
Gap analysis Competency assessment results vs. requirements AI compares team profiles against role requirements
Training delivery Training records, certificates AI-based learning platforms provide personalized training paths
Mentoring/coaching Coaching logs, knowledge transfer records AI can supplement coaching with contextual guidance during task execution
Verification Post-training assessments, demonstrated performance AI can evaluate work products to verify competency application

GP 3.2.3 — Ensuring Resources

Resources identified in the standard process must be available for the project. This includes tools, infrastructure, and personnel.

Resource Assurance Activity What to Verify Common Pitfall
Tool availability All required tools installed, licensed, and configured Tools specified but not actually deployed; licenses expired
Environment readiness Development, test, and integration environments operational Environments provisioned late; configuration drift from standard
Personnel allocation Named individuals assigned to defined roles Roles defined but personnel overcommitted across projects
Budget confirmation Approved budget covers all resource needs Budget approved at proposal stage but not updated for actual needs

GP 3.2.4 — Monitoring the Defined Process

Monitoring at Level 3 goes beyond project-level status tracking. It focuses on whether the defined process is effective, followed, and producing expected results.

Monitoring Focus Method Typical Metric
Process conformance Internal audits, compliance checks Percentage of process steps followed correctly
Process effectiveness Outcome analysis, defect causal analysis Defect escape rate, rework percentage
Process efficiency Effort tracking, cycle time analysis Effort per work product, lead time to completion
Process improvement Lessons learned, retrospectives Number of improvement actions implemented

Applying Generic Practices

Let's see how generic practices work in practice with two concrete examples:

Example: SWE.1 at Level 2

Generic Practice SWE.1 Application
GP 2.1.1 (Objectives/Strategy) "Complete SRS by milestone X with <5% TBDs"
GP 2.1.2 (Planning) Requirements phase in project plan
GP 2.1.3 (Resource needs) Requirements engineer, tools identified
GP 2.1.4 (Resource availability) Requirements engineer assigned
GP 2.1.5 (Monitor/Adjust) Weekly status review, re-plan on changes
GP 2.1.6 (Interfaces) Stakeholder communication agreements
GP 2.2.1 (WP requirements) SRS template with required sections
GP 2.2.2 (Storage/Control requirements) CM procedures for requirements documents
GP 2.2.3 (Control) SRS under configuration management
GP 2.2.4 (Review/Adjust) SRS review before architecture phase

Example: SWE.4 at Level 3

Generic Practice SWE.4 Application
GP 3.1.1 (Standard process) Organization unit testing standard
GP 3.1.2 (Competencies) Test engineer skills defined
GP 3.1.3 (Resources) Test framework, CI environment
GP 3.1.4 (Monitoring methods) Coverage metrics, defect tracking
GP 3.2.1 (Deploy) Project-specific test process
GP 3.2.2 (Competencies) Unit testing training
GP 3.2.3 (Resources) Test tools allocated, CI pipeline available
GP 3.2.4 (Monitor) Test coverage metrics, process adherence

Generic Practice Evidence Matrix

Level 2 Evidence Checklist

GP Evidence Type Example
GP 2.1.1 Documented objectives "95% requirements coverage", strategy documents
GP 2.1.2 Plans Project plan, process plan
GP 2.1.3 Resource needs Resource requirements documentation
GP 2.1.4 Resource availability Staff allocation, tool licenses
GP 2.1.5 Monitoring/Adjustment Status reports, change records
GP 2.1.6 Interface agreements RACI matrix, communication protocols
GP 2.2.1 WP specifications Templates, content requirements
GP 2.2.2 Storage/Control requirements CM procedures
GP 2.2.3 CM records Version history, baselines
GP 2.2.4 Review records Checklists, meeting minutes

Level 3 Evidence Checklist

GP Evidence Type Example
GP 3.1.1 Organizational standard Process handbook
GP 3.1.2 Competency definitions Skill matrix, role requirements
GP 3.1.3 Resource requirements Infrastructure needs, tool requirements
GP 3.1.4 Monitoring methods Process metrics, monitoring criteria
GP 3.2.1 Tailored process Project process description
GP 3.2.2 Competency evidence Training records, certifications
GP 3.2.3 Resource availability Tool allocation, staff assignments
GP 3.2.4 Performance data Process measurements, audit records

AI Impact on Generic Practices

AI fundamentally changes the economics and practicality of implementing generic practices. Tasks that were previously expensive, manual, and error-prone become automated, consistent, and scalable. This section examines the structural impact of AI on each GP group.

Shift from Manual to Automated Process Management

Traditional Approach AI-Augmented Approach Impact
Manual status reports compiled weekly Real-time dashboards aggregated from tools GP 2.1.5 monitoring becomes continuous rather than periodic
Manual review of work products against templates Automated compliance checking with AI GP 2.2.4 reviews cover 100% of items instead of sampling
Manual tailoring of standard processes AI-recommended tailoring based on project profile GP 3.2.1 deployment becomes faster and more consistent
Periodic competency assessments Continuous skill tracking through AI analysis of contributions GP 3.2.2 competency assurance becomes ongoing

AI Impact on Performance Management (GP 2.1.x)

GP Traditional Challenge AI Transformation
GP 2.1.1 Objectives are vague or unrealistic AI analyzes historical data to suggest SMART objectives with confidence intervals
GP 2.1.2 Plans are overly optimistic or disconnected from reality AI provides effort estimates calibrated on past performance; updates plans dynamically
GP 2.1.3 Resource needs estimated by gut feeling AI models resource demand based on project parameters and historical patterns
GP 2.1.4 Resource conflicts discovered too late AI detects allocation conflicts across projects and alerts managers proactively
GP 2.1.5 Monitoring is lagging; issues detected after the fact AI provides leading indicators and early warning signals from process telemetry
GP 2.1.6 Interface agreements exist on paper but are not enforced AI monitors interface compliance (e.g., API contract testing, requirement handoff completeness)

AI Impact on Work Product Management (GP 2.2.x)

GP Traditional Challenge AI Transformation
GP 2.2.1 Work product requirements are generic checklists AI generates context-specific quality criteria based on work product type and project risk
GP 2.2.2 CM procedures exist but compliance varies AI enforces CM policies through automated checks on commits, merges, and releases
GP 2.2.3 Baselines are created late or inconsistently AI triggers baseline creation automatically when defined criteria are met
GP 2.2.4 Reviews are bottlenecks; reviewers miss structural issues AI performs first-pass review, freeing human reviewers for semantic and domain analysis

AI Impact on Process Definition and Deployment (GP 3.x)

GP Traditional Challenge AI Transformation
GP 3.1.1 Standard process is documented but stale AI monitors actual practice vs. documented process and flags deviations for process owners
GP 3.1.2 Competency requirements are generic AI analyzes actual task demands to refine competency profiles continuously
GP 3.1.3 Resource requirements are underestimated AI predicts resource needs based on project scope and complexity metrics
GP 3.1.4 Monitoring is reactive AI provides predictive monitoring with trend analysis and forecasting
GP 3.2.1 Tailoring is inconsistent across projects AI ensures tailoring follows guidelines and documents rationale automatically
GP 3.2.2 Training gaps discovered during audits AI identifies competency gaps in real time and recommends targeted training
GP 3.2.3 Tools are available but misconfigured AI validates tool configurations against standard process requirements
GP 3.2.4 Process performance data collected but not analyzed AI performs continuous analysis and surfaces actionable insights

Critical principle: AI augments but does not replace human accountability. An AI tool can flag that a work product fails quality criteria (GP 2.2.4), but a human must decide whether to accept, reject, or waive the finding. ASPICE requires demonstrated human judgment in process management decisions.


AI Support for Generic Practices

AI can boost your generic practice implementation; in the table below, the automation level indicates how much of the work AI can take on, from L1 (assistive) to L3 (largely autonomous):

GP Area How AI Helps Automation Level
Planning Draft plans from templates L1
Monitoring Automated metric collection L2-L3
Review AI-assisted work product review L2
Compliance Automated compliance checking L2-L3
Data collection Continuous process metrics L3
Improvement Trend analysis, anomaly detection L3

Evidence Generation

One of the most time-consuming aspects of ASPICE assessments is producing evidence that generic practices are implemented. AI can dramatically reduce this burden while improving evidence quality.

AI-Assisted Evidence Collection Strategies

Evidence Category Manual Collection Effort AI-Assisted Approach Time Savings
Planning evidence Search through project management tools, extract relevant plans AI aggregates plans, links them to GP requirements, highlights gaps 60-70%
Monitoring evidence Compile status reports, extract relevant metrics AI generates monitoring summaries from tool data with GP traceability 70-80%
Review evidence Locate review records, compile checklists and minutes AI indexes review artifacts and maps them to work products 50-60%
CM evidence Verify version histories, baseline records, access logs AI scans repositories and generates CM compliance reports 80-90%
Competency evidence Collect training records, certifications, skill assessments AI maintains a live competency database with automated tracking 60-70%

Building an Evidence Repository

A systematic approach to evidence generation ensures assessment readiness at all times rather than scrambling before an assessment.

Repository Component Purpose AI Role
Evidence catalog Map every GP to its required evidence types AI maintains a live mapping and flags missing evidence
Evidence store Centralized storage for all assessment evidence AI indexes artifacts and tags them with GP references
Evidence freshness tracker Track when evidence was last updated AI alerts when evidence becomes stale (e.g., plan not updated in 3 months)
Gap analysis dashboard Identify GPs without sufficient evidence AI continuously evaluates evidence coverage and highlights risks
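The freshness tracker row above can be sketched as a small script; the 90-day threshold is an assumption and would in practice be set per evidence type:

```python
from datetime import date, timedelta

def stale_evidence(artifacts: dict[str, date], today: date, max_age_days: int = 90) -> list[str]:
    """Flag evidence artifacts not updated within `max_age_days`,
    supporting the evidence-freshness tracker in the repository."""
    limit = today - timedelta(days=max_age_days)
    return [name for name, updated in artifacts.items() if updated < limit]

inventory = {
    "project_plan.docx": date(2024, 1, 10),
    "risk_register.xlsx": date(2024, 5, 2),
}
stale = stale_evidence(inventory, today=date(2024, 6, 1))  # -> ["project_plan.docx"]
```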

Traceability from GP to Evidence

For each generic practice, assessors need to see a clear chain from the GP requirement to the evidence that demonstrates its implementation.

Chain Element Description AI Contribution
GP requirement The specific generic practice statement AI parses ASPICE PAM and extracts GP statements
Implementation How the organization implements the GP AI maps organizational procedures to GP requirements
Artifact The specific document, record, or data that demonstrates implementation AI identifies and links artifacts from project repositories
Currency Evidence that the practice is current and active AI verifies artifact dates and update frequency

Practical tip: Configure your AI tooling to generate a "GP Evidence Report" at regular intervals (e.g., monthly). This report should list each GP in scope, its current evidence status (green/yellow/red), and links to the evidence artifacts. When an assessment is announced, the report immediately shows your readiness level.
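The green/yellow/red status in such a report can be computed from evidence coverage and freshness; the thresholds below are illustrative assumptions, not prescribed by ASPICE:

```python
def evidence_status(coverage: float, freshness_ok: bool) -> str:
    """Traffic-light status for one GP: green = substantially complete
    and current, yellow = partially covered or stale, red = largely
    missing. Thresholds (0.9 / 0.5) are assumed, not normative."""
    if coverage >= 0.9 and freshness_ok:
        return "green"
    if coverage >= 0.5:
        return "yellow"
    return "red"

report = {
    "GP 2.1.5": evidence_status(coverage=0.95, freshness_ok=True),   # green
    "GP 2.2.3": evidence_status(coverage=0.60, freshness_ok=False),  # yellow
    "GP 3.1.4": evidence_status(coverage=0.20, freshness_ok=True),   # red
}
```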


Assessment Preparation

Preparing for an ASPICE assessment is a significant effort. AI can streamline this preparation by automating repetitive tasks and ensuring consistency across the assessment scope.

Pre-Assessment Readiness Check

Before an assessment, organizations should verify that evidence is complete, current, and accessible. AI can automate much of this verification.

Readiness Activity Manual Approach AI-Assisted Approach
Evidence completeness check Assessor manually walks through GP checklist against artifacts AI scans artifact repository against GP checklist and generates coverage report
Evidence currency verification Manual review of document dates and version histories AI checks last-modified dates, flags stale or outdated artifacts
Cross-referencing Manual verification that plans reference standards, reviews reference plans, etc. AI traces cross-references and identifies broken links
Interviewee preparation Reading process documentation before interviews AI generates role-specific briefing packages summarizing what each interviewee should know
Presentation preparation Creating slides summarizing process implementation AI generates assessment presentation drafts from process documentation

Assessment Simulation

AI can conduct mock assessments to identify weaknesses before the real assessment takes place.

Simulation Activity Description AI Capability
GP questioning Generate typical assessor questions for each GP AI generates questions based on ASPICE PAM indicators and common assessment patterns
Evidence challenge Test whether evidence withstands scrutiny AI applies assessor heuristics to identify weak or ambiguous evidence
Gap identification Find areas where evidence is insufficient AI compares evidence against rating thresholds (N/P/L/F) and predicts likely ratings
Remediation planning Prioritize actions to close gaps before assessment AI ranks gaps by impact on overall rating and effort to close

Assessment simulation example: An AI tool reviews your SWE.1 implementation and reports: "GP 2.1.5 (Monitoring) — evidence is rated P (Partially achieved). Status reports exist for 3 of 8 milestones. Recommendation: generate status reports for remaining milestones or provide alternative monitoring evidence such as dashboard screenshots and metric trends."
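The rating prediction behind such a report follows the standard SPICE achievement scale (N ≤ 15% < P ≤ 50% < L ≤ 85% < F). A minimal sketch:

```python
def predict_rating(achievement: float) -> str:
    """Map an estimated achievement percentage (0-100) to the
    N/P/L/F scale used in SPICE-based assessments."""
    if achievement <= 15:
        return "N"
    if achievement <= 50:
        return "P"
    if achievement <= 85:
        return "L"
    return "F"

# 3 of 8 milestones have monitoring evidence, as in the example above:
rating = predict_rating(3 / 8 * 100)  # 37.5% -> "P"
```

Note that a real assessor weighs evidence quality, not just coverage counts, so such a prediction is a planning aid rather than a guaranteed rating.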

During the Assessment

Assessment Phase AI Support
Opening AI provides a summary of process scope, organizational context, and assessment readiness
Document review AI retrieves requested artifacts rapidly from the evidence repository
Interviews AI prepares interviewees with likely questions and expected answers (used for preparation only, not during the interview itself)
Consolidation AI helps the organization track preliminary findings and plan immediate corrective actions
Closing AI generates a response plan template based on assessment findings

Mapping GPs to AI Capabilities

Different AI capabilities support different generic practices. This mapping helps organizations target their AI investments for maximum impact on capability improvement.

AI Capability Categories

| AI Capability | Description | Examples |
|---|---|---|
| Natural Language Processing (NLP) | Understanding, generating, and analyzing text | Requirements analysis, document review, report generation |
| Data Analytics | Processing and analyzing structured data | Metrics dashboards, trend analysis, statistical process control |
| Pattern Recognition | Identifying recurring patterns in data | Defect pattern detection, process deviation identification |
| Predictive Modeling | Forecasting future states based on historical data | Effort estimation, risk prediction, schedule forecasting |
| Automation | Executing repetitive tasks without human intervention | CI/CD pipelines, automated testing, report generation |
| Knowledge Management | Organizing, retrieving, and synthesizing information | Lessons learned databases, process asset libraries, competency tracking |

GP-to-AI Capability Mapping

| Generic Practice | Primary AI Capability | Secondary AI Capability | Example Implementation |
|---|---|---|---|
| GP 2.1.1 (Objectives) | Data Analytics | Predictive Modeling | Analyze past projects to recommend achievable objectives |
| GP 2.1.2 (Planning) | Predictive Modeling | NLP | Generate draft plans with AI-estimated timelines; auto-populate from templates |
| GP 2.1.3 (Resource needs) | Predictive Modeling | Data Analytics | Model resource demand based on project parameters |
| GP 2.1.4 (Resource availability) | Data Analytics | Automation | Track resource allocation across projects in real time |
| GP 2.1.5 (Monitoring) | Data Analytics | Pattern Recognition | Real-time dashboards with anomaly detection |
| GP 2.1.6 (Interfaces) | Automation | NLP | Automated interface compliance checking |
| GP 2.2.1 (WP requirements) | NLP | Knowledge Management | Generate quality criteria from organizational standards |
| GP 2.2.2 (Storage/Control) | Automation | Data Analytics | Enforce CM policies through automated tooling |
| GP 2.2.3 (Control) | Automation | Data Analytics | Automated baseline creation and version tracking |
| GP 2.2.4 (Review) | NLP | Pattern Recognition | AI-assisted first-pass review of work products |
| GP 3.1.1 (Standard process) | Knowledge Management | NLP | Maintain and update standard process based on collected experience |
| GP 3.1.2 (Competencies) | Data Analytics | Knowledge Management | Continuous competency tracking and gap analysis |
| GP 3.1.3 (Resources) | Data Analytics | Predictive Modeling | Resource requirement forecasting at organizational level |
| GP 3.1.4 (Monitoring methods) | Data Analytics | Pattern Recognition | Predictive monitoring with trend analysis |
| GP 3.2.1 (Deploy) | NLP | Knowledge Management | AI-recommended tailoring based on project profile |
| GP 3.2.2 (Competencies) | Knowledge Management | Data Analytics | Personalized training recommendations based on skill gaps |
| GP 3.2.3 (Resources) | Automation | Data Analytics | Automated resource provisioning and configuration validation |
| GP 3.2.4 (Monitor) | Data Analytics | Pattern Recognition | Continuous process performance monitoring and alerting |
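In tooling, a mapping like this is easiest to keep as data so it can be queried in both directions: which capabilities support a given GP, and which GPs a given capability investment would cover. A minimal sketch using a few rows from the table above (the dictionary layout and function name are illustrative, not part of any ASPICE tooling standard):

```python
# Subset of the GP-to-AI capability mapping, keyed by generic practice.
# Each value holds (primary capability, secondary capability).
GP_CAPABILITIES = {
    "GP 2.1.2": ("Predictive Modeling", "NLP"),
    "GP 2.1.5": ("Data Analytics", "Pattern Recognition"),
    "GP 2.2.4": ("NLP", "Pattern Recognition"),
    "GP 3.2.4": ("Data Analytics", "Pattern Recognition"),
}

def gps_supported_by(capability: str) -> list[str]:
    """Return the GPs for which a capability is primary or secondary."""
    return sorted(gp for gp, caps in GP_CAPABILITIES.items() if capability in caps)

print(gps_supported_by("Pattern Recognition"))
# -> ['GP 2.1.5', 'GP 2.2.4', 'GP 3.2.4']
```

Reversing the query like this is what turns the table into an investment argument: a capability that appears across many GPs (here, Pattern Recognition) amortizes its cost over more of the capability profile.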

Investment Prioritization

When deciding which AI capabilities to invest in first, consider the following priority matrix:

| Priority | AI Capability | Rationale | GPs Supported |
|---|---|---|---|
| 1 (Highest) | Data Analytics | Supports monitoring across all GP groups; immediate ROI from automated dashboards | GP 2.1.5, GP 2.2.3, GP 3.1.4, GP 3.2.4 |
| 2 | NLP | Enables work product review and generation; reduces manual effort for documentation | GP 2.2.1, GP 2.2.4, GP 3.1.1, GP 3.2.1 |
| 3 | Automation | Enforces CM policies and reduces human error in repetitive tasks | GP 2.2.2, GP 2.2.3, GP 2.1.6, GP 3.2.3 |
| 4 | Predictive Modeling | Improves planning accuracy; requires historical data to be effective | GP 2.1.2, GP 2.1.3, GP 3.1.3 |
| 5 | Pattern Recognition | Identifies deviations and trends; most valuable at higher capability levels | GP 2.1.5, GP 3.1.4, GP 3.2.4 |
| 6 | Knowledge Management | Supports organizational learning; builds value over time | GP 3.1.1, GP 3.1.2, GP 3.2.2 |

Common Assessment Findings

Understanding typical assessment findings related to generic practices helps organizations anticipate and prevent weaknesses. The following findings are among the most frequently observed in ASPICE assessments.

Level 2 Common Findings

| Finding | Affected GP | Root Cause | How AI Helps |
|---|---|---|---|
| Objectives not documented or too vague | GP 2.1.1 | Objectives are discussed verbally but not recorded; or objectives are generic ("deliver on time") without measurable criteria | AI suggests SMART objectives based on project type and historical data |
| Plans exist but are not maintained | GP 2.1.2, GP 2.1.5 | Initial plan created but never updated as project evolves; no re-planning trigger defined | AI monitors plan vs. actuals and alerts when deviation exceeds threshold |
| Resource needs not formally identified | GP 2.1.3 | Resource allocation done informally; no documented analysis of what resources are needed and why | AI generates resource requirement documents from project scope and WBS |
| Monitoring is reactive, not proactive | GP 2.1.5 | Status reports compiled after problems occur; no leading indicators tracked | AI provides real-time dashboards with predictive indicators and early warnings |
| Interface management is informal | GP 2.1.6 | Interfaces managed through ad-hoc communication; no formal agreements | AI tracks interface obligations and flags overdue deliverables |
| Work product quality criteria not defined | GP 2.2.1 | Templates exist but lack explicit quality criteria; reviews have no objective basis | AI generates quality checklists from standards and templates automatically |
| CM procedures not consistently followed | GP 2.2.2, GP 2.2.3 | Procedures documented but developers bypass them; no enforcement mechanism | AI enforces CM policies through pre-commit hooks and merge checks |
| Reviews lack documented criteria | GP 2.2.4 | Reviews conducted but without checklists or predefined criteria; findings not tracked | AI provides review checklists and tracks finding resolution |
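The "pre-commit hooks and merge checks" remedy for inconsistent CM procedures can be surprisingly simple to start. Below is a minimal sketch of a Git `commit-msg` hook that rejects commits lacking a work item reference, so traceability is enforced rather than requested. The work-item ID pattern (e.g. "SWE-123") is an assumed local convention; adapt it to your tracker's scheme:

```python
#!/usr/bin/env python3
"""Git commit-msg hook: reject commits that lack a work item reference.

Sketch of CM policy enforcement (GP 2.2.2 / GP 2.2.3). The ID pattern
below (e.g. "SWE-123") is an assumed convention, not an ASPICE rule.
"""
import re
import sys

WORK_ITEM = re.compile(r"\b[A-Z]{2,5}-\d+\b")  # e.g. SWE-123, CM-42

def check_message(message: str) -> bool:
    """Return True if the commit message references at least one work item."""
    return bool(WORK_ITEM.search(message))

if __name__ == "__main__" and len(sys.argv) > 1:
    # Git passes the path of the commit message file as the first argument.
    with open(sys.argv[1], encoding="utf-8") as f:
        if not check_message(f.read()):
            print("Rejected: commit message must reference a work item (e.g. SWE-123).")
            sys.exit(1)
```

Installed as `.git/hooks/commit-msg` (or distributed via a hook manager), this closes the "no enforcement mechanism" gap at the point of entry; server-side merge checks then catch anything that slips past local hooks.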

Level 3 Common Findings

| Finding | Affected GP | Root Cause | How AI Helps |
|---|---|---|---|
| Standard process exists but is not used | GP 3.1.1, GP 3.2.1 | Process handbook on the shelf; projects define their own processes independently | AI compares project process against standard and flags deviations |
| Tailoring not documented | GP 3.2.1 | Projects deviate from standard process without recording why | AI auto-generates tailoring rationale documents based on project characteristics |
| Competency gaps not identified | GP 3.1.2, GP 3.2.2 | No systematic competency assessment; training is ad hoc | AI maintains competency profiles and identifies gaps against role requirements |
| Process metrics collected but not analyzed | GP 3.1.4, GP 3.2.4 | Data gathered (e.g., defect counts, effort) but no analysis or action taken | AI performs automated analysis, generates trend reports, and flags anomalies |
| Standard process not maintained | GP 3.1.1 | Process defined once and never updated; lessons learned not incorporated | AI monitors process usage patterns and suggests updates based on collected experience |
| Deployment inconsistent across projects | GP 3.2.1 | Some projects follow the standard process, others do not; no governance mechanism | AI provides deployment compliance dashboards and automated conformance audits |

Finding Severity and Remediation Priority

| Severity | Impact on Rating | Remediation Priority | Typical Remediation Time |
|---|---|---|---|
| Prevents F rating | Process attribute rated L or below despite strong base practices | Immediate — address before assessment if possible | 2-4 weeks with AI support |
| Reduces to L rating | Some evidence exists but gaps prevent F; assessor cannot confirm full achievement | High — plan remediation within current sprint | 1-2 weeks with AI support |
| Minor weakness | Evidence exists but could be stronger; assessor notes as improvement opportunity | Medium — incorporate into next process improvement cycle | Ongoing |
| Observation | Not a finding but a suggestion for improvement; does not affect rating | Low — consider for future improvement | As convenient |

Practical advice: The most common reason organizations fail to achieve Level 2 is not a lack of technical capability but a lack of documented evidence for the GP 2.1.x practices. Plans, objectives, and monitoring records must exist as explicit artifacts. AI tools that generate these artifacts automatically from project management data can largely eliminate this class of findings.
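The "plans exist but are not maintained" and "monitoring is reactive" findings above share one fix: continuously comparing plan against actuals and alerting on deviation (GP 2.1.5). A minimal sketch of that check, assuming milestone effort data has been pulled from a project management tool (the data shape and the 10% threshold are illustrative assumptions):

```python
# Sketch of automated plan-vs-actual monitoring for GP 2.1.5.
# Milestone records and the deviation threshold are illustrative; in
# practice the data would come from your PM tool's API.
THRESHOLD = 0.10  # alert when actual effort deviates more than 10% from plan

def check_deviations(milestones: list[dict]) -> list[str]:
    """Return alert messages for milestones whose actual effort
    deviates from planned effort by more than THRESHOLD."""
    alerts = []
    for m in milestones:
        deviation = (m["actual_h"] - m["planned_h"]) / m["planned_h"]
        if abs(deviation) > THRESHOLD:
            alerts.append(f"{m['name']}: {deviation:+.0%} vs plan")
    return alerts

milestones = [
    {"name": "MS1 Requirements baseline", "planned_h": 100, "actual_h": 104},
    {"name": "MS2 Architecture review", "planned_h": 80, "actual_h": 112},
]
print(check_deviations(milestones))  # -> ['MS2 Architecture review: +40% vs plan']
```

Each alert, together with the adjustment it triggers, is exactly the kind of dated, explicit artifact that satisfies GP 2.1.5 evidence expectations; logging them turns monitoring into assessable records as a side effect.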


Implementation Checklist

Use this checklist to verify that your organization has addressed all generic practices at Levels 2 and 3. For each item, record the status (Done, In Progress, Not Started, N/A) and link to the relevant evidence.

Level 2 Implementation Checklist

| # | GP | Checklist Item | Status | Evidence Link |
|---|---|---|---|---|
| 1 | GP 2.1.1 | Process performance objectives are documented and measurable | | |
| 2 | GP 2.1.1 | A strategy for achieving objectives is defined | | |
| 3 | GP 2.1.2 | A process performance plan exists with activities, timelines, and milestones | | |
| 4 | GP 2.1.2 | The plan is maintained and updated when changes occur | | |
| 5 | GP 2.1.3 | Resource needs (personnel, tools, infrastructure) are identified and documented | | |
| 6 | GP 2.1.4 | Required resources are allocated and available | | |
| 7 | GP 2.1.4 | Resource availability is verified (not just planned) | | |
| 8 | GP 2.1.5 | Process performance is monitored against the plan at defined intervals | | |
| 9 | GP 2.1.5 | Deviations from plan trigger documented adjustment actions | | |
| 10 | GP 2.1.6 | Interfaces with involved parties are identified and documented | | |
| 11 | GP 2.1.6 | Interface agreements include responsibilities, deliverables, and communication protocols | | |
| 12 | GP 2.2.1 | Requirements for work products are defined (content, format, quality criteria) | | |
| 13 | GP 2.2.2 | Storage and control requirements for work products are defined (CM procedures) | | |
| 14 | GP 2.2.3 | Work products are identified, placed under version control, and baselined | | |
| 15 | GP 2.2.3 | Changes to work products are tracked with rationale | | |
| 16 | GP 2.2.4 | Work products are reviewed against defined requirements | | |
| 17 | GP 2.2.4 | Review findings are documented and resolved | | |

Level 3 Implementation Checklist

| # | GP | Checklist Item | Status | Evidence Link |
|---|---|---|---|---|
| 18 | GP 3.1.1 | A standard process is documented and approved at the organizational level | | |
| 19 | GP 3.1.1 | Tailoring guidelines are defined for adapting the standard process | | |
| 20 | GP 3.1.1 | The standard process is maintained and updated based on experience | | |
| 21 | GP 3.1.2 | Required competencies for the standard process are identified | | |
| 22 | GP 3.1.3 | Required resources for the standard process are identified at the organizational level | | |
| 23 | GP 3.1.4 | Methods for monitoring the standard process effectiveness are defined | | |
| 24 | GP 3.1.4 | Process metrics are defined and collected | | |
| 25 | GP 3.2.1 | The project process is derived from the standard process following tailoring guidelines | | |
| 26 | GP 3.2.1 | Tailoring decisions are documented with justifications | | |
| 27 | GP 3.2.2 | Personnel competencies are assessed against requirements | | |
| 28 | GP 3.2.2 | Training or other competency development is provided where gaps exist | | |
| 29 | GP 3.2.3 | Required resources (tools, environments, personnel) are available for the defined process | | |
| 30 | GP 3.2.4 | Performance of the defined process is monitored using defined methods and metrics | | |
| 31 | GP 3.2.4 | Process monitoring results are analyzed and actions taken when needed | | |

Using this checklist with AI: Configure an AI agent to periodically scan your project repositories, document management systems, and project management tools against this checklist. The agent should update the status column automatically based on artifact detection and generate a report highlighting items that require human attention.
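The artifact-detection core of such an agent can be sketched in a few lines. This is a deliberately simplified version: it only checks whether an expected evidence file exists in a repository, whereas a real agent would also query document management and PM tool APIs and assess artifact content, not just presence. All paths and checklist rows below are illustrative:

```python
# Sketch of a checklist-scanning agent: mark an item "Done" when its
# expected evidence artifact exists, "Not Started" otherwise. Paths and
# rows are illustrative; a real agent would query DM/PM tools, not just
# the local filesystem, and would also validate artifact content.
from pathlib import Path

def scan_checklist(items: list[dict], repo_root: Path) -> list[dict]:
    """Update each item's status based on whether its evidence exists."""
    for item in items:
        evidence = repo_root / item["evidence_path"]
        item["status"] = "Done" if evidence.exists() else "Not Started"
    return items

checklist = [
    {"gp": "GP 2.1.1", "item": "Objectives documented",
     "evidence_path": "plans/objectives.md"},
    {"gp": "GP 2.2.3", "item": "Work products baselined",
     "evidence_path": "cm/baselines.md"},
]
for row in scan_checklist(checklist, Path(".")):
    print(f"{row['gp']}: {row['status']}")
```

Run on a schedule, the output feeds the Status column of the checklist tables above; items the scan cannot resolve (e.g. "is the plan actually maintained?") are the ones escalated for human review.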


Summary

Generic Practices:

  • Apply universally across all processes
  • Support process attribute achievement
  • Define how well processes are managed (not what they do)
  • Enable consistent capability improvement
  • Can be augmented by AI for efficiency
  • Are organized into groups that map directly to process attributes at each capability level
  • Require documented evidence that withstands assessment scrutiny
  • Benefit significantly from AI automation in evidence generation, monitoring, and review

Understanding and implementing generic practices is essential for climbing the capability levels. They are the "how well" to the base practices' "what." With AI support, the overhead of implementing generic practices — particularly the documentation, monitoring, and evidence generation aspects — can be reduced dramatically, allowing teams to focus on the substance of process improvement rather than the mechanics of compliance.