3.3: AI Capabilities and Limitations
What You'll Learn
By the end of this chapter, you will be able to:
- Identify activities where AI provides high value
- Recognize known AI limitations
- Apply mitigation strategies for AI weaknesses
- Set realistic expectations for AI augmentation
AI Capability Spectrum
The sections below map AI capabilities from high-confidence areas (pattern recognition, code generation) to low-confidence areas (novel design, safety judgment), helping teams set realistic expectations for each task type.
High Capability Areas
Pattern Recognition
| Capability | Maturity | Application |
|---|---|---|
| Code style detection | High | Linting, formatting |
| Boilerplate recognition | High | Code generation |
| Common bug patterns | High | Static analysis |
| API usage patterns | High | Code completion |
Text Generation
| Capability | Maturity | Application |
|---|---|---|
| Code comments | High | Documentation |
| Function documentation | High | API docs |
| Error messages | High | User feedback |
| Commit messages | High | Version control |
Code Completion
| Capability | Maturity | Application |
|---|---|---|
| Line completion | High | IDE integration |
| Function bodies | Medium-High | Code generation |
| Boilerplate code | High | Scaffolding |
| API calls | Medium-High | Integration code |
Consistency Checking
| Capability | Maturity | Application |
|---|---|---|
| Style consistency | High | Code review |
| Naming conventions | High | Quality checks |
| API consistency | Medium | Interface review |
| Cross-reference validation | Medium | Traceability |
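Style and naming checks of this kind are cheap to automate even without AI. A minimal sketch of a naming-convention check (the `lower_snake_case` rule and the identifier list are illustrative assumptions, not from this chapter):

```python
import re

# Illustrative rule: function names must be lower_snake_case.
SNAKE_CASE = re.compile(r"^[a-z][a-z0-9_]*$")

def check_names(identifiers):
    """Return the identifiers that violate the naming convention."""
    return [name for name in identifiers if not SNAKE_CASE.match(name)]

violations = check_names(["read_sensor", "WriteLog", "init", "tempC"])
# violations == ["WriteLog", "tempC"]
```

In practice a linter covers this; the point is that high-maturity AI capabilities often overlap with deterministic tooling, which should remain the source of truth.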
Medium Capability Areas
Complex Code Generation
| Capability | Maturity | Limitation |
|---|---|---|
| Algorithm implementation | Medium | May have edge case bugs |
| Multi-file changes | Medium | Context may be incomplete |
| Refactoring | Medium | May break dependencies |
| Framework-specific code | Medium | Training data cutoff |
Test Generation
| Capability | Maturity | Limitation |
|---|---|---|
| Unit test creation | Medium | May miss edge cases |
| Test data generation | Medium | May not cover boundaries |
| Integration tests | Medium | Complex setup issues |
| Coverage improvement | Medium | May generate shallow tests |
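The "may not cover boundaries" limitation can be compensated for mechanically: append the boundary values to whatever inputs the AI proposes before running the tests. A small sketch (the 0..255 register-field range and the suggested values are hypothetical):

```python
def with_boundaries(ai_values, lo, hi):
    """Augment AI-proposed test inputs with the boundary values
    they typically miss: the limits themselves and their neighbors."""
    boundaries = {lo, lo + 1, hi - 1, hi}
    return sorted(set(ai_values) | boundaries)

# Hypothetical: AI suggested only mid-range values for a 0..255 field.
cases = with_boundaries([17, 100, 200], lo=0, hi=255)
# cases == [0, 1, 17, 100, 200, 254, 255]
```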
Documentation
| Capability | Maturity | Limitation |
|---|---|---|
| API documentation | Medium | May miss nuances |
| Architecture diagrams | Medium | May oversimplify |
| User guides | Medium | May lack context |
| Technical specs | Medium | May need significant editing |
Low Capability Areas
Architecture Decisions
| Limitation | Impact | Mitigation |
|---|---|---|
| Lacks project context | May suggest inappropriate patterns | Human review required |
| No understanding of constraints | May ignore performance needs | Explicit constraint specification |
| Training data bias | May favor common over optimal | Consider alternatives |
Novel Problem Solving
| Limitation | Impact | Mitigation |
|---|---|---|
| Pattern-matching based | Cannot truly innovate | Human creativity for novel solutions |
| Limited reasoning | May not find optimal solutions | Multiple AI suggestions + human judgment |
| No true understanding | Solutions may be superficial | Deep human analysis |
Safety-Critical Decisions
| Limitation | Impact | Mitigation |
|---|---|---|
| No accountability | Cannot be held responsible | Human sign-off required |
| No domain certification | Cannot certify safety claims | Qualified human engineers |
| Potential for errors | Undetected errors have high impact | Multi-layer verification |
Known AI Limitations
Hallucination
Definition: AI generates plausible-sounding but incorrect information.
Examples in Development:
- Inventing non-existent API functions
- Citing incorrect register addresses
- Creating fake library versions
- Generating incorrect test assertions
Mitigation:
- Always verify AI-generated facts
- Test AI-generated code
- Cross-reference documentation
- Use static analysis
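One cheap check against invented API functions: before trusting a suggested call, confirm the attribute actually exists in the installed library. A sketch using only the Python standard library (the "suggested" names below are made-up examples):

```python
import importlib

def api_exists(module_name, attr_name):
    """Return True if module.attr is importable and present."""
    try:
        module = importlib.import_module(module_name)
    except ImportError:
        return False
    return hasattr(module, attr_name)

ok_real = api_exists("json", "loads")       # real function -> True
ok_fake = api_exists("json", "parse_fast")  # plausible invention -> False
```

This catches fabricated names but not wrong semantics, so it complements, rather than replaces, testing the generated code.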
Context Window Limits
Definition: AI can only attend to a fixed amount of input (the context window); anything beyond it is effectively invisible to the model.
Impact:
- May miss dependencies in large codebases
- Cannot understand complete system architecture
- May generate inconsistent code across files
- May forget earlier conversation context
Mitigation:
- Provide focused context
- Use AI for scoped tasks
- Maintain human system understanding
- Document architectural decisions
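Providing "focused context" can be as simple as trimming input to a budget before sending it, keeping the most relevant pieces whole. A rough sketch using a character budget as a stand-in for tokens (real tokenizers count differently; relevance scoring is left to the caller):

```python
def focus_context(snippets, budget):
    """Keep whole snippets, most relevant first, until the budget is spent.
    Snippets are (relevance, text) pairs."""
    kept, used = [], 0
    for relevance, text in sorted(snippets, key=lambda s: -s[0]):
        if used + len(text) > budget:
            continue  # skip snippets that would overflow the budget
        kept.append(text)
        used += len(text)
    return kept

parts = focus_context(
    [(0.9, "def foo(): ..."), (0.2, "x" * 500), (0.7, "config")],
    budget=100,
)
# parts == ["def foo(): ...", "config"]
```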
Knowledge Cutoff
Definition: AI training data has a cutoff date.
Impact:
- May not know latest library versions
- May miss recent security vulnerabilities
- May suggest deprecated practices
- May not know new language features
Mitigation:
- Verify against current documentation
- Update AI tools regularly
- Use RAG (Retrieval-Augmented Generation) where available
- Human awareness of recent changes
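At its core, RAG retrieves current documentation and prepends it to the prompt, so the model is not limited to its training cutoff. A toy sketch using word-overlap scoring (a production system would use embeddings and a vector store; all document text here is illustrative):

```python
def retrieve(query, docs, k=1):
    """Rank docs by word overlap with the query; return the top k."""
    q = set(query.lower().split())
    scored = sorted(docs, key=lambda d: -len(q & set(d.lower().split())))
    return scored[:k]

docs = [
    "release notes, version 2.1: send is now asynchronous by default",
    "legacy tutorial from 2019 covering basic installation",
]
context = retrieve("how does send work in the latest version", docs, k=1)
# The retrieved text is then prepended to the model prompt.
```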
Non-Determinism
Definition: Same input may produce different outputs.
Impact:
- Results not reproducible
- Difficult to debug AI behavior
- May introduce inconsistencies
- Testing AI outputs is challenging
Mitigation:
- Use temperature=0 where possible
- Record AI outputs for audit
- Accept variability in non-critical areas
- Human review for consistency
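Recording outputs for audit needs little machinery: hash the prompt and response, timestamp them, and append one JSON line per interaction to a log. A sketch (the field names and JSON-lines format are our own choice, not a standard):

```python
import hashlib
import json
import time

def audit_record(prompt, output, settings):
    """Build an audit entry; hashes make later reproducibility
    and tamper checks cheap without storing the full prompt."""
    return {
        "time": time.time(),
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "output_sha256": hashlib.sha256(output.encode()).hexdigest(),
        "settings": settings,  # e.g. {"temperature": 0}
        "output": output,
    }

entry = audit_record("Summarize module X", "Module X handles ...", {"temperature": 0})
line = json.dumps(entry)  # append this line to an audit log file
```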
AI Role by Activity
Requirements Engineering
| Activity | AI Role | Human Role |
|---|---|---|
| Stakeholder interviews | None | Full ownership |
| Requirements writing | Draft generation | Review, approval |
| Consistency checking | Primary analysis | Final decision |
| Traceability linking | Suggestion | Verification |
| Priority setting | Analysis support | Decision making |
Architecture
| Activity | AI Role | Human Role |
|---|---|---|
| Pattern selection | Suggestions | Decision |
| Interface definition | Draft generation | Review |
| Allocation decisions | Analysis support | Full ownership |
| Trade-off analysis | Option generation | Evaluation |
Implementation
| Activity | AI Role | Human Role |
|---|---|---|
| Boilerplate code | Full generation | Review |
| Complex algorithms | Draft + iteration | Verification |
| Error handling | Suggestions | Decision |
| Code review | First pass | Final decision |
Testing
| Activity | AI Role | Human Role |
|---|---|---|
| Unit test generation | Primary | Review |
| Test data creation | Primary | Validation |
| Coverage analysis | Full automation | Strategy |
| Bug investigation | Analysis support | Root cause |
Realistic Expectations
What to Expect
| Common Assumption | Reality |
|---|---|
| AI generates perfect code | AI code needs review and testing |
| AI understands requirements | AI pattern-matches on text |
| AI replaces engineers | AI augments engineers |
| AI never makes mistakes | AI makes different mistakes than humans |
| AI improves automatically | AI needs training and feedback |
Setting Appropriate Goals
| Goal | Realistic Target |
|---|---|
| Productivity improvement | 20-40% for suitable tasks |
| Quality improvement | Fewer errors slipping past human oversight |
| Cost reduction | Reallocation, not elimination |
| Time savings | On routine tasks |
| Skill amplification | Junior engineers become more capable |
Summary
Effective use of AI requires a clear view of both its capabilities and its limitations:
High Capability:
- Pattern recognition
- Text generation
- Code completion
- Consistency checking
Limitations:
- Hallucination (fabricated information)
- Context limits (cannot see everything)
- Knowledge cutoff (training data age)
- Non-determinism (variable outputs)
Mitigation: Human oversight, verification, and realistic expectations.