📋 What It Is
An 8-tab interactive decision workbook that transforms Chapter 2's reasoning research — Chain-of-Thought, ReAct, Tree-of-Thoughts, PEV loops, multi-agent debate, RAG/KAG memory patterns, and 40+ more techniques — into a structured selection process your team can execute in a single design session.
This isn't a reference document you read once. It's an operational instrument with 89 live formulas that connect your project constraints to technique selections to an auto-generated Architecture Decision Record — so every choice is grounded in your actual budget, latency tolerance, team maturity, and compliance requirements.
The core problem it solves: enterprise teams consistently over-engineer or under-engineer their agent's reasoning stack. They reach for Graph-of-Thought when Chain-of-Thought would do the job 10× cheaper, or they skip verification entirely on a compliance-critical agent because nobody mapped the Accuracy Requirement to a reflection mechanism. This workbook makes that mapping explicit and auditable.
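To make that mapping concrete, here is a minimal sketch of the kind of rule the workbook expresses as live formulas. The technique labels, the 1–5 threshold, and the function names are illustrative assumptions for this sketch, not the workbook's actual formulas.

```python
# Illustrative sketch only: the workbook encodes this kind of rule as spreadsheet
# formulas; the names, labels, and threshold below are assumptions for the example.
from dataclasses import dataclass, field

@dataclass
class ProjectContext:
    accuracy_requirement: int                      # 1 (best effort) .. 5 (compliance-critical)
    selected_techniques: set[str] = field(default_factory=set)

REFLECTION_TECHNIQUES = {"self-critique", "PEV loop", "LLM-as-judge"}   # hypothetical labels

def accuracy_check(ctx: ProjectContext) -> str:
    """Flag an accuracy-critical agent that has no verification mechanism selected."""
    has_reflection = bool(ctx.selected_techniques & REFLECTION_TECHNIQUES)
    if ctx.accuracy_requirement >= 4 and not has_reflection:
        return "⚠ NO VERIFICATION: high accuracy requirement but no reflection mechanism selected"
    return "✓ OK"

print(accuracy_check(ProjectContext(5, {"Chain-of-Thought"})))   # ⚠ NO VERIFICATION: ...
```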
Includes 5 domain-specific worked examples — LegalAssist (litigation), ComplianceGuard (pharma), WealthAdvisor (private banking), DealForge (PE due diligence), and ClaimsPilot (insurance) — each walking through the complete selection process with rationales for every technique chosen AND rejected.
👥 Who It's For
- Solution architects choosing reasoning approaches for a new agent — need to justify choosing CoT over ReAct with data and constraint alignment
- Engineering leads running design sessions where 5 people have 5 opinions — need a structured framework that turns debate into decisions
- AI strategists and consultants explaining to clients why one reasoning approach was chosen over another — need the ADR as a deliverable artifact
- Enterprise architects evaluating whether the proposed agent stack matches organizational team maturity, budget, and compliance posture
- C-level executives (CAIO/CIO/CTO) validating that the agent investment matches organizational constraints
- GRC and compliance reviewers verifying that explainability, autonomy, and verification mechanisms match regulatory requirements
⏱ When to Use It
- New agent design — before writing code. Select your reasoning stack while changes are free
- Architecture review — when an existing agent's technique choices are inherited or undocumented
- Technique upgrade evaluation — considering adding Tree-of-Thoughts or multi-agent patterns to an existing agent
- Vendor/framework selection — first decide WHAT techniques you need (this workbook), then evaluate which framework supports them
- Sprint planning — use the Priority Breakdown (P1/P2/P3) to sequence technique implementation
- Executive review — present the Selection Summary as your Architecture Decision Record
- Post-incident analysis — was the reasoning technique appropriate? Was the reflection pipeline strong enough?
📦 What It Produces
- Architecture Decision Record — auto-calculated Selection Summary with technique counts, priority breakdown, and 15 constraint checks showing ✓ OK / ⚠ warning indicators
- Reasoning Stack Blueprint — selected techniques from 6 composable layers with documented rationale for every selection AND rejection
- Verification Pipeline Design — selected reflection mechanisms stacked into a concrete QA pipeline with overhead and error reduction estimates (see the sketch after this list)
- Knowledge Architecture Specification — selected memory patterns combining retrieval strategy (RAG/CAG/KAG) with context management
- Constraint Mismatch Report — 15 automated checks surfacing mismatches (⚠ TOO COMPLEX, ⚠ OVER BUDGET, ⚠ TOO SLOW, ⚠ NOT AUDITABLE, ⚠ SKILL GAP) before you build
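The overhead and error reduction estimates in the Verification Pipeline Design compound in a simple way when reflection mechanisms are stacked: each stage adds latency and cost overhead and removes a fraction of the errors that survive the previous stage. The sketch below uses placeholder stage names and figures, not the workbook's calibrated estimates.

```python
# Placeholder stages and figures: each verification step adds overhead and catches
# a share of the remaining errors. The workbook's own estimates will differ.
pipeline = [
    ("self-critique pass",    {"overhead_pct": 30, "error_reduction": 0.40}),
    ("citation verification", {"overhead_pct": 20, "error_reduction": 0.60}),
    ("LLM-as-judge review",   {"overhead_pct": 50, "error_reduction": 0.50}),
]

base_error_rate = 0.26          # e.g. a 26% unassisted error rate
residual, total_overhead = base_error_rate, 0
for name, step in pipeline:
    residual *= 1 - step["error_reduction"]      # each stage removes a share of what remains
    total_overhead += step["overhead_pct"]

print(f"Residual error rate: {residual:.1%}, added overhead: ~{total_overhead}%")
# Residual error rate: 3.1%, added overhead: ~100%
```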
🚀 How to Use It — Quickstart
- Step 1. Open Project Context. Fill in project metadata, rate all 8 constraints (Budget, Latency, Accuracy, Explainability, Autonomy, Compliance, Team AI Maturity, Data Availability). These inputs drive everything downstream.
- Step 2. Switch to Reasoning Techniques. Walk through the 6 composable layers: select techniques in each category, set Relevance/Selected/Priority, and adjust the editable 1–5 ratings to fit your context.
- Step 3. Repeat for Reflection & Self-Correction, Memory & Knowledge, Agent Patterns, and Learning & Feedback.
- Step 4. Open the Selection Summary. Everything auto-populates: technique counts, priority breakdown, and 15 constraint checks. Look for ⚠ warnings — these are mismatches between selections and constraints.
- Step 5. Resolve mismatches. If ⚠ TOO SLOW, raise Latency Tolerance or deselect high-latency techniques. If ⚠ SKILL GAP, drop the complex technique or invest in training. (An illustrative check is sketched after this list.)
- Step 6. Print or share the Selection Summary as your Architecture Decision Record.
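As an illustration of Steps 4 and 5, the sketch below compares an average latency rating for the selected techniques against the Latency Tolerance rating and shows the ⚠ TOO SLOW warning clearing once a high-latency technique is deselected. The per-technique ratings and the averaging rule are assumptions for this example; the workbook's constraint checks use its own formulas.

```python
TECHNIQUE_LATENCY = {            # assumed 1-5 latency ratings (5 = slowest)
    "Chain-of-Thought": 1,
    "ReAct": 3,
    "Tree-of-Thoughts": 5,
    "multi-agent debate": 4,
}

def latency_check(selected: list[str], latency_tolerance: int) -> str:
    """Assumed rule: average latency rating of selected techniques must stay within tolerance."""
    avg = sum(TECHNIQUE_LATENCY[t] for t in selected) / len(selected)
    if avg > latency_tolerance:
        return f"⚠ TOO SLOW (avg rating {avg:.1f} exceeds tolerance {latency_tolerance})"
    return "✓ OK"

selection = ["Chain-of-Thought", "Tree-of-Thoughts", "multi-agent debate"]
print(latency_check(selection, latency_tolerance=3))   # ⚠ TOO SLOW (avg rating 3.3 ...)
selection.remove("Tree-of-Thoughts")                    # Step 5: deselect a high-latency technique
print(latency_check(selection, latency_tolerance=3))   # ✓ OK
```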
👁 Preview — What's Inside
8 Tabs, 89 Live Formulas
| Tab | What It Does |
| --- | --- |
| Project Context ★ | 8 constraint ratings + 8 task type selections that drive all downstream analysis |
| Reasoning Techniques | 18 techniques across 6 composable layers with "LegalAssist" worked example |
| Reflection & Self-Correction | 10 QA mechanisms with "ComplianceGuard" 4-step verification pipeline example |
| Memory & Knowledge | 8 patterns (RAG/CAG/KAG + context management) with "WealthAdvisor" example |
| Agent Patterns | 5 multi-agent architectures with "DealForge" evidence-based example |
| Learning & Feedback | 6 strategies from logging to RLHF with "ClaimsPilot" 12-month roadmap |
| Selection Summary | Auto-generated ADR with 15-point constraint check system |
| Glossary | 40-term reference with cross-references |
5 Domain-Specific Worked Examples
| Example | Domain | Key Insight |
| --- | --- | --- |
| LegalAssist | Legal / Litigation | Skipping Layer 3 kept latency under tolerance. PEV catches hallucinated citations. |
| ComplianceGuard | Pharma / Regulatory | 4-step verification pipeline: error rate 26% → 0%. |
| WealthAdvisor | Finance / Wealth Mgmt | 3 memory patterns serving 3 distinct knowledge needs. |
| DealForge | Private Equity / M&A | Single agent failed at 39%. Multi-agent hit 93%. |
| ClaimsPilot | Insurance / Claims | 12-month learning timeline. Why RLHF was evaluated and rejected. |
📝 Version History
| Version | Date | Changes |
| --- | --- | --- |
| v1 | March 2026 | 8-tab interactive decision workbook with 89 live formulas. 47 techniques across 5 domains. Per-tab interactive columns with dropdowns. 15-point constraint check system. 5 domain-specific worked examples. |