
Chain-of-thought (CoT) is a prompting and reasoning technique in which an AI model is guided to break a complex problem into intermediate logical steps before producing a final answer, making its reasoning process explicit and verifiable.
Why It Matters
Without chain-of-thought reasoning, language models tend to jump directly to answers, which often leads to errors on multi-step problems involving math, logic, or causal reasoning. By making the model "show its work," CoT dramatically improves accuracy on complex tasks and makes it possible for humans to audit where reasoning goes wrong.
CoT is also foundational to modern extended thinking and adaptive thinking architectures, where models dynamically adjust how much reasoning effort to apply per task.
How It Works
Chain-of-thought reasoning can be triggered through several mechanisms:
- Prompting. Adding phrases like "Let's think step by step" or providing few-shot examples with explicit reasoning steps encourages the model to decompose problems.
- Extended thinking. Modern models like Claude Opus 4.7 have built-in thinking blocks where the model reasons internally before generating a response, with configurable effort levels from minimal to "x-high."
- Adaptive thinking. The latest evolution lets models decide per turn how much chain-of-thought reasoning to apply: minimal thinking for simple queries, deep multi-step deliberation for complex tasks.
- Self-verification. Advanced CoT implementations include self-checking steps where the model re-evaluates its reasoning chain for logical inconsistencies before committing to a final answer.
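The prompting mechanism above can be sketched as plain prompt construction. This is a minimal illustration, not tied to any particular model API; the trigger phrase and example format are the conventional ones from zero-shot and few-shot CoT prompting.

```python
# Sketch: building zero-shot and few-shot chain-of-thought prompts.
COT_TRIGGER = "Let's think step by step."

def zero_shot_cot(question: str) -> str:
    """Append a reasoning trigger so the model decomposes the problem."""
    return f"Q: {question}\nA: {COT_TRIGGER}"

def few_shot_cot(examples: list[tuple[str, str]], question: str) -> str:
    """Prefix worked examples whose answers show explicit reasoning steps."""
    shots = "\n\n".join(f"Q: {q}\nA: {steps}" for q, steps in examples)
    return f"{shots}\n\nQ: {question}\nA:"

prompt = few_shot_cot(
    [("What is 3 * 12?", "3 * 12 = 3 * 10 + 3 * 2 = 30 + 6 = 36. Answer: 36.")],
    "What is 17 * 24?",
)
```

The few-shot variant works because the worked example demonstrates the expected answer shape; the model imitates the step-by-step structure rather than emitting a bare number.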
Example
Without CoT: "What is 17 × 24?" → "408" (correct, but opaque)
With CoT: "Let me break this down: 17 × 24 = 17 × 20 + 17 × 4 = 340 + 68 = 408." The intermediate steps make the reasoning auditable and reduce errors on harder problems.
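The decomposition above can be mirrored in code: multiply by splitting one factor into tens and ones, record each intermediate step, and self-verify the result against the direct product. This is an illustrative sketch of auditable intermediate steps, not anything a model literally executes.

```python
def multiply_with_steps(a: int, b: int) -> tuple[int, list[str]]:
    """Multiply by decomposing b into tens and ones, recording each step."""
    tens, ones = divmod(b, 10)
    partial_tens = a * tens * 10
    partial_ones = a * ones
    total = partial_tens + partial_ones
    steps = [
        f"{a} * {b} = {a} * {tens * 10} + {a} * {ones}",
        f"= {partial_tens} + {partial_ones}",
        f"= {total}",
    ]
    # Self-verification: the decomposed total must match the direct product.
    assert total == a * b
    return total, steps

result, steps = multiply_with_steps(17, 24)
# result == 408; steps reproduce the written chain: 340 + 68 = 408
```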
Adaptive CoT (Claude Opus 4.7): For a simple factual lookup, the model uses minimal thinking. For a complex multi-file code refactor, it automatically allocates extended reasoning with self-verification steps.
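In real systems the effort decision is made inside the model; the heuristic below is purely a hypothetical external sketch of the idea, with made-up keyword signals and effort levels, to show what "allocating reasoning budget per task" means.

```python
# Hypothetical router: scale reasoning effort with rough task complexity.
EFFORT_LEVELS = ["minimal", "low", "medium", "high"]

def choose_effort(task: str) -> str:
    """Crude proxy: longer, multi-part tasks get a larger reasoning budget."""
    signals = sum(kw in task.lower() for kw in ("refactor", "prove", "multi", "debug"))
    if signals == 0 and len(task) < 80:
        return "minimal"  # simple factual lookup
    return EFFORT_LEVELS[min(1 + signals, len(EFFORT_LEVELS) - 1)]
```

A short factual question routes to minimal effort, while a multi-file refactor request routes to high effort; a production system would learn this decision rather than hard-code keywords.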