Context Management 2 min read

Chain-of-Thought

Also known as: CoT, Chain-of-Thought Prompting, Step-by-Step Reasoning

Definition

A prompting technique that improves AI reasoning by instructing the model to decompose complex problems into intermediate reasoning steps before arriving at a final answer.

Overview

Chain-of-Thought (CoT) prompting is a technique that dramatically improves the reasoning ability of large language models by encouraging them to "show their work" — generating intermediate reasoning steps rather than jumping directly to an answer. First demonstrated by Google researchers in 2022, CoT has become a standard technique in prompt engineering.

How It Works

Instead of producing a single output, the model is encouraged to break down its reasoning into explicit steps. For example, without CoT a model might answer "What is 24 x 17?" with just "408". With CoT and the prompt "Let's think step by step", the model would show: "24 x 17 = 24 x 10 + 24 x 7 = 240 + 168 = 408". The explicit reasoning steps help the model arrive at correct answers for problems that require multi-step logic.
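The decomposition in the worked example above can be checked with plain arithmetic:

```python
# The intermediate steps from the example: 24 x 17 split into a tens
# part and a ones part, then recombined into the final answer.
partial_tens = 24 * 10  # 240
partial_ones = 24 * 7   # 168
total = partial_tens + partial_ones
print(total)  # 408, matching the direct product 24 * 17
```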

Variants

Zero-Shot CoT

Simply adding "Let's think step by step" to the prompt without providing any examples. Surprisingly effective across many reasoning tasks.
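As a minimal sketch, zero-shot CoT amounts to appending the trigger phrase to the user's question before sending it to the model (the helper name here is illustrative, not a library API):

```python
def make_zero_shot_cot_prompt(question: str) -> str:
    """Append the standard zero-shot CoT trigger phrase to a question."""
    return f"{question}\nLet's think step by step."

prompt = make_zero_shot_cot_prompt("What is 24 x 17?")
print(prompt)
```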

Few-Shot CoT

Providing examples that demonstrate the step-by-step reasoning process, teaching the model both the format and the reasoning approach.
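A minimal sketch of a few-shot CoT prompt: one worked example that demonstrates the step-by-step format, followed by the new question. The example problem and the Q/A layout here are illustrative choices, not a fixed standard:

```python
# One demonstration of step-by-step reasoning, in the format the
# model is expected to imitate for the new question.
FEW_SHOT_EXAMPLE = (
    "Q: A shop sells pens at 3 for $2. How much do 12 pens cost?\n"
    "A: 12 pens is 12 / 3 = 4 groups of 3 pens. "
    "Each group costs $2, so 4 x $2 = $8. The answer is $8.\n"
)

def make_few_shot_cot_prompt(question: str) -> str:
    """Prefix the question with a worked example, ending at 'A:' so the
    model continues with its own reasoning steps."""
    return f"{FEW_SHOT_EXAMPLE}\nQ: {question}\nA:"

print(make_few_shot_cot_prompt("A box holds 6 eggs. How many eggs in 7 boxes?"))
```

In practice, two to eight demonstrations are common; more examples teach the format better but consume more of the context window.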

Tree of Thoughts (ToT)

An extension that explores multiple reasoning paths simultaneously, evaluating each path and pruning less promising ones.
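A highly simplified sketch of the explore-evaluate-prune loop: expand each kept path with candidate next steps, score every candidate, and retain only the most promising paths. The `expand` and `score` callables stand in for model calls and are assumptions of this sketch, not part of any library:

```python
def tree_of_thoughts(root, expand, score, beam_width=2, depth=3):
    """Beam-search over reasoning paths.

    expand(path) -> candidate next steps for a path;
    score(path)  -> how promising a path looks.
    """
    paths = [[root]]
    for _ in range(depth):
        # Explore: extend every surviving path with each candidate step.
        candidates = [p + [step] for p in paths for step in expand(p)]
        # Evaluate and prune: keep only the top-scoring paths.
        candidates.sort(key=score, reverse=True)
        paths = candidates[:beam_width]
    return paths[0]  # the best reasoning path found
```

With toy stand-ins (steps are the numbers 0 or 1, score is the path sum), the search keeps extending the highest-sum paths and returns the all-ones path.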

Context Management Implications

CoT trades increased context window usage (more output tokens) for improved accuracy. This is a context management decision — allocating more of the output budget to reasoning steps versus final answers.
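The tradeoff is easy to see by comparing output lengths. Using a naive whitespace "token" count (illustrative only; real tokenizers differ) on the multiplication example from earlier:

```python
# Direct answer vs. CoT-style answer for "What is 24 x 17?".
direct = "408"
cot = "24 x 17 = 24 x 10 + 24 x 7 = 240 + 168 = 408"

# Naive whitespace split as a rough proxy for output tokens.
print(len(direct.split()), len(cot.split()))  # 1 vs 17 "tokens"
```

The step-by-step answer costs an order of magnitude more output budget for the same final result, which is the accuracy-for-tokens trade the section describes.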