
Chain-of-Thought (CoT) Prompting

Explore chain-of-thought prompting techniques that enhance large language model outputs by guiding the model through step-by-step reasoning. Understand zero-shot and few-shot variants to improve multi-step problem solving while managing trade-offs in cost and latency. This lesson helps you apply CoT effectively to tasks requiring dependent reasoning steps.

In the previous lesson, you learned how carefully selected few-shot examples can steer a large language model toward higher-quality outputs. But there is a class of problems where even the best examples fall short. Consider a word problem that requires two arithmetic operations chained together, or a logic puzzle where the answer depends on resolving three intermediate facts. When you prompt a model with a standard question-and-answer format, it attempts to leap directly from the question to the final answer in a single step. That leap works fine for simple tasks, but for multi-step reasoning, the model effectively guesses at the result without working through the logic, and errors compound silently.

Here is a concrete example. Suppose you prompt a model with the following word problem: "A store sells apples in bags of 6. Maria buys 4 bags and gives away 8 apples. How many apples does she have left?" A standard prompt often produces an incorrect answer like 20 or 28 because the model collapses two operations (multiplication and subtraction) into a single prediction. The intermediate step, calculating that 4 bags of 6 equals 24 apples, never appears in the output, so the model has no explicit scaffold to guide its next calculation.
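The dependent steps are easy to see when written out explicitly. Each line below is one of the intermediate calculations that a direct question-and-answer prompt skips:

```python
# The two chained operations a standard prompt collapses into one guess.
bags = 4
apples_per_bag = 6
given_away = 8

total = bags * apples_per_bag   # step 1: 4 bags * 6 apples = 24
remaining = total - given_away  # step 2: 24 - 8 = 16
print(remaining)  # 16
```

The second step depends entirely on the result of the first, which is exactly the structure that trips up a single-step prediction.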

Chain-of-thought (CoT) prompting solves this problem by instructing or demonstrating intermediate reasoning steps so the model “shows its work” before arriving at a final answer. Formalized by Wei et al. (2022), CoT has become one of the most impactful prompt engineering techniques available. This lesson covers why CoT works, how to distinguish between its few-shot and zero-shot variants, how to implement zero-shot CoT with a single instruction, and the practical trade-offs you need to evaluate before deploying it.
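As a preview of the zero-shot variant covered later, the entire technique can reduce to appending a single reasoning trigger to the prompt. The sketch below is illustrative; the function name and prompt format are assumptions, not a prescribed API:

```python
def build_zero_shot_cot_prompt(question: str) -> str:
    """Append a reasoning trigger so the model writes out
    intermediate steps before committing to a final answer."""
    return f"Q: {question}\nA: Let's think step by step."

prompt = build_zero_shot_cot_prompt(
    "A store sells apples in bags of 6. Maria buys 4 bags "
    "and gives away 8 apples. How many apples does she have left?"
)
print(prompt)
```

Sent to a model, this prompt elicits the intermediate multiplication and subtraction in the response text rather than a bare number, giving each step an explicit scaffold to build on.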

How chain-of-thought prompting works

...