
Chain of Thought (CoT)

Chain of Thought (CoT) prompting is a technique used with large language models (LLMs) that encourages the model to explicitly verbalize its reasoning process step-by-step before arriving at a final answer. This approach significantly improves the model's ability to solve complex reasoning tasks by breaking them down into smaller, more manageable steps.

Explanation

Chain of Thought prompting aims to mimic human problem-solving by having the LLM articulate its thought process. Instead of asking the model directly for an answer, the prompt is designed to elicit intermediate reasoning steps. For example, rather than answering "What is 23 * 15?" in one step, a model prompted with "Let's think step by step" might generate: "First, multiply 23 by 10, which is 230. Then, multiply 23 by 5, which is half of 230, so 115. Finally, add 230 and 115 to get 345." Having produced this chain of reasoning, the model is more likely to state the correct final answer.

The effectiveness of CoT depends on the model's ability to generate coherent and relevant reasoning steps. The size and capabilities of the underlying LLM strongly influence this; larger models generally perform CoT more reliably. CoT has proven particularly effective in tasks requiring arithmetic reasoning, common sense reasoning, and symbolic reasoning, where articulating the reasoning process helps the model avoid errors and biases.

There are several ways to implement CoT, including writing example CoT reasoning into the prompt by hand (few-shot CoT) and techniques that elicit or generate the reasoning automatically (e.g., Zero-shot CoT, Auto-CoT).
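The two prompt styles above can be sketched as plain string templates. This is a minimal illustration, not any particular library's API; the helper names (`build_zero_shot_cot`, `build_few_shot_cot`) and the exemplar questions are hypothetical.

```python
# Zero-shot CoT: append a reasoning trigger phrase so the model
# generates its own intermediate steps before the answer.
ZERO_SHOT_TRIGGER = "Let's think step by step."

def build_zero_shot_cot(question: str) -> str:
    return f"Q: {question}\nA: {ZERO_SHOT_TRIGGER}"

# Few-shot CoT: prepend worked (question, reasoning, answer) exemplars
# so the model imitates the step-by-step format on the new question.
def build_few_shot_cot(exemplars: list[tuple[str, str, str]], question: str) -> str:
    parts = []
    for q, reasoning, answer in exemplars:
        parts.append(f"Q: {q}\nA: {reasoning} The answer is {answer}.")
    parts.append(f"Q: {question}\nA:")  # model continues from here
    return "\n\n".join(parts)

exemplars = [
    (
        "What is 23 * 15?",
        "23 * 10 = 230, and 23 * 5 is half of that, 115. 230 + 115 = 345.",
        "345",
    ),
]
prompt = build_few_shot_cot(exemplars, "What is 41 * 12?")
```

The resulting string is sent to the LLM as-is; the trailing "A:" in the few-shot prompt invites the model to continue with its own reasoning chain and final answer.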

Related Terms