Small language models

Small language models (SLMs) are language models with a significantly reduced number of parameters compared to large language models (LLMs). They are designed to perform specific tasks efficiently, often with lower computational costs and resource requirements.

Explanation

SLMs typically range from a few million to several billion parameters, whereas LLMs can have hundreds of billions or even trillions of parameters. This smaller size enables SLMs to be deployed on resource-constrained devices like mobile phones or edge devices, making them suitable for applications where low latency and privacy are critical. While they might not possess the same level of general knowledge or reasoning capabilities as LLMs, SLMs can be fine-tuned on specific datasets to achieve comparable or even superior performance for targeted tasks. They are often preferred in scenarios where explainability, auditability, and reduced environmental impact are important considerations. Furthermore, their compact size facilitates faster training and experimentation cycles.
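To make the deployment point concrete, a back-of-the-envelope memory estimate shows why parameter count dominates feasibility on constrained hardware. This is an illustrative sketch (the parameter counts and the fp16 assumption are examples, not figures from this glossary):

```python
def model_memory_gb(num_params: float, bytes_per_param: int = 2) -> float:
    """Approximate memory needed just to hold the weights.

    Assumes fp16 storage (2 bytes per parameter) and ignores
    activations, KV cache, and runtime overhead.
    """
    return num_params * bytes_per_param / 1024**3

# A hypothetical 3B-parameter SLM vs. a 175B-parameter LLM, both in fp16:
slm_gb = model_memory_gb(3e9)      # roughly 5.6 GB -> plausible on edge hardware
llm_gb = model_memory_gb(175e9)    # roughly 326 GB -> multi-GPU server territory
```

Even before considering latency or energy use, the weight footprint alone shows why an SLM can run on a phone or edge device while an LLM of this size cannot.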

Related Terms