
Parameters

In the context of machine learning models, parameters are the internal variables that the model learns from the training data. They are estimated or learned from data rather than set by hand, they are adjusted to minimize the difference between the model's predictions and the actual values, and together they determine the model's skill on a problem.
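As a concrete illustration (a minimal sketch; the function name and values are assumptions, not from the original): in a one-feature linear model, the slope and intercept are the parameters that training estimates from data.

```python
def predict(x, w, b):
    # w (slope) and b (intercept) are the model's parameters:
    # internal values estimated from data during training
    return w * x + b

# With parameters w=2.0 and b=1.0, the model maps input 3 to 7.0.
y = predict(3, 2.0, 1.0)
```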

Explanation

Parameters are the core components of a machine learning model that are adjusted during the training process. They represent the learned relationships within the data. For example, in a linear regression model, the coefficients of the input features are the parameters. In neural networks, the weights and biases of the connections between neurons are the parameters.

The training process involves feeding the model with data and iteratively updating these parameters using a chosen optimization algorithm (e.g., gradient descent) and a loss function. The loss function quantifies the error between the model's predictions and the true values; the optimization algorithm adjusts the parameters to minimize this error.

A model with a large number of parameters can potentially learn more complex relationships, but it is also more prone to overfitting, where the model performs well on the training data but poorly on unseen data. Regularization techniques are often used to prevent overfitting in such cases. The number of parameters is often used as a proxy for the size and complexity of a model.
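The training process described above can be sketched as a gradient-descent fit of a one-feature linear model on a mean-squared-error loss (pure Python; the synthetic data, learning rate, and epoch count are illustrative assumptions, not from the original):

```python
def fit_linear(data, lr=0.01, epochs=3000):
    """Fit y ~ w*x + b by gradient descent on mean squared error."""
    w, b = 0.0, 0.0                      # parameters, arbitrary initial values
    n = len(data)
    for _ in range(epochs):
        dw = db = 0.0
        for x, y in data:
            err = (w * x + b) - y        # prediction error for this example
            dw += 2 * err * x / n        # d(MSE)/dw, accumulated over the data
            db += 2 * err / n            # d(MSE)/db
        w -= lr * dw                     # update: step against the gradient
        b -= lr * db
    return w, b

# Synthetic data generated from y = 2x + 1; training should recover
# parameters close to w = 2 and b = 1.
data = [(x, 2 * x + 1) for x in range(10)]
w, b = fit_linear(data)
```

Each pass over the data computes the gradient of the loss with respect to the parameters and nudges them in the direction that reduces the error, which is exactly the iterative updating the paragraph describes.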

Related Terms