Neural Networks

Forward propagation

Forward propagation, also known as forward pass, is the process by which a neural network computes an output from a given input. It involves passing the input data through the network's layers, with each layer performing a transformation on the data based on its weights and biases, ultimately producing a prediction at the output layer.

Explanation

In forward propagation, the input data enters the first layer of the neural network. Each neuron in that layer computes a weighted sum of its inputs, adds a bias term, and applies an activation function to produce an output. That output becomes the input to the next layer, and the process repeats until the signal reaches the output layer, which produces the network's prediction.

Forward propagation is deterministic for a fixed set of weights and biases; training a network with backpropagation therefore amounts to finding the weights and biases that minimize the difference between the predicted outputs and the target values in the training data.

The efficiency of forward propagation is critical for real-time applications such as image recognition and natural language processing, where low latency is essential. Vectorization and parallel computing are commonly used to speed it up, enabling neural networks to process large amounts of data quickly. Variations on forward propagation also exist to make inference more efficient, such as speculative decoding in language models.
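The layer-by-layer computation described above can be sketched in a few lines of NumPy. This is a minimal illustration, not a production implementation: the layer sizes, the ReLU activation, and the random weights are all arbitrary choices made for the example.

```python
import numpy as np

def relu(x):
    # ReLU activation: elementwise max(0, x)
    return np.maximum(0.0, x)

def forward(x, layers):
    """Propagate input x through a list of (weights, bias) layers."""
    a = x
    for W, b in layers:
        # Weighted sum of inputs plus bias, then the activation function;
        # the result becomes the input to the next layer.
        a = relu(W @ a + b)
    return a

# Example network: 3 inputs -> 4 hidden units -> 2 outputs.
# Weights are random and untrained; real networks learn these values.
rng = np.random.default_rng(0)
layers = [
    (rng.standard_normal((4, 3)), np.zeros(4)),
    (rng.standard_normal((2, 4)), np.zeros(2)),
]

x = np.array([1.0, 0.5, -0.2])
output = forward(x, layers)
print(output.shape)  # one value per output neuron: (2,)
```

Because each layer is a single matrix-vector product, a whole batch of inputs can be pushed through at once by stacking them into a matrix — this is the vectorization mentioned above, and it is why forward propagation maps so well onto GPUs.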

Related Terms