
An activation function in a neural network is a mathematical function applied to a neuron's weighted sum of inputs to produce its output, introducing non-linearity into the model. This non-linearity is what enables neural networks to learn and model complex patterns beyond simple linear relationships.
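As a minimal sketch of that definition (the input, weight, and bias values below, and the choice of ReLU, are illustrative assumptions), a neuron computes a weighted sum of its inputs plus a bias and passes the result through the activation function:

```python
import numpy as np

def neuron(x, w, b, activation):
    """Single neuron: activation(w . x + b)."""
    z = np.dot(w, x) + b   # linear pre-activation
    return activation(z)   # non-linearity applied to the sum

# Illustrative values; ReLU used as the example activation.
x = np.array([0.5, -1.2, 3.0])   # inputs
w = np.array([0.4, 0.7, -0.2])   # weights
b = 0.1                          # bias
relu = lambda z: np.maximum(0.0, z)

print(neuron(x, w, b, relu))  # 0.0, since w . x + b is negative here
```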

Key Roles of Activation Functions:

  • Non-Linearity: They allow neural networks to approximate complex, non-linear mappings between inputs and outputs; without them, stacked layers collapse into a single linear transformation (a short demonstration follows this list).
  • Controlling Output: They shape each neuron's output, for example by squashing it into a fixed range, which affects how well the network can learn and represent data.
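
To see why non-linearity matters, the following sketch (random matrices generated with NumPy, chosen purely for illustration) shows that two linear layers without an activation collapse into a single linear map, while inserting a ReLU between them breaks that equivalence:

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 3))   # first layer weights
W2 = rng.normal(size=(2, 4))   # second layer weights
x = rng.normal(size=3)         # an arbitrary input

# Without an activation, two layers are just one linear map:
print(np.allclose(W2 @ (W1 @ x), (W2 @ W1) @ x))        # True

# With a ReLU in between, the composition is no longer linear:
relu = lambda z: np.maximum(0.0, z)
print(np.allclose(W2 @ relu(W1 @ x), (W2 @ W1) @ x))    # False (in general)
```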

Common Types of Activation Functions (each is sketched in code after this list):

  • Sigmoid: Maps input values to a range between 0 and 1, often used in binary classification tasks.
  • Tanh (Hyperbolic Tangent): Maps input values to a range between -1 and 1, providing outputs centered around zero.
  • ReLU (Rectified Linear Unit): Outputs the input directly if positive; otherwise, it outputs zero, promoting sparsity and reducing computation.
  • Softmax: Converts a vector of values into a probability distribution, commonly used in multi-class classification problems.
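
The four functions above can be written directly; this is an illustrative NumPy sketch rather than a production implementation (deep-learning frameworks ship their own optimized versions):

```python
import numpy as np

def sigmoid(z):
    """Squash inputs into (0, 1); common for binary classification outputs."""
    return 1.0 / (1.0 + np.exp(-z))

def tanh(z):
    """Squash inputs into (-1, 1); outputs are zero-centered."""
    return np.tanh(z)

def relu(z):
    """Pass positive inputs through unchanged; clamp negatives to zero."""
    return np.maximum(0.0, z)

def softmax(z):
    """Turn a vector of scores into a probability distribution.
    Subtracting the max before exponentiating is a standard
    numerical-stability trick."""
    e = np.exp(z - np.max(z))
    return e / e.sum()

z = np.array([-2.0, -0.5, 0.0, 1.5])
print(sigmoid(z))   # values in (0, 1)
print(tanh(z))      # values in (-1, 1)
print(relu(z))      # [0.  0.  0.  1.5]
print(softmax(z))   # non-negative entries that sum to 1
```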