Input: Training data $\mathcal{D} = \{(x, y)\}$, learning rate $\eta$, initial weights $w$, bias $b$
for each epoch do
    $\text{errors} = 0$
    for each $(x, y) \in \mathcal{D}$ do
        $\hat{y} = \text{Activation}(w \cdot x + b)$; // Forward pass
        $e = y - \hat{y}$; // Error computation
        if $e \neq 0$ then
            $w = w + \eta \cdot e \cdot x$; // Update weights
            $b = b + \eta \cdot e$; // Update bias
            $\text{errors} = \text{errors} + 1$
        end if
    end for
    if $\text{errors} = 0$ then break; // Convergence check: no misclassifications this epoch
end for
Output: Trained parameters $(w, b)$
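As a concrete illustration, here is a minimal NumPy sketch of the listing above, assuming a Heaviside (step) activation and binary targets in $\{0, 1\}$; the function name `train_perceptron`, the default learning rate, and the epoch cap are illustrative choices, not part of the original listing.

```python
import numpy as np

def train_perceptron(X, y, eta=0.1, max_epochs=100):
    """Train a perceptron with a step activation on binary labels {0, 1}.

    X : (n_samples, n_features) array of inputs
    y : (n_samples,) array of 0/1 targets
    """
    w = np.zeros(X.shape[1])   # initial weights
    b = 0.0                    # initial bias

    for _ in range(max_epochs):
        errors = 0
        for x_i, y_i in zip(X, y):
            y_hat = 1 if np.dot(w, x_i) + b > 0 else 0   # forward pass (step activation)
            e = y_i - y_hat                              # error computation
            if e != 0:
                w = w + eta * e * x_i                    # update weights
                b = b + eta * e                          # update bias
                errors += 1
        if errors == 0:                                  # convergence: no misclassifications
            break
    return w, b

if __name__ == "__main__":
    # Illustrative usage: learn the (linearly separable) AND function
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([0, 0, 0, 1])
    w, b = train_perceptron(X, y, eta=0.1)
```

Note that the per-sample update rule only reaches the `errors = 0` exit when the data are linearly separable; otherwise the loop simply stops after `max_epochs` passes.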