What is NMF?

Mar 13, 2026

Nonnegative Matrix Factorization decomposes a matrix as X ≈ U × V, where X, U, and V all have nonnegative entries. This constraint produces interpretable, parts-based representations — no subtraction, only addition.
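
A minimal sketch of the idea, using scikit-learn's NMF on a random nonnegative matrix; the rank, seed, and variable names here are illustrative, not from the post:

```python
import numpy as np
from sklearn.decomposition import NMF

# Toy nonnegative data matrix X: 100 samples x 20 features.
rng = np.random.default_rng(0)
X = rng.random((100, 20))

# Factor X ≈ U @ V with rank 5; both factors are constrained to stay >= 0.
model = NMF(n_components=5, init="nndsvda", random_state=0, max_iter=500)
U = model.fit_transform(X)   # (100, 5) nonnegative coefficients
V = model.components_        # (5, 20) nonnegative parts

print(U.min() >= 0, V.min() >= 0)          # True True: no subtraction anywhere
print(np.linalg.norm(X - U @ V, "fro"))    # reconstruction error
```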

PCA vs NMF

Mar 13, 2026

PCA finds orthogonal axes that can go negative. NMF stays in the nonnegative cone — giving you parts you can actually interpret.
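
A quick way to see the contrast on toy data (purely illustrative): PCA's components carry negative weights, so features get both added and subtracted, while NMF's components never go below zero.

```python
import numpy as np
from sklearn.decomposition import PCA, NMF

rng = np.random.default_rng(0)
X = rng.random((200, 30))  # nonnegative toy data

pca_parts = PCA(n_components=5).fit(X).components_
nmf_parts = NMF(n_components=5, init="nndsvda", random_state=0,
                max_iter=500).fit(X).components_

print(pca_parts.min())   # negative: PCA axes mix addition and subtraction
print(nmf_parts.min())   # >= 0: NMF parts only ever add up
```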

NMF: The Complete Study Guide

Mar 13, 2026

A comprehensive 4-minute animated walkthrough of Nonnegative Matrix Factorization — from definition and properties through algorithms, constrained variants, and structured extensions.

NMF is Secretly K-Means

Mar 13, 2026

Under squared Euclidean distance, NMF with hard assignments recovers K-means clustering. Under KL divergence, it recovers PLSA. Same matrix, different lens.
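
One way to check the Euclidean half of that claim on toy data: take a K-means fit, write its hard assignments as a one-hot factor, and the squared-error NMF objective collapses to the K-means inertia. The dataset, shift, and cluster count below are just for illustration.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

# Toy data with 3 clusters, shifted so every entry is nonnegative.
X, _ = make_blobs(n_samples=300, centers=3, random_state=0)
X = X - X.min()

km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)

# Hard assignments as a one-hot nonnegative factor U, centroids as V.
U = np.eye(3)[km.labels_]        # (300, 3), each row a single 1
V = km.cluster_centers_          # (3, 2), nonnegative after the shift

# The squared Euclidean NMF objective with this U, V is the K-means inertia.
print(np.linalg.norm(X - U @ V, "fro") ** 2)  # ≈ km.inertia_
print(km.inertia_)
```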

The Bias-Variance Tradeoff

Feb 27, 2026

Too simple and you miss the pattern. Too complex and you memorize the noise. Every modeling decision navigates that U-curve.
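
A rough sketch of that U-curve, assuming a polynomial-regression setup on noisy sine data (the degrees, sample size, and noise level are arbitrary): training error keeps falling with complexity, test error bottoms out and then climbs.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

# Noisy sine data: a true pattern plus noise that a complex model can memorize.
rng = np.random.default_rng(0)
x = rng.uniform(0, 1, 80)[:, None]
y = np.sin(2 * np.pi * x).ravel() + rng.normal(0, 0.3, 80)
x_tr, x_te, y_tr, y_te = train_test_split(x, y, random_state=0)

for degree in (1, 3, 15):
    model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
    model.fit(x_tr, y_tr)
    print(degree,
          mean_squared_error(y_tr, model.predict(x_tr)),   # train error keeps dropping
          mean_squared_error(y_te, model.predict(x_te)))   # test error dips, then rises
```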

Linear Models

Feb 26, 2026

92% accuracy. Trains in 0.3 seconds. Fully interpretable. If you can't beat a linear model, your neural network isn't learning — it's memorizing.
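
The specific numbers belong to the post; as a generic illustration of the baseline habit, something like this (dataset chosen only for convenience) gives a strong linear benchmark in well under a second.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# A plain linear classifier as the baseline any fancier model has to beat.
X, y = load_breast_cancer(return_X_y=True)
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
print(cross_val_score(clf, X, y, cv=5).mean())  # cross-validated accuracy
```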

The Perceptron

Feb 25, 2026

Rosenblatt builds the perceptron in 1958. GPT-4 still uses the same linear separator at its final layer. Some ideas don't get replaced — they get buried under complexity.

Bayesian Probability

Feb 25, 2026

A 99% accurate test says you're sick. The real odds? 9%. P(disease | positive) is not P(positive | disease) — that one symbol flip changes everything.
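
The arithmetic behind that flip, assuming a 0.1% base rate and reading "99% accurate" as 99% sensitivity and 99% specificity (both assumptions; the post's exact setup isn't spelled out here):

```python
# Bayes' rule: P(disease | positive) = P(positive | disease) * P(disease) / P(positive)
p_disease = 0.001                   # assumed base rate: 1 in 1,000
p_pos_given_disease = 0.99          # sensitivity
p_pos_given_healthy = 0.01          # false-positive rate (1 - specificity)

p_pos = (p_pos_given_disease * p_disease
         + p_pos_given_healthy * (1 - p_disease))
p_disease_given_pos = p_pos_given_disease * p_disease / p_pos
print(p_disease_given_pos)          # ≈ 0.09: roughly 9%, not 99%
```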

Probability as Ignorance

Feb 24, 2026

P(heads) = 0.5 doesn't describe the coin — it describes what you don't know about the flip. Probability isn't a property of the universe. It's a property of your ignorance.