Book 0 · Part 1
Layer 0
Part 1 — Learning as Optimization
Learning is expected loss minimization under uncertainty; training is optimization on samples. This part locks in the objective and the core failure mode: the gap between the distribution you train on and the one you deploy into.
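A minimal sketch of both claims, under assumptions chosen for illustration (they are not from this book): a constant predictor with squared-error loss, training data drawn from Uniform(0, 1), and a shifted deployment distribution Uniform(0.5, 1.5). Training picks the parameter that minimizes loss on the samples; the expected loss is approximated with a large fresh sample; the shifted evaluation shows the distribution gap.

```python
import random

random.seed(0)

# Training data: 20 samples from the "real" distribution, Uniform(0, 1).
# With squared-error loss and a constant predictor theta, the minimizer
# of the EXPECTED loss is the true mean, 0.5.
train = [random.random() for _ in range(20)]

# Training is optimization on samples: the empirical-loss minimizer
# under squared error is the sample mean.
theta = sum(train) / len(train)
empirical_loss = sum((x - theta) ** 2 for x in train) / len(train)

# Expected loss, approximated with a large fresh sample from the
# same distribution the model was trained on.
test = [random.random() for _ in range(100_000)]
expected_loss = sum((x - theta) ** 2 for x in test) / len(test)

# Distribution gap: evaluate the same theta on a shifted distribution,
# Uniform(0.5, 1.5). The objective was minimized for the wrong world.
shifted = [0.5 + random.random() for _ in range(100_000)]
shifted_loss = sum((x - theta) ** 2 for x in shifted) / len(shifted)

print(f"theta={theta:.3f}  empirical={empirical_loss:.3f}  "
      f"expected={expected_loss:.3f}  shifted={shifted_loss:.3f}")
```

The shifted loss exceeds the expected loss on the training distribution: the optimizer did its job on the samples it saw, which is exactly why the distribution gap, not the optimization, is the failure mode to watch.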
Concepts
Chapter 0.1 — Learning as Expected Loss Minimization
Chapter 0.2 — Data, Reality, and the Distribution Gap
Chapter 0.3 — Why Training Is Optimization, Not Intelligence
Next
Continue to Part 2 — Linear Algebra
Start Chapter 0.4 — Scalars, Vectors, and Feature Spaces