
Gradient Descent Unpacked: From Valleys to Neural Networks

Author
Mike Breault
Published
Fri 27 Dec 2024

A concise, STEM-minded tour of gradient descent. We start with the valley-floor intuition, trace the method's 19th- and 20th-century roots (Cauchy and Hadamard), and show how it recasts equation-solving as a minimization problem. The episode digs into the learning rate, local minima versus saddle points, and practical variants (SGD, momentum, Nesterov, and Adam) before turning to real-world applications in training deep neural networks and other nonlinear optimization tasks.
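For listeners who want the update rules on paper, here is a minimal sketch of plain gradient descent, momentum, and Adam on a toy one-dimensional loss. The quadratic loss, hyperparameter values, and function names are illustrative choices, not taken from the episode.

```python
# Illustrative sketch: plain gradient descent, momentum, and Adam on a toy
# 1-D loss f(x) = (x - 3)^2, which is minimized at x = 3. All constants here
# are assumptions for the demo, not values from the episode.
import math

def loss_grad(x):
    # Gradient of f(x) = (x - 3)^2.
    return 2.0 * (x - 3.0)

def gradient_descent(x, lr=0.1, steps=100):
    # Plain gradient descent: step against the gradient, scaled by lr.
    for _ in range(steps):
        x = x - lr * loss_grad(x)
    return x

def momentum_descent(x, lr=0.1, beta=0.9, steps=100):
    # Momentum: accumulate a velocity so consistent directions speed up.
    # (Nesterov differs by evaluating the gradient at the look-ahead
    # point x + beta * v instead of at x.)
    v = 0.0
    for _ in range(steps):
        v = beta * v - lr * loss_grad(x)
        x = x + v
    return x

def adam(x, lr=0.1, b1=0.9, b2=0.999, eps=1e-8, steps=100):
    # Adam: adapt the step size using bias-corrected estimates of the
    # gradient's first moment (m) and second moment (v).
    m, v = 0.0, 0.0
    for t in range(1, steps + 1):
        g = loss_grad(x)
        m = b1 * m + (1 - b1) * g
        v = b2 * v + (1 - b2) * g * g
        m_hat = m / (1 - b1 ** t)
        v_hat = v / (1 - b2 ** t)
        x = x - lr * m_hat / (math.sqrt(v_hat) + eps)
    return x

# All three optimizers land near the minimum at x = 3.0.
print(gradient_descent(0.0), momentum_descent(0.0), adam(0.0))
```

In a neural network the scalar x becomes the full weight vector and loss_grad is computed by backpropagation, but the update rules are the same ones sketched above.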


Note: This podcast was AI-generated, and sometimes AI can make mistakes. Please double-check any critical information.

Sponsored by Embersilk LLC
