# Stochastic Gradient Descent - Wikipedia - en.wikipedia.org

![rw-book-cover|200x400](https://readwise-assets.s3.amazonaws.com/static/images/article1.be68295a7e40.png)

## Metadata

- Author: **en.wikipedia.org**
- Full Title: Stochastic Gradient Descent - Wikipedia
- Category: #articles
- Tags: #ai #mathematic
- URL: https://en.wikipedia.org/wiki/Stochastic_gradient_descent

## Highlights

- Stochastic gradient descent (often abbreviated SGD) is an iterative method for optimizing an objective function with suitable smoothness properties (e.g. differentiable or subdifferentiable). It can be regarded as a stochastic approximation of gradient descent optimization, since it replaces the actual gradient (calculated from the entire data set) by an estimate thereof (calculated from a randomly selected subset of the data). Especially in high-dimensional optimization problems, this reduces the very high computational burden, achieving faster iterations in exchange for a lower convergence rate.
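
To make the "estimate thereof" concrete, here is a minimal sketch of minibatch SGD on a least-squares problem. The squared-error loss, learning rate, batch size, and step count are illustrative assumptions, not part of the highlighted text; only the core idea, estimating the gradient from a random subset rather than the full data set, comes from the highlight.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: y = X @ w_true + noise (assumed setup for illustration)
n_samples, n_features = 1000, 5
X = rng.normal(size=(n_samples, n_features))
w_true = rng.normal(size=n_features)
y = X @ w_true + 0.1 * rng.normal(size=n_samples)

w = np.zeros(n_features)   # parameters to optimize
learning_rate = 0.01       # assumed step size
batch_size = 32            # size of the randomly selected subset

for step in range(2000):
    # Estimate the gradient from a random subset of the data
    # instead of computing it over the entire data set.
    idx = rng.choice(n_samples, size=batch_size, replace=False)
    X_b, y_b = X[idx], y[idx]
    # Gradient of the mean squared error on the minibatch.
    grad = (2.0 / batch_size) * X_b.T @ (X_b @ w - y_b)
    # Standard gradient-descent update, using the stochastic estimate.
    w -= learning_rate * grad

print("recovered w close to w_true:", np.allclose(w, w_true, atol=0.05))
```

Each iteration touches only `batch_size` rows of `X`, which is what makes the per-iteration cost low compared to full-batch gradient descent; the price, as the highlight notes, is a noisier gradient and thus a lower convergence rate.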