Effect of Regularization in Neural Net Training - medium.com

## Metadata
- Author: **medium.com**
- Full Title: Effect of Regularization in Neural Net Training
- Category: #articles
- Tags: #ai
- URL: https://medium.com/deep-learning-experiments/science-behind-regularization-in-neural-net-training-9a3e0529ab80

## Highlights
- On applying dropout, the distribution of weights across all layers changes from a zero-mean uniform distribution to a zero-mean Gaussian distribution. This is similar to the weight-decay effect of L2 regularization on model weights.
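  - The dropout mechanism behind this highlight can be sketched in a few lines. This is a minimal, hypothetical "inverted dropout" implementation (not code from the article): units are zeroed with probability `p_drop` during training, and survivors are rescaled so the expected activation is unchanged.

    ```python
    import numpy as np

    def dropout(x, p_drop=0.5, rng=None, training=True):
        """Inverted dropout: zero each unit with probability p_drop,
        scale survivors by 1/(1 - p_drop) so E[output] == E[input]."""
        if not training or p_drop == 0.0:
            return x
        rng = np.random.default_rng() if rng is None else rng
        mask = rng.random(x.shape) >= p_drop  # True = unit survives
        return x * mask / (1.0 - p_drop)

    rng = np.random.default_rng(0)
    acts = rng.standard_normal(10_000) + 3.0   # dummy activations
    dropped = dropout(acts, p_drop=0.5, rng=rng)
    # Roughly half the units are zeroed, but the mean is preserved in expectation.
    ```

    At test time the layer is a no-op (`training=False`), which is why no rescaling is needed at inference with the inverted formulation.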
- Linear separability: Sparse representations are also more likely to be linearly separable, or more easily separable with less non-linear machinery, simply because the information is represented in a high-dimensional space.
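  - A toy illustration of this point (my own sketch, not from the article): XOR is not linearly separable in 2-D, but after lifting each input to a sparse, high-dimensional one-hot code, a single linear layer separates the classes. The corner-indexing feature map and the weight vector below are hand-crafted for illustration.

    ```python
    import numpy as np

    # XOR in 2-D: no single line separates the two classes.
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([0, 1, 1, 0])

    def to_sparse(x):
        """Lift a binary 2-D point to a sparse 4-D one-hot code
        (one active unit per corner of the unit square)."""
        z = np.zeros(4)
        z[int(x[0]) * 2 + int(x[1])] = 1.0
        return z

    Z = np.array([to_sparse(x) for x in X])

    # In the sparse space a plain linear threshold unit suffices:
    w = np.array([-1.0, 1.0, 1.0, -1.0])  # one weight per corner
    pred = (Z @ w > 0).astype(int)        # matches y exactly
    ```

    The lifted representation is sparse (one active unit out of four), and that sparsity is what lets a purely linear decision boundary do the work.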