# Metadata
Source URL:: https://medium.com/deep-learning-experiments/science-behind-regularization-in-neural-net-training-9a3e0529ab80
Topics:: #ai
---
# Effect of Regularization in Neural Net Training
co-authored with Daryl Chang
## Highlights
> [!quote]+ Updated on 161022_111001
>
> On applying dropout, the distribution of weights across all layers changes from a zero-mean uniform distribution to a zero-mean Gaussian distribution. This is similar to the weight-decaying effect of L2 regularization on model weights.
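
A minimal sketch (not from the article) of the setup this highlight describes: a small MLP trained with and without dropout plus L2 weight decay, then comparing the resulting weight statistics. The layer sizes, dropout probability, learning rate, weight-decay coefficient, and synthetic data are all illustrative assumptions, not the authors' actual experiment.

```python
# Sketch: compare weight distributions of a tiny MLP trained with vs. without
# dropout + L2 weight decay. All hyperparameters below are illustrative choices.
import torch
import torch.nn as nn


def make_model(p_drop: float) -> nn.Sequential:
    # Two hidden layers with dropout after each ReLU (p_drop=0 disables dropout).
    return nn.Sequential(
        nn.Linear(20, 64), nn.ReLU(), nn.Dropout(p_drop),
        nn.Linear(64, 64), nn.ReLU(), nn.Dropout(p_drop),
        nn.Linear(64, 2),
    )


def train(model: nn.Module, weight_decay: float, steps: int = 500) -> nn.Module:
    # Synthetic binary classification data; weight_decay is the L2 coefficient in SGD.
    X = torch.randn(1024, 20)
    y = (X[:, 0] > 0).long()
    opt = torch.optim.SGD(model.parameters(), lr=0.1, weight_decay=weight_decay)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(steps):
        opt.zero_grad()
        loss_fn(model(X), y).backward()
        opt.step()
    return model


for name, (p_drop, wd) in {"baseline": (0.0, 0.0), "dropout+L2": (0.5, 1e-3)}.items():
    m = train(make_model(p_drop), wd)
    # Flatten only the weight matrices (dim == 2), skipping bias vectors.
    w = torch.cat([p.detach().flatten() for p in m.parameters() if p.dim() == 2])
    print(f"{name:>11}: mean={w.mean():+.4f}, std={w.std():.4f}")
```

Plotting a histogram of `w` for each run (e.g. with `matplotlib.pyplot.hist`) is the natural way to see the shape of the distribution, not just its mean and spread.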
> [!quote]+ Updated on 161022_111242
>
> Linear separability: Sparse representations are also more likely to be linearly separable, or more easily separable with less non-linear machinery, simply because the information is represented in a high-dimensional space.
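
A toy sketch (my own illustration, not from the article) of the linear-separability point: XOR-like labels cannot be fit by a linear classifier on raw 2-D inputs, but after a random high-dimensional ReLU projection, where roughly half the activations are zero, a plain linear model does far better. The dimensions, seed, and use of `sklearn.linear_model.LogisticRegression` are assumptions for the example.

```python
# Sketch: a sparse, high-dimensional representation makes XOR-like data
# (nearly) linearly separable, while the raw 2-D inputs are not.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(2000, 2))
y = (np.sign(X[:, 0]) != np.sign(X[:, 1])).astype(int)  # XOR of the sign bits

# Linear classifier on the raw 2-D inputs: near chance level (~0.5).
raw_acc = LogisticRegression().fit(X, y).score(X, y)

# Random projection to 512-D followed by ReLU; about half the features are zero.
W = rng.standard_normal((2, 512))
H = np.maximum(0.0, X @ W)
lifted_acc = LogisticRegression(max_iter=2000).fit(H, y).score(H, y)

print(f"accuracy on raw 2-D inputs     : {raw_acc:.2f}")
print(f"accuracy on 512-D ReLU features: {lifted_acc:.2f}")  # much higher than raw
```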