Created at 130723
# [Anonymous feedback](https://www.admonymous.co/louis030195)
# [[Epistemic status]]
#shower-thought
Last modified date: 130723
Commit: 0
# Related
# Self supervised learning
| Technique/Concept | Description |
| ------------------------------------------------------------------------ | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| **Contrastive Learning** | A technique where the model learns to pull representations of similar samples together and push dissimilar ones apart. Widely used to learn visual representations. |
| **Momentum Contrast (MoCo)** | A method that maintains a dynamic dictionary of encoded representations from the past and uses a contrastive loss to make the current representation similar to its positive counterpart in the dictionary and different from the negatives. |
| **SimCLR** | A simple framework for contrastive learning of visual representations. It removes the need for specialized architectures or a memory bank, simplifying contrastive learning. |
| **BYOL (Bootstrap Your Own Latent)** | A method that learns representations by training an online network to predict the target network's representation of a different augmented view of the same image. It does not require negative pairs for training. |
| **SwAV (Swapping Assignments between multiple Views of the same image)** | A method that uses online clustering and cluster assignment swapping to learn representations. It does not require pairwise comparisons of features. |
| **Noise Contrastive Estimation (NCE)** | A technique used to train models to distinguish a true data sample from noise samples. It's often used in language modeling tasks. |
| **Self-Supervised Learning with [[Transformer]]s** | Transformers such as BERT are pretrained in a self-supervised manner via masked language modeling: some tokens in a sentence are masked and the model learns to predict them. |
| **Self-Supervised Learning in Reinforcement Learning** | Techniques like curiosity-driven learning, where the agent is rewarded for exploring the environment, are self-supervised methods used in reinforcement learning. |
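A minimal sketch of the contrastive idea behind several rows above (MoCo, SimCLR, NCE): score a query against one positive and several negatives, then apply a softmax cross-entropy so the positive wins. This is the InfoNCE loss in plain NumPy; the vectors and temperature value are illustrative assumptions, not taken from any specific paper's setup.

```python
import numpy as np

def info_nce_loss(query, positive, negatives, temperature=0.1):
    """Contrastive (InfoNCE) loss for one query: pull the positive
    close, push the negatives away, via softmax over cosine similarities."""
    def unit(v):
        # Normalize so dot products become cosine similarities.
        return v / np.linalg.norm(v, axis=-1, keepdims=True)

    q, p, n = unit(query), unit(positive), unit(negatives)
    # Logit 0 is the positive pair; the rest are the negatives.
    logits = np.concatenate([[q @ p], n @ q]) / temperature
    logits -= logits.max()  # numerical stability before exp
    # Cross-entropy with the positive at index 0.
    return -logits[0] + np.log(np.exp(logits).sum())

rng = np.random.default_rng(0)
anchor = rng.normal(size=8)
similar = anchor + 0.01 * rng.normal(size=8)  # augmented "view" of anchor
dissimilar = rng.normal(size=(4, 8))          # unrelated samples

# Loss is small when the positive really is similar to the query...
low = info_nce_loss(anchor, similar, dissimilar)
# ...and large when a random sample is treated as the positive.
high = info_nce_loss(anchor, dissimilar[0],
                     np.vstack([similar, dissimilar[1:]]))
```

Minimizing this loss over many (query, positive, negatives) triples is what drives the encoder to map different views of the same sample to nearby points.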