#ai #compound-knowledge

Created at 240423

# [Anonymous feedback](https://www.admonymous.co/louis030195)

# [[Epistemic status]]

#shower-thought

Last modified date: 240423
Commit: 0

# Related

- [[Computing/NIPS 2022]]
- [[Computing/Intelligence/Machine Learning/Reinforcement Learning/Algorithms/Algorithms]]
- [[Computing/Using Torrent protocol for AI inference]]
- [[Computing/Prediction is compression]]
- [[Computing/Intelligence/Machine Learning/Geometric deep learning/Geometric deep learning]]
- [[Computing/Intelligence/Machine Learning/8 bit nn]]

# Forward-forward algorithm

https://arxiv.org/abs/2212.13345

The paper "The Forward-Forward Algorithm: Some Preliminary Investigations" by [[Geoffrey Hinton]] introduces a new learning procedure for neural networks, the Forward-Forward algorithm. It replaces the forward and backward passes of backpropagation with two forward passes, one on positive data and one on negative data, and shows that it works well enough on a few small problems to merit further investigation.

Key insights and lessons learned from the paper:

- The Forward-Forward algorithm is a new learning procedure for neural networks that can replace the forward and backward passes of backpropagation.
- The algorithm uses two forward passes, one with positive data and the other with negative data; each layer has its own objective function: high goodness for positive data and low goodness for negative data.
- The sum of the squared activities in a layer can be used as the goodness, but there are many other possibilities, including minus the sum of the squared activities.
- If the positive and negative passes could be separated in time, the negative passes could be done offline. This would make learning in the positive pass much simpler and would allow video to be pipelined through the network without ever storing activities or stopping to propagate derivatives.

Three questions for the authors:

1. How does the Forward-Forward algorithm compare to other learning procedures for neural networks in terms of performance and computational complexity?
2. Are there any limitations or challenges in applying the Forward-Forward algorithm to larger and more complex problems?
3. How do you see the Forward-Forward algorithm being integrated with other existing techniques in deep learning, such as convolutional or recurrent neural networks?

Three suggestions for future research directions:

1. Investigate the performance and scalability of the Forward-Forward algorithm on larger and more complex datasets and compare it with other state-of-the-art learning procedures for neural networks.
2. Explore the potential of the Forward-Forward algorithm in unsupervised and semi-supervised learning, as well as in generative modeling and reinforcement learning.
3. Develop novel objective functions for each layer that go beyond the sum of the squared activities and take into account other properties of the data, such as sparsity, diversity, or correlation.

Five relevant references:

1. Rumelhart, D. E., Hinton, G. E., & Williams, R. J. (1986). Learning representations by back-propagating errors. Nature, 323(6088), 533-536.
2. LeCun, Y., Bengio, Y., & Hinton, G. (2015). Deep learning. Nature, 521(7553), 436-444.
3. Goodfellow, I., Bengio, Y., & Courville, A. (2016). Deep learning (Vol. 1). MIT Press.
4. Kingma, D. P., & Welling, M. (2019). An introduction to variational autoencoders. Foundations and Trends® in Machine Learning, 12(4), 307-392.
5. Sutton, R. S., & Barto, A. G. (2018). Reinforcement learning: An introduction (2nd ed.). MIT Press.
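The per-layer objective from the key insights (goodness = sum of squared activities, pushed high for positive data and low for negative data, with only local gradients) can be sketched in plain numpy. This is a minimal toy sketch under my own assumptions, not the paper's MNIST setup: the `FFLayer` class, the threshold/learning-rate values, and the synthetic positive/negative data are all illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

class FFLayer:
    """One Forward-Forward layer trained with a purely local objective (sketch)."""

    def __init__(self, n_in, n_out, threshold=2.0, lr=0.03):
        self.W = rng.normal(0, 1 / np.sqrt(n_in), size=(n_in, n_out))
        self.b = np.zeros(n_out)
        self.threshold = threshold  # goodness above this => "positive"
        self.lr = lr

    def forward(self, x):
        # Normalize the incoming activity vector so only its direction,
        # not its length (goodness), is passed to this layer.
        x = x / (np.linalg.norm(x, axis=1, keepdims=True) + 1e-8)
        return np.maximum(0.0, x @ self.W + self.b)  # ReLU activities

    def train_step(self, x_pos, x_neg):
        # One forward pass on positive data (sign=+1) and one on negative
        # data (sign=-1); no derivatives flow to any other layer.
        for x, sign in ((x_pos, +1.0), (x_neg, -1.0)):
            xn = x / (np.linalg.norm(x, axis=1, keepdims=True) + 1e-8)
            h = np.maximum(0.0, xn @ self.W + self.b)
            goodness = (h ** 2).sum(axis=1)  # sum of squared activities
            # p = probability the sample is positive; minimize -log p with
            # z = sign * (goodness - threshold).
            p = 1.0 / (1.0 + np.exp(-sign * (goodness - self.threshold)))
            dg = -sign * (1.0 - p)        # d(-log p)/d(goodness)
            dh = dg[:, None] * 2.0 * h    # d(goodness)/dh = 2h; ReLU mask via h
            self.W -= self.lr * (xn.T @ dh) / len(x)
            self.b -= self.lr * dh.mean(axis=0)

# Toy demo (assumed setup): positive samples cluster along a fixed direction,
# negative samples are isotropic noise.
layer = FFLayer(n_in=10, n_out=32)
direction = rng.normal(size=10)
for _ in range(500):
    x_pos = rng.normal(size=(64, 10)) * 0.1 + direction
    x_neg = rng.normal(size=(64, 10))
    layer.train_step(x_pos, x_neg)
```

After training, the layer's goodness should be higher on average for data resembling the positive distribution than for noise, which is exactly the separation the per-layer objective asks for.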