Abstract
In the last two decades, a significant increase in computing power and in the amount of available data has pushed researchers in artificial intelligence and data science towards connectionist systems, and particularly artificial neural networks, in lieu of older, more hardwired systems.
In this context, neural networks owe their impressive chain of successes to a single learning technique: derivative-based optimization using the backward propagation of gradients. Despite the successes this technique has brought to neural networks, it is important to note that a side effect of this trend has been that other learning techniques were pushed to the sidelines or even forgotten. In particular, one line of research that was sidelined by the success of gradient backpropagation concerns learning techniques inspired by biological neural networks, such as Hebbian learning, and techniques built upon them, such as energy-based models.
In this work, we first discuss the general principles and properties of sparse representations, that is, binary representations consisting of mostly zero bits and a small number of one bits, and then review some of the interesting properties of energy-based models. We then propose a new algorithm, inspired by the logic behind energy-based models, that can train multi-layer neural networks using a different kind of backpropagation, one that works with non-differentiable models having binary sparse activations and binary connections.
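To make the notion of a sparse representation concrete, the following minimal Python sketch (purely illustrative, not the algorithm proposed in this work) binarizes a dense vector by keeping only its k strongest entries; the function name to_sparse_binary and the top-k selection rule are assumptions made for the example.

```python
import numpy as np

def to_sparse_binary(x: np.ndarray, k: int) -> np.ndarray:
    """Return a binary vector with one-bits only at the k largest entries of x."""
    out = np.zeros_like(x, dtype=np.uint8)
    out[np.argsort(x)[-k:]] = 1   # keep the k strongest units active, zero the rest
    return out

dense = np.random.randn(64)
sparse = to_sparse_binary(dense, k=4)   # mostly zero bits, only 4 one bits
print(sparse.sum(), "active bits out of", sparse.size)
```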
Finally, we design and implement several types of models that use the new algorithm to learn sequential data, such as natural language sentences, in a multi-layered fashion, both directly, similar to a recurrent neural network, and indirectly through an attention-based model, similar to an encoder-decoder neural network.