Abstract:
Deep learning is widely recognized as a powerful tool for solving diverse problems such as radar target recognition, machine vision, and direction-of-arrival estimation, owing to its outstanding performance, which can largely be attributed to the ever-increasing size of deep neural networks (DNNs). Many state-of-the-art DNNs contain billions of parameters, making them nearly impossible to deploy on hardware with power or energy constraints.
An effective way to reduce the computational complexity of DNNs is layer factorization. In this thesis, the FLM-DNN method is proposed, which reduces both the memory footprint and the computational complexity of DNNs by applying binary matrix factorization to DNN layers. The performance of the proposed method is evaluated through computer simulations.
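To illustrate the general idea of factorizing a layer with a binary factor, the minimal sketch below approximates a dense weight matrix W by the product of a binary matrix B and a small real-valued matrix C. This is only an illustrative assumption: the alternating update, the layer sizes, and the rank k are hypothetical and are not the FLM-DNN procedure described in the thesis.

import numpy as np

# Illustrative sketch only: not the thesis's FLM-DNN algorithm.
# Idea: replace a dense layer's weight matrix W (m x n) with a binary
# factor B (m x k) times a small real-valued factor C (k x n), so the
# large factor costs one bit per entry instead of a full-precision word.

rng = np.random.default_rng(0)
m, n, k = 300, 100, 32              # hypothetical layer sizes and rank
W = rng.standard_normal((m, n))     # original dense weights

B = (rng.standard_normal((m, k)) > 0).astype(np.float32)  # binary {0,1} factor
for _ in range(20):
    # real-valued factor given B (least-squares fit), then re-binarize B
    C, *_ = np.linalg.lstsq(B, W, rcond=None)
    B = (W @ np.linalg.pinv(C) > 0.5).astype(np.float32)

rel_error = np.linalg.norm(W - B @ C) / np.linalg.norm(W)
dense_params = m * n                       # 32-bit words for the dense layer
factored_params = (m * k) / 32 + k * n     # binary entries take 1 bit each
print(f"relative error: {rel_error:.3f}, "
      f"parameter ratio: {factored_params / dense_params:.2f}")

Counting parameters this way shows where the memory saving comes from: the binary factor needs one bit per entry rather than 32, and the real-valued factor stays small whenever k is much smaller than m and n.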
The networks used for this purpose are an autoencoder, LeNet-300-100, LeNet-5, VGG, ResNet-50, ResNet-164, an RNN, a GRU, and two LSTM networks. These networks are trained and tested on the MNIST, CIFAR-10, CIFAR-100, ImageNet, and PTB datasets.
The simulation results confirm the effectiveness of the proposed method in reducing the computational complexity of DNNs while causing a negligible drop in accuracy. On average, compared to other DNN compression methods, the required memory and the number of FLOPs for a given DNN are 15-20% lower when it is compressed with FLM-DNN.
Keywords: Deep neural networks, Matrix factorization, Computation reduction