English Abstract
Due to the rapid progress in the digitization of paintings, massive collections of artwork are now available to the public. Consequently, the analysis of paintings and their classification for archiving, retrieval, and the indexing of artistic databases have received considerable attention. Given the successful performance of deep learning methods in machine vision problems, their application to the classification of artwork has been examined. Compared to the classification of natural objects, the classification of artworks is very challenging, since many artistic categories require the visualization and understanding of abstract concepts and a strong background in art history. Furthermore, distinguishing between art styles is difficult because of their strong similarity. Recent research has investigated the ability of transfer learning with pretrained neural networks to classify artworks. In this thesis, pretrained models and transfer learning are used to classify artworks by artistic style. In addition, a new two-level classification approach is introduced, aimed at improving the accuracy of style classification. In this method, artistic styles are divided into several groups, and a hierarchical label tree is created using the confusion matrix of the flat model. The output of the network is determined at two levels: the first level gives the group label of the input image, and the second level specifies its style. In the training phase, the classes that belong to the same group as the class of the input image are first determined, and the error is then calculated over this group only; the overall error is a weighted sum of the errors of the two levels. In the evaluation phase, the group of the image is identified first, and the style label is then determined among the members of that group. The main factor in the improved accuracy of this method is that it reduces the classification task to several simpler classification tasks. The proposed method was evaluated using two networks, DenseNet and ResNet, pretrained on ImageNet images. The results of the experiments performed on the WikiArt dataset show that the proposed method improves the accuracy of the baseline methods from 49% to 54% for the ResNet network and from 52% to 57% for the DenseNet network.
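The sketch below illustrates one possible reading of the two-level scheme described above: a shared pretrained backbone with a group head and a style head, a loss that combines the group-level error with a style error restricted to the classes of the target's group, and two-stage inference (group first, then style within that group). It is a minimal, hypothetical reconstruction rather than the thesis code; PyTorch/torchvision, the ResNet-50 backbone, the class counts, the placeholder STYLE_TO_GROUP mapping, and the weight alpha are all assumptions introduced only for illustration.

```python
# Illustrative sketch of a two-level (group + style) classifier and loss.
# NOT the thesis implementation: STYLE_TO_GROUP, NUM_STYLES, NUM_GROUPS,
# and alpha are hypothetical placeholders. Assumes PyTorch and torchvision.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import models

NUM_STYLES = 25   # e.g. WikiArt style classes
NUM_GROUPS = 5    # groups derived from the flat model's confusion matrix
# Placeholder style-to-group mapping; in the described method this would come
# from clustering the confusion matrix of the flat classifier.
STYLE_TO_GROUP = torch.tensor([i % NUM_GROUPS for i in range(NUM_STYLES)])


class TwoLevelClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
        feat_dim = backbone.fc.in_features
        backbone.fc = nn.Identity()                 # reuse pretrained features
        self.backbone = backbone
        self.group_head = nn.Linear(feat_dim, NUM_GROUPS)   # level 1: group label
        self.style_head = nn.Linear(feat_dim, NUM_STYLES)   # level 2: style label

    def forward(self, x):
        f = self.backbone(x)
        return self.group_head(f), self.style_head(f)


def two_level_loss(group_logits, style_logits, style_targets, alpha=0.5):
    """Weighted sum of the group-level error and the within-group style error."""
    groups = STYLE_TO_GROUP.to(style_targets.device)
    group_targets = groups[style_targets]
    group_loss = F.cross_entropy(group_logits, group_targets)

    # Restrict the style loss to classes in the same group as the target:
    # logits of classes outside that group are masked out before the softmax.
    same_group = groups.unsqueeze(0) == group_targets.unsqueeze(1)   # (B, NUM_STYLES)
    masked_logits = style_logits.masked_fill(~same_group, float("-inf"))
    style_loss = F.cross_entropy(masked_logits, style_targets)

    return alpha * group_loss + (1 - alpha) * style_loss


@torch.no_grad()
def predict(model, x):
    """Two-stage inference: pick the group first, then the style within that group."""
    group_logits, style_logits = model(x)
    pred_group = group_logits.argmax(dim=1)                          # level 1
    groups = STYLE_TO_GROUP.to(style_logits.device)
    in_group = groups.unsqueeze(0) == pred_group.unsqueeze(1)
    return style_logits.masked_fill(~in_group, float("-inf")).argmax(dim=1)  # level 2
```

The placeholder mapping is there only so the sketch runs end to end; the key design choice it demonstrates is that masking the style logits to the target's group turns the second level into a smaller softmax over group members, which is how the overall task is reduced to several simpler classification tasks.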