Abstract (in English)
Deep learning solves many problems that other areas of artificial intelligence have failed to solve. Despite all their advantages, deep neural networks are highly vulnerable to adversarial attacks, a vulnerability that significantly reduces the robust accuracy of object classifiers. Despite extensive research in this field, the robust accuracy of deep neural networks against various adversarial attacks remains low and has not yet reached a satisfactory level. Moreover, with the advent of newer and stronger adversarial attacks, making networks robust to malicious inputs has become far more challenging than before. In this thesis, we introduce a new method that builds on previous Transformer networks with deep convolution layers and implicit data augmentation, and we present this model in two sizes with different complexities. To evaluate the work, we compare this model with five other well-known Transformer models. We use the FGSM and PGD white-box attacks to assess robust accuracy. To evaluate the model further, we also examine it under two recent black-box attacks, Texture PatchAttack and Sparse-RS; these attacks are highly powerful and provide a demanding test of the proposed model's performance. The performance of the model presented in this thesis is examined on two datasets, GTSRB and CIFAR-100, which are among the most widely used in object classification. Using the proposed model, we achieve 99.1% clean accuracy, surpassing all previous work, and we increase the robust accuracy on the German traffic sign dataset from 59.4% in the previous best work to 77.8%. The results show that the proposed method, in addition to its strong results against white-box attacks, performs acceptably against the new black-box attacks. These findings support the robustness of this method for the traffic sign classification task.
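The white-box evaluation mentioned above uses FGSM, which perturbs an input one epsilon-sized step in the sign of the loss gradient. The following is only a minimal illustrative sketch of that attack on a hypothetical logistic-regression surrogate (the model, weights, and parameter names here are illustrative assumptions, not the thesis's Transformer model):

```python
import numpy as np

def fgsm_attack(x, w, b, y, eps):
    """Fast Gradient Sign Method on a toy logistic-regression surrogate.

    x: input vector with values in [0, 1]; w, b: surrogate model weights;
    y: true binary label (0 or 1); eps: perturbation budget.
    Returns an adversarial example clipped back to the valid [0, 1] range.
    """
    # Forward pass: sigmoid probability of class 1.
    z = float(np.dot(w, x) + b)
    p = 1.0 / (1.0 + np.exp(-z))
    # Gradient of the binary cross-entropy loss w.r.t. the input: (p - y) * w.
    grad_x = (p - y) * w
    # FGSM step: move each component one eps-step in the loss-increasing direction.
    x_adv = x + eps * np.sign(grad_x)
    return np.clip(x_adv, 0.0, 1.0)
```

PGD can be viewed as this same step applied iteratively with a projection back into the eps-ball after each step, which is why the two attacks are often reported together.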