English Abstract
Deep learning solves many problems that other branches of artificial intelligence cannot. Despite these advantages, deep neural networks are vulnerable to adversarial attacks. This vulnerability causes a significant drop in the adversarial accuracy of object detectors. Despite many efforts and studies on this subject, the adversarial accuracy of deep neural networks against several adversarial attacks remains low and has not yet reached an acceptable level. Moreover, with the emergence of newer and stronger adversarial attacks, making networks robust to adversarial attacks has become much more complex than before.
In this thesis, we present a new method based on Gabor filters to make object detectors robust to adversarial attacks. We then apply the proposed method to YOLOv3 with different backbones, SSD with different input sizes, and FRCNN, thereby introducing six robust object detectors. To evaluate the performance of the proposed models, we adversarially train them using three targeted attacks (TOG-fabrication, TOG-vanishing, and TOG-mislabeling) and three untargeted attacks (DAG, RAP, and UEA). The performance of the resulting models is examined on two well-known datasets, PASCAL VOC and MS COCO, which are among the most widely used datasets in the field of object detection. Extensive experiments show that this method improves the adversarial accuracy of object detection models without degrading their performance on clean data. In addition, to evaluate the models more thoroughly, we also test them against the new EBG attack; this attack is powerful and lets us examine the performance of our defense against combined black-box attacks. The results of this thesis show that, besides achieving acceptable results against older attacks and the targeted TOG attacks, the proposed defense also performs acceptably against new combined black-box attacks, so we can be hopeful about its reliability in different situations against different adversarial attacks. The best adversarial accuracy achieved by this method, on the best model, was 54.2%, a clear improvement over prior work in the literature.
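As a rough illustration of the core idea only, and not the implementation used in this thesis, the following sketch shows how a fixed Gabor filter bank could be prepended to a detector backbone as a convolutional front end in PyTorch; the function names, kernel size, and hyperparameters are assumptions made purely for illustration.

# Minimal illustrative sketch (assumed, not the thesis code): a fixed Gabor
# filter bank used as a convolutional front end ahead of a detector backbone.
import math
import numpy as np
import torch
import torch.nn as nn

def gabor_kernel(size=7, sigma=2.0, theta=0.0, lambd=4.0, gamma=0.5, psi=0.0):
    # Build one real-valued Gabor kernel of shape (size, size).
    half = size // 2
    ys, xs = np.mgrid[-half:half + 1, -half:half + 1].astype(np.float32)
    x_rot = xs * math.cos(theta) + ys * math.sin(theta)
    y_rot = -xs * math.sin(theta) + ys * math.cos(theta)
    envelope = np.exp(-(x_rot ** 2 + (gamma * y_rot) ** 2) / (2 * sigma ** 2))
    carrier = np.cos(2 * math.pi * x_rot / lambd + psi)
    return envelope * carrier

class GaborFrontEnd(nn.Module):
    # Fixed (non-trainable) Gabor filter bank applied per input channel.
    def __init__(self, in_channels=3, n_orientations=8, kernel_size=7):
        super().__init__()
        thetas = [i * math.pi / n_orientations for i in range(n_orientations)]
        kernels = np.stack([gabor_kernel(kernel_size, theta=t) for t in thetas])
        weight = torch.from_numpy(kernels).unsqueeze(1)  # (O, 1, k, k)
        weight = weight.repeat(in_channels, 1, 1, 1)     # one bank per channel
        self.conv = nn.Conv2d(in_channels, in_channels * n_orientations,
                              kernel_size, padding=kernel_size // 2,
                              groups=in_channels, bias=False)
        self.conv.weight.data.copy_(weight)
        self.conv.weight.requires_grad_(False)           # keep the bank fixed

    def forward(self, x):
        # The Gabor responses would then be fed to the detector backbone.
        return self.conv(x)

if __name__ == "__main__":
    frontend = GaborFrontEnd()
    out = frontend(torch.randn(1, 3, 416, 416))  # e.g. a YOLOv3-sized input
    print(out.shape)                             # torch.Size([1, 24, 416, 416])

In this sketch the Gabor weights are frozen, so the front end acts as a fixed, orientation-selective preprocessing stage placed in front of whichever backbone (e.g. the one used by YOLOv3, SSD, or FRCNN) follows it.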