English Abstract
Skin cancer is the most widespread type of cancer in the world, and basal cell carcinoma (BCC) and squamous cell carcinoma (SCC) are two of its most common forms. Accurate and timely diagnosis of these two cancers plays an important role in their effective treatment and in saving the patient's life. They are usually diagnosed through manual observation of dermoscopy and the common symptoms recorded by physicians, but because the two types of lesion look very similar, they are difficult to distinguish.
To address this problem, computer-aided diagnosis (CAD) methods that classify these two cancers from dermoscopic images have received much attention. Among the most widely used CAD methods are deep convolutional neural networks (DCNNs), which are known as a new and powerful approach to image classification. These networks have attracted considerable interest from dermatologists for classifying BCC and SCC dermoscopic images of skin lesions. However, the scarcity of dermoscopic images of these two cancers, together with the imbalance between the number of images in the BCC and SCC classes, poses challenges for researchers using these deep models and reduces classification performance.
In this research, transfer learning and data augmentation are used to overcome the data shortage. To counteract the effect of class imbalance on the importance and number of class examples in network training, the focal loss function and class-weight balancing are applied, assigning more weight to the minority class (SCC), and the best model is saved by monitoring a new criterion. The EfficientNetB0, EfficientNetB1 and EfficientNetB2 deep convolutional neural networks are used to classify the BCC and SCC images of the ISIC2019 dataset. The number of parameters of these networks increases from EfficientNetB0 to EfficientNetB2, and their architectures differ, so the effect of this increase in parameter count on classification performance is also investigated. Finally, the performance of the best model is compared with that of the Xception deep network, which has more parameters than the EfficientNet models and a different architecture.
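The imbalance-handling idea described above (focal loss combined with extra weight for the minority SCC class) can be sketched as follows. This is a minimal NumPy sketch of binary focal loss, not the thesis's actual implementation; the `alpha` and `gamma` values are illustrative assumptions, and in practice such a loss would be plugged into the deep-learning framework used for training.

```python
import numpy as np

def weighted_focal_loss(y_true, p_pred, alpha=0.75, gamma=2.0):
    """Binary focal loss with class weighting.

    alpha > 0.5 weights the positive (minority, e.g. SCC) class more
    heavily, while the (1 - p_t)**gamma factor down-weights easy,
    already well-classified examples so training focuses on hard ones.
    """
    # p_t: predicted probability assigned to the true class
    p_t = np.where(y_true == 1, p_pred, 1.0 - p_pred)
    # alpha_t: class weight for each example (minority class gets alpha)
    alpha_t = np.where(y_true == 1, alpha, 1.0 - alpha)
    return float(np.mean(-alpha_t * (1.0 - p_t) ** gamma * np.log(p_t)))

# A confident correct prediction contributes almost nothing to the loss,
# while an uncertain or wrong prediction on the minority class is
# penalized heavily.
y = np.array([1, 0, 1])   # 1 = SCC (minority), 0 = BCC
p = np.array([0.9, 0.1, 0.3])
loss = weighted_focal_loss(y, p)
```

With `gamma = 0` and `alpha = 0.5` this reduces (up to a constant factor) to ordinary cross-entropy; increasing `gamma` progressively suppresses the contribution of easy examples.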
The results show that the best performance is achieved by giving more weight to the minority class (SCC), increasing the number of network parameters, applying appropriate data augmentation, and saving the best model by monitoring the new criterion. The Xception network trained with the methods proposed in this study achieved a recall of 87.61%, precision of 91.37%, F1 score of 89.19% and area under the curve (AUC) of 98.05%. Compared with the other networks as well as with previous research, recall and specificity improved by 1.51% and AUC by 2.05%; it therefore delivers the best performance in classifying BCC and SCC.
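The evaluation metrics reported above are standard and can be computed directly from a binary confusion matrix. The sketch below is an illustrative helper, not the thesis's evaluation code; it assumes SCC is treated as the positive class (AUC, being threshold-independent, would additionally require the predicted scores and an ROC computation).

```python
import numpy as np

def binary_metrics(y_true, y_pred):
    """Recall, precision and F1 from hard binary predictions
    (e.g. SCC = positive class 1, BCC = negative class 0)."""
    tp = np.sum((y_true == 1) & (y_pred == 1))  # SCC correctly found
    fp = np.sum((y_true == 0) & (y_pred == 1))  # BCC mislabelled as SCC
    fn = np.sum((y_true == 1) & (y_pred == 0))  # SCC missed
    recall = tp / (tp + fn)
    precision = tp / (tp + fp)
    f1 = 2 * precision * recall / (precision + recall)
    return recall, precision, f1
```

On an imbalanced BCC/SCC test set, recall on the SCC class is the most safety-critical of these metrics, since a false negative means a missed cancer.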