In medical diagnosis, skin disease classification has received growing attention because, in recent years, people have been affected by many categories of skin disease. Previously, a cycle-consistent Generative Adversarial Network (cycle-GAN)-based domain adaptation method and a two-step progressive transfer learning scheme, built on a fully supervised Deep Convolutional Neural Network (DCNN) pre-trained on ImageNet, were designed to classify skin diseases. However, the visual understanding of the DCNN was not efficient for skin-like images. Hence, in this paper, a modified SegNet is proposed to segment the training images, which are augmented by the cycle-GAN model. It performs dilated convolution instead of common convolution to systematically extract multi-scale contextual features without losing resolution. The extracted multi-scale high-resolution features are aggregated by the encoder and passed to the decoder network. Then, a dropout layer using Dynamic Conditional Random Fields (DCRFs) is added after the decoder network to prevent overfitting. Also, the output of the dropout layer, i.e., the segmented skin images, is fed directly to a ResNet18-based DCNN that classifies the types of skin disease. Accordingly, the proposed model is named the Segmentation and Classification Network (SegClassNet). Finally, the experimental results show that the SegClassNet model achieves a mean accuracy of 91.28% on the HAM image dataset, compared with other state-of-the-art classification models.
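The key property claimed for dilated convolution, i.e. a receptive field that grows with the dilation rate while the spatial resolution of the feature map is preserved, can be illustrated with a minimal sketch. The following pure-Python 1-D example (a simplification of the 2-D case used in the modified SegNet; the averaging kernel and dilation rates 1, 2, 4 are illustrative assumptions, not the paper's actual filters) shows that stacking dilated convolutions with "same" padding enlarges the context seen by each output element without any downsampling:

```python
def dilated_conv1d(signal, kernel, dilation):
    """'Same'-padded 1-D dilated convolution: kernel taps are spaced
    `dilation` samples apart, so the receptive field grows with the
    dilation rate while output length == input length (no resolution loss)."""
    k = len(kernel)
    span = dilation * (k - 1)            # receptive-field extent minus one
    pad = span // 2
    padded = [0.0] * pad + list(signal) + [0.0] * (span - pad)
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for j in range(k):
            acc += kernel[j] * padded[i + j * dilation]
        out.append(acc)
    return out

# An averaging kernel applied at growing dilation rates, mimicking a
# multi-scale encoder stack: resolution is preserved at every stage.
x = [float(i) for i in range(8)]
k = [1 / 3, 1 / 3, 1 / 3]
y1 = dilated_conv1d(x, k, dilation=1)    # receptive field: 3 samples
y2 = dilated_conv1d(y1, k, dilation=2)   # receptive field: 7 samples
y4 = dilated_conv1d(y2, k, dilation=4)   # receptive field: 15 samples
print(len(x), len(y1), len(y2), len(y4))  # prints: 8 8 8 8
```

After three stages, each output element aggregates context from up to 15 input positions, yet every intermediate feature map keeps the full input length; this is the multi-scale, full-resolution behavior the encoder relies on before aggregation.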