In agricultural operations, one of the key processes is the effective identification and classification of crop leaf diseases. Over the past decades, many deep learning models have been applied to detect and classify crop leaf diseases feasibly and efficiently. Among them, the Dual-Attention and Topology-Fusion with Generative Adversarial Network (DATFGAN) has achieved better accuracy in categorizing crop leaf diseases based on texture features. A GAN trains a generator that models a mapping from a prior latent distribution to the real data distribution, and DATFGAN training could be greatly accelerated by an improved algorithm that coordinates the generator and the discriminator. It is therefore crucial to learn the spatial relationships across a series of observations. Accordingly, this article proposes a Positional-aware DATFGAN (PDATFGAN) model that learns a coordinate manifold orthogonal to the latent distribution manifold. In this model, a Positional-aware GAN (PGAN) is introduced in which the generator creates images by parts, conditioned on their spatial coordinates. Once a latent vector is sampled, the generator conditions on every spatial coordinate and creates a patch at each resulting spatial location, while the discriminator learns to decide whether neighboring patches are homogeneous and continuous across their shared edges. The generated high-resolution image patches are then combined into the full leaf image, which is fed to a Deep Convolutional Neural Network (DCNN) classifier to classify the crop leaf diseases. Conditional coordination thus enables DATFGAN to generate higher-quality images than DATFGAN alone, which makes leaf disease classification on low-quality images more robust.
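The generation-by-parts idea can be illustrated with a minimal toy sketch. The code below is not the paper's model: `toy_generator` is a hypothetical stand-in for the coordinate-conditioned generator, and the patch size, grid size, and latent dimension are arbitrary assumptions chosen only to show how one shared latent vector plus a per-patch spatial coordinate yields independently generated patches that are stitched into a full image.

```python
import numpy as np

def toy_generator(latent, coord, patch_size=8):
    """Hypothetical stand-in for a coordinate-conditioned generator.

    The patch content depends on both the shared latent vector and the
    patch's (row, col) coordinate, so neighboring patches line up along
    a smooth spatial component (here, a simple gradient)."""
    seed = abs(hash((latent.tobytes(), coord))) % (2**32)
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal((patch_size, patch_size))
    r, c = coord
    # Coordinate conditioning: a globally continuous gradient, so patch
    # edges remain consistent with their neighbors after stitching.
    grad = np.fromfunction(
        lambda i, j: (r * patch_size + i) + (c * patch_size + j),
        (patch_size, patch_size),
    )
    return 0.1 * noise + grad

def generate_by_parts(latent, grid=(4, 4), patch_size=8):
    """Generate each patch independently (hence parallelizable),
    then stitch them into the full image in coordinate order."""
    rows, cols = grid
    full = np.zeros((rows * patch_size, cols * patch_size))
    for r in range(rows):
        for c in range(cols):
            patch = toy_generator(latent, (r, c), patch_size)
            full[r * patch_size:(r + 1) * patch_size,
                 c * patch_size:(c + 1) * patch_size] = patch
    return full

latent = np.zeros(16)          # one sampled latent vector shared by all patches
img = generate_by_parts(latent)
```

Because each call to the generator depends only on the latent vector and one coordinate, the patches can be produced in any order or in parallel, which is the property that makes large field-of-view generation tractable.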
Owing to its generation-by-parts property, PDATFGAN is highly parallelizable and intrinsically inherits the standard divide-and-conquer design paradigm, which allows large field-of-view image generation. Finally, the experimental results reveal that PDATFGAN outperforms state-of-the-art deep learning models.