How to Detect Plant Diseases Using Machine Learning

by Jonathanrjpereira

Detecting and recognizing diseased plants has traditionally been a manual, tedious process that requires a person to visually inspect the plant body, which often leads to an incorrect diagnosis. It has also been predicted that, as global weather patterns begin to vary due to climate change, crop diseases are likely to become more severe and widespread. It is therefore important to develop systems that quickly and easily analyze crops and identify a particular disease in order to limit further crop damage.

In this Instructable, we will explore a machine learning concept known as "Transfer Learning" to classify images of diseased rice plants. The same method can be repurposed for any other image classification problem.

Types of Rice Diseases

[Image: examples of diseased rice leaves]

Rice is one of the most popular staple food crops, grown mainly across Asia, Africa and South America, but it is susceptible to a variety of pests and diseases. Physical characteristics, such as discoloration of the leaves, can be used to identify several diseases that may affect the rice crop. For example, in the case of Brown-Spot, a fungal disease that affects the protective sheath of the leaves, the leaves are covered with several small oval brown spots with gray centers, whereas in the case of Leaf-Blast the leaves are covered with larger brown lesions. Similarly, leaves affected by the Rice Hispa pest can be identified by the long trail marks that develop on the surface of the leaf.

How Did Prior Methods Detect Diseases?

[Image: a rule-based decision tree segmenting a leaf image into affected and unaffected regions]

Prior methods for automatically classifying diseased plant images, such as the rule-based classifier used in [1], rely on a fixed set of rules to segment the leaf into affected and unaffected regions. Some of the rules for extracting color features involve observing the change in the mean and standard deviation between the colors of the affected and unaffected regions. Rules for extracting shape features involve individually placing several primitive shapes on top of the affected region and identifying the shape that covers the maximum area of the affected region. Once the features are extracted from the images, a set of fixed rules is used to classify each image according to the disease that may have affected the plant. The main drawback of such a classifier is that it requires several fixed rules per disease, which in turn can make it susceptible to noisy data. The images above show how a rule-based decision tree can be used to segment the image into two regions.

[1] Santanu Phadikar et al., “Rice diseases classification using feature selection and rule generation techniques,” Computers and Electronics in Agriculture, vol. 90, Jan. 2013.
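For intuition, here is a toy sketch (in Python/NumPy, used purely for illustration; the deviation rule and the threshold k are assumptions, not the actual rules of [1]) of how a color-deviation rule could flag affected pixels:

import numpy as np

def segment_affected(leaf_rgb, k=2.0):
    # Toy rule: flag pixels whose color deviates from the leaf's mean
    # color by more than k standard deviations in any channel.
    # k and the rule itself are illustrative, not those of [1].
    pixels = leaf_rgb.reshape(-1, 3).astype(float)
    mean = pixels.mean(axis=0)
    std = pixels.std(axis=0) + 1e-6           # avoid division by zero
    z = np.abs((leaf_rgb.astype(float) - mean) / std)
    return (z > k).any(axis=2)                # True = "affected" pixel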

Transfer Learning

[Image: the GoogLeNet architecture]

The image classification technique described in this Instructable uses the basic structure of a CNN, which consists of several convolutional layers, pooling layers, and a final fully connected layer. The convolutional layers act as a set of filters that extract the high-level features of the image. Max-pooling is one of the common methods used in pooling layers to reduce the spatial size of the extracted features, thereby reducing the computational power required to calculate the weights for each layer. Finally, the extracted data is passed through a fully connected layer with a softmax activation function, which determines the class of the image.
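As a minimal illustration of this structure, here is a sketch in PyTorch (the framework and all layer sizes are assumptions for illustration, not those of any of the pre-trained models below):

import torch
import torch.nn as nn

class SimpleCNN(nn.Module):
    def __init__(self, num_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),  # feature-extracting filters
            nn.ReLU(),
            nn.MaxPool2d(2),                             # halve the spatial size
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 56 * 56, num_classes)

    def forward(self, x):                    # x: (N, 3, 224, 224)
        x = self.features(x)                 # -> (N, 32, 56, 56)
        x = torch.flatten(x, 1)
        logits = self.classifier(x)          # fully connected layer
        # Softmax turns the logits into class probabilities (during
        # training you would usually pass the raw logits to the loss).
        return torch.softmax(logits, dim=1)

probs = SimpleCNN()(torch.randn(1, 3, 224, 224))  # four class probabilities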

However, training a custom CNN from scratch may not produce the desired results and can take a very long time.

In order to learn the features of the training images, we use a method called transfer learning, wherein the ‘top’ layers of a pre-trained model are removed and replaced with layers that can learn the features specific to the training dataset. Transfer learning reduces the training time compared to models that use randomly initialized weights. Our method uses six different pre-trained models, namely AlexNet, GoogLeNet, ResNet-50, Inception-v3, ShuffleNet and MobileNet-v2.
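The project's actual code is in the GitHub repository linked below; as a rough sketch of the idea (using PyTorch/torchvision, which is an assumption for illustration), this is how the ‘top’ layer of a pre-trained MobileNet-v2 could be replaced:

import torch.nn as nn
from torchvision import models

# Load MobileNet-v2 with weights pre-trained on ImageNet.
model = models.mobilenet_v2(weights=models.MobileNet_V2_Weights.IMAGENET1K_V1)

# Freeze the pre-trained feature extractor so only the new head learns
# (the frozen layers can optionally be unfrozen later for fine-tuning).
for param in model.features.parameters():
    param.requires_grad = False

# Swap the 1000-class ImageNet head for a 4-class head covering
# Brown-Spot, Rice Hispa, Leaf-Blast and Healthy.
model.classifier[1] = nn.Linear(model.last_channel, 4)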

The image shows the GoogLeNet architecture, where blue is used for convolutional layers, red for pooling layers, yellow for softmax layers and green for concat layers. You can learn more about the inner workings of a CNN here.

The rice disease dataset consists of images of leaves of both healthy and diseased rice plants. The images can be categorized into four different classes namely Brown-Spot, Rice Hispa, Leaf-Blast and Healthy. The dataset consists of 2092 different images with each class containing 523 images. Each image consists of a single healthy or diseased leaf placed against a white background.

We split the image dataset into training, validation and testing sets. To prevent overfitting, we augment the training images by scaling and flipping them, which increases the total number of training samples.
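Assuming the images are arranged in one folder per class (the folder name and split ratios below are illustrative assumptions), the split and augmentation could be sketched as:

import torch
from torch.utils.data import Subset
from torchvision import datasets, transforms

# Training images are augmented with random scaling and flipping;
# the normalization matches the ImageNet-pre-trained weights.
normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406],
                                 std=[0.229, 0.224, 0.225])
train_tf = transforms.Compose([
    transforms.RandomResizedCrop(224, scale=(0.8, 1.0)),  # random scaling
    transforms.RandomHorizontalFlip(),                    # random flipping
    transforms.ToTensor(),
    normalize,
])
eval_tf = transforms.Compose([        # validation/test: no augmentation
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    normalize,
])

# Two views of the same folder, differing only in their transform.
train_view = datasets.ImageFolder("rice_dataset", transform=train_tf)
eval_view = datasets.ImageFolder("rice_dataset", transform=eval_tf)

# Shuffle once, then carve out 70/15/15 train/val/test splits.
g = torch.Generator().manual_seed(42)
idx = torch.randperm(len(train_view), generator=g).tolist()
n_train, n_val = int(0.7 * len(idx)), int(0.15 * len(idx))
train_set = Subset(train_view, idx[:n_train])
val_set = Subset(eval_view, idx[n_train:n_train + n_val])
test_set = Subset(eval_view, idx[n_train + n_val:])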

The code and dependencies are open-source and can be found here: GitHub Code

For different image classification applications, we can simply change the training image dataset.

Training the Model

[Figures: memory size, training time and validation accuracy for each model]

Based on the memory each one requires, the pre-trained models are categorized into larger and smaller models. The smaller models require less than 15 MB of memory and hence are better suited for mobile applications.

Amongst the larger models, Inception-v3 had the longest training time of approximately 140 minutes whereas AlexNet had the shortest training time of approximately 18 minutes. Amongst the smaller mobile-oriented models, MobileNet-v2 had the longest training time of approximately 73 minutes whereas ShuffleNet had the shortest training time of approximately 38 minutes.
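For completeness, here is a bare-bones fine-tuning loop, continuing from the earlier sketches (the epoch count, batch size and learning rate are illustrative assumptions, not the settings behind the timings above):

import torch
import torch.nn as nn
from torch.utils.data import DataLoader

device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)              # MobileNet-v2 from the earlier sketch

loader = DataLoader(train_set, batch_size=32, shuffle=True)
criterion = nn.CrossEntropyLoss()
# Only the new classifier head still has requires_grad=True.
optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-3)

for epoch in range(10):
    model.train()
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)  # cross-entropy on logits
        loss.backward()
        optimizer.step()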

Testing the Model

[Figures: testing accuracy for each model; classification results for MobileNet-v2 and Inception-v3]

Amongst the larger models, Inception-v3 had the highest testing accuracy of approximately 72.1%, whereas AlexNet had the lowest testing accuracy of approximately 48.5%. Amongst the smaller mobile-oriented models, MobileNet-v2 had the highest testing accuracy of 62.5%, whereas ShuffleNet had the lowest testing accuracy of 58.1%.

MobileNet-v2 classified images of Brown-Spot, Leaf-Blast and Healthy leaves well, but made several misclassifications for Rice Hispa, with an accuracy of only 46.15% on that class.

Inception-v3 showed similar classification results as MobileNet-v2.
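Per-class figures like the 46.15% above come from a confusion matrix; here is a sketch of how one could be computed for the test set (again continuing from the earlier sketches):

import torch
from torch.utils.data import DataLoader

# 4x4 confusion matrix: rows are true classes, columns are predictions.
confusion = torch.zeros(4, 4, dtype=torch.long)

model.eval()
with torch.no_grad():
    for images, labels in DataLoader(test_set, batch_size=32):
        preds = model(images.to(device)).argmax(dim=1).cpu()
        for t, p in zip(labels, preds):
            confusion[t, p] += 1

per_class_acc = confusion.diag().float() / confusion.sum(dim=1)
overall_acc = confusion.diag().sum().float() / confusion.sum()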

Additional Tests

[Images: a grass leaf misclassified as Rice Hispa; a cropped Rice Hispa test image]

The image above shows how the MobileNet-v2 model misclassifies an image of a grass leaf against a white background as Rice Hispa.

We also tested the accuracy of MobileNet-v2 on cropped images of Rice Hispa, in which the white background was minimized so that the leaf occupies the maximum area within the image. For these cropped images, we observed an accuracy of approximately 80.81%, a significant increase in classification accuracy over the uncropped test samples. Hence, we propose that real-world implementations of rice disease detection using convolutional neural networks should crop the test images to remove background noise in order to improve accuracy.
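One simple way to implement such a cropping step is to threshold out the near-white background and crop to the leaf's bounding box. The sketch below uses Pillow and NumPy; the function name and the threshold value are assumptions for illustration:

import numpy as np
from PIL import Image

def crop_to_leaf(path, white_thresh=230):
    # Treat pixels whose R, G and B values all exceed white_thresh as
    # background, then crop to the bounding box of the remaining pixels.
    img = np.asarray(Image.open(path).convert("RGB"))
    leaf_mask = (img < white_thresh).any(axis=2)   # non-white pixels
    rows = np.where(leaf_mask.any(axis=1))[0]
    cols = np.where(leaf_mask.any(axis=0))[0]
    if rows.size == 0 or cols.size == 0:           # no leaf detected
        return Image.fromarray(img)
    return Image.fromarray(img[rows[0]:rows[-1] + 1,
                               cols[0]:cols[-1] + 1])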