
A multimodal deep learning approach for classifying tomato leaf diseases.

Manual identification of tomato leaf diseases is a time-consuming and laborious process that may lead to inaccurate results without professional assistance. Therefore, an automated, early, and precise leaf disease recognition system is essential for farmers to ensure the quality and quantity of tomato production by providing timely interventions to mitigate disease spread.

In a recent study, researchers proposed seven robust Bayesian-optimized deep hybrid learning models that leverage the synergy between deep learning and machine learning for the automated classification of ten types of tomato leaves (nine diseased and one healthy). They customized the popular Convolutional Neural Network (CNN) algorithm for automatic feature extraction, owing to its ability to capture spatial hierarchies of features directly from raw data, and combined it with classical machine learning techniques [Random Forest (RF), XGBoost, GaussianNB (GNB), Support Vector Machines (SVM), Multinomial Logistic Regression (MLR), and K-Nearest Neighbor (KNN)] and stacking for classification.
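The general pattern described here (CNN-extracted features fed into stacked classical learners) can be sketched as follows. This is only an illustration under assumptions: the paper's actual CNN architecture, hyperparameters, and meta-learner are not given in this summary, so a random matrix stands in for the CNN feature vectors, XGBoost is omitted for brevity, and the choice of MLR as the stacking meta-learner is a guess.

```python
# Hedged sketch of a deep hybrid (CNN features -> stacked classical learners).
# A random matrix stands in for CNN-extracted feature vectors; the real study's
# architecture, hyperparameters, and meta-learner choice are not known here.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 64))       # stand-in for CNN feature vectors
y = rng.integers(0, 10, size=200)    # 10 classes: 9 diseased + 1 healthy

base_learners = [
    ("rf", RandomForestClassifier(n_estimators=50, random_state=0)),
    ("svm", SVC(probability=True, random_state=0)),
    ("gnb", GaussianNB()),
    ("knn", KNeighborsClassifier(n_neighbors=5)),
]
# Multinomial logistic regression as the meta-learner (an assumption).
stack = StackingClassifier(
    estimators=base_learners,
    final_estimator=LogisticRegression(max_iter=1000),
)
stack.fit(X, y)
preds = stack.predict(X[:5])
print(preds.shape)  # (5,)
```

`StackingClassifier` trains the base learners with internal cross-validation and fits the meta-learner on their out-of-fold predictions, which is what makes stacking more robust than simply averaging the base models.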

Additionally, the study incorporated a Boruta feature-filtering layer to retain only the statistically significant features. Performance was tested on the standard, research-oriented PlantVillage dataset, which facilitates benchmarking against prior research and enables meaningful comparisons of classification performance across approaches. A variety of statistical classification metrics was used to demonstrate the robustness of the models; among the seven hybrid models, the CNN-Stacking model achieved the highest classification performance.
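Boruta works by comparing each real feature's importance against "shadow" features (shuffled copies that carry no signal) and keeping only features that beat the best shadow. The sketch below is a simplified, single-round version of that idea; the real Boruta algorithm iterates with statistical tests, and the dataset here is synthetic.

```python
# Simplified one-pass Boruta-style filter: keep features whose random-forest
# importance exceeds the best shuffled ("shadow") feature. The real Boruta
# algorithm repeats this with statistical tests over many iterations.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)
n, d = 300, 20
X = rng.normal(size=(n, d))
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # only features 0 and 1 carry signal

shadow = rng.permuted(X, axis=0)         # per-column shuffle destroys signal
rf = RandomForestClassifier(n_estimators=200, random_state=0)
rf.fit(np.hstack([X, shadow]), y)

real_imp = rf.feature_importances_[:d]
shadow_max = rf.feature_importances_[d:].max()
selected = np.where(real_imp > shadow_max)[0]
print(selected)  # the informative features 0 and 1 should be included
```

Filtering out statistically insignificant features this way shrinks the input to the downstream classifiers, which is one reason the hybrid pipeline can stay computationally inexpensive.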

On an unseen dataset, this model achieved average precision, recall, F1-score, MCC, and accuracy values of 98.527%, 98.533%, 98.527%, 98.525%, and 98.268%, respectively. It required only 0.174 s of testing time to correctly identify noisy, blurry, and transformed images, indicating the approach's time efficiency and its generalizability to images captured under challenging lighting conditions and with complex backgrounds.
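The five reported metrics are all standard and available in scikit-learn. The toy labels below are purely illustrative (they are not the paper's data); the example only shows how each quoted figure is computed, using macro averaging as one plausible reading of "average" scores.

```python
# How the reported metrics are computed (macro-averaged precision/recall/F1,
# Matthews correlation coefficient, accuracy) on illustrative toy labels.
from sklearn.metrics import (precision_score, recall_score, f1_score,
                             matthews_corrcoef, accuracy_score)

y_true = [0, 1, 2, 2, 1, 0, 2, 1]
y_pred = [0, 1, 2, 2, 0, 0, 2, 1]  # one misclassification

print(precision_score(y_true, y_pred, average="macro"))
print(recall_score(y_true, y_pred, average="macro"))
print(f1_score(y_true, y_pred, average="macro"))
print(matthews_corrcoef(y_true, y_pred))
print(accuracy_score(y_true, y_pred))  # 0.875 (7 of 8 correct)
```

MCC is worth reporting alongside accuracy because it stays informative under class imbalance, which matters when one "healthy" class sits among nine disease classes.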

Based on the comparative analysis, the approach is both more accurate and computationally less expensive than those in existing studies. This work could aid in developing a smartphone app that offers farmers a real-time disease-diagnosis tool and management strategies.

Read more on Nature.
