A Novel Light-Weight DCNN Model for Classifying Plant Diseases on Internet of Things Edge Devices
Abstract
Quickly and accurately identifying diseases is one of the essential aspects of smart farming and precision agriculture. Timely disease detection from plant images, enabled by recently developed machine learning algorithms, offers farmers substantial benefits in crop yield and product quality. For farmers in remote areas in particular, running disease diagnostics directly on edge devices is the most practical way to respond to crop damage as quickly as possible. However, the limited computing resources of such devices constrain the accuracy of disease detection. Consequently, choosing an efficient machine learning model and shrinking it to fit an edge device is a challenging problem that has received significant attention from researchers and developers. Building on prior evaluations of deep learning model performance, this work presents a model that applies to both the Plant-Village laboratory dataset and the Plant-Doc natural-image dataset. The evaluation results indicate that the proposed model is as effective as the current state-of-the-art model. Moreover, thanks to the quantization technique, system performance is preserved even when the model size is reduced to fit the edge device.
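The abstract refers to quantization as the size-reduction step. As a minimal, hand-rolled sketch of the idea (symmetric post-training int8 quantization, illustrative only and not the authors' actual deployment pipeline, which would typically use a framework converter such as TensorFlow Lite):

```python
def quantize_int8(weights):
    """Map float weights to int8 values in [-127, 127] with a shared scale.

    Storing each weight in 1 byte instead of 4 (float32) cuts model size
    roughly 4x, which is what makes the model fit on an edge device.
    """
    scale = max(abs(w) for w in weights) / 127.0
    q = [max(-127, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights for inference."""
    return [v * scale for v in q]

# Toy example: quantize a few weights and check the round-trip error.
weights = [0.8, -1.27, 0.05, 0.63]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
```

The rounding error per weight is at most half the scale, which is why accuracy typically stays close to the float model after quantization.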
Copyright (c) 2022 MENDEL
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.