Classification of rice seed variety using point cloud data combined with deep learning
Keywords:
rice seed, variety classification, point cloud data, deep learning, light field camera

Abstract
Rice variety selection and quality inspection are key links in rice planting. Compared with two-dimensional images, three-dimensional information on rice seeds describes their appearance characteristics more comprehensively and accurately. This study proposed a rice variety classification method that combines three-dimensional point cloud data of the rice seed surface with a deep learning network to achieve rapid and accurate identification of rice varieties. First, a point cloud acquisition platform built around a Raytrix light field camera was set up to collect three-dimensional point cloud data of the rice seed surface. The collected point clouds were then filled, filtered, and smoothed; next, the point clouds were segmented with the RANSAC algorithm and downsampled with a combination of random sampling and voxel grid filtering. Finally, the processed point clouds were input to an improved PointNet network for feature extraction and variety classification. The improved PointNet network added a cross-level feature connection structure, making full use of features at different levels and better extracting the surface structure features of rice seeds. In tests, the improved PointNet model achieved an average classification accuracy of 89.4% for eight rice varieties, 1.2% higher than that of the original PointNet model. The method proposed in this study combines deep learning and point cloud data to achieve efficient classification of rice varieties.

DOI: 10.25165/j.ijabe.20211405.5902

Citation: Qian Y, Xu Q J, Yang Y Y, Lu H, Li H, Feng X B, et al. Classification of rice seed variety using point cloud data combined with deep learning. Int J Agric & Biol Eng, 2021; 14(5): 206–212.
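As a rough illustration of the preprocessing steps summarized in the abstract (filtering/smoothing, RANSAC segmentation, voxel grid filtering, and random downsampling), the sketch below processes a single seed point cloud. It is a minimal sketch, not the authors' implementation: the file name, thresholds, and target point count are assumptions, and it uses the Open3D Python API (0.10+) rather than the Point Cloud Library cited in the paper [23].

```python
import numpy as np
import open3d as o3d

# Hypothetical input file containing one raw light-field point cloud of a seed
pcd = o3d.io.read_point_cloud("rice_seed.ply")

# Denoise/smooth: drop statistical outliers from the raw cloud
pcd, _ = pcd.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)

# RANSAC plane segmentation: fit the supporting plane, keep the off-plane
# points (the seed surface)
plane_model, inliers = pcd.segment_plane(distance_threshold=0.5,
                                         ransac_n=3,
                                         num_iterations=1000)
seed = pcd.select_by_index(inliers, invert=True)

# Voxel grid filtering to even out point density (voxel size is an assumption)
seed = seed.voxel_down_sample(voxel_size=0.2)

# Random sampling to a fixed number of points for the network input
n_points = 1024  # assumed network input size
idx = np.random.choice(len(seed.points), n_points,
                       replace=len(seed.points) < n_points)
seed = seed.select_by_index(idx.tolist())

o3d.io.write_point_cloud("rice_seed_processed.ply", seed)
```

The "cross-level feature connection" idea can likewise be illustrated with a toy PointNet-style classifier in which per-point features from shallow, middle, and deep shared-MLP layers are each max-pooled and concatenated into the global descriptor. This is a hypothetical sketch under assumed layer sizes; it omits the input and feature transform networks of the original PointNet and is not the authors' exact architecture.

```python
import torch
import torch.nn as nn


class CrossLevelPointNet(nn.Module):
    def __init__(self, num_classes=8):
        super().__init__()
        # Shared per-point MLPs implemented as 1x1 convolutions
        self.mlp1 = nn.Sequential(nn.Conv1d(3, 64, 1), nn.BatchNorm1d(64), nn.ReLU())
        self.mlp2 = nn.Sequential(nn.Conv1d(64, 128, 1), nn.BatchNorm1d(128), nn.ReLU())
        self.mlp3 = nn.Sequential(nn.Conv1d(128, 1024, 1), nn.BatchNorm1d(1024), nn.ReLU())
        # Classifier head over the concatenated multi-level global feature
        self.fc = nn.Sequential(
            nn.Linear(64 + 128 + 1024, 512), nn.BatchNorm1d(512), nn.ReLU(),
            nn.Linear(512, 256), nn.BatchNorm1d(256), nn.ReLU(),
            nn.Dropout(p=0.3),
            nn.Linear(256, num_classes),
        )

    def forward(self, x):            # x: (batch, 3, num_points)
        f1 = self.mlp1(x)            # (batch, 64, N)   shallow features
        f2 = self.mlp2(f1)           # (batch, 128, N)  mid-level features
        f3 = self.mlp3(f2)           # (batch, 1024, N) deep features
        # Cross-level connection: max-pool every level and concatenate
        g = torch.cat([f.max(dim=2).values for f in (f1, f2, f3)], dim=1)
        return self.fc(g)            # (batch, num_classes) class scores


if __name__ == "__main__":
    model = CrossLevelPointNet()
    points = torch.randn(4, 3, 1024)   # dummy batch of seed point clouds
    print(model(points).shape)         # torch.Size([4, 8])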
References
[1] Qiu Z, Chen J, Zhao Y, Zhu S, He Y, Zhang C. Variety identification of single rice seed using hyperspectral imaging combined with convolutional neural network. Applied Sciences, 2018; 8(2): 212. doi: 10.3390/app8020212.
[2] Li X H, Ma X, Li Z H, Deng X W, Li H W. Identification of rice variety based on multi-feature fusion and SVM. Journal of Chinese Agricultural Mechanization, 2019; 40(7): 97–102. (in Chinese)
[3] Weng S Z, Tang P P, Yuan H C, Guo B Q, Yu S, Huang L S, et al. Hyperspectral imaging for accurate determination of rice variety using a deep learning network with multi-feature fusion. Spectrochimica Acta Part A, Molecular and Biomolecular Spectroscopy, 2020; 234: 118237. doi: 10.1016/j.saa.2020.118237.
[4] Kuo T Y, Chung C L, Chen S Y, Lin H A, Kuo Y F. Identifying rice grains using image analysis and sparse-representation-based classification. Computers and Electronics in Agriculture, 2016; 127: 716–725.
[5] Golpour I, Parian J A, Chayjan R A. Identification and classification of bulk paddy, brown, and white rice cultivars with colour features extraction using image analysis and neural network. Czech Journal of Food Sciences, 2018; 32(3): 280–287.
[6] Mittal S, Dutta M, Issac A. Non-destructive image processing based system for assessment of rice quality and defects for classification according to inferred commercial value. Measurement, 2019; 148: 106969. doi: 10.1016/j.measurement.2019.106969.
[7] Fabiyi S D, Vu H, Tachtatzis C, Murray P, Harle D, Dao T, et al. Varietal classification of rice seeds using RGB and hyperspectral images. IEEE Access, 2020; 8: 22493–22505.
[8] Qian Y, Yin W Q, Lin X Z, Ding Y Q, Feng X B. Variety identification of rice seed based on three-dimensional reconstruction method of sequence images. Transactions of the CSAE, 2014; 30(7): 190–196. (in Chinese)
[9] Li H, Qian Y, Cao P, Yin W Q, Dai F, Hu F, et al. Calculation method of surface shape feature of rice seed based on point cloud. Computers and Electronics in Agriculture, 2017; 142(Part A): 416–423.
[10] Feng X B, He P J, Zhang H X, Yin W Q, Qian Y, Cao P, et al. Rice seeds identification based on back propagation neural network model. Int J Agric & Biol Eng, 2019; 12(6): 122–128.
[11] Hinton G E, Osindero S, Teh Y W. A fast learning algorithm for deep belief nets. Neural Computation, 2006; 18(7): 1527–1554.
[12] Niu C G, Liu Y J, Li Z M, Li H. 3D object recognition and model segmentation based on point cloud data. Journal of Graphics, 2019; 40(2): 274–281. (in Chinese)
[13] Charles R Q, Su H, Kaichun M, Guibas L J. PointNet: Deep learning on point sets for 3D classification and segmentation. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA: IEEE, 2017; pp.77–85. doi: 10.1109/CVPR.2017.16.
[14] Ma L, Jin S S, Niu B. 3D hand pose estimation method based on improved PointNet. Application Research of Computers, 2020; 37(10): 3188–3192. (in Chinese)
[15] Zhao Z Y, Cheng Y L, Shi X S, Qin X X, Li X. Terrain classification of LiDAR point cloud based on multi-scale features and PointNet. Laser & Optoelectronics Progress, 2019; 56(5): 251–258. (in Chinese)
[16] Johannsen O, Heinze C, Goldluecke B, Perwaß C. On the calibration of focused plenoptic cameras. In: Time-of-Flight and Depth Imaging. Sensors, Algorithms, and Applications. Lecture Notes in Computer Science, Berlin: Springer, 2013; 8200: 302–317. doi: 10.1007/978-3-642-44964-2_15.
[17] Schnabel R, Wahl R, Klein R. Efficient RANSAC for point-cloud shape detection. Computer Graphics Forum, 2007; 26(2): 214–226.
[18] Dong H W. Study for cell grid methods finding k nearest neighbors. Computer Engineering and Applications, 2007; 43(21): 52–56. (in Chinese)
[19] Zheng B C, Peng W, Zhang Y, Ye X Z, Zhang S Y. A survey on 3D model retrieval techniques. Journal of Computer-Aided Design & Computer Graphics, 2004; 16(7): 873–881. (in Chinese)
[20] Yang Y B, Lin H, Zhu Q. Content-based 3D model retrieval: A survey. Chinese Journal of Computers, 2008; 27(10): 1297–1310. (in Chinese)
[21] Armeni I, Sener O, Zamir A R, Jiang H L, Brilakis I, Fischer M, et al. 3D semantic parsing of large-scale indoor spaces. In: 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Las Vegas, NV, USA: IEEE, 2016; pp.1534–1543. doi: 10.1109/CVPR.2016.170.
[22] Srivastava N, Hinton G, Krizhevsky A, Sutskever I, Salakhutdinov R. Dropout: A simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 2014; 15(1): 1929–1958.
[23] Rusu R B, Cousins S. 3D is here: Point cloud library (PCL). In: IEEE International Conference on Robotics and Automation. Shanghai, China: IEEE, 2011; pp.1–4. doi: 10.1109/ICRA.2011.5980567.
[24] Qi C R, Yi L, Su H, Guibas L J. PointNet++: Deep hierarchical feature learning on point sets in a metric space. arXiv, 2017; arXiv: 1706.02413v1.
[25] Phan A V, Nguyen M L, Nguyen Y L H, Bui L T. DGCNN: A convolutional neural network over large-scale labeled graphs. Neural Networks, 2018; 108: 533–543.
Published
2021-10-13
How to Cite
Qian, Y., Xu, Q., Yang, Y., Lu, H., Li, H., Feng, X., & Yin, W. (2021). Classification of rice seed variety using point cloud data combined with deep learning. International Journal of Agricultural and Biological Engineering, 14(5), 206–212. Retrieved from https://ijabe.migration.pkpps03.publicknowledgeproject.org/index.php/ijabe/article/view/5902
Issue
Vol. 14 No. 5 (2021)
Section
Information Technology, Sensors and Control Systems
License
IJABE is an international peer-reviewed open access journal that adopts the following Creative Commons copyright notices.
Authors who publish with this journal agree to the following terms:
- Authors retain copyright and grant the journal right of first publication with the work simultaneously licensed under a Creative Commons Attribution License that allows others to share the work with an acknowledgement of the work's authorship and initial publication in this journal.
- Authors are able to enter into separate, additional contractual arrangements for the non-exclusive distribution of the journal's published version of the work (e.g., post it to an institutional repository or publish it in a book), with an acknowledgement of its initial publication in this journal.
- Authors are permitted and encouraged to post their work online (e.g., in institutional repositories or on their website) prior to and during the submission process, as it can lead to productive exchanges, as well as earlier and greater citation of published work (See The Effect of Open Access).