Detection of green apples in natural scenes based on saliency theory and Gaussian curve fitting
Keywords:
image processing, green apple, natural scene, machine vision, object detection, saliency theory, Gaussian curve fitting

Abstract
Green apple targets are difficult to identify because their color is similar to that of background objects such as leaves. The primary goal of this study was to detect green apples in natural scenes by applying saliency detection and a Gaussian curve fitting algorithm. Firstly, the image was represented as a close-loop graph with superpixels as nodes, and these nodes were ranked by their similarity to background and foreground queries to generate the final saliency map. Secondly, Gaussian curve fitting was carried out on the V-component in YUV color space within the salient areas, and a threshold was selected to binarize the image. To verify the validity of the proposed algorithm, 55 images were selected and the results were compared with those of commonly used image segmentation algorithms such as the k-means clustering algorithm and FCM (fuzzy c-means clustering). Four parameters, namely the recognition ratio, FPR (false positive rate), FNR (false negative rate) and FDR (false detection rate), were used to evaluate the results, which were 91.84%, 1.36%, 8.16% and 4.22%, respectively. The results indicate that it is effective and feasible to apply this method to the detection of green apples in natural scenes.

DOI: 10.25165/j.ijabe.20181101.2899

Citation: Li B R, Long Y, Song H B. Detection of green apples in natural scenes based on saliency theory and Gaussian curve fitting. Int J Agric & Biol Eng, 2018; 11(1): 192–198.
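The saliency stage summarized in the abstract (superpixel nodes on a close-loop graph, ranked against background and foreground queries) corresponds to graph-based manifold ranking. The Python sketch below illustrates that idea only: SLIC superpixels, a dense Lab-color affinity, boundary superpixels as the sole (background) queries, and the parameter values are all simplifying assumptions, not the paper's exact construction.

```python
import numpy as np
from skimage.color import rgb2lab
from skimage.segmentation import slic

def manifold_ranking_saliency(rgb_image, n_segments=200, alpha=0.99, sigma=10.0):
    """Simplified graph-based manifold ranking saliency (a sketch).
    SLIC superpixels, a dense Lab-colour affinity and boundary superpixels
    as the only queries are assumptions; the paper's close-loop graph and
    two-stage background/foreground ranking are richer."""
    labels = slic(rgb_image, n_segments=n_segments)
    labels = labels - labels.min()          # make labels start at 0
    n = labels.max() + 1

    # Node features: mean Lab colour of each superpixel
    lab = rgb2lab(rgb_image)
    feats = np.array([lab[labels == i].mean(axis=0) for i in range(n)])

    # Affinity matrix W and degree matrix D over all node pairs
    dist = np.linalg.norm(feats[:, None, :] - feats[None, :, :], axis=2)
    w = np.exp(-dist / sigma)
    np.fill_diagonal(w, 0.0)
    d = np.diag(w.sum(axis=1))

    # Background queries: superpixels touching the image border
    boundary = np.unique(np.concatenate(
        [labels[0, :], labels[-1, :], labels[:, 0], labels[:, -1]]))
    y = np.zeros(n)
    y[boundary] = 1.0

    # Ranking scores f* = (D - alpha * W)^-1 y; a high score means background-like
    f = np.linalg.solve(d - alpha * w, y)

    # Invert and normalise so that salient (foreground) regions are bright
    sal = 1.0 - (f - f.min()) / (f.max() - f.min() + 1e-12)
    return sal[labels]                      # per-pixel saliency map in [0, 1]
```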
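For the second stage, the abstract describes fitting a Gaussian curve to the V-component (YUV color space) inside the salient region and selecting a threshold to binarize the image. A minimal sketch of one such procedure is given below; the saliency cut-off, the 64-bin histogram, the scipy-based fit and the mu ± k·sigma rule are assumed choices, since the paper's exact threshold-selection criterion is not stated in the abstract.

```python
import cv2
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, a, mu, sigma):
    # Gaussian model fitted to the V-channel histogram
    return a * np.exp(-(x - mu) ** 2 / (2.0 * sigma ** 2))

def segment_green_apples(bgr_image, saliency_map, sal_thresh=0.5, k=2.0):
    """Fit a Gaussian to the V-channel (YUV) histogram inside the salient
    region and keep pixels within k standard deviations of the fitted peak.
    sal_thresh, the bin count and the mu +/- k*sigma rule are assumptions."""
    salient = saliency_map >= sal_thresh
    yuv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2YUV)
    v = yuv[:, :, 2].astype(np.float64)

    # Histogram of V values restricted to the salient area
    values = v[salient]
    hist, edges = np.histogram(values, bins=64)
    centers = 0.5 * (edges[:-1] + edges[1:])

    # Fit: initial guesses are peak height, sample mean and sample std
    p0 = [float(hist.max()), float(values.mean()), float(values.std()) + 1e-6]
    (a, mu, sigma), _ = curve_fit(gaussian, centers, hist, p0=p0, maxfev=5000)

    # Binarize: salient pixels whose V value lies near the fitted peak
    binary = salient & (np.abs(v - mu) <= k * abs(sigma))
    return binary.astype(np.uint8) * 255
```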
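The four reported parameters can be computed from a predicted mask and a manually labeled ground-truth mask; note that the reported recognition ratio and FNR sum to 100% (91.84% + 8.16%), consistent with recognition ratio = 100% − FNR. The sketch below assumes pixel-level counting and the standard definitions of FPR, FNR and FDR; the paper may instead count at the fruit (object) level.

```python
import numpy as np

def detection_metrics(pred_mask, gt_mask):
    """Recognition ratio, FPR, FNR and FDR from binary masks (a sketch).
    Pixel-level counting and these standard definitions are assumptions."""
    pred = pred_mask.astype(bool)
    gt = gt_mask.astype(bool)
    tp = np.sum(pred & gt)      # correctly detected apple pixels
    fp = np.sum(pred & ~gt)     # background detected as apple
    fn = np.sum(~pred & gt)     # apple pixels missed
    tn = np.sum(~pred & ~gt)    # background correctly rejected

    eps = 1e-12                 # guard against empty masks
    return {
        "recognition_ratio": 100.0 * tp / (tp + fn + eps),  # = 100% - FNR
        "FPR": 100.0 * fp / (fp + tn + eps),
        "FNR": 100.0 * fn / (fn + tp + eps),
        "FDR": 100.0 * fp / (fp + tp + eps),
    }
```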
Published
2018-01-31
How to Cite
Li, B., Long, Y., & Song, H. (2018). Detection of green apples in natural scenes based on saliency theory and Gaussian curve fitting. International Journal of Agricultural and Biological Engineering, 11(1), 192–198. Retrieved from https://ijabe.migration.pkpps03.publicknowledgeproject.org/index.php/ijabe/article/view/2899
Issue
Vol. 11 No. 1 (2018)
Section
Information Technology, Sensors and Control Systems
License
IJABE is an international peer-reviewed open access journal that adopts the following Creative Commons copyright notices.
Authors who publish with this journal agree to the following terms:
- Authors retain copyright and grant the journal right of first publication with the work simultaneously licensed under a Creative Commons Attribution License that allows others to share the work with an acknowledgement of the work's authorship and initial publication in this journal.
- Authors are able to enter into separate, additional contractual arrangements for the non-exclusive distribution of the journal's published version of the work (e.g., post it to an institutional repository or publish it in a book), with an acknowledgement of its initial publication in this journal.
- Authors are permitted and encouraged to post their work online (e.g., in institutional repositories or on their website) prior to and during the submission process, as it can lead to productive exchanges, as well as earlier and greater citation of published work (See The Effect of Open Access).