Recognition of the gonad of Pacific oysters via object detection
Keywords:
Pacific oyster gonad, unapparent object detection, target segmentation, deep learning, MRI, R-SINet

Abstract
The oyster is the most widely cultured shellfish in the world and has high economic value. The plumpness of the Pacific oyster gonad is an important indicator of quality and of suitability as broodstock for subsequent breeding. At present, the internal tissues of Pacific oysters can be observed and studied only by the conventional, destructive method of breaking their shells. Using computer technology to detect the sex of oysters non-destructively, and to select mature, full oysters for breeding, is therefore an important task. In this study, building on the multi-effect feature fusion network R-SINet, a CF-Net algorithm incorporating a boundary enhancement algorithm was designed to detect inconspicuous objects that appear seamlessly embedded in their surroundings in magnetic resonance (MR) images, effectively addressing the difficulty of distinguishing Pacific oyster gonads from the background. In addition, calculations on the segmented gonadal regions yielded a map of the grayscale-value difference between male and female oysters, revealing significant differences between the sexes and enabling non-destructive detection of oyster sex. Firstly, a small-animal magnetic resonance imaging (MRI) system was used to image Pacific oysters and establish a gonad dataset. Secondly, a gonad segmentation model was built by adding a Compact Pyramid Refinement Module and a Switchable Excitation Model to the R-SINet model to achieve multi-effect feature fusion. Then, a Convformer encoder, a Token Reinforcement Module, and an Adjacent Transfer Module were combined to form the CF-Net network, further improving segmentation accuracy. Experimental results on the oyster gonad dataset demonstrated the effectiveness of the method.
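As an illustration of the grayscale comparison described above, the sketch below computes the mean grayscale value inside a segmented gonad region and the female-minus-male difference. This is a minimal NumPy sketch under assumed inputs: the arrays, the binary mask, and the helper name `mean_gray_in_region` are hypothetical stand-ins, not the paper's implementation.

```python
import numpy as np

def mean_gray_in_region(slice_img: np.ndarray, mask: np.ndarray) -> float:
    """Mean grayscale value of the pixels inside a binary segmentation mask."""
    region = slice_img[mask.astype(bool)]
    if region.size == 0:
        raise ValueError("empty segmentation mask")
    return float(region.mean())

# Toy stand-ins for one MR slice of a female and one of a male oyster,
# each paired with the same predicted binary gonad mask (hypothetical data).
rng = np.random.default_rng(0)
female_slice = rng.uniform(120, 200, size=(64, 64))
male_slice = rng.uniform(60, 140, size=(64, 64))
mask = np.zeros((64, 64), dtype=np.uint8)
mask[20:44, 18:46] = 1

gray_diff = mean_gray_in_region(female_slice, mask) - mean_gray_in_region(male_slice, mask)
print(f"mean grayscale difference (female - male): {gray_diff:.1f}")
```

Repeating this per slice over a whole scan would give the kind of grayscale-difference distribution the abstract describes; the decision rule (e.g., a threshold on the mean difference) would then separate the sexes.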
Based on the segmentation results, the grayscale values of the gonadal region can be calculated to obtain a distribution map of the grayscale-value difference between male and female oysters. These results provide a technical methodology for the non-destructive discrimination of oyster sex and for subsequent reproduction.

DOI: 10.25165/j.ijabe.20241706.8478

Citation: Chen Y F, Yue J, Wang W J, Yang J M, Li Z B. Recognition of the gonad of Pacific oysters via object detection. Int J Agric & Biol Eng, 2024; 17(6): 230–237.
Published
2024-12-24
How to Cite
Chen, Y., Yue, J., Wang, W., Yang, J., & Li, Z. (2024). Recognition of the gonad of Pacific oysters via object detection. International Journal of Agricultural and Biological Engineering, 17(6), 230–237. Retrieved from https://ijabe.migration.pkpps03.publicknowledgeproject.org/index.php/ijabe/article/view/8478
Section
Information Technology, Sensors and Control Systems
License
IJABE is an international peer-reviewed open-access journal that adopts Creative Commons copyright notices as follows.
Authors who publish with this journal agree to the following terms:
- Authors retain copyright and grant the journal right of first publication with the work simultaneously licensed under a Creative Commons Attribution License that allows others to share the work with an acknowledgement of the work's authorship and initial publication in this journal.
- Authors are able to enter into separate, additional contractual arrangements for the non-exclusive distribution of the journal's published version of the work (e.g., post it to an institutional repository or publish it in a book), with an acknowledgement of its initial publication in this journal.
- Authors are permitted and encouraged to post their work online (e.g., in institutional repositories or on their website) prior to and during the submission process, as it can lead to productive exchanges, as well as earlier and greater citation of published work (See The Effect of Open Access).