Frontiers of Data and Computing ›› 2024, Vol. 6 ›› Issue (6): 85-96.
CSTR: 32002.14.jfdc.CN10-1649/TP.2024.06.009
doi: 10.11871/jfdc.issn.2096-742X.2024.06.009
Received: 2023-11-27
Online: 2024-12-20
Published: 2024-12-20
Contact: LUO Ze
E-mail: hwt0316@cnic.cn; luoze@cnic.cn
HE Wentong,LUO Ze. Object Detection with Federated Learning for Wildlife Camera Trap Images[J]. Frontiers of Data and Computing, 2024, 6(6): 85-96, https://cstr.cn/32002.14.jfdc.CN10-1649/TP.2024.06.009.
Table 4
Per-class experimental results of traditional object detection (F1 and AP50 per client)

Class | Type 1 F1 | Type 1 AP50 | BOL F1 | BOL AP50 | GTM F1 | GTM AP50 | ECU F1 | ECU AP50 | VEN F1 | VEN AP50 | PRY F1 | PRY AP50
---|---|---|---|---|---|---|---|---|---|---|---|---
0 | 0.964 | 0.981 | 0.957 | 0.972 | 0.909 | 0.972 | 0.934 | 0.947 | 0.893 | 0.947 | 0.17 | 0.995
1 | 0.911 | 0.926 | 0.921 | 0.919 | 0.916 | 0.93 | 0 | 0.995 | 0.697 | 0.861 | - | -
2 | 0.922 | 0.939 | 0.91 | 0.943 | 0.88 | 0.995 | 0.83 | 0.812 | 0.507 | 0.612 | 0.035 | 0.995
3 | 0.906 | 0.937 | 0.908 | 0.953 | 0.723 | 0.714 | 0.359 | 0.454 | 0.728 | 0.726 | - | -
4 | 0.936 | 0.981 | 0.935 | 0.954 | - | - | 0 | 0.552 | 0.899 | 0.942 | - | -
5 | 0.954 | 0.96 | 0.956 | 0.979 | - | - | 0 | 0 | 0.815 | 0.768 | - | -
6 | 0.949 | 0.939 | 0.665 | 0.913 | 0.895 | 0.936 | 0.866 | 0.962 | 0.927 | 0.955 | - | -
7 | 0.893 | 0.937 | 0.852 | 0.881 | - | - | 0.622 | 0.503 | 0.915 | 0.995 | 0 | 0
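The table above reports per-class F1 and AP50 (average precision at an IoU threshold of 0.5). As a minimal sketch of how such an F1 value is derived from detection counts at IoU ≥ 0.5 (the counts and the helper name here are illustrative, not taken from the paper):

```python
def f1_score(tp: int, fp: int, fn: int) -> float:
    """F1 is the harmonic mean of precision and recall.

    tp/fp/fn are counts of true positives, false positives, and
    false negatives for one class at a fixed IoU threshold (0.5).
    Returns 0.0 when precision + recall is undefined or zero.
    """
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Hypothetical counts: 90 matched detections, 5 spurious, 10 missed
print(round(f1_score(90, 5, 10), 3))  # → 0.923
```

A class with no predicted boxes at all has recall and precision of zero, which is how entries of 0 can coexist with a high AP50 in the table: AP50 is computed over the full precision-recall curve rather than at a single confidence threshold.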
[1] | LIU X H, WU P F, HE X B, et al. Application of infrared camera trapping in species monitoring and data mining[J]. Biodiversity Science, 2018, 26(8): 850-861. |
[2] | NEWEY S, DAVIDSON P, NAZIR S, et al. Limitations of recreational camera traps for wildlife management and conservation research: A practitioner's perspective[J]. Ambio, 2015, 44: 624-635. |
[3] | ROVERO F, ZIMMERMANN F, BERZI D, et al. “Which camera trap type and how many do I need?” A review of camera features and study designs for a range of wildlife research applications[J]. Hystrix, the Italian Journal of Mammalogy, 2013, 24(2): 148-156. |
[4] | LI X Y, HU W Q, PU C Z, et al. Camera-trapping monitoring platform for mammals and pheasants in the longitudinal range and gorge region of Southwest China: Protocol, progress and prospects[J]. Biodiversity Science, 2020, 28(9): 1090-1096. |
[5] | ROWCLIFFE J M, CARBONE C. Surveys using camera traps: Are we looking to a brighter future?[J]. Animal Conservation, 2008, 11: 185-186. |
[6] | O’CONNELL A F, NICHOLS J D, KARANTH K U. Camera Traps in Animal Ecology[M]. New York, NY, USA: Springer, 2011. |
[7] | MCCALLUM J. Changing use of camera traps in mammalian field research: Habitats, taxa and study types[J]. Mammal Review, 2013, 43: 196-206. |
[8] | STEENWEG R, HEBBLEWHITE M, KAYS R, et al. Scaling-up camera traps: monitoring the planet’s biodiversity with networks of remote sensors[J]. Frontiers in Ecology and the Environment, 2017,15: 26-34. |
[9] | TUIA D, KELLENBERGER B, BEERY S, et al. Perspectives in machine learning for wildlife conservation[J]. Nature Communications, 2022, 13: 792. |
[10] | TABAK M A, NOROUZZADEH M S, WOLFSON D W, et al. Machine learning to classify animal species in camera trap images: Applications in ecology[J]. Methods in Ecology and Evolution, 2019, 10: 585-590. |
[11] | LENG J X, LIU Y. Deep learning-based detection and recognition of small objects[J]. Frontiers of Data and Computing, 2020, 2(2): 120-135. |
[12] | MIAO Z, GAYNOR K M, WANG J, et al. Insights and approaches using deep learning to classify wildlife[J]. Scientific Reports, 2019, 9: 8137. |
[13] | WILLI M, PITMAN R T, CARDOSO A W, et al. Identifying animal species in camera trap images using deep learning and citizen science[J]. Methods in Ecology and Evolution, 2018, 10: 80-91. |
[14] | NOROUZZADEH M S, NGUYEN A N, KOSMALA M, et al. Automatically identifying, counting, and describing wild animals in camera-trap images with deep learning[J]. Proceedings of the National Academy of Sciences of the United States of America, 2018, 115: E5716-E5725. |
[15] | LIU S Z, LI C R, LIU T J, et al. Object detection of dairy cows based on the YOLO V3 model[J]. Journal of Tarim University, 2019, 31(2): 85-90. |
[16] | ZHU G X, YU L. Research on wildlife object detection based on the YOLOv5-CA algorithm[J]. Information Technology and Informatization, 2022(6): 32-35. |
[17] | HE W, LUO Z, TONG X Y, et al. Long-Tailed Metrics and Object Detection in Camera Trap Datasets[J]. Applied Sciences, 2023, 13: 6029. |
[18] | BUTCHART S H M, WALPOLE M, COLLEN B, et al. Global biodiversity: indicators of recent declines[J]. Science, 2010, 328: 1164-1168. |
[19] | DIRZO R, YOUNG H S, GALETTI M, et al. Defaunation in the Anthropocene[J]. Science, 2014, 345: 401-406. |
[20] | SCBD (Secretariat of the Convention on Biological Diversity). Strategic plan for biodiversity 2011-2020[Z]. 2014. www.cbd.int/sp/targets. Viewed 20 Apr 2015. |
[21] | BURTON A C, NEILSON E, MOREIRA D, et al. Wildlife camera trapping: A review and recommendations for linking surveys to ecological processes[J]. Journal of Applied Ecology, 2015, 52: 675-685. |
[22] | YAO A C. Protocols for secure computations[C]. Proceedings of the 23rd Annual Symposium on Foundations of Computer Science (SFCS 1982). IEEE Computer Society, USA, 1982: 160-164. |
[23] | YAO A C. How to generate and exchange secrets[C]. Proceedings of the 27th Annual Symposium on Foundations of Computer Science (SFCS 1986). IEEE Computer Society, USA, 1986: 162-167. |
[24] | DWORK C. Differential privacy: A survey of results[C]. Theory and Applications of Models of Computation 2008: 1-19. |
[25] | MCMAHAN H B, MOORE E, RAMAGE D, et al. Federated learning of deep networks using model averaging[J]. arXiv:1602.05629, 2016. |
[26] | ZHAO Z Q, ZHENG P, XU S T, et al. Object detection with deep learning: A review[J]. IEEE Transactions on Neural Networks and Learning Systems, 2019, 30: 3212-3232. |
[27] | ZOU Z, SHI Z, GUO Y, et al. Object detection in 20 years: A survey[J]. arXiv:1905.05055, 2019. |
[28] | CARRANZA-GARCÍA M, TORRES-MATEO J, LARA-BENÍTEZ P, et al. On the performance of one-stage and two-stage object detectors in autonomous vehicles using camera data[J]. Remote Sensing, 2021, 13: 89. |
[29] | TIAN Z, SHEN C, CHEN H, et al. FCOS: Fully convolutional one-stage object detection[C]. Proceedings of the IEEE/CVF International Conference on Computer Vision, 2019: 9626-9635. |
[30] | REDMON J, DIVVALA S, GIRSHICK R, et al. You only look once: Unified, real-time object detection[C]. 2016 IEEE Conference on Computer Vision and Pattern Recognition, 2016: 779-788. |
[31] | REDMON J, FARHADI A. YOLO9000: better, faster, stronger[C]. 2017 IEEE Conference on Computer Vision and Pattern Recognition, 2017: 6517-6525. |
[32] | REDMON J, FARHADI A. YOLOv3: An incremental improvement[J]. arXiv:1804.02767, 2018. |
[33] | BOCHKOVSKIY A, WANG C Y, LIAO H Y M. YOLOv4: Optimal speed and accuracy of object detection[J]. arXiv:2004.10934, 2020. |
[34] | WANG C Y, BOCHKOVSKIY A, LIAO H Y M. YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors[C]. 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2023: 7464-7475. |
[35] | TERVEN J R, CÓRDOVA-ESPARZA D M. A comprehensive review of YOLO: From YOLOv1 to YOLOv8 and beyond[J]. arXiv:2304.00501, 2023. |
[36] | JOCHER G, CHAURASIA A, QIU J. YOLO by Ultralytics[EB/OL]. [2023-3-30]. https://github.com/ultralytics/ultralytics. |
[37] | ZHENG Z, WANG P, LIU W, et al. Distance-IoU loss: Faster and better learning for bounding box regression[C]. Proceedings of the AAAI Conference on Artificial Intelligence, 2020, 34: 12993-13000. |
[38] | LI X, WANG W, LIU L, et al. Generalized focal loss: Learning qualified and distributed bounding boxes for dense object detection[J]. Advances in Neural Information Processing Systems, 2020, 33: 21002-21012. |
[39] | KAIROUZ P, MCMAHAN H B, AVENT B, et al. Advances and Open Problems in Federated Learning[M]. Foundations and Trends in Machine Learning, 2019: 1-10. |
[40] | LIN W W, SHI F, ZENG L, et al. A survey of open-source frameworks for federated learning[J]. Journal of Computer Research and Development, 2023, 60(7): 1551-1580. |
[41] | YANG Q, LIU Y, CHEN T, et al. Federated machine learning: Concept and applications[J]. ACM Transactions on Intelligent Systems and Technology, 2019, 10(2): 1-19. |
[42] | VAIDYA J, CLIFTON C. Privacy preserving association rule mining in vertically partitioned data[C]. Proceedings of the 8th ACM SIGKDD Int Conf on Knowledge Discovery and Data Mining. 2002: 639-644. |
[43] | WCS Camera Traps[EB/OL]. [2022-11-28]. https://lila.science/datasets/wcscameratraps. |
[44] | BEUTEL D J, TOPAL T, MATHUR A, et al. Flower: A friendly federated learning research framework[J]. arXiv:2007.14390, 2020. |
[45] | SCHNEIDER S, GREENBERG S, TAYLOR G W, et al. Three critical factors affecting automated image species recognition performance for camera traps[J]. Ecology and Evolution, 2020, 10: 3503-3517. |
[46] | LIANG W Y, LIU B, LIN W W, et al. A survey of incentive mechanism research in federated learning[J]. Computer Science, 2022, 49(12): 46-52. |