[1] 刘雪华, 武鹏峰, 何祥博, 等. 红外相机技术在物种监测中的应用及数据挖掘[J]. 生物多样性, 2018, 26(8): 850-861.
[2] NEWEY S, DAVIDSON P, NAZIR S, et al. Limitations of recreational camera traps for wildlife management and conservation research: A practitioner’s perspective[J]. Ambio, 2015, 44: 624-635.
[3] ROVERO F, ZIMMERMANN F, BERZI D, et al. “Which camera trap type and how many do I need?” A review of camera features and study designs for a range of wildlife research applications[J]. Hystrix, the Italian Journal of Mammalogy, 2013, 24(2): 148-156.
[4] 李学友, 胡文强, 普昌哲, 等. 西南纵向岭谷区兽类及雉类红外相机监测平台: 方案、进展与前景[J]. 生物多样性, 2020, 28(9): 1090-1096.
[5] ROWCLIFFE J M, CARBONE C. Surveys using camera traps: Are we looking to a brighter future?[J]. Animal Conservation, 2008, 11: 185-186.
[6] O’CONNELL A F, NICHOLS J D, KARANTH K U. Camera Traps in Animal Ecology[M]. New York: Springer, 2011.
[7] MCCALLUM J. Changing use of camera traps in mammalian field research: Habitats, taxa and study types[J]. Mammal Review, 2013, 43: 196-206.
[8] STEENWEG R, HEBBLEWHITE M, KAYS R, et al. Scaling-up camera traps: Monitoring the planet’s biodiversity with networks of remote sensors[J]. Frontiers in Ecology and the Environment, 2017, 15: 26-34.
[9] TUIA D, KELLENBERGER B, BEERY S, et al. Perspectives in machine learning for wildlife conservation[J]. Nature Communications, 2022, 13: 792.
[10] TABAK M A, NOROUZZADEH M S, WOLFSON D W, et al. Machine learning to classify animal species in camera trap images: Applications in ecology[J]. Methods in Ecology and Evolution, 2019, 10: 585-590.
[11] 冷佳旭, 刘莹. 基于深度学习的小目标检测与识别[J]. 数据与计算发展前沿, 2020, 2(2): 120-135.
[12] MIAO Z, GAYNOR K M, WANG J, et al. Insights and approaches using deep learning to classify wildlife[J]. Scientific Reports, 2019, 9: 8137.
[13] WILLI M, PITMAN R T, CARDOSO A W, et al. Identifying animal species in camera trap images using deep learning and citizen science[J]. Methods in Ecology and Evolution, 2019, 10: 80-91.
[14] NOROUZZADEH M S, NGUYEN A, KOSMALA M, et al. Automatically identifying, counting, and describing wild animals in camera-trap images with deep learning[J]. Proceedings of the National Academy of Sciences of the United States of America, 2018, 115: E5716-E5725.
[15] 刘生智, 李春蓉, 刘同金, 等. 基于YOLO V3模型的奶牛目标检测[J]. 塔里木大学学报, 2019, 31(2): 85-90.
[16] 朱高兴, 于瓅. 基于YOLOv5-CA算法的野生动物目标检测研究[J]. 信息技术与信息化, 2022(6): 32-35.
[17] HE W, LUO Z, TONG X Y, et al. Long-tailed metrics and object detection in camera trap datasets[J]. Applied Sciences, 2023, 13: 6029.
[18] BUTCHART S H M, WALPOLE M, COLLEN B, et al. Global biodiversity: Indicators of recent declines[J]. Science, 2010, 328: 1164-1168.
[19] DIRZO R, YOUNG H S, GALETTI M, et al. Defaunation in the Anthropocene[J]. Science, 2014, 345: 401-406.
[20] SCBD (Secretariat of the Convention on Biological Diversity). Strategic plan for biodiversity 2011—2020[EB/OL]. 2014 [2015-04-20]. www.cbd.int/sp/targets.
[21] BURTON A C, NEILSON E, MOREIRA D, et al. Wildlife camera trapping: A review and recommendations for linking surveys to ecological processes[J]. Journal of Applied Ecology, 2015, 52: 675-685.
[22] YAO A C. Protocols for secure computations[C]. Proceedings of the 23rd Annual Symposium on Foundations of Computer Science (SFCS 1982). IEEE Computer Society, USA, 1982: 160-164.
[23] YAO A C. How to generate and exchange secrets[C]. Proceedings of the 27th Annual Symposium on Foundations of Computer Science (SFCS 1986). IEEE Computer Society, USA, 1986: 162-167.
[24] DWORK C. Differential privacy: A survey of results[C]. Theory and Applications of Models of Computation, 2008: 1-19.
[25] MCMAHAN H B, MOORE E, RAMAGE D, et al. Federated learning of deep networks using model averaging[J]. ArXiv, 2016, abs/1602.05629.
[26] ZHAO Z Q, ZHENG P, XU S T, et al. Object detection with deep learning: A review[J]. IEEE Transactions on Neural Networks and Learning Systems, 2019, 30: 3212-3232.
[27] ZOU Z, SHI Z, GUO Y, et al. Object detection in 20 years: A survey[J]. ArXiv, 2019, abs/1905.05055.
[28] CARRANZA-GARCÍA M, TORRES-MATEO J, LARA-BENÍTEZ P, et al. On the performance of one-stage and two-stage object detectors in autonomous vehicles using camera data[J]. Remote Sensing, 2021, 13: 89.
[29] TIAN Z, SHEN C, CHEN H, et al. FCOS: Fully convolutional one-stage object detection[C]. Proceedings of the IEEE/CVF International Conference on Computer Vision, 2019: 9626-9635.
[30] REDMON J, DIVVALA S, GIRSHICK R, et al. You only look once: Unified, real-time object detection[C]. 2016 IEEE Conference on Computer Vision and Pattern Recognition, 2016: 779-788.
[31] REDMON J, FARHADI A. YOLO9000: Better, faster, stronger[C]. 2017 IEEE Conference on Computer Vision and Pattern Recognition, 2017: 6517-6525.
[32] REDMON J, FARHADI A. YOLOv3: An incremental improvement[J]. ArXiv, 2018, abs/1804.02767.
[33] BOCHKOVSKIY A, WANG C Y, LIAO H Y M. YOLOv4: Optimal speed and accuracy of object detection[J]. ArXiv, 2020, abs/2004.10934.
[34] WANG C Y, BOCHKOVSKIY A, LIAO H Y M. YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors[C]. 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2023: 7464-7475.
[35] TERVEN J R, CÓRDOVA-ESPARZA D M. A comprehensive review of YOLO: From YOLOv1 to YOLOv8 and beyond[J]. ArXiv, 2023, abs/2304.00501.
[36] JOCHER G, CHAURASIA A, QIU J. YOLO by Ultralytics[EB/OL]. [2023-03-30]. https://github.com/ultralytics/ultralytics.
[37] ZHENG Z, WANG P, LIU W, et al. Distance-IoU loss: Faster and better learning for bounding box regression[C]. Proceedings of the AAAI Conference on Artificial Intelligence, 2020, 34: 12993-13000.
[38] LI X, WANG W, LIU L, et al. Generalized focal loss: Learning qualified and distributed bounding boxes for dense object detection[J]. Advances in Neural Information Processing Systems, 2020, 33: 21002-21012.
[39] KAIROUZ P, MCMAHAN H B, AVENT B, et al. Advances and open problems in federated learning[J]. Foundations and Trends in Machine Learning, 2021, 14(1-2): 1-210.
[40] 林伟伟, 石方, 曾岚, 等. 联邦学习开源框架综述[J]. 计算机研究与发展, 2023, 60(7): 1551-1580.
[41] YANG Q, LIU Y, CHEN T, et al. Federated machine learning: Concept and applications[J]. ACM Transactions on Intelligent Systems and Technology, 2019, 10(2): 1-19.
[42] VAIDYA J, CLIFTON C. Privacy preserving association rule mining in vertically partitioned data[C]. Proceedings of the 8th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2002: 639-644.
[43] WCS Camera Traps[EB/OL]. [2022-11-28]. https://lila.science/datasets/wcscameratraps.
[44] BEUTEL D J, TOPAL T, MATHUR A, et al. Flower: A friendly federated learning research framework[J]. ArXiv, 2020, abs/2007.14390.
[45] SCHNEIDER S, GREENBERG S, TAYLOR G W, et al. Three critical factors affecting automated image species recognition performance for camera traps[J]. Ecology and Evolution, 2020, 10: 3503-3517.
[46] 梁文雅, 刘波, 林伟伟, 等. 联邦学习激励机制研究综述[J]. 计算机科学, 2022, 49(12): 46-52.