数据与计算发展前沿 ›› 2025, Vol. 7 ›› Issue (6): 68-76.

CSTR: 32002.14.jfdc.CN10-1649/TP.2025.06.007

doi: 10.11871/jfdc.issn.2096-742X.2025.06.007

• 专刊:第40次全国计算机安全学术交流会征文 •

人工智能安全防护体系的层次化模型研究

陈长松*,吴跃顺,梅广   

  1. 公安部第三研究所,上海 201204
  • 收稿日期:2025-08-25 出版日期:2025-12-20 发布日期:2025-12-17
  • 通讯作者: 陈长松
  • 作者简介:陈长松,博士,公安部第三研究所,研究员,主要研究方向为信息网络安全、网络犯罪侦查、互联网管理对策。
    本文承担工作为:安全模型设计及各维度安全要求研究。
CHEN Changsong, Ph.D., is a research fellow at the Third Research Institute of the Ministry of Public Security. His main research interests are cybersecurity, cybercrime investigation, and Internet management governance.
    In this work, he was responsible for the design of the security model and the research on the security requirements of each dimension.
    E-mail: chenchangsong@gass.ac.cn
  • 基金资助:
    国家重点研发计划(2023YFB3107105)

Research on the Hierarchical Model of Artificial Intelligence Security Protection Systems

CHEN Changsong*,WU Yueshun,MEI Guang   

  1. The Third Research Institute of Ministry of Public Security, Shanghai 201204, China
  • Received:2025-08-25 Online:2025-12-20 Published:2025-12-17
  • Contact: CHEN Changsong

摘要:

【目的】本文分析了人工智能安全面临的风险和挑战,针对现有防护体系在系统性结构、生命周期管理及安全能力评估方面的不足,构建了多维层次化的人工智能安全防护体系模型。【方法】模型将防护技术体系重新梳理为网络安全、数据安全、信息安全和应用安全四个方面,融入人工智能安全的生命周期管理,提出了基础防护、感知监测、主动防御、协同共治四级能力体系。【结果】模型在人工智能安全评估与测评、人工智能安全围栏设计以及安全能力生命周期管理中得到应用验证,相比传统单一维度防护体系,实现了人工智能系统全周期、多维度的安全覆盖,且防护针对性与可操作性显著提升。【局限】目前跨平台跨组织的人工智能威胁情报共享及人工智能模型协同演进机制尚未建立,可能影响人工智能系统对安全风险的主动防御和协同共治能力。【结论】研究成果为人工智能安全治理提供了理论框架和技术路径,对促进人工智能安全发展具有重要参考价值。

关键词: 人工智能, 安全风险, 安全防护, 模型

Abstract:

[Objective] This paper analyzes the risks and challenges facing artificial intelligence (AI) security and, addressing the deficiencies of existing protection systems in systematic structure, lifecycle management, and security capability assessment, constructs a multi-dimensional, hierarchical AI security protection system model. [Methods] The model reorganizes the protection technology system into four aspects: network security, data security, information security, and application security. It incorporates the lifecycle management of AI security and proposes a four-level capability system consisting of basic protection, perception and monitoring, active defense, and collaborative governance. [Results] The model has been validated in applications such as AI security assessment and testing, AI safety guardrail design, and security capability lifecycle management. Compared with traditional single-dimension protection systems, it achieves full-lifecycle, multi-dimensional security coverage of AI systems and makes protection significantly more targeted and operable. [Limitations] At present, mechanisms for cross-platform, cross-organization AI threat intelligence sharing and collaborative AI model evolution have not yet been established, which may limit the active defense and collaborative governance capabilities of AI systems against security risks. [Conclusions] The research provides a theoretical framework and a technical path for AI security governance and offers an important reference for promoting the secure development of AI.

Key words: artificial intelligence, security risk, security protection, model