The Human and AI Partnership: Collaborating to Strengthen Cybersecurity

Information Security
Author: Jay Allen, CTP, MBCS
Date Published: 6 October 2023

Over the past decade, artificial intelligence (AI) has transitioned from an emerging technology to one firmly embedded in daily life. AI assistants such as Siri and Alexa are commonplace, and AI drives many of the product recommendations and social media feeds people encounter online. However, AI is also poised to play an increasingly important role in cybersecurity. Both generative AI and precision AI hold promise in helping organizations defend against ever more sophisticated cyberattacks.1

Generative AI refers to AI systems that can produce new content, such as text, code, images or video, based on their training data.2 The most prominent example today is ChatGPT, which can generate human-like text in response to prompts. This type of AI could be invaluable for cybersecurity teams, for example by automatically drafting threat intelligence reports, policies and other documentation that security analysts must write manually today. Generative AI may also have defensive uses, such as automatically generating benign decoy content to confuse and divert attackers who are themselves leveraging AI.
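To make the report-drafting idea concrete, the following minimal Python sketch assumes access to an OpenAI-compatible chat completion API. The model name, prompt wording and example indicators are illustrative assumptions rather than a recommended workflow, and any draft produced this way would still require analyst review.

    # Minimal sketch: drafting a threat intelligence summary with a generative model.
    # Assumes the OpenAI Python SDK and an API key in the environment; the model
    # name and prompt are illustrative placeholders, not a prescribed workflow.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    indicators = [
        "Multiple failed logins from 203.0.113.7 followed by a successful login",
        "Outbound transfer of 2.3 GB to an unrecognized external host",
    ]

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name; substitute whatever is available
        messages=[
            {"role": "system",
             "content": "You are a security analyst. Draft a concise threat intelligence summary."},
            {"role": "user",
             "content": "Observed indicators:\n- " + "\n- ".join(indicators)},
        ],
    )

    draft_report = response.choices[0].message.content
    print(draft_report)  # an analyst reviews and edits the draft before publication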

Precision AI aims to give AI systems greater accuracy, consistency and applicability than traditional AI approaches. These capabilities could significantly enhance threat detection and response. Rather than simply producing a score indicating how likely an activity is to be malicious, precision AI systems can provide explanations and evidence to justify their outputs. This enables security teams to verify the logic behind AI verdicts rather than unquestioningly trusting opaque models. Explainable AI models may also uncover biases or gaps in training data, which could lead to improved performance over time.
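As a minimal sketch of this "score plus evidence" idea, the toy example below pairs a logistic regression verdict with per-feature contributions so an analyst can see which signals drove the score. The feature names and training data are invented purely for illustration.

    # Minimal sketch: pairing a maliciousness score with a simple per-feature
    # explanation, so an analyst can see why an alert scored the way it did.
    # Feature names and training data are illustrative assumptions.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    feature_names = ["failed_logins", "bytes_out_mb", "off_hours_activity"]

    # Toy training data: rows are sessions, label 1 = confirmed malicious.
    X_train = np.array([[1, 5, 0], [0, 2, 0], [12, 800, 1], [9, 450, 1]])
    y_train = np.array([0, 0, 1, 1])

    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

    session = np.array([[10, 600, 1]])  # a new session to score
    score = model.predict_proba(session)[0, 1]

    # Per-feature contribution to the log-odds: coefficient * feature value.
    contributions = model.coef_[0] * session[0]
    print(f"Malicious probability: {score:.2f}")
    for name, contrib in sorted(zip(feature_names, contributions),
                                key=lambda pair: -abs(pair[1])):
        print(f"  {name}: contribution {contrib:+.2f}")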

Together, the synergies between generative and precision AI could automate significant portions of cybersecurity workflows and drastically expand security teams' capabilities. Analysts can use AI to handle tedious, repetitive tasks such as report writing and reviewing log files for anomalies, allowing them to focus on higher-value investigations and strategic initiatives that improve cyberdefenses. AI can also help security teams become more proactive. For example, generative AI could identify policy loopholes or gaps, while precision AI models could preemptively detect insider threats based on early warning signs.
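The following sketch illustrates the kind of tedious log review that could be offloaded to AI: an unsupervised model flags outliers in summarized log features for human follow-up. The fields, values and contamination setting are assumptions chosen only to demonstrate the pattern, not a production detection pipeline.

    # Minimal sketch: flagging anomalous log summaries for analyst review.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    # Each row summarizes one host-hour from parsed logs:
    # [login_failures, distinct_destinations, megabytes_uploaded]
    log_features = np.array([
        [0, 3, 1.2],
        [1, 4, 0.8],
        [0, 2, 1.5],
        [0, 3, 0.9],
        [14, 56, 420.0],   # the kind of outlier an analyst would want to see
    ])

    detector = IsolationForest(contamination=0.2, random_state=0).fit(log_features)
    flags = detector.predict(log_features)  # -1 = anomaly, 1 = normal

    for row, flag in zip(log_features, flags):
        if flag == -1:
            print("Review:", row)  # queue for human investigation, not auto-block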

However, there are understandable concerns regarding the responsible use of AI in cybersecurity. Generative models such as ChatGPT sometimes produce harmful, biased or misleading content, which is a serious issue if such AI were to generate flawed cyberplans and procedures. At the same time, precision AI relies heavily on training data, which, if not adequately curated, can lead to discriminatory outcomes. There are also worries that over-reliance on AI may cause organizations to become complacent, chiefly because AI can provide a false sense of security.

Despite AI's promise, trust and transparency are essential to its adoption in cybersecurity. Organizations must carefully evaluate generative AI outputs for accuracy and be able to ascertain the reasoning behind precision AI verdicts. AI models should be continuously monitored and refined based on feedback from the security teams that use them. Processes should ensure that security analysts retain active oversight and decision authority when AI is deployed operationally.

With prudent governance and collaboration between technologists and security experts, AI could usher in a new era of enhanced protection against the growing threats faced in cyberspace.

A symbiotic partnership between humans and AI may be key to transforming cybersecurity in an age of increasingly cunning adversaries and sophisticated attacks.3 Cybersecurity leaders must view AI as a complement to, not a replacement for, human insight. Generative AI can extend human creativity and capacity, while precision AI brings greater transparency and focus. Yet responsible oversight and continuous improvement fuelled by human insight remain essential to fulfilling AI's promise.

Security teams must be actively involved in curating the training data used by AI systems and in continuously monitoring their performance after deployment. By evaluating real-world outcomes, analysts can provide feedback to refine algorithmic logic, identify gaps in training data and correct unfair biases or blind spots. Ongoing collaboration and communication between technologists and security experts will help develop AI that augments human analysts as trusted partners rather than merely automating rote tasks.
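One way to operationalize that feedback is to record analyst verdicts alongside model verdicts and track simple quality signals over time. The sketch below uses an assumed, simplified data structure to illustrate the idea; it does not describe any particular product or the author's own workflow.

    # Minimal sketch: recording analyst verdicts on AI-generated alerts so the
    # model can be evaluated and retrained over time. Structures and threshold
    # are illustrative assumptions.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class AlertFeedback:
        alert_id: str
        model_verdict: str      # what the AI concluded, e.g. "malicious"
        analyst_verdict: str    # what the human investigation concluded
        notes: str = ""

    @dataclass
    class FeedbackLog:
        entries: List[AlertFeedback] = field(default_factory=list)

        def record(self, feedback: AlertFeedback) -> None:
            self.entries.append(feedback)

        def false_positive_rate(self) -> float:
            flagged = [e for e in self.entries if e.model_verdict == "malicious"]
            if not flagged:
                return 0.0
            wrong = [e for e in flagged if e.analyst_verdict == "benign"]
            return len(wrong) / len(flagged)

    log = FeedbackLog()
    log.record(AlertFeedback("A-1001", "malicious", "benign", "routine backup job"))
    log.record(AlertFeedback("A-1002", "malicious", "malicious"))

    # A sustained rise in this rate would be a signal to revisit training data.
    if log.false_positive_rate() > 0.3:  # assumed threshold
        print("Flag model for review and retraining")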

Organizations should also design processes for human oversight, decision-making and control of AI, including implementing frameworks for continuously monitoring and improving AI systems based on feedback from security teams.4 Although the recommendations of generative and predictive AI systems can inform human judgment, final decision authority should remain with the security team. Analysts in the field can assess context, apply intuition and make connections that AI currently cannot. Keeping a human in the loop for consequential actions can act as a check against potential AI failures.
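A simple way to keep a human in the loop is an approval gate that pauses consequential actions until an analyst confirms them. The sketch below uses assumed action names and a console prompt purely to illustrate the control point.

    # Minimal sketch: a human-in-the-loop gate for consequential actions. The
    # action names and approval prompt are illustrative assumptions; the point
    # is that an AI recommendation alone never triggers a disruptive response.
    CONSEQUENTIAL_ACTIONS = {"isolate_host", "disable_account", "block_subnet"}

    def execute_response(action: str, target: str, ai_confidence: float) -> None:
        """Carry out a recommended response, pausing for analyst approval when needed."""
        if action in CONSEQUENTIAL_ACTIONS:
            print(f"AI recommends {action} on {target} (confidence {ai_confidence:.0%}).")
            decision = input("Analyst approval required - proceed? [y/N] ").strip().lower()
            if decision != "y":
                print("Action declined; logged for review.")
                return
        print(f"Executing {action} on {target}.")

    # Example: the AI is confident, but a human still makes the final call.
    execute_response("isolate_host", "workstation-42", 0.93)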

With wisdom and foresight, the power of AI can be harnessed to make the digital world a safer place for all. As AI becomes further embedded in cybersecurity workflows, a renewed focus on judicious governance and human-machine collaboration will be essential. Although AI promises to transform cybersecurity, it is still in its early stages. Adopting these emerging technologies prudently, rather than automating blindly, while retaining active human insight will be vital to fulfilling that promise responsibly.


Endnotes

1 Haworth, R.; "AI: Generative AI In Cyber Should Worry Us, Here's Why," 4 August 2023
2 J.P. Morgan; "Is Generative AI a Game Changer?," 20 March 2023
3 Dash, B.; Ansari, M. F.; Sharma, P.; Ali, A.; "Threats and Opportunities With AI-Based Cyber Security Intrusion Detection: A Review," International Journal of Software Engineering & Applications, vol. 13, iss. 5, September 2022
4 Thuraisingham, B.; "AI and Data Science Governance: Roles and Responsibilities at the C-Level and the Board," 2020 IEEE 21st International Conference on Information Reuse and Integration for Data Science (IRI), Las Vegas, NV, USA, 2020, p. 314-318

Jay Allen, CTP, MBCS

Is a seasoned technical leader with a rich background in steering multinational teams and global pre-sales efforts in cybersecurity. He has 20 years of experience within the IT industry across both vendors and private organizations. Allen is passionate about advancing the field of cybersecurity.