In the Technology Age, You Can’t Always Trust What You Hear and See

Author: Sourya Biswas, CISSP, CISA, CISM, CCSP, CRISC, CGEIT, Technical Director, NCC Group
Published: April 27, 2021

It seems that any mention of technological advancement can also unearth a case of misuse. From airplanes that revolutionized global business being used for terrorist attacks, to gene technology that can produce both targeted medicines and bioweapons, technology has always been a double-edged sword.

The internet itself is a living example of the perils of technology misuse. Originally built to facilitate communications, its very architecture has opened it up to exploitation by malicious actors. A case in point: phishing. Before the advent of the internet and webmail, a con man would have to physically mail letters to get enough people to respond and ultimately defraud them, a venture requiring significant effort, expense and risk. Today, all it takes is a mailing list and mail merge to automate the generation of fraudulent messages. Similarly, a bank robber once had to brave guards, guns and safes to get his hands on anything substantial. Today, a hacker can compromise a bank’s systems from a non-extradition country on the other side of the world and net a hundred times that amount.

Artificial intelligence (AI) is an emerging technology that may well define this century of human development, and yet another technology wide open to misuse.

The use of AI and Deepfake technology in business email compromise attacks
While we’re still far from Hollywood doomsday scenarios like Terminator’s Skynet, the ability of technology to mimic humans has already been leveraged by criminals in business email compromise (BEC) attacks. A BEC is a sophisticated scam targeting both businesses and individuals who perform wire transfer payments, one that netted a staggering US$12 billion from 2013 to 2018, according to the Federal Bureau of Investigation (FBI).

Historically, this is how a typical BEC attack worked: An attacker sends an email (purportedly from a senior executive) to a company’s accounting department, asking them to wire money to a fraudulent account. Accounting has no reason to suspect the email is illegitimate, and therefore sends the wire. It can happen just that quickly, as in the case of Shark Tank star Barbara Corcoran. The same scenario can be replicated for a real-estate escrow firm. In this case, the hacker impersonates an escrow employee, sending fraudulent payment instructions to the property buyer. Since the buyer is expecting to wire a payment for the impending purchase, he or she may not confirm before transferring funds to the fraudster’s account. This is an attack that has succeeded time and time again. However, if the recipient of the email requesting payment spoke with the executive or escrow firm employee over the phone before the transaction was initiated, the fraud could be uncovered. With AI in a cyberattacker’s arsenal, that is no longer the case.

In August 2019, The Wall Street Journal reported the case of a UK energy company’s CEO receiving an email (supposedly from his boss, the CEO of the German parent company) asking that €220,000 ($243,000) be wired to a Hungarian supplier. The email was immediately followed by a phone call that reiterated the instructions in the CEO’s voice. The voice was later found to have been mimicked using AI software that could “imitate the voice, and not only the voice: the tonality, the punctuation, the German accent.”

Nor was this an isolated attack. According to Symantec, the same type of attack has happened at least three times recently. Considering the general reluctance of many organizations to disclose cyberattacks, the actual number is likely higher.

Some recommendations for guarding against Deepfake voice fraud
This is an evolving situation, but there are steps that can be taken to combat such use of “Deepfake” voice fraud:

  • The recipient should initiate the call and not take a received call at face value. Unless the impersonated person’s phone has been compromised, a call to that person can uncover the truth. In fact, in the example above, a second fraudulent fund transfer request was thwarted when the UK CEO actually called his boss.
  • The recipient should insist on a video conversation. Note that this is not foolproof: while Deepfake voice technology is currently more mature than its video counterpart, the latter is catching up fast.
  • Similar to multifactor authentication, mechanisms should be established to allow independent channels of verification. Some options are internal chat (e.g., Slack, Skype), predetermined code words or phrases (every fund transfer request must include them), etc.; a rough sketch of one such mechanism follows this list.
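
To make the independent-verification idea concrete, below is a minimal sketch in Python. It illustrates the “predetermined code words” option with a stronger, tamper-evident variant: a secret shared outside email is used to tag and check each fund transfer request. The secret value and the helper names are hypothetical and purely illustrative; this is one possible approach, not a prescribed solution.

```python
import hmac
import hashlib

# Pre-shared secret, distributed out of band (e.g., in person or via a separate
# secure channel), never sent over email. The value here is a placeholder.
SHARED_SECRET = b"rotate-me-regularly"

def sign_transfer_request(request_id: str, amount: str, beneficiary: str) -> str:
    """Compute an HMAC-SHA256 tag over the transfer details using the pre-shared secret."""
    message = f"{request_id}|{amount}|{beneficiary}".encode()
    return hmac.new(SHARED_SECRET, message, hashlib.sha256).hexdigest()

def verify_transfer_request(request_id: str, amount: str, beneficiary: str, tag: str) -> bool:
    """Recompute the tag and compare in constant time; any tampering breaks the match."""
    expected = sign_transfer_request(request_id, amount, beneficiary)
    return hmac.compare_digest(expected, tag)

# The requester attaches the tag; accounting verifies it before wiring any funds.
tag = sign_transfer_request("REQ-0417", "220000 EUR", "HU-supplier-account")
print(verify_transfer_request("REQ-0417", "220000 EUR", "HU-supplier-account", tag))  # True
print(verify_transfer_request("REQ-0417", "999999 EUR", "attacker-account", tag))     # False
```

The point of such a check is that approval depends on something an attacker cannot obtain by compromising the email channel or mimicking a voice.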

As this threat becomes more mainstream, I expect the good guys to step up and devise effective countermeasures. Similar to anti-malware that can detect malicious code, specialized software should be able to detect Deepfakes. In fact, several top technology companies have already started collaborating in this area. However, one thing is certain: the human ear and eye can and will be fooled as AI becomes more advanced. Therefore, be aware that you cannot always trust what you hear or see.
