$25 Million Stolen with Ease: How Do We Defend Against the AI Deepfake Crisis?


In recent months, AI-related incidents have surfaced one after another, and headlines like these have become routine: an AI leaked company code that could not be deleted; AI software violated users' facial-data privacy; a large-model product was found to be exposing private conversations; and so on.

Yet a deeper problem was quietly taking shape well before the large-model boom: a deep-forgery technique known as Deepfake emerged, using deep learning to seamlessly splice anyone in the world into videos or photos of events they never actually took part in, and making "AI face swapping" a household phrase back in 2018.

Even today, in 2024, the threat posed by Deepfake has yet to be effectively contained.

AI-powered fraud is already a reality, and a worrying one.

The 1995 animated film Ghost in the Shell depicted exactly this scenario: an artificial intelligence called the Puppet Master impersonates other identities to confuse and manipulate human minds and behavior, an ability that makes it untraceable across networks and virtual space. That plot exists only in the movies, but if you have long taken the warnings of Elon Musk or Sam Altman to heart, it is hard not to imagine that an AI left unchecked could evolve into a Skynet-style threat to humanity.

The Puppet Master in Ghost in the Shell

Fortunately, this fear is not widely shared among AI developers themselves. Many academic heavyweights, such as Turing Award winner Joseph Sifakis, have openly dismissed the idea in interviews. We do not yet have an AI that can drive safely on public roads, let alone one that rivals human intelligence.

But that by no means makes AI harmless.

AI may well become one of the most destructive tools humanity has ever created. A finance executive who was recently defrauded with the help of AI knows this all too well: the loss came to US$25 million, and the case took place not on the other side of the ocean but in the nearby Hong Kong Special Administrative Region. Faced with this kind of AI-enabled crime, do we have any countermeasures?

A Heist in Plain Sight

On February 5, Hong Kong broadcaster RTHK reported that an unnamed finance director had received an email purportedly from a colleague in the UK proposing a confidential transaction. The director was no pushover and at first suspected a phishing attempt.

The sender, however, insisted on a Zoom video call, and when the Hong Kong finance director joined it, he saw and heard the faces and voices of several colleagues he knew well. His doubts dispelled, they set the transaction in motion, and on the instructions of the "UK colleague" he ultimately wired HK$200 million (roughly RMB 184 million at current rates) in 15 transfers to five local bank accounts.

Only when another employee checked with headquarters did anything seem amiss: the UK office had never initiated the transaction, nor had it received any money. One can imagine the director's remorse and guilt on learning he had been conned. The investigation found that the only real participant in that video call was the Hong Kong finance director himself; everyone else was a Deepfake simulation of the UK finance chief and other colleagues.

With nothing more than a small amount of publicly available data, a basic AI toolkit, and some cunning social engineering, the scammers walked away with enough cash to buy a luxury superyacht.

Some may scoff at this story and conclude that the finance director was simply careless; after all, the Deepfake parody clips we see on Douyin or Bilibili are usually easy to spot and rarely convincing.

Reality is not that simple. Recent research shows that people cannot reliably identify Deepfakes, meaning most of us judge what we see and hear by gut feeling alone. More unsettling still, the research found that people are more likely to mistake a Deepfake video for the real thing than to dismiss it as fake. In other words, today's Deepfake technology is already good enough to fool an audience with ease.

Deep-learning + Fake

The word Deepfake is a portmanteau of "deep learning" and "fake".

The core ingredient of Deepfake technology is therefore machine learning, which makes it possible to produce convincing forgeries quickly and cheaply. To create a Deepfake video of someone, the creator first trains a neural network on large amounts of real footage of that person, giving it a realistic "understanding" of how the person looks from multiple angles and under different lighting. The trained network is then combined with computer-graphics techniques to superimpose a copy of that person onto someone else.
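To make that pipeline concrete, below is a minimal, illustrative sketch of the shared-encoder, per-identity-decoder autoencoder popularized by early face-swap tools. The crop size, layer widths, and training loop are assumptions for illustration, not any particular product's implementation; real pipelines also need face detection, alignment, and compositing.

```python
# Minimal sketch of the classic face-swap setup: one shared encoder, one decoder
# per identity. Train each decoder to reconstruct its own person's face crops;
# at swap time, feed person A's crops through the shared encoder and decode them
# with person B's decoder. Shapes and hyperparameters are illustrative.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(),    # 64x64 -> 32x32
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 32x32 -> 16x16
            nn.Flatten(),
            nn.Linear(128 * 16 * 16, 512),                          # latent code
        )
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(512, 128 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),  # 16 -> 32
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Sigmoid(), # 32 -> 64
        )
    def forward(self, z):
        x = self.fc(z).view(-1, 128, 16, 16)
        return self.net(x)

encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()   # one decoder per identity

opt = torch.optim.Adam(
    list(encoder.parameters()) + list(decoder_a.parameters()) + list(decoder_b.parameters()),
    lr=1e-4,
)
loss_fn = nn.L1Loss()

def train_step(faces_a, faces_b):
    """faces_a / faces_b: batches of 64x64 RGB face crops of person A / person B."""
    opt.zero_grad()
    loss = loss_fn(decoder_a(encoder(faces_a)), faces_a) + \
           loss_fn(decoder_b(encoder(faces_b)), faces_b)
    loss.backward()
    opt.step()
    return loss.item()

# After training, the "swap" is simply decoding A's latent code with B's decoder,
# then compositing the result back into the original frame:
# swapped = decoder_b(encoder(faces_a))
```

Because both decoders share one encoder, the latent code ends up capturing pose and expression while each decoder supplies a specific identity, which is what makes the swap look natural.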

Many believe that a class of deep-learning algorithms called generative adversarial networks (GANs) will become the main engine of future Deepfake development. Faces generated by GANs are nearly indistinguishable from real ones. The first survey of the Deepfake field devoted an entire chapter to GANs, anticipating that they would let anyone produce sophisticated Deepfake content.
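For contrast with the autoencoder approach above, here is a compact sketch of the adversarial game that drives a GAN face generator. The architecture and hyperparameters are illustrative assumptions; the GANs behind photorealistic fake faces are far larger and trained with many additional tricks.

```python
# Minimal sketch of a GAN: the discriminator D learns to tell real face crops
# from generated ones, while the generator G learns to fool D. Sizes and the
# training loop are illustrative, not any particular Deepfake tool's design.
import torch
import torch.nn as nn

G = nn.Sequential(                      # noise vector -> fake 64x64 RGB face
    nn.Linear(100, 128 * 16 * 16), nn.ReLU(),
    nn.Unflatten(1, (128, 16, 16)),
    nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.ReLU(),
    nn.ConvTranspose2d(64, 3, 4, 2, 1), nn.Tanh(),
)
D = nn.Sequential(                      # face crop -> probability "real"
    nn.Conv2d(3, 64, 4, 2, 1), nn.LeakyReLU(0.2),
    nn.Conv2d(64, 128, 4, 2, 1), nn.LeakyReLU(0.2),
    nn.Flatten(), nn.Linear(128 * 16 * 16, 1), nn.Sigmoid(),
)
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

def gan_step(real_faces):
    """One adversarial update; real_faces is a batch of 64x64 RGB crops in [-1, 1]."""
    batch = real_faces.size(0)
    fake_faces = G(torch.randn(batch, 100))

    # Discriminator: push real -> 1, fake -> 0.
    opt_d.zero_grad()
    d_loss = bce(D(real_faces), torch.ones(batch, 1)) + \
             bce(D(fake_faces.detach()), torch.zeros(batch, 1))
    d_loss.backward()
    opt_d.step()

    # Generator: make fakes that D scores as real.
    opt_g.zero_grad()
    g_loss = bce(D(fake_faces), torch.ones(batch, 1))
    g_loss.backward()
    opt_g.step()
    return d_loss.item(), g_loss.item()
```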

In late 2017, a Reddit user going by "deepfakes" grafted the faces of well-known film stars and even politicians onto pornographic videos and posted them online. The fakes spread across social networks almost instantly, and a flood of new Deepfake videos followed, while the corresponding detection techniques did not begin to appear until several years later.

There are positive examples too. Audiences first became familiar with this kind of technology on the big screen when the late actor Paul Walker was "resurrected" in Furious 7, and the 2023 New Year blockbuster The Wandering Earth 2 used CG to bring back the much-loved "Uncle Tat", Ng Man-tat. In the past, though, effects like these took a whole studio of specialists a year to produce.

Today the process is far faster: even someone with no technical background can produce an "AI face swap" video in a matter of minutes.

Concern over Deepfakes has triggered a surge of countermeasures. As early as 2020, social media platforms including Meta and X banned this kind of Deepfake content on their networks, and major computer-vision and graphics conferences now invite experts to present defensive techniques.

Now that Deepfake criminals have begun deceiving the public at scale, how do we fight back?

Multiple Lines of Defense Against Deepfake Risk

The first strategy is to "poison" the AI. To build a convincing Deepfake, an attacker needs a large volume of data: many photos of the target, plenty of video showing their changing facial expressions, and clean voice samples. Most Deepfake pipelines use material that individuals or companies have shared publicly on social media.

Tools such as Nightshade and PhotoGuard, however, can now alter these files in ways imperceptible to humans and thereby break Deepfakes. Nightshade, for instance, applies misleading perturbations that cause an AI to misread the facial regions in a photo, and that misreading is exactly what disrupts the learning process of the AI building the Deepfake.
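As a rough illustration of the poisoning idea, the sketch below adds a small, bounded perturbation that pushes a face-recognition model's embedding of a photo away from the person's true embedding while staying visually unnoticeable. This is not Nightshade's or PhotoGuard's actual algorithm, and face_encoder is a hypothetical stand-in for any pretrained embedding model.

```python
# Generic sketch of image "poisoning" against a face-recognition model: nudge the
# pixels (within a small L-infinity budget) so the model's embedding of the photo
# drifts away from the original, while the change stays invisible to a human.
# `face_encoder` is a stand-in for any pretrained embedding model; this is NOT
# the actual Nightshade/PhotoGuard algorithm.
import torch

def poison_image(image, face_encoder, epsilon=4 / 255, steps=40, step_size=1 / 255):
    """image: (1, 3, H, W) tensor in [0, 1]; returns a perturbed copy."""
    with torch.no_grad():
        target_embedding = face_encoder(image)        # embedding of the clean photo

    poisoned = image.clone().detach()
    for _ in range(steps):
        poisoned.requires_grad_(True)
        # Maximize the distance between the perturbed embedding and the original one.
        loss = torch.nn.functional.mse_loss(face_encoder(poisoned), target_embedding)
        grad, = torch.autograd.grad(loss, poisoned)
        with torch.no_grad():
            poisoned = poisoned + step_size * grad.sign()                    # gradient ascent
            poisoned = image + (poisoned - image).clamp(-epsilon, epsilon)   # stay in budget
            poisoned = poisoned.clamp(0.0, 1.0)                              # stay a valid image
    return poisoned.detach()
```

The small epsilon budget is what keeps the edit invisible to people while still being large enough, in the model's feature space, to derail training on the protected photo.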

Applying this kind of protection to every photo and video you or your company posts online can effectively block Deepfake cloning. The approach is not foolproof, though; it is more of a long war. As AI gets better at recognizing tampered files, anti-Deepfake tools will have to keep developing new methods to stay effective.

Building a sturdier line of defense means no longer relying on a single, easily defeated form of identity verification. In the case above, the finance director treated the video call as absolute proof of identity and took no extra steps, such as phoning headquarters or someone else at the UK branch to verify. In fact, some tools already use private-key cryptography to make online identity verification trustworthy. Adopting multiple verification steps dramatically reduces the odds of this kind of fraud and is something every company should put in place immediately.
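The private-key idea can be as simple as a challenge-response check: the remote colleague proves they hold a secret key that a Deepfaked face or voice cannot forge. Below is a minimal sketch using Ed25519 signatures from the Python cryptography library; key distribution, replay protection, and expiry are deliberately left out, and the function names are illustrative rather than any specific product's API.

```python
# Minimal challenge-response sketch of key-based identity verification: the
# remote colleague proves they hold a private key whose public half was shared
# beforehand over a trusted channel. Key management and replay protection are
# omitted; function names are illustrative.
import os
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)
from cryptography.exceptions import InvalidSignature

# One-time setup: the colleague generates a key pair and registers the public key.
colleague_key = Ed25519PrivateKey.generate()
registered_public_key = colleague_key.public_key()

def issue_challenge() -> bytes:
    """Verifier sends a fresh random challenge, e.g. over the meeting chat."""
    return os.urandom(32)

def respond(private_key: Ed25519PrivateKey, challenge: bytes) -> bytes:
    """The real colleague signs the challenge with their private key."""
    return private_key.sign(challenge)

def verify(public_key: Ed25519PublicKey, challenge: bytes, signature: bytes) -> bool:
    """Verifier checks the signature against the registered public key."""
    try:
        public_key.verify(signature, challenge)
        return True
    except InvalidSignature:
        return False

challenge = issue_challenge()
signature = respond(colleague_key, challenge)
print(verify(registered_public_key, challenge, signature))   # True only for the real colleague
```

A check like this takes seconds during a video call and, unlike a familiar face or voice, cannot be synthesized from publicly scraped footage.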

So the next time you are on a video call or pick up the phone to a colleague, relative, or friend, remember that the person on the other side of the screen may not be real at all. And if they ask you to quietly wire US$25 million to a set of bank accounts you have never heard of, the least you can do is call your boss first.

