The following are some case analyses of AI-enabled fraud:
- Require that developers of the most powerful AI systems share their safety test results and other critical information with the U.S. government. In accordance with the Defense Production Act, the Order will require that companies developing any foundation model that poses a serious risk to national security, national economic security, or national public health and safety must notify the federal government when training the model, and must share the results of all red-team safety tests. These measures will ensure AI systems are safe, secure, and trustworthy before companies make them public.
- Develop standards, tools, and tests to help ensure that AI systems are safe, secure, and trustworthy. The National Institute of Standards and Technology will set the rigorous standards for extensive red-team testing to ensure safety before public release. The Department of Homeland Security will apply those standards to critical infrastructure sectors and establish the AI Safety and Security Board. The Departments of Energy and Homeland Security will also address AI systems' threats to critical infrastructure, as well as chemical, biological, radiological, nuclear, and cybersecurity risks. Together, these are the most significant actions ever taken by any government to advance the field of AI safety.
- Protect against the risks of using AI to engineer dangerous biological materials by developing strong new standards for biological synthesis screening. Agencies that fund life-science projects will establish these standards as a condition of federal funding, creating powerful incentives to ensure appropriate screening and manage risks potentially made worse by AI.
- Protect Americans from AI-enabled fraud and deception by establishing standards and best practices for detecting AI-generated content and authenticating official content. The Department of Commerce will develop guidance for content authentication and watermarking to clearly label AI-generated content. Federal agencies will use these tools to make it easy for Americans to know that the communications they receive from their government are authentic, and set an example for the private sector and governments around the world.
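The last item turns on a concrete mechanism: authenticating official content. The sketch below illustrates the basic idea by tagging a message with a keyed hash, so a recipient holding the same key can detect tampering or forgery. It is a minimal sketch, not any agency's actual scheme: the key, message, and function names are assumptions, and a real deployment would use asymmetric signatures (so anyone can verify without holding a secret) plus provenance metadata along the lines of C2PA.

```python
import hmac
import hashlib

# Hypothetical shared secret held by the issuing agency (assumption for this
# sketch); a real system would use an asymmetric scheme such as Ed25519 so
# the public can verify messages without ever holding a signing key.
SECRET_KEY = b"agency-signing-key-demo-only"

def tag_message(message: bytes) -> str:
    """Produce an authentication tag for an official communication."""
    return hmac.new(SECRET_KEY, message, hashlib.sha256).hexdigest()

def verify_message(message: bytes, tag: str) -> bool:
    """Check that a received message carries a valid tag."""
    expected = hmac.new(SECRET_KEY, message, hashlib.sha256).hexdigest()
    # compare_digest avoids timing side channels during comparison.
    return hmac.compare_digest(expected, tag)

if __name__ == "__main__":
    msg = b"Official notice: filing deadline extended to May 1."
    tag = tag_message(msg)
    print(verify_message(msg, tag))                # True: untampered message
    print(verify_message(b"Forged notice.", tag))  # False: tag does not match
```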
5.2.1 The Crime of Fraud

The first documented telecom fraud case anywhere in the world to use AI voice-cloning technology occurred in 2019. In March 2019, the chief executive of an international energy company received a call from an unknown number. He quickly recognized the person on the other end: the CEO of the German parent company, his direct superior. On the call, the parent company's CEO said the firm was facing an operational crisis and urgently needed subsidiaries around the world to provide €220,000 (roughly US$243,000) in emergency funding. He then supplied a Hungarian bank account, explaining that the money had to be paid directly to a Hungarian supplier and that the parent company would reimburse the subsidiary once its own funds cleared. Although the request sounded both irregular and dubious, and involved a large sum, the victim deliberated and chose to comply, for a simple reason: the voice on the phone was unmistakably his superior's, down to the boss's signature slight German accent and habitual turns of phrase. So, despite lingering doubts, the victim wired the money to the Hungarian account given over the phone. The funds then flowed from Hungary to Mexico and were dispersed from there; as of May 2023, the money had still not been recovered.
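A common automated defense against this kind of voice impersonation is speaker verification: compare an embedding of the caller's voice against an enrolled voiceprint and reject calls that fall below a similarity threshold. The sketch below shows only the matching arithmetic; the embedding model is hypothetical and stood in for by random vectors, and the 0.75 threshold is an assumed value. Note the limitation the case itself exposes: a high-quality clone can score close to the genuine voiceprint, which is why out-of-band checks, such as calling back on a known number, remain essential.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two speaker-embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def same_speaker(emb_enrolled: np.ndarray, emb_call: np.ndarray,
                 threshold: float = 0.75) -> bool:
    """Accept the caller only if the call embedding is close enough to the
    enrolled voiceprint. The threshold is an assumed value; real systems
    calibrate it on held-out data to balance false accepts and rejects."""
    return cosine_similarity(emb_enrolled, emb_call) >= threshold

# A real pipeline would compute embeddings with a speaker model (e.g., an
# x-vector or ECAPA-TDNN network); here we fake them with random vectors.
rng = np.random.default_rng(0)
enrolled = rng.normal(size=192)                        # enrolled voiceprint
genuine = enrolled + rng.normal(scale=0.1, size=192)   # same speaker, new call
imposter = rng.normal(size=192)                        # unrelated voice

print(same_speaker(enrolled, genuine))   # True: embeddings nearly collinear
print(same_speaker(enrolled, imposter))  # False: independent random vectors
```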
| Dimension | Common ground | Human cognitive bias | AI hallucination |
|-|-|-|-|
| Essence | A distortion of information | "Shortcuts" the brain takes when processing information to conserve cognitive resources; these shortcuts improve efficiency but easily lead to distortion and misjudgment of information | The model's over-reliance on statistical patterns in its training data, which leaves it unable to understand and generate information accurately in new situations, ultimately producing output that does not match the real world |
| Manifestation | Varied and hard to detect | Confirmation bias (attending only to information that supports one's own views), availability bias (recalling recent or vivid information more easily), anchoring effect (over-relying on the first information obtained) | Generating people, places, or events that do not exist, or describing known facts incorrectly |
| Causes | Both rooted in experience and knowledge | Shaped by personal upbringing, cultural background, knowledge structure, and so on; different experience and knowledge produce different cognitive patterns, leading people to interpret the same information differently | Tied to the quality of the training data and to the model's architecture and training strategy; if the training data contains biases or errors, the model learns them and reproduces them in generated content |
| Impact | Can lead to wrong decisions | Can lead to poor judgments and choices in daily life; for example, an investor swayed by availability bias may overestimate a recent upward trend in the stock market and make a poor investment decision | Can mislead users, spread false information, and even cause safety incidents; for example, a medical-diagnosis AI that hallucinates may return a wrong diagnosis and delay a patient's treatment |
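The impact row points to a practical mitigation: check model output against trusted sources before it reaches users. The toy sketch below illustrates that grounding idea with a naive word-overlap test; the reference corpus, the is_grounded helper, and the 0.6 threshold are all assumptions for illustration, and production systems rely on retrieval plus entailment models rather than keyword overlap.

```python
from typing import List

def is_grounded(claim: str, reference_facts: List[str],
                threshold: float = 0.6) -> bool:
    """Naive grounding check: accept a claim only if enough of its content
    words appear in at least one trusted reference sentence."""
    claim_words = {w.strip(".,").lower() for w in claim.split() if len(w) > 3}
    if not claim_words:
        return False
    for fact in reference_facts:
        fact_words = {w.strip(".,").lower() for w in fact.split()}
        # Fraction of the claim's content words supported by this fact.
        if len(claim_words & fact_words) / len(claim_words) >= threshold:
            return True
    return False

# Toy trusted corpus (an assumption for illustration).
facts = ["The first reported AI voice-cloning fraud occurred in March 2019."]

print(is_grounded("The first AI voice-cloning fraud occurred in 2019.", facts))    # True
print(is_grounded("AI voice fraud was first reported in 1985 in Canada.", facts))  # False
```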