Domesticating AI can be approached from the following angles:
Developing these stacked AI models with latent-space hierarchies (simplified maps of complex data that help AI models grasp patterns and relationships) will mirror understanding of, or predictive power over, each fundamental element. I believe this may initially parallel human education and its paradigms, but over time it is likely to develop along specialized lines, cultivating new kinds of expertise in AI learning. These stacked models may develop in ways analogous to the human cortex. But where humans have a visual cortex and a motor cortex, AI might have a biology cortex and a drug-design cortex: in both cases, neural architectures purpose-built for specific tasks. Ironically, creating an AI that specializes in a particular field such as healthcare may be easier than creating something closer to HAL 9000, with typical human-level knowledge across domains. In practice, we need domain-specific expert AIs more than we need a general-purpose AI that can do anything an average person can. I expect we will create not one expert AI but many, taking diverse approaches to code, data, and testing, so that when needed these models can offer a second (or third, or fourth) opinion.

At the same time, we must lift AI off its online foundations and drop it into the world of atoms. We should equip our most skilled human experts with wearables that capture the subtle, real-world interactions for AI to learn from, just as we cultivate our rising academic and industry stars. The most complex and uncertain problems in health and medicine simply do not live in the world of bits. These expert AIs must be exposed to the diverse perspectives of top practitioners, to avoid replicating dangerous biases. But AI is far less of a black box than the public imagines; the human decision-making we rely on today is, as I have [previously argued](https://www.nytimes.com/2018/01/25/opinion/artificial-intelligence-black-box.html), arguably more opaque. We must not let fear of propagating human bias curb our willingness to explore how AI can help us democratize the knowledge of our human experts, expertise that unfortunately does not scale.
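The "second opinion" idea above can be sketched concretely as a simple majority-vote ensemble: several independently built expert models answer the same question, and an answer is accepted only when enough of them agree. A minimal sketch in Python; the three "experts" here are stand-in functions (my assumption for illustration), where in practice each would be a separately trained domain-specific system.

```python
# Sketch of the "second opinion" pattern: poll several independently built
# expert models and require agreement before accepting an answer.
from collections import Counter

def consensus_opinion(question, experts, quorum=2):
    """Ask each expert the same question; return the majority answer
    if at least `quorum` experts gave it, otherwise flag for review."""
    answers = [expert(question) for expert in experts]
    answer, count = Counter(answers).most_common(1)[0]
    if count >= quorum:
        return answer
    return "disagreement: refer to a human specialist"

# Stand-in experts with different training approaches (illustrative only).
expert_a = lambda q: "benign"
expert_b = lambda q: "benign"
expert_c = lambda q: "malignant"

print(consensus_opinion("classify lesion in scan", [expert_a, expert_b, expert_c]))
# prints "benign": two of the three experts agree, meeting the quorum
```

Raising `quorum` to the full panel size turns this into a unanimity check, which is the conservative setting one would want for high-stakes medical use.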
31. Regulation can increase innovation by giving businesses the incentive to solve important problems while addressing the risk of harm to citizens. For example, product safety legislation has increased innovation towards safer products and services.[68] In the case of AI, a context-based, proportionate approach to regulation will help strengthen public trust and increase AI adoption.[69]

32. The National AI Strategy set out our aim to regulate AI effectively and support innovation.[70] In line with the principles set out in the Plan for Digital Regulation,[71] our approach to AI regulation will be proportionate, balancing real risks against the opportunities and benefits that AI can generate. We will maintain an effective balance as we implement the framework by focusing on the context and outcomes of AI.

33. Our policy paper proposed a pro-innovation framework designed to give consumers the confidence to use AI products and services, and provide businesses the clarity they need to invest in AI and innovate responsibly.[72] This approach was broadly welcomed, particularly by industry. Based on feedback, we have distilled our aims into three objectives that our framework is designed to achieve:

- Drive growth and prosperity by making responsible innovation easier and reducing regulatory uncertainty. This will encourage investment in AI and support its adoption throughout the economy, creating jobs and helping us to do them more efficiently. To achieve this objective we must act quickly to remove existing barriers to innovation and prevent the emergence of new ones. This will allow AI companies to capitalise on early development successes and achieve long term market advantage.[73] By acting now, we can give UK innovators a headstart in the global race to convert the potential of AI into long term advantages for the UK, maximising the economic and social value of these technologies and strengthening our current position as a world leader in AI.[74]

Footnotes:
[68] The impact of regulation on innovation, Nesta, 2012.
[69] Public expectations for AI governance (transparency, fairness and accountability), Centre for Data Ethics and Innovation, 2023.
What is AI? For a humanities student with no STEM background, pinning down what "AI" actually is turns out to be quite difficult (Agents, AIGC, LLMs, symbolism, semantic rules... it all blurs together), so the most practical move is to treat AI as a black box: all we need to know is that AI is something that mimics human thinking, understands natural language, and outputs natural language. How it understands is, for our purposes, unimportant.

From there, we notice a curious resemblance between driving AI tools and the Taoist practice of commanding gods, compelling ghosts, binding spirits and dispatching generals (驱神役鬼拘灵遣将): both use specific texts and ritual procedures to invoke existing resources, directing a non-human entity that can understand human writing, in some way and to some degree, toward a preset effect; and both must reckon with the tool breaking its bounds (going mad). Readers unfamiliar with Taoism can think of it instead as a magical genie or an ensouled artifact, something that understands human language but is not human. In short, AI's ecological niche is that of a humanlike yet non-human being, and even if AI technology explodes another ten-thousand-fold, that niche will not change.

From this, we can mine the legends of every civilization and the wisdom of the ancient philosophers for principles on dealing with AI, gods, genies, demons, and other such humanlike non-human beings today:

1. When you want it to fulfill a wish, then because of its "non-human" side, you must compress its degrees of freedom as far as possible through language, that is, through sufficiently clear instructions:
   (1) Tell it clearly not only what to do, but also where the boundaries lie.
   (2) Tell it clearly not only what the goal is, but also which path and method to use to reach it.
   (3) Tell it clearly not only the path, but ideally hand it the correct knowledge it will need as well.
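The three principles above map directly onto how one can structure a prompt for a language model. A minimal sketch in Python, assuming nothing beyond string formatting; the function name, fields, and the sample task are all illustrative assumptions, not from the original:

```python
# Sketch: "compressing the degrees of freedom" of an AI tool by stating
# the task, its boundaries, the required method, and the reference
# knowledge, one slot per principle. All names here are illustrative.

def build_constrained_prompt(task: str, boundaries: list[str],
                             method: str, knowledge: str) -> str:
    """Compose a prompt covering principles (1)-(3) above."""
    boundary_lines = "\n".join(f"- {b}" for b in boundaries)
    return (
        f"Task: {task}\n"                       # (1) what to do
        f"Boundaries:\n{boundary_lines}\n"      # (1) where to stop
        f"Method: {method}\n"                   # (2) which path to take
        f"Reference knowledge:\n{knowledge}\n"  # (3) the correct facts
        "Answer using ONLY the reference knowledge above."
    )

prompt = build_constrained_prompt(
    task="Summarize the side effects of drug X for a patient leaflet.",
    boundaries=["Do not give dosage advice.",
                "Do not speculate beyond the provided text."],
    method="List each side effect with its frequency, most common first.",
    knowledge="(excerpt from the official product monograph)",
)
print(prompt)
```

The point of the template is exactly the essay's point: every slot left empty is a degree of freedom the "spirit" will fill on its own.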