Fallenstein's talk at the March 2015 APS meeting


MIRI research fellow Benja Fallenstein recently gave an invited talk at the American Physical Society's March 2015 meeting in San Antonio, Texas. Her talk was one of four in a special session on artificial intelligence.

Fallenstein's talk was titled "Beneficial smarter-than-human intelligence: challenges and the path forward." Slides are available here. The abstract:

Today, human-level machine intelligence is still in the domain of futurism, but there is every reason to expect that it will eventually be developed. A generally intelligent agent as smart or smarter than a human, and capable of improving itself further, would be a system we’d need to design for safety from the ground up: There is no reason to think that such an agent would be driven by human motivations like a lust for power; but almost any goals will be easier to meet with access to more resources, suggesting that most goals an agent might pursue, if they don’t explicitly include human welfare, would likely put its interests at odds with ours, by incentivizing it to try to acquire the physical resources currently being used by humanity. Moreover, since we might try to prevent this, such an agent would have an incentive to deceive its human operators about its true intentions, and to resist interventions to modify it to make it more aligned with humanity’s interests, making it difficult to test and debug its behavior. This suggests that in order to create a beneficial smarter-than-human agent, we will need to face three formidable challenges: How can we formally specify goals that are in fact beneficial? How can we create an agent that will reliably pursue the goals that we give it? And how can we ensure that this agent will not try to prevent us from modifying it if we find mistakes in its initial version? In order to become confident that such an agent behaves as intended, we will not only want to have a practical implementation that seems to meet these challenges, but to have a solid theoretical understanding of why it does so. In this talk, I will argue that even though human-level machine intelligence does not exist yet, there are foundational technical research questions in this area which we can and should begin to work on today.
For example, probability theory provides a principled framework for representing uncertainty about the physical environment, which seems certain to be helpful to future work on beneficial smarter-than-human agents, but standard probability theory assumes omniscience about logical facts; there is no analogous principled framework for representing uncertainty about the outputs of deterministic computations, although any realistic agent will certainly need to deal with this type of uncertainty. I will discuss other examples of ongoing foundational work in this area.
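The contrast the abstract draws can be made concrete with a toy sketch (my own illustration, not material from the talk): Bayesian updating handles empirical uncertainty cleanly, but the truth value of a purely logical claim is already fixed, so classical probability assigns it 0 or 1 even when a bounded reasoner cannot afford to compute it.

```python
from fractions import Fraction

def bayes_update(prior, likelihood_if_true, likelihood_if_false):
    """Standard Bayesian update for an empirical hypothesis."""
    num = prior * likelihood_if_true
    return num / (num + (1 - prior) * likelihood_if_false)

# Empirical uncertainty: credence that a coin is heads-biased,
# after observing one heads. Probability theory handles this well.
posterior = bayes_update(Fraction(1, 2), Fraction(2, 3), Fraction(1, 3))
# posterior == 2/3

# Logical uncertainty: "the units digit of 3**1000 is 1" is a
# deterministic fact, so classical probability says its probability
# is 0 or 1. A bounded reasoner who hasn't run the computation might
# instead spread credence over the digits that powers of 3 can end in.
possible_digits = {pow(3, k, 10) for k in range(1, 5)}  # {3, 9, 7, 1}
naive_credence = Fraction(1, len(possible_digits))      # 1/4

# Actually running the computation collapses the uncertainty entirely:
actual_digit = pow(3, 1000, 10)  # == 1
```

The toy "uniform over possible digits" rule is ad hoc; the point of the research program the abstract describes is that no principled analogue of probability theory exists yet for this second kind of uncertainty.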

Stuart Russell of UC Berkeley also gave a talk in this session, on the long-term future of AI.