John Fox


John Fox is an interdisciplinary scientist with theoretical interests in AI and computer science, and an applied focus in medicine and medical software engineering. After training in experimental psychology at Durham and Cambridge Universities and post-doctoral fellowships at CMU and Cornell (MRC) in the USA and UK, he joined the Imperial Cancer Research Fund (now Cancer Research UK) in 1981 as a researcher in medical AI. The group's research was explicitly multidisciplinary, and it subsequently made significant contributions in basic computer science, AI and medical informatics, and developed a number of successful technologies that have been commercialised.

In 1996 he and his team were awarded the 20th Anniversary Gold Medal of the European Federation of Medical Informatics for the development of PROforma, arguably the first formal computer language for modeling clinical decisions and processes. Fox has published widely in computer science, cognitive science and biomedical engineering, and was the founding editor of the Knowledge Engineering Review (Cambridge University Press). Recent publications include a research monograph, Safe and Sound: Artificial Intelligence in Hazardous Applications (MIT Press, 2000), which deals with the use of AI in safety-critical fields such as medicine.

Luke Muehlhauser: You’ve spent many years studying AI safety issues, in particular in medical contexts, e.g. in your 2000 book with Subrata Das, Safe and Sound: Artificial Intelligence in Hazardous Applications. What kinds of AI safety challenges have you focused on over the past decade or so?


John Fox: Since my first research job, as a post-doc with AI founders Allen Newell and Herb Simon at CMU, I have been interested in computational theories of high-level cognition. As a cognitive scientist I am interested in theories of a range of cognitive functions, from perception and reasoning through to the uses of knowledge in autonomous decision-making. After I returned to the UK in 1975 I began to combine my theoretical interests with the practical goal of designing and deploying AI systems in medicine.

Since our book was published in 2000 I have been committed to testing the ideas in it by designing and deploying many kinds of clinical systems, and demonstrating that AI techniques can significantly improve the quality and safety of clinical decision-making and process management. Patient safety is fundamental to clinical practice so, alongside the goals of building systems that can improve on human performance, safety and ethics have always been near the top of my research agenda.


Luke Muehlhauser: Was it straightforward to address issues like safety and ethics in practice?


John Fox: While our concepts and technologies have proved to be clinically successful we have not achieved everything we hoped for. Our attempts to ensure, for example, that practical and commercial deployments of AI technologies should explicitly honor ethical principles and carry out active safety management have not yet achieved the traction that we need. I regard this as a serious cause for concern, and unfinished business in both scientific and engineering terms.

The next generation of knowledge-based systems and software agents that we are now working on will be much smarter and have far greater autonomous capabilities than current systems. The challenges this raises for human safety and the ethical use of AI overlap with those posed by the singularity hypothesis. We can learn a great deal from singularity researchers, and perhaps our experience of deploying autonomous agents in human healthcare will in turn offer some opportunities for understanding parts of the singularity debate.


Luke: You write that your “attempts to ensure… [that] practical and commercial deployments of AI technologies should… carry out active safety management” have not yet gained as much traction as you would like. Could you go into more detail on that? What did you try to accomplish that others have not adopted or implemented?


John: Having worked in medical AI since the early seventies, I have always been aware that while AI can help to mitigate the effects of human error, it also has potential downsides. AI systems may be programmed incorrectly, or their knowledge may prescribe practices that are inappropriate, or they may unduly influence the human professionals who remain ultimately responsible for their patients. And although the limitations of human cognition are well documented, people are still the most versatile and creative problem-solvers on the planet.

In the early ‘nineties I had the opportunity to set up a project whose goal was to establish a rigorous framework for the design and implementation of AI systems for safety-critical applications. Medicine was our practical focus but the RED project1 was aimed at the development of a general architecture for the design of autonomous agents that could be trusted to make decisions and carry out plans as reliably and safely as possible, certainly to be as competent and hence as trustworthy as human agents in comparable tasks. This is obviously a hard problem but we made sufficient progress on theoretical issues and design principles that I thought there was a good chance the techniques might be applicable in medical AI and maybe even more widely.

I thought AI was like medicine, where we all take it for granted that medical equipment and drug companies have a duty of care to show that their products are effective and safe before they can be certificated for commercial use. I also assumed that AI researchers would similarly recognize that we have a “duty of care” to all those potentially affected by poor engineering or misuse in safety critical settings but this was naïve. The commercial tools that have been based on the technologies derived from AI research have to date focused on just getting and keeping customers and safety always takes a back seat.

In retrospect I should have predicted that making sure that AI products are safe is not going to capture the enthusiasm of commercial suppliers. If you compare AI apps with drugs we all know that pharmaceutical companies have to be firmly regulated to make sure they fulfill their duty of care to their customers and patients. However proving drugs are safe is expensive and also runs the risk of revealing that your new wonder-drug isn’t even as effective as you claim! It’s the same with AI.

I continue to be surprised how optimistic software developers are – they always seem to have supreme confidence that worst-case scenarios won’t happen, or that if they do happen then their management is someone else’s responsibility. That kind of technical over-confidence has led to countless catastrophes in the past, and it amazes me that it persists.

There is another piece of this, which concerns the roles and responsibilities of AI researchers. How many of us take the risks of AI seriously, so that it forms a part of our day-to-day theoretical musings and influences our projects? MIRI has put one worst case scenario in front of us – the possibility that our creations might one day decide to obliterate us – but so far as I can tell the majority of working AI professionals either see safety issues as irrelevant to the pursuit of interesting scientific questions or, like the wider public, regard the issues as just science fiction.

I think experience in medical AI trying to articulate and cope with human risk and safety may have a couple of important lessons for the wider AI community. First we have a duty of care that professional scientists cannot responsibly ignore. Second, the AI business will probably need to be regulated, in much the same way as the pharmaceutical business is. If these propositions are correct then the AI research community would be wise to engage with and lead on discussions around safety issues if it wants to ensure that the regulatory framework that we get is to our liking!


Luke: Now you write, “That kind of technical over-confidence has led to countless catastrophes in the past…” What are some example “catastrophes” you’re thinking of?


John: Psychologists have known for years that human decision-making is flawed, even if amazingly creative sometimes, and overconfidence is an important source of error in routine settings. A large part of the motivation for applying AI in medicine comes from the knowledge that, in the words of the Institute of Medicine, “To err is human” and overconfidence is an established cause of clinical mistakes.2

Overconfidence and its many relatives (complacency, optimism, arrogance, and so on) have enormous effects on our personal successes and failures, and on our collective future. The outcomes of recent American and British adventures around the world can easily be identified as consequences of overconfidence, and it seems to me that the polarised positions on global warming and planetary catastrophe both express overconfidence, in opposite directions.


Luke: Looking much further out… if one day we can engineer AGIs, do you think we are likely to figure out how to make them safe?


John: History says that making any technology safe is not an easy business. It took quite a few boiler explosions before high-pressure steam engines got their iconic centrifugal governors. Ensuring that new medical treatments are safe as well as effective is famously difficult and expensive. I think we should assume that getting to the point where an AGI manufacturer could guarantee its products are safe will be a hard road, and it is possible that guarantees are not possible in principle. We are not even clear yet what it means to be “safe”, at least not in computational terms.

It seems pretty obvious that entry-level robotic products, like the robots that carry out simple domestic chores or the “nursebots” that are being trialed for hospital use, have such a simple repertoire of behaviors that it should not be difficult to design their software controllers to operate safely in most conceivable circumstances. Standard safety engineering techniques like HAZOP3 are probably up to the job I think, and where software failures simply cannot be tolerated, software engineering techniques like formal specification and model-checking are available.
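
To make the model-checking idea concrete, here is a minimal sketch in Python of an explicit-state checker for a toy “nursebot” controller. It is purely illustrative and not from the interview: the states, transitions and safety property are invented for this example, and real tools that work from formal specifications are far more sophisticated.

```python
# Illustrative only: explicit-state model checking of an invented "nursebot"
# controller.  Each state is a tuple (location, carrying_meds, docked).
from collections import deque

INITIAL = ("base", False, True)

def transitions(state):
    """Enumerate the successor states of the toy controller."""
    location, carrying, docked = state
    succs = []
    if docked:
        succs.append((location, carrying, False))        # undock
    else:
        succs.append((location, carrying, True))          # dock
        for dest in ("base", "pharmacy", "ward"):
            if dest != location:
                succs.append((dest, carrying, False))     # move
        if location == "pharmacy" and not carrying:
            succs.append((location, True, False))         # pick up medication
        if location == "ward" and carrying:
            succs.append((location, False, False))        # deliver medication
    return succs

def safe(state):
    """Invented safety invariant: never docked while still carrying medication."""
    _, carrying, docked = state
    return not (carrying and docked)

def check(initial):
    """Breadth-first exploration of every reachable state; returns a counterexample if found."""
    seen, frontier = {initial}, deque([initial])
    while frontier:
        state = frontier.popleft()
        if not safe(state):
            return False, state
        for nxt in transitions(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return True, None

if __name__ == "__main__":
    ok, bad = check(INITIAL)
    print("safety property holds in all reachable states" if ok
          else f"property violated in reachable state {bad}")
```

Because this toy controller allows docking while still carrying medication, the checker reports a counterexample state – exactly the kind of design flaw one would want such techniques to catch before deployment.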

There is also quite a lot of optimism around more challenging robotic applications like autonomous vehicles and medical robotics. Moustris et al.4 say that autonomous surgical robots are emerging that can be used in various roles, automating important steps in complex operations like open-heart surgery for example, and they expect them to become standard in – and to revolutionize the practice of – surgery. However at this point it doesn’t seem to me that surgical robots with a significant cognitive repertoire are feasible and a human surgeon will be in the loop for the foreseeable future.


Luke: So what might artificial intelligence learn from natural intelligence?


John: As a cognitive scientist working in medicine my interests are co-extensive with those of scientists working on AGIs. Medicine is such a vast domain that practicing it safely requires the ability to deal with countless clinical scenarios and interactions, and even working in a single specialist subfield requires substantial knowledge from other subfields. So much so that it is now well known that even very experienced humans with a large clinical repertoire are subject to significant levels of error.5 An artificial intelligence that could be helpful across medicine will require great versatility, and this will require a general understanding of medical expertise and a range of cognitive capabilities like reasoning, decision-making, planning, communication, reflection, learning and so forth.

If human experts are not safe, is it possible to ensure that an AGI, however sophisticated, will be? I think it is clear that the range of techniques currently available for ensuring that systems are safe may be adequate for making specialist AI systems dependable and for minimising the likelihood of the errors that their human designers can anticipate. However, AI systems with general intelligence will be expected to cope with situations and hazards that we cannot currently manage, and even ones that go beyond what their designers anticipated. I am an optimist, but at the moment I do not see any convincing reason to believe that we have technologies sufficient to guarantee the safety of a clinical superintelligence, let alone an AGI that could be deployed across many domains.


Luke: Thanks, John!


  1. Rigorously Engineered Decisions
  2. Overconfidence in major disasters:

    • D. Lucas. Understanding the Human Factor in Disasters. Interdisciplinary Science Reviews. Volume 17, Issue 2 (01 June 1992), pp. 185-190.
    • “Nuclear safety and security.”

    Psychology of overconfidence:

    • Overconfidence.
    • C. Riordan. Overconfidence Can Make a Fool of You. Forbes Leadership Forum.

    Overconfidence in medicine:

    • R. Hanson. Overconfidence Erases Doc Advantage. Overcoming Bias, 2007.
    • E. Berner, M. Graber. Overconfidence as a Cause of Diagnostic Error in Medicine. The American Journal of Medicine. Volume 121, Issue 5, Supplement, pp. S2–S23, May 2008.
    • Ackerman. Study finds doctors are overconfident, even in the most difficult cases. Houston Chronicle, 2013.

    General technology example:

    • J. Vetter, A. Benlian, T. Hess. Overconfidence in IT Investment Decisions: Why Knowledge can be a Boon and Bane at the same Time. ICIS 2011 Proceedings. Paper 4. December 6, 2011.

  3. Hazard and operability study
  4. Int J Med Robotics Comput Assist Surg 2011; 7: 375–39
  5. A. Ford. Domestic Robotics – Leave it to Roll-Oh, our Fun loving Retrobot. Institute for Ethics and Emerging Technologies, 2014.