Kristinn Thórisson on constructivist AI


Dr. Kristinn R. Thórisson is an Icelandic artificial intelligence researcher, founder of the Icelandic Institute for Intelligent Machines (IIIM) and co-founder and former co-director of CADIA: Center for Analysis and Design of Intelligent Agents. Thórisson has been a proponent of artificial intelligence systems integration; other proponents of this approach are researchers such as Marvin Minsky, Aaron Sloman and Michael A. Arbib. Thórisson is a proponent of Artificial General Intelligence (AGI) (also referred to as strong AI) and has proposed a new methodology for achieving artificial general intelligence. A demonstration of this constructivist AI methodology has been given in the FP-7 funded HUMANOBS project, where an artificial system autonomously learned how to do spoken multimodal interviews by observing humans participate in a TV-style interview. The system, called AERA, autonomously expands its capabilities through self-reconfiguration. Thórisson has also worked extensively on systems integration for artificial intelligence systems in the past, contributing architectural principles for infusing dialogue and human-interaction capabilities into the Honda ASIMO robot.

Kristinn R. Thórisson is currently Managing Director of the Icelandic Institute for Intelligent Machines and an associate professor at the School of Computer Science at Reykjavik University. He is a co-founder of the semantic web startup Radar Networks, and served as its Chief Technology Officer from 2002 to 2003.

Luke Muehlhauser: In some recent articles (1, 2, 3) you contrast "constructionist" and "constructivist" approaches in AI. Constructionist AI builds systems piece by piece, by hand, whereas constructivist AI builds and grows systems largely by automated methods.

Constructivist AI seems like a more general form of the earlier concept of "seed AI." How do you see the relationship between the two concepts?


Kristinn Thorisson: When we describe what we are doing we sometimes use "seed AI," or even "developmental AI," which is often a good thing for a cross-disciplinary research program, because each term brings up different things in people's minds depending on their background. There are subtle differences in the meaning and history of these terms, and each comes with its own pros and cons.

I had been working on integrated constructionist systems for close to two decades, where the main focus was on how to integrate many things into a coherent system. When my collaborators and I started to seriously think about how to achieve artificial general intelligence we tried to explain, among other things, how transversal functions – functions of mind that seem to touch pretty much everything in a mind, such as attention, reasoning, and learning – could efficiently and sensibly be implemented in a single AI system. We also looked deeper into autonomy than I had done previously. This brought up all sorts of questions that were new to me, like: What is needed for implementing a system that can act relatively autonomously *after it leaves the lab*, without the constant intervention of its designers, and is capable of learning a pretty broad range of relatively unrelated things on its own, and of dealing with new tasks, scenarios and environments that were relatively unforeseen by the system's designers?

My Constructionist Design Methodology (CDM) was conceived over a decade ago as a way to help researchers build big *whole* systems integrating a large number of heterogeneous cognitive functions. In the past 10 years the CDM had already proven excellent for building complex advanced systems – from AI architectures for interactive agents such as the Honda ASIMO humanoid robot to novel economic simulations. Since it combines a methodology with a software system for implementing large distributed complex systems with heterogeneous components and data, we naturally started by asking how the CDM could be extended to address the above issues. But no matter how I tried to tweak and re-design this framework/methodology, there seemed to be no way to do so. Primarily due to my close collaboration with Eric Nivel, I soon saw that the CDM could not address the issues at hand. But it went further than that: It wasn't only the CDM but *all methodology of that kind* that was problematic, and it wasn't simply 'mildly lacking' in power, or 'suboptimal', but in fact *grossly insufficient* – along with the underlying assumptions that our past research approaches were based on, as imported relatively wholesale from the field of computer science. As the CDM inherited all the limitations of the existing software and engineering methodologies that are commonly taught in universities and used in industry, no methodology existed, to the best of our knowledge, that could move us toward AGI at what I considered an acceptable speed.

A new methodology was needed. And since we could see so clearly that the present allonomic methodologies – methods that assume a designer outside the system – are essentially 'constructionist', putting the system designer/researcher in the role of a construction worker, where each module/class/executable is implemented by hand by a human – our sights turned to self-constructive systems, producing the concept of constructivism. A self-constructive system is capable of bootstrapping itself to some extent, in a new environment, and of learning new tasks that its designer did not anticipate. Such a system must of course be supplied with a "seed", since without a seed there can be no growth, and the implication is then that the system develops on its own, possibly going through cognitive stages in the process. What we do is therefore seed AI, developmental AI, and constructivist AI. The principal concept here is that there are self-organizing principles at play, such that the system-environment couple allows the AI to grow in a reasonably predictable way from a small seed, according to the drives (top-level goals) it was given in the beginning. I had been introduced to Piaget's ideas in my early career, and the concept of constructivism seemed to me to capture the idea very well.

What we do is *constructivism*, which may or may not overlap with how others use that term – the association with Piaget's work is at an abstract level, as a nod in his direction. One important difference with how others use the term, as far as I can see, is that while we agree that intelligent systems must be able to acquire their knowledge autonomously (as was Piaget's main point) our emphasis is on *methodology*: We have very strong reasons to believe that at a high level there are (at least) two *kinds* of methodologies for doing AI, which we could call 'constructionist' and 'constructivist'. Our hypothesis is that only if you pick the latter will you have a shot at producing an AGI worthy of the "G". And at present, *all* the approaches proposed in AI, from subsumption to GOFAI, from production systems to reasoning systems to search-and-test, from BDI to sub-symbolic – whatever they are called and however you slice the field and its methodological and philosophical approaches – are of the constructionist kind. Our constructivist AI methodology – CAIM – is our current proposal for breaking free from this situation.


Luke: What is the technical content of CAIM, thus far?


Kris: As a methodology a great deal of the CAIM is perhaps closer to philosophy than tech-speak – but there are some fairly specific implications as well, which logically result from these more general concerns. Let’s go from the top down. I have already mentioned where our work on CAIM originated; where the motivation for a new methodology came from: We asked ourselves what a system would need to be capable of to be more or less (mostly more) independent of its designer after it left the lab – to be more or less (mostly more) *autonomous*. Clearly the system would then need to take on *at least* all the tasks that current machine learning and cognitive architectures require their designers to do after they have been implemented and released – but probably a lot more too. The former is a long list of things such as identifying worthy tasks, identifying and defining the necessary and sufficient inputs and outputs for tasks, training for a new task, and more. The latter – the list of *new* features that such a system would need and which virtually no system to date deals with – includes e.g. how to re-use skills (transfer of knowledge), how to do ampliative reasoning (unified deduction, induction, abduction), how to identify the need for sub-goal generation, how to properly generate sub-goals, etc., and ultimately: how to evaluate one’s methods for doing all of this and improve them. Obviously none of this is trivial.

As a result we put forward some high-level principles, the first of which I will mention is the need to design cognitive architectures *holistically*. This is much more difficult than it sounds, which is why nobody has really wanted to take it on, and why computer science has so far gotten away with ignoring it. But it is necessary due to the nature of complex systems that combine numerous functions through a large number of heterogeneous interaction links: such systems behave in very complex ways when you perturb them, and the same holds if you try to discover their workings via standard experimental designs, by tweaking x and observing the effect, tweaking y and observing again, etc. As famously noted by Newell, you can't play 20 questions with nature and win, in his paper with that title. When trying to understand how to build a system with a lot of complex interacting functions ('function' having the general meaning, not the mathematical one) you must take all the major factors, operations and functions into account from the outset, because if you leave any of them out the whole thing may in fact behave like a different (inconsistent, dysfunctional) system entirely. One such thing that typically is ignored – not just in computer science but in AI as well – is time itself: In the view of CAIM, you cannot and must not ignore such a vital feature of reality, as time is in fact one of the key reasons why intelligence exists at all. At the high level CAIM tells you to make a list of the *most* important features of (natural) intelligences – including having to deal with time and energy, but also with uncertainty, lack of processing power, lack of knowledge – and from this list you can derive an outline of the requirements for your system.

Now, turning our attention to the lower levels, one of the things we – and others – realized is that you need to give a generally intelligent system a way to inspect its own operation, to make it capable of *reflection*, so that it can monitor its own progress as it develops its processes and skills. There are of course programming languages that allow you to implement reflection – Lisp and Python being two examples – but all of these are severely lacking in other aspects important to our quest for general intelligence, a primary one being that they do not make time a first-class citizen. This is where adoption of CAIM steers you in a somewhat more technical direction than many other methodologies: It proposes new principles for programming such reflective systems, where time is at the core of the language's representation, and the granularity of an "executable semantic chunk" must be what we refer to as "pee-wee size": small enough that its execution time is highly consistent and predictable, and flexible enough that larger programs can be built up from such chunks. We have built one proto-architecture with this approach, the Autocatalytic Endogenous Reflective Architecture (AERA). These principles have carried us very far in that effort – much further than I would have predicted based on my experience building and re-building other software architectures – and it has been a pleasant surprise how easy it is to expand the current framework with more features. It really feels like we are on to something. To take an example, the concept of curiosity was not a driving force or principle of our efforts, yet when we tried to expand AERA to incorporate such functionality at its core – in essence, the drive to explore one's acquired knowledge, to figure out "hidden implications" among other things – it was quite effortless and natural. We are seeing very similar things – although this work is not quite as far along yet – with implementing advanced forms of analogy.
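To make the "pee-wee size" idea a bit more concrete, here is a minimal illustrative sketch in Python (chosen only because it is one of the reflective languages mentioned above). The names – PeeWeeChunk, expected_duration, compose – are hypothetical, and this is not how AERA is actually implemented; it only sketches the idea of small executable units that carry explicit timing information and can be inspected by the system that runs them.

```python
import time
from dataclasses import dataclass, field
from typing import Any, Callable

@dataclass
class PeeWeeChunk:
    """A small executable unit whose execution time is meant to be highly
    consistent and predictable (purely illustrative, not AERA's representation)."""
    name: str
    body: Callable[[dict], Any]        # the operation itself
    expected_duration: float           # seconds; part of the chunk's description of itself
    history: list = field(default_factory=list)  # time-stamped execution records, open to reflection

    def run(self, state: dict) -> Any:
        start = time.monotonic()
        result = self.body(state)
        elapsed = time.monotonic() - start
        # Time as a first-class citizen: every execution is time-stamped and recorded.
        self.history.append({"start": start, "elapsed": elapsed})
        return result

    def timing_is_predictable(self, tolerance: float = 0.5) -> bool:
        """Reflection: the system can inspect its own timing behavior."""
        return all(abs(h["elapsed"] - self.expected_duration)
                   <= tolerance * self.expected_duration for h in self.history)

def compose(chunks: list, state: dict) -> dict:
    """Larger programs are built from pee-wee chunks, so their timing can be
    estimated from the (predictable) timing of their parts."""
    for chunk in chunks:
        state[chunk.name] = chunk.run(state)
    return state
```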


Luke: Among AI researchers who think regularly not just about narrow AI applications but also about the end-goal of AGI, I observe a divide between those who think about the problem in a "top-down" way and those who think about it in a "bottom-up" way. You have described a top-down approach: think about what capabilities an AGI would need to have, and then backward-chain from there to figure out which sub-capabilities you should work toward engineering so as to eventually reach AGI. Others think a bottom-up approach may be more productive: just keep extending and iterating on the most useful techniques we know today (e.g. deep learning), and that will eventually get us to AGI via paths we couldn't have anticipated if we had tried to guess what was needed from a top-down perspective.

Do you observe this divide as well, or not so much? If you do, then how do you defend the efficiency and productivity of your top-down approach to those who favor bottom-up approaches?


Kris: For any scientific goal you may set yourself you must think about the scope of your work, the hopes you have for finding general principles (induction is, after all, a key tenet of science), and the time it may take you to get there, because this has an impact on the tools and methods you choose for the task. Like in any endeavor, it is a good idea to set yourself milestones, even when the expected time for your research may be years or decades – some would say that is an even greater reason for putting down milestones. We could say that CAIM addresses the top-down and middle-out in that spectrum: First, it helps with assessing the scope of the work by highlighting some high-level features of the phenomenon to be researched / engineered (intelligence), and proposing some reasons for why one approach is more likely to succeed than others. Second, it proposes mid-level principles that are more likely to achieve the goals of the research program than others – such as reflection, system-wide resource management, and so on. With our work on AERA we have now a physical incarnation of those principles, firmly grounding CAIM in a control-theoretic context.

The top-down / bottom-up dimension is only one of many that have an important history in the AI community – symbolic versus connectionist, narrow-and-deep versus broad-and-shallow, a-few-key-principles versus a hodgepodge of techniques ("the brain is a hack"), must-look-like-the-brain versus anything-that-can-be-engineered, and so on. All of these vary in their utility for classifying people's views, and as for their importance to the subject matter, we can safely say that some of them matter less than others. But most of them are like outdated political categories: they lack the refinement, detail and precision to really help us think. To my mind, the most important thing about the top-down versus bottom-up divide is that a bottom-up approach without any scope, direction, or proto-theory is essentially no different from blind search, while a top-down approach without any empirical grounding is philosophy, not science. Neither extreme is necessarily bad, but let's not confuse the one with the other, or either with informed scientific research. Most of the time reality falls somewhere in between.

Of all the possible approaches, blind search is just about the most time-consuming and least promising way to do science. Some would in fact argue that it is for all practical purposes impossible. Einstein and Newton did not come up with their theories through blind, bottom-up search; they formulated a rough guideline in their heads about how things might hang together, and then they "searched" the very limited space of possibilities thus carved out. You could call these proto-theories, meta-theories, high-level principles, or assumptions: the guiding principles that a researcher has in mind when he/she tries to solve unsolved problems and answer unanswered questions. In theory it is possible to discover how complex things work by simply studying their parts. But however you slice this, eventually someone must put the descriptions of these isolated parts together, and if the system you are studying is greater than the sum of its parts, well, then someone must come up with the theory for how and why they fit together like they do.

When we try to learn about intelligence by studying the brain, this is essentially what we get: it is one of the worst cases of the curse of holism – that is, when there is no theory or guiding principles the search is more or less blind. If the system you are studying is large (the brain/mind is) and has principles operating on a broad range of timescales (the brain/mind does) based on a multitude of physical principles (like the mind/brain does) then you will have a hell of a time putting all of the pieces together, for a coherent explanation of the macroscopic phenomenon you are trying to figure out, when you have finished studying the pieces you originally chose to study in isolation. There is another problem that is likely to crop up: How do you know when you have figured out *all* the pieces when you don't really know what the pieces are? So – you don't know when to stop, you don't know how to look, and you don't know how to put your pieces together into sub-systems. The method is slowed down even further because you are likely to get sidetracked, and worse, you don't actually know when you are sidetracked because you couldn't know up front whether the sidetrack is actually a main track. For a system like the mind/brain – of which intelligence is a very holistic emergent property – this method might take centuries to deliver something along the lines of explaining intelligence and human thought.

This is why methodology matters. The methodology you choose must be examined for its likelihood of helping you reach your research goals – of helping you answer the questions you hope to answer. In AI, many people don't seem to care. They may be interested in general intelligence, human-like intelligence, or some flavor thereof, but they pick the nearest available methods that the computer science community has produced in recent decades – and cross their fingers. Then they watch AI progress decade after decade and feel there are clear signs of advancement: in the 90s it was Deep Blue, in the 00s robotic vacuum cleaners, in the 10s IBM Watson. And they think to themselves: "Yes, we'll get there eventually – we are making sure and steady progress." It's like the Larson joke with the cows practicing pole-vaulting, one of them shouting: "Soon we'll be ready for the moon!"

Anyway, to get back to the question, I do believe in the dictum “whatever works” – i.e. bottom-up, top-down, or a mix – if you have a clear idea of your goals, have made sure you are using the best methodology available, for which you must have some idea of the nature of the phenomenon you are studying, and take steps to ensure you won’t get sidetracked too much. If no methodology exists that promises to get you to your final destination you must define intermediate goals, which should be based on rational estimates of where you think the best available methodology is likely to land you. As soon as you find some intermediate answers that can help you identify what exactly are the holes in your methodology you should respond in some sensible way, by honing it or even creating a brand new one; whatever you do, by all means don’t simply fall so much in love with your (in all likelihood, inadequate) methodology that you give up on your original goals, like much of the AI community seems to have done!

In our case what jerked us out of the old constructionist methodology was the realization that to get to general intelligence you'd have to have a system that could more or less self-bootstrap, otherwise it could not handle what we humans refer to as *new* situations, tasks, or environments. Self-bootstrapping requires introspection and self-programming capabilities, otherwise your system will not be capable of cognitive growth. Thorough examination of these issues made it clear that we needed a new methodology, and a new top-level proto-theory, that allowed us to design and implement a system with such features. It is not known at present how exactly these features are implemented in either human or animal minds, but this was one of the breadth-first items on our "general intelligence requires" list. Soon following this came the realization that it's difficult to imagine a system with those features that doesn't have some form of attention – we also call it resource management – and a very general way of learning pretty much anything, including about its own operation.

This may seem like an impossible list of requirements to start with, but I think in our favor is the "inventor's paradox": Sometimes piling on more constraints makes what used to seem complex suddenly simpler. We started to look for ways to create the kind of controller that was amenable to being imbued with those features, and we found one by taking a 'pure engineering' route: We don't limit ourselves to the idea that "it must map to the way the brain (as it seems to us now) does it", or any other such constraint, because we put engineering goals first, i.e. we targeted creating something with potential for practical applications. Having already obtained very promising results that go far beyond the state of the art in machine learning, we are still exploring how far this new approach will take us.

So you see, even though my concerns may seem to be top-down, there is much more to it, and my adoption of a radically different top-level methodology has much more to do with clarifying the scope of the work, trying to set realistic goals and expectations, and going from there, looking at the building blocks as well as the system as a whole – and creating something with practical value. In one sentence, our approach is somewhat of a simultaneous "breadth-first" and "top-to-bottom" – all at once. Strangely enough, this paradoxical and seemingly impossible approach is working quite well.


Luke: What kinds of security and safety properties are part of your theoretical view of AGI? E.g. MIRI's Eliezer Yudkowsky seems to share your broad methodology in some ways, but he emphasizes the need for AGI designs to be "built from the ground up" for security and safety, like today's safety-critical systems are — for example autopilot software that is written very differently from most software so that it is (e.g.) amenable to formal verification. Do you disagree with the idea that AGI designs should be built from the ground up for security and safety, or… what's your perspective on that?


Kris: I am a strong proponent of safety in applying scientific knowledge to all areas of life on this planet. Knowledge is power; scientific knowledge can be used for evil – I think everyone agrees on that. And because there is so much we don't know and understand, caution should in my opinion be a natural ingredient in any application of scientific knowledge in society. Sometimes we have a very good idea of the technological risks, while suspecting certain risks in how the technology will be managed by people, as is the case with nuclear power plants, and sometimes we really understand neither the technological implications nor the social management processes, as when genetically engineered self-replicating systems (e.g. plants) are released into the wild – the number of potential interactions of such a technology with the myriad existing biological systems out there that we don't understand is staggering, and the outcome thus impossible to predict. Since there is generally no way for us to grasp even a tiny fraction of the potential implications of releasing e.g. a self-replicating agent into the wild, genetic engineering is a greater potential threat to our livelihood than nuclear power plants. However, both have associated dangers, and both have their pros and cons.

Some people have suggested banning certain kinds of research, or the exploration of certain avenues and questions, to directly block off the possibility of creating dangerous knowledge in the first place. The argument goes: if no one knows it, it cannot be used to do harm. This purported solution is not practical, however, as the research avenue in question must be blocked everywhere to be effective. Even if we could instantiate such a ban in every country on Earth, compliance could be difficult to ensure. And since people are notoriously bad at foreseeing which avenues of research turn out to bring benefits, a far better approach is to give scientists the freedom to select the research questions they want to try to answer – as long as they observe general safety measures, of course, as appropriate to their field of inquiry. Encouraging disclosure of research results funded by public money, e.g. the European competitive research grants, NIH grants, etc., is a sensible step to help ensure that knowledge does not sit exclusively within a small group of individuals, which would generally increase the opportunity for (mis)use in favor of one group of people over another.

Rather than banning the pursuit of certain research questions, the best way to deal with the dangers resulting from knowledge is to focus on its *application*: making, for instance, the use of certain explosives illegal or strictly conditional, making production facilities for certain chemicals, uranium, etc. conditional on the right permits, regulation, inspection, and so on. This may require strong governmental monitoring and supervision, and effective law enforcement, which has associated costs, but this approach has built-in transparency and has already proven practical.

I don't think artificial intelligence is at the maturity stage of either nuclear power or genetically engineered organisms. The implications of applying AI in our lives are thus fairly far from the kinds of dangers posed by either of those, or comparable technologies. The dangers of applying current and near-future AI in some way in society are of the same nature as the dangers inherent in firearms, power tools, explosives, armies, computer viruses, and the like. Current AI technology can be used (and misused) in acts of violence, in breaking the law, for invasion of privacy, or for violating human rights and waging war. If the knowledge of how to use AI technology is distributed unevenly among arguing parties, e.g. those at war – even the technology available now – it could give the knowledgeable party an upper hand. But knowledge and application of present-day AI technology is unlikely to count as anything other than one potential make-or-break factor among many. That being said, of course this may change, even radically, in the coming decades.

My collaborators and I believe that as scientists we should take any sensible opportunity to ensure that our own research results are used responsibly. At the very least we should make the general populace, political leaders, etc., aware of any potential dangers that we believe new knowledge may entail. I created a software license to this end, which I prefer to attach to any and all software that I make available, stating that the software may not be used for invasion of privacy, violation of human rights, causing bodily or emotional distress, or for purposes of committing or preparing for any act of war. The clause, called the CADIA Clause (after the AI lab I co-founded), can be appended to any software license by anyone – it is available on the CADIA website. As far as I know it is one of very few, if not the only one, of its kind. It is a clear and concise ethical statement on these matters. While seemingly a small step in the direction of ensuring the safe use of scientific results, it is to my mind quite odd that more such statements and license extensions don't exist; one would think that large groups of scientists all over the planet would already be taking steps in this direction.

Some have speculated – famously among them the astrophysicist Stephen Hawking – that future AI systems, especially those with superhuman cognitive capabilities, may well pose a greater threat to humanity than any invention or scientific knowledge to date. The argument goes like this: since superhuman AIs must be able to generate sub-goals automatically, and since the goals of superhuman AIs will of course not be hand-coded (unlike virtually all software created to date, including all AI systems in existence), we have no direct control over the sub-goals they may generate; therefore we cannot ensure that they will behave in a safe, sensible, or in any way predictable manner. This kind of argument may have some relevance to certain approaches to system development currently being pursued. However, I think – based on the evidence I have seen so far – that the fear stems from what I would call reasonable induction based on incorrect premises. Current approaches in AI (other than the one my group uses) produce software of the same nature as the operating systems on our laptops and phones: a hand-crafted artifact with no built-in resilience to perturbations, largely brittle in the face of unforeseen and unfamiliar inputs, with no self-management mechanisms, no capacity for cognitive growth, and so on – and, given how such software is built, that is how it must be. The point is that existing methods for building any kinds of safeguards into these systems are very primitive, to say the least, along the lines of the "safeguards" built into nuclear power plant software: These are safeguards invented and implemented by human minds, and they are limited by the humans who design them. And as we well know, it is difficult for us to truly trust systems built this way. So people tend to think that future superhuman AIs will inherit this trait. But I don't think so, and my collaborators and I are working on an alternative and pretty interesting new premise for speculating about the nature of future superhuman intelligences, and their inherent pros and cons.

Although our approach has many low-level features in common with organic processes, it is based on explicit deterministic logic rather than on largely impenetrable sub-symbolic networks. It does not suffer from the same unpredictability as, say, a new genetically engineered plant released into the wild, or an artificial neural net trained on only a small fraction of what it will need when deployed. Our system's knowledge (I am talking now about AERA) grows through its interaction with the environment, under the guidance of the top-level goals or drives given to it by its programmers. It has built-in self-correction mechanisms that go far beyond anything implemented in everyday software systems, and even beyond anything in the laboratory that falls into the category of state-of-the-art "autonomous systems". Our system is capable of meta-level operations, based on hand-coded meta-goals and on self-organizing principles that are very different from what has been done before. In our approach, autonomous systems can be given the kind of high-level guidance we see in biological systems, of the sort that helps them survive. Turned "upside down", these same mechanisms produce the inverse of self-preservation at all costs – a kind of preservation of the environment that keeps such systems conservative and trustworthy, to a degree that no nuclear power plant, or genetically engineered organism released into the wild, can reach with current engineering methods. So we may have invented not only the first seed-based AI system, but possibly also a new paradigm for ensuring the predictability of self-expanding AIs, because in our view the concerns raised by more pessimistic researchers do not apply to our work. That being said, I should emphasize that we are in the middle of this research, and although we have in our hands what appears to be a predictable, self-managing, autonomous system, there is still much work to be done in exploring these and other related questions of importance. Whether our system can reach superhuman, or even human, levels of intelligence is completely unclear – most would probably say that our chances are slim, based on progress in AI so far, which would be a fair assessment. But it cannot be completely precluded at this stage. The software resulting from our work, by the way, is released under a BSD-like CADIA Clause license.


Luke: You write that "the fear [expressed by Hawking and others] stems from… incorrect premises." But I couldn't follow which incorrect premises you were pointing to. Which specific claim(s) do you think are incorrect?


Kris: Keep in mind that this discussion is still highly speculative; there are many gaps in our knowledge that must be filled before we can properly imagine the superhuman intelligences we think may come to life in the future.

One fundamental and incorrect premise is the assumption that the systems necessary and sufficient for implementing superhuman intelligence will be cursed with the same limitations and problems as those produced by today's methodologies.

The allonomic methodologies used for all software running on our devices today produce systems that are riddled with problems, primarily fragility, brittleness, and unpredictability, stemming from their strict reliance on allonomically infused semantics, that is, their operational semantics coming strictly from outside of the system – from the human designer. This results in system unpredictability of two kinds.

First, large complex systems designed and written by hand are bound to contain mistakes in both design and in implementation. Such potential inherent failure points, most of which have to do with syntax rather than semantics, will only be seen if the system itself is in a particular state in the context of a particular environmental state where it is operating. And since these points of failure can be found at any level of detail – many of them will in fact be at very low levels of detail – the values of the system-environment state pair may be very specific, and thus the number of system state – environment state failure pairs may be enormous. To ensure reliability of a system of this nature our only choice is to expose it to every potential environmental state it may encounter, which for a complex system in a complex environment is prohibitive due to the combinatorial explosion. We do this for airplanes and other highly visible and obviously fatal systems, but for most software this is not only cost prohibitive but virtually impossible. In fact, we cannot predict beforehand all the ways an allonomic system may fail, partly because the system's fragility is so much due to syntactic issues, which in turn are an unavoidable side effect of any allonomic methodology.
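As a back-of-the-envelope illustration of why exhaustive testing quickly becomes prohibitive, here is a tiny sketch; the state counts are made-up numbers, chosen only to show the scale of the combinatorial explosion:

```python
# Hypothetical, made-up state counts purely for illustration.
system_states = 10**4        # distinct internal states the hand-built system can be in
environment_states = 10**5   # distinct environmental states it may encounter

# Ruling out every potential failure point by testing alone would require
# exercising every (system state, environment state) pair:
pairs_to_test = system_states * environment_states
print(f"{pairs_to_test:,} pairs to test")   # 1,000,000,000 – already prohibitive,
# and real systems have vastly more states than this toy example assumes.
```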

The other kind of unpredictability also stems from exogenous operational semantics, the fact that the runtime operation of the system is a "blind" one and hence the achievement of the system's goal(s) is rendered inherently opaque and impenetrable to the system itself. A system that cannot analyze how it achieves its goals cannot propose or explore possible ways of improving itself. Such systems are truly blindly executed mechanical algorithms. If the software has no sensible, robust way to self-inspect – as no hand-written constructionist system to date can, since their semantics are strictly exogenous – it cannot create a model of itself. Yet a self-model is necessary for a system if it is to continuously improve in achieving its highest-level goals; in other words, self-inspection is a major way to improve the coherence of the system's operational semantics.

Thus, self-inspection and self-modeling can improve system predictability at both the semantic and syntactic levels. Autonomous knowledge acquisition – constructivist style knowledge creation, as opposed to hand-coded expert-system style – coupled with self-modeling ensures that the system's operational semantics are native to the system itself, bringing its operation to another level of meaningfulness not yet seen in any modern software.
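As a toy contrast to a "blindly executed" program, the sketch below shows an agent that records its own goal-pursuit attempts and keeps a crude model of its own methods, which it can then use to improve. All names (SelfInspectingAgent, pursue_goal, etc.) are hypothetical; this is not AERA's mechanism, only an illustration of why self-inspection is a prerequisite for self-improvement:

```python
import random
from collections import defaultdict

class SelfInspectingAgent:
    """Toy illustration: the agent observes its own operation and builds a
    crude self-model (which of its methods tend to achieve the goal)."""

    def __init__(self, methods):
        self.methods = methods  # name -> callable returning True on success
        self.self_model = defaultdict(lambda: {"tries": 0, "successes": 0})

    def pursue_goal(self):
        # Prefer the method the self-model currently rates highest (plus a
        # little noise so every method gets tried occasionally).
        name = max(self.methods,
                   key=lambda m: self._estimated_success(m) + random.random() * 0.1)
        success = self.methods[name]()
        # Self-inspection: record how the goal was pursued and how it went.
        record = self.self_model[name]
        record["tries"] += 1
        record["successes"] += int(success)
        return name, success

    def _estimated_success(self, name):
        r = self.self_model[name]
        return r["successes"] / r["tries"] if r["tries"] else 0.5

# Usage: two candidate methods with different (initially unknown) reliability.
agent = SelfInspectingAgent({
    "method_a": lambda: random.random() < 0.3,
    "method_b": lambda: random.random() < 0.8,
})
for _ in range(50):
    agent.pursue_goal()
print(dict(agent.self_model))  # the agent's model of its own operation
```

A hand-written program without the recording and estimation steps has no basis for proposing improvements to its own operation, which is the point being made above.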

This is how all natural intelligences operate. Because you have known your grandmother your whole life you can predict, with a given certainty, that she would not rob a bank, and that if she did, she would be unlikely to harm people unnecessarily while doing it, and however unlikely, you can conjure up potential explanations why she might do either of those, e.g. if she were to “go crazy”, the likelihood of which can in part be predicted by her family history, medication, etc.: The nature of the system referred to as “your grandmother” is understood in the historical and functional context of systems like it – that is, other humans – and in light of her history as an individual.

Modern software systems are not like that. And because we have never really seen such artificial systems, it is hard for us to imagine that such software could exist: we have not really seen any good demonstration of meta-control or self-organization in an artifact, or of a seed-AI system with explicit top-level goals. So we may be inclined to think that a superhuman artificial intelligence would be like a deranged human, with all the perils of modern software and perhaps new ones on top. Future software with a sense of self would of course still be software, but it would be neither like a "black-box alien" arriving on Earth from outer space nor like a crazed human with brilliant but twisted ideas, because we could – unlike with humans – open the hood and look inside. And the insides are unlikely to look much like modern software, because they will operate on entirely different principles. So rather than behaving like a delusional madman, or a power-hungry dictator eager to secure its own power and survival, or a fully independent and autonomous self-preserving entity, future superhuman software may be more like an autonomous hammer: a next-generation tool with an added layer of possible constraints, guidelines, and limitations, that gives its human designers yet another level of control over the system, one that allows them to predict the system's behavior at lower levels of detail, and more importantly at much higher levels, than can be done with today's software.


Luke: I'm not sure Hawking et al. are operating under that premise. Given their professional association with organizations largely influenced by the Bostrom/Yudkowsky lines of thought on machine superintelligence, I doubt they're worried about AGIs that are like "deranged humans with all the perils of modern software" — instead, they're probably worried about problems arising from "Five Theses"-style reasons (which also motivate Bostrom's forthcoming Superintelligence). Or do you think the points you made above undercut that line of reasoning as well?


Kris: Yes, exactly.


Luke: Thanks, Kris!