A reply to Francois Chollet on intelligence explosion

This is a reply to Francois Chollet, the inventor of Keras, a wrapper for the Tensorflow and Theano deep learning systems, on his essay "The impossibility of intelligence explosion."

In response to critics of his essay, Chollet tweeted:

If you post an argument online, and the only opposition you get is braindead arguments and insults, does it confirm you were right? Or is it just the self-selection of those who argue online?

He earlier tweeted:

Don't be overly attached to your views; some of them are likely incorrect. An intellectual superpower is the ability to consider every new idea as though it might be true, rather than merely checking whether it confirms or contradicts your current views.

Chollet’s essay seemed mostly on-point and kept to the object-level arguments. I am led to hope that Chollet is perhaps somebody who believes in abiding by the rules of a debate process, a fan of what I’d consider Civilization; and if his entry into this conversation has been met only with braindead arguments and insults, he deserves a better reply. I’ve tried here to walk through some of what I’d consider the standard arguments in this debate as they bear on Chollet’s statements.

As a meta-level point, I hope everyone agrees that an invalid argument for a true conclusion is still a bad argument. To arrive at the correct belief state we want to sum all the valid support, and only the valid support. To tally up that support, we need to have a notion of judging arguments on their own terms, based on their local structure and validity, and not excusing fallacies if they support a side we agree with for other reasons.

My reply to Chollet doesn't try to carry the entire case for the intelligence explosion. I will only be discussing my take on the validity of Chollet's particular arguments. Even if the conclusion "an intelligence explosion is impossible" were true, we still shouldn't accept any invalid arguments in support of it.

Without further ado, here are my thoughts in response to Chollet.

The basic premise is that, in the near future, a first "seed AI" will be created, with general problem-solving abilities slightly surpassing that of humans. This seed AI would start designing better AIs, initiating a recursive self-improvement loop that would immediately leave human intelligence in the dust, overtaking it by orders of magnitude in a short time.

I agree that this is more or less what I meant by "seed AI" when I coined the term in 1998. Today, nineteen years later, I would talk about the general problem of "capability gain," or how the power of a cognitive system scales with increased resources and further optimization. Recursive self-improvement is only one input into the general question of capability gain. For example, we have recently seen some impressively fast scaling of capabilities without anything I would consider seed AI being involved. That said, I think many of Chollet's questions about "self-improvement" bear on capability gain more generally, so I won't object to the topic of conversation.

Proponents of this theory also regard intelligence as a kind of superpower, conferring its holders with almost supernatural capabilities to shape their environment

A good description of a human from the perspective of a chimpanzee.

From a certain standpoint, the civilization of the year 2017 could be said to have “magic” from the perspective of 1517. We can more precisely characterize this gap by saying that we in 2017 can solve problems using strategies that 1517 couldn’t recognize as a “solution” if described in advance, because our strategies depend on laws and generalizations not known in 1517. E.g., I could show somebody in 1517 a design for a compressor-based air conditioner, and they would not be able to recognize this as “a good strategy for cooling your house” in advance of observing the outcome, because they don’t yet know about the temperature-pressure relation. A fancy term for this would be “strong cognitive uncontainability”; a metaphorical term would be “magic,” though of course nothing we do is actually supernatural. There is a similar but larger gap between humans and smaller brains (aka chimpanzees).

It’s not exactly unprecedented to suggest that big gaps in cognitive ability correspond to big gaps in pragmatic capability to shape the environment. I think a lot of people would agree in characterizing intelligence as the Human Superpower, independently of what they thought about the intelligence explosion hypothesis.

- as seen, for instance, in the science-fiction movie Transcendence (2014).

I agree that public impressions of things are things that someone ought to be concerned about. If I take a ride-share and I mention that I do anything involving AI, half the time the driver says, “Oh, like Skynet!” This is an understandable reason to be annoyed. But if we’re trying to figure out the sheerly factual question of whether an intelligence explosion is possible and probable, it’s important to consider the best arguments on all sides of all relevant points, not the popular arguments. For that purpose it doesn’t matter if Deepak Chopra’s writing on quantum mechanics has a larger readership than any actual physicist.

Thankfully, Chollet doesn't particularly go after Kurzweil in the rest of his essay, so I'll leave it at that.

The intelligence explosion narrative equates intelligence with the general problem-solving ability exhibited by individual intelligent agents (currently human brains, or future electronic brains).

I don’t see what work the word “individual” is doing within this sentence. From our perspective, it matters little whether a computing fabric is imagined to be a hundred agents or a single agency, if it seems to behave in a coherent goal-directed way as seen from outside. The pragmatic consequences are the same. I do think it’s fair to say that I think about “agencies” which from our outside perspective seem to behave in a coherent goal-directed way.

The first issue I see with the intelligence explosion theory is a failure to recognize that intelligence is necessarily part of a broader system - a vision of intelligence as a "brain in a jar" that can be made arbitrarily intelligent independently of its situation.

I am not aware of myself, Nick Bostrom, or other major technical voices in this field having claimed that problem-solving can be done independently of the situation/environment.

That said, some systems function very well in a broad variety of structured low-entropy environments. E.g. the human brain functions much better than other primate brains in an extremely broad set of environments, including many that natural selection did not explicitly optimize for. We remain functional on the Moon, because the Moon has enough in common with the Earth on a sufficiently deep meta-level that, for example, past experience goes on functioning there. Now if you tossed us into a universe where the future bore no compactly describable relation to the past, we would indeed not do very well in that “situation”—but this is not pragmatically relevant to the impact of AI on our own real world, where the future does bear a relation to the past.

In particular, there is no such thing as "general" intelligence. On an abstract level, we know this for a fact via the "no free lunch" theorem, which states that no problem-solving algorithm can outperform random chance across all possible problems.

Scott Aaronson's response: “Citing the ‘No Free Lunch Theorem’—i.e., the (trivial) statement that you can’t outperform brute-force search on random instances of an optimization problem—to claim anything useful about the limits of AI, is not a promising sign.”

It seems worth spelling out a simple special case of this point in mathematical detail, since it looks like a central issue given the rest of Chollet's essay. I expect the math is not new to Chollet, but I rehearse it here to establish a common language, and for the benefit of everyone else reading.

Laplace's Rule of Succession, as invented by Thomas Bayes, gives us a simple rule for predicting future elements of a binary sequence from previously observed elements. Let's take this binary sequence to be a series of "heads" and "tails" produced by some sequence generator called a "coin," which may or may not be fair. In the standard problem setup that yields the Rule of Succession, our state of prior ignorance is that we think the coin has some frequency \(\theta\) of coming up heads, and that for all we know, \(\theta\) is equally likely to take on any real value between \(0\) and \(1\). We can then do some Bayesian inference and conclude that after seeing \(M\) heads and \(N\) tails, we should predict the odds of heads : tails on the next coinflip to be:

$$\frac{M + 1}{M + N + 2} : \frac{N + 1}{M + N + 2}$$

(See Laplace's Rule of Succession for the proof.)

This rule yields answers such as: "If you have not yet observed any coinflips, assign 50-50 to heads and tails," or "If you have seen four heads and no tails, assign 1/6 probability rather than 0 probability to the next flip being tails," or "If you have seen the coin come up heads 150 times and tails 75 times, assign roughly 2/3 probability to the coin coming up heads next time."
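To make the rule concrete, here is a minimal sketch in Python (my own illustration, not part of the original discussion) that reproduces the example predictions above:

```python
from fractions import Fraction

def laplace_heads_probability(heads, tails):
    """Laplace's Rule of Succession: probability that the next flip is heads,
    after observing `heads` heads and `tails` tails, starting from a uniform
    prior over the coin's unknown bias theta."""
    return Fraction(heads + 1, heads + tails + 2)

print(laplace_heads_probability(0, 0))        # 1/2: no flips observed yet
print(1 - laplace_heads_probability(4, 0))    # 1/6: chance of tails after 4 heads, 0 tails
print(laplace_heads_probability(150, 75))     # 151/227, roughly 2/3
```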

Now this rule does not do super-well in any possible kind of environment. In particular, it doesn’t do any better than the maximum-entropy prediction “the next flip has a 50% probability of being heads, or tails, regardless of what we have observed previously” if the environment is in fact a fair coin. In general, there is “no free lunch” on predicting arbitrary binary sequences; if you assign greater probability mass or probability density to one binary sequence or class of sequences, you must have done so by draining probability from other binary sequences. If you begin with the prior that every binary sequence is equally likely, then you never expect any algorithm to do better in general than maximum entropy, even if that algorithm luckily does better in one particular random draw.

On the other hand, if you start with the prior that every binary sequence is equally likely, then you can never notice the patterns a human would consider obvious. If you start with the maxentropy prior, then after observing a coin come up heads one thousand times and tails never, you will still predict 50-50 on the next draw; because on the maxentropy prior, the sequence "one thousand heads followed by a tail" is exactly as likely as "one thousand heads followed by a head."

The inference rule instantiated by Laplace's Rule of Succession does better in a generic low-entropy universe. It doesn't start out with particular knowledge; it doesn't begin by assuming the coin is biased toward heads or biased toward tails. If the coin is biased heads, Laplace's Rule will learn that. If the coin is biased tails, Laplace's Rule will quickly learn that from observation as well. And if the coin is in fact fair, Laplace's Rule will rapidly converge on probabilities in the region of 50-50, and do not much worse on each coinflip than if we had started with the maximum-entropy prior.

Can you do better than Laplace's Rule of Succession? Sure; if the environment produces heads with probability 0.73, and you start out knowing that, then you can guess a 73% probability of seeing heads on the very first round. But even with that ungeneralizable and highly specific knowledge built in, you won't do very much better than Laplace's Rule of Succession unless the first few coinflips are very important to your future survival. Laplace's Rule will probably figure out that the answer is somewhere around 3/4 within the first dozen rounds, and get to the answer being somewhere around 73% after a couple of hundred rounds; and if the answer isn't 0.73, it can handle that case as well.
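As a further illustration (again mine, with made-up parameters), here is a quick simulation comparing the specialized "always predict 0.73" rule against Laplace's Rule, scored by cumulative log loss:

```python
import math
import random

def log_loss(prob_heads, outcome_is_heads):
    """Negative log probability assigned to the observed outcome."""
    p = prob_heads if outcome_is_heads else 1.0 - prob_heads
    return -math.log(p)

def compare(coin_bias, flips=1000, seed=0):
    rng = random.Random(seed)
    heads = tails = 0
    loss_fixed = loss_laplace = 0.0
    for _ in range(flips):
        outcome = rng.random() < coin_bias
        loss_fixed += log_loss(0.73, outcome)                                 # specialized rule
        loss_laplace += log_loss((heads + 1) / (heads + tails + 2), outcome)  # Laplace's Rule
        heads += outcome
        tails += not outcome
    print(f"bias={coin_bias}: fixed-0.73 loss={loss_fixed:.1f}, Laplace loss={loss_laplace:.1f}")

compare(0.73)   # the specialized rule wins, but only by a small margin
compare(0.07)   # Laplace adapts; the specialized rule does far worse
```

On the 0.73 coin the built-in knowledge buys only a small, bounded advantage; on the 0.07 coin the specialized rule keeps paying forever while Laplace's Rule adapts.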

Is Laplace's Rule the most general possible rule for inferring binary sequences? Clearly not; for example, if you saw the initial sequence…

$$HTHTHTHTHTHTHT…$$

…then you might guess, with high though not unbounded confidence, that the next generated element will be \(H\). That's because you have the ability to recognize a pattern that Laplace's Rule doesn't (namely, alternating heads and tails). Of course, your ability to recognize that pattern only helps you in environments that sometimes generate patterns like it. If we tossed you into a universe that, after you observed a thousand perfect alternating pairs, presented you with a "tails" just as often as a "heads," your pattern-recognition ability would be useless. Then again, a maxentropy universe like that one usually wouldn't give you a thousand perfect alternations in the initial sequence!

One extremely general but utterly intractable inference rule is Solomonoff induction, a universal prior which assigns probability to every computable sequence (or computable probability distribution over sequences) in proportion to its algorithmic simplicity; that is, in inverse exponential proportion to the size of the program required to specify the computation. Solomonoff induction can learn from observation anything that can be generated by a compact program, relative to a choice of universal computer that has at most a bounded effect on the amount of evidence required or the number of errors made. Of course, to whatever extent a universe's hypothetical structure avoids algorithmically compressible sequences, a Solomonoff inductor will do slightly worse than the maxentropy prior, though not much worse. Thankfully, we don't live in such a universe.

It then seems to go unrecognized that, for large enough jumps, we can see an ordering from less general inference rules to more general inference rules: rules that do well in an increasingly broad and complicated variety of environments, of the sort the real world is liable to generate:

The rule that always predicts heads with probability 0.73 on each round does extremely well in environments where each flip has an independent 0.73 probability of coming up heads.

Laplace's Rule of Succession will come to do equally well as this, given a couple of hundred initial coinflips to see the pattern; and Laplace's Rule also does well in many other low-entropy universes besides, such as those where each flip has 0.07 probability of coming up heads.

A human is more general and can also spot patterns like \(HTTHTTHTTHTT\) where Laplace’s Rule would merely converge to assigning probability 1/3 of each flip coming up heads, while the human becomes increasingly certain that a simple temporal process is at work which allows each succeeding flip to be predicted with near-certainty.

If anyone happened upon a hypercomputer and built a Solomonoff inductor out of it, the Solomonoff inductor would be more general than the human, and would do well in any environment with a programmatic description substantially smaller than the amount of data the Solomonoff inductor gets to observe.

And in the case where the environment really is maxentropy, none of these predictors need do much worse than the maxentropy prediction. It may not be a free lunch, but it isn't a very expensive lunch even by the standards of the hypothetical random universe. And this doesn't matter for anything, because we don't live in a maxentropy universe, so we don't care how much worse we would do in one.
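To illustrate that ordering in miniature, here is a toy predictor of my own construction: a Bayesian mixture over a small, hand-picked hypothesis class, with prior weights shrinking exponentially in an arbitrary toy "description length." It is nothing remotely like a real Solomonoff inductor, which is uncomputable, but it shows how a more general prior can subsume both Laplace-style frequency learning and simple pattern-spotting:

```python
from itertools import product

def bias_hypotheses(grid=21):
    # Coins of fixed bias theta; together these roughly emulate Laplace's uniform prior.
    for i in range(1, grid):
        theta = i / grid
        yield f"bias={theta:.2f}", 6.0, (lambda hist, theta=theta: theta)

def periodic_hypotheses(max_period=3):
    # Deterministic repeating patterns such as HT, HTT, ...
    for period in range(1, max_period + 1):
        for letters in product("HT", repeat=period):
            pat = "".join(letters)
            yield (f"repeat {pat}", 2.0 + period,
                   lambda hist, pat=pat: 1.0 if pat[len(hist) % len(pat)] == "H" else 0.0)

def predict_heads(history):
    """Posterior-weighted probability that the next element is H."""
    weights, weighted_preds = [], []
    for _, length, predict in list(bias_hypotheses()) + list(periodic_hypotheses()):
        w = 2.0 ** -length                          # toy "simplicity" prior
        for t, flip in enumerate(history):          # multiply in the likelihood of the data
            p = min(max(predict(history[:t]), 1e-9), 1 - 1e-9)
            w *= p if flip == "H" else 1 - p
        weights.append(w)
        weighted_preds.append(w * predict(history))
    return sum(weighted_preds) / sum(weights)

print(predict_heads("H" * 20))        # near 1: learns the heads bias, like Laplace's Rule
print(predict_heads("HT" * 10))       # near 1: spots the alternation and predicts H next
print(predict_heads("HTTHTTHT"))      # near 0: spots the HTT pattern and predicts T next
```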

Some earlier informal discussion of this point can be found in "No-Free-Lunch Theorems Are Often Irrelevant."

If intelligence is a problem-solving algorithm, then it can only be understood with respect to a specific problem.

Some problems are more general than others: not relative to a maxentropy prior, which treats all problem subclasses on an equal footing, but relative to the low-entropy universe we actually live in, a universe in which a million observed heads are more liable to be followed by an H than a T on the next round. Similarly, relative to the problem classes tossed around by our low-entropy universe, "figure out what simple computation generates this sequence" is more general than the human, who is in turn more general than "figure out what the frequency of heads or tails is in this sequence."

Human intelligence is a problem-solving algorithm for a problem class that, in the pragmatic sense, can be very, very broad.

In a more concrete way, we can observe this empirically in that all intelligent systems we know are highly specialized. The intelligence of the AIs we build today is hyper specialized in extremely narrow tasks — like playing Go, or classifying images into 10,000 known categories. The intelligence of an octopus is specialized in the problem of being an octopus. The intelligence of a human is specialized in the problem of being human.

The problem that a human solves is much more general than the problem an octopus solves, which is why we can walk on the Moon and the octopus can’t. We aren’t absolutely general—the Moon still has a certain something in common with the Earth. Scientific induction still works on the Moon. It is not the case that when you get to the Moon, the next observed charge of an electron has nothing to do with its previously observed charge; and if you throw a human into an alternate universe like that one, the human stops working. But the problem a human solves is general enough to pass from oxygen environments to the vacuum.

What would happen if we were to put a freshly created human brain in the body of an octopus, and let it live at the bottom of the ocean? Would it even learn to use its eight-legged body? Could it survive past a few days? … The brain has hardcoded conceptions of having a body with hands that can grab, a mouth that can suck, eyes mounted on a moving head that can be used to visually follow objects (the vestibulo-ocular reflex), and these preconceptions are required for human intelligence to start taking control of a human body.

It could be that, in this sense, the human motor cortex is analogous to the inference rule that always predicts heads with probability 0.73 on each round and can't learn to predict 0.07. Or it could be that our motor cortex is more like a Laplace inductor that starts out with pseudo-observations of 72 heads and 26 tails, biased toward that particular ratio, but eventually able to learn 0.07 after another thousand rounds of observation.

This is an empirical question, but I'm not sure why it's a very relevant one. The human motor cortex may well be specialized rather than starting from a blank prior, since in the ancestral environment we were never randomly plugged into octopus bodies. But so what? If you sat some humans down at a game console and gave them a robot as weird as an octopus to learn to control, I'd expect their whole-brain deliberate learning ability to do better than raw motor cortex. Humans, using their whole intelligence plus some simple controls, can learn to drive cars and fly airplanes, even though neither cars nor airplanes were present in our ancestral environment.

We also have no reason to believe that the human motor cortex is the limit of what’s possible. If we sometimes got plopped into randomly generated bodies, I expect we’d already have motor cortex that could adapt to octopodes. Maybe MotorCortex Zero could do three days of self-play on controlling randomly generated bodies and emerge rapidly able to learn any body in that class. Or, humans who are allowed to use Keras could figure out how to control octopus arms using ML. The last case would be most closely analogous to that of a hypothetical seed AI.

Empirical evidence is relatively scarce, but from what we know, children that grow up outside of the nurturing environment of human culture don’t develop any human intelligence. Feral children raised in the wild from their earliest years become effectively animals, and can no longer acquire human behaviors or language when returning to civilization.

The human visual cortex develops poorly without visual input. This doesn't mean that our visual cortex is a simple blank slate, with all the information needed to process vision stored in the environment and the visual cortex merely adapting to it from scratch. If that were true, we'd expect it to easily take control of octopus eyes. The visual cortex requires visual input because of the logic of evolutionary biology: if you make X an environmental constant, the species is liable to acquire genes that assume the presence of X, since there's no reason for it not to. The expected result is that the visual cortex contains a large amount of genetic complexity that makes it much better at vision than a generic cortical sheet, but some of that complexity requires visual input during childhood in order to unfold correctly.

But if in the ancestral environment children had grown up in total darkness 10% of the time, before seeing light for the first time on adulthood, it seems extremely likely that we could have evolved to not require visual input in order for the visual cortex to wire itself up correctly. E.g., the retina could have evolved to send in simple hallucinatory shapes that would cause the rest of the system to wire itself up to detect those shapes, or something like that.

Human children reliably grow up around other humans, so it wouldn't be surprising if humans set up their basic intellectual control processes in a way that assumes the environment contains that information. We therefore can't infer how much information is "stored" in the environment, nor that the information required by intellectual control processes couldn't have been stored genetically. This isn't a problem evolution had any reason to try to solve, so we can't infer from the absence of an evolved solution that such a solution is impossible.

And even if there’s no evolved solution, this doesn’t mean you can’t intelligently design a solution. Natural selection never built animals with steel bones or wheels for limbs, because there’s no easy incremental pathway there through a series of smaller changes, so those designs aren’t very evolvable; but human engineers still build skyscrapers and cars, etcetera.

Among humans, the art of Go is stored in a vast repository of historical games and other humans, and future Go masters among us grow up playing Go as children against superior human masters rather than inventing the whole art from scratch. You would not expect even the most talented human, reinventing the gameplay all on their own, to be able to win a competition match with a first-dan pro.

But AlphaGo was initialized on this vast repository of played games in stored form, rather than it needing to actually play human masters.

And then less than two years later, AlphaGo Zero taught itself to play at a vastly human-superior level, in three days, by self-play, from scratch, using a much simpler architecture with no ‘instinct’ in the form of precomputed features.

Now one may perhaps postulate that there is some sharp and utter distinction between the problem that AlphaGo Zero solves, and the much more general problem that humans solve, whereby our vast edifice of Go knowledge can be surpassed by a self-contained system that teaches itself, but our general cognitive problem-solving abilities can neither be compressed into a database for initialization, nor taught by self-play. But why suppose that? Human civilization taught itself by a certain sort of self-play; we didn’t learn from aliens. More to the point, I don’t see a sharp and utter distinction between Laplace’s Rule, AlphaGo Zero, a human, and a Solomonoff inductor; they just learn successively more general problem classes. If AlphaGo Zero can waltz past all human knowledge of Go, I don’t see a strong reason why AGI Zero can’t waltz past the human grasp of how to reason well, or how to perform scientific investigations, or how to learn from the data in online papers and databases.

This point could perhaps be counterargued, but it hasn’t yet been counterargued to my knowledge, and it certainly isn’t settled by any theorem of computer science known to me.

If intelligence is fundamentally linked to specific sensorimotor modalities, then you can no more increase an agent's intelligence by merely tuning its brain than you can increase the throughput of a factory line by speeding up the conveyor belt. Intelligence expansion can only come from a co-evolution of the mind, its sensorimotor modalities, and its environment.

It’s not obvious to me why any of this matters. Say an AI takes three days to learn to use an octopus body. So what?

That is to say: we agree that it's a mathematical truth that you need "some amount" of experience to go from a fully general prior to a specific problem. This doesn't mean the required experience is large for pragmatically important problems, or that it takes thirty years rather than three days. We can't just pass from "Proven: some amount of X is required" to "Therefore: a large amount of X is required" or "Therefore: so much X is required that it slows everything down a lot." (See also: the harmless supernova fallacy, "bounded, therefore harmless.")

If the gears of your brain were the defining factor of your problem-solving ability, then those rare humans with IQs far outside the normal range of human intelligence would live lives far outside the scope of normal lives, would solve problems previously thought unsolvable, and would take over the world — just as some people fear smarter-than-human AI will do.

“Von Neumann? Newton? Einstein?” —Scott Aaronson

More to the point: Einstein et al. did not have brains 100 times larger than a standard human brain, nor brains that ran 10,000 times faster. By the logic of sexual recombination in a sexually reproducing species, Einstein et al. could not have had large amounts of de novo software not present in the standard human brain. (That is: an adaptation with 10 necessary parts, each of which is only at 50% frequency in the species, gets fully assembled only 1 time in 1,000, which is not enough to present a sharp selection gradient on the component genes; complex interdependent machinery is necessarily universal within a sexually reproducing species, except that it may sometimes fail to be fully assembled. You don't get "mutants" with whole new complex abilities à la the X-Men.)
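For concreteness, the arithmetic behind the "1 time in 1,000" figure (my gloss, assuming the ten parts assort independently at 50% frequency each):

$$\left(\frac{1}{2}\right)^{10} \;=\; \frac{1}{1024} \;\approx\; \frac{1}{1000}$$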

Across the vast space of possible mind designs, humans are all metaphorically squashed into one tiny dot. We're all the same make and model of car, in slightly different sizes and with slightly different trim, sometimes with bits and pieces missing. It's only compared to other primates that we might differ by whole complex adaptations; we share 95% of our genetic material with chimpanzees. Variance between humans is not the sort of thing that establishes bounds on what is possible for intelligence, unless you import some further assumptions not described here.

The standard reply to anyone who deploys e.g. the Argument from Gödel to claim the impossibility of AGI is to ask: "Why doesn't your argument rule out humans?"

Similarly, the standard question to ask of someone deploying an argument against the possibility of superhuman general intelligence is: "Why doesn't your argument rule out humans exhibiting pragmatic intellectual performance so much greater than that of chimpanzees?"

Specializing to the case at hand, we'd ask: "Why does the fact that the smartest chimpanzee never built a rocket let us infer that no human can walk on the Moon?"

No human, not even John von Neumann, has reinvented the whole of Go play for themselves and gone on to stomp the world's greatest Go masters. AlphaGo Zero did it in three days. So "we can infer the bounds of cognitive power from the bounds of human variation" is visibly false. If there's supposed to be some special case of this generalization that is true rather than false, and that prohibits superhuman AGI, that special case needs to be spelled out.

Intelligence is not a superpower; exceptional intelligence does not, on its own, confer you with proportionally exceptional power over your circumstances.

…said the Homo sapiens, surrounded by countless powerful artifacts whose abilities, let alone mechanisms, would be utterly incomprehensible to the organisms of any less intelligent Earthly species.

A high-potential human living 10,000 years ago would have been raised in a low-complexity environment, likely speaking a single language with fewer than 5,000 words, would never have been taught to read or write, would have been exposed to a limited amount of knowledge and to few cognitive challenges. The situation is somewhat better for most contemporary humans, but there is no indication that our environmental opportunities currently outpace our cognitive potential.

Does this mean that technology a hundred years from now shouldn't be any greater than today's? If not, in what sense have we grasped every opportunity in our environment?

Is it that opportunities can only be taken one at a time, so that today's technology only affords the possibility of today's rate of progress? Then why couldn't a more powerful intelligence run through those opportunities faster, rapidly building on each one?

A smart human raised in the jungle is but a hairless ape. Similarly, an AI with a superhuman brain, dropped into a human body in our modern world, would likely not develop greater capabilities than a smart contemporary human. If it could, then exceptionally high-IQ humans would already be displaying proportionally exceptional levels of personal attainment; they would achieve exceptional levels of control over their environment, and solve major outstanding problems, which they don't in practice.

It can't eat the Internet? It can't eat the stock market? It can't crack the protein folding problem and deploy arbitrary biological systems? It can't get anything done by thinking a million times faster than we do? All this is to be inferred from observing that the smartest human was no more impressive than John von Neumann?

I don't see the strong Bayesian evidence here. It seems easy to imagine worlds such that you can get a lot of pragmatically important stuff done if you have a brain 100 times the size of John von Neumann’s, think a million times faster, and have maxed out and transcended every human cognitive talent and not just the mathy parts, and yet have the version of John von Neumann inside that world be no more impressive than we saw. How then do we infer from observing John von Neumann that we are not in such worlds?

We know that the rule of inferring cognitive bounds by looking at the human maximum didn't work for AlphaGo Zero. Why is it still supposed to be okay to infer "AGI can't eat the stock market, because no human has eaten the stock market"?

However, these billions of brains, accumulating knowledge and developing external intelligent processes over thousand of years, implement a system — civilization — which may eventually lead to artificial brains with greater intelligence than that of a single human. It is civilization as a whole that will create superhuman AI, not you, nor me, nor any individual. A process involving countless humans, over timescales we can barely comprehend. A process involving far more externalized intelligence — books, computers, mathematics, science, the internet — than biological intelligence…

Will the superhuman AIs of the future, developed collectively over centuries, have the capability to develop AI greater than themselves? No, no more than any of us can.

The premise is that brains of a particular size and composition, running software of a particular kind (the human kind), can only solve problem X (in this case, X equals "build an AGI") if they cooperate in a group of size N, run for a certain amount of time, and build Z amount of external cognitive prostheses. Fine. Humans are not particularly specialized on the AI-building problem by natural selection. Why wouldn't an AGI with larger brains, running faster, using less insane software, containing its own high-speed programmable cognitive hardware to which it could interface directly in a high-bandwidth way, and perhaps specialized on computer programming in exactly the way that human brains aren't, get more done on net than human civilization? Human civilization tackling Go devoted a lot of thinking time, parallel search, and cognitive prostheses in the form of playbooks, and then AlphaGo Zero blew past it in three days, etcetera.

To sharpen this argument:

We might start from the premise, "For every problem X, if human civilization has put a lot of effort into X and gotten as far as W, then no single agency can do much better than W," and from this premise infer that no single AGI will be able to build a new and better AGI shortly after the first AGI is built.

But this premise is visibly false; even Deep Blue bore witness against it. Is there supposed to be some special case of this generalization which is true rather than false, and says something about the ‘build an AGI’ problem which it does not say about the ‘win a chess game’ problem? Then what is that special case and why should we believe it?

Also relevant: In the game of Kasparov vs. The World, the world’s best player Garry Kasparov played a single game against thousands of other players coordinated in an online forum, led by four chess masters. Garry Kasparov’s brain eventually won, against thousands of times as much brain matter. This tells us something about the inefficiency of human scaling with simple parallelism of the nodes, presumably due to the inefficiency and low bandwidth of human speech separating the would-be arrayed brains. It says that you do not need a thousand times as much processing power as one human brain to defeat the parallel work of a thousand human brains. It is the sort of thing that can be done even by one human who is a little more talented and practiced than the components of that parallel array. Humans often just don’t agglomerate very efficiently.

However, future AIs, much like humans and the other intelligent systems we've produced so far, will contribute to our civilization, and our civilization in turn will use them to keep expanding the capabilities of the AIs it produces.

This takes in the premise “AIs can only output a small amount of cognitive improvement in AI abilities” and reaches the conclusion “increase in AI capability will be a civilizationally diffuse process.” I’m not sure that the conclusion follows, but would mostly dispute that the premise has been established by previous arguments. To put it another way, this particular argument does not contribute anything new to support “AI cannot output much AI”, it just tries to reason further from that as a premise.

Our problem-solving abilities (in particular, our ability to design AI) are already constantly improving, because those abilities do not reside primarily in our biological brains, but in our external, collective tools. The recursive loop has been in action for a long time, and the rise of "better brains" will not qualitatively affect it - no more than any previous intelligence-enhancing technology.

From the Arbital page on the harmless supernova fallacy:

  • Precedented, therefore harmless: “Really, we’ve had supernovas around us for a while: there are already devices that produce ‘super’ amounts of heat by fusing elements low in the periodic table, and they’re called thermonuclear weapons. Society has proven well able to regulate existing thermonuclear weapons and prevent them from being acquired by terrorists; there’s no reason the same shouldn’t be true of supernovas.” (Noncentral fallacy / continuum fallacy: putting supernovas on a continuum with hydrogen bombs doesn’t make them able to be handled by similar strategies, nor does finding a category such that it contains both supernovas and hydrogen bombs.)

Our own brains have never been a significant bottleneck in the AI-design process.

A rather startling assertion. Suppose we could speed up the brains of AI researchers a thousandfold in some virtually uploaded environment, not allowing them to perform new physics or biology experiments, but letting them access computers inside their virtual world. Are we to suppose that AI development would take the same amount of sidereal time? I'd expect the next version of TensorFlow to show up a lot sooner, even taking into account that most individual AI experiments would become subjectively slower, since the sped-up researchers would need those experiments to finish faster and use less computing power. The scaling loss would be less than total, just as adding a thousand times as many CPUs to the current research environment might speed up progress by a factor of at most 5 rather than 1,000. Similarly, with all those sped-up brains we might see progress increase only by a factor of 50 instead of 1,000, but I'd still expect it to go a lot faster.

In what sense, then, are we not bottlenecked on the speed of human brains in building up our understanding of AI?

Crucially, the civilization-level intelligence-improving loop has only resulted in measurably linear progress in our problem-solving abilities over time.

I emphatically don't consider myself a Kurzweilian, but even so I have to object that this seems like an odd assertion to make about the last 10,000 years.

Would recursively improving X mathematically result in X growing exponentially? No - in short, because no complex real-world system can be modeled as `X(t + 1) = X(t) * a, a > 1`.

This seems like a really strange assertion, refuted at a glance by world GDP. Note that this can't be an isolated observation, because it also implies that every necessary input into world GDP is managing to keep up, and that, at least over recent history, every input that couldn't keep up has been economically routed around.

We don't have to speculate about whether an "explosion" would occur the moment an intelligent system starts optimizing its own intelligence. As it happens, most systems are self-improving, and we are surrounded by them… Mechatronics is recursively self-improving - better manufacturing robots can make better manufacturing robots. Military empires are recursively self-expanding - the larger your empire, the greater your military means. Personal investing is self-improving - the more money you have, the more money you can make.

If we define "recursive self-improvement" to mean merely "any causal process containing at least one positive feedback loop," then the world is indeed full of such things. It's still worth singling out some feedback loops as running much faster than others: for example, the neutron cascade inside a nuclear weapon, or the cascade of information inside the transistors of a hypothetical seed AI. This looks like another instance of the "therefore harmless" step of the harmless supernova fallacy.

Software is just one cog in a bigger process (our economies, our lives), just as your brain is just one cog in a bigger process - human culture. This context puts a hard limit on the maximum potential usefulness of software, much as our environment puts a hard limit on how intelligent any individual can be, even one gifted with a superhuman brain.

"A chimpanzee is just one cog in a bigger process, namely its ecology. Why postulate some weird 'superchimp' that could expand its superchimp economy at a faster rate than the surrounding ecology produces chimpanzees?"

Concretely: suppose an agent is smart enough to crack inverse protein structure prediction, i.e., it can build its own biology, plus whatever amounts of post-biological molecular machinery are permitted by the laws of physics. In what sense is it still dependent on most of the economic outputs of the rest of human culture? Why doesn't it just start building von Neumann machines?

Beyond contextual hard limits, even if one part of a system has the ability to recursively self-improve, other parts of the system will inevitably start acting as bottlenecks. Antagonistic processes will arise in response to recursive self-improvement and squash it.

Intelligent agents will try to deliberately route around those bottlenecks, and often succeed, which is why the world economy has gone on growing at an exponential pace instead of running out of wheat in 1200 CE. It has kept growing exponentially despite even antagonistic processes, though I don't want to divert this conversation into politics.

Now to be sure, the smartest mind can't think faster than light, and unless we're very wrong about the character of physical law, its exponential growth will eventually bottleneck on the available supply of atoms. But to say "and therefore there's nothing to worry about" is the "bounded, therefore harmless" variant of the harmless supernova fallacy. A supernova isn't infinitely hot, but it's quite hot enough that you can't survive it just by wearing a Nomex jumpsuit.

When it comes to intelligence, communication between parts of the system acts as a brake on any improvement of the underlying modules - a brain made of smarter parts will have more trouble coordinating them;

Why doesn’t this prove that humans can’t be much smarter than chimps?

What scaling laws for the human brain we can infer from the evolutionary record is a complicated topic. On this point I'll refer you to section 3.1, "Returns on brain size," pp. 35-39, in my semitechnical discussion of returns on cognitive investment. The conclusion drawn there is that, from the increase in equilibrium brain size over the last few million years of hominid history, plus the basic logic of population genetics, we can infer that marginal returns to larger brains were increasing over that period, alongside presumably increasingly sophisticated neural 'software'. I also remark that human brains are not the only possible cognitive computing fabrics.

It is perhaps not a coincidence that people of very high intelligence are more likely to suffer from certain mental illnesses.

I'd expect that very-high-IQ chimpanzees are more likely to suffer from certain neurological disorders than typical chimpanzees. This doesn't tell us that chimpanzees are approaching the ultimate hard limit of intelligence, beyond which you can't scale without going insane. It tells us that if you take any biological system and try to operate it outside its typical ancestral conditions, it's more likely to break. Very-high-IQ humans are not the typical humans that natural selection shaped to operate under normal working conditions.

Yet modern scientific progress is measurably linear. I wrote about this phenomenon in detail in a 2012 essay titled "The Singularity is not coming." We didn't make greater progress in physics over the 1950-2000 period than over 1900-1950 - arguably, we did about as well. Mathematics is not advancing significantly faster today than it did in 1920. Medical science has been making linear progress on essentially all of its metrics, for decades.

I broadly agree about recent history. I tend to see it as an artifact of human bureaucracies shooting themselves in the foot, in ways that I wouldn't expect to apply within a single unified agent.

Within a finite supply of physics, we may also be using up the readily available fruit. That doesn't mean our current material technology is competitive with the limits of possible material technology, which at the least includes anything that any biological or hybrid-biological system could rapidly manufacture.

As scientific knowledge expands, the time and effort that must be invested in education and training grows, and the field of inquiry of individual researchers gets increasingly narrow.

Our brains don’t scale to hold it all, and every time a new human is born you have to start over from scratch instead of copying and pasting the knowledge. It does not seem to me like a slam-dunk to generalize from the squishy little brains yelling at each other to infer the scaling laws of arbitrary cognitive computing fabrics.

Intelligence is situational — there is no such thing as general intelligence. Your brain is one piece in a broader system which includes your body, your environment, other humans, and culture as a whole.

True of chimps; didn’t stop humans from being much smarter than chimps.

No system exists in a vacuum; any individual intelligence will always be both defined and limited by the context of its existence.

True of mice; didn't stop humans from being much smarter than mice.

Part of the argument above was, as I might perhaps unfairly summarize it, "There is no sense in which humans can be said to be absolutely smarter than octopuses." Okay, but pragmatically speaking, we have the nuclear weapons and the octopuses don't. A similar pragmatic capability gap between humans and misaligned AGIs seems like a reasonable thing to worry about. If you don't want to call that an intelligence gap, call it whatever you like.

Currently, our environment, not our brains, is acting as the bottleneck on our intelligence.

I don’t see what observation about our present world licenses the conclusion that speeding up brains tenfold would produce no change in the rate of technological advancement.

Human intelligence is largely externalized, contained not in our brain but in our civilization. We are our tools — our brains are modules in a cognitive system much larger than ourselves.

And this fact is supposed to imply slower progress by an AGI that has continuous, high-bandwidth interaction with its own onboard cognitive tools?

A system that is already self-improving, and has been for a long time.

True if we redefine “self-improving” as “any positive feedback loop whatsoever”. A nuclear fission weapon is also a positive feedback loop in neutrons triggering the release of more neutrons. The elements of this system interact on a much faster timescale than human neurons fire, and thus the overall process goes pretty fast on our own subjective timescale. I don’t recommend standing next to one when it goes off.

Recursively self-improving systems, because of contingent bottlenecks, diminishing returns, and counter-reactions arising from the broader context in which they exist, cannot achieve exponential progress in practice. Empirically, they tend to display linear or sigmoidal improvement.

Falsified by a graph of world GDP on almost any timescale.

In particular, this is the case for scientific progress — science being possibly the closest system to a recursively self-improving AI that we can observe.

I think we're mostly doing science wrong, but that would be a much longer discussion.

A rejoinder that would fit on a T-shirt: "Why do we think we're any closer to the upper bound on being good at science than chimpanzees are?"

Recursive intelligence expansion is already happening, at the level of our civilization. It will keep happening in the age of AI, and it proceeds at a roughly linear pace.

If that's true, I don't think it has been established by the arguments given.

Robin Hanson and I went back and forth on much more detailed versions of these questions in the "AI Foom Debate." I'd expect that even Robin Hanson, who broadly opposed me in that debate, would choke on the idea that progress in all systems is limited to a roughly linear rate.

For further reading, I recommend my own semitechnical paper on what our current observations can tell us about how cognitive systems scale with increased resources and further optimization, "Intelligence Explosion Microeconomics."
