Markus Schmidt on Risks from Novel Biotechnologies

||Conversations

Markus Schmidt portrait Dr. Markus Schmidt is founder and team leader of Biofaction, a research and science communication company in Vienna, Austria. With an educational background in electronic engineering, biology, and environmental risk assessment, he has carried out environmental risk assessment and safety and public perception studies in a number of fields of science and technology (GM crops, gene therapy, nanotechnology, converging technologies, and synthetic biology) for more than 10 years.

He was/is coordinator/partner in several national and European research projects, for example SYNBIOSAFE, the first European project on safety and ethics of synthetic biology (2007-2008), COSY on communicating synthetic biology (2008-2009), TARPOL on industrial and environmental applications of synthetic biology (2008-2010), CISYNBIO on the depiction of synthetic biology in movies (2009-2012), a joint Sino-Austrian project on synthetic biology and risk assessment (2009-2012), or ST-FLOW on standardization for robust bioengineering of new-to-nature biological properties (2011-2015).

He has produced science policy reports for the technology assessment offices of the German Bundestag (on GM crops in China) and the Austrian Ministry for Transport, Innovation and Technology (on nanotechnology and converging technologies). He has served as an advisor to the European Group on Ethics (EGE) of the European Commission, the US Presidential Commission for the Study of Bioethical Issues, the J. Craig Venter Institute, the Alfred P. Sloan Foundation, and the bioethics commission of the German Parliament, as well as to several thematically related international projects. Markus Schmidt is the author of several peer-reviewed articles, has edited a special journal issue and two books on synthetic biology and its societal ramifications, and produced the first documentary film about synthetic biology.

In addition to his scientific work, he has organized a science film festival and produced an art exhibition (both in 2011) to explore fictional, creative, and interpretive takes on the future of biotechnology.

Luke Muehlhauser: I’ll start by giving our readers a quick overview of synthetic biology, the “design and construction of biological devices and systems for useful purposes.” As explained in a 2012 book you edited, major applications of synthetic biology include:

  • Biofuels: ethanol, algae-based fuels, bio-hydrogen, microbial fuel cells, etc.
  • Bioremediation: wastewater treatment, water desalination, solid waste decomposition, CO2 recovery, etc.
  • Biomaterials: bioplastics, bulk chemicals, cellulosomes, etc.
  • Novel developments: protocells and xenobiology for the production of new cells and organisms.

But in addition to promoting the useful applications of synthetic biology, you have also spoken and written extensively about the potential risks of synthetic biology. Which risks from novel biotechnologies are you most concerned about?

Read more »

Bas Steunebrink on Self-Reflective Programming

||Conversations

Bas Steunebrink portrait Bas Steunebrink is a postdoctoral researcher at the Swiss AI lab IDSIA, as part of Prof. Schmidhuber’s group. He received his PhD in 2010 from Utrecht University, the Netherlands. Bas’s dissertation was on the subject of artificial emotions, which fits well in his continuing quest of finding practical and creative ways in which general intelligent agents can deal with time and resource constraints. A recent paper on how such agents will naturally strive to be effective, efficient, and curious was awarded the Kurzweil Prize for Best AGI Idea at AGI-2013. Bas is also very interested in anything related to self-reflection and meta-learning, and in all things “meta” generally.

Luke Muehlhauser: One of your ongoing projects is a Gödel machine (GM) implementation. Could you explain (1) what a Gödel machine is, (2) why you are motivated to work on that project, and (3) what your implementation consists of?


Bas Steunebrink: A GM is a program consisting of two parts running in parallel; let’s name them Solver and Searcher. Solver can be any routine that does something useful, such as solving task after task in some environment. Searcher is a routine that tries to find beneficial modifications to both Solver and Searcher, i.e., to any part of the GM’s software. So Searcher can inspect and modify any part of the Gödel Machine. The trick is that the initial setup of Searcher only allows Searcher to make such a self-modification if it has a proof that performing this self-modification is beneficial in the long run, according to an initially provided utility function. Since Solver and Searcher are running in parallel, you could say that a third component is necessary: a Scheduler. Of course Searcher also has read and write access to the Scheduler’s code.

Godel Machine: diagram of scheduler
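The Solver/Searcher/Scheduler split described above can be sketched in a few lines of Python. This is only an illustrative toy, not Steunebrink’s implementation: the class and method names are invented for this example, and the proof obligation is replaced by a trivial check, whereas a real Gödel machine must derive a formal proof, from its axiomatized self-description and utility function, that a self-modification is beneficial in the long run before applying it.

```python
class ToyGodelMachine:
    """Toy sketch of a Goedel machine's structure (illustrative only)."""

    def __init__(self):
        self.step_size = 1  # part of Solver's policy; Searcher may rewrite it
        self.progress = 0   # stand-in for accumulated utility

    def solver_step(self):
        """Solver: do some useful work in the environment."""
        self.progress += self.step_size

    def searcher_step(self):
        """Searcher: propose a self-modification and 'prove' it beneficial.
        Here the proof is a trivial comparison; the real machine requires a
        formal proof that the change increases expected long-run utility."""
        candidate = self.step_size * 2
        if candidate > self.step_size:  # stand-in for the proof search
            self.step_size = candidate  # apply the proven self-modification

    def run(self, cycles):
        """Scheduler: interleave Solver and Searcher (round-robin here).
        In a real GM, Searcher can rewrite this scheduling policy too."""
        for t in range(cycles):
            self.solver_step()
            if t % 3 == 2:  # give Searcher an occasional time slice
                self.searcher_step()
        return self.progress
```

Running `ToyGodelMachine().run(6)` shows the pattern: Solver makes progress every cycle, while Searcher periodically rewrites Solver’s own parameters, so progress accelerates over time.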

Read more »

Hadi Esmaeilzadeh on Dark Silicon

||Conversations

Hadi Esmaeilzadeh recently joined the School of Computer Science at the Georgia Institute of Technology as assistant professor. He is the first holder of the Catherine M. and James E. Allchin Early Career Professorship. Hadi directs the Alternative Computing Technologies (ACT) Lab, where he and his students are working on developing new technologies and cross-stack solutions to improve the performance and energy efficiency of computer systems for emerging applications. Hadi received his Ph.D. from the Department of Computer Science and Engineering at University of Washington. He has a Master’s degree in Computer Science from The University of Texas at Austin, and a Master’s degree in Electrical and Computer Engineering from University of Tehran. Hadi’s research has been recognized by three Communications of the ACM Research Highlights and three IEEE Micro Top Picks. Hadi’s work on dark silicon has also been covered in the New York Times.

Luke Muehlhauser: Could you please explain for our readers what “dark silicon” is, and why it poses a threat to the historical exponential trend in computing performance growth?


Hadi Esmaeilzadeh: I would like to answer your question with a question. What is the difference between the computing industry and the commodity industries like the paper towel industry?

The main difference is that the computing industry is an industry of new possibilities, while the paper towel industry is an industry of replacement. You buy paper towels because you run out of them; but you buy new computing products because they get better.

And, it is not just the computers that are improving; it is the offered services and experiences that consistently improve. Can you even imagine running out of Microsoft Windows?

One of the primary drivers of this economic model is the exponential reduction in the cost of performing general-purpose computing. While in 1971, at the dawn of microprocessors, the price of 1 MIPS (Million Instructions Per Second) was roughly $5,000, today it is about 4¢. This is an exponential reduction in the cost of the raw material for computing. This continuous and exponential reduction in cost has formed the basis of the computing industry’s economy over the past four decades.
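A quick back-of-the-envelope calculation makes the scale of this reduction concrete. The dollar figures come from the paragraph above; treating the trend as a constant exponential over an assumed 40-year span (1971 to roughly the early 2010s) is a simplification for illustration:

```python
import math

# Figures quoted above: 1 MIPS cost ~$5,000 in 1971, ~4 cents "today".
cost_1971 = 5000.0
cost_today = 0.04
years = 40  # assumed span for the calculation

total_reduction = cost_1971 / cost_today            # 125,000x overall
annual_factor = total_reduction ** (1 / years)      # yearly cost divisor
halving_time = math.log(2) / math.log(annual_factor)

print(f"overall reduction: {total_reduction:,.0f}x")
print(f"cost shrinks by ~{annual_factor:.2f}x per year")
print(f"i.e. the cost of 1 MIPS halves roughly every {halving_time:.1f} years")
```

Under these assumptions the cost of computing fell by a factor of about 125,000, i.e. roughly 1.34x per year, halving about every 2.4 years, which is the economic face of the historical scaling trend that dark silicon threatens.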

Read more »

Russell and Norvig on Friendly AI

||Analysis

russell-norvig AI: A Modern Approach is by far the dominant textbook in the field. It is used in 1200 universities, and is currently the 22nd most-cited publication in computer science. Its authors, Stuart Russell and Peter Norvig, devote significant space to AI dangers and Friendly AI in section 26.3, “The Ethics and Risks of Developing Artificial Intelligence.”

The first five risks they discuss are:

  • People might lose their jobs to automation.
  • People might have too much (or too little) leisure time.
  • People might lose their sense of being unique.
  • AI systems might be used toward undesirable ends.
  • The use of AI systems might result in a loss of accountability.

Each of these subsections is only a paragraph or two long. The final subsection, “The Success of AI Might Mean the End of the Human Race,” is given 3.5 pages. Here’s a snippet:

The question is whether an AI system poses a bigger risk than traditional software. We will look at three sources of risk. First, the AI system’s state estimation may be incorrect, causing it to do the wrong thing. For example… a missile defense system might erroneously detect an attack and launch a counterattack, leading to the death of billions…

Second, specifying the right utility function for an AI system to maximize is not so easy. For example, we might propose a utility function designed to minimize human suffering, expressed as an additive reward function over time… Given the way humans are, however, we’ll always find a way to suffer even in paradise; so the optimal decision for the AI system is to terminate the human race as soon as possible – no humans, no suffering…

Third, the AI system’s learning function may cause it to evolve into a system with unintended behavior. This scenario is the most serious, and is unique to AI systems, so we will cover it in more depth. I.J. Good wrote (1965),

Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an “intelligence explosion,” and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.

阅读更多 ”

Richard Posner on AI Dangers

||Analysis

Posner Richard Posner is a jurist, legal theorist, and economist. He is also the author of nearly 40 books, and is by far the most-cited legal scholar of the 20th century.

In 2004, Posner published Catastrophe: Risk and Response, in which he discusses risks from AGI to some extent. His analysis is interesting in part because it is intellectually independent of the Bostrom-Yudkowsky tradition that dominates the topic today.

Indeed, Posner does not appear to be aware of earlier work on the topic by I.J. Good (1970, 1982), Ed Fredkin (1979), Roger Clarke (1993, 1994), Daniel Weld & Oren Etzioni (1994), James Gips (1995), Blay Whitby (1996), Diana Gordon (2000), Chris Harper (2000), or Colin Allen (2000). He is not even aware of Hans Moravec (1990, 1999), Bill Joy (2000), Nick Bostrom (1997; 2003), or Eliezer Yudkowsky (2001). Basically, he seems to know only of Ray Kurzweil (1999).

Still, much of Posner’s analysis is consistent with the basic points of the Bostrom-Yudkowsky tradition:

[One category of catastrophic risks] comprises… scientific accidents, for example accidents involving particle accelerators, nanotechnology… and artificial intelligence. Technology is the cause of these risks, and slowing down technology may therefore be the right response.

…There may some day, perhaps some day soon (decades, not centuries, hence), be robots with human and [soon thereafter] more than human intelligence…

…Human beings may turn out to be the twenty-first century’s chimpanzees, and if so the robots may have as little use for us as we do for our fellow, but nonhuman, primates…

…A robot’s potential destructiveness does not depend on its being conscious or able to engage in [e.g. emotional processing]… Unless carefully programmed, the robots might prove indiscriminately destructive and turn on their creators.

…Kurzweil may be right that “once a computer achieves a human level of intelligence, it will necessarily roar past it”…

One major point of divergence seems to be that Posner worries about a scenario in which AGIs become self-aware, re-evaluate their goals, and decide not to be “bossed around by a dumber species” any longer. In contrast, Bostrom and Yudkowsky argue that AGIs will be dangerous not because they will “rebel” against humans, but because (roughly) using all available resources, including those on which human life depends, is a convergent instrumental goal for almost any set of final goals a powerful AGI might possess. (See e.g. Bostrom 2012.)

Ben Goertzel on AGI as a Field

||Conversations

Ben Goertzel portrait Dr. Ben Goertzel is Chief Scientist of financial prediction firm Aidyia Holdings; Chairman of AI software company Novamente LLC and bioinformatics company Biomind LLC; Chairman of the Artificial General Intelligence Society and the OpenCog Foundation; Vice Chairman of futurist nonprofit Humanity+; Scientific Advisor of biopharma firm Genescient Corp.; Advisor to the Singularity University and MIRI; Research Professor in the Fujian Key Lab for Brain-Like Intelligent Systems at Xiamen University, China; and general Chair of the Artificial General Intelligence conference series. His research work encompasses artificial general intelligence, natural language processing, cognitive science, data mining, machine learning, computational finance, bioinformatics, virtual worlds and gaming and other areas. He has published a dozen scientific books, 100+ technical papers, and numerous journalistic articles. Before entering the software industry he served as a university faculty in several departments of mathematics, computer science and cognitive science, in the US, Australia and New Zealand. He has three children and too many pets, and in his spare time enjoys creating avant-garde fiction and music, and exploring the outdoors.

Read more »

MIRI’s October Newsletter

||Newsletters

Greetings from the Executive Director

Dear friends,

The big news this month is that Paul Christiano and Eliezer Yudkowsky are giving talks at Harvard and MIT about the work coming out of MIRI’s workshops, on Oct. 15th and 17th, respectively (details below).

Meanwhile we’ve been planning future workshops and preparing future publications. Our experienced document production team is also helping to prepare Nick Bostrom’s Superintelligence book for publication. It’s a very good book, and should be released by Oxford University Press in mid-2014.

By popular demand, MIRI research fellow Eliezer Yudkowsky now has a few “Yudkowskyisms” available on t-shirts, at Rational Attire. Thanks to Katie Hartman and Michael Keenan for setting this up.

Cheers,

Luke Muehlhauser
Executive Director

Upcoming Talks at Harvard and MIT

If you live near Boston, you’ll want to come see Eliezer Yudkowsky give a talk about MIRI’s research program in the spectacular Stata building on the MIT campus, on October 17th.

His talk is titled Recursion in rational agents: Foundations for self-modifying AI. There will also be a party the next day in MIT’s Building 6, with Yudkowsky in attendance.

Two days earlier, Paul Christiano will give a technical talk to a smaller audience about one of the key results from MIRI’s research workshops thus far. This talk is titled Probabilistic metamathematics and the definability of truth.

For more details on both talks, see the blog post here.

Read more »