Superintelligence: Fears, Promises and Potentials

Reflections on Bostrom’s Superintelligence, Yudkowsky’s From AI to Zombies, and Weaver and Veitas’s “Open-Ended Intelligence”

Authors

  • Ben Goertzel, Novamente LLC

DOI:

https://doi.org/10.55613/jeet.v25i2.48

Abstract

Oxford philosopher Nick Bostrom, in his recent and celebrated book Superintelligence, argues that advanced AI poses a potentially major existential risk to humanity, and that advanced AI development should be heavily regulated and perhaps even restricted to a small set of government-approved researchers. Bostrom’s ideas and arguments are reviewed and explored in detail, and compared with the thinking of three other current thinkers on the nature and implications of AI: Eliezer Yudkowsky of the Machine Intelligence Research Institute (formerly Singularity Institute for AI), and David Weinbaum (Weaver) and Viktoras Veitas of the Global Brain Institute.

 

Relevant portions of Yudkowsky’s book Rationality: From AI to Zombies are briefly reviewed, and it is found that nearly all the core ideas of Bostrom’s work appeared previously or concurrently in Yudkowsky’s thinking. However, Yudkowsky often presents these shared ideas in a more plain-spoken and extreme form, making clearer the essence of what is being claimed. For instance, the elitist strain of thinking that one sees in the background in Bostrom is plainly and openly articulated in Yudkowsky, with many of the same practical conclusions (e.g. that it may well be best if advanced AI is developed in secret by a small elite group).

 

Bostrom and Yudkowsky view intelligent systems through the lens of reinforcement learning – they view them as “reward-maximizers” and worry about what happens when a very powerful and intelligent reward-maximizer is paired with a goal system that gives rewards for achieving foolish goals like tiling the universe with paperclips. Weinbaum and Veitas’s recent paper “Open-Ended Intelligence” presents a starkly alternative perspective on intelligence, viewing it as centered not on reward maximization, but rather on complex self-organization and self-transcending development that occurs in close coupling with a complex environment that is itself continually self-organizing, in only partially knowable ways.

 

It is concluded that Bostrom and Yudkowsky’s arguments for existential risk have some logical foundation, but are often presented in an exaggerated way. For instance, formal arguments whose implication is that the “worst-case scenarios” for advanced AI development are extremely dire are often informally discussed as if they demonstrated the likelihood, rather than just the possibility, of highly negative outcomes. And potential dangers of reward-maximizing AI are taken as problems with AI in general, rather than just as problems of the reward-maximization paradigm as an approach to building superintelligence. If one views past, current, and future intelligence as “open-ended,” in the vernacular of Weaver and Veitas, the potential dangers no longer appear to loom so large, and one sees a future that is wide-open, complex, and uncertain, just as it has always been.

Published

2015-12-01

How to Cite

Goertzel, B. (2015). Superintelligence: Fears, Promises and Potentials: Reflections on Bostrom’s Superintelligence, Yudkowsky’s From AI to Zombies, and Weaver and Veitas’s “Open-Ended Intelligence”. Journal of Ethics and Emerging Technologies, 25(2), 55-87. https://doi.org/10.55613/jeet.v25i2.48