Don’t Worry about Superintelligence

Authors

  • Nicholas Agar Victoria University of Wellington

DOI:

https://doi.org/10.55613/jeet.v26i1.52

Abstract

This paper responds to Nick Bostrom’s suggestion that the threat of a human-unfriendly superintelligence should lead us to delay or rethink progress in AI. I allow that progress in AI presents problems that we are currently unable to solve. However, we should distinguish between currently unsolved problems for which there are rational expectations of solutions and currently unsolved problems for which no such expectation is appropriate. The problem of a human-unfriendly superintelligence belongs to the first category. It is rational to proceed on the assumption that we will solve it. These observations do not reduce to zero the existential threat from superintelligence. But we should not permit fear of very improbable negative outcomes to delay the arrival of the expected benefits from AI.

Published

2016-02-01

How to Cite

Don’t Worry about Superintelligence. (2016). Journal of Ethics and Emerging Technologies, 26(1), 73-82. https://doi.org/10.55613/jeet.v26i1.52
