
Superintelligence : paths, dangers, strategies / Nick Bostrom

By: Bostrom, Nick
Material type: Text
Publication details: Oxford : Oxford University Press, c2014.
Description: xvi, 328 p. : ill. ; 25 cm
ISBN:
  • 9780199678112
  • 0199678111
Subject(s):
DDC classification:
  • 006.3/01
Online resources:
Summary: The human brain has some capabilities that the brains of other animals lack. It is to these distinctive capabilities that our species owes its dominant position. Other animals have stronger muscles or sharper claws, but we have cleverer brains. If machine brains one day come to surpass human brains in general intelligence, then this new superintelligence could become very powerful. As the fate of the gorillas now depends more on us humans than on the gorillas themselves, so the fate of our species then would come to depend on the actions of the machine superintelligence. But we have one advantage: we get to make the first move. Will it be possible to construct a seed AI or otherwise to engineer initial conditions so as to make an intelligence explosion survivable? How could one achieve a controlled detonation? To get closer to an answer to this question, we must make our way through a fascinating landscape of topics and considerations. Read the book and learn about oracles, genies, and singletons; about boxing methods, tripwires, and mind crime; about humanity's cosmic endowment and differential technological development; about indirect normativity, instrumental convergence, whole brain emulation, and technology couplings; about Malthusian economics and dystopian evolution; and about artificial intelligence, biological cognitive enhancement, and collective intelligence.
Holdings
Item type: REGULAR
Current library: University of Wollongong in Dubai, Main Collection
Call number: 006.301 BO SU
Status: Available
Date due:
Barcode: T0036517
Browsing University of Wollongong in Dubai shelves, Shelving location: Main Collection
006.3 WO IN An introduction to multiagent systems /
006.3 WO IN An introduction to multiagent systems /
006.3 YA AR Artificial superintelligence :
006.301 BO SU Superintelligence :
006.301 BO SU Superintelligence :
006.301 TE LI Life 3.0 :
006.301 TE LI Life 3.0 :

Examines the impact that the future of artificial intelligence could have, as superintelligent machines could become smarter and faster on their own and could prove catastrophic to humanity.

Includes bibliographical references (p. 261-324) and index.
