Readership: General readers as well as academics in the fields of Artificial Intelligence and Machine Learning, Computer Science, and Philosophy.
Nick Bostrom, Professor in the Faculty of Philosophy & Oxford Martin School and Director, Future of Humanity Institute, University of Oxford
Nick Bostrom is Professor in the Faculty of Philosophy at Oxford University and founding Director of the Future of Humanity Institute and of the Programme on the Impacts of Future Technology within the Oxford Martin School. He is the author of some 200 publications, including Anthropic Bias (Routledge, 2002), Global Catastrophic Risks (ed., OUP, 2008), and Human Enhancement (ed., OUP, 2009). He previously taught at Yale, and he was a Postdoctoral Fellow of the British Academy. Bostrom has a background in physics, computational neuroscience, and mathematical logic as well as philosophy.
"Nick Bostrom makes a persuasive case that the future impact of AI is perhaps the most important issue the human race has ever faced. Instead of passively drifting, we need to steer a course. Superintelligence charts the submerged rocks of the future with unprecedented detail. It marks the beginning of a new era" - Stuart Russell, Professor of Computer Science, University of California, Berkley
"Those disposed to dismiss an 'AI takeover' as science fiction may think again after reading this original and well-argued book" - Martin Rees, Past President, Royal Society
"This superb analysis by one of the worlds clearest thinkers tackles one of humanitys greatest challenges: if future superhuman artificial
intelligence becomes the biggest event in human history, then how can we ensure that it doesnt become the last?" - Max Tegmark, Professor of Physics, MIT
"Terribly important ... groundbreaking... extraordinary sagacity and clarity, enabling him to combine his wide-ranging knowledge over an impressively broad spectrum of disciplines - engineering, natural sciences, medicine, social sciences and philosophy - into a comprehensible whole... If this book gets the reception that it deserves, it may turn out the most important alarm bell since Rachel Carson's Silent Springfrom 1962, or ever" - Olle Haggstrom, Professor of Mathematical Statistics
"Valuable. The implications of introducing a second intelligent species onto Earth are
far-reaching enough to deserve hard thinking" - The Economist
"There is no doubting the force of [Bostroms] arguments the problem is a research challenge worthy of the next generations best mathematical talent. Human civilisation is at stake" - Financial Times
"Worth reading.... We need to be super careful with AI. Potentially more dangerous than nukes" - Elon Musk, Founder of SpaceX and Tesla
"A damn hard read" - Sunday Telegraph
1: Past Developments and Present Capabilities
2: Roads to Superintelligence
3: Forms of Superintelligence
4: Singularity Dynamics
5: Decisive Strategic Advantage
6: Intellectual Superpowers
7: The Superintelligent Will
8: Is the Default Outcome Doom?
9: The Control Problem
10: Oracles, Genies, Sovereigns, Tools
11: Multipolar Scenarios
12: Acquiring Values
13: Design Choices
14: The Strategic Picture
15: Nut-Cutting Time
Superintelligence in the Financial Times' book of the year list.
Nick Bostrom talks at Authors@Google about Superintelligence.
Read Time's article on 5 intelligent people who think AI could bring the apocalypse; Nick Bostrom made the list.