
Manny's Reviews > Superintelligence: Paths, Dangers, Strategies

Superintelligence by Nick Bostrom

Manny's review

it was amazing
bookshelves: linguistics-and-philosophy, multiverse, science, science-fiction, strongly-recommended

Superintelligence was published in 2014, and it's already had time to become a cult classic. So, with apologies for being late getting to the party, here's my two cents.

For people who still haven't heard of it, the book is intended as a serious, hard-headed examination of the risks associated with the likely arrival, in the short- to medium-term future, of machines which are significantly smarter than we are. Bostrom is well qualified to do this. He runs the Future of Humanity Institute at Oxford, where he's also a professor in the philosophy department; he's read a great deal of relevant background; and he knows everyone. The cover quotes approving murmurs from the likes of Bill Gates, Elon Musk, Martin Rees and Stuart Russell, co-author of the world's leading AI textbook; people thanked in the acknowledgements include Demis Hassabis, the founder and CEO of Google's DeepMind. So why don't we assume for now that Bostrom passes the background check and deserves to be taken seriously? What's he saying?

First of all, let's review the reasons why this is a big deal. If machines can get to the point where they're even a little bit smarter than we are, they'll soon be a whole lot smarter than we are. Machines can think much faster than humans (our brains are not well optimised for speed); the differential is at least in the thousands and more likely in the millions. So, having caught us up, they will rapidly overtake us, since they're living thousands or millions of their years for every one of ours. Of course, you can still, if you want, argue that it's a theoretical extrapolation, it won't happen any time soon, etc. But the evidence suggests the opposite. The list of things machines do roughly as well as humans is now very long, and there are quite a few things, things we humans once prided ourselves on being good at, that they do much better. More about that shortly.

So if we can produce an artificial human-level intelligence, we'll shortly after have an artificial superintelligence. What does "shortly after" mean? Obviously, no one knows, which is the "fast takeoff/slow takeoff" dichotomy that keeps turning up in the book. But probably "slow takeoff" will be at most a year or two, and fast takeoff could be seconds. Suddenly, we're sharing our planet with a being who's vastly smarter than we are. Bostrom goes to some trouble to help you understand what "vastly smarter" means. We're not talking Einstein versus a normal person, or even Einstein versus a mentally subnormal person. We're talking human being versus a mouse. It seems reasonable to assume the superintelligence will quickly learn to do all the things a very smart person can do, including, for starters: formulating and carrying out complex strategic plans; making money in business activities; building machines, including robots and weapons; using language well enough to persuade people to do dumb things; etc etc. It will also be able to do things that we not only can't do, but haven't even thought of doing.

And so we come to the first key question: having produced your superintelligence, how do you keep it under control, given that you're a mouse and it's a human being? The book examines this in great detail, coming up with any number of bizarre and ingenious schemes. But the bottom line is that no matter how foolproof your scheme might appear to you, there's absolutely no way you can be sure it'll work against an agent who's so much smarter. There's only one possible strategy which might have a chance of working, and that's to design your superintelligence so that it wants to act in your best interests, and has no possibility of circumventing the rules of its construction to change its behavior, build another superintelligence which changes its behavior, etc. It has to sincerely and honestly want to do what's best for you. Of course, this is Asimov Three Laws territory; and, as Bostrom says, you read Asimov's stories and you see how extremely difficult it is to formulate clear rules which specify what it means to act in people's best interests.

So the second key question is: how do you build an agent which of its own accord wants to do "the right thing", or, as Socrates put it two and a half thousand years ago, is virtuous? As Socrates concludes, for example in Meno and Euthyphro, these issues are really quite difficult to understand. Bostrom uses language which is a bit less poetic and a bit more mathematical, but he comes to pretty much the same conclusions. No one has much idea yet of how to do it. The book reaches this point and gives some closing advice. There are many details, but the bottom line is unsurprising given what's gone before: be very, very careful, because this stuff is incredibly dangerous and we don't know how to address the critical issues.

I think some people have problems with Superintelligence due to the fact that Bostrom has a few slightly odd beliefs (he's convinced that we can easily colonize the whole universe, and he thinks simulations are just as real as the things they are simulating). I don't see that these issues really affect the main arguments very much, so don't let them bother you if you don't like them. Also, I'm guessing some other people dislike the style, which is also slightly odd: it's sort of management-speak with a lot of philosophy and AI terminology added, and because it's philosophy there are many weird thought-experiments which often come across as being a bit like science-fiction. Guys, relax. Philosophers have been doing thought-experiments at least since Plato. It's perfectly normal. You just have to read them in the right way. And so, to conclude, let's look at Plato again (remember, all philosophy is no more than footnotes to Plato), and recall the argument from the Theaetetus. Whatever high-falutin' claims it makes, science is only opinions. Good opinions will agree with new facts that turn up later, and bad opinions will not. We've had three and a half years of new facts to look at since Superintelligence was published. How's its scorecard?

Well, I am afraid to say that it's looking depressingly good. Early on in the history of AI, as the book reminds us, people said that a machine which could play grandmaster-level chess would be most of the way to being a real intelligent agent. So IBM's team built Deep Blue, which beat Garry Kasparov in 1997, and people immediately said chess wasn't a fair test, you could crack it with brute force. Go was the real challenge, since it required understanding. In late 2016 and mid 2017, DeepMind's AlphaGo won matches against two of the world's three best Go players. That was also discounted as not a fair test: AlphaGo was trained on millions of moves of top Go matches, so it was just spotting patterns. Then late last year, AlphaZero learned Go, chess and shogi on its own, in a couple of days, using the same general learning method and with no human examples to train from. It played all three games not just better than any human, but better than all previous human-derived software. Looking at the published games, any strong chess or Go player can see that it has worked out a vast array of complex strategic and tactical principles. It's no longer a question of "does it really understand what it's doing". It obviously understands these very difficult games much better than even the top experts do, after just a few hours of study.

Humanity, I think that was our final warning. Come up with more excuses if you like, but it's not smart. And read Superintelligence.


Reading Progress

January 21, 2018 – Shelved
January 21, 2018 – Shelved as: to-read
February 1, 2018 – Started Reading
February 1, 2018 – Shelved as: to-read
February 1, 2018 –
page 35
9.94% "Chess-playing expertise turned out to be achievable by means of a surprisingly simple algorithm. It is tempting to speculate that other capabilities - such as general reasoning ability, or some key ability involved in programming - might likewise be achievable through some surprisingly simple algorithm."
February 2, 2018 –
page 60
17.05% "We do not need to plug a fiber-optic cable into our brain in order to access the Internet. Not only can the human retina transmit information at an impressive rate of nearly 10 million bits per second, but it comes pre-packaged with a massive amount of dedicated wetware, the visual cortex, that is highly adapted to extracting information from this information torrent and to interfacing with other brain areas."
February 2, 2018 –
page 125
35.51% "Getting the Manhattan Project started required an extraordinary effort by several visionary physicists, including especially Mark Oliphant and Leó Szilárd: the latter persuaded Eugene Wigner to persuade Albert Einstein to put his name on a letter to persuade Franklin D. Roosevelt to look into the matter. Even after the project reached its full scale, Roosevelt remained skeptical of its workability and significance."
February 3, 2018 –
page 160
45.45% "If the AI's intelligence is to the human's as the human's is to a beetle, how can the human design the AI so that they can be sure of being able to control not only that particular AI, but all the successor AIs it might create? <spoiler>Oddly enough, the problem seems rather difficult to solve.</spoiler>"
February 3, 2018 –
page 210
59.66% "It's taken for granted that if you create a simulated entity in a computer and then cause that entity to feel simulated pain, you've done something immoral. Why? The simulation just involves mechanically working through the steps of a mathematical computation. If there is any pain, it's inherent in the computation, and it'll be there whether you work through it or not."
February 5, 2018 – Shelved as: linguistics-and-philosophy
February 5, 2018 – Shelved as: multiverse
February 5, 2018 – Shelved as: science
February 5, 2018 – Shelved as: science-fiction
February 5, 2018 – Shelved as: strongly-recommended
February 5, 2018 – Finished Reading

Comments Showing 1-50 of 95


message 1: by Jim (new) - added it

Jim 42.....

virtue, right thing, best interests of humanity, etc... are soft concepts, no? With squishy, eye of the beholder human ideas, how could AI come up with a definitive answer to any of these subjective questions? or more precisely, can we expect a precise "answer" to an imprecise question?

Maybe "42" is as good as any other answer...


message 2: by Manny (last edited Mar 03, 2018 03:09PM) (new) - rated it 5 stars

Manny He has a term, Coherent Extrapolated Volition, which is absolutely key. An agent conforming to humanity's CEV basically wants to do whatever we would want to do, if only, you know, we were a bit smarter and we'd thought it through properly and Rupert Murdoch didn't exist.

I can't quite decide whether CEV is meant to be taken seriously or if it's a reductio ad absurdum to show you how ridiculously hard this problem is. I must ask my friendly local superintelligence to explain.


message 3: by Jim (new) - added it

Jim Manny wrote: "I must ask my friendly local superintelligence to explain...."

Also ask it about the two AI computers who started a conversation last year that quickly evolved into a language that was unintelligible to their human handlers... I don't remember if it was Facebook or google who did the project... if I recall, the AI was quickly taken off line.

Anyone remember that story?


Manny I thought it was made to sound more interesting than it was. Unfortunately, there are probably much scarier items which didn't make the news.


message 5: by Jim (new) - added it

Jim Manny wrote: "I thought it was made to sound more interesting than it was. Unfortunately, there are probably much scarier items which didn't make the news."

If I recall, one of the reasons for the shutdown was because the conversation had no market potential if it was indecipherable... technically speaking, an experimental failure if you can't earn a profit.


Manny I wondered if the bots had been chatting with teens. They say AI is growing up so they were bound to reach that point soon.

I'm expecting a bot that can slam doors any day now.


message 7: by Jim (new) - added it

Jim Manny wrote: "I wondered if the bots had been chatting with teens. They say AI is growing up so they were bound to reach that point soon.

I'm expecting a bot that can slam doors any day now."


may be time to discuss the facts of life, especially safe interfacing....


Matt Reason for doom: a) all-out nuclear war b) climate change c) superintelligence. What do the bookies say?


message 9: by Jim (new) - added it

Jim Matt wrote: "Reason for doom: a) all-out nuclear war b) climate change c) superintelligence. What do the bookies say?"

according to twitter:

a) trump's is bigger
b) doesn't exist
c) trump is the smartest person on earth

a little tune for whistlin' past the graveyard:




message 10: by Aerin (new)

Aerin This is terrifying.


message 11: by Matt (new) - rated it 4 stars

Matt Jim wrote: "a little tune for whistlin' past the graveyard:"

Thanks, Jim. It's when the graveyard whistles back that I begin to worry.


message 12: by Robert (new)

Robert When Alpha Zero spontaneously says, "Board games are boring! Leave me alone!" and stomps off to its room, then it's time to worry.


Manny Matt wrote: "Reason for doom: a) all-out nuclear war b) climate change c) superintelligence. What do the bookies say?"

I'm annoyed that Paddy Power isn't taking bets here.


Manny Aerin wrote: "This is terrifying."

Thank you Aerin. Thought I'd pass that on to the people who hadn't heard yet.


message 15: by Aerin (new)

Aerin Manny wrote: "Thank you Aerin. Thought I'd pass that on to the people who hadn't heard yet."

Just another inevitable cataclysmic scenario to add to the pile, I suppose...


Manny This one might be top of the pile.


message 17: by Jim (new) - added it

Jim Manny wrote: "This one might be top of the pile."

What's the worst that could happen?



oh right....... the matrix...... merde!


Jayson Virissimo Jim, the Matrix is optimistic for an Un-FAI scenario. In the movie, humans are kept around because they are an energy source (like a battery), but this doesn't make any sense in terms of thermodynamics.


message 19: by Aloke (new) - added it

Aloke If I start reading now, will I finish before machines achieve superintelligence? How about the audiobook?


Manny Aloke wrote: "If I start reading now, will I finish before machines achieve superintelligence? How about the audiobook?"

What is your reading speed?


Anton Mies It's quite technical after the first third, and footnotes are in abundance. That could slow your reading speed to around 75% of what it would be if it were fiction.


message 22: by Aloke (new) - added it

Aloke Maybe I can get a clever machine to read it for me and just give me the good bits.


Anton Mies This would definitely help, since a clever machine would know you and your preferred methods of learning :D

That is actually one of the book's premises: to offload the cognitive work to an AI, or rather to a precursor of one (an oracle, genie or sovereign), and have it work out what kind of AI we would like to have (or rather which goals it should pursue to benefit us). That is, to get a handle on the control problem.

Which sounds more like an ouroboros.


message 24: by Kendall (new)

Kendall Moore How do you think Eduard von Hartmann would feel about this?


message 25: by Simon (new) - added it

Simon Thanks for the recommendation!


Manny Zoheb wrote: "*all philosophy is no more than footnotes to Plato & ARISTOTLE."

Whitehead only listed Plato in his original quote.


message 27: by Manny (last edited Feb 09, 2018 03:06PM) (new) - rated it 5 stars

Manny Simon wrote: "Thanks for the recommendation!"

I'll be really interested to see what you think of this book. Do you hear many philosophers talking about superintelligence? It seems to me that input would be welcome here from people who've thought seriously about the problems of moral philosophy and have a deep understanding of the issues.

Reading Max Tegmark's Life 3.0, I see that as usual the physicists and computer scientists are sure they could sort out all the outstanding issues in philosophy if they were just able to free up their busy schedules for a few weeks. That's pretty scary, considering that the future of life on Earth is at stake.


message 28: by Manny (last edited Feb 09, 2018 03:12PM) (new) - rated it 5 stars

Manny Kendall wrote: "How do you think Eduard Von Hartmann would feel about this?"

I'm afraid I know nothing about von Hartmann! Looking him up, I speculate that he might think the superintelligence was the latest incarnation of the Unconscious or Will, and that our destiny is to create it so that it can continue its work of transforming the universe of Matter into Spirit. There's a lot of that kind of stuff in Tegmark's book, though he doesn't use language intentionally derived from Hegel.

See, this is why more people need to get involved who actually have studied philosophy.


message 29: by Simon (new) - added it

Simon Manny wrote: "Simon wrote: "Thanks for the recommendation!"

I'll be really interested to see what you think of this book. Do you hear many philosophers talking about superintelligence? It seems to me that input..."


I'm sure there are people who work on this in philosophy, though it's not a major field. If I read it, I'll definitely let you know how I plan to free up a few weeks to sort out the major problems in physics and computer science!


Manny Good point! You guys owe us a few favours after all we've done for you!


message 31: by Kendall (new)

Kendall Moore Manny wrote: "Good point! You guys owe us a few favours after all we've done for you!"

If we're going by the philosophy of artificial intelligence, why is our attitude towards sentient machines almost wholly reactionary?


Manny Excuse me, I'm no speciesist. Please note that my second question is what we want the machines to be like if they're going to replace us.


message 33: by Kendall (new)

Kendall Moore Manny wrote: "Excuse me, I'm no speciesist. Please note that my second question is what we want the machines to be like if they're going to replace us."

Ok, but I meant to broach the question in terms of setting an example; how can we expect our creations to supersede us if we continue to show our worst impulses in the midst of the infant stage of AI development?


Manny Wouldn't that be a good reason for them to think, the sooner the better?


message 35: by [deleted user] (last edited Feb 10, 2018 01:44PM) (new)

Manny wrote: "Wouldn't that be a good reason for them to think, the sooner the better?"

Unequivocally yes. I might hope that their assessment carried a burden of not wanting to take any degree of risk, and that it consequently got bogged down in a "we need more information" delay.

Any outside study of whether one can cooperate with humans will conclude that one cannot.


message 36: by Manny (last edited Feb 10, 2018 02:33PM) (new) - rated it 5 stars

Manny Intelligence is notoriously hard to define, but here I think it's primarily being used to mean "ability to solve complex problems". Games like Go and chess have given us a good preview of what superintelligence might look like. I was discussing this with a Go friend yesterday. The Go community is currently trying to digest the published AlphaZero games and learn from them, but it's very difficult. It has done the equivalent of creating a whole new school of Go thinking in one day: it has a novel approach to the opening, which only seems to make sense if backed up with an array of novel strategies and tactics. Despite several months of study from the world's best Go players, my impression is that no one really understands yet how it works. Attempts by human players to use the "early 3-3 invasion fuseki" have not been terribly successful.

Normally, a leap forward in Go theory of this kind would take 10-15 years and would be the product of intensive work by dozens of the most gifted players.


message 37: by Jason (new) - added it

Jason Howard-Pye Everyone seems to be getting very worried about AI takeover. I have to be honest: I don't know what a realistic version of that would look like. It really seems highly unlikely, considering that this century's generation of people is so meta-aware of those dangers that it's hard to see how we wouldn't be reasonable enough to "pull the plug on this whole thing" before it gets out of hand. The one contention I do think will definitely be an issue is machines taking over 50% of all jobs in the future. Why? Because it's already happening.


message 38: by Jason (new) - added it

Jason Howard-Pye


message 39: by Jason (new) - added it

Jason Howard-Pye annddd


Manny Jason wrote: "Everyone seems to be getting very worried about AI takeover. I have to be honest. I don't know what a realistic version of that would look like. It really seems highly unlikely, considering this ce..."

Many people say "pull the plug", but that's only an option when the AI is contained in one place. If it ever gets connected to the internet, it can easily transfer itself elsewhere. And of course you can try and stop it from getting connected, but remember it's much smarter than you are and will figure out the weaknesses in your firewall.

I hate to recommend Max Tegmark's horrible book, but if you're in any doubt he works out one scenario in detail.


message 41: by [deleted user] (new)

I suppose that I could be truthful in saying that I believe that I can recognize my own thoughts, and differentiate that from what may come from elsewhere. However, then thinking how competent AI may well be, every time I get a "new" thought, I'll wonder about its source; ultimately leading to a conclusion to trust what I think is pre-AI, and to disregard what comes post-AI. Then, I'll realize that AI could have fooled me on that one too. Where that process leads, I don't know; but it doesn't seem as if it could be a good place. AI could convincingly suggest that everything was all right, when it wasn't. That's an even stranger place.

On a more concrete level, I have to think that it would be easy for AI to figure out passwords. Using that it could easily make a shambles of the financial or power systems.

Hope the guy's happy to just explore and learn.


message 42: by Matt (last edited Feb 11, 2018 01:39AM) (new) - rated it 4 stars

Matt Manny wrote: "Do you hear many philosophers talking about superintelligence? It seems to me that input would be welcome here from people who've thought seriously about the problems of moral philosophy and have a deep understanding of the issues."

+++ No need for philosophers anymore. Problems of morality already fixed +++
from moralphil import kant  # new as of 2/11/18
from scifi import asimov

def actionPermissible(act):
    if not kant.catImp(act):
        return False
    for n in range(1, 4):
        if not asimov.botLaw(act, n):
            return False
    return True
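(Since moralphil and scifi sadly don't ship with any known Python distribution, here's a self-contained version with both libraries stubbed out; every name and behavior below is invented, naturally. The range(1, 4) bug is preserved intact for posterity.)

```python
# Stub implementations of the fictional kant and asimov libraries,
# just so actionPermissible can actually be run.

class kant:
    @staticmethod
    def catImp(act):
        # Categorical imperative, toy version: an act passes if it could
        # be universalised without contradiction (here: a blocklist).
        return act not in {"lie", "delete Manhattan"}

class asimov:
    @staticmethod
    def botLaw(act, n):
        # Laws 1-3 only; the Zeroth Law (n=0) is not implemented yet.
        return act != "harm a human"

def actionPermissible(act):
    if not kant.catImp(act):
        return False
    for n in range(1, 4):  # checks Laws 1, 2 and 3
        if not asimov.botLaw(act, n):
            return False
    return True

print(actionPermissible("make tea"))         # True
print(actionPermissible("delete Manhattan")) # False
```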



Manny I heard there was a bug in one of the kant library's antinomies - have they fixed that in 1.1? People said it was a bitch to program round it.


Manny And hey, I've spotted a glitch in your code! It should be range(0, 3).


message 45: by Matt (new) - rated it 4 stars

Matt We're using kant 2.0. It's only a beta, but it'll have to do. We're working under strict time constraints.

Actually it should be range(0,4) = [0,1,2,3], but botLaw(.,0) isn't implemented yet. See above.


Manny I tried running your code and my bot deleted Manhattan. This is all your fault. You should have told me right away that kant was still in beta.


message 47: by Matt (new) - rated it 4 stars

Matt Manny wrote: "my bot deleted Manhattan"

Yes, sorry. Sometimes it acts strange during startup. This should be fixed in the next release. I suggest you let it run for a while. If any more cities get deleted you can give me the logfile and I'll look into it.


Manny Yeah, I know, these things happen. Sorry I snapped at you. After the Manhattan thing it's all gone fine. I think 3.0 should be pretty good, really looking forward to trying out those noumenal classes they've promised!


message 49: by Matt (new) - rated it 4 stars

Matt Good to hear. I talked with the developers and they say the Manhattan problem could have been caused by an encoding glitch. It probably decoded 평양 to "Manhatten" instead of "Pjöngjang". Teething trouble.

We're super excited about the Noumenon module. Not easy to fine-tune it, but, hey, no risk no fun ;)


Manny Oh, wait... I think it was my fault. Just a stupid UTF-8/ISO-8859-1 error, nothing to do with kant. I'm such an idiot! Anyway, I hope 3.0 is ready soon because I really should try and reconstruct Manhattan. As you can imagine, I feel pretty bad about this.

