Anton Mies's Reviews > Superintelligence: Paths, Dangers, Strategies
Superintelligence: Paths, Dangers, Strategies
by Nick Bostrom

The first third of the book is easy to follow and lays out the different paths by which a super AI might be achieved, whereas the remainder deals with possible sub-options and consequences. The remainder is full of footnotes elaborating on already detailed intricacies.
Nick Bostrom tackled the problem most people shy away from as it is: abstract, uncertain and hard.
Put simply, Nick Bostrom is a scenario analyst. He presents different scenarios and related analyses; unfortunately, in none of them does he arrive at a solution (desirable for humanity), but that is also not the point of the book. Rather, it is to show that there are countless scenarios and that we should not procrastinate on this problem.
If we were to build a super AI, it might ultimately, through reinforcement learning and self-improvement, revolt against any moral scaffolds and values we had endowed it with before launch. Morals are subjective, and a superintelligent instance might see beyond them and develop its own, or perhaps conform, but who knows. This super AI might even conclude that everything is meaningless and become a proponent of cosmic nihilism.
As the author points out, we are like children playing with a bomb. Therefore, all major AI decisions should be thought through in advance and in cooperation, before it's too late.
"The best path toward development of beneficial super AI is one in which AI developers and AI safety researchers are on the same side."
Some other bits:
An interesting approach to controlling AI from the book: let it believe that it is in a simulation testing its behavior (the menace or benefit it creates for humans). In case of good behavior it will be set free; otherwise it will be wiped out, similar to The Matrix or Inception. Because of this uncertainty, the AI would in theory behave. However, it would be a bad relationship, built on fear.
Link to Social: Why Our Brains Are Wired to Connect:
1. We could put the AI in the same fragile state as us: limited physically but not cognitively, with its state tied to our physical state.
2. What if we gave the AI an MPFC module?
"the MPFC is taking our assessments of what others believe about us as a proxy for what we should believe about ourselves. [...] MPFC may be involved in a social construction of the self." It also functions as a lie detector.
Thus the AI would consider what we think about its actions and depend on us, but the MPFC would have to be like the "heart" of its system, meaning it could not switch it off or remove it.
Reading Progress
January 1, 2018
–
Started Reading
January 1, 2018
– Shelved
January 8, 2018
–
5.0%
"The book has a super apologetic beginning, with things even a humble person shouldn't write, because otherwise there would be no point in writing such an "unworthy & incorrect" book in the first place"
January 8, 2018
–
10.0%
"An interesting point is that "artificial intelligence need not to resemble a human mind", which I had never considered before. On the other hand it does make sense: why should it, in the first place? Presumably there can be something better... But "whole brain emulation is more likely" by means of sheer technological brute force."
January 8, 2018
–
12.0%
""stem cell-derived gametes" to tweak our intelligence in pursuit of superintelligence. Hence it is more transgenic, meaning it would span further generations (see The Gene: An intimate...) and it could also work in humans, as opposed to CRISPR, which works wonders on mice but not on the human genome."
January 8, 2018
–
15.0%
""Far from being the smartest possible biological species, we are probably better thought of as the stupidest possible biological species capable of starting a technological civilization" Harsh, but honest.
Internet is one of the means for increasing collective Intelligence and is underutilized.
"That there are multiple paths does not entail that there are multiple destinations.""
January 28, 2018
–
Finished Reading
February 3, 2018
–
12.78%
"As it seems, enhancing our brains via brain-computer interfaces is a less promising path to superintelligence; quote: "difficult to achieve is a high-bandwidth direct interaction between brain and computer". This is further complicated by actual data processing speed: "how quickly the brain can extract meaning and make sense of the data". Albeit, in the long run it would enhance our capability and thus speed up the invention of super AI."
page
45
February 3, 2018
–
12.78%
""system’s collective intelligence is limited by the abilities of its member minds" [...] "If communication overheads are reduced (including not only equipment costs but also response latencies, time and attention burdens, and other factors), then larger and more densely connected"
Decentralized with improved efficiency, word of the decade: blockchain"
page
45
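The overhead point can be made concrete with a toy model (my own sketch, not from the book): each member adds ability, but every pairwise communication link costs something, so past a certain size a bigger collective stops gaining.

```python
# Toy model (my own, not from the book): each member adds ability,
# but every pairwise communication link costs overhead.
def collective_capability(n_members, member_ability, overhead_per_link):
    links = n_members * (n_members - 1) / 2
    return n_members * member_ability - links * overhead_per_link

print(collective_capability(10, 1.0, 0.05))   # 7.75: the group helps
print(collective_capability(50, 1.0, 0.05))   # negative: overhead swamps the members
print(collective_capability(50, 1.0, 0.001))  # ~48.8: cheap links let big groups keep gaining
```

Reducing the per-link overhead is exactly what Bostrom's "if communication overheads are reduced" clause is about: cheaper links move the break-even point out, allowing larger and more densely connected systems.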
February 3, 2018
–
14.77%
"There are "three forms: speed superintelligence, collective superintelligence, and quality superintelligence""
page
52
February 3, 2018
–
16.76%
"Self-justification for why I sometimes prefer solo work instead of group/teamwork. "Even within the range of present human variation we see that some functions benefit greatly from the labor of one brilliant mastermind as opposed to the joint efforts of myriad mediocrities""
page
59
February 3, 2018
–
22.73%
""The mere demonstration of the feasibility of an invention can also encourage others to develop it independently" Follow the leader, avant-garde"
page
80
February 3, 2018
–
25.85%
"Dawkins's memes: we accumulate, and thus stand on the shoulders of previous knowledge.
"Our greater intelligence lets us transmit culture more efficiently, with the result that knowledge and technology accumulates from one generation to the next.""
page
91
February 3, 2018
–
26.99%
""Now when the AI improves itself, it improves the thing that does the improving. An intelligence explosion results""
page
95
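A tiny sketch of my own (not from the book) of why improving the improver is different from being improved from outside: a constant external effort grows capability linearly, while reinvesting capability into the rate of improvement compounds.

```python
# Toy model (mine, not Bostrom's): constant external improvement
# versus improvement proportional to current capability.
def improve(capability, steps, recursive):
    for _ in range(steps):
        if recursive:
            capability += 0.1 * capability  # the system improves the improver
        else:
            capability += 0.1               # a fixed outside team adds a constant amount
    return capability

print(improve(1.0, 50, recursive=False))  # linear growth: 6.0
print(improve(1.0, 50, recursive=True))   # compounding: roughly 117
```

The same number of steps, but the recursive case explodes because each step's gain raises the size of the next step's gain.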
February 3, 2018
–
30.97%
"The question of AI motivation is one of the most important, since it will determine the AI's actions. However, motivation based on morals is a hard concept to grasp: thousands of years, hundreds of philosophers, and still multiple answers.
"According to the orthogonality thesis, intelligent agents may have an enormous range of possible final goals""
page
109
February 3, 2018
–
32.95%
""reflect that human beings consist of useful resources (such as conveniently located atoms)"
Some Matrix stuff"
page
116
February 3, 2018
–
33.81%
"Morals and goal achievement: the devil is in the details
"until the AI becomes intelligent enough to figure out that it can realize its final goal more fully and reliably by implanting electrodes into the pleasure centers of its sponsor’s brain, something assured to delight the sponsor immensely.""
page
119
February 3, 2018
–
34.94%
"More intricate issues
"if the AI is a sensible Bayesian agent, it would never assign exactly zero probability to the hypothesis that it has not yet achieved its goal""
page
123
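My own toy illustration (not Bostrom's) of that Bayesian point: as long as the prior is nonzero and no observation is strictly impossible under the hypothesis, the posterior on "goal not yet achieved" shrinks with every success-looking observation but never reaches exactly zero.

```python
# Toy illustration (mine, not from the book): the posterior on the
# hypothesis "goal not yet achieved" shrinks but never hits zero.
def update(prior, likelihood_if_unmet, likelihood_if_met):
    """One Bayes update for the hypothesis 'goal not yet achieved'."""
    num = prior * likelihood_if_unmet
    return num / (num + (1 - prior) * likelihood_if_met)

p = 0.5  # prior that the goal is still unmet
for _ in range(20):
    # each observation looks like success: very likely if the goal is
    # met (0.999), unlikely but still possible if it is not (0.01)
    p = update(p, 0.01, 0.999)

print(p)  # astronomically small, yet still strictly positive
```

That residual sliver of probability is why a goal-driven agent might never consider itself "done" and keep consuming resources to push its confidence higher.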
February 3, 2018
–
36.65%
""principal–agent problem"
Makes the heart of every economics student beat faster"
page
129
February 3, 2018
–
36.65%
""The need to solve the control problem in advance—and to implement the solution successfully in the very first system to attain superintelligence"
From here on the book predominantly focuses on the control problem"
page
129
February 3, 2018
–
39.49%
""Asimov probably having formulated the laws in the first place precisely so that they would fail in interesting ways, providing fertile plot complications for his stories""
page
139
February 3, 2018
–
52.84%
""Because the complexity is largely transparent to us, however, we often fail to appreciate that it is there"
"How could our programmer transfer this complexity into a utility function?""
page
186
February 3, 2018
–
60.51%
"If we were to offload all the hard cognitive work about the AI's goals and values onto the AI itself, letting it figure out how to rein itself in (more or less what I understood):
"it would anyway be impossible—even for a superintelligence—to find out what humanity would actually want""
page
213
February 3, 2018
–
62.5%
""offloading even more cognitive work onto the AI? Where is the limit to our possible laziness?""
page
220
February 3, 2018
–
64.49%
""It is not necessary for us to create a highly optimized design. Rather, our focus should be on creating a highly reliable design, one that can be trusted""
page
227
February 3, 2018
–
65.63%
"By delaying the arrival of superintelligence, we would advance in other areas, leading to overall increased progress and also to advances in possible AI control. However, at the same time we would face the risks presented by those advancements. Thus, by pursuing super AI first we could in theory sidestep the risks presented by the other advancements, dealing only with the risk posed by super AI."
page
231
February 3, 2018
–
72.44%
""We find ourselves in a thicket of strategic complexity, surrounded by a dense mist of uncertainty.""
page
255
February 3, 2018
–
72.73%
"Technology coupling: progress toward one technology leads to discoveries in other fields, so it is good to postpone, and yet it is not"
page
256
February 3, 2018
–
73.86%
""The best path toward the development of beneficial superintelligence is one in which AI developers and AI safety researchers are on the same side""
page
260