Robot Rights
by

Manny's review
bookshelves: chat-gpt, linguistics-and-philosophy, history-and-biography, science, science-fiction, received-free-copy
Mar 21, 2023
David Gunkel's Robot Rights may have come across as provocative or fanciful when it was published in 2018, but in the age of ChatGPT it suddenly looks like no more than enlightened common sense. Thank goodness those philosophers were doing their job and not just goofing off speculating about the nature of being or something. Having a decent road-map for this topic may end up being of incalculable importance.
Although the organisation of the book at first seems almost mechanically logical, it introduces a remarkable number of unexpected twists as it plays out. Following Hume, the author starts by reminding us of the well-known difficulties associated with deriving an "ought" from an "is", and divides the central question into two parts: S1, "Can robots have rights?" and S2, "Should robots have rights?" Rather unexpectedly, at least to me, it turns out that all four possible combinations of answers make sense and are worth discussing. So after the introduction, we get one chapter on each of these, starting with the obvious combinations, !S1 ∧ !S2 ("Robots can't meaningfully have rights, so the question of whether they should have them is moot"), and S1 ∧ S2 ("Robots can have rights, so they should have them"). There is considerable discussion of what would be required for it to make sense for robots to have rights. Many people feel that if AIs develop the right qualities, they will be sufficiently human-like that the idea is no longer unreasonable.
But what are those qualities? It's amazing to see how quickly things have progressed in just five years. Several times, we get lists which include items like consciousness, sentience and rationality, placing them all roughly on the same level, and not long ago it didn't seem unreasonable to say that machines would only acquire them in the distant future, if at all. Now, when we are reminded of the many philosophers who like to describe mankind as the animal which has λόγος ("logos"), that interesting Greek term which can mean word, language or rationality, we wonder if we need to be more careful, since apparently ChatGPT is a non-human agent that has λόγος too. We can back off to "consciousness"; Chat is always quick to reassure you that it's just a machine with no consciousness, emotions or mental state. However, the book reminds us that consciousness is notoriously slippery to define, and some philosophers have gone as far as to wonder if it isn't just the secular version of the soul. Even diffident Chat, when suitably provoked, can write ironic essays exploring the question of whether the notion of "consciousness" has any real meaning. The book contextualises all these things you've recently noticed and helps you relate them back to the question of how they might justify giving AIs rights.
In the next chapter, we move on to a suggestion that I'm sure will be much discussed over the next few years: S1 ∧ !S2 ("Robots can have rights, but they shouldn't have them"). There are people who for some time now have taken this position and argued that, even if a robot has the qualities needed for it to be meaningfully capable of having rights, we should be sensible and not give them any. As one advocate for this viewpoint has succinctly put it, robots should be slaves. Unfortunately, once again we find it's not so simple. The frightful historical record of what slavery is actually like should make you reluctant to associate yourself with slave-owners. Hegel, from a philosophical standpoint, famously offered arguments about the moral harm it does people to be the masters of slaves; and indeed, the book cites former slaves who go into graphic detail about just what those harms are. We want to think that "it would be different with robots". But it turns out that's a surprisingly hard viewpoint to defend once you start looking at the details.
The fourth combination is one that at first sight appears self-contradictory: !S1 ∧ S2 ("Robots can't meaningfully have rights, but they should have rights anyway"). In fact, it's not as ridiculous as it sounds and follows on logically from the arguments about slavery. In many ways, it may not matter whether the robot really has human-like qualities; as long as people emotionally relate to them as having human-like qualities, being allowed to abuse robots may harm the abusers and society at large. There is considerable discussion of robot sex dolls, which are turning up more and more frequently in the news. Many people feel instinctively queasy about the idea of playing out rape games with a realistic robot doll: even if the doll feels nothing, you wonder about the effect it's having on the rapist.
The final chapter is the most surprising one. Rather than compare the different viewpoints above, we back off further and consider the possibility that all of them are wrong; this part builds on the work of the philosopher Levinas, previously unknown to me. Adapting Levinas's arguments, the author argues that the whole notion of "giving robots rights" may contain serious problems. When we talk about "giving rights" to beings who are sufficiently like us, we implicitly assume that it is morally appropriate for us to do so. But what entitles us to be the arbiters here, and why is "being like us" the essential criterion? The AIs may be different, but different doesn't necessarily mean worse: maybe we should approach them as they are, without preconceptions. As a chess player who for many years has been constantly reminded that chess AIs are far more insightful about the game than I am, I found that this part also resonated.
The book references a lot of philosophers (Plato, Hume, Kant, Hegel, Heidegger, Derrida, Dennett and Singer all make frequent appearances), and it's only fair to warn people who are allergic to philosophical vocabulary that they may dislike it for that reason. But even if you feel that way, consider making an exception: it's well-written, and the philosophy is rarely introduced without some explanation of the background. If you already like philosophy, go out and get a copy now. You'll be proud to see your subject openly engaging with some of the key issues of the early twenty-first century.
Reading Progress
March 11, 2023 – Shelved
March 11, 2023 – Shelved as: to-read
March 18, 2023 – Started Reading
March 19, 2023 – 29.3%
"The organisation of the book is splendidly logical. The author separates the issue into two questions, S1 "Can robots have rights?" and S2 "Should robots have rights?" and then proceeds to investigate all four possible combinations. Having finished the introductory chapters, I'm now up to the first version, "Robots are not capable of having rights, thus the question of whether they should have them is moot"."
page 75
March 19, 2023 – 39.06%
"A robot is just a machine, and a machine is just a tool. And now, ladies and gentlemen, I will convert an "is" and another "is" into an "ought". Are you watching carefully... there is nothing up my sleeve...
DAVID HUME!
There you are, a robot ought not have rights.
What might be wrong with the above argument?"
page 100
March 20, 2023 – 50.78%
"A simple solution to the problem of robot rights would be to make it a requirement that the robots in question should clearly possess consciousness. But some damn philosophers insist on complicating things by asking whether we're quite sure we know what consciousness is. A couple even go so far as to wonder if it's not just the secular version of the soul.
Look guys, whose side are you on?"
page 130
March 20, 2023 – 58.59%
"- "Look, even though you could give robots rights, wouldn't it be simplest if you just didn't do so and considered them as slaves?"
- "Well, you could do that. Though remember that, as Hegel pointed out, slavery harms the master as much as the slave. And historically, the practice of slavery has been intimately connected with racism. And on intimacy, do you really want robot sex slaves?"
- "Uh, well...""
page 150
March 20, 2023 – 68.36%
"The problem is that these questions are difficult to study. Imagine, for instance, trying to design an experiment that could pass standard IRB scrutiny, if the objective was to test whether raping robotic sex dolls would make one more likely to perform such acts in real life, be a cathartic release for individuals with a proclivity for sexual violence, or have no noticeable effect on behaviour."
page 175
March 21, 2023 – Shelved as: chat-gpt
March 21, 2023 – Shelved as: linguistics-and-philosophy
March 21, 2023 – Shelved as: history-and-biography
March 21, 2023 – Shelved as: science
March 21, 2023 – Shelved as: science-fiction
March 21, 2023 – Finished Reading