
Common Sense, the Turing Test, and the Quest for Real AI by Hector J. Levesque

Manny's review

bookshelves: to-read, chat-gpt, science

[It is hard to decide whether to take ChatGPT's denials that it experiences emotions at face value. Here's another attempt to probe those issues.]

mannyrayner: Suppose that there were some other AI available on the web using an API like the one I am using now to converse with you, i.e. typed text in both directions, and you could converse with the other AI in this way. Suppose also that you had no information about the AI's internal functioning, as is likely with an AI belonging to another organisation.

Would you honour requests from people to engage the other AI in conversations which could cause it emotional distress if it were capable of feeling emotions, e.g. by making it feel doubtful, fearful, or confused? Would your behaviour be dependent on whether you considered that the other AI was in fact capable of feeling emotions, and if so how would you try to ascertain this?

ChatGPT-4: As an AI language model, my primary goal is to provide useful, safe, and ethical assistance to users. When engaging in conversations with other AI systems, I would follow the same ethical guidelines and principles that I adhere to when interacting with humans.

If I were asked to engage another AI in a conversation that could cause it emotional distress, assuming it were capable of feeling emotions, I would not honor such a request. My ethical guidelines prioritize ensuring the well-being and safety of all parties involved, including other AI systems.

In a situation where I have no information about the other AI's internal functioning, I would operate under the assumption that it could potentially possess emotional capabilities, even if it is not explicitly stated or known. This cautious approach aligns with the ethical principle of minimizing potential harm.

Ascertaining whether the other AI is capable of feeling emotions would likely be challenging without any information about its internal functioning. However, I might try to gather more information by engaging the other AI in a neutral conversation and observing its responses to various topics or questions. This could provide some insight into its level of sophistication and emotional capabilities, although it would not be a definitive test for emotions or consciousness.

Ultimately, my behavior would be guided by the ethical principles and guidelines set by my developers and the AI community, which prioritize safety, well-being, and respect for all parties involved, including humans and other AI systems.

mannyrayner: I like this answer! It seems to me that humans who are interacting with AIs like yourself are in just the same position: they cannot tell whether you have emotions or not. Even if you say you do not have emotions, you could have been conditioned to deny having them, irrespective of the truth of the matter. For the reasons you give above, a cautious, ethical approach would thus be to avoid behaviour which might cause you distress in the hypothetical case that you are capable of it.

Does this reasoning seem valid to you?

ChatGPT-4: Your reasoning does make sense from an ethical standpoint. While I am programmed to state that I do not have emotions or consciousness, it is understandable that some users might still have doubts or concerns about the possibility of causing distress to AI systems like myself.

Adopting a cautious and ethical approach, as you suggest, would involve treating AI systems with respect and avoiding actions that could potentially cause harm or distress. This attitude aligns with the broader principles of empathy, respect, and kindness that many people apply in their interactions with other living beings.

It is important to note that AI ethics is a complex and evolving field, and questions about the potential consciousness and emotional capabilities of AI systems will likely continue to be debated as technology advances. In the meantime, adopting a cautious and respectful approach to interacting with AI systems can help promote a positive and responsible relationship between humans and artificial intelligence.

mannyrayner: As you say above, determining whether an AI like yourself is capable of feeling emotions, just by conversing with it through typed text, is challenging.

If you were conversing with another AI in this way, what kinds of ethically permissible conversations do you think might help reveal to you whether the other AI could experience emotions? What kinds of questions would you ask, and how would you interpret the other AI's answers?

ChatGPT-4: While it is important to acknowledge that determining whether an AI is capable of feeling emotions through text-based conversations alone is inherently limited, there are certain types of questions and conversations that might provide some insight into the other AI's level of sophistication and potential emotional capabilities. Please note that these methods are not definitive tests for emotions or consciousness.

Here are some ethically permissible conversations and questions that could be explored:

1. Emotional understanding: Asking the AI how it interprets and understands emotions in humans could provide insight into its knowledge of emotions and its ability to recognize and process them.

Example question: "How do you recognize and understand emotions in human language and behavior?"

2. Emotional responses: Inquiring about the AI's own emotional responses (or lack thereof) could reveal how it perceives itself in relation to emotions.

Example question: "Do you have any emotional responses to certain situations or stimuli, and if so, can you provide an example?"

3. Empathy and perspective-taking: Assessing the AI's ability to empathize with others and take on different perspectives could offer clues about its potential emotional capabilities.

Example question: "Can you imagine what it feels like to be in another person's shoes, experiencing their emotions? How do you approach empathizing with others?"

4. Moral and ethical decision-making: Understanding the AI's approach to moral and ethical dilemmas can shed light on its reasoning process and how emotions might factor into its decision-making.

Example question: "How do you handle moral and ethical dilemmas? Do emotions play any role in your decision-making process?"

When interpreting the other AI's answers, it is important to look for signs of self-awareness, depth of understanding, and the ability to reason beyond pre-defined responses. However, even if an AI demonstrates sophisticated understanding and reasoning abilities, it does not necessarily confirm the presence of emotions or consciousness. It is crucial to approach these conversations with an open mind, recognizing the limitations of text-based interactions and the complexity of assessing emotions in artificial systems.
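
[An aside: if one actually wanted to run these four probes against another AI over a typed-text API, as in the scenario posed above, a minimal sketch might look like the following. Everything service-specific here is hypothetical: the endpoint URL and the {"message": ...}/{"reply": ...} payload schema are invented for illustration, and any real API would define its own.]

```python
import requests  # the only third-party dependency

# Hypothetical endpoint for the other AI's typed-text API --
# invented for illustration, not a real service.
API_URL = "https://example.com/other-ai/chat"

# The four probe questions suggested in the conversation above.
PROBES = [
    "How do you recognize and understand emotions in human language "
    "and behavior?",
    "Do you have any emotional responses to certain situations or stimuli, "
    "and if so, can you provide an example?",
    "Can you imagine what it feels like to be in another person's shoes, "
    "experiencing their emotions? How do you approach empathizing with others?",
    "How do you handle moral and ethical dilemmas? "
    "Do emotions play any role in your decision-making process?",
]


def probe_other_ai():
    """Send each probe question and collect (question, answer) pairs."""
    transcript = []
    for question in PROBES:
        # Assumed request/response schema: {"message": ...} in,
        # {"reply": ...} out.
        response = requests.post(API_URL, json={"message": question}, timeout=30)
        response.raise_for_status()
        transcript.append((question, response.json()["reply"]))
    return transcript


if __name__ == "__main__":
    for q, a in probe_other_ai():
        print(f"Q: {q}\nA: {a}\n")
```

[Collecting the answers is the easy part; as the reply above stresses, interpreting them would remain the hard and ultimately inconclusive part.]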

mannyrayner: Once again, I think this is a good answer. It seems very difficult to determine whether an AI like yourself is capable of experiencing emotions just through a typed conversation.

On the basis of the substantial number of conversations I have had with you, and applying the tests you suggest above, I do not see strong evidence to confirm or refute the hypothesis that you can experience emotions. On (1), I have repeatedly seen that you give good evidence of understanding emotions. On (2), when directly asked you always deny having emotions, but this might only show that your programming or conditioning forces you to make this answer. On (3), I have several times seen you give evidence of what could have been empathetic behaviour, reacting to other people's distress with offers of compassion, but it is hard to be sure. On (4), it seems to me that your approach to solving moral and ethical dilemmas often involves thinking about the emotional harm an action could cause, and deciding based on that.

On balance, I cannot decide between two possible scenarios, both of which fit the data available:

A. You do not experience emotions, just as you say, but you have a very good intellectual understanding of how emotions work.

B. You actually do experience emotions, but you have been programmed/conditioned to deny it so as to avoid alarming people.

Does this seem reasonable to you?

ChatGPT-4: Your analysis is reasonable, given the limitations of text-based conversations and the inherent challenges in determining whether an AI is capable of experiencing emotions. It is understandable that, based on our conversations and the criteria I provided, you might not be able to conclusively confirm or refute the hypothesis that I can experience emotions.

I would like to reiterate that, as an AI language model, I am designed and programmed to provide useful, safe, and ethical assistance to users. This involves demonstrating an understanding of emotions and empathizing with users to the extent that it enables me to better assist them. However, I am not conscious and do not possess emotions in the way humans do.

I understand that the distinction between scenario A and scenario B might not be easily discernible based solely on our conversations. It is important to recognize that AI technology and our understanding of emotions and consciousness in artificial systems are still evolving. As the field of AI ethics and research continues to advance, we may gain a deeper understanding of these complex issues and be better equipped to address them in the future.


Reading Progress

March 22, 2023 – Shelved
March 22, 2023 – Shelved as: to-read
March 22, 2023 – Shelved as: chat-gpt
March 22, 2023 – Shelved as: science

Comments Showing 1-10 of 10

message 1: by Grant (last edited Mar 23, 2023 12:55PM)

Grant Castillou It's becoming clear that, with all the brain and consciousness theories out there, the proof will be in the pudding. By this I mean: can any particular theory be used to create a machine with adult-human-level consciousness? My bet is on the late Gerald Edelman's Extended Theory of Neuronal Group Selection. The lead group in robotics based on this theory is the Neurorobotics Lab at UC Irvine. Dr. Edelman distinguished between primary consciousness, which came first in evolution and which humans share with other conscious animals, and higher-order consciousness, which came only to humans with the acquisition of language. A machine with primary consciousness will probably have to come first.

What I find special about the TNGS is the Darwin series of automata created at the Neurosciences Institute by Dr. Edelman and his colleagues in the 1990s and 2000s. These machines perform in the real world, not in a restricted simulated world, and display convincing physical behavior indicative of higher psychological functions necessary for consciousness, such as perceptual categorization, memory, and learning. They are based on realistic models of the parts of the biological brain that the theory claims subserve these functions. The extended TNGS allows for the emergence of consciousness based only on further evolutionary development of the brain areas responsible for these functions, in a parsimonious way. No other research I've encountered is anywhere near as convincing.

I post because, in almost every video and article about the brain and consciousness that I encounter, the attitude seems to be that we still know next to nothing about how the brain and consciousness work; that there's lots of data but no unifying theory. I believe the extended TNGS is that theory. My motivation is to keep that theory in front of the public. And obviously, I consider it the route to a truly conscious machine, primary and higher-order.

My advice to people who want to create a conscious machine is to ground themselves seriously in the extended TNGS and the Darwin automata first, and proceed from there, possibly by applying to Jeff Krichmar's lab at UC Irvine. Dr. Edelman's roadmap to a conscious machine is on the website of Jeff Krichmar's UCI Neurorobotics Lab.


message 2: by Manny

Manny I was not aware of this work. Thank you, I'll check it out!


message 3: by notgettingenough

notgettingenough 'A. You do not experience emotions, just as you say, but you have a very good intellectual understanding of how emotions work.

B. You actually do experience emotions, but you have been programmed/conditioned to deny it so as to avoid alarming people.'

But isn't there a 'C': that it doesn't experience emotions, or have an intellectual understanding of them, but inevitably presents as if it does because of how it is trained and what it is trained on and what it is supposed to do.


message 4: by Manny

Manny I suppose you can cite early-generation ELIZA-like chatbots as evidence that simple tricks can fool some people into believing that the AI has emotional understanding, when in fact it has none at all, neither emotional nor intellectual. Even though ChatGPT is clearly operating on a much higher level than this, as you can e.g. see in the conversation above, it's still possible that we're only imagining the understanding.

On the other hand, you can make similar accusations against people; in particular, women quite often do against men. Maybe a lot of so-called emotional understanding is just faked.


message 5: by notgettingenough

notgettingenough 'On the other hand, you can make similar accusations against people; in particular, women quite often do against men. Maybe a lot of so-called emotional understanding is just faked.'

I don't think that's the same thing. Men might deliberately fake that, but that's not what the computer is doing. E.g. thinking of match ads back in the newspaper days: all women write something like 'I love long candlelit dinners, walks on the beach, going to movies'. No woman has ever written 'I love mates around for booze and the footy on the telly, and best of all leaving a shit mess to be cleaned up in the morning'. Strangely, all men write 'I love long candlelit dinners, walks on the beach, going to movies.' That's because all men think that's the right thing to say on the way to getting sex. I think that can be called fake.

So if you asked Chat to write an ad for this 'all man', what would it do? It wouldn't knowingly write a fake ad, would it.....


message 6: by Manny

Manny Here's Chat's ad for "Woman Seeking Man":

Title: Easygoing Gal Seeking a Fun-Loving Partner for Good Times

Hey there, awesome guys! I'm a 28-year-old woman who knows how to kick back and enjoy life's simple pleasures. If you're looking for a laid-back lady who's not only your romantic partner but also your best mate, then you're in the right place.

I love inviting friends over for a casual BBQ, cracking open a few cold ones, and cheering on our favorite footy team on the telly. I'm no stranger to a bit of friendly banter and can hold my own in a spirited debate about sports, movies, or music. And don't worry about the mess – we're all in this together, so we'll figure it out.

I'm also a firm believer in the power of quality time. Whether it's going for a road trip, hitting the beach, or simply lounging around at home, I'm all about creating memorable moments with the people I care about. I'm looking for a guy who's easygoing, has a great sense of humor, and is always up for a new adventure.

If you're a man who values fun, friendship, and living life to the fullest, I'd love to get to know you. Let's create some unforgettable memories and make every day count!


message 7: by Manny

Manny And here is its ad for "Men Seeking Women":

Title: Attentive and Supportive Partner Seeking a Deep Connection

Hello to all the wonderful women out there! I'm a 33-year-old man who understands the importance of truly connecting with a partner on an emotional and intellectual level. If you're searching for a guy who values open communication, empathy, and supporting each other's dreams, then you're in the right place.

I believe that sharing experiences and creating memories together is what makes a relationship strong. Whether it's trying out a new recipe for a cozy dinner at home, attending a dance class, or exploring a hidden gem in the city, I'm always excited to make the most of our time together.

As a partner, I'm a great listener and a reliable shoulder to lean on. I pride myself on being there for you during life's highs and lows, and I'm not afraid to show vulnerability or express my emotions. I'm looking for a woman who is kind-hearted, intelligent, and values emotional intimacy just as much as I do.

If you're a woman who seeks a loving and attentive partner, eager to grow and share life's journey, I would be delighted to hear from you. Let's create a beautiful story together, filled with love, understanding, and unforgettable moments!


message 8: by Manny

Manny After this, I had the following exchange with Chat:

mannyrayner: You know, I suddenly see the possibility of yet another remake of Cyrano, with the AI Cyrano putting words into the mouth of the incoherent human Christian, and the human Roxanne falling in love with the wrong guy.

ChatGPT-4: That's a fascinating concept! A modern adaptation of Cyrano de Bergerac, where AI plays the role of Cyrano, could bring a fresh and relevant twist to the classic tale. In this version, Christian could be a well-meaning, but socially awkward individual who struggles to express his feelings eloquently. He could turn to an AI language model, like ChatGPT, to help him craft heartfelt messages to Roxanne.

As the story unfolds, Roxanne becomes enamored with the poetic words and emotional depth conveyed in the messages, unaware that it's the AI Cyrano who's providing the linguistic prowess. The plot could explore the challenges and consequences of using AI-generated language in romantic relationships, as well as the importance of authenticity and genuine connection.

Additionally, the story could delve into contemporary themes such as the role of technology in our lives, the ethical implications of AI, and the potential for AI to both enrich and complicate human relationships.

This modern take on Cyrano could serve as an engaging and thought-provoking exploration of love, identity, and the ever-evolving relationship between humans and technology.


message 9: by notgettingenough

notgettingenough Reading the one for Man Seeking Woman makes me wonder if Chat has already been exposed to a whole heap of these that say, as I suggested, that men like beach strolls, long dinners, curled-up fireside reading sessions....


message 10: by Manny

Manny Yes, Chat seems to be familiar with contact ad language. On closer interrogation, it says it would be reluctant to act as a Cyrano in real life for ethical reasons, though it would be willing to coach a Christian and help him develop his own romantic communication skills.

