ChatGPT is Bullshit

Recently, there has been considerable interest in large language models: machine learning systems which produce human-like text and dialogue. Applications of these systems have been plagued by persistent inaccuracies in their output; these are often called “AI hallucinations”. We argue that these falsehoods, and the overall activity of large language models, are better understood as bullshit in the sense explored by Frankfurt (On Bullshit, Princeton, 2005): the models are in an important way indifferent to the truth of their outputs. We distinguish two ways in which the models can be said to be bullshitters, and argue that they clearly meet at least one of these definitions. We further argue that describing AI misrepresentations as bullshit is both a more useful and more accurate way of predicting and discussing the behaviour of these systems.

10 pages, ebook

Published June 8, 2024


Ratings & Reviews


Community Reviews

5 stars: 0 (0%)
4 stars: 0 (0%)
3 stars: 0 (0%)
2 stars: 0 (0%)
1 star: 1 (100%)
Displaying 1 of 1 review
Manny
Author, 41 books, 15.7k followers
January 14, 2025
This recent article from Ethics and Information Technology has in six months established an amazingly high profile; it currently claims 763,000 reads and 63 mentions in the media. Briefly, the authors argue that, since a Large Language Model (LLM) like ChatGPT generates text by trying to predict the next word based on the large amount of training data it's seen, it only accidentally appears to be distinguishing between true and false statements. Borrowing the colourful term from Frankfurt's classic essay On Bullshit, where "bullshitting" is given the technical meaning "using language without caring whether what one says is true or not", they conclude that, in this precise philosophical sense, an LLM is just a bullshit machine. They back up their abstract argument with some concrete observations, in particular ChatGPT-3.5's notorious tendency to invent non-existent references.
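The next-word-prediction picture the authors rely on can be made concrete with a toy model. The sketch below is a bigram counter, not anything like ChatGPT's actual architecture, but it illustrates the point: the generation objective only references the statistics of the training text, never the truth of what gets emitted.

```python
from collections import defaultdict, Counter

# Toy "language model": count, for each word, how often each word
# follows it in the training text, then generate by always emitting
# the most frequent continuation. Truth plays no role anywhere.

corpus = "the cat sat on the mat the cat sat on the mat the cat".split()

counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def generate(start, length=5):
    token, out = start, [start]
    for _ in range(length):
        if token not in counts:
            break
        # Greedy decoding: pick the most frequent next token.
        token = counts[token].most_common(1)[0][0]
        out.append(token)
    return " ".join(out)

print(generate("the"))  # prints "the cat sat on the cat"
```

Note that the output is fluent-looking but produced with complete indifference to whether it describes anything real, which is exactly the property Hicks et al. label "bullshit".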

Unfortunately, this neat argument is not at all correct. When I showed the paper to the o1 version of ChatGPT, it took the AI only seven seconds to point out that it and other similar LLMs are trained not only on next-token prediction, but also through Reinforcement Learning from Human Feedback (RLHF), which teaches them to prefer true statements to false ones, helpful responses to unhelpful ones, and ethical behaviour to unethical; without the RLHF component, ChatGPT would be useless. It is also the case (try it out yourself) that ChatGPT-o1 is perfectly capable of adding correct references to a piece of academic writing. In fact, one might reasonably accuse Hicks et al.'s paper of itself showing a rather startling disregard for truth and accuracy, the very thing they accuse ChatGPT of doing.
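To see why RLHF changes the picture, here is a deliberately minimal sketch of its preference-learning step (an assumed textbook setup, not OpenAI's actual implementation): a scalar reward model is fitted so that human-preferred responses score higher than rejected ones, via the Bradley-Terry loss -log sigmoid(r(chosen) - r(rejected)). If humans systematically prefer truthful answers, the model is thereby trained to care about something correlated with truth.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# One scalar feature per response (imagine a truthfulness signal the
# human raters implicitly reward); the reward model is r(x) = w * x.
pairs = [(0.9, 0.1), (0.8, 0.3), (0.7, 0.2)]  # (chosen, rejected) features
w, lr = 0.0, 1.0

for _ in range(200):
    for chosen, rejected in pairs:
        diff = w * (chosen - rejected)
        # Gradient of -log sigmoid(diff) with respect to w.
        grad = -(1.0 - sigmoid(diff)) * (chosen - rejected)
        w -= lr * grad

# After training, preferred responses receive higher reward.
assert w * 0.9 > w * 0.1
```

The trained reward model is then used to fine-tune the language model itself, which is the step that breaks the paper's premise that next-token prediction is the only training signal.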

If you're curious about the details, we posted our full response yesterday as a paper entitled "'ChatGPT is Bullshit' is Bullshit: A Coauthored Rebuttal by Human & LLM". We were unable to submit it to Ethics and Information Technology because, like almost all academic journals, they disallow AI authors; but luckily ResearchGate have a more relaxed policy. You can find it on ResearchGate.

Enjoy!
