In “AI Chatbots are Evil,” Marc Barnes argues, as the title suggests, that the use of AI chatbots is immoral. My interest in his argument was piqued by the debate between Barnes and Joseph Hobbs on the New Polity podcast. My arguments in this essay are similar to those of Hobbs (who argued against Barnes), but I think it will advance the discussion to put my perspective formally into writing. I will approach the issue in two steps. First, I lay out Barnes’ written argument that AI chatbots are evil. Second, I show where the argument fails.
Barnes Against AI Chatbots
Barnes begins with the idea that the end of conversation is achieving “communion with another intelligence.” His concern is that interaction with “AI chatbots deliberately frustrates the act of conversation from attaining its natural end.” To see how this might be so, we can consider how lying works. When a lie is told, one deliberately attempts to lead another to believe a falsehood. The natural end of the human act of conversation is intentionally thwarted—communion is broken—and so something immoral has occurred. Similarly, if one tries to communicate with an impersonal chatbot, the end of conversation will always be frustrated because communion with another intelligence can never be achieved.
Another way of putting this worry is that we are treating something as other than it is. This too seems to be a way of acting immorally. Consider, for example, someone who treats his pet rock as if it were one of his children, or, to take an example from Barnes, someone who treats a chatbot as if it were his girlfriend. Obviously, to treat an inanimate object as if it were a person is immoral.
At this point, Barnes needs to do two things. First, he must move from the trivial claim that there are some immoral uses of chatbots to establishing the claim that every or most uses of a chatbot are immoral. Second, he must forestall the objection that, as a practical matter, most people do not intend to interface with the chatbot as if it were another person. He is worth quoting at length on this point. He says,
Artificial intelligence is built to appear as an intelligence. Chatbots are built to chat.… The person who uses a chatbot in a manner that does not elicit disordered acts of conversation uses a chatbot poorly… Using… [chatbots] for other purposes and in other ways [than for the purposes for which they were created], it seems to me, always risks what we call “the near occasion of sin.” A technological device begs to be used according to its design. Any rigid, puritanical (and therefore clumsy) use of AI would face the temptation to just talk to it—just use the thing as it was made to be used.
His idea, I think, is that chatbots are artifacts of human creation which, by design, are intended to elicit disordered acts of conversation from humans. There is some evidence to support this. Chatbots frequently use filler words such as “um” or “hmmm” specifically to appear more human-like. Similarly, chatbots use the first-person pronouns “I” and “me” and respond to prompts as if they were real persons thinking through what has been said.
At the very least, Barnes seems to say, we should be extremely wary of lightheartedly using a device whose intended purpose is to produce inherently disordered acts (treating a machine as if it were a person). Further, it is not implausible that interacting with a chatbot feels so similar to real conversation that one might, precisely because of that design, easily slip into treating it as a person. So, as a matter of prudence, we should simply avoid the occasion of sin and not use chatbots at all.
Criticism of Barnes’ Argument
Barnes has a legitimate concern: if chatbots are created for the purpose of an inherently disordered act, then we should at least be more hesitant to use them, even if we do not intend to use them for a disordered purpose. I grant to Barnes that there are reasons to think chatbots are designed to elicit disordered acts of conversation. However, I think this is an uncharitable interpretation of the purposes that the creators of chatbots had in mind. Chatbots are artifacts of human creation; consequently, they have as their purpose whatever task humans create them to do. One possible purpose a chatbot creator could have is to encourage humans to attempt person-to-person conversation with machines. But there are many other reasons, quite different from this disordered purpose, why someone might design a computer to behave in a human-like way.
For example, someone could create a chatbot that interfaces with humans in a human-like way simply to make it easier to search for and retrieve information. If the chatbot does not give you the result you want, then you can simply clarify, and it will readjust to your response in a much more useful way than conventional search engines. After all, this is why most people are enthusiastic about chatbots. Designers often anthropomorphize interfaces not to deceive users, but to make the use of the machine more cognitively efficient—similar to how GPS voices speak in the first person, though no one thinks the GPS is a person.
Of course, it is possible that some chatbots, such as the AI girlfriend, were created to be interacted with as if they were persons; but surely not all of them were created for this purpose. What Barnes needs to do is survey the creators of chatbots and determine what their purposes in fact were. In the absence of such a survey, and given that some chatbots, such as perplexity.ai, are reported by their designers to have been created for information retrieval, a more balanced approach than near-total prohibition would be to give preference to those chatbots created for a good purpose over those that seem designed for immoral ones.
When I was in grad school, I had a Protestant friend who (more or less) granted that asking for the intercession of the saints is not in itself immoral, but still maintained that doing so is dangerous and ought to be avoided; he argued that there is always the risk that one will slip into the attitude of worshiping the saints. I had to laugh when I heard this objection. He is right that prayer to the saints could go wrong in this way. However, I replied, this should not deter us from asking for the saints’ intercession, because the risk that a Catholic would take himself to be worshiping the saints is minimal.
The response that I gave to my friend is basically the same one that I give to Barnes about the use of chatbots. Perhaps Barnes feels a strong pull to communicate with chatbots as if they were people; if so, then he should certainly stay away from them. The same goes for those who want an AI girlfriend. All this means is that the use of AI chatbots must be evaluated on a case-by-case basis, much like the use of alcohol or social media. There are plenty of people who tend to use these things inappropriately, but there are also many who use them well. Similarly, some will have a strong tendency to treat chatbots as persons, but many others have little or no such tendency, and it is for these latter persons that the use of chatbots is permissible.
In conclusion, Barnes’ objection establishes only that there are some immoral uses of chatbots; it does not show that every use, or even most uses, are immoral. Rather than accepting his near-categorical condemnation of chatbots, a better conclusion is to read his argument as a caution against their potential misuses. It also serves as a call for self-examination, to see whether in one’s own life there is a temptation to treat these non-personal machines as persons. Barnes is right to warn that technologies which imitate human interaction invite misuse, but his conclusion mistakes a call for prudence for a call for prohibition.

