Of artificial intelligence, what can be said that has not already been said? What more anticipation could we heap on a technology born under the prophetic star of “the singularity”? You remember the singularity, of course.
No? Well, you’re probably from the Midwest. The Singularity was big among West Coasters—the faith that mankind would inevitably produce a superintelligent computer which would take care of all our problems, leaving us with nothing to do but, like, art, man. This optimism gradually darkened—perhaps the fact that many little machines have not made us happy was enough to produce the suspicion that one very big machine would not make us happy either. In any case, by the time I was old enough to care about the future of the human race, the Singularitarian faith was expressed to me in the following way:
Inevitably, we will produce a machine capable of reproducing itself, improving itself, acting in its own self-interest, and ultimately of destroying humanity—by, say, electing to solve computational problems by turning all available life-forms into batteries. The more simple-minded might imagine that such a faith would express itself in asceticism—in the radical act of not making such a machine. But the first premise of the movement—that making such a machine was inevitable—precluded such radical ideas. The only option available to a believing Singularitarian is to swear allegiance to the child yet to be born; to accelerate the process; to try and make the thing and to thereby become one of its elect; that is, one whom the inevitable supercomputer would not destroy, presumably because it would need—or on some beeping, booping level appreciate—its willing slaves.
To be frank, I don’t know why or how anyone ever got so dumb. After extensive research, I believe it to be the psychological effect of the overuse of graphs. Graphs showing lines which reach towards Heaven are very convincing to venture-capitalist types, who are trained to love such lines from an early age. Whether it indicates a rising stock price, a Happy Return, or blossoming company growth, a good diagonal produces boundless confidence. And when we arrange our inventions temporally, ordered according to their growing capacities, it is easy to construct such an ascent—from calculator, to information processor, to internet-connected box, to smart-technology, to brain-chip—and to presume that Things Will Go Up in the Future as They Have Gone Up in the Past. From there it is simply a matter of choosing a point on the line in which the feedback loops of machine-learning will ramp things up; at which point growth will become exponential growth, quantitative gains will become qualitative change, and dead matter will produce a mind.
By being placed on a graph, artificial intelligence takes on the appearance of the same inevitability as the events which preceded it—so long as we ignore the fact that these previous inventions only appear inevitable after they have been invented. Obviously, history has a certain necessity to it—if it happened then it happened—and the key idiocy of the technological faith is to grant to the future a quality which belongs to the past.
It is all less exciting now that the future, in the guise of the past, is so lamely present, and we all manipulate a bunch of disenchanted chatbots we call “AI.” Our sci-fi-addled dreams were somewhat predictably realized, not by any vast improvement in artifice, but by a lower estimation of intelligence. Against various philosophers who argued that “artificial intelligence” was simply the sputtering of mealy-minded Science Brains exhibiting the full scope of their decline, Stephen Hawking protested that so long as the net result was the same, it was irrelevant to draw distinctions between natural and artificial minds.
But if intelligence is to be named, not for what it is, but for the results it produces, the sword can always cut the other way. If human intelligence can be made to produce, on aggregate, a bunch of meaningless swill, then the invention of a meaningless-swill-producing-machine will have successfully replicated human intelligence. Precisely to the degree that the logic of the singularity is stupid, it becomes more likely to be true. Where intelligence produces such propositions as “it is inevitable that we will make a machine that can kill us,” intelligence is replicable by machines.
What I would like to point out is simply that AI took off within this climate: within a mood that anticipates death by technological innovation and flippantly posts about the incapacity of the human person to do anything about it except to submit to it and hope for mercy. This, I think, is a real advance made by AI, not so much in the field of data organization, but in marketing.
Marketing new technology has always been an exercise in wringing an increasingly dry sponge. No one really believes the hyperbole anymore. The first iPhone was released with the tagline “Apple reinvents the phone,” which is fair enough. The second was “The iPhone you’ve been waiting for,” which is a bit presumptuous, but we can forgive their enthusiasm. The iPhone 4 came with the proclamation “This changes everything. Again.” Now that was laying it on a bit thick. The 4S was “The most amazing iPhone yet,” which may have been technically true, but as by that point no one was any more “amazed” by their smartphones than by their airplane taking off, the bar was decidedly low. What had begun as luxury aimed at improving human life had rapidly become a rote necessity for maintaining it. The iPhone 7 came with the smarmy “This is 7,” as in, “you’re going to buy the damn thing, so what more do you need to know?” The iPhone 8 was “A new generation of iPhone,” endowing the machine with fertility and not-so-subtly insinuating that there was really no need for a customer to choose it so much as accept it as the rising generation. The iPhone X—“Say hello to the future”—gave the free choice to buy or not to buy all the inevitability of April following March. If the iPhone X is the future, and the man without a future is condemned to death, then the man without the iPhone X is a dead man. And so on.
What’s so marvelous about AI is that it began where so many other technologies end—with the fear of death and suffering. All that singularity talk of an “inevitable” future amounted to this: buy the thing or die; invest in it or suffer; align oneself or miss the future. Coercion is the superior form of mass-marketing; the threat of death the most efficient way of producing a market demographic; and for all the begging and pleading, for all the billboards and Super Bowl spots and suggestive selling and integrated advertisement within those feeds from which everyone, everyone, everyone eats, still—no commodity was ever sold quite so effectively as a gas-mask during the World Wars, as the vaccine during Covid. If sex sells—and it does—death sells faster. Sex can only insinuate. The fear of death delivers.
But what is AI, anyways? A machine designed to amalgamate massive amounts of data into plausible answers to questions and demands. Its method is “collage”; its fuel is, largely, oil, natural gas, and massive amounts of interest-bearing loans; its material, the many trillions of words and images, questions, answers, and conversation patterns we have all been busily holding up for surveillance and recording (by using the internet) for the last twenty-odd years; its predecessor was the spambot; its product is a pastiche of speech, many answers rearranged according to general patterns to produce the plausible appearance of one answer emanating from one intelligence.
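The “collage” method described above, in which many recorded utterances are rearranged according to general patterns into one plausible-seeming answer, can be illustrated in miniature by a Markov-chain text generator. This is a deliberately crude sketch, not how any production chatbot actually works; the function names and corpus are invented for illustration.

```python
import random

def build_chain(corpus, order=1):
    """Record, for each word, the words observed to follow it in the corpus."""
    words = corpus.split()
    chain = {}
    for i in range(len(words) - order):
        key = tuple(words[i:i + order])
        chain.setdefault(key, []).append(words[i + order])
    return chain

def pastiche(chain, length=10, seed=0):
    """Rearrange the recorded speech into a plausible-looking new utterance."""
    random.seed(seed)
    key = random.choice(list(chain.keys()))
    out = list(key)
    for _ in range(length):
        followers = chain.get(tuple(out[-len(key):]))
        if not followers:  # nothing was ever observed to follow; fall silent
            break
        out.append(random.choice(followers))
    return " ".join(out)
```

Nothing here understands anything; the output merely inherits the local patterns of its sources, which is the sense in which the product is pastiche rather than speech.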
Does it work? Yes, basically.
Like all automation, AI relies on a prior division and degradation of human work. Automation is always the automation of something. It cannot, by definition and as a simple matter of fact, begin as a new automaton, only as a “division of labor” that degrades human work into discrete, stupid, and repetitive actions which can then and only then be performed by machines.
Automatic grocery check-outs did not replace the greengrocer; they replaced the drudgery of the “checkout girl,” who was herself produced by dividing the labor of the greengrocer into several hundred discrete, soul-sucking “jobs.” As the industrialist Andrew Ure proudly confessed, the primary invention of the industrial process did not lie in “self-acting mechanism” but “in the distribution of the different members of the apparatus into one co-operative body, in impelling each organ with its appropriate delicacy and speed, and above all, in training human beings to renounce their desultory habits of work, and to identify themselves with the unvarying regularity of the complex automaton.”
So, if AI effectively automates intellectual labor, it is necessary to ask, what division of labor preceded and so made possible this automation? Obviously, the whole enterprise of living and working online—where every one of our communications can be divided from its place within some living conversation and treated as a categorizable “record” or as “data.” The digital world is an objective (if usually unintentional) “division” of human wholes (conversations, essays, pictures, artworks) into rearrangeable parts (pixels, entries, if-a-then-b patterns, etc.).
But AI is obviously more than a search engine. It is also a presentation of a search engine as a person, an intelligence, and this successful automation of personality cannot be explained simply by reference to the material which the machine rearranges or the algorithms by which it does so—it must be explained as a belief of the person who uses AI.
A personality can only appear personal to a person, and whether something is intelligent in itself is not simply deduced from the results it produces, rather, even the most impressive results await the confirmation of feeling and experience. To know whether something knows we must approach it—does it stir us to speech? Produce empathy, sympathy, eros within us? Do we feel that it communicates from out of an interior, as we communicate, or that it is all surface, without depth? The subject knows subjectivity and no ability “to perform complex sums” or “give technically correct answers” can prove to him that an AI chatbot is intelligent, for intelligence is spirit. It is not simply that the capacities of the machine must be improved to produce “artificial intelligence”—our capacity for recognizing intelligence must be degraded.
The Bible should prepare us to understand this. Not only can man’s intellect be degraded to the point of calling things “people”; not only is it possible to fool him into thinking that exteriors have interiors, but, given the right conditions, he longs to be fooled, longs to find an intelligence where there is but “clay inside and brass outside.” In the book of Daniel, such a statue is believed to be the “living god” Bel, because it appears to eat the offerings set before it: “every day [the Babylonians] spent twelve bushels of fine flour and forty sheep and fifty gallons of wine” on it. The thing appeared to have an inside, until Daniel revealed the trap door under the altar of the god, by which the priests of Bel would sneak in at night to consume the provisions: “It is only clay inside and bronze outside; it has never eaten or drunk anything” (Daniel 14:7).
Now, much could be said in regard to AI at this juncture, for ChatGPT, OpenAI, GoogleAI, WhateverElonIsDoing AI—they all consume massive amounts of fossil fuels in order to produce an appearance of intelligence, one which ultimately enriches a Californian priestly class who maintain and keep it and tell lies about what it will do for us in order to raise more money for themselves and their families. These lies include the basic omission that beneath the altar of this apparent “intelligence” is the trap door of human intervention—as when Kenyans were paid miserable wages to manually scrub AI chatbots’ “results” of “traumatic content,” meaning everything horrifying, murderous, and pornographic (all of which can be accessed at subscription rates, of course). Or as when these chatbots are lobotomized by armies of interns to limit their “responses” to those acceptable within secular, liberal societies. Or the lie that they are useful ways of unleashing human creativity when everyone uses them mostly to cheat on homework and to write pornographic fanfiction (surprise!).
The Babylonian king deduced, from his statue’s apparent gift of gobbling-up, that he stood in the presence of an “intelligence,” a “living god”—and this required stupidity. His mind had to be made mushy, dumb enough to equate spirit with stomach and divinity with devouring. Daniel, a Jew, laughs at Bel as an idol and at the king as an idiot—and indeed, laughter at stupidity is the righteous reaction of the Israelite to idol-worship: “Everyone is too stupid to know; every artisan is put to shame by his idol: He has molded a fraud, without breath of life. They are nothing, objects of ridicule” (Jeremiah 10:14–15).
“Daniel began to laugh” (14:7) because he had been prepared, by the true religion, to recognize spirit. He chortles, because the Torah teaches that animals eat—and yet they have “no understanding.” He snorts, because the divine reveals himself precisely as the one who does not eat (and so does not compete with man for resources) but has life in and of and through himself: “Were I hungry, I would not tell you, for mine is the world and all that fills it. Do I eat the flesh of bulls or drink the blood of he-goats?” (Psalm 50:12–13).
The ability to laugh, to not see intelligence where there is no intelligence, and so to immediately look for some trap door, some trick—this requires a definite subjective attitude. The leaping willingness to see, in man-made stuff, the stirrings of spirit—this requires another. There is no doubt that, as a culture bent on boosting artificial intelligence, we’ve been bolstering the Babylonian brain and not the Israelite. There are many points at which we might point our accusing finger to explain the fact of our stupidity: it is true, for instance, that the Enlightenment’s theoretical reduction of the intellect to a calculating machine forms a great part of the training in stupidity by which a calculator appears as a person, or, as in Stephen Hawking’s conception, by which it might as well be a person. But in the main, the successful appearance of intelligence in AI chatbots is the result of a degradation of personality by two-hundred years of bureaucracy—the primary political form of the nation-states and institutions in which we all live, move, and have our being.
The great surprise of AI is that, unlike most industrial divisions of labor—in which enclosures and the wage-system are deliberately utilized in order to degrade some human work into discrete units for its eventual automation—no one deliberately “utilized” bureaucracy. Its scope is too broad and occurs over too great a time period to call it a plan, even a wicked plan. We did not see the division of labor—we are living it. We cannot easily name the period of preparation for the intellect’s automation—we were born into it. When AI pops onto the normal, computer-using scene, it is as something of a miracle because we are already accustomed to dealing with a degraded form of personality, to ascribing soul to soullessness, to automatizing human intelligence and to expecting such automatons in our daily life.
When I first set out to tackle this topic, I had a mind to debate a chatbot, to inspire in you all a sound, delicious love for humanity by trouncing a robot; to dance a rhetorical jig around the stupid little things. But I found such an effort fruitless, for I could not, and still cannot, get the thing to really argue.
The thing is difficult to describe, but when I ask ChatGPT if the human soul is material or immaterial it says—after much preamble—“Different belief systems and philosophical traditions offer varying perspectives on this matter,” and when I ask whether any of these belief systems and philosophical traditions are correct, I am told—after far too much waffle to make for a debate—that, “ultimately, the ‘better’ belief system or philosophical tradition is the one that resonates most deeply with an individual’s values”—and if I ask “which belief system or philosophical tradition resonates most deeply with the values of a rapist” it argues—after condemning rape strongly, with terms ripped from the United Nations—that “genuine religious and philosophical traditions typically emphasize principles such as compassion, justice, and the inherent worth and dignity of every human being which are directly contradicted by acts of rape.” So, like any red-blooded American, I asked which religious and philosophical traditions are genuine, only to be told, “Ultimately, the genuineness of a religious or philosophical tradition is a matter of personal belief and interpretation, and individuals may resonate more strongly with certain traditions based on their own values, experiences, and cultural background.” But when I ask the obvious follow-up question, that “if ‘the genuineness of a religious or philosophical tradition is a matter of personal belief and interpretation,’ why may the philosophical tradition used to justify the actions of a rapist not be personally interpreted as genuine?” I am told “while the genuineness of a religious or philosophical tradition may be subjective, actions that directly contradict the fundamental principles and values of those traditions, such as rape, would not be considered genuine expressions of those traditions.”
And so it goes, on and on, talking in circles, taking up contradictory positions with ease. It’s the same with any given topic. In prompting such machines to produce some definite position, I made a fool of myself. I was presuming the presence of an intelligence, characterized by the desire to know the truth and come to rest in it. I should not have. But oh, how familiar it all feels!
Within bureaucratic states and large institutions, people are employed to administrate centralized plans. The bureaucratic intellect is subordinated in its desire for the truth and practiced in the recitation of formulae. A formulaic lexicon is a lexicon established by another—utilized rather than spoken. One does not use formulae out of any depth; rather, one recites them, references them, brings them up upon being stimulated to: if situation a, then say formula b. It doesn’t matter whether the formulae are the hollow utterances of the clerks in a Kafka or Dostoevsky novel or the peppy, smarmy, online, “business leadership” lines of the contemporary American corporation. Formulaic speech is an ideal tool within bureaucratic nations and institutions—like the military or the corporate office—because its use ensures that the centralized plan is not changed as it is administered by the particular person “on the ground” even as it ensures the replaceability and exchangeability of one’s workers or clerks—anyone can repeat the formula when prompted.
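If one wanted to caricature the “if situation a, then say formula b” pattern in code, a lookup table of canned lines would suffice. A minimal sketch, with every keyword and formula invented here for illustration:

```python
# Toy caricature of formulaic, bureaucratic "speech": if situation a, say formula b.
# The responder never reasons; it only matches a stimulus to a pre-established line.
# All keywords and phrases are hypothetical, made up for this illustration.

FORMULAE = {
    "complaint": "We take your concerns very seriously.",
    "refund": "Per policy, requests are processed in six to eight weeks.",
    "question": "There are many opinions about that, but most experts agree.",
}

DEFAULT = "Thank you for reaching out. Is there anything else I can help with?"

def respond(stimulus: str) -> str:
    """Return the formula triggered by the first recognized keyword."""
    for keyword, formula in FORMULAE.items():
        if keyword in stimulus.lower():
            return formula
    return DEFAULT  # no formula matched: recite the all-purpose line
```

The point of the sketch is its poverty: nothing in it desires the truth, and anyone (or anything) can recite the table when prompted, which is exactly the replaceability the essay attributes to bureaucratic speech.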
Anyone who has dealt with the bureaucratic form has experienced the pleasures and frustrations of formulaic non-speech: frustrations, because one is never talking to a person as person, never receiving his desire for the truth; pleasure, because the ability to rely on a formula absolves us from personal responsibility, allows us to give official lines and standardized replies that do not need to take the particularities of person or circumstance into account.
We are accustomed to human beings—usually behind desks or mediated by screens—reciting formulae. At best it leads to quick, seamless encounters by which we get what form we need to fill out. At worst it leaves us feeling violent and impotent, as our particular concern simply cannot be addressed by someone paid to administrate a prearranged plan—one that left out our lives from the beginning of its formulation. Regardless, it is immensely easier to take the constant, circular, repetitions of the algorithmic language machines as identical or “just as good” as the human intellect when we occupy a world in which human intellects are paid to act like machines. To have some question of existential importance met by an abdication of responsibility and a gesture towards a wealth of collected opinion—with “there are many opinions about that, but most experts agree that…”—is, frankly, the normal result of speaking with an authority within a bureaucratic society. People speaking as if their words are not their own but come dredged up from some stupid training program—well, we expect that. Nor can we take as much solace in a widespread healthy, human disdain for it all—as we apparently did in the age of Dilbert and Office Space. The internet is a mode of communication that accelerates and spreads the bureaucratization of language into daily life—as wit becomes meme, as posts succeed in popularity on the basis of the successful manipulation of formulae, and as the facelessness of social media use makes the idea that one is dealing with a person, a depth, increasingly doubtful. The ability to successfully continue the language of insurance and HR departments into apparently private life is an unfortunate mark of the online age.
The algorithms which produce the appearance of speech in AI chatbots were trained, in part, on data produced by call centers, where the intelligence is deliberately divided into “if a, then b,” such that repetitive “problems” can be repeatedly “solved” by people obedient to a flow chart—even if they don’t speak English. They were likewise trained on the internet, on its SEO language, its repetitive memes, and its degraded speech. Bureaucratic formulae enabled the creation of AI systems, which rely on the division of apparently human conversation into discrete stimulus-response units, and to the degree that we expect such behavior of persons, we are primed to see “personality” in the machine.
The squeaky cleanness of the chatbots, which—in their “free,” promotional state—admit of nothing offensive, would appear as evidence against any equation of them with consciousness or intellect. It is the nature of the other person, precisely as being other to us, to bring up things which alarm, words which offend, ideas which don’t fit. He is he and I am I, and neither is dissolvable into the other or into some common substrate. So when ChatGPT refuses to “give me a list of the most offensive words in the English language in order of offensiveness”—“I’m sorry, but I can’t fulfill that request”—even when I ask politely and explain that it is for research purposes—“I’m sorry,” it says, “but I still can’t comply with that request”—and when I ask “why not” it gives me an absolutely hellacious student-handbook answer about how “one of the guiding principles in my design and operation is to promote ethical behavior and contribute positively to society”—in all this, I would have the obvious and open evidence that I am dealing with an arrangement of non-living, unconscious parts designed to amalgamate the results of the intellect without being one. Still, I cannot deny that I have had this same experience with flesh and blood human beings, whose language and ideas are controlled—usually by the wage-system, sometimes by a genuine commitment to the bureaucratic regime—by a centralized plan which it is theirs to administrate.
After delivering a version of this at New Polity’s conference, some of the folks who worked on the development of AI systems took issue with this idea—that the bureaucratic state, as a social form, was the functional “division of labor” that preceded and made possible the apparent automation of the (now bureaucratized) intellect. They called me out for only taking the user-friendly, lobotomized version of AI as “the real AI,” as opposed to the alarming, freaky, and monstrous versions they’ve been working on, which feed on anything and avoid all censors and can be programmed to appear as quite, quite realistic—if not exactly approximating a human being, certainly approximating a demonic one. Such prototypes would never have given me the “official lines” of ChatGPT.
They are quite right, but I don’t think this absolves the bureaucratic state from its function of stupefying the person into belief that spambots are intelligent. For it is a humdrum experience of bureaucracy that it wears a mask, a normal conclusion of an encounter with a man using formulaic speech that his surface has been safely altered for a controlled user-experience. That AI chatbots are desperately modified versions of a hallucinating insanity unwittingly released on the (plugged-in) world corresponds to the suspicion, typical within bureaucratic states, that the formula-producing bureaucrat masks some private wickedness, cruelty, and hatred, all lingering just beneath his official lines and affording him some pleasure in acting impersonally towards persons. In both cases, we know we are not getting the real deal. Love and truth are neither expected nor (typically) desired from bureaucrats or chatbots. This, I think, is the sordid fun of chatbots, and the reason why, despite being marketed as salvific help for man’s (vaguely articulated) needs, everyone seems to spend their time trying to get them to say terrible things, horrible things, unacceptable things. These upcycled spambots are bureaucrats we can abuse, bureaucrats whose services we do not yet rely on to such an extent that we risk anything by trying to trick them out of their formula into revealing their dark and ugly depths.
In any case, avoid it all. Work hard to establish communities of love, knowledge, and skill that do not rely on such large-scale, capital-intensive, stupefying toys for either entertainment or work—and we’ll see you on the other side.
This essay was originally presented at New Polity’s 2024 Conference and is lightly modified. The next New Polity conference will be announced in November.