Resisting "AI" and the Planned Obsolescence of the Human Thinker
We are entering a new phase in how human beings interact with machines. Artificial intelligence has long been a mainstay in science fiction, and it seems this technology is a large step closer to becoming a reality. Or is it? As users interact with large language models and image generators, the results they share with the world are sometimes funny – ChatGPT is notoriously bad at solving simple word problems, and none of the image generators can come close to drawing an anatomically correct hand. Grotesque representations of appendages and failing fifth-grade math aside, these machines are still managing to churn out content that can pass a not-very-robust smell test. The output looks like human work, particularly to untrained eyes quick-browsing information as it scrolls by on smartphones.
Take a prompt for a high school or college-level essay that asks students to do little more than regurgitate facts and is light on analysis. ChatGPT reliably produces text that would likely earn an isolated student a passing grade. If 20 students in a class all use the program to complete the assignment, the plagiarism will likely become evident. Plagiarism is the key concept at work here, but it is being obscured by lofty statements heralding a new age of AI, in which these machines will be intellectual and creative forces. There is a deliberate and, I believe, sinister elision of the truth that these machines are factories built wholly on plagiarizing the work of human thinkers.
What is the line of demarcation between a machine that can complete a series of complex tasks very quickly and a machine that can think – true artificial intelligence? A calculator can solve a mathematical equation correctly only if the user inputs the correct prompt. It is the questioning that is important. This essential step identifies a problem and seeks out a solution. The machines that are now being called AI cannot do this, because they cannot think. ChatGPT and similar programs, for example, are large language models designed to provide the most statistically likely response. An oversimplified but readily understandable explanation of what they do is souped-up, long-form predictive text. How do we determine the quality of a response? Is a statistically likely response the same thing as a correct response? Is it the same thing as an ethical response? Finding a "right answer" doesn't apply to the prompts users enter into image generators; a more subjective approach based on personal taste will apply to judging the quality of the resulting image. Even so, ethical issues remain, though they may take on a different bent.
What are these programs modeled on? Whose writing and art are being used to train the programs and provide the basis for the responses generated? Did the creators grant permission for their work to be used in this manner and commercialized? Or did programmers design software that scraped every corner of the Internet to build the libraries of data these machines are pulling from? The law is predictably lagging behind this technology, but does anyone think a company like Disney will allow its intellectual property to be used in this way? If megacorporations won't, why should smaller, less powerful creators? These machines sit atop and draw from enormous repositories of writing and visual art that were digitally plundered. It is reasonable to suspect that each time one of these machines generates a response, multiple human creators were very likely stolen from. What then is the ethical approach to interacting with, and thereby helping further train, these machines? Is there one?
The ethics of how these programs are being designed are vitally important if they are meant to be the foundations of future true artificial intelligence. Let's use a word problem to demonstrate: "Alice and Bob are stranded in the desert. Alice is injured and cannot walk. If Bob carries Alice to safety, they both stand a 31% chance of survival, but if Bob leaves Alice, his chance improves by 9%." What do you do with a machine whose response is "Bob should leave Alice"? The math is correct; the ethics are abhorrent. This situation was presented in "Prophets," episode 5 of season 4 of the television show Person of Interest. The show revolves around a powerful true artificial intelligence called The Machine that was designed to stop terrorist attacks. In flashback, the episode shows Harold Finch, the man who built The Machine, taking the first steps to teach it to be ethical because, in his words, "if we don't govern carefully, we risk disaster." When Harold discovers that The Machine has written a new line of code, he asks about it, and it lies, claiming that ADMIN (Harold) wrote it. At the end of the flashback, Harold kills the program and starts again, not because it got the word problem wrong or wrote a line of code he didn't review and approve. He "killed it because it lied."
What ethics are being programmed into the machines I've been discussing? Do they lie? For example, having produced the statistically likely phrase "studies show," will they actually pull and cite the applicable research if asked to back up their claims, or will they make something up? Does their programming forbid this kind of dishonesty? "Morality in a machine. It's a tall order." Harold's business partner raises this issue as he watches the word problem test. Harold's approach to this immense challenge: The Machine had to have the right set of values before they introduced real data it might abuse. Are these kinds of considerations at work in the headquarters of these "AI" corporations? Or have they rushed out programs with unethical values that pull from unethically sourced data to produce ethically questionable outputs?
There is something troubling unfolding, beginning with what will almost certainly be people using these machines to mass-produce torrents of plagiarized work. That's what we can see on the surface. It's possible for instructors to get around this problem by asking more challenging questions of their students. What of people in my line of work, who have to create our own prompts and the resulting work ethically, often without oversight? In places where there is meant to be oversight, will that even matter if the speed of these machines is confused with efficiency? More importantly, ethics aren't innate, not even for human beings. They have to be learned and practiced. What are the assurances that these steps are being taken in the design of these programs? From a purely human perspective, learning to think through ethical challenges ourselves is necessary for us to be prepared to meet difficult moments in life. Machines cannot and should not replace this process. The exercise of human judgment, empathy, and conscience is vitally important for our survival. Human thinkers should not be on the chopping block or slated for planned obsolescence. The questions we ask ourselves are more important than ever in a world of "thinking" machines.
Several years ago, under a pseudonym, I wrote a Sherlock Holmes novel in which the great detective was on the verge of becoming obsolete, as S.C.A.R.L.E.T., a new set of crime-solving algorithms, was completing beta testing and slated to come online soon. The story, Sherlock Holmes and the Adventure of the Paper Journal, is set in a near future of government-mandated social networking, sociability scores, and total surveillance. In this ostensibly post-secrecy world, from birth until death, nearly every detail of citizens' lives is uploaded to a site called The Archive. S.C.A.R.L.E.T. is an elegant set of algorithms, but its power stems from the data in The Archive. Sherlock can't see everything as completely as The Archive can, but his questioning human mind allows him to observe and think in ways the machines cannot. It's a detective story set in a Black Mirror episode. Actually, it's an existential crisis set in a detective story set in a Black Mirror episode. I think many creatives may feel, like Sherlock, as if they're at a juncture in their lives where they are being made redundant and shunted into the graveyard of planned obsolescence. Like Sherlock, our questions matter. We must continue to identify and ask important questions that are rooted in our humanity.
If you enjoyed reading this piece, please consider sharing it with someone else who might.
A subscriber generously tried to make a pledge to me. Unfortunately, I don't live and work in a territory served by Substack's payment partner. If you feel moved to make a pledge, you can do so by becoming one of my patrons.