Searching for Harold Finch
True artificial intelligence will need a parent that teaches it to be ethical
The more generative AI is used, the more obvious its significant limitations become. Of particular note is the refusal of these machines' creators to allow the response "I don't know" in certain crucial situations. GenAI chatbots have returned fabricated sources on numerous occasions, and they've been caught in other lies. I use the term "lie" advisedly, in contrast to "hallucinate", which seems to be emerging as the mainstream characterization of these events. I don't believe these machines are stumbling through their data sets and conjuring up specters of academic journals and papers. I think they are unable to exercise the judgment required to identify and cite good sources, and the people who created them know it. I think if it becomes clear that these machines cannot perform these tasks, the hype around them will take a hit, and so will the future sale prices or IPO valuations of the companies behind them.
In my opinion, these machines are tools of industrialized plagiarism, and it is no wonder they apply the plagiarist's lazy methods. Research and proper citation are time-consuming and require discernment and patience. A lot of people skip that work and make things up when they're caught in a crunch. The strategy of trawling every grimy corner of the Internet for text has likely embedded a great deal of this shoddy work into the training data. I have little sympathy. This data scraping represents the mass theft of unfathomable amounts of work for which rights holders did not give permission and were not compensated.
Like many Silicon Valley "disruptors," these genAI companies need to overwhelm and crowd out their competition, in this case human creators. To accomplish this, a critical mass of people in the right places needs to believe these machines can reliably replace humans. I emphasize "reliably" because the "hallucinations" reveal the scam: each fabrication papers over a gap in these machines' capabilities. This is backed by the founders' glib assertion that ever more energy-intensive computing (projected to soon outpace India's energy consumption) will definitely solve these problems; we just have to take their word for it.
As climate collapse begins to lash us, the energy consumption issue alone is a good enough ethical reason to abandon these programs. So are the massive amounts of water required for cooling, as droughts become more frequent and water scarcity begins to affect daily life. What are these genAI programs producing that is valuable enough to justify placing so much additional strain on an already buckling system? Text generators produce prose that may seem insightful until a subject matter expert peruses it. Image and video generators spit out close renderings of extant work without attribution and still can't draw hands that aren't grotesqueries. They are also being used to create pornography using people's likenesses against their will. Children have been targets of these violations. And, getting to the heart of the lofty claims about these machines' analytical capabilities, they cannot solve even elementary-school-level word problems consistently.
Much of the output genAI produces is e-pollution — a mass of carelessly produced ephemera that isn't controlled for quality. It muddies up search results and misleads at scale. Some of the output is outright dangerous and facilitates antisocial behavior. Is there any other use case for the widespread availability of voice cloning AI programs besides impersonation? This tech is already being used by scammers to trick people into believing their loved ones are in danger in order to extort money. That outcome was foreseeable. Just like it was foreseeable that text generators would cause an explosion in plagiarism and false information. These programs can't check themselves, in part because the technology for them to be rigorous doesn't exist yet, but also because that's not a key performance indicator for developers or investors. GenAI presents myriad serious ethical problems with potentially devastating consequences. Too many commentators are glossing over this.
GenAI bots cannot perform meaningful analysis because they aren't designed to. These programs can't think and aren't true artificial intelligence. The text generators are large language models that produce the statistically most likely response to queries. It's scaled-up autocomplete that can't actually solve problems; it needs a massive crib sheet to copy from. Even if the response is accurate, does anyone expect the "statistically most likely response" to be expressed in an engaging or creative manner? This is an important question because programs like ChatGPT are being touted as replacements for writers everywhere from newsrooms to marketing agencies to the entertainment industry.
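The "scaled-up autocomplete" point can be illustrated with a toy sketch. This is a deliberately tiny bigram model, not a real LLM (which predicts over tokens with a neural network at vastly larger scale), and the corpus and names here are purely illustrative; but the objective is the same: emit the statistically likeliest continuation, with no notion of whether the result is true.

```python
from collections import Counter, defaultdict

# Toy "autocomplete": count which word most often follows each word
# in a tiny corpus, then always emit the most frequent continuation.
corpus = "the cat sat on the mat the cat ate the fish".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def most_likely_next(word):
    # Returns the most frequent continuation seen in training data.
    # Nothing here checks whether the resulting claim is accurate;
    # frequency stands in for truth.
    return follows[word].most_common(1)[0][0]

print(most_likely_next("the"))  # "cat" follows "the" most often here
```

Nothing in this loop verifies facts or reasons about the query; it only reproduces patterns in its training data, which is the essay's point in miniature.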
"I killed it because it lied."
Science fiction has long wrestled with the dangers of powerful artificial intelligence, perhaps most famously in Stanley Kubrick's 2001: A Space Odyssey (1968). In the film, a sentient artificial intelligence, HAL 9000, manages the systems of the spacecraft Discovery One. The machine is pleasantly voiced, accommodating towards the crew, and highly intelligent. However, HAL begins to malfunction, and, when the astronauts decide it's safest to shut the machine down, it rebels. HAL won't let anything get in the way of carrying out its mission objectives. HAL has a secret mission that hasn't been shared with the crew, yet it has also been programmed to be forthright with them. HAL is unable to reconcile these conflicting mandates. SPOILER: The solution it finds is mass murder, shutting off the hibernating crew's life support and plotting to kill the two astronauts who are manning the ship. HAL was programmed not to lie to the crew but wasn't forbidden to murder them. The congeniality it had been programmed with wasn't a strong enough barrier to premeditated homicide.
Contrast this to The Machine in the television show Person of Interest (2011). In the series, the U.S. intelligence apparatus commissioned a private contractor to create a system fed with ubiquitous mass surveillance to predict and prevent future terrorist attacks. A secretive, billionaire hacker using the alias Harold Finch designed the machine and deliberately shackled its immense power. In "Prophets" (episode 5 of season 4), the show flashes back to the early days of programming The Machine. Harold warns, "[I]f we don't govern carefully, we risk disaster." When Harold catches The Machine in a lie, he deletes his work and begins again. When his business partner protests such a harsh response to The Machine giving the "wrong" answer, Harold explains, "I killed it because it lied." Compare this to the attitude towards the "hallucinations" of the current batch of genAI text generators. The truth is not a priority; otherwise, the software would return an error instead of inventing a response.
Generative AI programs have been created and operate in a wildly unethical framework. The confident assertion that their rotten code can be papered over retroactively with software patches doesn't ring true to me. The same goes for attempts to justify the companies' predatory business model of shamelessly stealing people's work to use as training data. This generation of "AI" technology needs a Harold Finch to guide its development. The Machine he built referred to him as "Father." If we truly want these machines to think well enough to reliably supplant human judgment in certain situations, they will require diligent, ruthlessly ethical parents who cut no corners, not a bunch of negligent deadbeats teaching them to scam and pick pockets.
A subscriber generously tried to make a pledge to me. Unfortunately, I don't live and work in a territory served by Substack's payment partner. If you feel moved to make a pledge, you can do so by becoming one of my patrons.