As columnist Peggy Noonan puts it, with Artificial Intelligence, “We are playing with the hottest thing since the discovery of fire.” To be scrupulously fair, actual artificial intelligence is likely a long way off; economist Paul Krugman notes that “it” really takes the form now of “large language models,” however much their output looks like human expression. But the questions and fears around the topic are profound, and reasonable to hold.
What if, as Noonan asks, citing Henry Kissinger, we can’t control the technology? What if AI develops a hint of malice? We can’t even conceive, as Ross Douthat notes, what shape the dangers might take, because we don’t yet know what the thing is. Most fears boil down to the idea that AI might become, or could replace, people. Those fears are likely heightened by descriptions like Eric Schmidt’s of “’artificial general intelligence,’ or AGI,” which, as opposed to AI’s orientation to “solve a discrete problem, … should be able to perform any mental task a human can and more.”
Fears for the future are compounded by the question of who, today, is likely to be tasked with controlling AI to keep it safe, or ethical. As Noonan says, the ones “erecting the moral and ethical guardrails for AI … are the ones creating AI.” And even if those people mean well, their incentives are to grow their businesses, a goal at best tangential to the general population’s concerns. Said one tech figure, “You want to be there first and you want to be setting the norms … that’s part of the reason why speed is a moral and ethical thing here.” So the guardrails are narrowly constructed: “OpenAI and Microsoft … created a joint safety board, which includes [OpenAI leader Sam] Altman and Microsoft Chief Technology Officer Kevin Scott, (and) has the power to roll back Microsoft and OpenAI product releases if they are deemed too dangerous.” Such measures check off the corporate developers’ ethics box, but hardly answer the essence of the fears.
The inadequacy of controls on AI and its development, though, goes beyond the scope or power of regulatory bodies. We lack systematic thinking about AI and the ethics around its development. Right now the public discourse is a smorgasbord. We have the personal concerns of a Sam Altman. We have the considered general thought of a Kissinger. We have various philosophers and social scientists analyzing the issue through the lenses of their particular fields and favorite schools of thought. We have regulators trying to extend the disciplines they know to address this new set of concerns. And, as always, we have politicians lining up arguments for their partisan cases.
Diverse viewpoints animating a vigorous exchange should generate a reasonable set of agreed-upon axes of exploration, debate, and action. But even if we leave aside the questions of business and technical regulation that any new industry must cope with, current public discourse ignores a whole aspect of the problem. The fears essentially boil down to whether AI could “become human,” and the harm that AI entities could inflict if they do. But why exactly do we fear AI becoming human? Do we know? To address fears of a “human AI,” we should first ask: “What makes humans human?”
There is no consensus answer, and it may be that AI could even, in an indirect way, help people think more clearly about what makes us us. As computers learn to outdo people at various tasks, we may be able to see more clearly just which human capacities an AI model cannot duplicate. Any such assessment will require judgment calls, though, and any such process will likely devolve quickly into argument, or be doomed to that from the start. The spectacle will not make human nature look particularly valuable.
In a sense, considered discourse about technology’s ethical and social effects has been outpaced by the development of technology itself. One major critique was contained in the Marxist and Socialist antagonism to capital, and that debate was politicized early and devolved almost literally into global combat. So even today, an attempt to address the question of human essence amidst our industrialized world will lead many to check for the whiff of a “philosophical rat,” or even a politicized one.
Furthermore, there is already a view, widely and tacitly assumed, that humans are, at bottom, mechanisms optimizing our prospects for material survival and well-being, driven by our DNA. Everything about us, including feelings of solidarity, love, and liberty, is merely a tool to enhance those prospects, to help us confront the circumstances around us. This, the Realist view, also animates a large portion of U.S. policy thought. The view of indelible competition is reflected in Eric Schmidt’s advocacy of rapid U.S. development of AI to outcompete China.
This call misses the point that China already views humans as mechanistic beings, best governed by its Communistic, neo-Confucian collective doctrines. People, those doctrines’ advocates assume without even saying so, need wise rulers to interpret their best interests. American discourse needs to respond to our own national conception, which rests on the premise of equally endowed and unalienable personal rights that governments exist to serve. If we pursue AI for Realist purposes, we risk undermining our own precepts, both in the goals of our pursuit and in the ways we might adopt AI.
An ethical discourse needs more than a winning argument. As Douthat points out, we do not even know the shape of our direst fears. What is needed is a consensus on those axes of inquiry: exploration, research and thought, experimentation and debate. In a sense, we need a counterpart to the whole mental infrastructure of science, research, development, and implementation that underpins technology and industry. And any development of such a mental infrastructure faces the additional problem of comporting with America’s founding ethos. Science can make findings by materially objective measures, whereas any finding by a discourse of ethics will involve judgment and subjectivity.
How to instigate a discourse that accounts for these issues is the stuff of a very extended discussion in itself. The one thing a blog post can offer is the kernel of a basic premise. For Americans, the whole structure of any such discourse starts with unalienable individual rights, which imply the mental and moral wholeness of each individual person. All possible channels of conceptual exploration and eventual regulation of AI, or any other complex new process, need to respect that integrity of the human person, which might be called the human soul.
We may, ultimately, find that AI actually enhances our humanity, or we may find ourselves committed to its suppression; anything is possible at this stage, so far as we can see. The most important point is that humans now need to approach technology in a deliberate and considered fashion, lest we lose all control of our own inventions by losing control of ourselves. For Americans, a considered understanding of our nation’s very definition gives us our starting point.