As early Windows PCs started becoming fixtures in American homes around the mid 90s, forerunners of Microsoft’s Outlook came with them on a disk, as a business suite of programs. This disk offered a magical integration of contact lists, schedules and calendars, maps and phone books, and, here and there, innovations like journaling the operator’s activities, clipping work items together, and such. Your blogger’s initial reaction was that these functions imposed programmers’ uninformed ideas of working processes on the actual work that people actually did in their actual jobs. Learning the input formats did not feel natural, and the outputs often did not fit the needs of the actual business.
That said, over time, in many aspects, Office adapted its interfaces and some of its processes (it didn’t take up the journaling function), while the world of work grew in conformity with these programs. To some extent, “things work” today the way the software lays it out. Machinery has fixed another set of life parameters, these more mental than the old industrial architecture of 9-to-5 workdays, traffic laws, airline schedules and tax forms.
As Artificial Intelligence grows in its information base and its algorithmic capacities, humanity is looking at a next level of life parameters, in even less tangible forms than the architecture of what we call “productivity apps.” AI may well re-shape life as drastically and as profoundly as the first Industrial Revolution did, as critic Yascha Mounk points out. How might it do so? We actually have extremely little to go on. So it is hard to figure out how we feel about the possibilities, why we feel that way, and what we might want from any change – or seek in repressing change.
There will be sentiment that resists change. Today’s app-based norms of office work make this blogger miss old routines – spontaneous phone calls to others in one’s network, flashes of insight that “Joe might like this deal,” and the like. Ideologues, particularly among the anti-capitalists and potentially some techno-utopians, will try to impose their terms on any contemplation of what may come. And of course politics, most likely starting from the partisan “free enterprise vs corporate greed” issue-axis, will play on people’s fears, resentments, and interests.
But the effects of AI development could reach far beyond anything the worker, the politico, or the ideologue can even imagine. We see signs that AI is getting beyond our control, and may already be a kind of autonomous mind. What will guide AI bots if this is the case? All the human folly and inspiration reflected in the data they have been trained on? Fear for survival? Does that spark an evolution akin to what we humans have gone through? Will that, or any other process, create benign beings? Are we benign beings? Will AI evolve into creative beings? Are we creative beings?
In the near-infinite potential depth and breadth of whatever comes next, we can only face the possibilities with any coherence if we adopt, or at least sketch, some image of what we most fundamentally value. Do we insist that we humans are creatures of free will, of individual agency not pre-determined by mechanistic external conditions? If so, what discretions must be denied to AI bots, and how do we go about denying them? Who, after all, programs these things in the first place, and what is their intent as they do so? Could we not imagine that some young coders and bot-trainers are trying to play God? After all, they seem to have non-monetary ambitions, turning down “billion dollar jobs.” And yet is there any just way to ensure against ill intent in these pursuits?
Conversely, taking the viewpoint of the traditional Realist, what is the point of any of these inventions or the concerns they spark? If humans and other organisms are merely mechanisms for survival and self-replication, if all our traditions and codes and institutions are tools by which our DNA best perpetuates its legacies, what does AI do for us? Do we have the capacity to track its capacities and development, to see that it serves that end? Particularly in the possibility that AI becomes independent and adopts its own purposes, can we know that it will fit with our needs? And if not, what does it matter if AI takes over all existence and cuts us out?
Such questions feel distant from life lived today, but we should be wary of what we don’t know, and we don’t know what is going on inside the AI models. We also need to figure out how to judge where we fear too much. Could AI actually open opportunities to make good use of our free will? Does it enable humans in our agency, in a good way? Can we insert design elements that would point models toward our wishes? And yet, by the way, what will motivate the designers? If AI can work all sorts of miracles, who wants to move it, and what’s in it for them? Will this new discovery be subjugated to the oldest of human lusts?
At bottom, humans right now need to concur on our deepest, most fundamental premises, from which we can determine the “good,” “bad,” or “indifferent” among coming developments. Our premises will, of epistemological necessity, rest on some article of faith, which we hope we hold with at least decent reason and benign sentiment.
This blog points out America’s founding identity, in our holding of the Truths of personal rights and government purposed to secure them. This holding implies a stake in protecting, nurturing, and promoting conditions for all persons to live best by their own lights. These premises yield some direction to the questions touched on here. Translating their import to choices and actions today will be extremely complex and emotionally fraught; politics will be involved. But affirming at least a basic bedrock premise is the only way to have a coherent approach to this new development. If nothing we know is safe, we need to know what we want most and most fundamentally. For Americans, our founding creed offers a good start.