The Hudson Institute's Walter Russell Mead, in a March 27 piece, brilliantly analyzes and skewers our current institutional elite, what he terms a “wonkocracy,” running a
… sleek, self-interested reign of rule-following meme processors for whom blue chip academic credentials are the modern equivalent of patents of nobility, conferring a legitimate right to rule over the unwashed masses …
Though he clearly does not like these people, he gives his customarily rigorous analysis of the wonkocracy, and foresees an “accelerating disintegration” of this order, brought on by
… the inexorable pace of technological advance, the gravitational pull of economic advantage, the stark requirements of national security, and the molten magma of populist rage …
This blog concurs with Mead’s highly nuanced analysis and recommends a full read. And one may suspect that few outside the inner circles and bureaucratic org charts of today’s institutional order – that wonkocracy – would object too much to his tone.
Two questions might hit some of us: what would follow, and what would, or should, we want to succeed this institutional order?
One possible evolution might involve artificial intelligence, one of the technological advances Mead cites, as it performs the meme-processing for which our universities have trained and credentialed the wonks. As the large language models (LLMs), the bots, do that meme-processing work, the wonks’ reason for being could disappear. The bots would need only a small number of handlers to keep them functioning and perhaps somehow “trained” in line with policies passed down from some to-be-determined governance mechanisms.
Any “bot-ocracy,” whatever its blend of humans and machines, may well not arise. But the image offers one possibility to wish for, thinking in the same hypothetical spirit. As business professor Ethan Mollick reportedly recommends in his book “Co-Intelligence,” bot users/handlers/managers should make themselves the “human in the loop” of algorithms and machine learning processes as the bots run them. If possible as we operate the LLMs, we should insert fundamental human moral premises into the architecture. Perhaps there will be other points in the loops where human qualities will have entrée.
The next questions become “what kind of people should the operators be?” and “what moral premises should be infused into the bots?” As this blogsite has conjectured, perhaps we can “train” the LLMs to take the creed of the Declaration of Independence as a ground-zero orientation point for their processes, baking our founding creed in as a basic programming / processing “rule.” For American institutions, and perhaps for the good of our species, an ethos of unalienable rights, equally and inherently endowed in all humans, must take primacy, and government and governance must be dedicated to securing those rights.
A technology-based order reflects only one speculative post-wonkocracy scenario. Others may be driven by populism, economic incentives, or national security concerns. As the current order declines, new possibilities will likely run well beyond any current conceptions, and could be Edenic or disastrous. For America, whatever institutions or disciplines succeed today’s order, the question of what kind of people should lead, or what the operators should hold in sanctity, should draw the same answer.
Whatever the outcomes in institutions and life, whatever organizing ideas or ideologies take hold, if we find ways to keep America’s founding premises as our beacon, we can adapt to almost any “-ocracy.” If we cannot find a way to keep fidelity to our founding, no mechanism and no order will save us.