Today's AI policy debates echo a profound warning delivered fifty years ago today. When F.A. Hayek accepted his Nobel Prize in Economics, he offered a critique of his own field that, for me, still resonates. His speech, "The Pretence of Knowledge," challenged economics' attempts to mimic the precision of the physical sciences—a mistake that today's AI policy advocates risk repeating.
I found Hayek's insights so compelling that, in 2016, I wrote a simplified version of his speech to help me understand it better. As AI policy discussions have evolved, the parallels to Hayek's warnings have become increasingly clear. Just as economists of his era sought inappropriate control over complex systems, today's AI policy advocates often pursue a misguided quest for guaranteed safety and control over AI tools such as large language models.
Hayek's critique centered on economics' “scientistic” approach—its attempt to “imitate as closely as possible the procedures of the brilliantly successful physical sciences.” This misguided imitation, he argued, led to “outright error” and “some of the gravest errors of recent economic policy.” He identified three fundamental differences between economics and physics, each with parallels to current AI debates.
First, economists focused on what they could measure rather than what was truly important. Are today’s AI safety metrics measuring what matters? Second, the information needed for accurate economic predictions was impossible to gather completely. This knowledge problem constrains what raw intelligence alone can accomplish, which suggests that many AI doomer scenarios are greatly overblown.
But I mostly want to focus on his third critique. Hayek observed that economics deals with complex systems where the relationships between components are essential and irreducible. Such systems can't be reduced to simple averages or models, and yet economists were treating them as merely complicated systems.
As I paraphrased Hayek:
“[T]he variables that economists study cannot be summarized or averaged. Physicists, for example, don't need to measure the velocity and acceleration of every atom in a swinging pendulum. They can instead observe the average behavior of the atoms as a whole, thereby reducing a large number of independent variables into a very few variables representing the essence of the swinging pendulum. In contrast, economics and other social sciences deal with complex structures that we cannot reduce in this way. Economists study people and other complex elements where the essential nature includes the relationships between the individual elements, and this cannot be reduced without losing critical information.”
This insight applies directly to artificial intelligence. Machine learning models such as large language models are not deterministic in the way traditional software is. Instead, they are statistical function approximators that rely on probabilistic reasoning and exhibit emergent behaviors arising from the interactions of billions of parameters. These relationships cannot be reduced to simple, controllable, interpretable variables without losing their essential nature. When AI researchers and policy advocates call for “mechanistic interpretability,” safety guarantees, or precise control over AI models, I fear they're trying to cram these complex systems back into a deterministic box.
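To make the contrast concrete, here is a minimal sketch in Python (with made-up token scores, purely for illustration) of why a language model's output differs from ordinary deterministic code: the decoding step draws from a probability distribution, so the same input can yield different outputs from run to run.

    import numpy as np

    def deterministic_rule(x):
        # Traditional software: the same input always produces the same output.
        return 2 * x

    def sample_next_token(logits, temperature=0.8, rng=None):
        # A language-model-style decoding step: turn raw scores into a
        # probability distribution and draw one token index from it.
        # Identical inputs can yield different outputs on different runs.
        rng = rng if rng is not None else np.random.default_rng()
        scaled = np.asarray(logits, dtype=float) / temperature
        probs = np.exp(scaled - scaled.max())
        probs = probs / probs.sum()
        return int(rng.choice(len(probs), p=probs))

    print(deterministic_rule(21), deterministic_rule(21))      # always 42 42
    toy_logits = [2.0, 1.5, 0.3]  # made-up scores for three candidate tokens
    print([sample_next_token(toy_logits) for _ in range(5)])   # varies between runs

The temperature parameter controls only how spread out that distribution is; short of collapsing it to a pure argmax, the output remains a draw from a distribution rather than a fixed value, and this sketch says nothing about the deeper interactions among parameters that produce the scores in the first place.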
Computer scientists might be particularly susceptible to this temptation. Most software development involves deterministic, interpretable code where eliminating unpredictability is a primary goal. Machine learning models, with their probabilistic and non-deterministic nature, can seem unsettlingly foreign. However, such complex systems are the norm rather than the exception in nature and society—from ecological systems to human societies to biological organisms.
The pressure to claim control over AI systems stems from the same impulse Hayek identified in economics. As he noted, “The motivation for scientism is easily understood... because the natural sciences have been so incredibly successful, people often uncritically accept arguments that look scientific.” Today, we see this pattern in proposals that demand mathematical guarantees of AI safety or strict technological controls over model behavior. These approaches reflect what we might call a “control frame” rather than a “cultivation frame.”
This distinction between control and cultivation is crucial. As Hayek noted, a sculptor approaches their work with complete control, imposing their vision on passive material. A gardener, by contrast, works with natural processes—creating conditions for growth while respecting the inherent nature of their plants. AI development and governance require this gardener's mindset: understanding and nurturing beneficial developments while remaining humble about our ability to control complex systems.
Current policy proposals often reflect the sculptor's mindset. For instance, calls for mandatory AI safety standards, requirements for model interpretability, or demands for guarantees against misuse all assume a level of control that may be fundamentally impossible with complex systems. These approaches risk hampering innovation without achieving their intended safety goals—much like heavy-handed economic controls can stifle growth while failing to prevent market problems.
As Hayek warned about economics, attempts to exert rigid control over complex systems often eliminate the very benefits those systems provide: “Even if such power is not in itself bad, its exercise is likely to impede the functioning of those spontaneous ordering forces by which, without understanding them, man is in fact so largely assisted in the pursuit of his aims.”
The path forward lies in embracing a cultivation approach to AI development and governance. This means creating environments and incentives that encourage beneficial AI development while remaining humble about our ability to predictably control every aspect of these systems without sacrificing their capabilities. It means developing market-incentivized, robust testing frameworks and safety practices while acknowledging that absolute guarantees may be impossible. Most importantly, it means recognizing that the most effective solutions may emerge from the distributed efforts of many actors rather than from centralized control.
Hayek's wisdom remains relevant. As I paraphrased it:
“If we truly wish to improve society, we must be humble and realize the bounds of what is possible with social science. Rather than attempting to shape society directly like a sculptor shapes a statue, we must seek instead to understand and to create the right environment for progress, like a gardener in a garden. Overconfidence in the use of science to control society will make a man a tyrant, and will lead to the destruction of a civilization which no brain has designed, but which has instead grown from the free efforts of millions of individuals.”
In AI policy, as in economics, such humility might be our most important guide.
I largely agree with the point about technocratic approaches, the general Hayekian frame, and the way you argue for a balanced approach at the conclusion, but I think there are a few significant errors / incorrect assumptions in the middle of this piece.
One is that there seems to be a category error in treating the current direction and course of AI development as something neutral or organic that simply requires 'cultivation', as if no element of control were involved: "A gardener ... works with natural processes—creating conditions for growth while respecting the inherent nature of their plants".
Any "inherent nature" of AI systems today is a result of design choices that were influenced by market factors and forces that involved an element of control; just because the control wasn't coming directly from the government, a governance-first approach, or from people with a regulatory point of view, does not mean that there isn't some kind of control.
Arguably, the market and the AI labs driving development adopted a "scientistic" approach of their own, one that emphasized AI as black-box, stochastic, neural-reinforcement systems, versus other paradigms and approaches that may have different strengths and weaknesses, or fewer inherent externalities.
Is it really accurate to claim that AI development as we know it today is merely natural and not "scientistic" at all, simply because the initial innovation of the past decade or more happened entirely outside the auspices of government, or because the paradigms and design architectures that have captured most of the market are based on stochastic approaches that might reflect some of the complexity of the natural world?
Haven't market forces "exerted rigid control" in opting to prioritize stochastic neural language models, and in practically redefining AI to mean only that?!
You say that "current policy proposals often reflect the sculptor's mindset" while seemingly ignoring that current AI designs and implementations are also "sculpted" and in many ways reflect the mindset, values, and ideologies of their designers.
Maybe the truly Hayekian move would be to question whether the course and outcomes of the AI development of the past decade or more are actually the best or most innovative we could have had, or at least whether that development deserves the market attention it has received and the right to suck up all the oxygen in the room.
It is also fair to question the degree to which the "market" in which today's AI development has progressed is actually free from control, whether that control comes from governments distorting incentives via industrial policy or from the inherent ideologies and motivated reasoning of major tech companies and AI research labs.
The word "freeorder", defined as "quest serving balance among designed and spontaneous orders", may be useful. See chapter 2, "Cosmos and Taxis" of Hayek's "Law, Legislation and Liberty".
A discussion with Professor A.I. Hayek (AI emulation) about freeorder:
https://explorersfoundation.org/glyphery/642.html
http://explorersfoundation.org/freeorder.html
http://explorersfoundation.org/threads.html