Last week, I testified before the Connecticut legislature’s General Law Committee about S.B. 2, a reintroduced bill seeking to regulate AI discrimination, among other things. One of the hearing’s themes was the ever-present dilemma: When is the right time to regulate new technology? General Law Committee co-chair Rep. Roland Lemar repeatedly analogized the proposed bill to seat belt laws and other automobile regulation, arguing that the time to act on AI is now.
In my testimony I praised the analogy and pointed to a specific regulatory lesson from automobile history: England’s infamous Red Flag Laws. In the 19th century, England was a hotbed of vehicle innovation. Steam-powered road coaches carried thousands of passengers a year as early as 1831.1 Yet the country implemented the Locomotive Act of 1865, which limited speeds to 2-4 mph and required that a person walk ahead of every vehicle waving a red warning flag. The law was a “crushing blow” that “effectively stifled the development of road transport in the British Isles.”2
More than 100 years after the Red Flag Laws, the U.S. still didn’t have the seat belt laws of Rep. Lemar’s analogy. It wasn’t until 1968 that the federal government required seat belts in all new cars, and the first state didn’t mandate their use until 1984.
Perhaps the U.S. adopted seat belt laws too late. But it’s crystal clear that England regulated too early, long before the full potential uses and risks of the technology were well understood. It regulated an entirely new paradigm of transportation based on the characteristics of a few early products.
U.S. states are racing to regulate AI. Connecticut’s bill joins more than 700 other state AI bills introduced since January 2025. Virginia has an AI bill on Governor Youngkin’s desk. (He should veto it!) Colorado passed an imposing AI “algorithmic discrimination” law that has created significant controversy and concern. (A final report from a task force charged with recommending improvements to the law came to the unsatisfying conclusion that there were major “issues with firm disagreement on approach and where creativity will be needed.” 🤔) Experts expect more than 1,000 AI bills to be introduced by the end of the 2025 state legislative sessions, and some of them will certainly become law.
This onslaught of AI laws comes too early. We will look back in regret and see the laws from this period as our version of the Red Flag Laws. That’s because legislators are regulating a new paradigm based on a few early products, without understanding how this technology is likely to be adopted.
AI is a Computing Paradigm, Not a Product
One reason states are so active is that there are new, very popular consumer products using AI. The leader, ChatGPT from OpenAI, was the fastest growing consumer app ever and now has 400 million weekly active users. There are dozens of competing apps. People therefore see AI as a product, and products are the kinds of things that legislators regulate.
But AI isn’t a product, it is a computing paradigm.
This current wave of AI is more accurately called “machine learning,” and it is best understood not as a standalone technology but as a change in how we program computers. Historically, programmers wrote detailed instructions in code that dictated how computers functioned. Now, with machine learning, programmers define goals, provide many examples of how to (and how not to) achieve those goals, and the computer itself generates the “code.”
This is not some mystical new technology. It is still programming computers.
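To make the distinction concrete, here is a deliberately simplified sketch. The toy spam filter, its function names, and its data are illustrative only, not drawn from any real system:

```python
# Traditional programming: a human writes the rule explicitly.
def is_spam_rule_based(message: str) -> bool:
    # The programmer decides exactly how the computer behaves.
    return "free money" in message.lower()

# Machine learning: the programmer supplies labeled examples, and a
# training procedure derives the decision rule from the data.
examples = [
    ("claim your free money now", True),
    ("free money inside!!!", True),
    ("lunch at noon?", False),
    ("quarterly report attached", False),
]

def train_keyword_model(data):
    # A toy "training" step: score each word by how often it appears in
    # spam versus non-spam examples. Real systems fit millions or billions
    # of parameters, but the shape of the process is the same.
    scores = {}
    for text, is_spam in data:
        for word in text.lower().split():
            scores[word] = scores.get(word, 0) + (1 if is_spam else -1)
    return scores

def is_spam_learned(message: str, scores: dict) -> bool:
    total = sum(scores.get(word, 0) for word in message.lower().split())
    return total > 0

model = train_keyword_model(examples)
print(is_spam_rule_based("free money for you"))      # True
print(is_spam_learned("free money for you", model))  # True
print(is_spam_learned("see you at lunch", model))    # False
```

The learned version is crude, but the shape of the process is the point: the programmer supplies goals and examples, and the decision rule emerges from the data rather than from hand-written instructions. It is still just code running on a computer.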
However, machine learning is an important development because it greatly expands the types of problems that we can use computers to solve. Computation itself is a general purpose technology; it can be, and has been, used to solve many different problems across a wide range of industries and applications. Machine learning has further expanded the purposes to which we can put computers. It enables us to code computers with a repository of knowledge that users can apply to a wide range of different problems. In fact, machine learning might be the most general purpose technology we’ve ever invented.
AI chatbots like ChatGPT, while popular, are a tiny slice of what is possible with machine learning. We’ve barely scratched the surface of the possible applications and uses for this new paradigm of programming.
AI Will Be Everywhere, Like Databases
Listening to Benedict Evans discuss AI with Ben Thompson on Stratechery reinforced my certainty that we’ve only scratched the surface of this technology, and tomorrow’s uses will be very different than today’s.
Evans points out that in 2023, conventional wisdom held that training frontier AI models would remain prohibitively expensive and available only to a select few companies. Such companies would provide services to access those unique models, which would be used by everyone for a wide range of purposes.
Increasingly, however, AI seems poised for a highly distributed future. Costs to train models have rapidly decreased, and now highly capable models can be developed for tens of millions of dollars—expensive, yet increasingly feasible. As prices fall further, AI models are becoming commodified. HuggingFace, a repository of open weight models, has more than 1.4 million different models with a wide range of capabilities. Specialized AI models can run on laptops or mobile devices.
Moreover, the way users interact with AI is already evolving. Many people today directly engage with AI model services like ChatGPT by writing “prompts” to get results. There are hundreds of “prompt engineering” tips and tricks. It’s a bit like the early days of personal computing, when you had to learn to code to make the computer do useful things. But, as happened for the personal computer, developers are building problem-specific products that use AI models in the background. These tools often have the ability to swap between models or to use multiple models. Thus, the user doesn’t directly interact with the AI models, but they still benefit from this new mode of computation.
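As a rough sketch of what that looks like in practice, an application might compose the prompt itself and treat the underlying model as a swappable component. The helper names and stub “models” below are hypothetical, not any particular vendor’s API:

```python
from typing import Callable

# Stub "models": in a real product these would wrap hosted or on-device
# AI models; here they are placeholders so the sketch is self-contained.
def local_model(prompt: str) -> str:
    return f"[draft from small on-device model for: {prompt[:40]}...]"

def hosted_model(prompt: str) -> str:
    return f"[draft from larger hosted model for: {prompt[:40]}...]"

BACKENDS: dict[str, Callable[[str], str]] = {
    "local": local_model,
    "hosted": hosted_model,
}

def summarize_support_ticket(ticket_text: str, backend: str = "local") -> str:
    # The application, not the end user, writes the prompt and picks the
    # model. Swapping backends changes nothing about how the user works.
    prompt = f"Summarize this customer ticket in two sentences:\n{ticket_text}"
    return BACKENDS[backend](prompt)

ticket = "My order arrived damaged and I need a refund."
print(summarize_support_ticket(ticket))                    # uses the local model
print(summarize_support_ticket(ticket, backend="hosted"))  # same button, different model
```

The user just clicks “Summarize” in a familiar help-desk tool; which model answered, or whether one model was swapped for another, is invisible to them.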
Benedict Evans analogized this distributed, in-the-background AI future to another computational technology: databases. Everyone indirectly uses databases today, even though few people write SQL queries. Databases are so pervasive that asking, "How many databases are there?" is a pointless question with a useless answer.
AI is a newly popular paradigm for computing, and is set to follow a familiar pattern. Right now, the public is fixated on chatbots and prompt writing, but this fascination will fade as existing systems seamlessly integrate AI. Future AI capabilities will reside within familiar interfaces, quietly enhancing functionality. We might still use chatbots and call those “AI,” but the exact same technology will be behind the scenes, enhancing many of the computing tools we use every day.
The Risk of Modern Red Flag Laws
So AI is a programming paradigm rather than a product. And AI, like databases, will be deeply integrated into many computer applications. What does this mean for the likely effect of state AI legislation? My conclusion: current efforts to regulate AI will have the inevitable effect of regulating almost all future computation, under a framework that is already nearly out of date.
Connecticut’s S.B. 2, Texas’s TRAIGA, Virginia’s HB 2094, and Colorado’s law are all structured around AI as a product. Under their frameworks, an AI product is created by “developers,” and then separately “deployed” by companies, sometimes with help from “distributors” or “implementors.” These terms are statutorily defined categories, each carrying different obligations.
These categories barely match today’s AI industry structure, which has fueled debate about these bills. For example, stakeholders disagree about what level of model customization would move a company from the “deployer” category to the “developer” category. There is a lot of confusion about which companies fall into the “distributor” or “implementor” role. And no one can seem to make sense of what to do about open source or open weight AI models, which don’t neatly fit the imagined industry structure of these bills.
This mismatch with reality will only grow. Imagine if there had been comprehensive database regulation in the 1970s, just after relational databases were developed. What if database developers had to demonstrate that they took reasonable care that no one would misuse their databases to harm others, and if every person who used a database as part of “consequential decisions” had to file reports and audits to ensure that the database was unbiased and accurate? Would we have ubiquitously deployed databases today? Or would we have driven app development to friendlier jurisdictions?
The state AI regulations being pushed today risk becoming the Red Flag Laws of the 21st century. If we impose restrictions based on a flawed understanding of what AI is and how it will be deployed, we will lock in outdated assumptions, stifle competition, and cede leadership to countries that take a more open approach.
Yes, AI will bring challenges, just as automobiles, electricity, and the internet did. And there may be specific applications of AI technology where we know enough about the technology, the risks, and the benefits to propose useful legislation. I have been supportive, for example, of laws to prohibit the distribution of non-consensual pornographic deepfakes.
But history’s lesson is clear: regulating too soon slows innovation, undermines national leadership, and hurts consumers. AI is not magic, nor is it beyond governance. But let’s not legislate ourselves into irrelevance by regulating a technology whose impacts we don’t yet understand.
We should not be the country waving fifty different red flags while others race ahead.
Encyclopedia Britannica, History of the Automobile (“Between February 27 and June 22, 1831, steam coaches ran 4,000 miles (6,400 km) on [a Gloucester-Cheltenham] route, carrying some 3,000 passengers” and noting that “[t]he most successful era of the steam coaches in Britain was in the 1830s. Ambitious routes were run, including one from London to Cambridge.”).
Id.



