When discussing artificial intelligence (AI), we often hear about the need for more rules or “guardrails” to prevent the technology from going off course. This is an understandable notion, and one I initially shared, but it’s also somewhat misleading. AI is more akin to a dog than to a car.
You can’t train a car. You program it, drive it, hope the brakes work—but it doesn’t learn. AI, on the other hand, does learn. It makes mistakes. It adapts. And that’s precisely why a fixed path with guardrails isn’t sufficient. AI requires someone at the other end of the leash—someone who intervenes when necessary, provides guidance, and offers feedback. Just as you would with a dog on a leash. Just as you would with children.
This perspective is inspired by researchers Cary Coglianese and Colton Crum. They propose that instead of rigid rules—“guardrails”—we should adopt flexible, human-guided control: a leash. It is not a system you install and then release, but a relational model that emphasises continuous oversight. It is not an attempt to eliminate every risk in advance, but a way to remain engaged with what is developing.
This doesn’t mean that rules aren’t necessary. Just like raising a child—or training a dog, in the authors’ metaphor—you start with a clear set of basic rules: what’s allowed, what’s not, and where the boundaries lie. Without these frameworks, chaos ensues. But rules alone aren’t enough. They only work if someone is there to explain them when needed, monitor them, revisit them when things go wrong, and sometimes adjust them as situations change.
The more you think about it, the more relatable it becomes. We often opt for rules and protocols in education and parenting, hoping for predictable behaviour. But children aren’t cars. Like AI, they’re changeable, surprising, creative, and sometimes unpredictable. What truly works is proximity. Someone who pays attention, questions, sets limits when necessary, and provides space when possible.
Coglianese and Crum cite examples of AI systems that went awry: a self-driving car failing to recognise a pedestrian, a radicalised chatbot, and an algorithm that discriminated in hiring processes. Not out of malice, but because the training was inadequate or the human oversight failed to intervene. Children make mistakes too—sometimes serious ones. But we correct them, talk, learn, and try again. That’s parenting. And that’s also what AI needs: continuous human involvement, rather than blind control.
The problem with many AI discussions is the belief that systems can work without people, as long as the protocol is correct and the rules are clear. But just as you don’t raise children with a manual, AI won’t adhere neatly to every script. What matters is someone who keeps watching, talking, and thinking.
Perhaps we should learn to train AI as we try to raise children: with a clear foundation of rules, but primarily through proximity and responsibility—not by trying to control everything in advance, but by staying present. By accepting that mistakes are sometimes unavoidable, we put ourselves in a position to correct them. So maybe AI doesn’t need guardrails, but a sturdy leash. And above all: someone to hold it.
Abstract of the paper:
Calls to regulate artificial intelligence (AI) have sought to establish “guardrails” to protect the public against AI going awry. Although physical guardrails can lower risks on roadways by serving as fixed, immovable protective barriers, the regulatory equivalent in the digital age of AI is unrealistic and even unwise. AI is too heterogeneous and dynamic to circumscribe fixed paths along which it must operate—and, in any event, the benefits of the technology proceeding along novel pathways would be limited if rigid, prescriptive regulatory barriers were imposed. But this does not mean that AI should be left unregulated, as the harms from irresponsible and ill-managed development and use of AI can be serious. Instead of “guardrails,” though, policymakers should impose “leashes.” Regulatory leashes imposed on digital technologies are flexible and adaptable—just as physical leashes used when walking a dog through a neighborhood allow for a range of movement and exploration. But just as a physical leash only protects others when a human retains a firm grip on the handle, the kind of leashes that should be deployed for AI will also demand human oversight. In the regulatory context, a flexible regulatory strategy known in other contexts as management-based regulation will be an appropriate model for AI risk governance. In this article, we explain why regulating AI by management-based regulation—a “leash” approach—will work better than a prescriptive or “guardrail” regulatory approach. We discuss how some early regulatory efforts are including management-based elements. We also elucidate some of the questions that lie ahead in implementing a management-based approach to AI risk regulation. Our aim is to facilitate future research and decision-making that can improve the efficacy of AI regulation by leashes, not guardrails.