AI Governance Is Not Starting from Zero: Lessons from the Internet
AI governance is not starting from zero. Lessons from Internet governance show that principles alone are insufficient: governance must be embedded, inclusive, and resistant to concentration. The challenge is to apply those lessons at greater speed and scale.
We have been here before: a place where innovation advances faster than our ability to organize its governance. I experienced this during the rise of the Internet. Facing the AI revolution, I recognize some patterns.
By governance, I mean a layered architecture of responsibility. This is how Internet governance came to be understood: a distributed ecosystem of institutions, norms, technical arrangements, and policy processes shaping how the Internet evolves. The Tunis Agenda formalized this across governments, the private sector, and civil society, each operating within defined roles. Governance, in this sense, is about processes, participation, and the distribution of power.
AI governance is converging toward a similar understanding. The emerging consensus points to multi-level systems that combine legal frameworks, institutional design, technical controls, and organisational accountability. The core questions remain the same in both AI and Internet governance: what is being governed, how, by whom, and for what purpose.
Déjà vu
As the Internet scaled beyond expectation, its governance lagged. Institutions moved incrementally while the system evolved exponentially. And power accumulated in ways that were neither anticipated nor adequately constrained.
AI is following a similar trajectory, but at greater speed. What was once a governance challenge centered on connectivity now extends to cognition, decision-making, and autonomy.
Internationally, AI governance has begun to align around a common set of principles: human-centered oversight, proportionality, risk management, context-appropriate transparency, traceable accountability, fairness, privacy, robustness, redress, and environmental considerations. These themes recur across OECD, UNESCO, NIST, G7 processes, and the Council of Europe. Yet convergence at the level of principles has not translated into consistent practice. Purely principles-based approaches have proven insufficient, while compliance-driven models remain too narrow. Gaps persist in metrics, auditing, enforcement, and meaningful stakeholder inclusion.
Internet governance evolved as a polycentric system, with different institutions governing different layers: ICANN for unique identifiers, IETF for standards, RIRs for address policy, the IGF for public-policy dialogue, ITU for international public policy and telecom-related questions, and other organizations for human rights, trade, cybersecurity, or content-related issues. This distributed model has enabled resilience, even if imperfectly.
AI governance, by contrast, is more fragmented and more concentrated. It lacks institutional maturity while exhibiting strong centralization of capabilities. Its legitimacy remains contested between regulatory authority and corporate control.
The comparison is instructive. Internet governance is institutionalized and participatory; AI governance remains emergent and unsettled.
Lessons from Internet Governance
The Internet helped advance a key governance innovation: the multistakeholder model. It recognized that no single actor (state or private) could legitimately govern a global, distributed system, and that legitimacy depends on meaningful participation. This principle, embedded in the Tunis Agenda and reinforced through processes like NETmundial, remains foundational despite its imperfections.
Three lessons from the Internet governance experience remain relevant.
First, diversity.
Effective governance requires participation across geographies, disciplines, and social contexts. This has been a persistent concern since WSIS and remains unresolved. Multistakeholder legitimacy depends not just on participation but on whether participation is meaningful and procedurally fair. Global participation must also be inclusive and development-oriented: it cannot be monopolized by well-resourced actors.
Second, safeguards must be embedded, not retrofitted.
Governance cannot rely on post-hoc correction. Once systems are deployed, incentives align and dependencies deepen. Accountability and transparency must be operational: built into processes, architectures, and decision-making.
Third, concentration is the biggest risk.
The Internet showed how power consolidates through control of infrastructure, platforms, and data. AI intensifies this dynamic. Compute, models, data, and distribution are already concentrated among a small number of actors, including cloud providers, frontier labs, and platform intermediaries.
A Question of Stewardship
AI raises the stakes. The system being governed is no longer infrastructure alone, but the logic that mediates economic, social, and political life: cognition, autonomy, and ultimately human agency.
What is striking is how limited the engagement of the Internet governance ecosystem has been in shaping this moment. Institutions that developed mechanisms for global coordination, inclusion, and distributed decision-making are not yet fully applying that experience to AI. This is a missed opportunity. The knowledge (on participation, capture, coordination, and resilience) exists and is well documented.
What is needed is a bridge: between the governance of infrastructure and that of intelligent systems; between technical design and institutional design; between innovation and accountability.
AI governance will not be defined by a single model. It will emerge from the interaction of multiple systems, actors, and incentives. The task is to build AI frameworks that are:
- human-centered,
- enabling, not stifling,
- inclusive, not extractive,
- resilient, not captured.
They must enable innovation while maintaining accountability, and operate across jurisdictions without fragmenting into incompatible regimes.
Conclusion
We are not starting from zero. We have already lived through a transformation of comparable scale. The question is whether we are willing to apply those lessons under conditions of greater speed and higher stakes. And whether we will find ourselves adapting to structures we did not intentionally design.
Governance is not an obstacle to innovation. It should be an enabler. The systems being built now will define the constraints of the future. Institutions, when designed well, can protect agency rather than constrain it. The objective should not be control, but stewardship.
If the Internet taught us anything, it is this: The systems we build today become the realities we struggle to change tomorrow.