CERN

The laboratory that invented the web as a side project

By VastBlue Editorial · 2026-03-26 · 18 min read

Series: Made in Europe · Episode 7


The Proposal Nobody Asked For

In March 1989, a British software engineer at CERN, the European Organisation for Nuclear Research, submitted an internal document to his supervisor. It was titled "Information Management: A Proposal." It ran to about twenty pages, with diagrams of interconnected nodes, circles linked by arrows suggesting a web of relationships between documents, people, experiments, and the data they generated. The supervisor, Mike Sendall, read it and wrote two words in the margin: "Vague but exciting."

That engineer was Tim Berners-Lee. The document he submitted was not a business plan, not a patent application, not a product roadmap. It was a frustrated physicist's attempt to solve a mundane infrastructure problem: how do you help thousands of researchers, working on hundreds of experiments, using dozens of incompatible computer systems, share information without losing their minds?

CERN in the late 1980s was one of the most information-rich environments on earth. Thousands of physicists from universities and research institutions across Europe and beyond descended on its campus straddling the Franco-Swiss border near Geneva. They came for the accelerators — the machines that smash particles together at nearly the speed of light so that physicists can study what falls out. But the data these experiments produced was trapped. It lived on different computers running different operating systems, formatted in different ways, accessible only through different protocols. Finding a colleague's report might require knowing which machine it was stored on, which network protocol to use, which file format to decode, and whom to telephone to get the right access permissions.

This was not a failure of technology. CERN had some of the most advanced computing infrastructure in the world. The internet already existed — CERN had adopted TCP/IP in the mid-1980s and was on its way to becoming one of Europe's largest internet nodes. Email worked. File transfer worked, after a fashion. The problem was not connectivity but navigability. Information was everywhere. Finding it was the hard part.

100+ Countries represented at CERN — By the late 1980s, CERN hosted thousands of visiting scientists from over 100 countries, each bringing their own computing systems, data formats, and institutional habits.

Building the Web in a Corner Office

Berners-Lee's proposal was deceptively simple. He suggested combining three existing technologies in a way nobody had thought to combine them before. The first was hypertext — the idea, developed by Ted Nelson in the 1960s and implemented in various forms since, that documents could contain links to other documents, allowing readers to navigate non-linearly through a body of information. The second was the internet — the global network of interconnected computers that already carried email, file transfers, and remote login sessions. The third was a system of unique addresses — a way to name any document on any computer on the network so that it could be retrieved by anyone who knew the address.

The insight was not in any one of these ideas. It was in their combination. Hypertext had existed for decades but was confined to single computers or closed networks. The internet connected millions of machines but offered no easy way to browse their contents. Unique identifiers existed in various forms but had never been applied universally across the network. Berners-Lee proposed welding them together into a single system: a global hypertext network running on top of the internet, where any document anywhere could link to any other document anywhere, and a user could follow those links with a click.

To make it work, he needed to invent three things. A protocol — a language that computers would use to request and deliver documents across the network. He called it HyperText Transfer Protocol: HTTP. A formatting language — a way to write documents that contained links, headings, paragraphs, and other structural elements. He called it HyperText Markup Language: HTML. And an addressing scheme — a way to give every document on the network a unique, human-readable name. He called it the Universal Resource Identifier, later refined into the URL: Uniform Resource Locator.
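The three inventions are small enough to show in miniature. The sketch below is illustrative, not historical: it uses the address of the first website, but the request string and HTML fragment are modern reconstructions of what a browser sends and what a page contains.

```python
from urllib.parse import urlparse

# 1. The URL: a unique, human-readable address for any document on any server.
#    (This is the address of the first web page, still online today.)
url = "http://info.cern.ch/hypertext/WWW/TheProject.html"
parts = urlparse(url)
print(parts.scheme)   # "http"          -> which protocol to speak
print(parts.netloc)   # "info.cern.ch"  -> which server to contact
print(parts.path)     # "/hypertext/WWW/TheProject.html" -> which document

# 2. HTTP: the plain-text request a browser sends to that server.
request = f"GET {parts.path} HTTP/1.0\r\nHost: {parts.netloc}\r\n\r\n"

# 3. HTML: a document whose <a href="..."> elements link to other URLs,
#    making every page a node in a global hypertext.
page = '<p>See the <a href="http://info.cern.ch/">WWW project</a>.</p>'
```

The division of labour is the point: the URL names the document, HTTP fetches it, and HTML lets the fetched document link onward to any other URL.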

By the end of 1990, working on a NeXT computer — the elegant black cube designed by Steve Jobs's post-Apple venture — Berners-Lee had built all three. He had also built the first web server, the first web browser (which doubled as an editor, because he envisioned the web as a collaborative space, not a broadcast medium), and the first website: info.cern.ch. It described the World Wide Web project itself — what it was, how to use it, how to set up your own server.

None of this was Berners-Lee's job. He was employed as a software engineer at CERN, working on computing systems for physics experiments. The web was a side project — something he worked on because he believed it would solve the information-sharing problem that made his actual work harder. His managers tolerated it because CERN's culture encouraged exactly this kind of creative tinkering. The laboratory existed to push the boundaries of physics, and it had long understood that pushing those boundaries sometimes required pushing the boundaries of everything else too: engineering, computing, materials science, cryogenics. If a physicist or engineer saw a problem and had an idea for solving it, CERN generally let them try.

1 NeXT computer — Berners-Lee built the first web server, first web browser, and first website on a single NeXT workstation at CERN. His supervisor famously scrawled "Vague but exciting" on the original proposal.

The Decision That Changed Everything

If Berners-Lee's technical achievement was building the web, his strategic achievement — and arguably the more consequential one — was giving it away.

By 1991, the web had spread beyond CERN. Other research institutions had set up web servers. The first web browsers for non-NeXT systems appeared. Physicists who had visited CERN returned to their universities and installed web servers to share their own data. The network effect was beginning: every new server made the web more useful, which attracted more servers, which made it more useful still.

But the web's growth was constrained by uncertainty about its legal status. Who owned it? Could CERN charge licensing fees? Would universities and companies be free to build web servers and browsers without negotiating intellectual property agreements? In the early 1990s, the commercial internet was just beginning to emerge, and the question of who owned the underlying protocols was intensely contested. Proprietary alternatives to the web were being developed — Gopher at the University of Minnesota, WAIS, various commercial online services — and several of these had licensing restrictions that limited their adoption.

On 30 April 1993, CERN made a decision that would shape the next three decades of human civilisation. It released the World Wide Web software into the public domain. No licence fees. No royalties. No restrictions. Anyone, anywhere, could use the protocols, build browsers, set up servers, and create websites without asking permission or paying a centime.

CERN's decision to release the web into the public domain on 30 April 1993 may be the single most consequential act of institutional generosity in the history of technology. The laboratory gave away something it could have charged for, and in doing so made the modern internet possible.

Historical assessment

The context makes the decision even more remarkable. CERN is a publicly funded research organisation, supported by contributions from its member states. Its annual budget in the early 1990s was approximately one billion Swiss francs. Licensing the web could have generated significant revenue — potentially transformative revenue, as the technology spread. The decision to forgo that revenue was not made by a lone idealist acting unilaterally. It required the approval of CERN's management and, implicitly, the assent of the member states whose contributions funded the research. European taxpayers, through their governments, funded the laboratory that invented the web. And then the laboratory gave it to the world.

The effect was immediate and irreversible. Within months of the public domain release, the number of web servers exploded. The Mosaic browser, developed at the National Center for Supercomputing Applications at the University of Illinois, brought the web to personal computers with a graphical interface that made browsing intuitive. Marc Andreessen, one of Mosaic's developers, left to co-found Netscape, which released its Navigator browser in 1994. The commercial web was born. By 1995, there were an estimated 23,500 websites. By 2000, there were over 17 million. Today, there are nearly two billion.

30 April 1993 The web enters the public domain — CERN released the World Wide Web technology with no licence fees, no royalties, and no restrictions — enabling the explosive growth that followed.

Infrastructure for the Mission

To understand why the web was invented at CERN — and not at MIT, not at Bell Labs, not at Xerox PARC — requires understanding what CERN actually is and what it was built to do.

CERN was founded in 1954, nine years after the end of World War II, by twelve European governments seeking to rebuild the continent's scientific capability through collaboration rather than competition. The name is an acronym of Conseil Européen pour la Recherche Nucléaire — the provisional council that established it. The laboratory itself was named the European Organisation for Nuclear Research, but the acronym stuck.

The founding principle was radical for its time: that fundamental physics research was too expensive, too complex, and too important to be pursued by any single European nation alone. The particle accelerators required to probe the structure of matter demanded resources that no individual country could justify. But pooled together, European nations could build machines that rivalled and eventually surpassed anything the United States or the Soviet Union could construct independently.

The model worked. Over the following decades, CERN built a series of increasingly powerful accelerators, each one probing deeper into the subatomic world. The Proton Synchrotron in 1959. The Super Proton Synchrotron in 1976. And the machine that defines CERN in the public imagination: the Large Hadron Collider, approved in 1994, completed in 2008, and responsible in 2012 for confirming the existence of the Higgs boson — the particle that explains why other particles have mass.

The LHC is a circular tunnel 27 kilometres in circumference, buried 100 metres beneath the Franco-Swiss border. Inside it, two beams of protons travel in opposite directions at 99.9999991 per cent of the speed of light before colliding at four detector sites, each one a multi-storey instrument weighing thousands of tonnes. The ATLAS detector alone is 46 metres long, 25 metres in diameter, and weighs 7,000 tonnes. It records the debris of roughly 600 million proton collisions per second, generating approximately one petabyte of data per day.

27 km Circumference of the Large Hadron Collider — The LHC is the largest and most complex scientific instrument ever built, buried 100 metres underground and straddling the border between Switzerland and France.
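Those figures invite a back-of-envelope check. The short calculation below, using the round numbers quoted above, converts "one petabyte per day" into a sustained transfer rate:

```python
# Back-of-envelope: the sustained rate implied by "one petabyte per day".
PETABYTE = 10**15           # bytes, decimal convention
SECONDS_PER_DAY = 86_400

rate_gb_per_s = PETABYTE / SECONDS_PER_DAY / 10**9
print(f"{rate_gb_per_s:.1f} GB/s")   # ≈ 11.6 GB/s, around the clock
```

Roughly eleven gigabytes every second, continuously — and that is after filtering, since the detectors' trigger systems discard all but a tiny fraction of the 600 million collisions per second before anything reaches storage.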

It was this scale — the sheer volume of data, the number of collaborating institutions, the diversity of computing systems — that made the information-management problem acute enough to drive Berners-Lee's invention. The web was not a theoretical exercise. It was a practical response to a practical problem created by the largest scientific collaboration in history. The web was infrastructure for the mission. That it turned out to be infrastructure for everything else was, in the most literal sense, a side effect.

And this was not the first time CERN's infrastructure needs had produced world-changing technology. In the 1980s, CERN was instrumental in the adoption of TCP/IP in Europe and played a key role in establishing the internet's presence on the continent, operating one of Europe's first major internet nodes. It also contributed to the development of touchscreen technology, advanced detector instrumentation, and medical imaging techniques based on particle physics detector technology. The web is the most famous of CERN's side projects, but it is far from the only one.

What the Web Was Supposed to Be

There is a painful irony in how the web evolved compared to what Berners-Lee intended. His original vision was of a collaborative space — a place where people would both read and write, where every browser was also an editor, where the web would be a tool for collective knowledge-building. The first browser he built at CERN, called WorldWideWeb (one word, capital letters, because naming conventions in 1990 were different), was a browser-editor. You could view pages and create them in the same application.

That vision did not survive contact with the market. As the web commercialised in the mid-1990s, it became primarily a broadcast medium — a digital version of television and print, where publishers created content and users consumed it. The read-write web that Berners-Lee imagined was largely replaced by a read-only web controlled by an increasingly small number of platforms. The architectural openness remained — anyone could still set up a web server and publish content — but the practical reality shifted towards centralisation. By the 2010s, a handful of American companies — Google, Facebook, Amazon, Apple — dominated the web experience for billions of users.

Berners-Lee has been vocal about this drift. In 2009, he founded the World Wide Web Foundation to advocate for an open, accessible web. In 2018, he launched Solid (Social Linked Data), a project to re-decentralise the web by giving users control over their own data, stored in personal "pods" rather than on corporate servers. The project has received attention but limited adoption — a reminder that architectural openness and market openness are different things, and that the latter is far harder to sustain.

In 2019, on the thirtieth anniversary of his original proposal, Berners-Lee published a letter identifying what he saw as the three major threats to the web: deliberate misuse (state-sponsored hacking, online harassment, weaponisation of information), system design that creates perverse incentives (attention-driven business models that reward sensationalism over substance), and the unintended negative consequences of well-meaning design (the filter bubbles and polarisation produced by recommendation algorithms). Each of these problems, he argued, was solvable — but solving them required coordinated action by governments, companies, and citizens.

There is something fitting about the inventor of the web spending his later career trying to fix it. Berners-Lee did not build the web to make money — he has never personally profited from it in any proportion approaching the wealth it created for others. He built it to solve a problem, and it solved that problem and a million others nobody anticipated. That it also created new problems was perhaps inevitable. The question the web poses now is not whether it was a good invention — it was the most transformative communications technology since the printing press — but whether the institutions that govern it can evolve as fast as the technology itself.

The European Laboratory the World Forgot to Credit

In the popular imagination, the internet is an American invention and the web is part of it. This is not quite wrong, but it is importantly incomplete. The internet's foundational protocols — TCP/IP — were indeed developed by American researchers Vint Cerf and Bob Kahn, funded by DARPA, the US Department of Defense's research agency. The internet's early infrastructure was American. Its first nodes were at American universities and military installations.

But the web — the system of linked documents, browsers, servers, URLs, and markup that most people actually mean when they say "the internet" — was invented in Europe, by a British scientist working at a European research laboratory funded by European taxpayers. HTTP is European. HTML is European. The URL is European. The first web server ran in Geneva. The first website was hosted in Switzerland. The decision to release the technology for free was made by a European institution.

This distinction matters not for reasons of continental pride but because it illuminates something important about how transformative technology is actually produced. The web was not created by a startup in a garage. It was not funded by venture capital. It was not built to generate shareholder returns. It was created by a publicly funded research institution whose mission was fundamental science, by an employee who was given the freedom to pursue a side project, in an environment where the culture valued solving problems over capturing markets.

The economic value generated by these inventions is incalculable. The web underpins global e-commerce, which exceeded $6 trillion in 2024. It is the foundation of the platform economy — Google, Amazon, Meta, and their equivalents worldwide exist because the web exists. It transformed journalism, entertainment, education, politics, healthcare, finance, and communication in ways that no other technology of the past century has matched. Some estimates of the web's cumulative economic impact since 1993 run into the hundreds of trillions of dollars.

$6 trillion+ Global e-commerce revenue in 2024 — The entire global e-commerce economy runs on the protocols invented at CERN in 1989-1991 and released for free in 1993.

CERN received none of that money. It did not ask for any. The laboratory's mission was and remains fundamental physics research. The web was a tool built to support that mission, and when it turned out to be useful to the rest of the world, the laboratory let the rest of the world have it. This was not naivety — it was a deliberate institutional choice, consistent with the open-science principles that have governed CERN since its founding.

Still Building the Infrastructure

CERN did not stop inventing after the web. The laboratory's computing demands continued to grow, and its responses to those demands continued to produce technology that spread far beyond particle physics.

In the early 2000s, as the LHC was being constructed, CERN and its partner institutions developed the Worldwide LHC Computing Grid (WLCG) — a distributed computing infrastructure that links more than 170 computing centres in 42 countries. The Grid allows the massive datasets produced by LHC experiments to be processed, stored, and analysed by physicists working at institutions spread across the globe. It was one of the first large-scale implementations of grid computing — the concept that would later evolve, in commercial hands, into cloud computing.

CERN also incubated key open-source technologies. ROOT, a data analysis framework developed at CERN in the 1990s, is used across the physical sciences and has influenced the development of modern data science tools. Invenio, a digital library framework created at CERN, powers several major institutional repositories. CERNBox, the laboratory's cloud storage service, is based on open-source technology that has been adopted by other research institutions. And CERN was an early large-scale adopter of OpenStack, building one of the world's biggest private clouds to manage its own computing resources.

The pattern is consistent: CERN encounters a problem driven by the scale and complexity of its physics mission, builds a solution, and then releases that solution for others to use. The web is the most famous instance, but it is part of a larger culture of instrumental innovation — innovation not as a goal in itself but as a necessary consequence of pursuing impossibly ambitious scientific objectives.

The web was not an accident. It was the inevitable product of an environment where the problems were hard enough to demand new tools, and the culture was open enough to share them.

Editorial observation

Today, CERN employs approximately 2,500 staff members and hosts more than 17,000 visiting scientists from over 110 countries. Its annual budget is approximately 1.2 billion Swiss francs, funded by contributions from its 23 member states — all European. The laboratory's next major project, the Future Circular Collider, is under study: a new accelerator with a circumference of approximately 91 kilometres that would push particle physics into a new energy frontier. If approved, it would be the largest scientific instrument ever conceived, and it would generate computing and data challenges that make the current LHC look modest.

What new infrastructure will those challenges require? What tools will CERN's engineers and physicists build to manage datasets that dwarf anything current systems can handle? And which of those tools will turn out, like the web before them, to be useful for far more than physics?

Nobody knows. That is the point. The web was not planned. It was not in CERN's mission statement. It was not in anyone's five-year roadmap. It was a side project by a frustrated engineer who needed a better way to find documents. It became the most consequential communications infrastructure since the printing press, and it was built in Europe, with European funding, by a European institution, and given to the world for free.

The next time someone asks what Europe has contributed to the digital age, the answer is straightforward. Europe built the platform on which the entire digital age runs. It just did not bother putting its name on it.

Sources

  1. Tim Berners-Lee, "Information Management: A Proposal" — CERN — https://www.w3.org/History/1989/proposal.html
  2. Tim Berners-Lee, "Weaving the Web" (HarperBusiness, 1999) — https://www.w3.org/People/Berners-Lee/Weaving/
  3. CERN, "The Birth of the Web" — CERN Official — https://home.cern/science/computing/birth-web
  4. CERN Document: "Putting the Web in the Public Domain" (30 April 1993) — https://cds.cern.ch/record/1164399
  5. Tim Berners-Lee, "30 Years On: A Letter" — World Wide Web Foundation — https://webfoundation.org/2019/03/web-birthday-30/
  6. CERN Annual Report 2023 — https://home.cern/resources/annual-report/cern-annual-report-2023
  7. The Worldwide LHC Computing Grid (WLCG) — https://wlcg.web.cern.ch/
  8. UNCTAD, "Global E-Commerce Reaches $6 Trillion" — Digital Economy Report — https://unctad.org/topic/ecommerce-and-digital-economy