OSI: The Internet That Wasn't
How TCP/IP eclipsed the Open Systems Interconnection standards to become the global protocol for computer networking

If everything had gone according to plan, the Internet as we know it would never have sprung up. That plan, devised 35 years ago, instead would have created a comprehensive set of standards for computer networks called Open Systems Interconnection, or OSI. Its architects were a dedicated group of computer industry representatives in the United Kingdom, France, and the United States who envisioned a complete, open, and multilayered system that would allow users all over the world to exchange data easily and thereby unleash new possibilities for collaboration and commerce.
For a time, their vision seemed like the right one. Thousands of engineers and policymakers around the world became involved in the effort to establish OSI standards. They soon had the support of everyone who mattered: computer companies, telephone companies, regulators, national governments, international standards-setting agencies, academic researchers, even the U.S. Department of Defense. By the mid-1980s the worldwide adoption of OSI appeared inevitable.
1961: Paul Baran at Rand Corp. begins to outline his concept of "message block switching" as a way of sending data over computer networks.
And yet, by the early 1990s, the project had all but stalled in the face of a cheap and agile, if less comprehensive, alternative: the Internet's Transmission Control Protocol and Internet Protocol. As OSI faltered, one of the Internet's chief advocates, Einar Stefferud, gleefully pronounced: "OSI is a beautiful dream, and TCP/IP is living it!"
What happened to the "beautiful dream"? While the Internet's triumphant story has been well documented by its designers and the historians they have worked with, OSI has been forgotten by all but a handful of veterans of the Internet-OSI standards wars. To understand why, we need to dive into the early history of computer networking, a time when the vexing problems of digital convergence and global interconnection were very much on the minds of computer scientists, telecom engineers, policymakers, and industry executives. And to appreciate that history, you'll have to set aside for a few minutes what you already know about the Internet. Try to imagine, if you can, that the Internet never existed.
1965: Donald W. Davies, working independently of Baran, conceives his "packet-switching" network.
The story starts in the 1960s. The Berlin Wall was going up. The Free Speech movement was blossoming in Berkeley. U.S. troops were fighting in Vietnam. And digital computer-communication systems were in their infancy and the subject of intense, wide-ranging investigations, with dozens (and soon hundreds) of people in academia, industry, and government pursuing major research programs.
The most promising of these involved a new approach to data communication called packet switching. Invented independently by Paul Baran at the Rand Corp. in the United States and Donald Davies at the National Physical Laboratory in England, packet switching broke messages into discrete blocks, or packets, that could be routed separately across a network's various channels. A computer at the receiving end would reassemble the packets into their original form. Baran and Davies both believed that packet switching could be more robust and efficient than circuit switching, the old technology used in telephone systems that required a dedicated channel for each conversation.
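The mechanism Baran and Davies described can be sketched in a few lines of Python. This is purely an illustrative toy, not any historical implementation; the `packetize` and `reassemble` names are invented here. It shows how blocks tagged with sequence numbers survive out-of-order delivery:

```python
import random

def packetize(message, size=8):
    # Break a message into fixed-size blocks, each tagged with a sequence number.
    return [(i // size, message[i:i + size]) for i in range(0, len(message), size)]

def reassemble(packets):
    # Packets may arrive over different routes and out of order;
    # the receiver sorts by sequence number and rejoins the blocks.
    return "".join(chunk for _, chunk in sorted(packets))

packets = packetize("packet switching routes message blocks independently")
random.shuffle(packets)  # simulate packets taking different paths through the network
print(reassemble(packets))  # prints the original message, intact
```

Because each block carries its own sequence number, the network is free to route every packet separately, which is exactly what made the scheme more robust than a dedicated circuit.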
Researchers sponsored by the U.S. Department of Defense's Advanced Research Projects Agency created the first packet-switched network, called the ARPANET, in 1969. Soon other institutions, most notably the computer giant IBM and several of the telephone monopolies in Europe, hatched their own ambitious plans for packet-switched networks. Even as these institutions contemplated the digital convergence of computing and communications, however, they were anxious to protect the revenues generated by their existing businesses. As a result, IBM and the telephone monopolies favored packet switching that relied on "virtual circuits," a design that mimicked circuit switching's technical and organizational routines.
1969: ARPANET, the first packet-switching network, is created in the United States.
1970: Estimated U.S. market revenues for computer communications: US $46 million.
1971: Cyclades packet-switching project launches in France.
With so many interested parties putting forth ideas, there was widespread agreement that some form of international standardization would be necessary for packet switching to be viable. An early attempt began in 1972, with the formation of the International Network Working Group (INWG). Vint Cerf was its first chairman; other active members included Alex McKenzie in the United States, Donald Davies and Roger Scantlebury in England, and Louis Pouzin and Hubert Zimmermann in France.
The purpose of INWG was to promote the "datagram" style of packet switching that Pouzin had designed. As he explained to me when we met in Paris in 2012, "The essence of datagram is connectionless. That means you have no relationship established between sender and receiver. Things just go separately, one by one, like photons." It was a radical proposal, especially when compared to the connection-oriented virtual circuits favored by IBM and the telecom engineers.
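Pouzin's connectionless idea survives today in UDP datagram sockets. A minimal Python sketch of the style he describes, run over the loopback interface (the addresses and messages here are arbitrary examples, not anything from the historical protocols):

```python
import socket

# Connectionless, datagram-style exchange: no relationship is established
# between sender and receiver; each datagram travels on its own.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))        # let the OS pick a free port
addr = receiver.getsockname()

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
for word in [b"things", b"go", b"separately"]:
    sender.sendto(word, addr)          # no handshake, no virtual circuit

received = [receiver.recvfrom(64)[0] for _ in range(3)]
print(received)
sender.close()
receiver.close()
```

The contrast with a virtual circuit is the absence of any setup step: the sender simply launches each datagram toward an address, and the network makes no promise about ordering or delivery.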
INWG met regularly and exchanged technical papers in an effort to reconcile its designs for datagram networks, in particular for a transport protocol, the key mechanism for exchanging packets across different types of networks. After several years of debate and discussion, the group finally reached an agreement in 1975, and Cerf and Pouzin submitted their protocol to the international body responsible for overseeing telecommunication standards, the International Telegraph and Telephone Consultative Committee (known by its French acronym, CCITT).
1972: International Network Working Group (INWG) forms to develop an international standard for packet-switching networks, including [left to right] Louis Pouzin, Vint Cerf, Alex McKenzie, Hubert Zimmermann, and Donald Davies.
The committee, dominated by telecom engineers, rejected the INWG's proposal as too risky and untested. Cerf and his colleagues were bitterly disappointed. Pouzin, the combative leader of Cyclades, France's own packet-switching research project, sarcastically noted that members of the CCITT "do not object to packet switching, as long as it looks just like circuit switching." And when Pouzin complained at major conferences about the "arm-twisting" tactics of "national monopolies," everyone knew he was referring to the French telecom authority. French bureaucrats did not appreciate their countryman's candor, and government funding was drained from Cyclades between 1975 and 1978, when Pouzin's involvement also ended.
1974: Vint Cerf and Robert Kahn publish "A Protocol for Packet Network Intercommunication" in IEEE Transactions on Communications.
For his part, Cerf was so discouraged by his international adventures in standards making that he resigned his position as INWG chair in late 1975. He also quit the faculty at Stanford and accepted an offer to work with Bob Kahn at ARPA. Cerf and Kahn had already drawn on Pouzin's datagram design and published the details of their "transmission control program" the previous year in the IEEE Transactions on Communications. That provided the technical foundation of the "Internet," a term adopted later to refer to a network of networks that utilized ARPA's TCP/IP. In subsequent years the two men directed the development of Internet protocols in an environment they could control: the small community of ARPA contractors.
Cerf's departure marked a rift within the INWG. While Cerf and other ARPA contractors eventually formed the core of the Internet community in the 1980s, many of the remaining veterans of INWG regrouped and joined the international alliance taking shape under the banner of OSI. The two camps became bitter rivals.
OSI was devised by committee, but that fact alone wasn't enough to doom the project; after all, plenty of successful standards start out that way. Still, it is worth noting for what came later.
In 1977, representatives from the British computer industry proposed the creation of a new standards committee devoted to packet-switching networks within the International Organization for Standardization (ISO), an independent nongovernmental association created after World War II. Unlike the CCITT, ISO wasn't specifically concerned with telecommunications; the wide-ranging topics of its technical committees included TC 1 for standards on screw threads and TC 17 for steel. Also unlike the CCITT, ISO already had committees for computer standards and seemed far more likely to be receptive to connectionless datagrams.
The British proposal, which had the support of U.S. and French representatives, called for "network standards needed for open working." These standards would, the British argued, provide an alternative to traditional computing's "self-contained, 'closed' systems," which were designed with "little regard for the possibility of their interworking with each other." The concept of open working was as much strategic as it was technical, signaling their desire to enable competition with the big incumbents, namely IBM and the telecom monopolies.
A layered approach: The OSI reference model (left column) divides computer communications into seven distinct layers, from physical media in layer 1 to applications in layer 7. Though less rigid, the TCP/IP approach to networking can also be construed in layers, as shown on the right.
As expected, ISO approved the British request and named the U.S. database expert Charles Bachman as committee chairman. Widely respected in computer circles, Bachman had four years earlier received the prestigious Turing Award for his work on a database management system called the Integrated Data Store.
When I interviewed Bachman in 2011, he described the "architectural vision" that he brought to OSI, a vision that was inspired by his work with databases generally and by IBM's Systems Network Architecture in particular. He began by specifying a reference model that divided the various tasks of computer communication into distinct layers. For example, physical media (such as copper cables) fit into layer 1; transport protocols for moving data fit into layer 4; and applications (such as e-mail and file transfer) fit into layer 7. Once a layered architecture was established, specific protocols would then be developed.
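For reference, the seven layers of Bachman's model can be written down in a few lines; the layer names below are the standard ones from the reference model, while the `layer_name` helper is just an illustrative convenience invented here:

```python
# Bachman's OSI reference model: seven layers, from physical media up to applications.
OSI_LAYERS = {
    1: "Physical",       # e.g., copper cables
    2: "Data Link",
    3: "Network",
    4: "Transport",      # protocols for moving data end to end
    5: "Session",
    6: "Presentation",
    7: "Application",    # e.g., e-mail, file transfer
}

def layer_name(n):
    # Illustrative helper: look up a layer by its number.
    return OSI_LAYERS[n]

print(layer_name(4))  # prints "Transport"
```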
1974: IBM launches a packet-switching network called the Systems Network Architecture.
1975: INWG submits a proposal to the International Telegraph and Telephone Consultative Committee (CCITT), which rejects it. Cerf resigns from INWG.
1976: CCITT publishes Recommendation X.25, a standard for packet switching that uses "virtual circuits."
Bachman's design departed from IBM's Systems Network Architecture in a significant way: Where IBM specified a terminal-to-computer architecture, Bachman would connect computers to one another, as peers. That made it extremely attractive to companies like General Motors, a leading proponent of OSI in the 1980s. GM had dozens of plants and hundreds of suppliers, using a mix of largely incompatible hardware and software. Bachman's scheme would allow "interworking" between different types of proprietary computers and networks, so long as they followed OSI's standard protocols.
The layered OSI reference model also provided an important organizational feature: modularity. That is, the layering allowed committees to subdivide the work. Indeed, Bachman's reference model was just a starting point. To become an international standard, each proposal would have to complete a four-step process, starting with a working draft, then a draft proposed international standard, then a draft international standard, and finally an international standard. Building consensus around the OSI reference model and associated standards required an extraordinary number of plenary and committee meetings.
OSI's first plenary meeting lasted three days, from 28 February through 2 March 1978. Dozens of delegates from 10 countries participated, as well as observers from four international organizations. Everyone who attended had market interests to protect and pet projects to advance. Delegates from the same country often had divergent agendas. Many attendees were veterans of INWG who retained a wary optimism that the future of data networking could be wrested from the hands of IBM and the telecom monopolies, which had clear intentions of dominating this emerging market.
1977: International Organization for Standardization (ISO) committee on Open Systems Interconnection is formed with Charles Bachman [left] as chairman; other active members include Hubert Zimmermann [center] and John Day [right].
1980: U.S. Department of Defense publishes "Standards for the Internet Protocol and Transmission Control Protocol."
Meanwhile, IBM representatives, led by the company's capable director of standards, Joseph De Blasi, masterfully steered the discussion, keeping OSI's development in line with IBM's own business interests. Computer scientist John Day, who designed protocols for the ARPANET, was a key member of the U.S. delegation. In his 2008 book Patterns in Network Architecture (Prentice Hall), Day recalled that IBM representatives expertly intervened in disputes between delegates "fighting over who would get a piece of the pie. … IBM played them like a violin. It was truly magical to watch."
Despite such stalling tactics, Bachman's leadership propelled OSI along the precarious path from vision to reality. Bachman and Hubert Zimmermann (a veteran of Cyclades and INWG) forged an alliance with the telecom engineers in CCITT. But the partnership struggled to overcome the fundamental incompatibility between their respective worldviews. Zimmermann and his computing colleagues, inspired by Pouzin's datagram design, championed "connectionless" protocols, while the telecom professionals persisted with their virtual circuits. Instead of resolving the dispute, they agreed to include options for both designs within OSI, thus increasing its size and complexity.
This uneasy alliance of computer and telecom engineers published the OSI reference model as an international standard in 1984. Individual OSI standards for transport protocols, electronic mail, electronic directories, network management, and many other functions soon followed. OSI began to accumulate the trappings of inevitability. Leading computer companies such as Digital Equipment Corp., Honeywell, and IBM were by then heavily invested in OSI, as were the European Economic Community and national governments throughout Europe, North America, and Asia.
Even the U.S. government, the main sponsor of the Internet protocols (which were incompatible with OSI), jumped on the OSI bandwagon. The Defense Department officially embraced the conclusions of a 1985 National Research Council recommendation to transition away from TCP/IP and toward OSI. Meanwhile, the Department of Commerce issued a mandate in 1988 that the OSI standard be used in all computers purchased by U.S. government agencies after August 1990.
While such edicts may sound like the work of overreaching bureaucrats, remember that throughout the 1980s, the Internet was still a research network: It was growing rapidly, to be sure, but its managers did not allow commercial traffic or for-profit service providers on the government-subsidized backbone until 1992. For businesses and other large entities that wanted to exchange data between different kinds of computers or different types of networks, OSI was the only game in town.
January 1983: U.S. Department of Defense's mandated use of TCP/IP on the ARPANET signals the "birth of the Internet."
May 1983: ISO publishes "ISO 7498: The Basic Reference Model for Open Systems Interconnection" as an international standard.
1985: U.S. National Research Council recommends that the Department of Defense migrate gradually from TCP/IP to OSI.
1988: U.S. market revenues for computer communications: $4.9 billion.
That was not the end of the story, of course. By the late 1980s, frustration with OSI's slow development had reached a boiling point. At a 1989 meeting in Europe, the OSI advocate Brian Carpenter gave a talk titled "Is OSI Too Late?" It was, he recalled in a recent memoir, "the only time in my life" that he "got a standing ovation in a technical conference." Two years later, the French networking expert and former INWG member Pouzin, in an essay titled "Ten Years of OSI—Maturity or Infancy?," summed up the growing uncertainty: "Government and corporate policies never fail to recommend OSI as the solution. But, it is easier and quicker to implement homogenous networks based on proprietary architectures, or else to interconnect heterogeneous systems with TCP-based products." Even for OSI's champions, the Internet was looking increasingly attractive.
That sense of doom deepened, progress stalled, and in the mid-1990s, OSI's beautiful dream finally ended. The effort's fatal flaw, ironically, grew from its commitment to openness. The formal rules for international standardization gave any interested party the right to participate in the design process, thereby inviting structural tensions, incompatible visions, and disruptive tactics.
OSI's first chairman, Bachman, had anticipated such problems from the start. In a conference talk in 1978, he worried about OSI's chances of success: "The organizational problem alone is incredible. The technical problem is bigger than any one previously faced in information systems. And the political problems will challenge the most astute statesmen. Can you imagine trying to get the representatives from ten major and competing computer corporations, and ten telephone companies and PTTs [state-owned telecom monopolies], and the technical experts from ten different nations to come to any agreement within the foreseeable future?"
1988: U.S. Department of Commerce mandates that government agencies buy OSI-compliant products.
1989: As OSI begins to founder, computer scientist Brian Carpenter gives a talk entitled "Is OSI Too Late?" He receives a standing ovation.
1991: Tim Berners-Lee announces public release of the WorldWideWeb application.
1992: U.S. National Science Foundation revises policies to allow commercial traffic over the Internet.
Despite Bachman's and others' best efforts, the burden of organizational overhead never lifted. Hundreds of engineers attended the meetings of OSI's various committees and working groups, and the bureaucratic procedures used to structure the discussions didn't allow for the speedy production of standards. Everything was up for debate; even trivial nuances of language, like the difference between "you will comply" and "you should comply," triggered complaints. More significant rifts continued between OSI's computer and telecom experts, whose technical and business plans remained at odds. And so openness and modularity, the key principles for coordinating the project, ended up killing OSI.
Meanwhile, the Internet flourished. With ample funding from the U.S. government, Cerf, Kahn, and their colleagues were shielded from the forces of international politics and economics. ARPA and the Defense Communications Agency accelerated the Internet's adoption in the early 1980s, when they subsidized researchers to implement Internet protocols in popular operating systems, such as the modification of Unix by the University of California, Berkeley. Then, on 1 January 1983, ARPA stopped supporting the ARPANET host protocol, thus forcing its contractors to adopt TCP/IP if they wanted to stay connected; that date became known as the "birth of the Internet."
What's in a Name: At a July 1986 meeting in Newport, R.I., representatives from France, Germany, the United Kingdom, and the United States considered how the OSI reference model would handle the crucial functions of naming and addressing on the network. Photo: John Day
And so, while many users still expected OSI to become the future solution to global network interconnection, growing numbers began using TCP/IP to meet the practical near-term pressures for interoperability.
Engineers who joined the Internet community in the 1980s frequently misconstrued OSI, lampooning it as a misguided monstrosity created by clueless European bureaucrats. Internet engineer Marshall Rose wrote in his 1990 textbook that the "Internet community tries its very best to ignore the OSI community. By and large, OSI technology is ugly in comparison to Internet technology."
Unfortunately, the Internet community's bias also led it to reject any technical insights from OSI. The classic example was the "palace revolt" of 1992. Though not nearly as formal as the bureaucracy that devised OSI, the Internet had its Internet Activities Board and the Internet Engineering Task Force, responsible for shepherding the development of its standards. Such work went on at a July 1992 meeting in Cambridge, Mass. Several leaders, pressed to revise routing and addressing limitations that had not been anticipated when TCP and IP were designed, recommended that the community consider, if not adopt, some technical protocols developed within OSI. The hundreds of Internet engineers in attendance howled in protest and then sacked their leaders for their heresy.
1992: In a "palace revolt," Internet engineers reject the ISO ConnectionLess Network Protocol as a replacement for IP version 4.
1996: Internet community defines IP version 6.
2013: IPv6 carries approximately 1 percent of global Internet traffic.
Although Cerf and Kahn did not design TCP/IP for business use, decades of government subsidies for their research eventually created a distinct commercial advantage: Internet protocols could be implemented for free. (To use OSI standards, companies that made and sold networking equipment had to purchase paper copies from the standards group ISO, one copy at a time.) Marc Levilion, an engineer for IBM France, told me in a 2012 interview about the computer industry's shift away from OSI and toward TCP/IP: "On one side you have something that's free, available, you just have to load it. And on the other side, you have something which is much more architectured, much more complete, much more elaborate, but it is expensive. If you are a director of computation in a company, what do you choose?"
By the mid-1990s, the Internet had become the de facto standard for global computer networking. Cruelly for OSI's creators, Internet advocates seized the mantle of "openness" and claimed it as their own. Today, they routinely campaign to preserve the "open Internet" from authoritarian governments, regulators, and would-be monopolists.
In light of the success of the nimble Internet, OSI is often portrayed as a cautionary tale of overbureaucratized "anticipatory standardization" in an immature and volatile market. This emphasis on its failings, however, misses OSI's many successes: It focused attention on cutting-edge technological questions, and it became a source of learning by doing (including some hard knocks) for a generation of network engineers, who went on to create new companies, advise governments, and teach in universities around the world.
Beyond these simplistic declarations of "success" and "failure," OSI's history holds important lessons that engineers, policymakers, and Internet users should get to know better. Perhaps the most important lesson is that "openness" is full of contradictions. OSI brought to light the deep incompatibility between idealistic visions of openness and the political and economic realities of the international networking industry. And OSI eventually collapsed because it could not reconcile the divergent desires of all the interested parties. What then does this mean for the continued viability of the open Internet?
For more about the author, see the Back Story, "How Quickly We Forget."
How Quickly We Forget
History is written by the winners, as they say. And in the fast-moving world of technology, history can mean things that happened just 15 or 20 years ago. In "The Internet That Wasn't," in this issue, Andrew L. Russell, an assistant professor of history and director of the Program in Science & Technology Studies at Stevens Institute of Technology, in Hoboken, N.J., explores just such a case: an alternative scheme for computer networking that, despite years of effort by thousands of engineers, ultimately lost out to the Internet's Transmission Control Protocol/Internet Protocol (TCP/IP) and is now all but forgotten.
Russell first wrote about the competition between that scheme, called Open Systems Interconnection (OSI), and the Internet in 2006, for the IEEE Annals of the History of Computing. During his research on the Internet and its precursor, the ARPANET, "OSI would creep up as a foil, something they didn't want the Internet to turn into," he says. "So that's the way I presented it."
After the article was published, he says, veterans of OSI "came out of the woodwork to tell their stories." One of the e-mails was from a computer networking pioneer named John Day, who had worked on both TCP/IP and OSI. Day told Russell that his article hadn't captured the full scope of the story.
"Nobody likes to hear that they got it wrong," Russell recalls. "It took me a while to cool down." Eventually, he talked to Day, who put him in touch with other OSI participants in the United States and France. Through those interviews and archival research at the Charles Babbage Institute, in Minnesota, a more balanced, complex history of networking emerged, which he describes in his upcoming book Open Standards and the Digital Age: History, Ideology, and Networks (Cambridge University Press).
"It's almost alarming that something that recent can be so easily forgotten," Russell says. On the other hand, it's what makes being a historian of technology so rewarding.
This article appears in the August 2013 print issue as "The Internet That Wasn't."
To Probe Further
This article is a follow-up to a 2006 article Andrew L. Russell published in IEEE Annals of the History of Computing, called "'Rough Consensus and Running Code' and the Internet-OSI Standards War." And he will be delving into the history of OSI and the Internet, along with related topics such as standardization in the Bell System, in his upcoming book, Open Standards and the Digital Age: History, Ideology, and Networks, which will be published by Cambridge University Press in late 2013 or early 2014.
Janet Abbate's Inventing the Internet (MIT Press, 1999) is an excellent account of the events that led to the development of the Internet as we know it.
Alexander McKenzie's article "INWG and the Conception of the Internet: An Eyewitness Account," published in the January 2011 issue of IEEE Annals of the History of Computing, builds on documents McKenzie saved from his experience with the International Networking Working Group and that now are archived at the Charles Babbage Institute at the University of Minnesota, Minneapolis.
James Pelkey's online book Entrepreneurial Capitalism and Innovation: A History of Computer Communications, 1968–1988 is based on interviews and documents he collected in the late 1980s and early 1990s, a time when OSI seemed certain to dominate the future of computer internetworking. Pelkey's project also was described in a recent Computer History Museum blog post celebrating the 40th anniversary of Ethernet.