The Rise of the Internet
This is part 3 of a seven-article series examining the technological history and theory behind the curation of human thought and discovery.
Since human beings could first put complex thoughts together, they have been trying to save those ideas for a time when they would prove beneficial. Ever since, technology has evolved to meet our need to curate our histories, discoveries, and thoughts. Symbols gave rise to spoken words, spoken words became written language, and from there we refined how we could record and archive those sounds and symbols in ways that would let them endure long after we had passed on. The printing press made select curated information accessible to the masses, and sound recording coupled with photography gave birth to film, which enabled curation by countless more individuals. All of this technology would come together to produce the keystone of modern curation: the computer.
Like film, the story of the modern computer is one of technologies coming together. Early computers were more akin to calculators than anything else. Used to ‘compute’ mathematical operations too unwieldy for any person to manage, computers began life as tools that held temporary information and used it to work through complex equations.
Much like a library, computers could hold large amounts of data in ways a single person could not. Unlike a library, early computers couldn’t hold that information indefinitely without continuous power. The emergence of non-volatile memory in the form of hard disks would solve this problem and signal the beginning of digital curation.
As technological breakthroughs brought on smaller and smaller microchips and more sophisticated machines, computers picked up additional capabilities by integrating older discoveries. Word processing software, coupled with printers, would rattle the foundations of the publishing industry by putting the power to compose and print in the hands of the many instead of the few. While supplanting established publishers was a long way off, it was an early sign of the computer’s ability to take on the important task of idea curation.
The additional ability to view pictures, play sound, and watch video would set the stage for complete computer curation. For all of their strengths, computers were still limited to holding information in one place, accessible to only one user at a time. The information that could be curated with them was limited to what their end users could provide; they were no better than personal libraries. Fortunately, several teams of researchers were onto something that would solve this problem. That ‘something’ would evolve into what would later be called the Internet.
October 4, 1957, was a turning point for curation. It was on that day the Soviet Union succeeded in putting a beach ball-sized satellite named Sputnik into orbit. Coupled with the tensions of the Cold War, this event set off an effort by the U.S. and its allies to drastically scale up their science and research capabilities. That effort was responsible for the creation of ARPANET, a network that allowed computer resources to be shared across long distances. Combined with the Internet protocol suite (TCP/IP), ARPANET formed the foundation of the modern Internet.
This was a fantastic accomplishment. Information stored on one computer could now be shared with another, giving more people more access to more information. Curated information was no longer a rival or regionally bound good; any number of people could access it from anywhere. The era of indexing information in physical libraries was beginning to wane. The potential of this new technology was astounding, but it wouldn’t go mainstream and realize that potential until the introduction of the World Wide Web.
While ARPANET and the future projects it would spawn helped shape the path of human interaction and information storage, there were limitations. Accessing information required action from both the sender and the receiver, which made referencing information cumbersome. General access to the early Internet also wasn’t the norm: most connections ran through universities, governments, and businesses, and computers were still prohibitively expensive. It wouldn’t be until the ’90s, when personal computers were more ubiquitous, that the developed world would be ready to make use of the net.
Meanwhile, the efforts of a team of researchers at CERN, led by Tim Berners-Lee, would produce three vital technologies that would power the coming digital revolution: Uniform Resource Locators (URLs), Hypertext Markup Language (HTML), and Hypertext Transfer Protocol (HTTP). These three technologies, coupled with the previous advancements of ARPANET and its descendants, made up what would be called the World Wide Web. Computers the world over could now leverage the Web to access information curated on other computers. Accessibility to information had been achieved, but the tools needed to access that information were still out of reach for many. This need would be met by the Mosaic web browser, which made navigating the World Wide Web approachable for the average person, a requirement if the Web was to provide a future for curation. While Mosaic technically wasn’t the first web browser, it was the one (along with its successor, Netscape Navigator) that brought the masses into this brave new digital world.
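The division of labor among those three technologies still defines the Web today: a URL names where a resource lives, HTTP carries the request and response between computers, and HTML describes the document that comes back. As a minimal sketch (the URL below is an illustrative placeholder, not tied to anything in this history), a few lines of Python’s standard library show all three at work:

```python
# A minimal sketch of the Web's three building blocks working together.
# The URL is an illustrative placeholder; any public web page would do.
from urllib.request import urlopen

url = "http://example.com/"        # URL: names where the resource lives

with urlopen(url) as response:     # HTTP: a GET request and its response
    print(response.status)         # e.g. 200 (OK)
    html = response.read().decode("utf-8")

print(html[:60])                   # HTML: markup describing the document
```

Every browser since Mosaic performs essentially this exchange, then renders the returned HTML rather than printing it.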
In just over half a century, the computer went from calculator to connector. It could facilitate the creation, collection, and curation of human thought on a scale rivaling the greatest libraries in history, and preserve that information practically forever. The Internet and the Web took the potential of computing to new heights, but it wouldn’t end there. The net was changing fast and would soon come into its own as a curation powerhouse. The era of the modern web was about to begin.
Come Join Us on Gojurn!