The Internet is a worldwide web of interconnected university, business, military, and science networks. Why the term web? Isn't the Internet just one network? Not at all! It is a network of networks. The Internet is made up of little Local Area Networks (LANs), citywide Metropolitan Area Networks (MANs), and huge Wide Area Networks (WANs) that connect computers for organizations all over the world.
These networks are hooked together with everything from regular dial-up phone lines to high-speed dedicated leased lines, satellites, microwave links, and fiber optic links. And the fact that they're "on" the Internet means that all these networks are interconnected. This network web extends all over the world, but trying to describe all of it and how it fits together is a bit like trying to count the stars. In fact, so many networks are interconnected within the Internet that it's impossible to show an accurate, up-to-date picture. Some network maps show the Internet as a cloud, because it's just too complex to draw in all of the links. To complicate matters, new computers and links are being added every day. It's estimated a new network is added every 20 minutes.
"I'm starting to think of the Internet as a kaleidoscope. It is just so much broken glass and trinkets. Users turn the mirrors and lenses and suddenly, meaning snaps into place for them, where before there was only chaos. My job at NYSERNet is tuning the mirrors and polishing the lenses."
So think of the Internet as a "cloud of links." The cloud hides all the ugly details—the hardware, the physical links, the acronyms, and the network engineers. Remember that you don't actually need to know all the details to communicate and use resources on the Internet.
Overall, the Internet is the fastest global network around. Speed is often referred to as throughput—how fast information can be propelled through the network. The Internet isn't just one speed because, as explained above, it can accommodate both slow networks and the latest technology. There are networks on the Internet capable of transmitting 45 megabits (about 5,000 typescript pages) per second. The most typical connection speeds are 56Kbps, popular with small organizations, and T1 (1.544Mbps) for larger organizations. Gigabit-per-second network speeds currently being tested will allow even more advanced applications and services, such as complex weather prediction models produced by supercomputers and transmitted to weather centers. Or transmitting extremely large (tens or hundreds of megabytes) databases—for example, earthquake data transferred from a collection site to the Institute of Geophysics and Planetary Physics for analysis. Or video conferences including people from all over the world.
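To see where the "5,000 typescript pages per second" estimate comes from, here is a quick back-of-the-envelope calculation for the connection speeds mentioned above. The figure of roughly 1,100 characters per typescript page is an assumption made for illustration:

```python
# A rough throughput comparison for the link speeds mentioned above.
# CHARS_PER_PAGE is an assumption (~1,100 bytes per double-spaced
# typescript page), chosen to illustrate the book's estimate.

SPEEDS_BPS = {                       # link speeds in bits per second
    "56Kbps leased line": 56_000,
    "T1 (1.544Mbps)": 1_544_000,
    "45Mbps backbone": 45_000_000,
}

CHARS_PER_PAGE = 1_100               # assumed size of one typescript page, in bytes

def pages_per_second(bits_per_second):
    """How many typescript pages a link can move each second."""
    bytes_per_second = bits_per_second / 8   # 8 bits per character
    return bytes_per_second / CHARS_PER_PAGE

for name, speed in SPEEDS_BPS.items():
    print(f"{name}: about {pages_per_second(speed):,.0f} pages per second")
```

At 45 megabits per second this works out to roughly 5,100 pages every second—in the same ballpark as the estimate in the text.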
The Internet was not born full-blown in its present worldwide form of thousands of networks and connections. It had a humble—but exciting—beginning as one network called the ARPANET, the "Mother of the Internet." The ARPANET, described in Chapter 1, initially linked researchers with remote computer centers, allowing them to share hardware and software resources, such as computer disk space, databases, and computers. Other experimental networks were connected with the ARPANET by using an internetwork technology sponsored by DARPA. The original ARPANET itself split into two networks in the early 1980s, the ARPANET and Milnet (an unclassified military network), but connections made between the networks allowed communication to continue. At first this interconnection of experimental and production networks was called the DARPA Internet, but later the name was shortened to just "the Internet."
Access to the ARPANET in the early years was limited to the military, defense contractors, and universities doing defense research. Cooperative, decentralized networks such as UUCP, a worldwide Unix communications network, and USENET (User's Network) came into being in the late 1970s, initially serving the university community and, later, commercial organizations. In the early 1980s, more-coordinated networks, such as the Computer Science Network (CSNET) and BITNET, began providing nationwide networking to the academic and research communities. These networks were not part of the Internet, but later special connections were made to allow the exchange of information between the various communities.
The next big moment in Internet history was the birth in 1986 of the National Science Foundation Network (NSFNET), which linked researchers across the country with five supercomputer centers. Soon expanded to include the mid-level and statewide academic networks that connected universities and research consortiums, the NSFNET began to replace the ARPANET for research networking. The ARPANET was honorably discharged (and dismantled) in March 1990. CSNET soon found that many of its early members (computer science departments) were connected via the NSFNET, so it ceased to exist in 1991.
The computers on a network have to be able to talk to one another. To do that they use protocols, which are just rules or agreements on how to communicate. Standards were mentioned in Chapter 1 as an important aspect in computer networking. There are lots of protocol standards out there, such as DECnet, SNA, IPX, and AppleTalk, but to actually communicate, two computers have to be using the same protocol at the same time. TCP/IP, which stands for Transmission Control Protocol/Internet Protocol, is the language of the Internet. You may speak Japanese and I may speak English, but if we both speak French, we can communicate. So any computer that wants to communicate on the Internet must "speak" TCP/IP.
Developed by DARPA in the 1970s, TCP/IP was part of an experiment in internetworking—that is, connecting different types of networks and computer systems. Adopted throughout the ARPANET in 1983, it was also implemented and made available at no cost for computers running the Berkeley Software Distribution (BSD) of the Unix operating system. TCP/IP, developed with public funds, is considered an open, non-proprietary protocol, and there are now implementations of it for almost every type of computer on the planet. "Non-proprietary" means that no one company—not IBM, not DEC, not Novell—has exclusive rights to the products needed to connect to the Internet. Any number of companies, including those just mentioned, make the hardware and software necessary for the network connection.
TCP/IP isn't the only protocol suite that is considered "open." Since the early 1980s, the International Organization for Standardization (ISO) has been developing the Open Systems Interconnection (OSI) protocols. While many of the OSI protocols and applications are still evolving, a few are actually being used in some networks on the Internet, and more are planned. So even though most of the computers speak TCP/IP, the Internet is officially considered a "multi-protocol" network.
The whole idea of protocols and standards can get complicated, but as an Internet neophyte, all you need to be concerned with are the applications that TCP/IP offers. The difference between applications and protocols is that you don't actually see the protocols (they're invisible to the end user), but you will access the Internet using the applications that conform to these standards.
Three TCP/IP applications—electronic mail, remote login, and file transfer—are the Internet equivalent of the hammer, screwdriver, and crescent wrench in your toolbox. There are plenty of fancier applications using variations on or combinations of these basic tools, but wherever you roam on the Internet, you should have the Big Three available to you. The three basic Internet services, as well as the more powerful and colorful applications, are covered in later chapters, but here's a quick introduction to get you on your way.
Electronic mail, also known as email or messaging, is the most commonly available and most frequently used service on the Internet. Email lets you send a text message to another person or to a whole group of people. For example, a third-grade student in Texas can send an email message to a third-grader in Japan to ask how kids spend their free time there. Or a group of teachers can have an email conference on using the Internet in the classroom.
Remote login is an interactive tool that allows you to access the programs and applications available on another computer. For example, say Sven, a student at the University of Oslo, is heading out to a ski vacation in the Rocky Mountains and wants to check the weather conditions and snowfall there. An Internet computer at the University of Michigan houses a weather database called the Weather Underground, with temperatures, precipitation data, and even earthquake alerts for the entire United States. Sven uses the remote login tool to connect to this computer and interactively query the Weather Underground for the information he needs.
File transfer, the third of the "Big Three" tools, allows files to be transferred from one computer to another. A file can be a document, graphics, software, spreadsheets—even sounds! For example, you may be interested in information on Chernobyl from the Library of Congress's "Glasnost" online exhibit of documents from the former Soviet Union. Using file transfer, you can download those articles from the computer where they're stored onto your own personal computer, where you can read them, print them out, or clip and incorporate parts of them into a paper you're writing.
There are quite a few applications available today that use a combination or variation of these three tools to hide details even further. These operate on a client/server model—that is, you use the client on your computer, and it contacts servers for directions and information. Clients and servers don't have to be located in the same geographical area, and in many cases on the Internet, they aren't. This technology is very flexible; during one session, your client may access servers all over the world to help you find information. The client/server concept is explained further in Chapter 4.
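The client/server exchange described above can be sketched in a few lines of code. This is a toy illustration, not a real Internet service: the "QUERY"/"ANSWER" strings are invented, and both client and server run on the same machine here, though in practice they could be continents apart:

```python
# A minimal sketch of the client/server model: a throwaway server that
# answers one request, and a client that connects and asks a question.
# The request and reply strings are invented for illustration; real
# Internet servers speak standard protocols instead.

import socket
import threading

def run_server(sock):
    """Accept one client, read its request, send back an answer."""
    conn, _addr = sock.accept()
    with conn:
        request = conn.recv(1024).decode()
        conn.sendall(f"ANSWER to {request}".encode())

# The server binds to a port and listens, like a host on the Internet
# waiting for clients anywhere in the world to connect.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))        # port 0: let the system pick a free port
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=run_server, args=(server,), daemon=True).start()

# The client connects, sends a request, and reads the reply. It neither
# knows nor cares where the server actually runs.
client = socket.create_connection(("127.0.0.1", port))
client.sendall(b"QUERY: snow conditions?")
reply = client.recv(1024).decode()
client.close()
server.close()
print(reply)
```

The point of the model is the division of labor: the client only knows how to ask and display, the server only knows how to answer, and the network in between can be a single wire or the whole Internet.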
As the Internet grows larger, locating the information you need will become difficult unless you're using information discovery and retrieval tools. The major resource-browsing applications, which operate on the client/server concept, include archie, Gopher, WorldWideWeb (WWW), Wide Area Information Servers (WAIS), and Mosaic. Chapter 4 provides explanations for all of these and gets you started in using them.
When you're actually using the above-mentioned tools, information of various types is being transferred from one computer to another. TCP/IP breaks this information into chunks called packets. Each packet contains a piece of the information or document (several hundred characters, or bytes), plus some ID tags, such as the addresses of the sending and receiving computers.
Suppose that you wanted to take apart an old covered bridge in New England and move it lock, stock, and barrel to California (people do do these things). You would dismantle the sections, label them very carefully, and ship them out on three, four, maybe even five different trucks. Some take the northern route and some the southern route, and one has to go only through Texas. The trucks get to California at various times, with one arriving a little later than the others, but your careful labels indicate which sections go up first, second, and third.
Each packet, as TCP/IP handles it with its addressing information, can travel just as independently. Because of all the network interconnections, there are often multiple paths to a destination. Just as you might drive a different route to work to save a few minutes here or there, the packets may travel different networks to get to the destination computer. The packets may arrive out of order, but that's okay, because each packet also contains sequence information about where the data it's carrying goes in the document, and the receiving computer can reconstruct the whole enchilada. That's why the Internet is known as a packet-switched network. The switches are computers called routers, which are programmed to figure out the best packet routes, just as a travel agent might help you find the best flights with the fewest layovers. Routers are the airport hubs of the Internet; they connect the networks and shuttle packets back and forth. The packet is just a chunk of information; it doesn't care (or know) how fast it travels. So it can travel over a "fighter-jet" network—running at Mach-whatever speeds and connecting supercomputers—that interconnects with a "biplane" network operating a lot slower.
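The covered-bridge story above can be acted out in a few lines of code: cut a message into packets, tag each one with addresses and a sequence number, let them arrive in scrambled order, and rebuild the original. This is only a sketch—real TCP/IP headers carry far more than shown here, and the host names are made up:

```python
# A sketch of packet switching: split a message into tagged chunks,
# deliver them out of order, and reassemble using the sequence numbers.
# (Real TCP/IP packets carry checksums, ports, and more; the host names
# below are invented for illustration.)

import random

def packetize(message, src, dst, size=10):
    """Split a message into packets of at most `size` characters each."""
    return [
        {"src": src, "dst": dst, "seq": i, "data": message[i:i + size]}
        for i in range(0, len(message), size)
    ]

def reassemble(packets):
    """Sort packets by sequence number and rejoin the pieces."""
    ordered = sorted(packets, key=lambda p: p["seq"])
    return "".join(p["data"] for p in ordered)

message = "Moving a covered bridge from New England to California."
packets = packetize(message, src="newengland.example", dst="california.example")
random.shuffle(packets)   # packets take different routes and arrive out of order
print(reassemble(packets) == message)
```

Just as the labels on the bridge sections tell the builders which piece goes where, the sequence numbers let the receiving computer rebuild the document no matter what order the packets arrive in.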
The Internet network connections don't follow any specific model, but there is a hierarchy of sorts. The high-speed central networks are known as backbones. The electronic equivalent of an interstate highway system, they accept traffic from and deliver it to the mid-level networks. An example of such a backbone system is Canada's CA*net, a nationwide network that connects all its province networks. Australia's Academic and Research Network (AARNet) is a nationwide network connecting its member organizations. Mid-level networks, in turn, take traffic from the backbones and distribute it to member networks, the neighborhood roads of the networking world. For example, the Texas Higher Education Network (THENet) is a mid-level network, connecting over 100 universities and research facilities in Texas. The organizational networks that connect to these nationwide and mid-level backbones may be very big networks themselves. For example, Vienna University in Austria has a large campus network that connects its university departments.
Each of the network links has speed limitations, but speeds are determined by the technology used (not by some "packet policeman"). Wide-area connections are slower than local-area networks. A WAN link is typically 1.544Mbps or 56Kbps. (More and more wide-area networks, however, are starting to operate at 45Mbps.) Local-area networks are much faster. Ethernet, a popular LAN technology, runs at 10Mbps. Compare that with another local-area networking technology, Fiber Distributed Data Interface (FDDI), which runs at 100Mbps. An easy way to understand these speeds is to imagine each of these technologies as a system of water pipes. More water can be pumped through bigger pipes during a given period of time, so they have more bandwidth. Local-area network pipes are usually pretty large, and therefore more water (or data) can be blasted through them than can be pumped (transmitted) during the same amount of time through a wide-area network pipe.
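The pipe analogy can be put in numbers by asking how long the same file takes to travel through each kind of link. The 10-megabyte file size below is an arbitrary example, and the times are ideal ones that ignore protocol overhead and congestion:

```python
# The pipe analogy in numbers: ideal time to move one file through
# links of different bandwidth. FILE_BYTES is an arbitrary example
# size; real transfers are slower due to overhead and shared traffic.

LINKS_BPS = {                        # link speeds in bits per second
    "56Kbps WAN link": 56_000,
    "T1 WAN link (1.544Mbps)": 1_544_000,
    "Ethernet LAN (10Mbps)": 10_000_000,
    "FDDI LAN (100Mbps)": 100_000_000,
}

FILE_BYTES = 10 * 1_000_000          # a 10-megabyte file

def transfer_seconds(bits_per_second, size_bytes=FILE_BYTES):
    """Ideal transfer time: file size in bits divided by link speed."""
    return size_bytes * 8 / bits_per_second

for name, speed in LINKS_BPS.items():
    print(f"{name}: {transfer_seconds(speed):10.1f} seconds")
```

The same file that crawls through a 56Kbps wide-area pipe for about 24 minutes pours through an FDDI ring in under a second—the difference between a garden hose and a water main.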
Once all the pipes—networks—are in place, the Internet, which is actually tens of thousands of networks, looks seamless to the user. By means of internetworking—that is, by connecting networks together to enable communication and information exchange—all the details are hidden from you: the packets, the routers, and all those interconnections. Despite legions of different computers and disparate networks, somehow the whole web works, and any computer directly connected to the Internet can talk to all the other computers on the Internet. So you, working on a computer in your office in Israel or in your spare bedroom in Los Angeles, can communicate with a colleague in South Africa or a friend in Calgary. It's as if you are directly connected by one wire.
Copyright © 1994 by Tracy LaQuey and Editorial Inc.