3.3  WIDE AREA NETWORKS

1. Introduction

A wide area network, or WAN, spans a large geographical area, often a country or continent. It contains a collection of machines intended for running user (i.e., application) programs.

In most WANs, the network contains numerous cables or telephone lines, each one connecting a pair of routers. If two routers that do not share a cable nevertheless wish to communicate, they must do so indirectly, via other routers. When a packet is sent from one router to another via one or more intermediate routers, the packet is received at each intermediate router in its entirety, stored there until the required output line is free, and then forwarded. A subnet using this principle is called a point-to-point, store-and-forward, or packet-switched subnet. Nearly all wide area networks (except those using satellites) have store-and-forward subnets. When the packets are small and all the same size, they are often called cells.
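
To make the store-and-forward idea concrete, here is a minimal sketch in Python; the router names, packet format, and routing table are purely illustrative, not part of any standard.

    from collections import deque

    class Router:
        """Illustrative store-and-forward router: a packet is queued in its
        entirety before being passed one hop closer to its destination."""
        def __init__(self, name):
            self.name = name
            self.queue = deque()   # packets stored until the output line is free
            self.next_hop = {}     # destination name -> neighboring Router

        def receive(self, packet, destination):
            self.queue.append((packet, destination))

        def forward(self):
            packet, destination = self.queue.popleft()
            if destination == self.name:
                print(f"{self.name}: delivered {packet!r}")
            else:
                hop = self.next_hop[destination]   # no direct line: relay
                hop.receive(packet, destination)
                hop.forward()

    # Routers A and C share no line, so A must relay through B.
    a, b, c = Router("A"), Router("B"), Router("C")
    a.next_hop["C"] = b
    b.next_hop["C"] = c
    a.receive("hello", "C")
    a.forward()   # prints: C: delivered 'hello'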

When a point-to-point subnet is used, an important design issue is what the router interconnection topology should look like. Fig. 3-3 shows several possible topologies. Local networks that are designed as such usually have a symmetric topology. In contrast, wide area networks typically have irregular topologies.

A second possibility for a WAN is a satellite or ground radio system. Each router has an antenna through which it can send and receive. All routers can hear the output from the satellite, and in some cases they can also hear the upward transmissions of their fellow routers to the satellite. Sometimes the routers are connected to a substantial point-to-point subnet, with only some of them having a satellite antenna. Satellite networks are inherently broadcast and are most useful when the broadcast property is important.

Fig. 3-3  Some possible topologies for a point-to-point subnet

2. X.25 Networks

The packet switching protocol most widely used today is called X.25. First published by CCITT (the predecessor of the ITU-T) in 1976, X.25 has been revised several times.

According to the formal definition given in the ITU-T standard, X.25 is an interface between data terminal equipment (DTE) and data circuit-terminating equipment (DCE) for terminals operating in the packet mode on public data networks. Informally, we can say that X.25 is a packet-switching protocol used in wide area networks.

X.25 defines how a packet-mode terminal can be connected to a packet network for the exchange of data. It describes the procedures necessary for establishing, maintaining, and terminating connections (such as connection establishment, data exchange, acknowledgment, flow control, and error control). It also describes a set of services, called facilities, to provide functions such as reverse charge, call direct, and delay control.

Most X.25 networks work at speeds up to 64 Kbps, which makes them obsolete for many purposes. Nevertheless, they are still widespread, so readers should be aware of their existence.

X.25 is connection-oriented and supports both switched virtual circuits and permanent ones. A switched virtual circuit is created when one computer sends a packet to the network asking to make a call to a remote computer. Once the circuit is established, packets can be sent over the connection, always arriving in order. X.25 provides flow control, to make sure a fast sender cannot swamp a slow or busy receiver.
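
X.25's flow control is window based: the sender may have only a limited number of unacknowledged packets outstanding. The sketch below shows the general mechanism only, not the actual X.25 packet formats; the window size of 7 is the common default for 3-bit sequence numbering.

    WINDOW = 7   # common default window for 3-bit sequence numbers

    def transmit(seq, packet):
        print(f"sent seq={seq} {packet!r}")

    def wait_for_ack():
        # A real implementation would block on the network here.
        print("ack received, window slides")
        return 1   # one packet acknowledged

    def send_all(packets):
        unacked = 0
        for seq, packet in enumerate(packets):
            while unacked >= WINDOW:    # window full: the receiver sets the pace
                unacked -= wait_for_ack()
            transmit(seq % 8, packet)   # sequence numbers wrap modulo 8
            unacked += 1

    send_all([f"packet-{i}" for i in range(10)])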

A permanent virtual circuit is used the same way as a switched one, but it is set up in advance by agreement between the customer and the carrier. It is always present, and no call setup is required to use it. It is analogous to a leased line.

3. Frame Relay

Frame relay is a service for people who want an absolutely bare-bones connection-oriented way to move bits from A to B at reasonable speed and low cost. Its existence is due to changes in technology over the past two decades. Twenty years ago, communication using telephone lines was slow, analog, and unreliable, and computers were slow and expensive. As a result, complex protocols were required to mask errors, and the users' computers were too expensive to have them do this work.

The situation has changed radically. Leased telephone lines are now fast, digital, and reliable, and computers are fast and inexpensive. This suggests the use of simple protocols, with most of the work being done by the users' computers, rather than by the network. It is this environment that frame relay addresses.

Frame relay can best be thought of as a virtual leased line. The customer leases a permanent virtual circuit between two points and can then send frames (i.e., packets) of up to 1600 bytes between them. It is also possible to lease permanent virtual circuits between a given site and multiple other sites, so each frame carries a 10-bit number telling which virtual circuit to use.
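
The 10-bit circuit number is the DLCI (data link connection identifier), split across the two bytes of the frame relay address field. A sketch of extracting it, assuming the standard two-byte Q.922 header layout:

    def dlci(header: bytes) -> int:
        """Extract the 10-bit DLCI from a two-byte frame relay address field.
        Byte 0 holds the upper 6 DLCI bits, then the C/R and EA bits;
        byte 1 holds the lower 4 DLCI bits, then FECN, BECN, DE, and EA."""
        high = (header[0] >> 2) & 0x3F   # upper 6 bits
        low = (header[1] >> 4) & 0x0F    # lower 4 bits
        return (high << 4) | low

    print(dlci(bytes([0b00000100, 0b00000001])))   # -> 16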

The difference between an actual leased line and a virtual leased line is that with an actual one, the user can send traffic all day long at the maximum speed. With a virtual one, data may be sent in bursts at full speed, but the long-term average usage must be below a predetermined level. In return, the carrier charges much less for a virtual line than for a physical one.
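
Carriers usually express the "predetermined level" as a committed information rate (CIR). A toy illustration of the bookkeeping, with made-up numbers:

    CIR = 64_000    # committed bits per second (illustrative figure)
    INTERVAL = 10   # measurement window in seconds

    def within_contract(bits_per_second):
        """bits_per_second: per-second bit counts over the measurement window."""
        average = sum(bits_per_second) / INTERVAL
        return average <= CIR

    # A burst at ten times the committed rate is fine if the line then goes quiet.
    burst = [640_000] + [0] * 9
    print(within_contract(burst))   # True: the long-term average equals the CIR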

In addition to competing with leased lines, frame relay also competes with X.25 permanent virtual circuits, except that it operates at higher speeds, usually 1.5 Mbps, and provides fewer features.

Frame relay provides a minimal service, primarily a way to determine the start and end of each frame, and detection of transmission errors. If a bad frame is received, the frame relay service simply discards it. It is up to the user to discover that a frame is missing and take the necessary action to recover. Unlike X.25, frame relay does not provide acknowledgements or normal flow control. It does have a bit in the header, however, which one end of a connection can set to indicate to the other end that problems exist. The use of this bit is up to the users.

4. Broadband ISDN and ATM

As applications using telecommunications networks advanced, however, the data rates of 64 Kbps to 2.048 Mbps, which narrowband ISDN provides, proved inadequate to support many applications. In addition, the original bandwidths proved too narrow to carry the large number of concurrent signals produced by a growing industry of digital service providers.

To provide for the needs of the next generation of technology, an extension of ISDN, called broadband ISDN (B-ISDN), is under study.

B-ISDN is a new wide area service. It will offer video on demand, live television from many sources, full-motion multimedia electronic mail, CD-quality music, LAN interconnection, high-speed data transport for science and industry, and many other services that have not yet even been thought of, all over the telephone line.

The underlying technology that makes B-ISDN possible is called ATM (Asynchronous Transfer Mode) because it is not synchronous (tied to a master clock), as most long distance telephone lines are. Note that the acronym ATM here has nothing to do with the automated teller machines many banks provide (although an automated teller machine can use an ATM network to talk to its bank).

A great deal of work has already been done on ATM and on the B-ISDN system that uses it, although there is more ahead.

The basic idea behind ATM is to transmit all information in small, fixed-size packets called cells. The cells are 53 bytes long, of which 5 bytes are header and 48 bytes are payload. ATM is both a technology (hidden from the users) and potentially a service (visible to the users). Sometimes the service is called cell relay, as an analogy to frame relay.
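
A sketch of chopping a message into 53-byte cells (5-byte header plus 48-byte payload). The all-zero header here is a placeholder; real ATM headers carry VPI/VCI routing fields and a header checksum.

    CELL, HEADER = 53, 5
    PAYLOAD = CELL - HEADER   # 48 bytes of user data per cell

    def to_cells(data: bytes, header: bytes = b"\x00" * HEADER):
        """Split data into fixed-size cells, zero-padding the last payload."""
        return [header + data[i:i + PAYLOAD].ljust(PAYLOAD, b"\x00")
                for i in range(0, len(data), PAYLOAD)]

    cells = to_cells(b"x" * 100)   # 100 bytes -> 3 cells (48 + 48 + 4 padded)
    print(len(cells), all(len(c) == CELL for c in cells))   # 3 True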

The use of a cell-switching technology is a gigantic break with the 100-year-old tradition of circuit switching (establishing a copper path) within the telephone system. There are a variety of reasons why cell switching was chosen, among which are the following. First, cell switching is highly flexible and can handle both constant-rate traffic (audio, video) and variable-rate traffic (data) easily. Second, at the very high speeds envisioned (gigabits per second are within reach), digital switching of cells is easier than using traditional multiplexing techniques, especially over fiber optics. Third, for television distribution, broadcasting is essential; cell switching can provide this and circuit switching cannot.

ATM networks are connection-oriented. Making a call requires first sending a message to set up the connection. After that, subsequent cells all follow the same path to the destination. Cell delivery is not guaranteed, but their order is. If cell 1 and cell 2 are sent in that order, then if both arrive, they will arrive in that order, never cell 2 before cell 1.

ATM networks are organized like traditional WANs, with lines and switches (routers). The intended speeds for ATM networks are 155 Mbps and 622 Mbps, with the possibility of gigabit speeds later. The 155 Mbps speed was chosen because this is about what is needed to transmit high-definition television. The exact choice of 155.52 Mbps was made for compatibility with AT&T's SONET transmission system. The 622 Mbps speed was chosen so that four 155 Mbps channels could be sent over it. By now it should be clear why some of the gigabit testbeds operated at 622 Mbps: they used ATM.

KEYWORDS

Wide Area Network (WAN)    广域网
store-and-forward    存储转发
packet-switch    分组交换
Integrated Services Digital Network (ISDN)    综合业务数字网
N-ISDN    窄带ISDN
B-ISDN    宽带ISDN
teleservices    远程服务
switched virtual circuit    交换式虚电路
permanent virtual circuit    永久式虚电路
frame relay    帧中继
ATM    异步传输模式
cell    信元
cell relay    信元中继

NOTES

1. integrated services digital network (ISDN). ISDN integrates telephone voice and computer multimedia data onto a single high-speed digital transmission line, so that one "line" can provide a customer with both voice and data services at the same time.

2. N-ISDN (narrowband ISDN). Provides low-speed services from 56 Kbps to 2 Mbps.

3. B-ISDN (broadband ISDN). Broadband ISDN uses ATM technology and can provide high-speed connections from 2 Mbps to 600 Mbps. By the CCITT definition, B-ISDN refers to systems whose transfer (transmission, multiplexing, and switching) rates exceed the primary rate (2.048 Mbps or 1.544 Mbps).

4. frame relay. Frame relay is a newer type of data transmission network. It is called frame relay because most operations in the network take place at layer 2 of the OSI reference model, the data link layer, also known as the frame layer.

5. ATM (Asynchronous Transfer Mode). Its basic rate is 150 Mbps, and it can support broadband services such as high-definition television (HDTV), multimedia videoconferencing, color facsimile, and remote interactive education systems.

EXERCISES

1. Fill in the following blanks.

(1) In most WANs, when a packet is sent from one router to another via one or more intermediate routers, the packet is received at each intermediate router in its entirety, stored there until the required output line is free, and then forwarded. A subnet using this principle is called a ________, ________, or ________ subnet.

(2) The purpose of the N-ISDN is to provide fully integrated digital services to users. These services fall into three categories: ________, ________, and ________.

(3) X.25 is connection-oriented and supports both ________ virtual circuits and ________ virtual circuits.

(4) A ________ is created when one computer sends a packet to the network asking to make a call to a remote computer. A ________ is set up in advance by agreement between the customer and the carrier. It is always present, and no call setup is required to use it.

(5) The basic idea behind ATM is to transmit all information in small, fixed-size packets called ________. The cells are ________ bytes long, of which ________ bytes are header and ________ bytes are payload. ATM is both a technology (hidden from the users) and potentially a service (visible to the users). Sometimes the service is called ________, as an analogy to frame relay.

2. Single choice.

(1) ISDN is an acronym for ________.

A. Information Services for Digital Networks

B. Internetwork System for Data Networks

C. Integrated Services Digital Network

D. Integrated Signals Digital Network

(2) In ISDN ________, the network can change or process the contents of the data.

A. bearer services        B. teleservices

C. supplementary services        D. none of the above

(3) In ISDN ________, the network does not change or process the contents of the data.

A. bearer services        B. teleservices

C. supplementary services        D. none of the above

(4) Most X.25 networks work at speeds up to ________.

A. 32 Kbps        B. 64 Kbps

C. 128 Kbps        D. none of the above

(5) X.25 protocol uses ________ for end-to-end transmission.

A. message switching        B. circuit switching

C. the datagram approach        D. the virtual circuit approach

3. List four possible topologies for a point-to-point subnet.

READING MATERIALS

WEB HARVESTING

As the amount of information on the Web grows, that information becomes ever harder to keep track of and use. Search engines are a big help, but they can do only part of the work, and they are hard-pressed to keep up with daily changes.

Consider that even when you use a search engine to locate data, you still have to do the following tasks to capture the information you need: scan the content until you find the information, mark the information (usually by highlighting with a mouse), switch to another application (such as a spreadsheet, database, or word processor), and paste the information into that application.

A better solution, especially for companies that are aiming to exploit a broad swath of data about markets or competitors, lies with Web harvesting tools.

Web harvesting software automatically extracts information from the Web and picks up where search engines leave off, doing the work the search engine can't. Extraction tools automate the reading, copying and pasting necessary to collect information for analysis, and they have proved useful for pulling together information on competitors, prices and financial data of all types.

There are three ways we can extract more useful information from the Web.

The first technique, Web content harvesting, is concerned directly with the specific content of documents or their descriptions, such as HTML files, images or e-mail messages. Since most text documents are relatively unstructured (at least as far as machine interpretation is concerned), one common approach is to exploit what's already known about the general structure of documents and map this to some data model.
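
As a small illustration of mapping a document onto a data model, the following Python sketch (standard library only) reduces a page to just its title and outbound links; any real harvester would use a far richer model.

    from html.parser import HTMLParser

    class PageModel(HTMLParser):
        """Map loosely structured HTML onto a tiny data model."""
        def __init__(self):
            super().__init__()
            self.title, self.links = "", []
            self._in_title = False

        def handle_starttag(self, tag, attrs):
            if tag == "title":
                self._in_title = True
            elif tag == "a":
                href = dict(attrs).get("href")
                if href:
                    self.links.append(href)

        def handle_endtag(self, tag):
            if tag == "title":
                self._in_title = False

        def handle_data(self, data):
            if self._in_title:
                self.title += data

    page = PageModel()
    page.feed('<title>Example</title><a href="http://a.example">A</a>')
    print(page.title, page.links)   # Example ['http://a.example']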

Another approach to Web content harvesting involves trying to improve on the content searches that tools like search engines perform. This type of content harvesting goes beyond keyword extraction and the production of simple statistics relating to words and phrases in documents.

Another technique, Web structure harvesting, takes advantage of the fact that Web pages can reveal more information than just their obvious content. Links from other sources that point to a particular Web page indicate the popularity of that page, while links within a Web page that point to other resources may indicate the richness or variety of topics covered in that page. This is like analyzing bibliographical citations—a paper that's often cited in bibliographies and other papers is usually considered to be important.
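
At its simplest, structure harvesting is citation counting over the link graph. A sketch with an invented set of pages:

    from collections import Counter

    # Hypothetical link graph: page -> pages it links to.
    links = {
        "home": ["news", "docs"],
        "news": ["docs"],
        "blog": ["docs", "news"],
    }

    # A page pointed to by many others is, like an often-cited paper, popular.
    inbound = Counter(t for targets in links.values() for t in targets)
    print(inbound.most_common())   # [('docs', 3), ('news', 2)]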

The third technique, Web usage harvesting, uses data recorded by Web servers about user interactions to help understand user behavior and evaluate the effectiveness of the Web structure.

General access-pattern tracking analyzes Web logs to understand access patterns and trends in order to identify structural issues and resource groupings.
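
A sketch of such access-pattern tracking over a server log; the sample lines below are invented but follow the common log format that most Web servers record.

    from collections import Counter

    log = [
        '1.2.3.4 - - [10/Oct/2024:13:55:36] "GET /docs HTTP/1.0" 200 2326',
        '1.2.3.4 - - [10/Oct/2024:13:55:40] "GET /news HTTP/1.0" 200 512',
        '5.6.7.8 - - [10/Oct/2024:13:56:01] "GET /docs HTTP/1.0" 200 2326',
    ]

    # Which resources are requested most often?
    hits = Counter(line.split('"')[1].split()[1] for line in log)
    print(hits.most_common())   # [('/docs', 2), ('/news', 1)]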

Customized usage tracking analyzes individual trends so that Web sites can be personalized to specific users. Over time, based on access patterns, a site can be dynamically customized for a user in terms of the information displayed, the depth of the site structure and the format of the resources presented.

ARPANET

After the Soviet launch of the Sputnik satellite in 1957, the US military set up the Advanced Research Projects Agency (ARPA) to fund research in things sometimes only vaguely related to military matters. Originally, ARPA funded research by individual corporate researchers, although in 1962 it began to fund academic researchers.

One of the original ARPAnet engineers has commented that the purpose of the US military was to fund ARPA, whose purpose was to fund research. Over the years, ARPA has funded many projects in computer science research, many of which had a profound effect on the state of the art. None of the projects had such a profound effect as the ARPAnet project.

In 1962, the Rand Corporation published a report written by Paul Baran, entitled "On Distributed Communications Networks", the first of many. This report recommended the establishment of a communications network with no obvious central control, in which surviving nodes could establish communication with each other after the destruction of a number of nodes. He also recommended the establishment of a nationwide public utility to transport computer data, using "packet switching" to establish a "store and forward" network. At least one of his papers was secret, and the others were not widely circulated.

Donald W. Davies (a UK researcher) also did work in this field at roughly the same time, and is credited with the invention of the term "packet switching".

Dr. J. C. R. Licklider (or "Lick", as he asked people to call him) was aware of Baran's work through his military contacts. He worked for ARPA from 1962 (as the head of the Information Processing Techniques Office), with a background in engineering and physiological psychology. Lick was interested in how computers (and computer networks) could be used to help people communicate, and how computers could help people think. He and Robert Taylor wrote, "In a few years men will be able to communicate more effectively through a machine than face to face". His vision attracted others involved in computer research, and meant that from the start, a computer network was thought of as something allowing people to communicate, rather than just computers communicating.

In October 1967, ARPA announced that it was planning a computer network to link together 16 research groups at large US universities and research centers, and the competitive tendering began in the summer of 1968. In January 1969, Bolt, Beranek and Newman (BBN) in Cambridge, Massachusetts was awarded the contract for establishing the network.

The plan was to deliver four Interface Message Processors (IMPs, which were Honeywell DDP 516 minicomputers) to four centers. The IMPs were the interface between the ARPAnet, and each of the center's main “host” computers. Each center had its own responsibility in the project, and different host computers. The details are listed below:

·    University of California, Los Angeles(UCLA). Running the SEX operating system on an SDS Sigma 7, this site was responsible for network measurement.

·    Stanford Research Institute (SRI). Running the Genie operating system on an XDS-940, this site was responsible for network information. It was often known as NIC, and was at one time the organization that assigned network addresses.

·    University of California, Santa Barbara (UCSB). Running OS/MVT on an IBM 360/75. This site provided expertise in Culler-Fried interactive mathematics.

·    University of Utah. Running the TENEX operating system on a Digital PDP-10. They provided expertise in graphics (in particular, hidden line removal).

From the beginning of the project, things were left a bit loose, with the expectation that the research groups would take some of the initiative. The research students involved in the project at all four sites formed an informal "Network Working Group", and started to discuss various technical aspects, even without detailed information from BBN.

Dave Crocker mentions that they were very nervous about offending the "official protocol designers", so when notes started to be written, they were published under the title "Request For Comments". Possibly one of the most important aspects of the early RFCs was the insistence on complete openness: RFCs were allowed to contain almost any subject provided that it had something to do with the network, and they were not held by the NWG as the "official standard". In addition, the NWG encouraged publication of unpolished RFCs in the belief that rough ideas are sometimes as useful as fully worked-out protocol standards. They also encouraged the free distribution of RFCs, a practice that continues to this day.

In February 1969, BBN supplied the research groups with some technical details, and the Network Working Group began working on the nuts and bolts of how the network was going to work: both how the IMP-host interface was going to work, and how the simple applications were going to work.

The first IMP was due to be delivered to UCLA on the 1st September, 1969, and the team there expected some extra time to complete the necessary software (1st September is a public holiday in the US, and there were rumors of timing problems at BBN's end that may have delayed delivery). In the end, BBN delivered the IMP on the 30th August 1969, causing a panic among the software writers. BBN delivered the second IMP to SRI at the beginning of October, and by the 21st of November it was possible to demonstrate a telnet-like connection between the two host computers to senior ARPA officials. The net had come "alive". The first two "applications" to work between two host computers on ARPAnet were a terminal connection program (telnet) and something to move files between the two hosts (ftp). Note the lack of electronic mail (which was first implemented by transferring messages as files using ftp into special areas, before a new protocol was implemented).

After the first four sites were connected, other sites were connected to implement ARPA's original intention of 16 connected research groups. The next 11 included some names that have contributed enormously to the Internet, and they are all listed here: BBN, MIT, RAND Corp, SDC, Harvard, Lincoln Lab, Stanford (the University), University of Illinois, Case Western Reserve University, Carnegie Mellon University, and NASA-AMES.

At this point, BBN came up with a simpler, slower and cheaper version of the IMP—the TIP (or Terminal IMP). The growth of ARPAnet continued beyond the original intention.

At the First International Conference on Computer Communications, which was held in Washington DC in 1972, delegates from all over the world were treated to a demonstration of the ARPAnet. They also discussed the need for a common set of networking protocols, and the Internetwork Working Group was set up. It was also realized that networks such as ARPAnet and similar networks could be interconnected, and that with the use of the same networking protocols, it might be possible to link a number of individual networks into something that could be viewed as just one large network. It was the start of both the name "Internet" and the start of what the Internet is today.

The ARPAnet Completion Report pinpointed the popularity of e-mail as the service that most surprised the pioneers. The acknowledgements of Guy L. Steele's book "Common Lisp" indicate why. Lisp is a programming language well suited to, and well used by, AI researchers, and as it happens many of those AI researchers have a tendency to tinker with the language they work with; by the time Common Lisp was being worked on, there were at least a dozen popular varieties of Lisp in use. Common Lisp was an attempt (and a successful one) at bringing the varieties of Lisp together into one standard agreeable to the majority. In his acknowledgements section, Guy suggests that Common Lisp would have been impossible without ARPAnet's email facilities. A mailing list was set up, where the issues at stake could be argued about from day to day; in excess of 3000 messages resulted, varying in size from one line to 20 pages.

ARPAnet made possible collaborations between people who were thousands of miles apart.

ARPAnet did have one very big disadvantage: it was difficult to get connected to, as it required "political connections" and a large amount of money. Due to these difficulties, CSNET was set up by the NSF (National Science Foundation) to provide an ARPAnet-like network to those who couldn't get connected to the real thing, and it also proved to be very popular. It also extended the community of Internet users to people other than computer scientists.

As ARPAnet was being phased out, NSF funded the NSFnet, which served as the main US backbone for the Internet until the US government disbanded it and allowed commercial Internet providers to fill its place.

ABOUT GIGABIT ETHERNET

Gigabit Ethernet was the natural evolution after the development and widespread deployment of networking equipment based on Fast Ethernet (100BASE-T) technology. During the development of 100BASE-T, many claims were made by industry analysts, observers, and vendors as to how quickly this new technology would be adopted, and whether it was needed or just perceived to be needed. However, once the 100BASE-T standard was completed, vendors immediately introduced a torrent of products based on it. Costs associated with deploying 100BASE-T quickly fell, and the installed base of latent 100BASE-T devices leaped.

While this was obviously good for the participating vendors, it also had a negative effect. Few networks up to this point had been 100 Mbps capable, and where they were, these were primarily backbones based on FDDI. Widespread deployment of 100 Mbps to the desktop made the existing infrastructure look very feeble.

In addition, the other primary backbone network interconnection technology is ATM. ATM already has 622 Mbps (OC-12) capable links shipping, and a plan to upgrade to OC-48 (2.4 Gbps) links in the early adoption phase.

But ATM and/or FDDI in the backbone require frame formats to change. For high-speed switching and routing, segmentation, reassembly, encapsulation/decapsulation, and frame format conversions are very expensive, in terms of efficiency of implementation and speed of operation, for the silicon, software, and systems that must perform these operations at line rates.

1. Backbone Technology-Bandwidth Aggregation

Few (if any) applications or users currently need the bandwidth capabilities of Gbps technologies at the desktop (whether Ethernet or any other). However, backbone technologies, where aggregate bandwidth needs coalesce, will need these Gbps capabilities, as user demands migrate to use the 100 Mbps technologies that are being rapidly deployed to the desktop.

2. New and Emerging Bandwidth-Intensive Applications

New and emerging bandwidth-intensive applications will continue to drive the need for raw bandwidth, especially as Internet-related browser and “push” content capabilities continue to deliver enhanced services from a user perspective. As Internet, intranet, and Web content traffic continues to expand, and as the cost of 10/100 Mbps capable equipment is driven down, there is a growing potential that corporate backbones and aggregation points will be overwhelmed by bandwidth requirements alone.

Gigabit Ethernet offers the potential to upgrade the core of the network simply and efficiently.

3. Gigabit Media Independent Interface (GMII) Versus MII

When Ethernet moved from 10 to 100 Mbps operation, a new media-independent interface was required, principally because the original AUI at 10 Mbps would not easily scale to 100 Mbps operation. Similarly, in the move to 1000 Mbps, the MII defined for 100 Mbps operation was unsuitable. While many of the MII signaling concepts were maintained, the GMII transmit and receive data paths were widened to 8 bits (from the 4-bit paths provided by the MII) to allow reasonable clock and data path transition frequencies. Even with this modification, the transmit and receive data paths and clocks were required to operate at 125 MHz in order to achieve the 1 Gbps data rate.
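
The data-path arithmetic is easy to verify: widening the path is what keeps the clock at a manageable 125 MHz.

    # MII: 4-bit path at 25 MHz; GMII: 8-bit path at 125 MHz.
    print(4 * 25_000_000)    # 100_000_000 bits/s = 100 Mbps (Fast Ethernet)
    print(8 * 125_000_000)   # 1_000_000_000 bits/s = 1 Gbps (Gigabit Ethernet)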

4. Adoption of Fibre Channel Encoding

As the speed requirements for Ethernet have steadily increased, so has the challenge of providing a reliable signaling scheme over the media at these increasing rates. The IEEE Gigabit Ethernet committee looked for an existing signaling system at Gbps data rates, in order to speed the standards development cycle and to leverage existing industry experience at these high data rates. As a result, Fibre Channel (ANSI X3.230-1994) was adopted as the physical signaling scheme, similar to the way in which 100BASE-TX merged the FDDI PHY and the Ethernet MAC.

Fibre Channel uses an 8B/10B line code, encoding each 8 bits of data payload into a 10-bit line code that incorporates additional bits for error-checking robustness. The coding technique was initially developed by IBM (who licensed the patents for use in Fibre Channel initially, and later Gigabit Ethernet) for high-speed signaling over fiber optic cable. The Gigabit Ethernet PHY specifications based on 8B/10B coding are generically referred to as 1000BASE-X, with derivatives for short-wavelength optics (1000BASE-SX), long-wavelength optics (1000BASE-LX), and copper media (1000BASE-CX).

At the time the Fibre Channel signaling scheme was initially presented to the Gigabit Ethernet committee, the closest appropriate signaling rate was 1.0625 Gbps, yielding an effective data rate of 850 Mbps. The requirement to achieve a data rate of 1 Gbps meant the signaling rate had to be increased to 1.25 Gbps and caused most existing silicon implementations to be reworked.
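
The 8B/10B overhead accounts for both figures quoted above: only 8 of every 10 line bits carry payload.

    # Effective data rate = signaling rate * 8/10 under 8B/10B coding.
    print(1.0625e9 * 8 / 10)   # 850_000_000   -> 850 Mbps at Fibre Channel's rate
    print(1.25e9 * 8 / 10)     # 1_000_000_000 -> 1 Gbps for Gigabit Ethernet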