Address Translation

A subject that is often confused with gateway discovery is the translation of a telephone number into an IP address. Among the services that need such translation are phone-to-PC calls made from a PSTN telephone to a PC (or to any telephony-capable IP appliance) on the IP network. A way to accommodate such a call from the PSTN is to assign the PC a telephone number. This not only allows a PSTN operator to leverage its existing PSTN infrastructure to offer IP telephony services, but also makes it easy for the telephone user to place a call to a PC. Of course, telephone numbers for this purpose should adhere to a special numbering plan that is distinct from the ones used in traditional telephony services. Depending on its geographical scope, this special numbering plan is under the administration of ITU-T or the national (or regional) telephone numbering authority.

Another service in question is Internet Call-Waiting, with which an end user can be notified of an incoming call while using the telephone line for a dial-up connection to the Internet. Upon receipt of the notification, the user then has the option to reject or accept the call. Either way, there are further details on the disposition of the call. What is relevant here is that the notification is delivered to the called party (identified by a telephone number) over the Internet. Ensuring instant notification of a call in waiting typically requires the knowledge of the IP address of the PC connecting to the Internet via the telephone line.

In general, any directory-like technology can support the type of translation in question. An example is the Domain Name System (DNS), which is best known for mapping a domain name (for example, www.lucent.com) to an IP address. Another example is LDAP, which is used to look up information (for example, John’s e-mail address) in a directory that is organized in a special tree structure. A caveat is that whatever existing directory technology is used must be adapted to satisfy the special requirements posed by IP telephony. One such requirement is that the directory used must allow for frequent updates of its entries. This requirement arises where an IP endpoint is assigned an IP address dynamically, as is often the case with dial-up connections. In contrast to the relatively static telephone number given to the IP endpoint, the IP address changes at each connection. Another special requirement has to do with the real-time performance needs of IP telephony: the additional step of a directory lookup must not introduce significant delay into the IP telephony setup procedure.

An immediate benefit of using the general directory technology for telephone-number-to-IP-address translation is that other attributes associated with the endpoint can also be obtained with the operation. Consider as an example Mary, who wishes to receive calls at her PC over the IP network only from her daughter between 5:00 and 6:00 p.m. every day. Such information can be stored in the directory. If so, a call to Mary could trigger a directory query whose response includes the IP address of Mary’s PC as well as her preference for receiving calls. The additional information can then be used to process all calls to Mary. For instance, a call from a friend at 5:15 p.m. will result in no attempt at call setup inside the network. Instead, the call will be redirected to Mary’s voice mail. If you find this scenario familiar, you are right. It is similar to an IN-supported service where the user’s policy (or service logic) plays a part in the overall call processing and the policy is stored somewhere inside the network. This similarity again suggests that a networked repository for policies and dynamic information (for example, IP addresses) with simultaneous access from the PSTN and IP network is an effective device for supporting integrated services.

The subject of telephone number translation has been addressed in the IETF Telephone Number Mapping (enum) working group (www.ietf.org/html.charters/enum-charter.html ), which has been chartered to “define a DNS-based architecture and protocols for mapping a telephone number to a set of attributes (for example, URLs) which can be used to contact a resource associated with that number.”
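
As a concrete illustration of the enum idea, the sketch below converts an E.164 telephone number into the DNS domain name that would be queried for the associated records. The conversion rule (keep only the digits, reverse them, separate them with dots, and append the enum zone) follows the working group's published convention; the example number is fictitious, and this is a minimal sketch rather than a complete resolver.

    def e164_to_enum_domain(number: str, zone: str = "e164.arpa") -> str:
        """Convert an E.164 number (e.g., '+1-302-555-0123') into the
        DNS domain name used for an enum lookup."""
        digits = [c for c in number if c.isdigit()]   # keep digits only
        digits.reverse()                              # reverse the digit order
        return ".".join(digits) + "." + zone

    # Example: '+1-302-555-0123' -> '3.2.1.0.5.5.5.2.0.3.1.e164.arpa'
    print(e164_to_enum_domain("+1-302-555-0123"))

The records found at that domain could then carry the attributes mentioned in the charter, such as URLs pointing to the resources associated with the number.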

Gateway Location

What we have touched on so far are the core gateway functions—signaling and media conversion. Yet, to support an IP telephony call end to end, other things need to be taken care of. One such thing is gateway location.

Suppose a call originates in (or is passed on to) an IP network and needs to be terminated in the PSTN. There are many gateways that could do the job, but only a few may be suitable for a particular call. For instance, the gateway should be chosen so that the call enters the PSTN at a point as close to the called party as possible. In this case, terminating at the gateway that is connected to the called party’s end office is clearly the optimal solution if other factors are not involved.

But other factors are typically involved, such as the load and capability of gateways in question, the IP network provider’s and the end user’s preferences, agreements with the PSTN service providers, and—to top it all—the costs to be incurred.

Thus, gateway location is an important activity that is bound to become more complex as more telephony gateways are built and deployed. A considerable amount of research on the subject has been done (Schulzrinne and Rosenberg, 1999) and more or less detailed proposals on that matter have been submitted to both ITU and the IETF [specifically, the IP Telephony (iptel) working group].

As it happens, the Internet has had a similar problem with interdomain routing, which has been solved. It should come as no surprise, then, that many ideas from the specialized routing protocol developed for that purpose [the Border Gateway Protocol (BGP)] have been used to develop the gateway location technology.

In the tradition of IP routing, the process is dynamic and the knowledge is built using distributed computation performed by the gateways. Before the decision can be made regarding which gateway to choose, the database of eligible cooperating gateways has to exist. This database is built in the act of gateway discovery. Figure 1 shows a representative architecture that spans several administrative domains. Each domain, which contains several gateways and some number of endpoints (for example, PCs), has at least one logical entity, called a location server. The main job of the location server is to learn about the gateways in its own administrative domain as well as in other domains and to construct the database of the gateways. In the intradomain case, this is usually achieved through a registration process: each gateway signs on with the location server when started up and updates its availability status whenever necessary. The information is propagated by means of an intradomain protocol, such as the Service Location Protocol.[7]

Figure 1: Architecture for gateway discovery.

In comparison to the intradomain case, discovery of gateways in other domains is not as straightforward a process. Complicating the process is the need for a business agreement between the administrative domains for any exchange of gateway information or any use of the gateways by users in a different domain. A location server can propagate to another location server only the gateway information that is permitted by the agreement between the administrative domains to which they each belong. Similarly, synchronization of the database in one domain with that of another and selection of the route of a call are subject to such agreements and the associated policies. As a result, there cannot be a single global database of gateways where a user can look up and select a gateway as desired. Instead, different databases may be available in different domains based on the respective business agreements and associated policies. The database contains the IP address of a gateway and a range of telephone numbers that the gateway can terminate. In addition, it contains the attributes of the gateway, such as signaling protocols (for example, SIP and H.323), usage cost, and the provider’s identification number, which we have mentioned before. A location server uses the attributes to decide which gateways are to be used for terminating a call to a particular number or to be further advertised to other location servers.
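
To make the discussion concrete, here is a minimal, hypothetical sketch of what a location server's gateway database entry might contain and how a server could filter entries for a dialed number. The field names and the cheapest-first selection policy are illustrative assumptions of ours, not part of any standard.

    from dataclasses import dataclass

    @dataclass
    class GatewayRecord:
        ip_address: str         # where to reach the gateway
        prefix: str             # telephone numbers it terminates, as an E.164 prefix
        protocols: tuple        # supported signaling protocols, e.g., ("SIP", "H.323")
        cost_per_minute: float  # usage cost attribute
        provider_id: str        # terminating provider's identification

    def candidate_gateways(db, dialed_number, required_protocol):
        """Return gateways that can terminate the dialed number, cheapest first."""
        matches = [g for g in db
                   if dialed_number.startswith(g.prefix)
                   and required_protocol in g.protocols]
        return sorted(matches, key=lambda g: g.cost_per_minute)

    db = [
        GatewayRecord("192.0.2.10", "+1212", ("SIP", "H.323"), 0.010, "provider-A"),
        GatewayRecord("192.0.2.20", "+1",    ("H.323",),       0.015, "provider-B"),
    ]
    print(candidate_gateways(db, "+12125550100", "SIP"))

In a real deployment the attributes and the selection rules would be dictated by the interdomain agreements and local policy described above.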

Interdomain gateway discovery is carried out through an interdomain protocol, such as the Telephony Routing Information Protocol (TRIP), which is under development in the IETF. As mentioned before, an interdomain gateway discovery protocol resembles a routing protocol. It also shows significant differences, however. The most important are that the interdomain gateway discovery protocol runs at the application level and runs between location servers, which may not be adjacent and, unlike routers, do not forward IP packets.

Access to the gateway database is an activity independent of gateway discovery. There are several possible approaches to such database access, using different protocols. One protocol is the Lightweight Directory Access Protocol (LDAP), which, as its name suggests, provides access to any directory organized in a special tree structure. When LDAP is used, the gateway database is organized according to the LDAP data model. Another protocol is the Service Location Protocol, which has been designed for locating a networked service based on service attributes rather than the name (or location) of the network host providing the service. Logically, it can be used to find gateways given a set of user preferences. Yet another protocol is HTTP, the communications enabler of the World Wide Web. In this case, the gateway database has a Web front end, which allows a user to query it through Web-based forms. We should note that an interdomain gateway discovery protocol is actually another possible candidate; nothing prevents it from being used for retrieving information from the gateway database.

Media Gateway Control

An application-level protocol over the N1 interface can be used by the media gateway controller to run the media gateway (as far as call control, connection control, and resource allocation are concerned). Industry approaches vary in the way that resources are represented and calls and connections are modeled. Huitema et al. (1999) describe an early approach. Supplanting it is the approach pursued jointly by ITU-T and the IETF, known as MEGACO after the IETF media gateway control working group [see www.ietf.org/html.charters/megaco-charter.html]. While bearing some resemblance to the earlier approach, MEGACO also differs from it considerably.

At the heart of the MEGACO approach is its connection model, which consists of two types of objects: a termination and a context. A termination is a source or sink of one or more flows on a media gateway. It has properties (for example, media characteristics, a set of events that can be detected, and a set of signals that can be acted on) describing the nature of the termination. Bearer circuit channels and RTP streams are examples of terminations. In particular, bearer circuits, like other physical resources, are terminations that persist as long as they are provisioned on a gateway. In contrast, RTP streams, which are created on demand, are ephemeral terminations. Terminations come in a wide variety of types, and the MEGACO approach includes a mechanism to define them in separate packages.

A context is an association of a collection of terminations that defines the directions of flows, if any, between the terminations. Terminations can be added to or removed from a context. A context is created when the first termination is added and is destroyed when the last termination is removed. (Similarly, an ephemeral termination is created when it is added to a context and destroyed when it is removed from the context.) Terminations can also be moved from one context to another. In addition, their properties can be modified within a context. Terminations without any association belong to a special type of context called the null context. Such terminations usually represent physical resources. Adding these terminations to a normal context removes them from the null context; removing them from a normal context returns them to the null context. Figure 1 shows examples of contexts and terminations.

Figure 1: Examples of contexts and terminations.
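
The following sketch models the context/termination relationship in code, under simplifying assumptions of our own (no properties, flows, or packages). It is meant only to illustrate the null-context and move semantics described above, not the actual protocol data structures.

    class Termination:
        def __init__(self, name, persistent=False):
            self.name = name
            self.persistent = persistent   # e.g., a provisioned bearer circuit
            self.context = None            # None represents the null context

    class Context:
        def __init__(self):
            self.terminations = []

        def add(self, termination):
            if termination.context is not None:
                termination.context.remove(termination)  # move between contexts
            termination.context = self
            self.terminations.append(termination)

        def remove(self, termination):
            self.terminations.remove(termination)
            termination.context = None     # back to the null context
            # An ephemeral termination (e.g., an RTP stream) would now be destroyed;
            # a persistent one (a bearer circuit) simply stays in the null context.

    # A two-party call: a bearer circuit bridged with an RTP stream in one context.
    ds0 = Termination("DS0/1/3", persistent=True)
    rtp = Termination("RTP/0001")
    call = Context()
    call.add(ds0)
    call.add(rtp)
    call.remove(rtp)   # tearing down the IP leg; the bearer circuit remains available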

Based on the connection model, the MEGACO approach defines a set of commands for the overall purpose of call and connection control. Among them are the commands for direct manipulation of terminations and contexts: adding a termination to a context, removing a termination from a context, modifying the properties of a termination, and moving a termination from one context to another. Other commands are for auditing the current state of a termination and the range of termination properties supported on the media gateway, for event notification, and for management of the association between the media gateway and the controller. Most of these commands are initiated by the media gateway controller and are sent to the media gateway under its control. The exceptions are the ones for event notification and association management. Because it notifies the controller of certain events (for example, off-hook and end-of-tone) received by the media gateway, the command for event notification is naturally initiated by the media gateway. In contrast, the command for association management can be initiated by either the media gateway or the controller. When initiated by the media gateway, it is used to notify the controller of a change in the availability status of terminations on the media gateway or of the gateway itself. When initiated by the controller, it is used to instruct the media gateway to establish an association with a new controller or to take certain terminations out of service.

Another important aspect of the MEGACO approach is its operational model, which recognizes transactions and actions. A transaction consists of one or more actions; an action is composed of a series of commands and is applicable only to a specific context. Transactions are invoked by way of messages. To reduce the load of the communications exchange between the media gateway and the controller, a message can hold multiple transactions. The transactions in a message are treated independently and can be processed in any order or concurrently. In contrast, the commands within a transaction must be processed sequentially. When a command fails, the processing of the rest of the commands stops, unless the failed command is optional. Commands must be responded to upon completion. When a transaction takes a long time to complete, a provisional response should be sent periodically to its originator indicating that the transaction is being processed. Responses are also sent by way of messages. As expected, a message can hold multiple transaction responses, each of which consists of a series of action responses. An action response comprises the responses to the commands pertaining to the action.
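
A rough way to visualize this operational model is as nested containers, as in the sketch below. The processing rules (transactions treated independently, commands run sequentially, stop on a failed mandatory command) follow the description above, while the class and field names are our own illustrative choices rather than protocol syntax.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Command:
        name: str
        optional: bool = False      # an optional command's failure does not abort the rest

    @dataclass
    class Action:
        context_id: str             # an action applies to exactly one context
        commands: List[Command] = field(default_factory=list)

    @dataclass
    class Transaction:
        actions: List[Action] = field(default_factory=list)

    @dataclass
    class Message:
        transactions: List[Transaction] = field(default_factory=list)  # may hold several

    def process_transaction(txn, execute):
        """Run commands sequentially; stop when a mandatory command fails."""
        for action in txn.actions:
            for cmd in action.commands:
                ok = execute(action.context_id, cmd)
                if not ok and not cmd.optional:
                    return False    # abort the remainder of the transaction
        return True

    msg = Message([Transaction([Action("ctx-1",
                                       [Command("Add"), Command("Modify", optional=True)])])])
    for txn in msg.transactions:
        print(process_transaction(txn, lambda ctx, cmd: True))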

An assumption of the MEGACO operational model is that messages are exchanged reliably over the network. For this reason, implementations must ensure that the media gateway and the controller use a reliable transport mechanism [such as Transmission Control Protocol (TCP)] for the relevant exchange. When an unreliable transport [such as User Datagram Protocol (UDP)] is used, the mechanisms that eliminate message duplication and ensure in-sequence transmission of transactions must be used. In addition, it is important to have mechanisms that detect network congestion and respond to it by reducing the traffic. On the other hand, when a reliable transport is used, simple application-level timers may be all that is needed to guard against component failure and undesirable use of the network.

As you have probably noticed, the MEGACO technology is somewhat closer to supporting old-style telephones than true IP telephones, which can establish an end-to-end call without the network even being aware that the call has been established. Doing so requires signaling protocols such as SIP and H.323.

Gateway Decomposition

The different types of gateways described so far are specific instances of a generic gateway notion. It is useful to decompose the generic gateway into several functional components. Figure 1 depicts a common view of such a decomposed gateway. Three components are identified: (1) the media gateway (MG) function, (2) the signaling gateway (SG) function, and (3) the media gateway control (MGC) function. We describe these functions next.

Figure 1: Components of a decomposed PSTN-Internet gateway.

Physically, the media gateway function terminates PSTN circuits and connections to IP routers (in relation to which it is a host). It also performs all the necessary transformation to convert bit streams received from the sending network into bit streams particular to the receiving network. The transformation occurs at two levels: transmission and application.

At the transmission level, the MG function converts the bit streams between two different framing schemes. This usually involves multiplexing of bit streams of distinct communication sessions and the reverse operation—demultiplexing. In the PSTN, fixed-size digital channels (each typically carrying a voice conversation) are multiplexed based on the time division multiplexing (TDM) scheme at various hierarchical levels (for example, T1 and E3) and packed into frames for transmission over high-capacity facilities. In the IP networks, bits representing a voice conversation are packetized according to the Real-time Transport Protocol (RTP) profile for audio and video payloads (RFC 1890).

At the application level, the transformation takes place between different media-encoding schemes (see the section on codecs for more information) and is commonly known as transcoding. In IP telephony, two prevalent speech encoding schemes are G.711 and G.729. Operating at a bit rate of 64 kbps, G.711 is used ubiquitously in the digital backbone of the PSTN and sets what is known as the toll-quality voice standard. G.729 operates at a much lower bit rate of 8 kbps but still supports near-toll-quality voice service. For this reason, it is widely used in IP networks where bandwidth is constrained. Note that transcoding is computationally intensive and thus causes delays. In addition, transcoding results in degradation of voice fidelity, in particular when a speech coder uses compression.
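
As a back-of-the-envelope illustration of what these bit rates mean on the wire, the sketch below computes the per-packet payload sizes and the resulting IP-level bandwidth, assuming a 20-ms packetization interval and a 40-byte IP/UDP/RTP header overhead; both figures are common defaults we have chosen for the example rather than anything mandated by the codecs themselves.

    # Payload bytes produced in one packetization interval, plus IP/UDP/RTP overhead.
    HEADER_BYTES = 40          # 20 (IP) + 8 (UDP) + 12 (RTP), without header compression
    INTERVAL_S = 0.020         # 20-ms packets (an assumed, typical setting)

    def ip_bandwidth_kbps(codec_kbps):
        payload_bytes = codec_kbps * 1000 * INTERVAL_S / 8
        packet_bytes = payload_bytes + HEADER_BYTES
        return packet_bytes * 8 / INTERVAL_S / 1000

    for name, rate in (("G.711", 64), ("G.729", 8)):
        print(name, ip_bandwidth_kbps(rate), "kbps on the wire")
    # G.711: 160-byte payload -> 80 kbps;  G.729: 20-byte payload -> 24 kbps

The arithmetic makes plain why low-rate coders such as G.729 are attractive on bandwidth-constrained links, and also why the fixed packet overhead matters so much at low coder rates.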

Another important task of the MG function is to support the use of the QoS facilities of the IP network. Other tasks include echo cancellation (if required), event detection, signaling generation, usage recording, and support of specialized resources such as conference bridges, fax machines, modem pools, and interactive voice response units.

The SG function receives and sends PSTN signaling (such as SS No. 7 or ISDN access) messages. Depending on the arrangement, it may relay, translate, or terminate the PSTN signaling. It exchanges signaling information with the MGC function over IP and with the PSTN using SS No. 7.

The MGC function provides control of the media gateway function, including call and connection control and resource management. To this end, it terminates and originates all the relevant signaling. In addition, the MGC function keeps an inventory of the MG resources (for example, bearer circuits and RTP streams) and instructs the MG to reserve or release resources as required. (Naturally, some sort of local policy will govern the use of resources.) With its central role in call and connection control, the MGC function logically also provides support for Internet offloading and advanced services and features (such as freephone or call forwarding). It has the ability to detect data calls from the PSTN and to direct the data traffic straight to a network access server, as well as to launch queries to SCPs for instructions for further call processing.

IP Telephony Gateway


If a call is to be made from a PC or specialized IP phone to a regular PSTN telephone or vice versa, both the PSTN and IP network are involved in making the call. The PSTN may also use an IP network for so-called trunk replacement, or IP trunking, where the long-distance portion of the PSTN voice traffic between two PSTN telephones is carried over an IP network.
When it comes to delivering real-time voice, the PSTN and an IP network differ in a number of ways, as summarized in Table 1. For establishing a call, for example, SS No. 7 has traditionally been used within the PSTN, while H.323 is the most prevalent protocol suite [with the Session Initiation Protocol (SIP) as a viable alternative] in the Internet to date. In general, the connection of two dissimilar networks is achieved through some sort of device—called a gateway—that compensates for the differences between the networks. Interconnecting the PSTN and IP networks to support IP telephony is no exception. In this case, the interconnecting device is called an IP telephony gateway and will link users of IP telephony with a billion or so PSTN users.

Table 1: Key Differences between Telephony over the PSTN and the Internet

Bandwidth allocation for voice transport
  PSTN: A dedicated circuit (e.g., 64 kbps) set up for each two-party communications session.
  Internet: Best-effort delivery of IP packets.

Numbering and addressing scheme
  PSTN: 14-bit point code for network nodes and E.164 numbers for endpoints.
  Internet: 4-byte IP address, domain name system (DNS) name, e-mail address, uniform resource locator (URL), etc.

Voice representation
  PSTN: Typically analog in the loop and G.711 (either A-law or μ-law) in the backbone.
  Internet: G.711, G.723.1, G.729, etc.

Signaling protocols
  PSTN: Signaling System No. 7, Q.931, etc.
  Internet: H.323, SIP, etc.

Availability
  PSTN: 99.999% (5 min downtime per year).
  Internet: 99% (88 h downtime per year).

Figure 1 illustrates the integration of the PSTN and the Internet (or any IP network) through gateways in support of IP telephony. It distinguishes four types of IP telephony gateways based on the PSTN interfaces and certain specific functions that they support.
Figure 1: PSTN-Internet integration through gateways.

  1. Trunking gateway. Connects a central office (CO) switch to an IP router. Such a gateway typically has an SS No. 7 signaling interface and manages a large number of 64-kbps digital circuits and Real-Time Transport Protocol (RTP) streams. It is used in the trunk replacement application where the long-distance portion of a call between two telephones is made over the IP network instead of the PSTN. (In IP telephony parlance, such calls are known as phone-to-phone calls.)

  2. Access gateway. Connects telephones or PBXs to an IP router through an access interface [such as ISDN primary rate interface (PRI)]. It supports calls between two telephones with the IP network as an intermediary transport or between a telephone and a PC. (Again, in IP telephony parlance, calls between a telephone and a PC are also known as PC-to-phone calls or phone-to-PC calls.)

  3. Network access server. Connects a central office switch to an IP router. (Though previously discussed, it is included for completeness, because this type of gateway can be controlled in exactly the same manner as others.) Such a gateway may have an ISDN interface similar to that of the access gateway.

  4. Residential gateway. Connects analog phones to an IP router. Such a gateway typically supports a small number (two to four) of analog lines and is located on the customer premises. It brings the Internet interconnection point directly to the curb and maximizes the use of the IP network for calls between two telephones as well as between a telephone and a PC.

Active Networks | QoS

Some implementations of active networks (AN) exist (for example, see www.cccc.com), but no standards projects are currently associated with them. The area of application of AN extends beyond ensuring QoS, but AN is viewed with much interest in the research and development communities as a possible means of ensuring and supporting QoS.
As Calvert et al. (1998) observe, AN means different things to different people. In a sense, this is true, although everyone seems to agree that, in a nutshell, AN is about programmability of network elements (for example, routers) and—to an extent—bypassing, if not totally eliminating, standardized protocols, replacing them with dynamic, created-on-the-fly protocols. Marcus et al. (1998) lament that “existing protocols do not operate well for emerging applications or take advantage of novel network technologies,” citing “IP’s inability to capitalize on sub-networks which offer quality of service . . . guarantees.” While one could argue with this particular example, there is a point in the complaint. It is indisputable, however, as the authors further note, that “Forming a consensus within large groups is a slow process, and is likely to remain slow; therefore, protocol standards will continue to evolve at a slow pace.” The question, of course, is whether this pace is sufficient for the market development, and only the future will bring the answer.

The idea behind AN is quite similar to (if not influenced by) the idea that resulted in the creation of Java. The language [Hypertext Markup Language (HTML)] and the protocol [Hypertext Transfer Protocol (HTTP)] that made possible the Internet killer application—the World Wide Web—do not support rapid interaction of the user with the page. Such interaction has been made possible by the invention of the principle by which a program (applet) written in Java is sent to the user’s personal computer (PC) or Internet appliance and then interpreted locally (by a Java interpreter). The user actually sees no difference (unless a silly message on a screen proudly announces that a Java program is being executed). The user simply clicks on an object, and HTTP carries the Java code to the user’s machine.

Now, AN proposes pretty much the same mechanism, except that the active code is to be carried not in the application protocol message but in a network layer packet, and this code is to be executed not (or, in general, not only) at the host, but by the network elements themselves. Although many questions can be asked (most cannot yet be answered) regarding the security issues involved with this approach and its exact applications, it is relatively straightforward to see how in principle the QoS-related state of a router can be changed with unprecedented efficiency, and how network-wide services could potentially be implemented. A specific and somewhat less futuristic application of AN to network management is described in Raz and Shavitt (1999).
The overall architecture for AN is being developed under the auspices of the Defense Advanced Research Projects Agency (DARPA), the same organization that sponsored the development of what has now become the Internet. Several universities (notably the University of California at Berkeley, Columbia University, Georgia Tech, MIT, the University of Arizona, the University of Kansas, the University of Pennsylvania, and Washington University—by no means an exhaustive list), as well as the research facilities of major corporations, have AN projects.
There are two things on which the AN community agrees: (1) Networks must be service independent and (2) end-to-end service programs must be network independent. Do these sound like early IN principles? Exactly! After all, the more things change, the more they stay the same.

Label or Tag Switching | QoS

We now move to label switching or, as it is called in some implementations, tag switching, which has been standardized by the IETF as Multiprotocol Label Switching (MPLS). The technology was initially developed for the purpose of interworking between IP-based networks and ATM and frame relay networks; it was later generalized to apply to any network layer protocol (hence the multiprotocol designation). The ATM (B-ISDN) switches follow the PSTN model in establishing and maintaining virtual circuits and virtual paths. The B-ISDN access protocol specifies the QoS, which is then guaranteed by the network.

To get an idea of MPLS, try to answer the following question: If the ATM and IP networks are to interwork, what should the router on the border of the ATM and IP networks do? The most straightforward answer is to try to maintain the virtual circuits and virtual paths. To do so, the first router in a chain would need to “understand” the “ATM language” and act (that is, route the packet) based on the connection identifier established by the ATM switch. The next router on the path to the destination does not necessarily have to “understand” the same “ATM language,” but then it needs to understand whatever means the first router uses to identify a connection. The same applies to the rest of the routers on the path to the destination.

This is precisely how MPLS routers work: they make forwarding decisions based on a fixed-length string called a label. The labels are meaningful only to the pair of routers sharing a link, and only in one direction—from the sender to the receiver. The receiver, however, chooses the label and negotiates its semantics with the sender by means of the Label Distribution Protocol (LDP). The label can indicate not only where to forward the packet (that is, which port to use), but also the QoS characteristics of the packet that specify its priority and suggest an appropriate treatment.

This approach is very different from the traditional (that is, non-MPLS) routing approach, in which a router makes forwarding decisions based on the IP header. In the traditional approach, the routing table must be searched, which takes more time and processing power than the label table lookup that label-based forwarding requires. Furthermore, routers that are not capable of analyzing the network layer packet can still perform the label lookup and replacement (a much simpler operation). Another advantage of MPLS is that using labels (that is, in a sense, maintaining the history of the path) allows forwarding decisions to be made based on the identity of the router at which the packet enters the network—packets entering the network via different routers are likely to be assigned different labels. Finally, when a packet is to be forced to follow a particular explicit route (rather than be left to the mercy of routing algorithms), the MPLS label can be used to represent the route. RSVP can be extended to complement MPLS by associating labeled paths with flows. With that, resource reservations associated with an explicit route can be made to guarantee QoS.
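
The per-hop operation can be pictured as a simple table lookup, sketched below under our own simplified assumptions (a single level of labels, an abstract notion of a port, and a hand-built table); real label switching routers also handle label stacks, pushes, and pops, and populate the table via LDP.

    # Incoming (port, label) -> outgoing (port, label) mapping for one router.
    # The labels are meaningful only on the links this router shares with its neighbors.
    label_table = {
        (1, 17): (3, 42),   # swap label 17 arriving on port 1 for label 42 toward port 3
        (1, 18): (4, 99),
        (2, 17): (3, 42),   # the same label on a different ingress port is independent
    }

    def send(port, label, payload):
        print(f"out port {port}, label {label}: {payload!r}")

    def forward(in_port, in_label, payload):
        out_port, out_label = label_table[(in_port, in_label)]  # fixed-length exact match
        send(out_port, out_label, payload)                      # no IP header analysis needed

    forward(1, 17, b"voice packet")

The exact-match lookup on a short fixed-length key is what makes the per-packet work so much cheaper than a longest-prefix search of a routing table.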

We should mention one more important MPLS application: MPLS provides an excellent mechanism for tunneling by stacking the labels and thus supporting nested routing decision making. One important potential application of combining RSVP and MPLS is that the resulting tunnels can be routed away from the points of network failure or congestion. We highly recommend the work of Armitage (2000) as a comprehensive review of the subject.

Another means for ensuring QoS is network-wide enforcement policies, which are rules for control of the network resources and services. In describing these, we follow Kozik et al. (1998). Quality of service is only one aspect of policy-based networks; others are security, authorization, and accounting. These aspects are often inseparable—the accounting function, for example, may determine whether the present level of use has been paid for (by keeping track of the use of the resource). If use has not been paid for, policies can restrict access to the resource or affect QoS by downgrading the level of use.

The architecture of policy-based networks—sometimes also called directory-enabled networks—is shown in Figure 1. It actually repeats the IN conceptual pattern in both the way that the policies are stored and the way that they are accessed by network elements (for example, routers, access servers, or telephony gateways). The policies are stored centrally in a policy database by a policy management system. When a network element detects an event that requires policy access (such as a request to provide bandwidth in order to establish an IP telephony call), the network element queries a policy server, which in turn consults the policy database and then either denies the request or carries it through by instructing all concerned entities to perform the actions that would enforce the policy.

Figure 1: The architecture of policy-based networks.
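
In code, the decision flow just described looks roughly like the sketch below; the rule format, names, and the bandwidth example are hypothetical, the point being only that the network element defers the decision to a policy server that consults a central policy store.

    # A toy policy store: each rule maps a (user, service) pair to a decision.
    POLICY_DB = {
        ("alice", "ip-telephony"): {"permit": True,  "max_kbps": 80},
        ("bob",   "ip-telephony"): {"permit": False},
    }

    class PolicyServer:
        def decide(self, user, service, requested_kbps):
            rule = POLICY_DB.get((user, service), {"permit": False})
            if not rule["permit"] or requested_kbps > rule.get("max_kbps", 0):
                return "deny"
            return "permit"

    class Router:
        """The network element: it detects the event and asks the policy server."""
        def __init__(self, policy_server):
            self.pdp = policy_server

        def admit_call(self, user, requested_kbps):
            return self.pdp.decide(user, "ip-telephony", requested_kbps)

    router = Router(PolicyServer())
    print(router.admit_call("alice", 64))   # permit
    print(router.admit_call("bob", 64))     # deny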

The IETF is addressing the subject of policy-based networks in the Policy Framework (policy) working group.

Fair Queuing and Weighted Fair Queuing | QoS

With the new queuing schemes, each flow has its own queue. Under the fair queuing policy, the packets are transmitted round-robin in order to guarantee each flow an equal share of the capacity (possibly penalizing flows that have large packets at times of network congestion). Weighted fair queuing—an algorithm that is widely used in today’s advanced QoS-capable routers—assigns each type of flow its own (by no means necessarily identical) share of bandwidth. Figure 1 illustrates the concept: In Figure 1a, with the first-come, first-served queue, airplanes, cars, and elephants move in the same order in which they arrived (a scheme that would cause plane crashes and annoy the drivers of the cars following elephants!). In Figure 1b, with fair queuing, queues are formed per flow (defined here as a formation of planes or cars or a caravan of elephants), but they are preempted so that bigger things have to wait until an equivalent number of smaller things passes (still a maddening experience for elephants!). In Figure 1c, with weighted fair queuing, the planes are given the right of way, so they move through the queue almost without slowing down, always keeping formation; the planes are followed by cars, and the cars by the caravan of elephants. This property of keeping the packet “formation” eliminates delay variance (called jitter).

Figure 1: Queuing and scheduling in routers. (a) First-come, first-served queuing. (b) Fair queuing. (c) Weighted fair queuing.
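
A minimal sketch of the virtual-finish-time idea behind weighted fair queuing follows, reusing the planes, cars, and elephants of the figure as flow names. The scheduler here is a textbook-style simplification we have written for illustration (each packet gets a finish time proportional to its length divided by the flow's weight, and packets leave in finish-time order), not a router implementation.

    import heapq

    class WeightedFairQueue:
        """Each packet gets a virtual finish time F = max(V, last_F[flow]) + length/weight;
        packets are transmitted in order of increasing finish time."""
        def __init__(self, weights):
            self.weights = weights          # flow -> relative share of bandwidth
            self.last_finish = {f: 0.0 for f in weights}
            self.virtual_time = 0.0
            self.heap = []                  # (finish_time, seq, flow, packet)
            self.seq = 0

        def enqueue(self, flow, packet, length):
            start = max(self.virtual_time, self.last_finish[flow])
            finish = start + length / self.weights[flow]
            self.last_finish[flow] = finish
            heapq.heappush(self.heap, (finish, self.seq, flow, packet))
            self.seq += 1

        def dequeue(self):
            finish, _, flow, packet = heapq.heappop(self.heap)
            self.virtual_time = finish      # a simplification of true virtual time
            return flow, packet

    q = WeightedFairQueue({"planes": 4, "cars": 2, "elephants": 1})
    for i in range(3):
        q.enqueue("planes", f"plane-{i}", 100)
        q.enqueue("cars", f"car-{i}", 100)
        q.enqueue("elephants", f"elephant-{i}", 100)
    while q.heap:
        print(q.dequeue())

Running the example shows the planes clearing the queue first, the cars next, and the elephants last, mirroring the right-of-way behavior of Figure 1c.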

In 1992, A. Parekh and R. Gallager of MIT demonstrated that a flow that experiences a service rate slightly higher than the flow’s data rate has a bounded delay. In other words, by requesting that a flow not exceed a certain rate, the network can guarantee that the delay experienced by the flow does not exceed a certain value. (A good example of a similar result is green streets in cities, where stoplights are adjusted so that a car traveling at a certain speed—for example, 25 mph—is guaranteed a green light at about 9 out of 10 intersections.)

The scientists then augmented weighted fair queuing with the specification of a guaranteed delay for each flow. This work resulted in a new architecture for what its creators called integrated services packet networks [compare with the expansion of the integrated services digital network (ISDN)] in Clark et al. (1992). Two types of services—guaranteed (which supports real-time traffic with determined latency and jitter bounds) and controlled-load (which deals with more relaxed traffic)—were defined. At that point, the groundwork was laid for the standardization work in the Internet Engineering Task Force (IETF). The protocol that supports integrated services is the Resource Reservation Setup Protocol (RSVP), which is not a routing protocol. In a nutshell, RSVP, which was designed with multicasting (that is, sending a message to multiple receivers) in mind, makes bandwidth reservations—from destination to source—in the routers along the spanning tree covering the multicast group members. The routers store the necessary state information, which is then maintained by sending specific RSVP messages in both directions.
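
The reservation state kept in each router is "soft": it survives only as long as refresh messages keep arriving. The sketch below illustrates that idea under assumptions of our own (the flow identifier format and the 30-second lifetime are illustrative, not RSVP's actual encoding or timers).

    import time

    class SoftStateReservations:
        """A minimal sketch of RSVP-style soft state in a router: a reservation
        persists only as long as refresh messages keep arriving."""
        LIFETIME_S = 30.0      # assumed refresh lifetime, for illustration only

        def __init__(self):
            self.table = {}    # flow id -> (reserved_kbps, expiry time)

        def refresh(self, flow_id, kbps):
            # Both the initial reservation and periodic refreshes take the same path.
            self.table[flow_id] = (kbps, time.time() + self.LIFETIME_S)

        def expire_stale(self):
            now = time.time()
            for flow_id in [f for f, (_, exp) in self.table.items() if exp < now]:
                del self.table[flow_id]   # no refresh seen: reservation quietly times out

    res = SoftStateReservations()
    res.refresh("10.0.0.1->224.1.1.1:audio", 64)
    res.expire_stale()
    print(res.table)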

The integrated services approach has been comprehensive but apparently far too ambitious to implement widely. One recurring sentiment is that the overhead associated with reservations is far too large; another is that it is overkill as far as short-lived flows (of which most of the present Internet traffic consists) are concerned. (The counterargument to the latter is, of course, that the model was not created with short-lived flows in mind; but then, something needs to be done about the short flows, too.) A third concern (Weiss, 1998) regarding the integrated services approach is that it would make charging those who request a higher QoS difficult. In any event, while the applicability of RSVP to wide area networks and the Internet is questioned, it is being implemented for smaller enterprise networks. In essence, the integrated services approach has been a top-down one—guaranteeing absolute QoS in the network on a per-flow basis.

A bottom-up alternative technology, in which QoS building blocks (which routers can recognize and act on) are defined, is called differentiated services (Kumar et al., 1998; Weiss, 1998). This technology has been actively addressed by the IETF and has resulted in a standard. The concept behind the technology is the definition of various classes of service. The service provider establishes a service level agreement (SLA) with each customer. Among other things, an SLA specifies how much traffic a user may send within any given class of service. The class of service of a packet is encoded in its IP header. The traffic is then policed at the border of the service provider’s network. Once the traffic enters the network, specialized routers provide it with differentiated treatment, but—unlike the case with the integrated services approach—the treatment is based not on a per-flow basis, but solely on the indicated class of service. The overall network is set up so as to meet all SLAs.
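
To see what "encoded in its IP header" can look like in practice, the sketch below marks a UDP socket's traffic with the Expedited Forwarding code point (DSCP 46), which occupies the six high-order bits of the former TOS byte. The socket option shown is the Linux form, the destination address is a documentation address, and the choice of EF for voice traffic is our assumption for the example.

    import socket

    EF_DSCP = 46                 # Expedited Forwarding per-hop behavior
    TOS_BYTE = EF_DSCP << 2      # DS field occupies the six high-order bits -> 0xB8

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    # Mark outgoing packets so differentiated-services routers can classify them.
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS_BYTE)
    sock.sendto(b"voice sample", ("192.0.2.50", 5004))

Whether the marking is honored, of course, depends on the SLA and on the policing performed at the provider's border.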

Quality of Service (QoS) | Converged Networks

The Glossary of Telecommunications Terms of U.S. Federal Standard 1037C (available at www.its.bldrdoc.gov/fs-1037/fs-1037c.htm) defines quality of service (QoS) as:
1. The performance specification of a communications channel or system. . . . 2. A subjective rating of telephone communications quality in which listeners judge transmissions by qualifiers, such as excellent, good, fair, poor, or unsatisfactory.
This definition best expresses both the objective (that is, something based on computable metrics) and subjective (that is, perception-based) aspects of the QoS concept. Three objectives that drive the need to integrate the Internet and the PSTN relate to QoS: (1) carrying voice across both the IP networks and the PSTN, (2) combining the PSTN transport and IP access to services, and (3) accessing the IP networks over PSTN lines. The first item is associated with the most perceivable QoS requirements. Nevertheless, the IP network access aspect (and, connected to it, the issues of supporting VPNs) is equally important, as we will demonstrate.

To begin with, different applications (or, rather, their users) have different perceptions of what the QoS is. Using an application called telemedicine, a doctor may expect a copy of a brain tomogram taken in a remote laboratory. The transmission can be delayed for a few minutes; however, the doctor cannot afford to have any detail of the tomogram compromised (a missing detail could wrongly suggest a tumor or leave it undetected)—the chief QoS requirement here is that no data arrive in error. On the other hand, in an IP telephony application, an occasional error in the signal would cause no problem; however, a long delay (or a variable delay, called jitter) is likely to be unacceptable.
The model of routing for the PSTN is based on the concept of circuits, which are created end to end for the duration of the session. Circuits are mapped into fixed switched physical connections. Thus, any message between two end users in a session always traverses the same physical path for the duration of a session. [For conferencing, such circuits can be bridged (that is, joined) by switches or other devices that have switching fabric; thus, any multicast message will follow the same, predetermined set of physical circuits.] With this routing model, it is possible to determine whether a session that requires certain characteristics (such as bandwidth or loss tolerances) can be established. Once the session is established, it is relatively straightforward to guarantee that the requested characteristics will remain constant for the duration of the session.
One important factor in PSTN routing is the time that it takes to set up a circuit; the call setup time has traditionally been an essential QoS metric in the PSTN. Incidentally, this model, which naturally grew out of telephony, was applied by the ITU-T to the definition of the virtual circuit for data communications standards. This concept was defined first in X.25 and subsequently in frame relay and asynchronous transfer mode (ATM) networks. ISDN access guarantees certain bandwidth (depending on a particular national standard) to the subscriber. Broadband ISDN (B-ISDN) access, in addition, specifies parameters that are needed by the ATM network. At this very moment, mechanisms are being developed to back up B-ISDN with the Intelligent Network, which could ensure policy-based networkwide QoS enforcement. Overall, in today’s PSTN, the main QoS metric is, as mentioned before, the call blocking rate.
The Internet routing model, on the other hand, has traditionally avoided stressing any built-in mechanism for creation and maintenance of virtual circuits. In the Internet, QoS issues (which also define their respective metrics) include bandwidth availability, latency (that is, end-to-end delay) control, jitter (that is, delay variation) control, and packet loss. Historically, the IP networks have been supporting what is called the best effort (but no guarantees) of packet transmission. In this system, no differentiation among different types of traffic is made, and neither the sequence of packet arrivals nor the arrival itself is guaranteed.
Whatever the end-to-end QoS requirements may be, at the network layer the packets travel (similarly to those of us who take airplanes) from hub to hub (that is, from router to router). Each router queues newly arriving packets for retransmission over the link to the most suitable (according to the routing table) router or destination host. Until very recently, most routers used a first-come, first-served queuing discipline, which is fair to all packets and, for this very reason, cannot make some packets more equal than others.
Overall, for applications such as voice or video over IP, a new network layer model was clearly needed, and such models have been researched and implemented since the 1990s. Two new approaches proposed mechanisms that are now called fair queuing and weighted fair queuing. With fair queuing and weighted fair queuing, routers are no longer required to treat all packets equally. The incoming traffic is separated into well-defined flows. (A TCP connection is an example of a flow, although it may be difficult for a router to detect—all the TCP connections between the same pair of hosts are a more realistic example; a voice session is another one.)

Internet-Supported PSTN Services

Last year a colleague of ours was called by a reporter from a well-known technical publication and asked to describe the effort of the PSTN/Internet Internetworking (pint) working group in the Internet Engineering Task Force (IETF). Our cautious colleague wisely decided that the best he could do under the circumstances was to read out to the reporter a few selected sentences from the working group charter published by the IETF on its Web page. Specifically, he stressed—prompted by the pint Web page—that the purpose of pint was to “address connection arrangements through which Internet applications can request and enrich PSTN . . . telephony services.” The reporter wrote down what she heard. Later, in accordance with her agreement with our colleague, she sent to him the draft of the article. The article was accurate except for one word: enrich had turned into unleash—this is what the reporter heard over the telephone line (an atypically imperfect telephone line, we presume). When the amazed author called the reporter with the correction, she seemed to be disappointed. (And so are we. We wish the pint charter really did talk about unleashing the telephone services, because this is precisely what the Internet-supported PSTN services are all about!)

In his recent article on Intelligent Network, Scott Bradner correctly observes that the intelligence of IN is strictly in the network, not on its edges. This lack of edge intelligence is precisely what the interworking of the Internet and IN is to change—once and forever!

The basic goal of this technology is to allow the user of an Internet host to create, access, and control PSTN services. A simple example of what the technology can do is the click-to-dial-back service, where you click on an icon displayed on a Web page and a PSTN call is established as the result.

As simple as it sounds (and a crude implementation of this service is just as simple), the unleashing quality of this service alone should not be underestimated. The competition among the long-distance service providers is such that they would do almost anything to get a call flowing through their networks. With Web-based access, their customers are around the world! There are still countries that protect their networks by forbidding the call-back service; people may get angry about such backward and anticompetitive practices, but click-to-dial-back, which technically is not a call-back service, is a way to get even.

Once you start thinking about the possibilities of tweaking this basic concept, you will find that the possibilities for creating new services are virtually unlimited. We will demonstrate a straightforward extension of click-to-dial-back to drastically improve on a PSTN-only service.

The service in question, interestingly enough, came to life as an application of the Web business model (but not the Web technology) to the PSTN. Telephony service providers in Europe started to market free telephone calls to those who would agree to listen to several minutes of audio advertising. The advertisers have so far found the approach ineffective—pure audio is hardly the best means of delivering advertising today. This already bad effect is further worsened by the “push” nature of audio advertising.

Now, with the technology just described, the prospective caller can access the service provider’s page on the Web, where he or she can also subscribe to the service and register a profile stating the preference for the types of products he or she wishes to learn about. Every time a call is to be made, the caller can then be walked through a video presentation of the advertisement on his or her Internet appliance—possibly accompanied by the audio portion over the PSTN line. The caller can control the pace of the advertisement; when it is finished, the caller will be prompted for the number he or she wishes to call and then connected to that number.

There are several early developments in this area (Hubaux et al., 1998). In the relevant architecture, the SCPs and SNs are connected to the Internet (which is fairly easy to achieve because they are almost invariably implemented on the Unix system platform).

Figure 1: The architecture for Internet-supported PSTN service delivery.

It is important to repeat that with this arrangement only SCPs and SNs—but by no means the switches—are connected to the Internet and thus can communicate with other IP hosts.

The service control function can be distributed to Internet hosts as much as the PSTN service provider allows and the owner of a particular IP host (who may be the same PSTN service provider) wishes to handle. In the WebIN, the IP host is actually a Web server, and part of the service control function (called WebSCP) is moved into the Internet (Low, 1997; Low et al., 1996). This arrangement can be used to provide the main features of the PSTN VPN service—the private numbering plan and closed user group in particular (Hubaux et al., 1998). The translation map of the enterprise-significant numbers to the PSTN-significant ones, as well as the specification of the closed groups (including the calling privileges of each group member), are kept in the databases accessible through the Internet. Part of the SCP service logic is executed by the WebSCP. While there is very little interoperability among the legacy IN implementations, integrating them with the Internet immediately establishes a common language for interworking. Even more significant is that the service creation and service management are also moved (via the Internet) to the edge of the network.
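
A fragment of the kind of data such a WebSCP might consult is sketched below; the table layout, group names, and numbers are invented for illustration. The point is only that both the private-to-public number translation and the closed-user-group check can live in databases reachable over the Internet.

    # Enterprise-significant (private) numbers mapped to PSTN-significant (E.164) numbers.
    PRIVATE_PLAN = {
        "4001": "+13025550101",
        "4002": "+13025550102",
    }

    # Closed user groups: members may call only within their own group.
    CLOSED_GROUPS = {
        "engineering": {"4001", "4002"},
    }

    def route_private_call(caller, dialed):
        group = next((g for g, members in CLOSED_GROUPS.items() if caller in members), None)
        if group is None or dialed not in CLOSED_GROUPS[group]:
            return None                      # outside the caller's closed user group
        return PRIVATE_PLAN[dialed]          # E.164 number handed back for PSTN routing

    print(route_private_call("4001", "4002"))   # '+13025550102'
    print(route_private_call("4001", "4003"))   # None: calling privilege denied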

As exciting as the opening of PSTN call control to the Internet hosts may be, there is a serious and not completely solved problem associated with it. This problem is security, and it is ubiquitous in the Internet. While trusted relations between the PSTN and IP entities can be established between the enterprise networks, opening IN control fully to anyone on the Internet remains problematic. There may be no need for IN control to be fully opened, except in the cases of some well-understood services.

Potential security issues also prevent (at least for the time being) direct connection of the switching offices to the Internet. If that were done, the SCP itself could be placed on the Internet, and subsequently anything that IN does presently could be done in the Internet. Although there is a single ITU-T standard, it specifies different options. These options reflect implementations that differ not only between different continents (that is, Europe and North America) but also among the network operators in the United States. (Bell Operating Companies use the option corresponding to the Bellcore AIN; some IXCs use proprietary or European versions of IN.) Thus, direct interconnection of the PSTN switches and Internet SCPs, even if secure, would not provide global interoperability. Only interworking of the PSTN service control with the Internet hosts holds the promise (on which it has already started to deliver) of universal, global access to service control of the PSTN.

To conclude this section, we would like to repeat that so far we have taken an intentionally one-sided approach to the role of IN in the integration of PSTN and the Internet. Our only goal was to demonstrate how IN can be used to give more control in creating and executing services to the edge of the network. At this point, it is important to observe an interesting duality: While the PSTN benefits from Internet-based service creation and control, the IP networks greatly benefit from the existing PSTN-based service control for at least three different reasons: (1) efficiency of the access to IP networks; (2) provision of certain combined PSTN-Internet services; and (3) support of the existing PSTN services in the IP telephony environment.

The PSTN Access to IP Networks

Most of the technologies in the area of PSTN access to IP networks have been relatively well understood—that is, supported by standards and widely implemented in products. For this reason, much material on this subject resides in the next two parts (which cover available standards and products, respectively). The technologies we describe here relate to physical access to the network. We have already described the ISDN; with the growing demand for Internet access, residential subscription to the ISDN has grown (although not necessarily for the purposes for which the ISDN was invented). Typically, users bundle the B and D channels to get one big data pipe and use this pipe for Internet access. Other types of access technologies are described in the following section.
An important problem facing the PSTN today is the data traffic that it carries to IP networks; the PSTN was not designed for data traffic and therefore needs to offload this traffic as soon as possible. We describe the problem and the way it is tackled by the industry in a separate section, which, to make the overall picture more complete, we tie in with the technique of tunneling as the paradigm for designing IP VPNs. Both technologies have been developed independently and for different purposes; both, however, work together to resolve the access issues.

Physical Access

We talk about approaches to integration of the Internet with telephony in which the action occurs at the network layer or higher—things like carrying voice over IP or using control signals originating within the Internet to cause connections to appear and disappear within the telephony network. However, integration at the lowest level—the physical level—is also of great practical importance, and nowhere more so than in the access portion of the network. Here, advances in digital signal processing techniques and in high-speed electronics have resulted in remarkable progress in just the last few years, allowing access media originally deployed more than a century ago for telephony to also support access to the Internet at previously unimagined speeds. In our brief survey of these new access technologies, we will first provide an overview of the access environment, and then go on to describe both the 56-kbps PCM modem and the xDSL class of high-speed digital lines.
The Access Environment
Today it is quite possible, and not at all uncommon, for business users to obtain direct high-speed optical fiber access to telephony and data networks, including the Internet. For smaller locations, such as individual homes and small business sites, despite experiments in the 1980s with fiber to the home and in the early 1990s with hybrid fiber coax, physical access choices mostly come down to twisted pair telephone line and cable TV coax. We will not cover business fiber access or the cable modem story here, on the grounds that the former is a relatively well understood if impressively capable technology and that the latter is somewhat outside the scope of our Internet/telephony focus. Instead, we will look at recent developments in greatly speeding up access over ordinary telephone lines.
The twisted pair telephone line was developed in the 1880s as an improvement over earlier single-wire and parallel-wire designs. The single-wire lines, which used earth return, were noisy and subject to the variable quality of grounding connections, while the parallel-wire lines were subject to cross talk from one line to another. The twists in a twisted pair set up a self-canceling effect that reduces electromagnetic radiation from the line and thus mitigates cross talk. This simple design creates a very effective transmission medium that has found many uses in data communication (think of 10BaseT LANs and their even higher-speed successors) as well as in telephony. Two-wire telephone access lines are also called loops, as the metallic forward and return paths are viewed as constituting a loop for the current that passes through the telephone set.
In modern telephone networks, homes that are close enough to the central office are directly connected to it by an individual twisted pair (which may be spliced and cross-connected a number of times along the way). The twisted pair from a home farther away is connected instead to the remote terminal of a digital loop carrier (DLC) system. The DLC system then multiplexes together the signals from many telephone lines and sends them over a fiber-optic line (or perhaps over a copper line using an older digital technology like T1) to the central office. In the United States, close enough for a direct twisted pair line generally means less than 18,000 feet (18 kft). For a variety of reasons (including installation prior to the invention of DLCs), there are a fair number of twisted pair lines more than 18 kft in length. These use heavy-gauge wire, loading coils, or even amplifiers to achieve the necessary range. The statistics of loop length and the incidence of DLC use vary greatly among countries depending on demographic factors. In densely populated countries, loops tend to be short and DLCs may be rare. Another loop design practice that varies from country to country is the use of bridged taps. These unterminated twisted pair stubs are often found in the United States, but rarely in Europe and elsewhere.
From the point of view of data communication, the intriguing thing about this access environment is that in general it is less band-limited than an end-to-end telephone network connection, which of course is classically limited to a 4-kHz bandwidth. While there is indeed a steady falloff in the ease with which signals may be transmitted as their frequency increases, on most metallic loops (the exceptions are loops with loading coils and, more rarely, loops with active elements such as amplifiers) there is no sharp bandwidth cutoff. Thus, the bandwidth of a twisted pair loop is somewhat undefined and subject to being extended by ingenious signal processing techniques.
For decades, the standard way of pumping data signals over the telephone network was to use voiceband modems. Depending on their vintage, readers may remember when the data rate achievable by such devices was limited to 2400, 4800, or 9600 bps. This technology finally reached its limit a few years ago at around 33.6 kbps. By exploiting the extra bandwidth available in the loop plant, xDSL systems are able to reach much higher access speeds. We will describe these systems shortly, but first will take a small detour to talk about another intriguing recent advance in access that exploits a somewhat more subtle reservoir of extra bandwidth in the telephone network: the 56-kbps PCM modem.
The PCM Modem
Conventional voiceband modems are designed under the assumption that the end-to-end switched or private line connection through the telephone network is an analog connection with a bandwidth of just under 4 kHz, subject to the distortion of additive white Gaussian noise (AWGN). When the first practical voiceband modems were designed about 40 years ago, this was literally true. The path seen by a signal traveling from one telephone line to another over a long-distance switched network connection might be something like this: First over an analog twisted-pair loop to an electromechanical step-by-step switch, then over a metallic baseband or wireline analog carrier system to an electromechanical crossbar toll switch, then over a long-haul analog carrier system physically implemented as multiple microwave shots from hill to hill across a thousand miles, to another electromechanical crossbar toll switch, and back down through another analog carrier system to a local crossbar switch to the terminating analog loop. Private line connections were the same, except that permanently soldered jumper wires on cross-connect fields substituted for the electromechanical switches. Noise, of course, was added at every analog amplifier along the way for both the switched and private line cases.
A remarkable fact is that although when modeled as a black box the modern telephone network at the turn of the twenty-first century looks exactly the same as it did 40 years ago (a band-limited analog channel with some noise added to it), the interior of the network has been completely transformed to a concatenation of digital systems—mostly fiber-optic transmission systems and digital switches. Voice is carried through this network interior as sequences of 8-bit binary numbers produced by pulse-code modulation (PCM) encoders. Only the analog loops on both ends remain as a physical legacy of the old network.
By the way, what is it that makes these loops analog? After all, they are only long thin strands of copper metal—the most passive sort of electrical system imaginable. How does the loop know whether a signal impressed upon it is analog or digital? The answer is that it doesn’t know! In fact, in addition to the smoothly alternating electrical currents of analog voice, loops can carry all sorts of digital signals produced by modems and by all the varieties of digital subscriber line (DSL) systems. Ironically, the analog quality of the loop really derives from the properties of the analog telephone at the premises end of the loop and of the PCM encoder/decoder at the central office end—or, more precisely, from the assumption that the job of the PCM encoder is to sample a general band-limited analog waveform and produce a digital approximation of it, distorted by quantization noise—inevitable because the finite-length 8-bit word can only encode the signal level with finite precision.
It is this quantization noise, which yields a signal-to-noise ratio averaging about 33 to 39 dB, in combination with the bandwidth limitation of approximately 3 to 3.5 kHz, that limits conventionally designed modems to just over 33 kbps, as calculated using the standard Shannon channel capacity formula (Ayanoglu et al., 1998).
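As a rough check on these figures, the Shannon capacity C = B log2(1 + S/N) can be evaluated at the ends of the ranges just quoted. The sketch below is only an illustration using the numbers from the text:

    # Rough Shannon-capacity check using the figures quoted above.
    import math

    def capacity_bps(bandwidth_hz, snr_db):
        snr_linear = 10 ** (snr_db / 10.0)
        return bandwidth_hz * math.log2(1 + snr_linear)

    # Low end of the quoted ranges: ~3 kHz bandwidth, ~33 dB quantization SNR.
    print(round(capacity_bps(3000, 33)))   # about 33,000 bps
    # High end: ~3.5 kHz and ~39 dB gives a more generous upper bound.
    print(round(capacity_bps(3500, 39)))   # about 45,000 bps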
Enter the PCM modem. Quoting Ayanoglu et al., who developed this technology at Bell Labs in the early 1990s: “The central idea behind the PCM modem is to avoid the effects of quantization distortion by utilizing the PCM quantization levels themselves as the channel symbol alphabet.” In other words, rather than designing the modem output signals without reference to the operation of the PCM encoder and then letting them fall subject to the distortion of randomly introduced quantization noise, the idea is to design the modem output so that “the analog voltage sampled by the codec passes through the desired quantization levels precisely at its 8-kHz sampling instants.” In theory, then, a pair of PCM modems attached to the two analog loops in an end-to-end telephone connection could commandeer the quantization levels of the PCM codecs at the central office ends of the loops and use them to signal across the network at something approaching the 64-kbps output rate of the voice coders. Actually, filters in the central office equipment limit the loop bandwidth to 3.5 kHz, and this in turn means that no more than 56 kbps can be achieved. Also, it turns out that there are serious engineering difficulties with attempting to manipulate the output of the codecs by impressing voltage levels on the analog side.
Fortunately, there is an easier case that is also of great practical importance to the business of access to data networks—including the Internet. Most ISPs and corporate remote access networks employ a system of strategically deployed points of presence (POPs) at which dial-up modem calls from subscribers to their services are concentrated. At these points, the calls are typically delivered from the telephone company over a multiplexed digital transmission system, such as a T1 line. The ISP or corporate network can then be provided with a special form of PCM modem at the POP site that writes or reads 8-bit binary numbers directly to or from the T1 line (or other digital line), thus permitting the modem on the network side to directly drive the output of the codec on the analog line side as well as to directly observe the PCM samples it produces in the other direction. The result is that, in the direction from the network toward the consumer (the direction in which heavy downloads of things like Web pages occur), a rate approaching 56 kbps can be achieved. The upstream signal, originating in an analog domain where direct access to the PCM words is not possible, remains limited to somewhat lower speeds.
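The downstream arithmetic is easy to make concrete. If the network-side modem restricts itself to a usable subset of the 256 PCM code words, say 128 levels per sample (a hypothetical illustration, not the actual V.90 design), it conveys 7 bits per 8-kHz sample:

    # Hypothetical illustration of the PCM-modem rate arithmetic (not the V.90 spec).
    import math

    SAMPLES_PER_SECOND = 8000      # PCM codec sampling rate
    USABLE_LEVELS = 128            # assume half of the 256 8-bit code words are usable

    bits_per_symbol = int(math.log2(USABLE_LEVELS))         # 7 bits per sample
    downstream_bps = bits_per_symbol * SAMPLES_PER_SECOND   # 56,000 bps
    print(bits_per_symbol, downstream_bps)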
So hungry are residential and business users for bandwidth that 56-kbps modems became almost universally available on new PCs and laptops shortly after the technology was reduced to silicon—and even before the last wrinkles of standards compatibility were ironed out. The standards issues have since been worked through by ITU-T study group (SG) 16, and the 56-kbps modem is now the benchmark for dial-up access over the telephony network to the Internet.
Digital Subscriber Lines
Digital subscriber line (DSL) is the name given to a broad family of technologies that use clever signal design and signal processing to exploit the extra bandwidth of the loop plant and deliver speeds well in excess of those achievable by conventional voiceband modems. The term is often given as xDSL, where x stands for any of many adjectives used to describe different types of DSL. In fact, so many variations of DSL have been proposed and/or hyped, with so many corresponding values of x, that it can be downright confusing—too bad, really, since DSL technology has so much to offer. We will attempt to limit the confusion by describing the types of DSL that appear to be of most practical importance in the near term, with a few words about promising new developments.
The term DSL first appeared in the context of ISDN—which struggled with low acceptance rates and slow deployment until it enjoyed a mini-Renaissance in the mid-1990s, buoyed by the unrelenting demand for higher-speed access to the Internet. The ISDN DSL sends 160 kbps in both directions at once over a single twisted pair. The total bit rate accommodates two 64-kbps B channels, one 16-kbps D channel, and 16 kbps for framing and line control. Bidirectional transmission is achieved using an echo-canceled hybrid technology in most of the world. In Japan, bidirectionality is achieved using Ping Pong, called time compression multiplexing by the more serious-minded, in which transmission is performed at twice the nominal rate in one direction for a while, and then, after a guard time, the line is turned around and the other direction gets to transmit. ISDN DSLs can extend up to 18 kft, so they can serve most loops that go directly to the central office or to a DLC remote terminal. Special techniques may be used to extend the range in some cases, at a cost in equipment and special engineering. ISDN DSL was a marvel of its day, but is relatively primitive in comparison to more recently developed varieties of DSL.
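The 160-kbps line rate is simply the sum of the components just listed, as the following one-line check (illustrative only) confirms:

    # ISDN DSL line-rate budget from the text: 2B + D + framing/line control.
    b_channels = 2 * 64     # two 64-kbps B channels
    d_channel = 16          # one 16-kbps D channel
    overhead = 16           # framing and line control
    assert b_channels + d_channel + overhead == 160   # kbps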
HDSL
The next major type of DSL to be developed was the high-bit-rate digital subscriber line (HDSL). The need for HDSL arose as demand accelerated for direct T1-line interconnection to business customer locations, providing 1.544-Mbps access. T1 was a technology for digital transmission over twisted pairs that was originally developed quite a long time ago (the early 1960s, in fact) with application to metropolitan area telephone trunking in mind. With its 1.544-Mbps rate, a T1 line could carry twenty-four 64-kbps digital voice signals over two twisted pairs (one for each transmission direction). In this application, T1 was wildly successful, and by the late 1970s it had largely displaced baseband metallic lines and older analog carrier systems for carrying trunks between telephone central offices within metropolitan regions—distances up to 50 miles or so. However, applying T1 transmission technology directly to twisted pairs going to customer premises presented several difficulties. A basic one was that T1 required a repeater every 3000 to 5000 feet. This represented a major departure from practice in the loop plant, which was engineered around the assumption that each subscriber line was connected to the central office by a simple wire pair with no electronics along the way—or at least for up to 18 kft or so, when a DLC system might be encountered. Also, T1 systems employ high signal levels that present problems of cross talk, as well as difficulties for loop plant technicians not used to dealing with signals more powerful than those produced by human speech impinging on carbon microphones.
A major requirement for the HDSL system was therefore to provide for direct access to customer sites over the loop plant without the use of repeaters. The version of HDSL standardized by the ITU-T as G.991.1 in 1998 achieves repeaterless transmission over loops up to 12 kft long at both the North American T1 rate of 1.544 Mbps and the E1 rate of 2.048 Mbps used in Europe and some other places. Repeaters can be used to serve longer loops if necessary. When employed, they can be spaced at intervals of 12 kft or so, rather than the 3 to 5 kft required in T1. The repeaterless (or few-repeater) feature greatly reduces line conditioning expenses for deployment in the loop plant compared to traditional T1. In addition, HDSL can tolerate (within limits) the presence of bridged taps, avoiding the expense of sending out technicians to remove these taps.
HDSL systems typically use two twisted pairs, just as T1 does. However, rather than simply using one pair for transmitting from east to west and the other from west to east, HDSL reduces signal power at high frequencies by sending in both directions at once on each pair, but at only half the total information rate. The two transmission directions are separated electronically by using echo-canceled hybrids, just as in ISDN DSL.
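The rates quoted above follow directly from the framing arithmetic, as the sketch below shows. The per-pair HDSL figure of 784 kbps is the commonly cited value and is an addition here, not something stated in the text:

    # T1 rate: 24 channels x 8 bits + 1 framing bit per frame, 8000 frames per second.
    t1_bps = (24 * 8 + 1) * 8000
    assert t1_bps == 1_544_000

    # HDSL sends roughly half the payload on each of two pairs, full duplex,
    # plus a small per-pair overhead; 784 kbps per pair is the commonly cited figure.
    per_pair_kbps = (24 // 2) * 64 + 16
    print(per_pair_kbps)   # 784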
Overall, HDSL provides a much more satisfactory solution for T1/E1 rate customer access than the traditional T1-type transmission system. Work is currently under way in the standards bodies on a second-generation system, called SDSL (for “symmetric” or “single-pair” DSL) or sometimes HDSL2, which will achieve the same bit rates over a single wire pair. To do this without recreating the cross talk problems inherent in T1 requires much more sophisticated signal designs borrowed from the most advanced modem technology, which in turn requires much more powerful processors at each end of the loop for implementation. By now the pattern should be familiar—to mine the extra bandwidth hidden in the humble loop plant, we apply high-speed computation capabilities that were quite undreamed of when Alexander Graham Bell began twisting pairs of insulated wire together and observing what a nice clean medium they produced for the transmission of telephone speech!
ADSL
The second major type of DSL of current practical significance is asymmetric digital subscriber line (ADSL). Compared to HDSL, ADSL achieves much higher transmission speeds (up to 10 Mbps) in the downstream direction (from the central office toward the customer) and does this over a single wire pair. The major trade-off is that speeds in the upstream direction (from the customer toward the central office) are reduced, being limited to 1 Mbps at most. ADSL is also capable of simultaneously supporting analog voice transmission.
Considering these basic characteristics, it is clear that ADSL is particularly suited to residential service in that it can support:
  • High-speed downloading in applications like Web surfing
  • Rather lower speeds from the consumer toward the ISP
  • Ordinary voice service on the same line
On the other hand, these characteristics also meet the needs of certain small business (or remote business site) applications. The basic business proposition of ADSL is that the applications served by these asymmetric characteristics, which are the key to achieving the high downstream rate, represent a significant market segment. Time will tell how ADSL fares against other access options such as cable modems and fixed wireless technologies, but the proposition seems a plausible one.
The way ADSL exploits asymmetry to achieve higher transmission rates has to do with the nature of cross talk and with the frequency-dependent transmission characteristics of telephone lines. Earlier we noted that there is not a sharp frequency cutoff on unloaded loops, but there is a steady decline in received signal power with increasing frequency. If a powerful high-frequency (high-bit-rate) transmitter is located near a receiver trying to pick up a weak incoming high-frequency signal, the receiver will be overwhelmed by near-end cross talk. The solution is to transmit the high-frequency (high-bit-rate) signal in only one direction. A basic ADSL system is thus an application of classic frequency division multiplexing, in which a wide, high-frequency band is used for the high-bit-rate downstream channel, a narrower and lower-frequency channel is used for the moderate-bit-rate upstream transmission, and the baseband region is left clear for ordinary analog voice (see Figure 1).
Figure 1: A basic ADSL system.
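For concreteness, one commonly quoted ADSL frequency plan is sketched below. The band edges are approximate, vary among implementations, and are not taken from the text:

    # Approximate ADSL frequency plan (kHz); edges vary among implementations.
    adsl_band_plan_khz = {
        "analog voice (POTS)":  (0, 4),
        "guard band":           (4, 26),
        "upstream data":        (26, 138),
        "downstream data":      (138, 1104),
    }
    for band, (low, high) in adsl_band_plan_khz.items():
        print(f"{band}: {low}-{high} kHz")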
The basic concept of ADSL is thus rather simple. However, implementations utilize some very advanced coding, signal processing, and error control techniques in order to achieve the desired performance. Also, a wide variety of systems using differing techniques have been produced by various manufacturers, making standardization something of a challenge. Key ITU-T standards are G.992.1 and G.992.2. The latter provides for splitterless ADSL, which deserves some additional description.
ADSL Lite
In the original ADSL concept, a low-pass filter is installed at the customer end of the line to separate the baseband analog voice signal from the high-speed data signals (see Figure 2). In most cases, this filter requires the trouble and expense of an inside wiring job at the customer premises. To avoid this expense, splitterless ADSL, also known more memorably as ADSL Lite, eliminates the filter at the customer end. This lack of a filter can create some problems, such as error bursts in the data transmission when the phone rings or is taken off hook, or hissing sounds in some telephone receivers. However, the greatly simplified installation was viewed as well worth the possible small impairments by most telephone companies, and they pushed hard for the adoption of splitterless ADSL in standards.
Figure 2: The original ADSL concept.
Factors Affecting Achieved Bit Rate
Like ISDN DSL and HDSL, ADSL aims to operate over a large fraction of the loops that are up to 18 kft long. However, the actual bit rate delivered to the customer may vary depending on the total loss and noise characteristics of the loop. The ANSI standard for ADSL (T1.413) provides for rate-adaptive operation much like that employed by high-speed modems. The downstream rate can be as high as 10 Mbps on shorter, less noisy loops, but may go down to 512 kbps on very long or noisy loops. Upstream rates may be as high as 900 kbps or as low as 128 kbps.
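In the DMT line code specified by T1.413 and G.992.1, rate adaptation is commonly realized by per-tone bit loading: each subcarrier carries as many bits as its measured signal-to-noise ratio supports, and the sum over all tones, times the DMT symbol rate of roughly 4000 symbols per second, sets the achieved rate. The sketch below is a minimal illustration; the SNR gap and per-tone SNR values are assumptions, not figures from the standard:

    import math

    def bits_for_tone(snr_db, gap_db=9.8, max_bits=15):
        """Bits loadable on one DMT tone, using an illustrative SNR-gap formula."""
        snr = 10 ** (snr_db / 10.0)
        gap = 10 ** (gap_db / 10.0)
        return min(max_bits, max(0, int(math.log2(1 + snr / gap))))

    # Achieved rate = (sum of bits over tones) x 4000 DMT symbols per second.
    example_snrs_db = [45, 40, 35, 25, 15, 8]   # hypothetical per-tone SNRs
    rate_bps = sum(bits_for_tone(s) for s in example_snrs_db) * 4000
    print(rate_bps)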
Future DSL Developments
We have already mentioned that work is under way on an improved version of HDSL, called HDSL2. Another name for this system, sometimes seen in the literature, is symmetric DSL or single-pair DSL (SDSL).
Another new system, called very-high-rate DSL (VDSL), is under discussion in standards bodies. It would provide very high downstream rates of up to 52 Mbps and would work in combination with optical transmission into the neighborhood of the customer; high-speed transmission over the copper loop would be used only for the last kilometer or so.
Applicability
We’ve described a number of advanced access technologies that can support remarkably high-data-rate access to data networks (including the Internet) over the existing telephone plant. How do you decide which ones, if any, to use?
In the case of the 56-kbps PCM modem, the decision will likely be made for you by the manufacturer of your PC or laptop. It’s simply the latest in modems and is often supplied as a standard feature.
For xDSL, the situation is a bit more complex. In most cases, you obtain a service from a telephone company or other network provider that uses HDSL or ADSL as an underlying transmission technology. The technology may or may not be highlighted in the service provider’s description of the offering. Essentially, the decision comes down to weighing the price of the service against how well it satisfies the needs of the application, including speed but also such factors as guarantees of reliability, speed of installation, whether an analog voice channel is included or needed, and so on. If you are more adventurous, you may try obtaining raw copper pairs from a service provider and applying your own xDSL boxes. If you contemplate going this route, you really need to learn a lot more about the transmission characteristics of these systems than we’ve covered here, and you should perhaps start by consulting some of the references listed in our bibliography.

Internet Offload and Tunneling

Internet traffic has challenged the foundation of the PSTN—the way it has been engineered. The widespread view, based on the high perceived quality that telephone users have enjoyed for many years, is that the telephone network can carry any calls of any duration. In fact, the PSTN has been rather tightly engineered to use its resources in a way adapted to the patterns of voice calls. Typical Internet access calls last 20 minutes, while typical voice calls last between 3 and 5 minutes (Atai and Gordon, 1997). The probability that a voice call exceeds one hour is 1 percent, versus 10 percent for Internet access calls. As a result, the access calls tie up the resources of local switches and interoffice trunks, which in turn increases the number of uncompleted calls on the PSTN. (As we mentioned in the section on network traffic management, the PSTN can block calls to a switch with a high number of busy trunks or lines. The caller typically receives a fast busy signal in this case.) In today's PSTN, the call blocking rate is the principal indicator of the quality of service. Meanwhile, the actual bandwidth of the voice circuits is grossly wasted—Internet users consume only about 20 percent of the circuit bandwidth. The situation is further complicated by the flat-rate pricing of online services, which is believed to encourage Internet callers to stay on line twice as long as they would with a metered-rate plan.
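To see why longer holding times strain trunk engineering, one can run the numbers with the classical Erlang B blocking formula. This formula is not discussed in the text, and the figures below are purely illustrative:

    # Illustrative Erlang B calculation: for the same call-attempt rate, longer
    # holding times mean more offered load (erlangs) and therefore more blocking.
    def erlang_b(offered_erlangs, trunks):
        blocking = 1.0
        for n in range(1, trunks + 1):
            blocking = (offered_erlangs * blocking) / (n + offered_erlangs * blocking)
        return blocking

    attempts_per_hour = 200
    trunks = 24
    for hold_minutes in (4, 20):   # voice-like versus Internet-access-like holding time
        offered = attempts_per_hour * hold_minutes / 60.0
        print(hold_minutes, round(erlang_b(offered, trunks), 3))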
The three problem areas identified in Atai and Gordon (1997) are (1) the local (ingress) switch from which the call originates; (2) the tandem switch and interoffice trunks; and (3) the local (egress) switch that terminates calls at the ISP modem pool. (The cited document does not take into account IXC issues, but it is easy to see that they are very similar to the second problem area.) The third problem area is the most serious because it can cause focused overload. Presently, such egress switches make up roughly a third of all local switches. The acuteness of the problem has been forcing carriers to segregate the integrated traffic and offload it to a packet network as soon as possible.
The two options for carrying out the offloading are (1) to allow the Internet traffic to pass through the ingress switch, where it would be identified, and (2) to intercept the Internet traffic on the line side of the ingress switch. In either case, the Internet traffic must first be identified, which is best done by IN means. One way (unlikely to be implemented) is to collect all the ISP and enterprise modem pool access numbers and trigger on them—not a small feat, even if a feasible one. Moreover, this triggering would slow down all local switches to a great extent. A second way is to use local number portability queries; to implement this solution, all modem pool numbers would have to be configured as ported numbers. A third, and much better, way is for ISPs and enterprise modem pools to use a single-number service (an example is an 800 number in the United States) and let the IN route the call; the external service logic would inform the switch about the nature of the call (this information would naturally be stored). Many large enterprises already use 800 numbers for their modem pools. A fourth way is to assign a special prefix to the modem pool number; the switch would then know right away, even before all the digits had been dialed, that it was dealing with an Internet dial-up. (Presently, however, switches often identify an Internet call by detecting the modem signals on the line.)
Two post-switch offloading solutions are gaining momentum. The first is terminating all calls in a special multiservice module—effectively a part of the local switch—in the PSTN. The multiservice module would then send the data traffic (over an ATM, frame relay, or IP network) to the ISP or enterprise access server (which would no longer need to be involved with the modems). The other solution is to terminate all calls at network access servers that would act as switches in that they would establish a trunk with the ingress switch. The access servers would then communicate with the ISP or enterprise over the Internet. One problem with this solution is that access servers would have to be connected to the SS No. 7 network, which is expensive and, so far, hardly justified. To correct this situation, a new SS No. 7 network element, the SS7 gateway, acts as a proxy on behalf of several access servers (thus significantly cutting the cost). The access servers communicate with the SS7 gateway via an enhanced (that is, modified) ISDN access protocol, as depicted in Figure 3.
Figure 3: Internet offload with the SS7 gateway.
At this point you may ask: How are the network access servers connected to the rest of the ISP or enterprise network? Until relatively recently, this was done by means of leased telephone lines (permanent circuits) or private lines, both of which were (and still are) quite expensive. Another way to connect the islands of a network is to use tunneling, that is, sending packets whose addresses are significant only to a particular network by encapsulating them (as a payload) in packets whose addresses are significant to the whole of the Internet; the inner packets travel between the two border points of the network through what is metaphorically called a tunnel. The encapsulated packets are not examined by the intermediate nodes, because to those nodes they are nothing but payload carried in the outer packets. Only the endpoints of a tunnel are aware of the payload, which is extracted and acted on by the destination endpoint. Tunnels are essential for an application called the virtual private network (VPN). With tunneling, for example, two nodes of a private network that have no direct link between them may use the Internet or another IP network as that link. We will address tunneling systematically later, as far as security and the use of existing protocols are concerned. Another essential aspect of tunneling is quality of service (QoS), so we return to that issue when reviewing multiprotocol label switching (MPLS) technology. As you have probably noticed, we have already ventured into a purely IP area; this is one example where it is virtually impossible to describe a PSTN solution without invoking its IP counterpart.
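Returning to the encapsulation idea for a moment, the following minimal sketch illustrates it in the abstract. The field names and addresses are hypothetical and do not correspond to any particular tunneling protocol:

    from dataclasses import dataclass

    @dataclass
    class Packet:
        src: str        # private or globally routable address, depending on the header
        dst: str
        payload: bytes

    def encapsulate(inner: Packet, tunnel_entry: str, tunnel_exit: str) -> Packet:
        """Wrap a private-network packet inside an outer packet whose addresses
        are significant to the whole Internet; intermediate routers see only the
        outer header and treat the inner packet as opaque payload."""
        return Packet(src=tunnel_entry, dst=tunnel_exit, payload=repr(inner).encode())

    def decapsulate(outer: Packet) -> bytes:
        """Only the tunnel exit looks inside the outer packet's payload."""
        return outer.payload

    inner = Packet(src="10.0.1.5", dst="10.0.2.7", payload=b"private data")
    outer = encapsulate(inner, tunnel_entry="203.0.113.1", tunnel_exit="198.51.100.9")
    print(decapsulate(outer))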
Going back to the employment of the SS7 gateway, we should note one important technological development: with the SS7 gateway, an ISP can be connected to a LEC as a CLEC.
