Evolution of Operation, Administration, and Maintenance (OA&M)

There are several functions performed in the PSTN under the common name OA&M. These functions include provisioning (that is, distributing all the necessary software to make systems available for delivering services), billing, maintenance, and ensuring the expected level of quality of service. The scope of the OA&M field is enormous: it deals with transmission facilities, switches, network databases, common channel signaling network elements, and so on. Because of this scope, referring to OA&M as a single task would be as great a generalization as referring to "the universal computer application." As we show later in this section, the development of PSTN OA&M has been evolutionary; as new pieces of equipment and new functions were added to the PSTN, new OA&M functions (and often new pieces of equipment) were created to administer them. This development has posed tremendous administrative problems for network operators. Many hope that as the PSTN and Internet ultimately converge into one network, the operations of the new network will be simpler than they are today.
Initially, all OA&M functions were performed by humans, but they have progressively become automated. In the 1970s, each task associated with a piece of transmission or switching equipment was run by a task-specific application developed solely for that purpose. As a result, all applications were developed separately from one another. They had a simple text-based user interface; administrators used teletype terminals connected directly to the entities they administered.

In the 1980s, many tasks previously performed by humans became fully automated. The applications were developed to emulate humans (up to the point of the programs exchanging text messages exactly as they would appear on the screen of a teletype terminal). These applications, called operations support systems (OSSs), were developed for the myriad OA&M functions. In most cases, a computer executing a particular OSS was connected to several systems (such as switches or pieces of transmission equipment) by RS-232 lines, with a crude ad hoc protocol developed for that OSS alone. Later, these computers served as concentrators and were in turn connected to the mainframes executing the OSSs. Often, the introduction of a new OSS meant that more computers, and more lines to connect them to the managed elements, were needed.

You may ask why the common channel signaling network was not used for interconnection to operations systems. The answer is that this network was designed only for signaling and could not bear any additional—and unpredictable—load. As a matter of fact, a common network for interconnecting OSSs and managed elements has never been developed, although in the late 1980s and early 1990s there was a plan to develop such a network based on the OSI model. In some cases, X.25 was used; in others, proprietary data networks developed by the manufacturers of computer equipment were used by telephone companies. A serious industry attempt to create a common mechanism to be used by all OA&M applications resulted in a standard called Telecommunications Management Network (TMN) and, specifically, its part known as the Common Management Information Protocol (CMIP), developed jointly by the International Organization for Standardization (ISO) and ITU-T.

We could not possibly even list all existing OA&M tasks. Instead we review one specific task called network traffic management (NTM). This task is important to the subject for the following three reasons. First, the very problem this task deals with is a good illustration of the vulnerability of the PSTN to events it has not been engineered to handle. (One such event—overload of PSTN circuits because of Internet traffic—has resulted in significant reengineering of the access to the Internet.) Second, the problems of switch overload and network overload are not peculiar to the PSTN—they exist (and are dealt with) today in data networks. Yet, the very characteristics of voice traffic are likely to create exactly the same problems in the Internet and IP networks once IP telephony takes off. Similar problems have similar solutions, so we expect the network traffic management applications to be useful in IP telephony. Third, IN and NTM often work on the same problems; it has long been recognized that they need to be integrated. The integration has not yet taken place in the PSTN, so it remains among the most important design tasks for the next-generation network.
NTM was developed to ensure quality of service (QoS) for PSTN voice calls. Traditionally, quality of service in the PSTN has been defined by factors like postdial delay or the fraction of calls blocked by one of the network switches. The QoS problem exists because it would be prohibitively expensive to build switches and networks that would allow us to interconnect all telephone users all the time. On the other hand, it is not necessary to do so, because not all people are using their telephones all the time. Studies have determined the proportion of users making their calls at any given time of the day and day of the week in a given time zone, and the PSTN has consequently been engineered to handle just as much traffic as needed. (Actually, the PSTN has been slightly overengineered to make up for potential fluctuations in traffic.) If a particular local switch is overloaded (that is, if all its trunks or interconnection facilities are busy), it is designed to block (that is, reject) calls.

Initially, the switches were designed to block calls only when they could not handle them independently. By the end of the 1970s, however, the understanding of a peculiar phenomenon observed in the Bell Telephone System—called the Mother’s Day phenomenon—resulted in a significant change in the way calls were blocked (as well as other aspects of the network operation).

Figure 1 demonstrates what happens with the toll network in peak circumstances. The network, engineered at the time to handle a maximum load of 1800 erlangs (an erlang is a unit of traffic load: 1 erlang corresponds to one circuit occupied continuously, or 3600 call-seconds of traffic per hour), was supposed to respond to ever increasing load just as depicted in the top line of the graph: to approach the maximum load and more or less stay there. In reality, however, the network's performance inexplicably fell well below the engineered level as the load increased. What was especially puzzling was that only a small portion of the switches were overloaded at any time. Similar problems occurred during natural disasters, such as earthquakes and floods. (Fortunately, disasters have not occurred with great frequency.) Detailed studies produced an explanation: As the network attempted to build circuits to the overloaded switches, these circuits could not be used by other callers, even those whose calls would pass through or terminate at the underutilized switches. Thus, the root of the problem was that ineffective call attempts tied up the usable resources.
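To make the erlang arithmetic concrete, the sketch below computes an offered load from a call rate and holding time, and estimates the blocking probability with the classical Erlang B recursion. The traffic figures are invented for illustration; they are not taken from the network described above.

```python
def offered_load_erlangs(call_attempts_per_hour, mean_hold_secs):
    """Offered load in erlangs: 1 erlang = one circuit occupied
    continuously, i.e., 3600 call-seconds of traffic per hour."""
    return call_attempts_per_hour * mean_hold_secs / 3600.0

def erlang_b(load_erlangs, circuits):
    """Blocking probability for `circuits` trunks offered
    `load_erlangs` of traffic, via the standard Erlang B recursion:
    B(0) = 1;  B(m) = A*B(m-1) / (m + A*B(m-1))."""
    b = 1.0
    for m in range(1, circuits + 1):
        b = (load_erlangs * b) / (m + load_erlangs * b)
    return b

# 1200 calls/hour averaging 3 minutes each -> 60 erlangs offered.
load = offered_load_erlangs(1200, 180)
print(load)                          # 60.0
print(round(erlang_b(load, 70), 4))  # blocking with 70 trunks
```

The recursion shows why the PSTN can be "slightly overengineered" rather than sized for every phone at once: adding a modest number of trunks above the offered load drives the blocking probability down rapidly.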

Figure 1: The Mother’s Day phenomenon.

The only solution was to block the ineffective call attempts. In order to identify such attempts, the network needed to collect, in one place, extensive information about the whole network. For this purpose, an NTM system was developed. The system polled the switches periodically to determine their states; in addition, switches could themselves report certain extraordinary events (called alarms) asynchronously with the polling. For example, every five minutes the NTM collects the values of attempts per circuit per hour (ACH) and connections per circuit per hour (CCH) from all switches in the network. If ACH is much higher than CCH, it is clear that ineffective attempts are being made. The NTM applications have used artificial intelligence technology to build inference engines that pinpoint network problems and suggest the necessary corrective actions, although they still rely on a human's ability to infer the cause of a problem.
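The ACH/CCH comparison above can be sketched as follows. The function name, ratio threshold, and switch identifiers are invented for illustration; they are not part of any real NTM OSS interface.

```python
# Hypothetical sketch of the five-minute NTM poll described above:
# a switch whose attempts-per-circuit-per-hour (ACH) greatly exceeds
# its connections-per-circuit-per-hour (CCH) is making ineffective
# attempts and is flagged for corrective control.

def flag_ineffective(samples, ratio_threshold=3.0):
    """samples: dict mapping switch id -> (ach, cch).
    Returns the ids whose ACH/CCH ratio exceeds the threshold."""
    flagged = []
    for switch_id, (ach, cch) in samples.items():
        if ach == 0:
            continue  # no attempts: nothing to flag
        # Guard against division by zero when no calls complete.
        if cch == 0 or ach / cch > ratio_threshold:
            flagged.append(switch_id)
    return flagged

poll = {"newark": (1200, 1100), "boston": (4000, 400), "idle": (0, 0)}
print(flag_ineffective(poll))   # ['boston']
```

In a real NTM system the threshold would be tuned per trunk group, and the flagged switches would feed an inference engine rather than a simple list.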

Overall, problems may arise because of a transmission facilities malfunction (as in cases when rats or moles chew up a fiber link; sharks have been known to do the same at the bottom of the ocean) or a breakdown of the common channel signaling system. In a physically healthy network, however, the problems are caused by use above the engineered level (for example, on holidays) or by what is called focused overload, in which many calls are directed into the same geographical area. Not only natural disasters can cause overload. A PSTN service called televoting has been expected to do just that, as has, for obvious reasons, the freephone service, such as 800 numbers in the United States. (Televoting has typically been used by TV and radio stations to gauge the number of viewers or listeners, who are asked a question and invited to call either of two given numbers free of charge. One number corresponds to a "yes" answer; the other to "no." Fortunately, IN has built-in mechanisms for blocking such calls to prevent overload.)

Once the cause of the congestion in the network is detected, the NTM OSS deals with the problem by applying controls, that is, sending to switches and IN SCPs the commands that affect their operation. Such controls can be restrictive (for example, directionalization of trunks, making them available only in the direction leading from the congested switch; cancellation of alternative routes through congested switches; or blocking calls that are directed to congested areas) or expansive (for example, overflowing traffic to unusual routes in order to bypass congested areas). Although the idea of an expansive control appears strange at first glance, this type of control has been used systematically in the United States to fix congestion in the Northeast Corridor between Washington, D.C., and Boston, which often takes place between 9 and 11 o’clock in the morning. Since during this period most offices are still closed in California (which is three hours behind), it is not unusual for a call from Philadelphia to Boston to be routed through a toll switch in Oakland.
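The restrictive-versus-expansive decision described above can be sketched as follows. The control names, field names, and route identifiers are hypothetical, chosen only to mirror the examples in the text.

```python
# Hypothetical sketch of NTM control selection. A congested switch
# receives restrictive controls (directionalize its trunks, cancel
# alternate routes through it); calls toward a congested destination
# may instead be handled expansively, overflowing onto an idle
# bypass route.

def select_controls(switch, idle_bypass_routes):
    """switch: dict with 'id', 'congested', 'destination_congested'.
    Returns a list of (control, argument) commands to apply."""
    controls = []
    if switch["congested"]:
        # Restrictive controls applied at the congested switch itself.
        controls.append(("directionalize_trunks", switch["id"]))
        controls.append(("cancel_alternate_routes", switch["id"]))
    elif switch["destination_congested"] and idle_bypass_routes:
        # Expansive control: reroute via an unusual but idle path.
        controls.append(("reroute_via", idle_bypass_routes[0]))
    return controls

philly = {"id": "philadelphia", "congested": False,
          "destination_congested": True}
print(select_controls(philly, ["oakland"]))
# [('reroute_via', 'oakland')]
```

The second call mirrors the Northeast Corridor example: with California trunks idle in the morning, a Philadelphia-to-Boston call overflows through Oakland.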

Overall, the applications of global network management (as opposed to specific protocols) have been at the center of attention in the PSTN industry. This trend continues today. The initial agent/manager paradigm on which both the Open Systems Interconnection (OSI) and Internet models are based has evolved into an agent-based approach, as described by Bieszad et al. (1999). In that paper, an (intelligent) agent is defined as a computational entity "which acts on behalf of others, is autonomous, . . . and exhibits a certain degree of capabilities to learn, cooperate and move." Most of the research on this subject comes in the form of applying artificial intelligence to network management problems. Agents communicate with each other using specially designed languages [such as the Agent Communication Language (ACL)]; they also use specialized protocols [such as the Contract-Net Protocol (CNP)]. As a result of this intensive research, two agent systems—Foundation for Intelligent Physical Agents (FIPA) and Mobile Agent System Interoperability Facilities (MASIF)—have been proposed. These specifications, however, are not applicable to the products and services described here, for which reason they are not addressed further. Consider them, though, an important reference to a technology in the making.

Intelligent Network (IN)

The first service introduced in the PSTN with the help of network databases in 1980 was calling card service; soon after that, a series of value-added services for businesses called inward wide area telecommunications service (INWATS) were introduced. When the U.S. Federal Communications Commission (FCC) approved a tariff for expanded 800 service in 1982, the Bell system was ready to support it with many new features due to the distributed nature of the implementation. For example, a customer dialing an 800 number of a corporation could be connected to a particular office depending on the time of day or day of week. As the development of such features progressed, it became clear that in many cases it would be more efficient to decide how to route a customer’s call after prompting the customer with a message that provided several options, and instructions on how to select them by pushing dial buttons on the customer’s telephone. For the purpose of customer interaction, new devices that could maintain both the circuit connections to customers (in order to play announcements and collect digits) and connections to the SS No. 7 network (to receive instructions and report results to the databases) were invented and deployed. The network database ceased to be just a database—its role was not simply to return responses to the switch queries but also to instruct the switches and other devices as to how to proceed with the call. Computers previously employed only for storing the databases were programmed with the so-called service logic, which consisted of scripts describing the service. This was the historical point at which the service logic started to migrate from the switches.

After the 1984 court decree broke up the Bell System, the newly created Regional Bell Operating Companies (RBOCs) ordered their R&D arm, Bell Communications Research (Bellcore), to develop a general architecture and specific requirements for central, network-based support of services. An urgent need for such an architecture was dictated by the necessity of buying the equipment from multiple vendors. This development resulted in two business tasks that Bellcore was to tackle while developing the new architecture: (1) The result had to be equipment-independent and (2) as many service functional capabilities as possible were to move out of the switches (to make them cheaper). The tasks were to be accomplished by developing the requirements and getting the vendors to agree to them. As Bellcore researchers and engineers were developing the new architecture, they promoted it under the name of Intelligent Network. The main result of the Bellcore work was a set of specifications called Advanced Intelligent Network (AIN), which went through several releases.

AT&T, meanwhile, continued to develop its existing architecture, and its manufacturing arm, AT&T Network Systems, built products for the AT&T network and RBOCs. Only the latter market, however, required adherence to the AIN specifications. In the second half of the 1980s, similar developments took place around the world—in Europe, Japan, and Australia. In 1989, a standards project was initiated in ITU to develop recommendations for the interfaces and protocols in support of Intelligent Network (IN).

To conclude the historical review of IN, we give you some numbers: Today, in the United States, at least half of all interexchange carrier voice calls are IN supported. This generates on the order of $20 billion in revenue for IXCs. LECs use IN to implement local number portability (LNP), calling name and message delivery, flexible call waiting, 800 service carrier selection, and a variety of other services (Kozik et al., 1998). The IN technology also blends wireless networks and the PSTN, and it is being used strategically in the PSTN-Internet convergence.

We are ready now to formulate a general definition of IN: IN is an architectural concept for the real-time execution of network services and customer applications. The architecture is based on two main principles: network independence and service independence. Network independence means that the IN function is separated from the basic switching functions as well as the means of interconnection of the switches and other network components. Service independence means that the IN is to support a wide variety of services by using common building blocks.

The IN execution environment includes the switches, computers, and specialized devices, which, at a minimum, can communicate with the telephone user by playing announcements and recognizing dialed (touch-tone) digits. (More sophisticated versions of such devices can also convert text to voice and even vice versa, send and receive faxes, and bridge teleconferences.) All these components are interconnected by means of a data communications network. The network can be as small as a local area network (LAN), in which case the computers and devices serve one switch (typically a PBX), or it can span most switches in an IXC or LEC. In the latter case, the data network is SS No. 7, and the term IN usually means this particular network-wide arrangement. [In the single-switch case, the technology is called computer-telephony integration (CTI).]

The overall IN architecture also includes the so-called service creation and service management systems used to program the services and distribute these programs and other data necessary for their execution among the involved entities.

Figure 1 depicts the network-wide IN execution environment. We will need to introduce more jargon now. The service logic is executed by a service control point (SCP), which is queried—using the SS No. 7 transaction mechanism—by the switches. The switches issue such queries when their internal logic detects triggers (such as a telephone number that cannot be translated locally, a need to authorize a call, an event on the line—such as called party being busy, etc.). The SCP typically responds to the queries, but it can also start services (such as wake-up call) on its own by issuing an instruction to a switch to start a call.
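The trigger-and-query exchange just described can be illustrated with a short sketch. The function names, trigger labels, and numbers are hypothetical; a real switch-to-SCP exchange runs over the SS No. 7 transaction mechanism, which the dictionary lookup below merely stands in for.

```python
# Hypothetical sketch of IN trigger detection and the SCP query.
# A switch hits a trigger (e.g., an 800 number it cannot translate
# locally) and queries the SCP, which returns routing instructions.

def detect_trigger(dialed_number, local_routing_table):
    """Return a trigger name, or None if the switch can proceed alone."""
    if dialed_number.startswith("800"):
        return "toll_free"          # needs central translation
    if dialed_number not in local_routing_table:
        return "untranslatable"
    return None

def scp_query(trigger, dialed_number, translation_db):
    """Stand-in for the SS No. 7 transaction to the SCP: the response
    instructs the switch how to proceed with the call."""
    routable = translation_db.get(dialed_number)
    if routable is None:
        return {"action": "play_announcement", "announcement": "invalid"}
    return {"action": "route", "number": routable}

db = {"8005550199": "7325550137"}
trig = detect_trigger("8005550199", {})
if trig:
    print(scp_query(trig, "8005550199", db))
# {'action': 'route', 'number': '7325550137'}
```

Note that the response is an instruction, not a bare database row: this is exactly the shift described above, from the SCP as a passive database to the SCP as the executor of service logic.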
Figure 1: The IN architecture.

As we noted before, to support certain service features (such as 800 number translation), the SCP may need to employ special devices (in order to play announcements and collect digits or establish a conference bridge). This job is performed by the intelligent peripheral (IP). The IP is connected to the telephone network via a line or trunk, which enables it to communicate with a human via a voice circuit. The IP may also be connected to the SS No. 7 network, which allows it to receive instructions from the SCP and respond to them. (Alternatively, the SCP instructions can be relayed to the IP through the switch to which it is connected.) As SCPs have become executors of services (rather than just the databases they used to be), the function of the databases has been moved to devices called service data points (SDPs).

Finally, there is another device, called a service node (SN), which is a hybrid of the IP, the SCP, and a rather small switch. Similar to the SCP, the SN is a general-purpose computer, but unlike the SCP it is equipped with a switching fabric and the other specialized resources typically associated with an IP. The SN connects to the network via the ISDN access mechanism, and it runs its own service logic, which is typically engaged when a switch routes a call to it. An example of its typical use is in voice-mail service. When a switch detects that the called party is busy, it forwards the call to the SN, which plays the announcement, interacts with the caller, stores voice messages and reads them back, and so on. The protocols used for the switch-to-SCP, SCP-to-SDP, and SCP-to-IP communications are known under the umbrella name Intelligent Network Application Part (INAP). INAP has evolved from the CCS switch-to-database transaction interactions; it is presently based on the Transaction Capabilities (TC) protocol of Signalling System No. 7.

Because the SCP and SN are general-purpose computers, they can be easily connected to the Internet and thus engage the Internet endpoints in the PSTN services. This observation was made as early as 1995, and it has already had far-reaching consequences, as will be seen in the material that follows.

Integrated Services Digital Network (ISDN)

The need for data communications services grew throughout the 1970s. These services were provided (mostly to companies rather than individuals) by the X.25-based packet-switched data networks (PSDNs). By the early 1980s it was clear to the industry that it was both commercially and technologically feasible to integrate data communications and voice in a single digital pipe and to open such pipes to businesses (as the means of PBX access) and households. The envisioned applications included video telephony, online directories, synchronization of a customer's call with the delivery of the customer's data to the computer screen of the answering agent, telemetry (that is, monitoring devices, such as plant controls or smoke alarms, and automatically reporting associated events via telephone calls), and a number of purely voice services. In addition, since the access was supposed to be digital, the voice channels could be used for data connections at a much higher rate than had ever been possible with analog lines and modems.

The ISDN telephone (often called the ISDN terminal) is effectively a computer that runs a specialized application. The ISDN telephone always has a display; in some cases it even looks like a computer terminal, with a screen and keyboard in addition to the receiver and speaker. Several such terminals can be connected to the network termination (NT1) device, which can be placed in the home or office and which has a direct connection to the ISDN switch. Non-ISDN terminals (telephones) can also be connected to the ISDN via a terminal adapter. In the enterprise, a digital PBX connects to the NT1, and all other enterprise devices (including ISDN and non-ISDN terminals and enterprise data network gateways) terminate in the PBX.

These arrangements are depicted on the left side of Figure 1. The right side of the figure shows the partial structure of the PSTN, which at this level does not look different from the pre-ISDN PSTN structure. This similarity is no surprise, since the PSTN had already gone digital prior to the introduction of the ISDN. In addition, bringing the ISDN to either the residential or the enterprise market did not require much rewiring, because the original twisted pair of copper wires could be used in about 70 percent of subscriber lines (Werbach, 1997). What has changed is that the codecs moved to the ultimate points of the end-to-end architecture, the ISDN terminals, and the local offices did need to change somewhat to support the ISDN access signaling standardized by ITU-T. Again, common channel signaling predated the ISDN, and its SS No. 7 version could easily perform all the functions needed for intra-ISDN network signaling.

Figure 1: The ISDN architecture.

As for the digital pipe between the network and the user, it consists of channels of different capacities. Some of these channels are defined for carrying voice or data; others (actually, there is only one in this category) are used for out-of-band signaling. (There is no in-band signaling even between the user and the network with the ISDN.) The following channels have been standardized for user access:

  • A. 4-kHz analog telephone channel.

  • B. 64-kbps digital channel (for voice or data).

  • C. 8- or 16-kbps digital channel (for data, to be used in combination with channel A).

  • D. 16- or 64-kbps digital channel (for out-of-band signaling).

  • H. 384-, 1536-, or 1920-kbps digital channel (which could be used for anything, except that it is not part of any standard combination of channels).

The major regional agreements support two combinations:

  • Basic rate interface. Includes two B channels and one D channel of 16 kbps. (This combination is usually expressed as 2B+D.)

  • Primary rate interface. Includes 23 B channels and 1 D channel of 64 kbps. (This combination is accordingly expressed as 23B+D, and it actually represents the primary rate in the United States and Japan. In Europe, it is 30B+D.)
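The channel arithmetic behind these combinations is straightforward; the short sketch below just spells it out.

```python
# ISDN access rates, computed from the standardized channel types
# listed above (all figures in kbps).
B, D16, D64 = 64, 16, 64

bri = 2 * B + D16       # basic rate interface: 2B+D
pri_us = 23 * B + D64   # primary rate, United States/Japan: 23B+D
pri_eu = 30 * B + D64   # primary rate, Europe: 30B+D

print(bri)      # 144  (the single 144-kbps pipe usable for data)
print(pri_us)   # 1536
print(pri_eu)   # 1984
```

The 1536-kbps and 1984-kbps payloads correspond to the capacities of the T1 and E1 digital carriers on which the two primary rates are transported.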

The ISDN has been deployed widely, but mostly for enterprise use. The residential market never really picked up, although there has been a turnaround because of the demand for fast Internet access (it is possible to use the 2B+D combination as a single 144-kbps digital pipe) and because ISDN connections are becoming less expensive.

Even before the ISDN standardization was finished, the ISDN was renamed narrowband ISDN (N-ISDN), and work began on broadband ISDN (B-ISDN). B-ISDN will offer an end-to-end data rate of 155 Mbps, and it is based on the asynchronous transfer mode (ATM) technology. B-ISDN is to support services like video on demand—predicted to be a killer application; however, full deployment of B-ISDN means complete rewiring of houses and considerable change in the PSTN infrastructure.

Although the ISDN has recently enjoyed considerable growth owing to Internet access demand, its introduction has been slow. Until recently, the United States trailed Europe and Japan in ISDN deployment, particularly for consumers. This lag can in part be explained by the complex system of telephone tariffs, which seemed to favor business use in the United States. Another explanation often brought up by industry analysts is leapfrogging: by the time Europe and Japan developed the infrastructure for total residential telephone service provision, the ISDN technology was available, whereas in the United States almost every household already had at least one telephone line long before the ISDN concept (not to mention ISDN equipment) existed.

Evolution of Signaling

Now that we know what the voice circuit between the switches is, we can talk about how it is established. In so-called plain old telephone service (POTS), establishing a call is the routing: once the call (in effect, an end-to-end circuit) is established, no routing decisions remain to be made by the switches. There are three aspects to call establishment: First, a switch must understand the telephone number it receives in order to terminate the call on a line or route the call to the next switch in the chain; second, a switch must choose the appropriate circuit and let the next switch in the chain know what it is; third, the switches must test the circuit, monitor it, and finally release it at the end of the call. We will address the (quite important) concept of understanding the telephone number later. The other two circuit-related steps require that the switches exchange information. In the PSTN, this exchange is called signaling.
Initially, the signaling procedure was much closer to the original meaning of the word—the pieces of electric machinery involved were exchanging electrical signals. The human end user was (and still is) signaled with audio tones of different frequencies and durations.
As far as the switches are concerned, in the past, signaling was not unlike what our telephones do when we push the buttons to dial: switches exchanged audio signals using the very circuit (that is, trunk) over which the parties to the call were to speak. This type of signaling is called in-band signaling, and quite appropriately so, because it uses the voice band. There are quite a few problems with in-band signaling. Not only is it slow and quite annoying to the people who have to listen to meaningless tones, but also telephone users can produce the same tones the switches use and thereby deceive the network provider or disrupt the network.
To prevent fraud and also to improve efficiency, another form of signaling that would not use the voice band was needed. This could be achieved by signaling at frequencies outside the voice band (thus called out-of-band frequencies). However, a channel in the telephone network is limited to the voice band, so there is no physical way to send frequencies beyond the voice band on such a channel. This limitation necessitated out-of-channel rather than out-of-band signaling. It was also obvious that much more information (concerning the characteristics of the circuits to be established, calling and called parties' numbers, billing information, and so on) was required, and that this information could be stored and passed in the same form that was used for data processing. Hence (1) the information had to be encoded into a set of data structures and (2) these data structures had to be transferred over a separate data communications network. Thus, the concept of common channel signaling was born. Common channel signaling is signaling that is common to all voice channels but carried over none of them. Although it is clearly a misnomer, this type of signaling is often still called out-of-band signaling.
Let’s get back to the question of the switch understanding the telephone number. First of all, there are two types of numbers: those that actually correspond to the telephones that can be called and those that must be translated to the numbers of the first type. An example of the first type is a U.S. number +1-732-555-0137, which translates to a particular line in a particular central office (in New Jersey). An example of the second type is any U.S. number that starts with 1-800. The 800 prefix signals to the switch that the number by itself does not identify a particular switch or line (there is no 800 area code in the United States). Such a number designates a service (called toll-free in the United States or freephone in Europe) that is free to the caller but paid by the organization or person who receives calls.
Handling numbers of the first type is relatively straightforward—they end up in a switch’s routing table, where they are associated with the trunks or lines to be used in the act of establishing a call. The other (toll-free) numbers need translation. Naturally, a switch could translate the toll-free number, too, but such a solution would require tens of thousands of switches to be loaded with this information. The only feasible solution is to let a central database do the translation. The switch then needs to communicate with the database. [Note: The solution was figured out as early as 1979—see Faynberg et al. (1997) for the history.]
Another example where a database lookup is needed is implementation of local number portability (LNP). In the United States, the Telecommunications Act passed by the U.S. Congress in 1996 mandates the right of telephony service subscribers to keep their telephone numbers even when they change service providers. With that, subscribers can keep not only the numbers but also the features (such as call waiting) originally associated with the numbers. In the United States, the solutions are based on switches’ capabilities to query databases so as to locate the terminating switch when they encounter numbers marked as ported. (To be precise, this process requires two database dips—one to determine whether a dialed number is portable and the other to find the terminating switch.)
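The two "database dips" described above can be sketched as follows. The function names, prefixes, and switch identifiers are invented for illustration; real LNP databases are keyed and partitioned quite differently.

```python
# Hypothetical sketch of LNP routing with two database dips:
# dip 1 asks whether the dialed number lies in a portability-enabled
# block; dip 2 finds the switch now serving a ported number.

def route_call(dialed, portable_prefixes, lnp_db, default_switch_of):
    """Return the terminating switch for a dialed number.
    default_switch_of: fallback translation for non-ported numbers."""
    # Dip 1: is this number in a portable block at all?
    if not any(dialed.startswith(p) for p in portable_prefixes):
        return default_switch_of(dialed)
    # Dip 2: look up the switch currently serving the ported number;
    # fall back to the donor switch if the number was never ported.
    return lnp_db.get(dialed, default_switch_of(dialed))

lnp = {"7325550137": "switch_newark_2"}
print(route_call("7325550137", ["732555"], lnp,
                 lambda n: "switch_newark_1"))   # switch_newark_2
print(route_call("2125550100", ["732555"], lnp,
                 lambda n: "switch_nyc_1"))      # switch_nyc_1
```

The point of the first dip is economy: most dialed numbers are not in portable blocks, so most calls avoid the second, more expensive lookup entirely.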
For both types of communications—out-of-band signaling among the switches and querying the database—the Bell Telephone System designed a special data network called a common channel interoffice signaling (CCIS) network. When this network was introduced—in 1976—it was used only for out-of-band signaling (hence interoffice). Thus the network served as a medium for communicating information about any trunk (channel) without being associated with that particular trunk. In other words, it was a medium common to all trunks, hence the term common channel. In the early 1980s, the network databases were connected to the network; thus signaling ceased to be strictly interoffice, and the I was dropped from CCIS. Both the network and the concept became known as common channel signaling (CCS).
The architecture of the CCS network is depicted in Figure 1. The endpoints of the system are switches and network databases. The CCS routers are called signaling transfer points (STPs). Since all signaling has been outsourced to it, the CCS network must be as fast and as reliable as the network of the telephone switches. The reliability has been achieved through high redundancy: All STPs within the network are fully interconnected. Furthermore, each STP has a mated STP, with which it is connected through a high-speed link (C-link). Interconnection with other STPs is achieved through a backbone link (B-link). Finally, switches and databases are connected to STPs by A-links.
Figure 1: The common channel signaling (CCS) architecture.
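The redundancy scheme described above can be illustrated with a toy routing model: each endpoint homes on a mated STP pair over A-links, and if its primary STP fails, signaling shifts to the mate (which remains reachable thanks to the C-link between the pair). The names and topology here are purely illustrative assumptions, not part of any standard.

```python
# Each endpoint homes on a mated STP pair via A-links (hypothetical topology).
A_LINKS = {"switch-A": ("STP-1a", "STP-1b")}

def signaling_path(endpoint: str, failed: set) -> list:
    """Pick the first hop for signaling traffic, with mated-pair failover."""
    primary, mate = A_LINKS[endpoint]
    if primary not in failed:
        return [endpoint, primary]     # normal case: A-link to the primary STP
    if mate not in failed:
        return [endpoint, mate]        # failover: A-link to the mated STP
    raise RuntimeError("both STPs of the mated pair are unreachable")

print(signaling_path("switch-A", failed=set()))       # ['switch-A', 'STP-1a']
print(signaling_path("switch-A", failed={"STP-1a"}))  # ['switch-A', 'STP-1b']
```

The design choice worth noting is that reliability comes from duplication at every level—duplicate STPs, duplicate links—rather than from making any single element failure-proof.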
Historically, there are two distinct types of protocols within common channel signaling: (1) protocols for interactions between the switches and databases, which began as simple query/response messages for number translation and have evolved into service-independent protocols that support multiple services for IN technology; and (2) protocols by means of which the switches exchange the information necessary to establish, maintain, and tear down calls.
The CCS network has evolved through several releases and enhancements, first in the Bell System and subsequently in other telephone companies, which eventually resulted in multiple CCS networks. To ensure the interoperability of these networks, as well as multivendor equipment interoperability within each of them, the ITU-T has developed an international standard for common channel signaling. The latest release of this standard is called Signalling System No. 7 (SS No. 7).
Note that the official ITU-T abbreviation of this term is SS No. 7; however, the unofficial (but much easier to write and pronounce) term SS7 is used throughout the industry. We use the official term whenever we refer to the standard or its implementation in the network; we use SS7 when we refer to new classes of products (such as the SS7 gateway).

Evolution of Switching

As noted, the first switch was a switching matrix (board) operated by a human. The 1890s saw the introduction of the first automatic step-by-step systems, which responded to rotary dial pulses (one to ten pulses, encoding the digits 1 through 9 and 0). Cross-bar switches, which could set up a connection within a second, appeared in the late 1930s. Step-by-step and cross-bar switches are examples of space-division switches; later, this technology evolved into time-division switching. A large step in switching development was made in the late 1960s as a consequence of the computer revolution, when computers came to be used for address translation and line selection. By 1980, stored program control (a real-time application running on a general-purpose computer coupled with a switch) had become the norm.

At about the same time, a revolution in switching took place. Owing to the availability of digital transmission, it became possible to transmit voice in digital format, and as a consequence the switches went digital. For a detailed treatment of the subject we recommend Bellamy (2000), but we discuss it here as well because it is at the heart of the matter as far as IP telephony is concerned. In a nutshell, a digital switch processes end-to-end voice in these four steps:

  1. A device scans all active incoming trunks in round-robin fashion and samples the analog signal on each 8000 times per second. Each sample is passed to the coder part of a coder/decoder device called a pulse-code modulation (PCM) codec, which outputs an 8-bit string encoding the value of the electrical amplitude at the moment of the sample.

  2. The output strings are fed into a frame whose length equals 8 times the number of active input lines. This frame is then passed to the time slot interchanger, which builds the output frame by reordering the original frame according to the connection table. For example, if input trunk number 3 is connected to output trunk number 5, then the contents of the 3rd byte of the input frame are inserted into the 5th byte of the output frame. (The number of lines a time slot interchanger can support is limited solely by the speed at which it can operate, so the state of the art in computer architecture and microelectronics is constantly applied to building time slot interchangers. The line limitation is otherwise dealt with by cascading the devices into multistage units.)

  3. On outgoing digital trunk groups, the 8-bit slots are multiplexed into a transmission carrier according to its respective standard. (We will address transmission carriers in a moment.) Conversely, a digital switch accepts the incoming transmission frames from a transmission carrier and switches them as described in the previous step.

  4. At the destination switch, the decoder part of the codec translates the 8-bit strings coming on the input trunk back into electrical signals.
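The four steps above can be sketched in a short simulation. One simplifying assumption is loudly flagged: the codec here uses plain linear 8-bit quantization, whereas real PCM codecs use µ-law or A-law companding; the function names are ours, not a standard API.

```python
def encode(sample: float) -> int:
    """Step 1: map an amplitude in [-1.0, 1.0] to an 8-bit code.
    Linear quantization for simplicity; real codecs use mu-law/A-law."""
    return max(0, min(255, int((sample + 1.0) * 127.5)))

def decode(code: int) -> float:
    """Step 4: map an 8-bit code back to an amplitude."""
    return code / 127.5 - 1.0

def interchange(frame: list, table: dict) -> list:
    """Step 2: build the output frame from the connection table,
    where table[i] = j means input trunk i feeds output trunk j."""
    out = [0] * len(frame)
    for i, j in table.items():
        out[j] = frame[i]
    return out

# One 125-microsecond scan of four active trunks (step 1).
frame = [encode(s) for s in [0.5, -0.25, 0.9, 0.0]]

# Input trunk 0 -> output 2, 1 -> 3, 2 -> 0, 3 -> 1 (step 2).
out_frame = interchange(frame, {0: 2, 1: 3, 2: 0, 3: 1})

# The sample from input trunk 2 now occupies output slot 0.
assert out_frame[0] == frame[2]
```

Steps 3 and 4 (multiplexing onto a carrier and decoding at the far end) operate on exactly these 8-bit slots; the interchanger itself never interprets the voice samples it moves.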

Note that we assumed that digital switches were toll offices (we called both incoming and outgoing circuits trunks). Indeed, initially only the toll switches on the top of the hierarchy went digital, but then digital telephony moved quickly down the hierarchy, and in the 1980s it migrated to the central offices and even PBXs. Furthermore, it has been moving to the local loop by means of the ISDN and digital subscriber line (DSL) technologies addressed further in this part.

The availability of digital transmission and switching immediately resulted in much higher voice quality, especially where the parties to a call are separated by a long distance. (Carrying an analog signal over a long distance requires multiple amplifiers, whose cumulative effect is significant distortion of the signal; digital signals, by contrast, are fairly easy to restore—each 0 or 1 is represented by a whole range of analog values, so a relatively small change has no immediate, and therefore no cumulative, effect.)
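This point can be made concrete with a small numerical experiment (an illustration of the principle, with an arbitrary noise level and threshold, not a model of real line equipment): analog amplification passes accumulated noise along, while a digital regenerator makes a fresh 0/1 decision at every hop, so small perturbations never accumulate.

```python
import random

random.seed(7)                            # deterministic noise for the demo
bits = [0, 1, 1, 0, 1]
analog = [float(b) for b in bits]
digital = [float(b) for b in bits]

for _ in range(20):                       # twenty noisy hops
    noise = [random.gauss(0, 0.05) for _ in bits]
    # Analog path: each hop's noise simply adds to the signal.
    analog = [s + n for s, n in zip(analog, noise)]
    # Digital path: regenerate by thresholding at each hop.
    digital = [1.0 if s + n > 0.5 else 0.0 for s, n in zip(digital, noise)]

print([round(s, 2) for s in analog])      # drifted away from clean 0s and 1s
print([int(s) for s in digital])          # still exactly [0, 1, 1, 0, 1]
```

With this noise level a bit flip would require a perturbation ten standard deviations out, so the digital signal survives any number of hops unchanged, while the analog signal degrades a little at every one.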

We conclude this section by listing the transmission carriers and formats. The T1 carrier multiplexes 24 voice channels, represented by 8-bit samples, into a 193-bit frame. (The extra bit is used as a framing code by alternating between 0 and 1.) Because each channel is sampled 8000 times per second, T1 frames are issued every 125 µs; the T1 data rate in the United States is thus 193 × 8000 = 1.544 Mbps. (Incidentally, another carrier, called E1, which is used predominantly outside of the United States, carries thirty-two 8-bit samples in its frame.)

T1 carriers can be further multiplexed bit by bit into higher-order carriers, with extra bits added each time for synchronization:

  • Four T1 frames are multiplexed into a T2 frame (rate: 6.312 Mbps)

  • Seven T2 frames are multiplexed into a T3 frame (rate: 44.736 Mbps)

  • Six T3 frames are multiplexed into a T4 frame (rate: 274.176 Mbps)
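The rates above follow directly from the frame arithmetic: each level carries its tributaries' bits plus synchronization overhead. The overhead figures below are the amounts needed to make the sums come out to the published rates.

```python
FRAME_RATE = 8000                  # frames (voice samples) per second

t1 = 193 * FRAME_RATE              # 24 channels x 8 bits + 1 framing bit
t2 = 4 * t1 + 136_000              # four T1s plus synchronization bits
t3 = 7 * t2 + 552_000              # seven T2s plus synchronization bits
t4 = 6 * t3 + 5_760_000            # six T3s plus synchronization bits

for name, rate in [("T1", t1), ("T2", t2), ("T3", t3), ("T4", t4)]:
    print(f"{name}: {rate / 1e6:.3f} Mbps")
# T1: 1.544 Mbps, T2: 6.312 Mbps, T3: 44.736 Mbps, T4: 274.176 Mbps
```

Note that the overhead is not constant per level; it grows with the rate, since higher-speed links need proportionally more bits to keep the multiplexed tributaries aligned.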

The ever-increasing power of the resulting pipes is depicted in Figure 1.

Figure 1: The T-carrier multiplexing nomenclature.

Structure of the PSTN


At the very beginning of the telephony age, telephones were sold in pairs; for a call to be made, the two telephones involved had to be connected directly. So, in addition to the grounding wire, if there were 20 telephones you wanted to call (or that might call you), each would be connected to your telephone by a separate wire. At a certain point, it was clear that a better long-term solution was needed, and such a solution came in the form of the first Bell Company switching office in New Haven, Connecticut. The office had a switching board, operated by human operators, to which the telephones were connected. An operator’s job was to answer the call of a calling party, inquire as to the name of the called party, and then establish the call by using a wire to connect the two sockets that belonged to the calling and called parties, respectively. After the call was completed, the operator would disconnect the circuit by pulling the wire from the sockets. Note that no telephone numbers were involved (or needed). Telephone numbers became a necessity later, when the first automatic switch was built. The automaton was purely mechanical—it could find the necessary sockets only by counting; thus, the telephones (and their respective sockets in the switch) were identified by these telephone numbers. Later, the switches had to be interconnected with other switches, and the first telephone network—the Bell System in the United States—came to life. Other telephone networks were built in pretty much the same way.

Many things have happened since the first network appeared—and we are going to address these things—but the structure of the PSTN in terms of its main components remained unchanged as far as the establishment of the end-to-end voice path is concerned. The components are:

  • Station equipment [or customer premises equipment (CPE)]. Located on the customer’s premises, its primary functions are to transmit and receive signals between the customer and the network. These types of equipment range from residential telephones to sophisticated enterprise private branch exchange systems (PBXs).

  • Transmission facilities. These provide the communications paths, which consist of transmission media (atmosphere, paired cable, coaxial cable, light guide cable, and so on) and various types of equipment to amplify or regenerate signals.

  • Switching systems. These interconnect the transmission facilities at various key locations and route traffic through the network. (They have been called switching offices since the times of the first Connecticut office.)

  • Operations, administration, and management (OA&M) systems. These provide administration, maintenance, and network management functions to the network.

Until relatively recently, switching boards remained in use in small organizations (such as hotels, hospitals, or companies with several dozen employees), but they were finally replaced by customer premises switches called private branch exchanges (PBXs). The PBX, then, is the most sophisticated example of station equipment; at the other end of the spectrum is the ordinary single-line telephone set. In addition to transmitting and receiving the user information (such as conversation), the station equipment is responsible for addressing (that is, the task of specifying to the network the destination of the call) as well as for other forms of signaling [indicating idle or busy status, alerting (that is, ringing), and so on].

As Figure 1 demonstrates, the station equipment is connected to switches. The telephones are connected to local switches (also interchangeably called local offices, central offices, end offices, or Class 5 switches) by means of local loop circuits or channels carried over local loop transmission facilities. The circuits that interconnect switches are called trunks. Trunks are carried over interoffice transmission facilities. The local offices are, in turn, interconnected to toll offices (called tandem offices in this case). Finally, we should note that in all this terminology the word office is interchangeable with exchange, and, of course, switch. It is very difficult to say which word is more widely used.

Figure 1: Local and tandem offices.

The trunks are grouped; it is often convenient to refer to trunk groups, which are assigned specific identifiers, rather than individual trunks. Grouping is especially convenient for the purposes of network management or assignment to transmission facilities. (A trunk is a logical abstraction rather than a physical medium; a trunk leaving a switch can be mapped to a fiber-optic cable on the first part of its way to the next switch, microwave for the second part, and four copper wires for the third part.)

In the original Bell System, there were five levels in the switching hierarchy; this number has dropped to three owing to the development of nonhierarchical routing (NHR) in the long-distance network. NHR was not adopted by the local carriers, however, so they retained the two levels—local and tandem—of switching hierarchy.

Local switches in the United States are grouped into local access and transport areas (LATAs). You can find a current map of LATAs at www.611.net/NETWORKTELECOM/lata_map/index.htm. A LATA may have many offices (on the order of 100), including tandem offices. Service within LATAs is typically provided by local exchange carriers (LECs). Some LECs have existed for a considerable time (such as the original Bell Operating Companies, created in 1984 as a result of the breakup of the Bell System) and so are called incumbent LECs (ILECs); others appeared fairly recently and are called competitive LECs (CLECs). Inter-LATA traffic is carried by inter-exchange carriers (IXCs). The IXCs are connected to central or tandem offices at points of presence (POPs).

Figure 2 depicts the interconnection of an IXC with one particular LATA. The IXC switches form the IXC network, in which routing is typically nonhierarchical. Presently, IXCs provide local service, too; however, since their early days IXCs have had direct trunks to the PBXs of large companies to which they provided services like virtual private networks (VPNs).[3] We should mention that IXCs in the United States can (and do) interconnect with overseas long-distance service providers by means of complex gateways that perform call signaling translations, but the IXCs in the United States are typically not directly interconnected with each other.

Figure 2: The interconnection of LATAs and IXCs.

Telecom Made Simple
