Cell switching in communications networks can refer to one of two entirely different things. One use of the term applies to mobile communications backbones, where switching takes place as a mobile telephone passes from one coverage area to an adjacent area, or cell. The other use of the term is more commonly known as ATM. One of the more practical observations that can be made about ATM is that it is easy to be confused upfront by the name. The asynchronous part refers to the interface between the network, which is truly synchronous, and the nature of the disparate traffic, which is asynchronous, even though it may be derived from a single, highly accurate digital clock source. Asynchronous also means the traffic, or data, is handled in a start-stop mode, similar to an asynchronous serial interface with its start and stop bits.
ATM functional elements include a fixed-length 53-byte cell (a 5-byte header and a 48-byte payload), transmission links, and a switching machine. Unlike circuit switching, where the intelligence is resident in an underlying common channel signaling system, ATM intelligence is embedded in each cell and distributed throughout the network in edge and core switches.
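To make the cell format concrete, here is a minimal Python sketch of the standard 53-byte cell as it appears at the user-network interface (UNI): a 5-byte header carrying the GFC, VPI, VCI, payload type, CLP, and HEC fields, followed by a 48-byte payload. The HEC (a CRC-8 over the first four header bytes) is left as zero here for brevity, so treat this as an illustration of the layout rather than a wire-ready implementation.

```python
def build_uni_cell(vpi: int, vci: int, payload: bytes,
                   gfc: int = 0, pti: int = 0, clp: int = 0) -> bytes:
    """Assemble a 53-byte ATM cell in UNI format (HEC byte left as zero)."""
    if len(payload) != 48:
        raise ValueError("ATM payload is always exactly 48 bytes")
    # First four header bytes: GFC(4) | VPI(8) | VCI(16) | PTI(3) | CLP(1)
    word = (gfc << 28) | (vpi << 20) | (vci << 4) | (pti << 1) | clp
    header = word.to_bytes(4, "big") + b"\x00"  # fifth byte: HEC (CRC-8 omitted)
    return header + payload

cell = build_uni_cell(vpi=1, vci=100, payload=bytes(48))
assert len(cell) == 53
```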
Like many other technologies, ATM evolved after time division multiplexing (TDM), the plesiochronous digital hierarchy (PDH), and X.25 packet networks, roughly in the same timeframe as SONET/SDH, but before the wide acceptance and initial growth of Ethernet and IP networking. Early on, ATM was supposed to be the Holy Grail of all-purpose communications networks. That was not to be, as IEEE 802 Ethernet now testifies, to say nothing of the past few years' rush to put IP directly onto SONET/SDH transport, making IP over ATM obsolete.
ATM was crafted out of a desire to accommodate as much offered traffic as possible from the maximum possible number of users, while at the same time ensuring safe, effective (profitable) traffic movement. A casual look at the network during the time ATM was created would show a pattern much like hotels and airplanes. Equating time slots in digital transmission facilities with rooms in hotels and seats in airplanes, it was easy to see there was significant space available except during peak demand times. If a way could be found to use the idle capacity, it meant incremental revenue; if not, the number of facilities in use could be reduced, cutting overall operating cost. Either outcome favorably impacts the financial bottom line. So it is with Internet and telecom facilities, not just airplanes and hotels.
From the beginning, computer communications were carried over either dial-up modems using PSTN voice-grade services and facilities, or private leased lines. The general rule of thumb was to use dial-up if it was a local call. If the connection was long distance, the toll charges were tolerated up to the point where they justified a full-time private or leased line at a fixed cost. Even though the private line was a fixed cost and available 24/7, actual traffic passing over the facility was usually far below full-time use at whatever data rate the line could support.
In situations where multiple terminals connected to the same central computer, or an enterprise operated several computers in separate locations, data communications networks evolved through various forms of simple aggregation and, later, statistical multiplexing. Even when a terminal is connected to a computer or system hosting an application, traffic between the terminal and host is highly asymmetric: keystroke generation rarely exceeds 75 bps, while the information and screens the host application sends back to the terminal can require hundreds of kilobits per second, or even megabits per second. Statistical multiplexing techniques found their way into data communications equipment and networks, and as more and more traffic was aggregated and transported by data networks, the same techniques were used to improve efficiency and utilization within those networks.
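The efficiency argument behind statistical multiplexing can be illustrated with a rough back-of-the-envelope calculation. The terminal counts and rates below are illustrative assumptions, not measurements; the point is only that bursty sources allow an aggregate link to be sized well below the sum of the individual peak rates.

```python
terminals = 100
peak_bps = 9600   # peak rate each terminal can burst to (assumed)
avg_bps = 300     # long-term average rate each terminal offers (assumed)

# Sizing for fixed time slots (plain TDM) must cover every terminal's peak;
# statistical multiplexing sizes nearer the aggregate average plus headroom.
tdm_capacity = terminals * peak_bps
stat_capacity = terminals * avg_bps * 2   # 2x headroom chosen arbitrarily

print(f"TDM sizing:         {tdm_capacity / 1000:.0f} kbps")
print(f"Statistical sizing: {stat_capacity / 1000:.0f} kbps")
print(f"Provisioning gain:  {tdm_capacity / stat_capacity:.1f}x")
```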
Essentially, ATM was conceived and designed to cope with the bandwidth limitations of POTS and ISDN for the data communications user, while at the same time improving the use of lower-layer transmission facilities providing leased or private line services. The basic characteristics of ATM are built around the virtual circuit concept, including virtual paths and virtual circuits. Because routing information (the virtual path and virtual circuit identifiers) is carried in the header of each cell, switching and routing of the traffic can be accomplished by network switching equipment examining each cell as it arrives and determining where to route it on the outbound side. This basic capability allows a virtual path to be configured through the network between any two or more points connected to the network, and connections to be set up between any two or more of those locations. Thus, a virtual circuit exists inside a virtual path.
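A small sketch can illustrate the per-cell forwarding just described: each switch holds a table keyed on the arriving cell's input port and VPI/VCI values, and the lookup yields the outbound port along with the new VPI/VCI to write into the header. The table entries and port numbers below are hypothetical.

```python
# (in_port, in_vpi, in_vci) -> (out_port, out_vpi, out_vci); entries are hypothetical
switching_table = {
    (1, 0, 100): (3, 5, 42),
    (2, 7, 33):  (3, 5, 43),
}

def switch_cell(in_port: int, vpi: int, vci: int):
    """Return (out_port, new_vpi, new_vci) for an arriving cell, or None."""
    entry = switching_table.get((in_port, vpi, vci))
    if entry is None:
        return None   # no virtual circuit configured on this port/VPI/VCI: drop the cell
    return entry

print(switch_cell(1, 0, 100))   # -> (3, 5, 42)
print(switch_cell(1, 0, 999))   # -> None (unknown circuit)
```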
ATM also provides a capability for customer or user control of the network, enabling the user to configure switched virtual circuits within a path, and paths accommodating two or more circuits or connections. A switched virtual circuit (SVC) means the user pays a fee to gain access to the network, usually a fixed monthly charge based on port capacity plus any local loop or access line cost. The user also pays an additional fee each time the network is used, similar to the long distance phone call model. Sometimes called bandwidth on demand, the service is billed according to the amount of bandwidth, the class of service, and the time used. It can be very cost effective within a range of practical, day-to-day content transport needs.
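As a hypothetical illustration of the bandwidth-on-demand billing model, the sketch below computes the usage portion of an SVC bill from bandwidth, class of service, and time used. The rate table is an assumption for illustration only, not any carrier's actual tariff.

```python
# Hypothetical dollars per Mbps per hour, by class of service
RATE_PER_MBPS_HOUR = {
    "CBR": 0.40,
    "VBR-rt": 0.30,
    "VBR-nrt": 0.20,
    "UBR": 0.05,
}

def svc_usage_charge(service_class: str, bandwidth_mbps: float, hours: float) -> float:
    """Usage portion of an SVC bill; the fixed monthly port charge is separate."""
    return RATE_PER_MBPS_HOUR[service_class] * bandwidth_mbps * hours

# Example: a 2 Mbps VBR-nrt connection used for 8 hours
print(f"${svc_usage_charge('VBR-nrt', 2.0, 8.0):.2f}")
```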
A permanent virtual circuit (PVC) means that the carrier or service provider configures the network to provide service between two or more locations on a permanent basis. Depending on the way the service is ordered and configured, it can be flexible and complex to use, or rigid and easy to use. If the service is configured as a permanent virtual path (PVP), the user can configure multiple circuits aggregating up to the maximum amount of bandwidth available on the path. If that is one circuit on one path, so be it. If that is some number of circuits with equal or unequal amounts of bandwidth, that works too.
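The path/circuit relationship can be expressed as a simple constraint: circuits may be carved out of a PVP in any combination, as long as their combined bandwidth does not exceed the path's capacity. The figures in the sketch below are hypothetical.

```python
PATH_CAPACITY_KBPS = 6000   # e.g. a hypothetical 6 Mbps PVP

def can_add_circuit(existing_kbps: list, new_kbps: int) -> bool:
    """True if one more circuit still fits inside the path's capacity."""
    return sum(existing_kbps) + new_kbps <= PATH_CAPACITY_KBPS

circuits = [1500, 1500, 2000]           # three circuits already configured
print(can_add_circuit(circuits, 1000))  # True:  5000 + 1000 fits exactly
print(can_add_circuit(circuits, 1500))  # False: would exceed the path
```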
On the other hand, if the carrier configures PVC service, it cannot be changed by the user, but only by the carrier after an order for changed or new service is issued. This has economic and operational implications, which may be significant or insignificant. For example, if the circuit is in use 24/7, as might be the case with a studio-to-transmitter link (STL), it may be necessary to go off air to make changes. If that is not acceptable, the alternative is to establish a new set of access and transport facilities, move the traffic to the new facility, and then decommission the facility previously in use.
Both PVC and SVC have configuration parameters that should be considered carefully when specifying and commencing use of ATM transport. The classes of service include constant bit rate (CBR), real-time variable bit rate (VBR-rt), non-real-time variable bit rate (VBR-nrt), available bit rate (ABR), and unspecified bit rate (UBR). Each class of service has different performance and cost characteristics.
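As a summary of the classes just listed, the sketch below maps each one to the traffic descriptors typically negotiated for it (peak cell rate, sustainable cell rate, maximum burst size, minimum cell rate). A real traffic contract carries additional parameters, so treat this as an orientation aid rather than a complete specification.

```python
from enum import Enum

class ServiceClass(Enum):
    CBR = "constant bit rate"
    VBR_RT = "variable bit rate, real time"
    VBR_NRT = "variable bit rate, non-real time"
    ABR = "available bit rate"
    UBR = "unspecified bit rate"

# Rate parameters typically specified per class:
#   PCR = peak cell rate, SCR = sustainable cell rate,
#   MBS = maximum burst size, MCR = minimum cell rate
TRAFFIC_DESCRIPTORS = {
    ServiceClass.CBR: ("PCR",),
    ServiceClass.VBR_RT: ("PCR", "SCR", "MBS"),
    ServiceClass.VBR_NRT: ("PCR", "SCR", "MBS"),
    ServiceClass.ABR: ("PCR", "MCR"),
    ServiceClass.UBR: ("PCR",),
}

for cls, params in TRAFFIC_DESCRIPTORS.items():
    print(f"{cls.name:8s} -> {', '.join(params)}")
```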