Fair Queuing and Weighted Fair Queuing | QoS

Using the new queuing schemes, each flow now has its own queue. With the fair queuing policy, the packets are transmitted round-robin in order to guarantee each flow an equal share of the capacity (possibly penalizing flows that have large packets at times of network congestion). Weighted fair queuing—an algorithm that is widely used in today’s advanced QoS-capable routers—assigns each different type of flow its own (by no means necessarily identical) share of bandwidth. Figure 1 illustrates the concept: In Figure 1a, with the first-come, first-served queue, airplanes, cars, and elephants move in the same order in which they have arrived (a scheme that would cause plane crashes and annoy the drivers of the cars following elephants!). In Figure 1b, with fair queuing, the queues are formed for each flow (defined here as a formation of planes or cars or a caravan of elephants), but they are interleaved so that bigger things have to wait until an equivalent number of smaller things passes (still, a maddening experience for elephants!). In Figure 1c, with weighted fair queuing, the planes are given the right of way, so they move through the queue almost without slowing down and always keeping formation; the planes are followed by cars, and the cars by the caravan of elephants. This property of keeping the packet “formation” greatly reduces delay variance (called jitter).

Figure 1: Queuing and scheduling in routers. (a) First-come, first-served queuing. (b) Fair queuing. (c) Weighted fair queuing.
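The scheduling idea behind weighted fair queuing can be sketched in a few lines. The standard trick is to assign each packet a virtual finish time proportional to its length divided by its flow's weight, and to transmit packets in order of finish time. The flow names, weights, and packet sizes below are illustrative assumptions echoing the figure, and this toy assumes all packets are already queued:

```python
import heapq

def wfq_order(packets, weights):
    """packets: list of (flow, size) tuples, all backlogged at time 0.
    Returns the packets in weighted-fair-queuing transmission order."""
    last_finish = {}   # per-flow virtual finish time of the previous packet
    heap = []
    for seq, (flow, size) in enumerate(packets):
        # Each packet finishes size/weight virtual time units after
        # the previous packet of the same flow.
        finish = last_finish.get(flow, 0.0) + size / weights[flow]
        last_finish[flow] = finish
        heapq.heappush(heap, (finish, seq, flow, size))
    order = []
    while heap:
        _, _, flow, size = heapq.heappop(heap)
        order.append((flow, size))
    return order

# Planes get 10x the weight of elephants, so they "keep formation"
# and go through first despite arriving interleaved.
pkts = [("elephant", 1000), ("plane", 100), ("plane", 100),
        ("elephant", 1000), ("plane", 100)]
print(wfq_order(pkts, {"plane": 10, "elephant": 1}))
```

With these weights the three small plane packets (virtual finish times 10, 20, 30) all precede the elephant packets (1000, 2000), which is exactly the right-of-way behavior of Figure 1c.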

In 1992, A. Parekh and R. Gallager of MIT demonstrated that a flow that experiences a service rate slightly higher than the flow’s data rate has a bounded delay. In other words, by requesting that a flow not exceed a certain rate, the network can guarantee that the delay experienced by the flow does not exceed a certain value. (A good example of a similar result is green streets in cities, where stoplights are adjusted so that a car traveling at a certain speed—for example, 25 mph—is guaranteed a green light at about 9 out of 10 intersections.)
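The Parekh–Gallager result makes the delay bound concrete. In the simplest fluid form, a flow shaped by a token bucket with burst size sigma and served at a guaranteed rate g at least equal to its token rate experiences a queuing delay of at most sigma/g at a single node. The numbers below are illustrative assumptions, not figures from the text:

```python
# Back-of-the-envelope delay bound (fluid model, single WFQ node):
# worst-case delay <= burst size / guaranteed service rate.
sigma = 16_000          # token-bucket burst size: 16 kbit (assumed)
g = 1_000_000           # guaranteed service rate: 1 Mbit/s (assumed)

max_delay = sigma / g   # seconds
print(f"worst-case queuing delay <= {max_delay * 1000:.1f} ms")
```

Here the bound comes out to 16 ms: tightening the burst the flow is allowed to send, or raising its guaranteed rate, directly tightens the delay guarantee, which is precisely the bandwidth-for-delay trade the integrated services model exploits.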

The scientists then augmented the weighted fair queuing with the specification of guaranteed delay for each flow. This work resulted in a new architecture for what its creators called integrated services packet networks [compare with the expansion of the integrated services digital network (ISDN)] in Clark et al. (1992). Two types of services—guaranteed (which supports real-time traffic with determined latency and jitter bounds) and controlled-load (which approximates the service of a lightly loaded best-effort network)—were defined. At that point, the groundwork was laid for the standardization work in the Internet Engineering Task Force (IETF). The signaling protocol defined for integrated services is the Resource Reservation Setup Protocol (RSVP); note that RSVP is not a routing protocol. In a nutshell, RSVP, which was designed with multicasting (that is, sending a message to multiple receivers) in mind, makes bandwidth reservations—from destination to source—in the routers along the spanning tree covering multicast group members. The routers store the necessary state information, which is then maintained by sending specific RSVP messages in both directions.
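The reservation state that routers store is "soft": it persists only as long as refresh messages keep arriving, and silently times out otherwise. The toy sketch below illustrates that idea; the class names, the 30-second refresh period, and the three-missed-refreshes lifetime are illustrative assumptions, not the exact rules of the RSVP specification:

```python
REFRESH_PERIOD = 30.0           # seconds between refresh messages (assumed)
LIFETIME = 3 * REFRESH_PERIOD   # state dropped after ~3 missed refreshes (assumed)

class Router:
    def __init__(self):
        self.reservations = {}  # flow_id -> (bandwidth, last_refresh_time)

    def resv(self, flow_id, bandwidth, now):
        """Install or refresh a reservation (sent destination -> source)."""
        self.reservations[flow_id] = (bandwidth, now)

    def expire(self, now):
        """Drop soft state for flows that have stopped refreshing."""
        self.reservations = {
            f: (bw, t) for f, (bw, t) in self.reservations.items()
            if now - t < LIFETIME
        }

r = Router()
r.resv("flow-1", 2_000_000, now=0.0)   # reserve 2 Mbit/s for flow-1
r.expire(now=100.0)                    # > 90 s with no refresh: state is gone
print(r.reservations)                  # -> {}
```

The soft-state design is what lets reservations adapt to route changes and receiver departures without explicit teardown, but it is also the source of the per-flow refresh overhead criticized in the next paragraph.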

The integrated services approach has been comprehensive, but apparently far too ambitious to implement widely. One recurring sentiment is that the overhead associated with reservations is far too large; another is that it is overkill as far as short-lived flows (of which most of the present Internet traffic consists) are concerned. (The counterargument to the latter is, of course, that the model was not created with short-lived flows in mind; but then, something needs to be done about the short flows, too.) A third concern (Weiss, 1998) regarding the integrated services approach is that it would make charging those who request a higher QoS difficult. In any event, while the applicability of RSVP to wide area networks and the Internet is questioned, it is being implemented for smaller enterprise networks. In essence, the integrated services approach has been a top-down one—guaranteeing absolute QoS in the network on a per-flow basis.

A bottom-up alternative technology, where QoS building blocks (which routers can recognize and act on) are defined, is called differentiated services (Kumar et al., 1998; Weiss, 1998). This technology has been actively addressed by the IETF and has resulted in a standard. The concept behind the technology is the definition of various classes of service. The service provider establishes with each customer a service level agreement (SLA). Among other things, an SLA specifies how much traffic a user may send within any given class of service. The class of service of a packet is encoded in its IP header. The traffic is then policed at the border of the service provider’s network. Once the traffic enters the network, specialized routers provide it with differentiated treatment, but—unlike the case with the integrated services approach—the treatment is based not on a per-flow basis, but solely on the indicated class of service. The overall network is set up so as to meet all SLAs.
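Policing at the border is commonly done with a token bucket: tokens accumulate at the SLA's contracted rate up to a burst limit, and a packet conforms only if enough tokens are available. The sketch below shows the mechanism; the rate and burst numbers are assumptions, and the drop-on-violation policy is one choice among several (a real provider might instead remark out-of-profile packets to a lower class):

```python
class TokenBucket:
    """Edge policer for one DiffServ class (illustrative sketch)."""
    def __init__(self, rate, burst):
        self.rate = rate      # contracted rate, bytes per second (assumed)
        self.burst = burst    # maximum burst, bytes (assumed)
        self.tokens = burst   # start with a full bucket
        self.last = 0.0       # time of the previous packet

    def conforms(self, size, now):
        # Refill tokens for the elapsed time, capped at the burst size.
        self.tokens = min(self.burst,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if size <= self.tokens:
            self.tokens -= size
            return True       # in profile: forward with the SLA's class
        return False          # out of profile: drop (or remark)

tb = TokenBucket(rate=1000, burst=1500)
# Three 1000-byte packets: the second arrives too soon after the first
# and exceeds the profile; by t=1.5 s the bucket has refilled.
print([tb.conforms(1000, t) for t in (0.0, 0.1, 1.5)])
```

Because the policing happens once at the edge, the interior routers never need per-flow state—they only look at the class marking, which is what makes the differentiated services approach scale.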
