Voice traffic and data traffic have different characteristics. Unlike data traffic, voice traffic occurs in real time and is delay-sensitive. Voice packets tend to be smaller than data packets. When voice and data networks are merged, it is important to deliver an acceptable QoS for the voice traffic.
Voice traffic must be prioritized to minimize delay and jitter. Delay is the amount of time between the original transmission of the voice information and the final processing by the receiving station. Jitter is the variation in the delay between successive voice packets. Packet loss due to network errors or congestion also degrades voice quality. QoS depends on the ability to control the factors that affect voice quality: delay, jitter, and packet loss.
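To make the jitter definition concrete, the short Python sketch below estimates jitter from per-packet send and receive timestamps, smoothing the transit-time differences in the spirit of the RFC 3550 interarrival-jitter formula. The timestamp values are illustrative only.

```python
# Minimal sketch: estimating interarrival jitter from packet timestamps.
# send_times/recv_times are illustrative values in milliseconds.
send_times = [0, 20, 40, 60, 80]      # voice packets sent every 20 ms
recv_times = [5, 27, 44, 71, 86]      # arrival times after crossing the network

jitter = 0.0
for i in range(1, len(send_times)):
    # Transit-time difference between consecutive packets
    d = (recv_times[i] - send_times[i]) - (recv_times[i - 1] - send_times[i - 1])
    # Smoothed estimate, similar in spirit to the RFC 3550 formula
    jitter += (abs(d) - jitter) / 16.0

print(f"estimated jitter: {jitter:.2f} ms")
```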
QoS tools can be divided into three categories:
- Classification: Voice packets can be classified or marked with a specific priority to enhance QoS (a marking sketch follows this list).
- Queuing: Separate queues for voice and data ensure consistent QoS for voice.
- Provisioning: Circuits carrying voice traffic should be provisioned with enough bandwidth or capacity to minimize delay and jitter.
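As a concrete example of classification, the following minimal Python sketch marks outgoing voice datagrams with DSCP EF (46) by setting the IP TOS byte on a UDP socket (a Linux-specific socket option); downstream devices can then match that marking and queue the traffic accordingly. The destination address and port are placeholders.

```python
import socket

# Minimal sketch: marking outgoing voice packets with DSCP EF (46) so that
# network devices can classify and prioritize them. Linux-only; the address
# and port below are placeholders.
DSCP_EF = 46
TOS_VALUE = DSCP_EF << 2          # DSCP occupies the upper 6 bits of the TOS byte

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS_VALUE)

# Each datagram sent from this socket now carries the EF marking.
sock.sendto(b"voice payload", ("192.0.2.10", 16384))
```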
The increasing deployment of VoIP can be attributed to the improvements made in QoS. QoS is a set of concepts, procedures, practices, and protocols that provide for the reliable and efficient transport of traffic across data networks.
What Is Quality of Service?
QoS is simply a set of tools to ensure that a minimum level of service will be provided to certain traffic. Many protocols and applications are not critically sensitive to network congestion. File Transfer Protocol (FTP), for example, has a rather large tolerance for network delay or bandwidth limitation.
Applications such as voice and video are particularly sensitive to network delay. If voice packets take too long to reach their destination, the resulting speech sounds choppy or distorted. QoS can be used to assure services to these applications. Critical business applications can also use QoS.
Applications for Quality of Service
When would a network engineer consider designing QoS into a network? Here are a few reasons to deploy QoS in a network topology:
- To prioritize certain mission-critical applications in the network.
- To maximize the use of the current network investment in infrastructure.
- To provide better performance for delay-sensitive applications such as voice and video.
- To respond to changes in network traffic flows.
When deploying QoS, analyze the traffic flowing through the bottleneck, determine the importance of each protocol and application, and devise a strategy for prioritizing access to the bandwidth. QoS allows control over bandwidth, latency, and jitter, and minimizes packet loss within the network by prioritizing traffic. Bandwidth is the measure of capacity on the network or a specific link. Latency is the delay of a packet traversing the network, and jitter is the change in latency over a given period of time.
Deploying certain types of QoS techniques can control these three parameters. QoS is not yet widely deployed in many networks, but with the push for applications such as multicast, streaming multimedia, and VoIP, the need for QoS is more apparent, especially since these applications are susceptible to jitter and delay, and poor performance is immediately noticed by the end user. However, QoS is not the magic solution to every congestion problem; it may very well be that upgrading the bandwidth of a congested link is the proper solution.
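As an illustration of prioritizing access to the bandwidth, the following minimal Python sketch shows strict-priority scheduling, in which a voice queue is always serviced ahead of a data queue. The queue contents are illustrative; production schedulers typically also police the priority queue so it cannot starve data traffic.

```python
from collections import deque

# Minimal sketch: strict-priority scheduling, with voice always dequeued
# ahead of data. Packet contents are illustrative.
voice_q = deque(["voice-1", "voice-2"])
data_q = deque(["data-1", "data-2", "data-3"])

def dequeue_next():
    """Return the next packet to transmit, preferring the voice queue."""
    if voice_q:
        return voice_q.popleft()
    if data_q:
        return data_q.popleft()
    return None

while (pkt := dequeue_next()) is not None:
    print("transmitting", pkt)
```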
Levels of QoS
QoS can be divided into three different levels, also referred to as service models. These service models describe a set of end-to-end QoS capabilities. End-to-end QoS is the network's ability to provide a specific level of service to network traffic from one end of the network to the other. The three service levels are best-effort service, integrated service, and differentiated service.
Best-Effort Service
Best-effort service means the network makes every possible attempt to deliver a packet to its destination, but offers no guarantee that the packet will ever arrive. An application can send data in any amount, whenever it needs to, without requesting permission or notifying the network.
Integrated Service
The integrated service model provides applications with a guaranteed level of service by negotiating network parameters end to end. Applications request the level of service necessary for them to operate properly and rely on the QoS mechanism to reserve the necessary network resources prior to the beginning of transmission. The application will not send traffic until it receives confirmation that the network can handle the load and provide the requested QoS end to end. To accomplish this task, the network uses a process called admission control.
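The admission-control idea can be sketched in a few lines of Python: a new reservation is accepted only if the requested bandwidth still fits within the link's unreserved capacity. The capacity and per-flow figures below are illustrative, not drawn from any particular protocol.

```python
# Minimal sketch of the admission-control idea: a reservation is accepted
# only if the requested bandwidth still fits within the link's capacity.
# Figures are illustrative, in kbps.
LINK_CAPACITY_KBPS = 1536          # e.g. a T1 circuit
reserved_kbps = 0

def admit(request_kbps: int) -> bool:
    """Accept the reservation only if enough unreserved bandwidth remains."""
    global reserved_kbps
    if reserved_kbps + request_kbps <= LINK_CAPACITY_KBPS:
        reserved_kbps += request_kbps
        return True
    return False

for flow, kbps in [("voice-call-1", 64), ("video-1", 1200),
                   ("voice-call-2", 64), ("video-2", 1200)]:
    print(flow, "admitted" if admit(kbps) else "rejected")
```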
Cisco IOS uses RSVP and intelligent queuing. RSVP is standardized by the IETF (RFC 2205). Intelligent queuing includes technologies such as Weighted Fair Queuing (WFQ) and Weighted Random Early Detection (WRED).
RSVP works in conjunction with the routing protocols to determine the best path through the network that will provide the required QoS. RSVP-enabled routers create dynamic access lists to deliver the requested QoS and ensure that packets are delivered at the prescribed minimum quality parameters.
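As a rough illustration of the WRED idea (not Cisco's exact implementation), the sketch below raises the probability of dropping an arriving packet linearly as the average queue depth grows between a minimum and a maximum threshold. The threshold and probability values are illustrative.

```python
import random

# Minimal sketch of the WRED idea: as the average queue depth grows between
# a minimum and maximum threshold, the probability of dropping an arriving
# packet rises linearly. Thresholds and drop probability are illustrative.
MIN_THRESHOLD = 20      # packets
MAX_THRESHOLD = 40      # packets
MAX_DROP_PROB = 0.1     # drop probability at the maximum threshold

def should_drop(avg_queue_depth: float) -> bool:
    if avg_queue_depth < MIN_THRESHOLD:
        return False                      # no congestion: never drop
    if avg_queue_depth >= MAX_THRESHOLD:
        return True                       # severe congestion: tail drop
    # Linear ramp between the thresholds
    drop_prob = MAX_DROP_PROB * (avg_queue_depth - MIN_THRESHOLD) / (MAX_THRESHOLD - MIN_THRESHOLD)
    return random.random() < drop_prob

for depth in (10, 25, 35, 45):
    print(f"avg depth {depth}: drop={should_drop(depth)}")
```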
Differentiated Service
Differentiated service includes a set of classification tools and queuing mechanisms that give particular protocols or applications priority over other network traffic. Differentiated services rely on edge routers to perform the classification of the types of packets traversing a network. Network traffic can be classified by network address, protocols and ports, ingress interfaces, or any other classification that can be accomplished with a standard or extended access list.
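A classifier of this kind can be sketched in Python: packets are matched on protocol and destination port, much as an extended access list would match them, and assigned a per-hop-behavior label. The port range and packet records below are illustrative (UDP 16384-32767 is a common convention for RTP voice bearer traffic).

```python
# Minimal sketch: classifying packets by protocol and destination port,
# similar in spirit to matching traffic with an extended access list and
# assigning it a DSCP class. The packet dictionaries are illustrative.
def classify(packet: dict) -> str:
    if packet["protocol"] == "udp" and 16384 <= packet["dst_port"] <= 32767:
        return "EF"        # RTP voice bearer traffic
    if packet["protocol"] == "tcp" and packet["dst_port"] in (80, 443):
        return "AF21"      # business web traffic
    return "BE"            # everything else stays best-effort

packets = [
    {"protocol": "udp", "dst_port": 16500},
    {"protocol": "tcp", "dst_port": 443},
    {"protocol": "tcp", "dst_port": 21},
]
for p in packets:
    print(p, "->", classify(p))
```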
Why QoS Is Essential in VoIP Networks
The challenge facing a converged infrastructure is to provide the efficiency of a packet-switched network with the reliability of a legacy network. This is the role that QoS fills.
QoS, through a variety of methods, gives a converged infrastructure reliability and availability while preserving its efficient utilization of resources, by providing the following:
- Managed response times
- Jitter (variation in delay) control
- Prioritization of delay-sensitive traffic
- Congestion management
- Congestion avoidance
- Support and enforcement of dedicated bandwidth requirements (a token-bucket sketch follows this list)
- Management and recovery of packet loss
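One way to picture the enforcement of a dedicated bandwidth requirement is a token bucket, sketched below in Python: tokens accumulate at the committed rate, each packet consumes tokens equal to its size, and a packet that finds too few tokens exceeds the rate. The rate, bucket depth, and packet sizes are illustrative.

```python
import time

# Minimal sketch of the token-bucket idea behind enforcing a committed
# bandwidth rate: tokens accumulate at the committed rate and each packet
# consumes tokens equal to its size. Rate and sizes are illustrative.
RATE_BPS = 64000          # committed rate: 64 kbps
BUCKET_DEPTH = 8000       # burst size in bits

tokens = BUCKET_DEPTH
last_update = time.monotonic()

def conforms(packet_bits: int) -> bool:
    """Return True if the packet fits the committed rate, consuming tokens."""
    global tokens, last_update
    now = time.monotonic()
    tokens = min(BUCKET_DEPTH, tokens + (now - last_update) * RATE_BPS)
    last_update = now
    if packet_bits <= tokens:
        tokens -= packet_bits
        return True
    return False

for size in (1600, 1600, 12000, 1600):    # packet sizes in bits
    print(f"{size}-bit packet:", "send" if conforms(size) else "exceeds rate")
```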
With QoS, converged infrastructures can provide end users with a convenient, low-cost, scalable, and above all, reliable solution for the majority of their communications. Without QoS, a converged infrastructure would be comparable to anarchy, with little to no reliability, convenience, or scalability—to a level where a single FTP session could shut down your entire VoIP infrastructure.