Handoff Breaks | What Makes Voice over IP Quality Suffer


Handoffs cause consecutive packet losses. As mentioned in the previous discussion of packet loss, the impact of a handoff glitch can be large. The E-model is not a good measure of how annoying handoff breaks are, because it takes into account only the average burst length. Handoffs can cause burst losses far longer than the average, long enough to delete entire words or parts of sentences.
Later chapters explore the details of where handoff breaks can occur. The two general categories are intratechnology handoffs, such as from one Wi-Fi access point to another, and intertechnology handoffs, such as from Wi-Fi to cellular. Both kinds can cause losses lasting up to a second, and intertechnology handoff losses can be far longer if the line is busy or the network is congested when the handoff takes place.
The exact tolerance for handoff breaks depends on the mobility of the user, the density or cell sizes of the wireless technology in use, and the frequency of handoffs. Mobility cuts both ways: the more mobile the user is at the time of handoff, the more forgiving the user may be, so long as the glitches stop when the user does. The density of the base stations and the sizes of the cells determine how often a station hands off and how many choices it has when doing so; together, these set both how frequent the glitches are and how long they last. Finally, the number of glitches a user experiences during a call shapes how the user feels about the call and the technology.
There are no firm rules for how often glitches may occur, beyond the obvious one: they should not be so frequent or so long that they amount to a packet loss rate approaching half a percentage point. With 20 ms packets, that rate corresponds to one lost packet in a four-second window. A 100 ms glitch therefore consumes five packets, and so should occur no more than once every 20 seconds. Longer glitches also risk increasing the burst loss factor, and, even more, risk causing too many noticeable flaws in the call even if they do not happen every few seconds. If, every two minutes, the caller must repeat something because a choice word or two has been lost, the caller is right to conclude that something is wrong with the call or the technology, even though such cases do not fit well in the E-model.
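The arithmetic above generalizes to any glitch length. As a sketch (the 20 ms packet interval and 0.5% loss budget come from the text; the function name is just for illustration):

```python
def min_glitch_spacing_s(glitch_ms, packet_ms=20, max_loss_rate=0.005):
    """Minimum seconds between handoff glitches to stay under a loss budget."""
    packets_lost = glitch_ms / packet_ms            # packets deleted per glitch
    # loss rate = packets lost per glitch / packets sent between glitches
    packets_between = packets_lost / max_loss_rate
    return packets_between * packet_ms / 1000.0     # convert packets to seconds

print(min_glitch_spacing_s(100))   # a 100 ms glitch: roughly one per 20 seconds
```

The same function shows why longer breaks are so costly: doubling the glitch length doubles the required spacing, so a one-second break would need well over three minutes between occurrences just to stay inside the loss budget.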
Furthermore, handoff glitches may not always result in a pure loss, but rather in a loss followed by a delay, because packets may have been held during the handoff. This delay causes the jitter buffer (jitter is explained in Section 3.2.4) to grow, deferring the loss to another time, possibly with more delay accumulated.
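A toy timeline makes this held-packet effect concrete. The numbers here are illustrative assumptions, not measurements from the text: packets sent every 20 ms, with a 100 ms handoff starting at t = 200 ms that holds packets and releases them in a burst when the new link comes up.

```python
def arrival_times(n_packets, packet_ms=20, handoff_at_ms=200, hold_ms=100):
    """Arrival time of each packet when a handoff holds (not drops) packets."""
    arrivals = []
    for i in range(n_packets):
        sent = i * packet_ms
        if handoff_at_ms <= sent < handoff_at_ms + hold_ms:
            # Held during the handoff; released when the new link is up.
            arrivals.append(handoff_at_ms + hold_ms)
        else:
            arrivals.append(sent)                   # delivered on time
    return arrivals

times = arrival_times(20)
# Extra delay each packet sees beyond its send time; an adaptive jitter
# buffer must grow by the worst case to play the burst out without a gap.
extra = [arrival - i * 20 for i, arrival in enumerate(times)]
print(max(extra))   # worst-case added delay equals the hold time (100 ms)
```

Rather than five packets vanishing, five packets arrive simultaneously and late, which is why the glitch reappears later as accumulated delay instead of as an immediate gap.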
A good rule of thumb is to look for technologies that keep handoff glitches under 50 ms. This keeps both the delaying effect and the loss effect within reasonable limits. The only exception is handoffs between technologies, such as a fixed-mobile convergence handoff between Wi-Fi and cellular. As long as those events are kept not only rare but predictable, happening only on entering or exiting a building, for example, the user is likely to forgive the glitch because it represents the convenience of keeping the phone call alive, knowing that it would otherwise have died. In that case, it is reasonable to require that the handoff break not exceed two seconds, and that it average around half a second.
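The two thresholds above can be written down as a simple acceptance check. This is a hedged sketch of the rule of thumb only (the function and its name are hypothetical, not part of any standard):

```python
def handoff_break_acceptable(break_ms, intertechnology=False):
    """Apply the rule of thumb: <50 ms within a technology; rare,
    predictable intertechnology handoffs may run up to 2 s."""
    if intertechnology:
        return break_ms <= 2000
    return break_ms <= 50

print(handoff_break_acceptable(40))          # intratechnology, fine
print(handoff_break_acceptable(800, True))   # FMC handoff, tolerable
print(handoff_break_acceptable(800))         # too long within one technology
```

Note that the 2 s bound only holds when the user can anticipate the event; a frequent or unpredictable 800 ms break would be judged against the stricter limit.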


Telecom Made Simple

Related Posts with Thumbnails