RSNA with 802.11i | Wi-Fi Security Technologies



802.11i addresses the major problems with WEP. The first set of problems, the inability to establish per-connection keys and the inability to negotiate different encryption algorithms, was fixed by a better key-management protocol.
On top of that, 802.11i introduced two new encryption and integrity algorithms. Wi-Fi Protected Access (WPA), version one, was created to quickly work around the problems of WEP without requiring significant changes to the hardware that devices were built out of. WPA introduced the Temporal Key Integrity Protocol (TKIP), which sits on top of WEP and fixes many of the problems of WEP without requiring new hardware. TKIP was designed intentionally as a transition, or stopgap, protocol, with the hope that devices would be quickly retired and replaced with those that supported the permanent solution, the second of the two algorithms.
Wi-Fi Protected Access version 2 (WPA2), as that permanent solution, required completely new hardware by not worrying about backwards compatibility. WPA2 uses AES to provide better security and eliminate the problems of using a linear stream cipher. A better integrity algorithm ensures that the packet has not been altered, and eliminates some of the denial-of-service weaknesses that needed to be introduced into TKIP to let it ward off some of the attacks that can't be directly stopped.
A word, first, on nomenclature. For those of you in the know, you might know that WPA has both TKIP and AES modes, 802.11i has slightly different TKIP and AES modes, and that both were harmonized in WPA2. However, practically, there really is no need to know that. For the remainder of this chapter, I will use WPA to mean TKIP as defined in WPA, WPA2 to mean AES as defined in the standard, and 802.11i to mean the framework under which WPA and WPA2 operate. This is actually industry convention—WPA and TKIP go hand in hand, and WPA2 and AES go hand in hand—so product documentation will most likely match with this use of the terms, but when there is doubt, ask your vendors whether they mean TKIP or AES.
802.11i first introduced the idea of a per-connection key negotiation. Each client that comes into the network must first associate. For WEP, which has no per-connection key, the client always uses the user-entered WEP key, which is the same for every connection. But 802.11i introduces an additional step to allow for a fresh set of per-connection keys every time, yet still based on the same master key.
Networks may still use preshared keys. These are now bumped up to 256 bits long. For WPA or WPA2, this mode of security is known as Personal, because the preshared key method was intended for home use. Enterprises can also use 802.1X and a RADIUS server to negotiate a unique key per device. This mode of security is known as Enterprise. For example, "WPA2 Enterprise" refers to using WPA2 with 802.1X. Either way, the overall key is called the pairwise master key (PMK). This is the analog to the original WEP key.
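In Personal mode, the passphrase is mapped to the 256-bit PMK using PBKDF2 with HMAC-SHA1, with the SSID as the salt and 4096 iterations. A minimal sketch in Python (the passphrase and SSID here are invented for illustration):

```python
import hashlib

def psk_to_pmk(passphrase: str, ssid: str) -> bytes:
    # WPA/WPA2-Personal mapping: PBKDF2-HMAC-SHA1, SSID as salt,
    # 4096 iterations, 256-bit (32-byte) output
    return hashlib.pbkdf2_hmac("sha1",
                               passphrase.encode(),
                               ssid.encode(),
                               4096, 32)

pmk = psk_to_pmk("correct horse battery", "ExampleSSID")
assert len(pmk) == 32  # the PMK is 256 bits
```

Note that the SSID acts as a salt: the same passphrase on two differently named networks yields two different PMKs.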
Now, when the client associates, it has to run a four-message protocol, known as the four-way handshake, to determine what should be used as the key for the connection, known as the PTK (the pairwise temporal key or pairwise transient key). This whole concept of derived keys is known as a key hierarchy.
The four-way handshake is made of unencrypted data frames, with an Ethernet type of EAPOL (0x888E), which show up as the specific type of Extensible Authentication Protocol over LAN (EAPOL) message known as an EAPOL-Key message. These four messages can be seen by wireless capture programs, and mark the opening of the data link between the client and the access point. Before the four-way handshake, clients and access points cannot exchange any data besides EAPOL frames. After the handshake, both sides can use the agreed-upon key to send data.
Message 1 of the four-way handshake is sent by the access point to the client, and signals the security settings of the access point (as contained in something called the RSN IE, shown in Table 1). The RSN IE contains the selection of encryption and integrity algorithms. The message also contains something called a nonce, which is a random number that the access point constructs (more on this shortly) and which will be mixed in with the PMK to produce the PTK.
Table 1: The security settings in the RSN IE

  Field                          Size
  Element ID                     1 byte
  Length                         1 byte
  Version                        2 bytes
  Group Cipher Suite             4 bytes
  Pairwise Cipher Suite Count    2 bytes
  Pairwise Cipher Suite List     n bytes
  AKM Suite Count                2 bytes
  AKM Suite List                 m bytes
  RSN Capabilities               2 bytes
  PMKID Count                    2 bytes
  PMKID List                     p bytes
Message 2 is sent in response, from the client to the access point, and includes the same information, but from the client: a client RSN IE, and a client nonce. Once the client has chosen its nonce, it has enough information to produce the PTK on its end. The PTK is derived from the two nonces, the addresses of the access point and client, and the PMK. At this point, it might seem like the protocol is done: the client knows enough to construct a PTK before sending Message 2, and the access point, once it gets the message, can use the same information to construct its own PTK. If the two devices share the same PMK—the master key—then they will pick the same PTK, and packets will flow. This is true, but the protocol needs to do a little bit more work to handle the case where the PMKs do not agree. To do this, the client "signs" Message 2 with a message integrity code (MIC). The MIC used is a cryptographic hash based on both the contents of the message and the key (PTK). Thus, the access point, once it derives its own PTK from its PMK and the nonces, can check to see whether the client's sent MIC matches what it would generate using its own PTK. If they match, then the access point knows that message 2 is not a forgery and the client has the right key. If they do not match, then the access point drops the message.
If Message 2 is correct, then Message 3 is sent by the access point, and is similar to Message 1 except that it too is now "signed" by the MIC. This lets the client know that the access point has the right key: at Message 2, only the access point could detect an attacker, but not the client. Also, the client can now verify that the access point is using the same security algorithms as the client—a mismatch would only occur if an attacker is injecting false RSN IEs into the network to try to get one side or both to negotiate to a weaker algorithm (say, TKIP) if a stronger algorithm (say, AES) is available. Finally, for WPA2, the client learns of the multicast key, the group temporal key (GTK), this way, as it is encrypted with the PTK and sent as the last part of the message.
Message 4 is a response from the client to the access point, and validates that the client got Message 3 and installed all of the correct keys.
The nonces exist to prove to each side that the other side is not replaying these messages—that is, that the other side is alive and is not an attacker. Imagine that the access point sends its nonce. An attacker trying to replay a previous, valid handshake for the same client could send an old Message 2, but the MIC on that Message 2 can never be correct, because it would be based on the access point nonce recorded from that previous handshake, not the new one that the access point just created. Thus, the access point can always tell the difference between a client that is really there and one that is just replayed from the past. The client can use its nonce to do the same thing. Also, if either side has the wrong PMK—which would happen with preshared keys if someone typed one of the keys wrong—the devices can catch it in the four-way handshake and not pretend to have a working connection.
Overall, the four-way handshake lets the two sides come together on a fresh connection key every time. The four-way handshake is the same, except for some minor details such as choice of algorithm, for WPA and WPA2.
By the way, keep in mind that the four-way handshake is only designed to provide a new PTK every time based on the same PMK, to provide a fresh PTK and eliminate the problem of old or stale keys that WEP has. The four-way handshake is not designed to hide the PTK from attackers who have the PMK. This is an important point: if an attacker happens to know the PMK already—such as a preshared key that he or she stole or remembered—then every PTK ever generated from that PMK, in the past and in the future, can be broken with minimal effort. This is known as a lack of forward secrecy and is a major security flaw in preshared key networks.
In other words, you must keep the PMK secret. Do not share preshared keys, ever—even if you have stopped using that preshared key and moved to a new one long ago. If an attacker had been recording your past conversations, when the old preshared key was in use, and someone leaks the preshared key to this attacker, your old conversations are in jeopardy.

WEP (Wired Equivalent Privacy)

WEP (Wired Equivalent Privacy) was the first attempt to secure 802.11. Unfortunately, the privacy it provided was neither equivalent to wired nor very good. Its very design does not protect against replays, meaning that an attacker can record prior valid traffic and replay it later, getting the network to repeat actions (such as charging credit cards) without detecting it. Furthermore, WEP uses RC4 for encryption, an algorithm that was not designed to be used in the way WEP uses it, leading to ways of reverse-engineering and cracking the encryption without the key. Finally, WEP uses a very poor message integrity code.
All of that said, WEP is a good place to look to learn the mechanics of security in 802.11, as the later and better security additions replaced the broken pieces but did not destroy the framework.
Note 
It is the author's recommendation to not use WEP in existing or new networks, under any circumstances, because of the known flaws. Consider the study of WEP to be an academic exercise at this point, and do not allow vendors to talk you into using it.
1) Keying
WEP starts off with an encryption key, or a piece of knowledge that is known by the access point and the client but is sufficiently complicated that outsiders—attackers, that is— shouldn't be able to guess it.
There may be one, two, or more WEP keys. These keys are each either 40 bits (WEP-40) or 104 bits (WEP-104) long, and are usually created from text passwords, although they can be entered directly as hexadecimal numbers. Manually entered keys are called preshared keys (PSK). WEP provides very little signaling to designate that encryption is in use, and there is no way to denote whether the short or long keys are being used. If any security at all is used in the network, the "Privacy" flag in the network's beacons is set. Clients that want to use WEP have to associate to the network and start sending encrypted traffic. If the keys match, the network makes forward progress and the user is happy. If the keys do not match, the user cannot do much, but otherwise has no idea what the error was. As you can see, this is not an ideal situation, and it is avoided in the modern, post-WEP protocols.
There are some more complicated possibilities, which are not worth going over, except to note that the origin of the confusing 802.11 term "authentication" for the first phase of a client's connection to the network came from an old method of using WEP to verify the key before association. This security method is completely ignored by post-WEP protocols, which use a different concept to ensure that clients have the right key. Therefore, the two Authentication frames are now considered vestigial, and carry no particularly useful information in them.
2) Encryption
The encryption key is not used directly to encrypt each packet. Instead, it is concatenated with a per-packet number, called the initialization vector (IV), to create the key that RC4 uses to encrypt the data. The initialization vector can be any number. Transmitters would start at zero, and add one for each frame sent, until they hit the end of the three-byte sequence, where they would start over at zero again. Why have a per-packet key, when the original key was supposedly secret? To answer this, let's look at the encryption algorithm for WEP, which is based on RC4.
RC4 is a stream cipher, meaning that it is designed to protect a large quantity of flowing, uninterrupted data, with minimal overhead. It is used, for example, to protect secure web traffic (HTTPS), because web traffic goes across in a stream of HTML. RC4 is really a pseudorandom number generator, with cryptographic properties to ensure that the stream of bits that comes out is hard to reverse-engineer. When given a key, RC4 generates an infinite number of bits, all appearing to be random. These bits are then matched up, bit-by-bit, to the incoming plaintext, or not yet encrypted, data. Each bit of the plaintext is added to each matching bit of the RC4 stream, without carry. This is also known as taking the exclusive or of two bits, and the logic goes that the resulting "sum" bit is 1 if exactly one of the incoming bits is 1, and 0 otherwise. The mathematical operation is represented by the ⊕ symbol, and so the four possibilities for the exclusive or are as follows: 0 ⊕ 0 = 0, 0 ⊕ 1 = 1, 1 ⊕ 0 = 1, and 1 ⊕ 1 = 0. When applied to the plaintext and RC4 together, the resulting stream looks as random as the original RC4 stream, but has the real data in it. Only a receiver with the right key can recreate the RC4 stream, do the same bitwise exclusive or to the encrypted data, and recover the original data. (The exclusive or operation has the property that adding any number to another number twice gives the original number back: n ⊕ d ⊕ d = n. Therefore, applying the exclusive or of the RC4 stream twice to the original data, once by the encryption algorithm and once by the decryption algorithm, gets the plaintext data back.)
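The round-trip property can be seen in a few lines of Python. The keystream below is a made-up stand-in for RC4 output; the point is only the n ⊕ d ⊕ d = n behavior:

```python
def xor_bytes(a: bytes, b: bytes) -> bytes:
    # Bitwise exclusive or of two equal-length byte strings
    return bytes(x ^ y for x, y in zip(a, b))

keystream = b"\x9a\x3f\x51\xc2\x07"   # stand-in for RC4 output
plaintext = b"hello"

ciphertext = xor_bytes(plaintext, keystream)   # encrypt
recovered  = xor_bytes(ciphertext, keystream)  # decrypt: same operation
assert recovered == plaintext  # n xor d xor d == n
```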
So far, so good. However, an attacker can use the properties of the exclusive or to recover the plaintext in certain cases, as well. If two frames come using the same per-frame key— meaning the same IV and WEP key—an eavesdropper can just add the two encrypted frames together. Both frames have the same per-frame key, so they both have the same RC4 stream, causing the exclusive or of the two encrypted frames to cancel out the identical RC4 stream and leave just the exclusive or of the two original, plaintext frames. The exclusive or of two plaintext frames isn't terribly different from having the original plaintext: the attacker can usually guess at the contents of one of the frames and make quick work discovering the contents of the other.
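Keystream reuse is equally short to demonstrate. XORing two ciphertexts made with the same keystream cancels the keystream, leaving the XOR of the two plaintexts; knowing (or guessing) one plaintext then reveals the other. The messages and keystream here are invented:

```python
def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

keystream = b"\x13\x37\xc0\xff\xee\x42\x99\x01\x5a"  # reused keystream
p1 = b"PAY $100."
p2 = b"PAY $999."

c1 = xor_bytes(p1, keystream)
c2 = xor_bytes(p2, keystream)

# The eavesdropper adds the two ciphertexts: the keystream cancels
combined = xor_bytes(c1, c2)           # equals p1 xor p2
recovered = xor_bytes(combined, p1)    # guessing p1 reveals p2
assert recovered == p2
```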
This isn't a flaw with RC4 itself so much as with using any exclusive or cipher—a type of linear cipher, because ⊕ is really addition modulo 2—as they are vulnerable to bit-by-bit attacks unless other algorithms are brought in as well.
Okay, so that explains the per-frame keying and the IV, and why it is not a good solution for security. In summary: replays are allowed, the IV wraps, and key reuse reveals the original plaintext. Finally, the per-frame key doesn't include any information about the sender or receiver. Thus, an attacker can take the encrypted content from one device and inject it as if it were from another. With that, three of the problems of WEP are exposed. But the per-frame keying concept in general is sound.
3) Integrity
To attempt to provide integrity, WEP also introduces the integrity check value (ICV). This is a checksum of the decrypted data—CRC-32, specifically—that is appended to the end of the data and encrypted with it. The idea is that an attacker might want to capture an encrypted frame, make possibly trivial modifications to it (flipping bits or setting specific bits to 0 or 1), and then send it on. Why would the attacker want to do this? Most active attacks, or those that involve an attacker sending its own frames, require some sort of iterative process. The attacker takes a legitimate frame that someone else sends, makes a slight modification, and sees if that too produces a valid frame. It discovers if the frame was valid by looking for some sort of feedback—an encrypted frame in the other direction—from the receiver. As mentioned earlier, RC4 is especially vulnerable to bit flipping, because a flipped bit in the encrypted data results in the flipping of the same bit in the decrypted data. The ICV is charged with detecting when the encrypted data has been modified, because the checksum should hopefully be different for a modified frame, and the frame could be dropped for not matching its ICV.
As mentioned before, however, WEP did not get this right, either. CRC-32 is not cryptographically secure. The effect of a bit flip on the data for a CRC is known. An attacker can flip the appropriate bits in the encrypted data, and know which bits also need to be flipped in the CRC-32 ICV to arrive at another, valid CRC-32, without knowing what the original CRC-32 was. Therefore, attackers can make modifications pretty much at will and get away with it, without needing the key. But again, the concept of a per-frame message integrity code in general is sound.
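The linearity that makes CRC-32 forgeable can be checked with Python's zlib. For equal-length messages, crc(x ⊕ d) = crc(x) ⊕ crc(d) ⊕ crc(0…0), so the change to the ICV depends only on the delta (which bits get flipped), not on the plaintext. The frame below is an invented example:

```python
import zlib

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

frame = b"GET /account?amount=100"  # plaintext the attacker never sees
delta = bytearray(len(frame))
delta[20] = 0x08                    # flip one bit: '1' (0x31) -> '9' (0x39)
delta = bytes(delta)

forged = xor_bytes(frame, delta)    # now reads "...amount=900"

# CRC-32 is linear: the ICV patch is computable from the delta alone
zeros = bytes(len(frame))
patch = zlib.crc32(delta) ^ zlib.crc32(zeros)
assert zlib.crc32(forged) == zlib.crc32(frame) ^ patch
```

In WEP the attacker applies `delta` to the encrypted data and `patch` to the encrypted ICV; because XOR commutes with the RC4 stream, the result decrypts to a frame with a valid checksum.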
4) Overall
WEP alters the data packet, then, by appending the ICV, then encrypting the data field, then prepending the unencrypted IV. Thus, the frame body is replaced with what is in Table 1.
Table 1: 802.11 Frame Body with WEP

  Field     Size
  IV        3 bytes
  Key ID    1 byte
  Data      n − 8 bytes
  ICV       4 bytes
The issues described are not unique to RC4, and really apply to how WEP would use any linear cipher. There are also some problems with RC4 itself that come out with the way RC4 is used in WEP, which do not come out in RC4's other applications. All in all, WEP used some of the right concepts, but a perfect storm of execution errors undermined WEP's effectiveness. Researchers and attackers started publishing what became an avalanche of writings on the vulnerability of WEP. Wi-Fi was at risk of becoming known as hopelessly broken, and drastic action was needed. Thus, the industry came together and designed 802.11i.

Security for 802.11



Security is a broad subject, and there is an entire chapter dedicated to the unique challenges with security for voice mobility later. But any component of voice mobility over Wi-Fi will require some use of 802.11's built-in encryption. Keep in mind that securing the wireless link is not only critical, but may be the only encryption used to prevent eavesdroppers from listening in on sensitive voice calls for many networks.
802.11 security has both a rich and somewhat checkered past. Because of the initial application of 802.11 to the home, and some critical mistakes by some of the original designers, 802.11 started out with inadequate protection for traffic. But thankfully, all Wi-Fi-certified devices today are required to support strong security mechanisms.
Nevertheless, administrators today do still need to keep in mind some of the older, less secure technologies—often because the mobile handset might not correctly support the latest security, and it may fall to you to figure out how to make an old handset work without compromising the security of the rest of the network.
A secure wireless network provides at least the following (borrowed from Chapter 8):
  • Confidentiality: No wireless device other than the intended recipient can decrypt the message.
  • Outsider Rejection: No wireless device other than a trusted sender can send a message correctly encrypted.
  • Authenticity and Forgery Protection: The recipient can prove who the original composer of the message is.
  • Integrity: The message cannot be modified by a third party without the message being detected as having been tampered with.
  • Replay Protection: An older but valid message cannot be resent by an attacker later, thus preventing attackers from replaying old transactions.
Some of these properties are contained in how the encryption keys get established or sent from device to device, and the rest are contained in how the actual encryption or decryption operates.

Collisions, Backoffs, and Retries



Multiple radios that are in range of each other and have data to transmit need to take turns. However, the particular flavor of 802.11 that is used in Wi-Fi devices does not provide for any collaboration between devices to ensure that two devices do take turns. Rather, a probabilistic scheme is used, to allow for radios to know nothing about each other at the most primitive level and yet be able to transmit.
This process is known as backing off, and is the basis of Carrier Sense Multiple Access with Collision Avoidance, or CSMA-CA. The process is somewhat involved, and is the subject of quite a bit of research, but the fundamentals are simple. Each radio that has something to send waits until the channel is free. If radios transmitted immediately, then any two radios with data to send would transmit simultaneously, causing a collision, and a receiver would only pick up interference. Carrier sense before transmission helps only when another radio has already been transmitting for a while. If two radios decide to transmit at roughly the same time—within a few microseconds—then it is impossible for the two to detect each other.
To partially avoid the collisions, each radio plays a particular well-scripted game. They each pick a random nonnegative integer less than a value known as the contention window (CW), a small power of 2. This value will tell the radio the number of slots, or fixed microsecond delays, that the radio must wait before they can transmit. The goal of the random selection is that, hopefully, each transmitter will pick a different value, and thus avoid collisions. When a radio is in the process of backing off, and another radio begins to transmit during a slot, the backing-off radio will stop counting slots, wait until the channel becomes free again, and then resume where it left off. That lets each radio take turns (see Figure 1).

 
Figure 1: The backoff procedure for two radios
However, nothing stops two radios from picking the same value, and thus colliding. When a collision occurs, the two transmitters find out not by being able to detect a collision as Ethernet does, but by not receiving the appropriate acknowledgments. This causes the unsuccessful transmitters to double their contention window, thus reducing the likelihood that the two colliders will pick the same backoff again. Backoffs do not grow unbounded: there is a maximum contention window. Furthermore, when a transmitter with an inflated contention window does successfully transmit a frame, or gives up trying to retransmit a frame, it resets its contention window back to the initial, minimum value. The key is to remember that the backoff mechanism applies to the retransmissions only for any one given frame. Once that frame either succeeds or exceeds its retransmission limit, the backoff state is forgotten and refreshed with the most aggressive minimums.
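The contention-window rules just described can be sketched as a few small functions. The CW values here are illustrative only; real minimums and maximums depend on the PHY and, with QoS, on the access category:

```python
import random

CW_MIN, CW_MAX = 16, 1024  # illustrative contention-window bounds

def pick_backoff(cw: int) -> int:
    # Choose a random slot count in [0, cw)
    return random.randrange(cw)

def after_collision(cw: int) -> int:
    # No acknowledgment arrived: double the window, up to the cap
    return min(cw * 2, CW_MAX)

def after_success_or_drop(cw: int) -> int:
    # Frame delivered (or retry limit exceeded): adaptation is forgotten
    return CW_MIN

cw = CW_MIN
for _ in range(3):              # three collisions in a row
    cw = after_collision(cw)
assert cw == 128                # 16 -> 32 -> 64 -> 128
assert after_success_or_drop(cw) == CW_MIN
```

The last two lines show the behavior criticized in the text: however large the window grows, a single success resets it to the most aggressive minimum.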
The slotted backoff scheme had its origin in Slotted ALOHA, a refinement of the access scheme used by the early Hawaiian research network ALOHAnet, which addressed the problem of figuring out which of multiple devices should talk without using coordination such as that which token-based networks use. This scheme became the foundation of all contention-based network schemes, including Ethernet and Wi-Fi.
However, the way contention is implemented in 802.11 has a number of negative consequences. The denser and busier the network, the more likely that two radios will collide. For example, with a contention window of four, if five stations each have data, then a collision is assured. The idea of doubling contention windows is to exponentially grow the window, reducing the chance of collisions accordingly by making it large enough to handle the density. This would allow for the backoffs to adapt to the density and busyness of the network. However, once a radio either succeeds or fails miserably, it resets its contention window, forgetting all adaptation effects and increasing the chance of collisions dramatically.
Furthermore, there is a direct interplay between rate adaptation—where radios drop their data rates when there is loss, assuming that the loss is because the receiver is out of range and the transmitter's choice of data rate is too aggressive—and contention avoidance. Normally, most devices do not want to transmit data at the same time. However, the busier the channel is, the more likely that devices that get data to send at different times are forced to wait for the same opening, increasing the contention. As contention goes up, collisions go up, and rate adaptation falsely assumes that the loss is because of range issues and drops the data rate. Dropping the data rate increases the amount of time each frame stays on air—a 1Mbps data frame takes 300 times the amount of time a 300Mbps data frame of the same number of bytes takes—thus increasing the busyness of the channel. This becomes a vicious cycle, in a process known as congestion collapse, that causes the network to spend an inordinate amount of time retransmitting old data and very little time transmitting new data. This is a major issue for voice mobility networks, because the rate of traffic does not change, no matter what the air is doing, and so a network that was provisioned with plenty of room left over can become extremely congested by passing over a very short tipping point.
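The airtime claim is simple arithmetic: at R megabits per second, a frame of B bytes occupies 8B/R microseconds of payload time (ignoring preambles and MAC overhead):

```python
def payload_airtime_us(frame_bytes: int, rate_mbps: float) -> float:
    # Mbps is bits per microsecond, so time = bits / rate
    return frame_bytes * 8 / rate_mbps

fast = payload_airtime_us(1500, 300)  # 40.0 microseconds at 300 Mbps
slow = payload_airtime_us(1500, 1)    # 12000.0 microseconds at 1 Mbps
assert slow / fast == 300             # the 300x factor from the text
```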

Hidden Nodes | Wi-Fi's Approach to Wireless



Carrier sense lets the transmitter know if the channel near itself is clear. However, for one transmitter's wireless signal to be successfully received, the channel around the receiver must be clear—the transmitter's channel doesn't matter. The receiver's channel must be clear to prevent interference from multiple signals at the same time. However, the transmitter can successfully transmit with another signal in the air, because the two signals will pass through each other without harming the transmitter's signal.
So why does 802.11 require the transmitter to listen before sending? There is no way for the receiver to inform the transmitter of its channel conditions without itself transmitting. In networks that are physically very small—well under the range of Wi-Fi transmissions—the transmitter's own carrier sensing can be a good proxy for the receiver's state. Clearly, if the transmitter and receiver are immediately next to each other, the transmitter and receiver pretty much see the same channel. But as they separate, they experience different channel conditions. Far enough away, and the transmitter has no ability to sense if a third device is transmitting to or near the receiver at the same time. This is called the hidden node problem.
Figure 1 shows two transmitters and a receiver in between the two. The receiver can hear each transmitter equally, and if both transmitters are sending at the same time, the receiver will not be able to make out the two different signals and will receive interference only. Each transmitter will perform carrier sense to ensure that the channel around it is clear, but it won't matter, because the other transmitter is out of range. Hidden node problems generally appear this way, where the interfering transmitters are on the other side of the receiver, away from the transmitter in question.

 
Figure 1: Hidden Nodes: The receiver can hear both transmitters equally, but neither transmitter can hear the other
802.11 uses RTS/CTS as a partial solution. As mentioned when discussing the 802.11 protocol itself, a transmitter will first send an RTS, requesting from the receiver a clear channel for the entire length of the transmission. By itself, the RTS does not do anything for the transmitter or receiver, because the data frame that should have been sent would have the same effect, of silencing all other devices around the sender. However, what matters is what the receiver does. The CTS it sends will silence the devices on the far side from the sender, using the duration value and virtual carrier sense to cause those devices to not send, even though they cannot detect the following real data frame (see Figure 2).

 
Figure 2: RTS/CTS for Hidden Nodes: The CTS silences the interfering devices
This is only a partial solution, as the RTSs themselves can get lost because of hidden nodes. The advantage of the RTS, however, is that it is usually somewhat shorter than the data frame or frames following. For the RTS/CTS protocol to be the most effective against hidden nodes, the RTS and CTS must go out at the lowest data rate. However, many devices send the RTSs at far higher rates. This is done mostly to just take advantage of RTSs determining whether the receiver is in range, and not to avoid hidden nodes.
Furthermore, the RTS/CTS protocol has a very high overhead, as many data packets could be sent in the time it takes for an RTS/CTS transmission to complete.

Clear Channel Assessment and Details on Carrier Sense



Now that we've covered the preamble, you can begin to understand what the term carrier sense would mean in wireless.
The term clear channel assessment (CCA) represents how a radio determines if the air is clear or occupied. Informally, this is referred to as carrier sense. As mentioned previously, transmitters are required to listen before they transmit, to determine whether someone else is also speaking, and thus to help avoid collisions.
When listening, the receiver has a number of tools to help discover if a transmission is under way. The most basic concept is that of energy detection. A radio can figure out whether there is energy in the channel by using a power meter. This power meter is usually the one responsible for determining the power level, often stated as the Receive Signal Strength Indication (RSSI), of a real signal. When applied to an unoccupied channel, the power meter will detect the noise floor, often around −95dBm, depending on the environment. However, when a transmission is starting, the power meter will detect the signal being sent, and the power level measured will jump—let's say, to −70dBm for this example. That difference of 25dB can be used by the radio to clue in that it should attempt to turn on its modem and seek out the preamble. This allows the radio to have its modem off until real signals come by.
Energy detection can be used as a form of carrier sense to trigger the CCA. When done that way, non-802.11 noise that crosses a certain threshold, determined by the radio, will show up as an occupied channel for as long as the noise is present. This allows the radio to avoid transmitting into a channel at the same time as interference is present. In the 2.4GHz band, microwave ovens can often trigger the energy detection thresholds on radios, causing the radios to stop transmitting at that time.
On the other hand, energy detection for CCA has its limitations. If the noise coming in is something that would not interfere with the transmission, but does trip the energy detection threshold, then airtime is being wasted. Therefore, the carrier acquisition portion of CCA comes into play. Radios know to look for specific bit patterns in a transmission, such as the preamble. When they detect these bit patterns, they can assert CCA as well. Or, more importantly, when they detect some energy in the channel but cannot detect these bit patterns, they can conclude that there is no legitimate 802.11 signal and suppress CCA.
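The two CCA inputs can be combined in a sketch like the following. It implements one simple policy, asserting busy on either a decoded preamble or raw energy; the threshold values and the decision logic are assumptions for illustration, since real radios use chip-specific thresholds and, as noted above, may suppress the energy trigger when no valid preamble is found:

```python
NOISE_FLOOR_DBM = -95.0   # assumed ambient noise floor
ED_MARGIN_DB = 20.0       # assumed energy-detection margin

def cca_busy(measured_dbm: float, preamble_detected: bool) -> bool:
    # Carrier acquisition: a decoded 802.11 preamble asserts CCA
    if preamble_detected:
        return True
    # Energy detection: power well above the noise floor also asserts
    # CCA, even for non-802.11 noise such as microwave ovens
    return measured_dbm >= NOISE_FLOOR_DBM + ED_MARGIN_DB

assert cca_busy(-70.0, False)       # strong noise alone: channel busy
assert cca_busy(-92.0, True)        # weak but valid preamble: busy
assert not cca_busy(-94.0, False)   # near the noise floor: idle
```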

Preambles



Because 802.11 allows transmitters to choose from among multiple data rates, a receiver has to have a way of knowing the data rate at which a given frame is being transmitted. This information is conveyed within the preamble (see Figure 1).

 
Figure 1: 802.11 Preambles Illustrated
The preamble is sent in the first few microseconds of transmission for 802.11, and announces to all receivers that a valid 802.11 transmission is under way. The preamble depends on the radio type, but generally follows the principle of having a fixed, well-known pattern, followed by frame-specific information, then followed by the actual frame. The fixed pattern at the beginning lets the receiver train its radio to the incoming transmission. Without it, the radio might not be able to be trained to the signal until it is too late, thus missing the beginning of the frame. The training is required to allow the receiver to know where the divisions between bits are, as well as to adjust its filters to get the best version of the signal, with minimum distortion. The frame-specific information that is included with the preamble (or literally, the Physical Layer Convergence Procedure (PLCP) following the preamble, although the distinction is unnecessary for our purposes) names two very important properties of the frame: the data rate the frame will be sent at, and how long the frame will be.
All preambles are sent at the lowest rate the radio type supports. This ensures that no matter what the data rate of the packet, every radio that would be interfered with by the transmission will know a transmission is coming and how long the transmission will last. It also tells the receiver what data rate it should be looking for when the actual frame begins. All devices within range of the transmitter will hear the preamble, the length field, and the data rate. This range is fixed—because the preamble is sent at the lowest data rate in every case, the range is fixed to be that of the lowest data rate. Note that there is no way to change the data rate at which the preamble is sent. The standard intentionally defines it to be a fixed value—1Mbps for 802.11b, and 6Mbps for everything else.
When a radio hears a preamble with a given data rate mentioned, it will attempt to enable its modem to listen for that data rate only, until the length of the frame, as mentioned in the preamble, has concluded. If the receiver is in range of the transmitter, the modem will be able to properly detect the frame. If, however, the receiver is out of range, the receiver will hear garbage. The garbage will not pass the checksum (also garbage), and so will be discarded.
To prevent radios from interpreting noise as a preamble, and locking to the wrong data rate for a possibly very long length, the frame-specific information has its own checksum bit or bits, depending on the radio type. Only on rare occasions will the checksum bit fail and cause a false reception; thus, there is no concern for real deployments.
In summary, a receiver then works by first setting its radio to the lowest common denominator: the lowest data rate for the radio. If the fixed sequence of a preamble comes in, followed by the data rate and length, then the radio moves its modem up to the data rate of the frame and tries to gather the number of bits it calculates will be sent, from the length given. Once the amount of time necessary for the length of the frame has concluded, the radio then resets back to the lowest data rate and starts attempting to receive again.
