Key Caching | 802.1X, EAP, and Centralized Authentication

Because the work required to establish a PMK when 802.1X and RADIUS are used is significant, WPA2 provides a way for the PMK to be cached, so that a client that leaves an access point and returns before the PMK expires can reuse it.
This is done using key caching. Key caching works because each PMK is given a label, called a PMKID, that names the RADIUS association and the PMK that was derived from it. The PMKID is specifically a 128-bit string, produced by the function

PMKID = HMAC-SHA1-128(PMK, "PMK Name" ∥ AA ∥ SPA)

where AA is the BSSID Ethernet address of the access point, SPA is the Ethernet address of the client, and HMAC-SHA1-128 is the first 128 bits of the well-known SHA1-based HMAC function for producing a cryptographic one-way signature, keyed with the PMK. The double pipes ("∥") represent concatenation. The "PMK Name" ASCII string is there to prevent implementers from putting the results of this function in the wrong places and having things work by accident.
From this, it is easy to see that a client and access point can share the same PMKID only if they have the same PMK and are referring to each other.
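As a sketch, the PMKID computation described above can be written in a few lines of Python; the PMK and the two MAC addresses below are made-up illustration values, not real keys or devices.

```python
import hashlib
import hmac

def pmkid(pmk: bytes, aa: bytes, spa: bytes) -> bytes:
    """PMKID = HMAC-SHA1-128(PMK, "PMK Name" || AA || SPA).

    AA and SPA are the 6-byte MAC addresses of the access point and
    the client; only the first 128 bits (16 bytes) of the HMAC-SHA1
    output are kept.
    """
    return hmac.new(pmk, b"PMK Name" + aa + spa, hashlib.sha1).digest()[:16]

pmk = bytes(32)                       # all-zero 256-bit PMK, illustration only
aa  = bytes.fromhex("0013a9001122")   # hypothetical BSSID
spa = bytes.fromhex("0013a9334455")   # hypothetical client address
print(pmkid(pmk, aa, spa).hex())
```

Because the addresses are part of the hashed data, the same PMK produces a different PMKID for each access point/client pairing, which is exactly what makes the label safe to use as a cache key.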
When the client associates, it places into its Reassociation message's RSN information element (Table 5.16) the PMKID it may have remembered from a previous association to the access point. If the access point also remembers the previous association, and still has the PMK, then the access point will skip starting 802.1X and will proceed to sending the first message in the four-way handshake, basing it on the remembered PMK.
This caching behavior is not mandatory, in the sense that either side can forget about the PMK and the connection will still proceed. If the client does not request a PMKID, or the access point does not recognize or remember the PMKID, the access point will still send an EAP Request Identity message, and the 802.1X protocol will continue as if no caching had taken place.

802.1X | Wi-Fi Radio Types

802.1X, also known as EAPOL, for EAP over LAN, is a basic protocol supported by enterprise-grade Wi-Fi networks, as well as modern wired Ethernet switches and other network technologies. The idea behind 802.1X is to allow the user's device to connect to the network as if the RADIUS server and advanced authentication systems did not exist, but to then block the network link for the device for all protocols except 802.1X, until authentication is complete. The network has only two requirements: to prevent all data traffic to or from the client, except EAPOL (Ethernet protocol 0x888E), from passing; and to take the EAPOL frames, remove the EAP messages embedded within, and tunnel those over the RADIUS protocol to the AAA server.
The job of the network, then, is rather simple. However, the sheer number of protocols can make the process seem complex. We'll go through the details slowly. The important thing to keep in mind is that 802.1X is purely a way of opening what acts like a direct link between the AAA server and the client device, to allow the user to be authenticated by whatever means the AAA server and client deem necessary. The protocols are all layered, allowing the highest-level security protocols to ride on increasingly more specific frames, each acting as a blank envelope for its contents.
Once the AAA server and the client have successfully authenticated, the AAA server will use its RADIUS link to inform the network that the client can pass. The network will tear down its EAPOL-only firewall, allowing generic data traffic to pass. In the same message that the AAA server tells the network to allow the client (an EAP Success), it also passes the PMK—the master key that the client also has and will be used for encryption—to the network, which can then drop into the four-way handshake to derive the PTK and start the encrypted channel. This PMK exchange goes in an encrypted portion of the EAP response from the RADIUS server, and is removed when the EAP Success is forwarded over the air. The encryption is rather simple, and is based on the shared password that the RADIUS server and controller or access point have. Along with the PMK comes a session lifetime. The RADIUS server tells the controller or access point how long the authentication, and subsequent use of the keys derived from it, is valid. Once that time expires, both the access point and the client are required to erase any knowledge of the key, and the client must reauthenticate using EAP to get a new one and continue using the network.
For network administrators, it is important to keep in mind that the EAP traffic in EAPOL is not encrypted. Because the AAA server and the client have not agreed on the keys yet, all of the traffic between the client and the RADIUS server can be seen by passive observers. This necessarily limits the EAP methods—the specific types of authentication—that can be used. For example, in the early days of 802.1X, an EAP method known as EAP-MD5 was used, where the user typed a password (or the client used the user's computer account password), which was then hashed with the MD5 one-way cryptographic hash algorithm, and then sent across the network. Now, MD5 is flawed, but is still secure enough that an attacker would have a very hard time reverse-engineering the password from the hash of it. However, the attacker wouldn't need to do this, as he could just replay the same MD5 hashed version himself, as if he were the original user, and gain access to the network. For this reason, no modern wireless device supports EAP-MD5 for wireless authentication.

What is Authentication in 802.1X?

Let's first define exactly what authentication is, and what the technology expects out of the authentication process. We've mentioned credentials immediately preceding this section. An authentication credential is something that one party to the communication has that the other parties can use to verify that the user really is who he claims to be and is authorized to join the network.
In the preshared key case, the authentication credential is just the preshared key, a global password that every user shares. This is not very good, because every user appears identical, and there is no way for users to know that their networks are also authentic. Authentication should be a two-way street, and it is important for the clients to know that the network they are connecting to is not a fraud. With preshared keys, anyone with the key can set up a fraudulent rogue access point, install the key, and appear to be real to the users, just as they can arbitrarily decrypt over-the-air traffic.
Normal computer account security, such as what is provided by email servers, enterprise personal computers, and Active Directory (AD) networks, generally uses the notion that a user has a unique, secret password. When the user wants to access the network, or the machine, or the email account, she enters her password. If the password matches, she is allowed in; otherwise, she is not.
(In fact, to prevent the system administrators from having access to the user's password, which the user might use in other systems and might not want to share, these systems will record a cryptographically hashed version of the password. This version, such as the MD5-hashed one mentioned in the next section, prevents anyone looking at it from knowing what the original password is, yet at the same time allows the user to type their password at any time, which leads to a new MD5-hashed string that will be identical to the one recorded by the system if and only if the passwords are identical.)
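The store-the-hash idea in the parenthetical above can be sketched in a few lines. MD5 is used here only because it is the historical example in the text; real systems today use salted, deliberately slow hashes (bcrypt, PBKDF2, and the like).

```python
import hashlib

def store_password(password: str) -> str:
    # The system records only the hash, never the password itself.
    return hashlib.md5(password.encode()).hexdigest()

def check_password(stored_hash: str, attempt: str) -> bool:
    # Hash the attempt and compare: the hashes match if and only if
    # (with overwhelming probability) the passwords are identical.
    return hashlib.md5(attempt.encode()).hexdigest() == stored_hash

record = store_password("correct horse")
print(check_password(record, "correct horse"))  # True
print(check_password(record, "wrong horse"))    # False
```

Note that an administrator who reads `record` still cannot recover the password, yet the login check works at any time, which is exactly the property the text describes.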
This identifies the user, but what about the network, which can't type a password to prove itself to the user? More advanced authentication methods use public key cryptography to provide more than a password. The background is quite simple, however. Public key cryptography is based on the notion of a certificate. A certificate is a very small electronic document, of an exact and precise format, containing some basic information about the user, network, or system that the certificate represents. I might have a certificate that states it is written for the name of my user account at some company. The network might have a certificate that states it is written for the DNS name of the server running the network. To ensure that the contents of the certificate are not downright lies made up in the moment, each certificate is signed using another certificate, that of a certificate authority whom both parties need to trust in advance. Finally, each certificate includes some cryptographic material: a public key, which is published in the certificate, and a private key, which the owner of the certificate keeps hidden and tells no one. This private key is like a very big, randomly generated password. The difference is that the private key can be used to encrypt data that the public key can decrypt, and the public key can be used to encrypt data that the private key can decrypt. This allows the holder of the certificate to prove his or her identity by encrypting something using his or her private key. It also allows anyone else in the world to send the holder of the certificate a private message that only the holder can decrypt.
Certificates are necessary for network authentication. When the user tries to authenticate to the network, the network will prove its identity by using its private key and certificate, and the client will accept it only if the network gives the right information based on that certificate. Certificates are also useful for user authentication, because the same properties work in reverse. The EAP method known as EAP-TLS requires client certificates. Most of the other Wi-Fi-appropriate EAP methods use only server certificates, and require client passwords instead.
To recap, authentication over Wi-Fi means that the user enters a password or sends his certificate to the AAA server, which proves his identity, while the network sends its certificate to the client, whose supplicant automatically verifies the network's identity, much as web browsers verify a server's identity when using HTTPS.
It is the EAP method's job to specify whether passwords or certificates are required, how they are sent, and what other information may be required. The EAP method also is required to allow the AAA server and the client to securely agree to a master key—the PMK—which is used long after authentication to encrypt the user's data. The EAP method also must ensure that the authentication process is secure even though it is sent over an open, unencrypted network, as you will see in the following section on 802.1X.
The administrator is allowed to control quite a bit about what types of authentication methods are supported. The AAA administrator (not, you may note, the network administrator, unless this is the same person) determines the EAP methods, and thus the certificate and authentication requirements. The AAA administrator also chooses how long a user can keep network access until he or she has to reauthenticate using EAP. The network administrator controls the encryption algorithm—whether to use WPA or WPA2. Together, the two administrators can use extensions to RADIUS to also introduce network access policies based on the results of the AAA authentication.

802.1X, EAP, and Centralized Authentication | Security for 802.11

So far, we have covered only Wi-Fi's self-contained security mechanisms. With WPA2, the encryption and integrity protection of the data messages can be considered strong. But we've only seen preshared keys, or global passwords, as the method by which the network authenticates the user, and preshared keys are not strong enough for many needs.
The solution is to rely on the infrastructure provided by centralized authentication using a dedicated Authentication, Authorization, and Accounting (AAA) server. These servers maintain a list of users, and for each user, the server holds the authentication credentials required for that user to access the network. When the user attempts to access the network, she is required to step through the authentication protocol demanded by the AAA server. The server drives its end of the protocol, challenging the user, by way of a piece of software called a supplicant that exists on the user's device, to prove that the user has the necessary credentials. The network exists as a pipe, relaying the protocol from the AAA server to the client. Once the user has proven that she has the right credentials (she apparently is who she says she is), the AAA server tells the network that the user can come in.
The entire design of RADIUS was originally centered around providing password prompts for dial-up users on old modem banks. However, with the addition of the Extensible Authentication Protocol (EAP) framework on top of RADIUS, and built into every modern RADIUS server, more advanced and secure authentication protocols have been constructed. See Figure 1.

Figure 1: The Components of RADIUS Authentication over Wi-Fi
The concept behind EAP is to provide a generic framework in which the RADIUS server and the client device can communicate to negotiate the security credentials that the network administrator requires, without having to involve or modify the underlying network access technology. To accomplish this last feat, the local access network must support 802.1X.

Wi-Fi Link Security

To summarize, 802.11 security is provided by three different grades of technology: the outdated and broken WEP, the transition-providing WPA, and the secure and modern WPA2.
WPA and WPA2 are both built on the same framework of 802.11i, which provides a rich protocol for 802.11 clients and access points to communicate and negotiate which over-the-air encryption and integrity algorithms should be used.
Networks start off with a master key—either a preshared key, entered as text by the user into the access point and mobile device, or generated in real time by enterprise-grade authentication systems. This master key is then used to derive a per-connection key, called the PTK. The PTK is then used to encrypt and provide integrity protection for each frame, using either TKIP for WPA or AES for WPA2.
It bears repeating that preshared keys, for all grades of 802.11 security, have problems that cause both security and management headaches. The biggest security headache is that the privacy of the entire network is based on that PSK being kept private for eternity. If a PSK is ever found out by an attacker—even if that key has been retired or changed a long time ago—then the attacker can use that key to decrypt any recordings of traffic that were taken when the PSK had been in use. Furthermore, because preshared keys are text and are common for all devices, they are easy to share and impossible to revoke. Good users can be fooled into giving the PSK away, or bad users—such as employees who have left the organization—can continue to use the preshared keys as often as they desire.
These problems are solved, however, by moving away from preshared keys to using 802.1X and EAP. Recently, some vendors have been introducing the ability to create per-user preshared keys. The advantage of having per-user keys is that one user's access can be revoked without allowing that user to compromise the rest of the network. The problem with this scheme, however, is the continued lack of forward secrecy: an attacker who steals a user's key can still decrypt every packet that user has ever sent or will ever send under that key. For this reason, 802.1X is still recommended, using strong EAP methods that provide forward secrecy.

WPA2 and AES | Security for Wi-Fi Radio

WPA2 introduces a new encryption algorithm, using the Advanced Encryption Standard (AES). This cipher was produced to be used as a standard algorithm wherever encryption is needed.
AES is a block cipher, unlike RC4. A block cipher takes blocks of messages—fixed chunks of bytes—and encrypts each block, producing a new block of the same size. These are nonlinear ciphers, and so the bit-flip attacks are significantly harder. AES was specifically designed and is believed to be practically impervious to those styles of attacks. With a bare block cipher, each block is encrypted independently, a bit of a downside compared to stream ciphers, because identical plaintext blocks would produce identical ciphertext blocks. To remove that independence, WPA2 uses what is called Counter mode, a simple concept in which the cipher encrypts an ever-incrementing counter to produce a fresh keystream block for each block of the message.
The MIC used is also based on AES, but here the cipher is used as a cryptographic hash. This construction is called a cipher block chaining message authentication code (CBC-MAC), and it essentially uses the concept of making later blocks depend on earlier ones, but outputs only the last block as the result. This small block (128 bits) is dependent on every bit of the input, and so works as a signature, or hash.
The overall algorithm used is known as Counter Mode with Cipher Block Chaining-Message Authentication Code (CCMP).
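The two halves of CCMP can be illustrated in miniature. AES itself is not in the Python standard library, so the sketch below substitutes a hash-based stand-in for the block cipher; the point is the *modes* (Counter mode and CBC-MAC), not the cipher, and none of this should be used for real encryption.

```python
import hashlib

BLOCK = 16  # AES block size in bytes

def stand_in_cipher(key: bytes, block: bytes) -> bytes:
    # Stand-in for AES (not in the standard library): any keyed
    # pseudorandom function is enough to illustrate the modes.
    return hashlib.sha256(key + block).digest()[:BLOCK]

def ctr_encrypt(key: bytes, nonce: bytes, message: bytes) -> bytes:
    # Counter mode: encrypt nonce || counter to get a unique keystream
    # block, then XOR it with the corresponding message block.
    out = bytearray()
    for i in range(0, len(message), BLOCK):
        keystream = stand_in_cipher(key, nonce + i.to_bytes(8, "big"))
        out.extend(b ^ k for b, k in zip(message[i:i + BLOCK], keystream))
    return bytes(out)

def cbc_mac(key: bytes, message: bytes) -> bytes:
    # CBC-MAC: XOR each block into the running state, encrypt, and
    # keep only the final block as the 128-bit integrity code.
    state = bytes(BLOCK)
    padded = message + bytes(-len(message) % BLOCK)  # zero-pad, sketch only
    for i in range(0, len(padded), BLOCK):
        mixed = bytes(a ^ b for a, b in zip(state, padded[i:i + BLOCK]))
        state = stand_in_cipher(key, mixed)
    return state

key, nonce = b"k" * 16, b"n" * 8
ct = ctr_encrypt(key, nonce, b"counter mode demo")
# Counter mode is its own inverse: applying it again decrypts.
assert ctr_encrypt(key, nonce, ct) == b"counter mode demo"
```

Flipping any bit of the message changes the CBC-MAC unpredictably, which is why the bit-flip tricks that work against linear checksums fail here.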
Table 1 shows the frame body used with WPA2. As with WPA, WPA2 has essentially the same expanded IV. Because WPA2 isn't using TKIP, the name has been changed to the packet number (PN), but it serves the same purpose, starting at 0 and counting up. The PN is used for replay detection, as well as ensuring per-frame keying. The MIC is also eight bytes, but uses CBC-MAC rather than Michael. With new hardware, the last vestige of WEP can be dropped, and the old ICV is removed.
Table 1: 802.11 Frame Body with WPA2
Expanded IV (PN): 8 bytes
Data: n - 8 bytes
MIC: 8 bytes
Because the WPA2 MIC is considered to be cryptographically strong, the designers of WPA2 eliminated the countermeasures that WPA has. It is still true that no frame should come in with an invalid MIC; however, the administrator can be alerted to deal with it in his own time, as there are not any known exploits that can be successfully mounted against WPA2 using an invalid MIC to date.

WPA and TKIP | Security for 802.11

TKIP was designed to run on WEP hardware without slowing the hardware down significantly. To do this, TKIP is a preprocessing step before WEP encryption. RC4 is still the encryption algorithm, and the WEP CRC-32 could not be eliminated. However, TKIP adds features into the selection of the per-frame key, and introduces a new MIC to sit beside the CRC-32 and provide better integrity.
The first change is to expand the IV and key ID fields to eight bytes total (see Table 1). The expanded fields give a six-byte IV, now called the TKIP sequence counter (TSC). The goal is to give plenty of room so that the TSC almost never needs to wrap. Furthermore, if it does get close to wrapping, the client is required to renegotiate a new PTK. This prevents key reuse. Finally, the TSC is used to provide the replay protection missing in WEP. The TSC is required to go up by one for each message. Each side keeps the current TSC that it is sending with, and the one it last received successfully from the other side. If a frame comes in out of order—that is, if it is received with an old TSC—the receiver drops it. An attacker can no longer replay valid but old frames. And, of course, although it can try to invent new frames, even with higher TSCs, the receiver won't update the last good TSC unless the frame is decryptable, and it will not be, because the attacker does not know the key.
Table 1: 802.11 Frame Body with WPA
Expanded IV: 8 bytes
Data: n - 8 bytes
MIC: 8 bytes
ICV: 4 bytes
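The receiver-side TSC rules described above amount to a small piece of bookkeeping, sketched below (real receivers track one counter per priority queue; this collapses it to one for clarity).

```python
class ReplayCheck:
    """Receiver-side TKIP sequence counter (TSC) check: accept a frame
    only if its TSC is strictly newer than the last frame that
    decrypted correctly."""

    def __init__(self) -> None:
        self.last_tsc = -1  # no frames received yet

    def accept(self, tsc: int, decrypts_ok: bool) -> bool:
        if tsc <= self.last_tsc:
            return False      # old or replayed frame: drop it
        if not decrypts_ok:
            return False      # forgery: do NOT advance the counter
        self.last_tsc = tsc   # only good frames move the window
        return True

rx = ReplayCheck()
assert rx.accept(1, True)        # normal frame accepted
assert not rx.accept(1, True)    # replay of TSC 1 is dropped
assert not rx.accept(5, False)   # higher TSC but undecryptable: dropped
assert rx.accept(2, True)        # counter advanced only by good frames
```

The last two lines show the key subtlety from the text: an attacker's invented frame with a high TSC does not poison the counter, so legitimate frames that follow are still accepted.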
The second change is to come up with a better way of producing the per-frame key. The per-frame key for TKIP uses a new algorithm that takes into account not only the now larger IV and the PTK, but the transmitter's address as well. This algorithm uses a cryptographic device known as an S-box to spread out the per-frame key in a more even, random-looking pattern. This helps avoid the problems with weak RC4 per-frame keys, which were specific WEP per-frame keys that caused RC4 to leak information. The result of this algorithm is a brand new per-frame key for each frame, which avoids many of the problems with WEP.
Unfortunately, the underlying encryption is still WEP, using a linear cipher vulnerable to bit flipping. To catch more of the bit flips, a new, cryptographically "better" MIC was needed. WPA uses Michael, a special MIC designed to help with TKIP without requiring excessive computation. It is not considered to be cryptographically secure in the same sense as is WPA2, but is considered to be significantly better than CRC-32, and thus can be used to build secure networks with some caveats. In this case, the designers were aware of this limitation up front, and designed Michael to be good enough to provide that transition from WEP to something more secure down the road (which became AES).
The Michael MIC is not just a function of the data of the packet. It also depends on the sender's address, the receiver's address, and the priority of the packet, as well as the PTK. Michael is designed to avoid the iterative guessing and bit flipping that WEP is vulnerable to. Furthermore, it is based on the entire frame, and not just individual fragments, and so avoids some fragmentation attacks that can be used against WEP. The result of Michael is the eight-byte MIC, which is placed at the end of the frame before it is sent for WEP encryption.
Because the designers know that Michael isn't enough, they also built in a provision for detecting when an attack is under way. Attackers try to modify frames and submit them, and see if the modified frames get mistaken as being authentic. Most of the time, they fail, and these modified frames are not decryptable. With WEP, a nondecryptable frame is silently dropped, with no harm. However, a frame with a bad MIC should never happen in a properly functioning system, and is a sign that the network is under attack. To help prevent these attacks from being successful, WPA adds the concept of countermeasures. If two frames with bad MICs (but good FCSs, so that we know they are not corrupted by radio effects) are received in a 60-second interval, the access point kicks all of the clients off and requires them to renegotiate new keys. This drastic step introduces a painful denial-of-service vulnerability into TKIP, but is necessary to prevent attackers from getting information easily. Of course, having countermeasures doesn't increase the robustness of the underlying algorithms, but kicking off all of the clients ensures that the attacker has to start from scratch with a new PTK.
Overall, TKIP was an acceptable bridge from WEP to WPA2. The designers rightly recognized that TKIP is itself flawed, and is subject to a few vulnerabilities of its own. Besides the obvious denial-of-service attacks, TKIP also still allows for attacks that attempt to guess at certain parts of particular messages and make some minor, but arbitrary, alterations to the packets successfully. Although workarounds exist for these types of attacks, TKIP will never be entirely hassle-free.
Therefore, I recommend that you migrate to WPA2 for every device on the network.

RSNA with 802.11i | Wi-Fi Security Technologies

802.11i addresses the major problems with WEP. The first problems, the inability to establish per-connection keys and the inability to use different encryption algorithms, were fixed by a better protocol.
On top of that, 802.11i introduced two new encryption and integrity algorithms. Wi-Fi Protected Access (WPA), version one, was created to quickly work around the problems of WEP without requiring significant changes to the hardware that devices were built out of. WPA introduced the Temporal Key Integrity Protocol (TKIP), which sits on top of WEP and fixes many of the problems of WEP without requiring new hardware. TKIP was designed intentionally as a transition, or stopgap, protocol, with the hopes that devices would be quickly retired and replaced with those that supported the permanent solution, the second of the two algorithms.
Wi-Fi Protected Access version 2 (WPA2), as that permanent solution, required completely new hardware by not worrying about backwards compatibility. WPA2 uses AES to provide better security and eliminate the problems of using a linear stream cipher. A better integrity algorithm ensures that the packet has not been altered, and eliminates some of the denial-of-service weaknesses that needed to be introduced into TKIP to let it ward off some of the attacks that can't be directly stopped.
A word, first, on nomenclature. For those of you in the know, you might know that WPA has both TKIP and AES modes, 802.11i has slightly different TKIP and AES modes, and that both were harmonized in WPA2. However, practically, there really is no need to know that. For the remainder of this chapter, I will use WPA to mean TKIP as defined in WPA, WPA2 to mean AES as defined in the standard, and 802.11i to mean the framework under which WPA and WPA2 operate. This is actually industry convention—WPA and TKIP go hand in hand, and WPA2 and AES go hand in hand—so product documentation will most likely match with this use of the terms, but when there is doubt, ask your vendors whether they mean TKIP or AES.
802.11i first introduced the idea of a per-connection key negotiation. Each client that comes into the network must first associate. For WEP, which has no per-connection key, the client always used the user-entered WEP key, which is the same for every connection. But 802.11i introduces an additional step, to allow for a fresh set of per-connection keys every time, yet still based on the same master key.
Networks may still use preshared keys. The master key that results is now 256 bits long. For WPA or WPA2, this mode of security is known as Personal, because the preshared key method was intended for home use. Enterprises can also use 802.1X and a RADIUS server to negotiate a unique key per device. This mode of security is known as Enterprise. For example, "WPA2 Enterprise" refers to using WPA2 with 802.1X. Either way, the overall key is called the pairwise master key (PMK). This is the analog to the original WEP key.
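In Personal mode, the mapping from the typed passphrase to the PMK is a fixed recipe: PBKDF2 with HMAC-SHA1, salted with the SSID, run for 4096 iterations. A minimal sketch, using the passphrase/SSID pair published as a test vector in the 802.11i annex:

```python
import hashlib

def wpa_psk_to_pmk(passphrase: str, ssid: str) -> bytes:
    # WPA/WPA2-Personal derives the 256-bit PMK from the passphrase
    # with PBKDF2-HMAC-SHA1, salted with the SSID, 4096 iterations.
    return hashlib.pbkdf2_hmac("sha1", passphrase.encode(),
                               ssid.encode(), 4096, 32)

# Annex test vector: passphrase "password", SSID "IEEE".
pmk = wpa_psk_to_pmk("password", "IEEE")
print(pmk.hex())
```

Salting with the SSID means two networks with the same passphrase but different names still end up with different PMKs, which blunts precomputed-dictionary attacks somewhat, though a weak passphrase remains guessable.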
Now, when the client associates, it has to run a four-message protocol, known as the four-way handshake, to determine what should be used as the key for the connection, known as the PTK (the pairwise temporal key or pairwise transient key). This whole concept of derived keys is known as a key hierarchy.
The four-way handshake is made of unencrypted data frames with the EAPOL Ethernet type (0x888E), which show up as the specific type of Extensible Authentication Protocol over LAN (EAPOL) message known as an EAPOL-Key message. These four messages can be seen by wireless capture programs, and mark the opening of the data link between the client and the access point. Before the four-way handshake, clients and access points cannot exchange any data besides EAPOL frames. After the handshake, both sides can use the agreed-upon key to send data.
Message 1 of the four-way handshake is sent by the access point to the client, and signals the security settings of the access point (as contained in something called the RSN IE, shown in Table 1). The RSN IE contains the selection of encryption and integrity algorithms. The message also contains something called a nonce, which is a random number that the access point constructs (more on this shortly) and which will be mixed in with the PMK to produce the PTK.
Table 1: The security settings in the RSN IE
Element ID: 1 byte
Length: 1 byte
Version: 2 bytes
Group Cipher Suite: 4 bytes
Pairwise Cipher Suite Count: 2 bytes
Pairwise Cipher Suite List: n bytes
AKM Suite Count: 2 bytes
AKM Suite List: m bytes
RSN Capabilities: 2 bytes
PMKID Count: 2 bytes
Message 2 is sent in response, from the client to the access point, and includes the same information, but from the client: a client RSN IE, and a client nonce. Once the client has chosen its nonce, it has enough information to produce the PTK on its end. The PTK is derived from the two nonces, the addresses of the access point and client, and the PMK. At this point, it might seem like the protocol is done: the client knows enough to construct a PTK before sending Message 2, and the access point, once it gets the message, can use the same information to construct its own PTK. If the two devices share the same PMK—the master key—then they will pick the same PTK, and packets will flow. This is true, but the protocol needs to do a little bit more work to handle the case where the PMKs do not agree. To do this, the client "signs" Message 2 with a message integrity code (MIC). The MIC used is a cryptographic hash based on both the contents of the message and the key (PTK). Thus, the access point, once it derives its own PTK from its PMK and the nonces, can check whether the client's sent MIC matches what it would generate using its own PTK. If they match, then the access point knows that Message 2 is not a forgery and the client has the right key. If they do not match, then the access point drops the message.
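The derivation both sides run can be sketched as follows. The PRF construction (chained HMAC-SHA1 over a label and the sorted addresses and nonces) follows the 802.11i key-hierarchy definition; the PMK, addresses, and nonces below are made-up illustration values.

```python
import hashlib
import hmac

def prf(key: bytes, label: bytes, data: bytes, nbytes: int) -> bytes:
    # 802.11i PRF: concatenate HMAC-SHA1 blocks computed over
    # label || 0x00 || data || counter until enough bytes exist.
    out = b""
    counter = 0
    while len(out) < nbytes:
        out += hmac.new(key, label + b"\x00" + data + bytes([counter]),
                        hashlib.sha1).digest()
        counter += 1
    return out[:nbytes]

def derive_ptk(pmk: bytes, aa: bytes, spa: bytes,
               anonce: bytes, snonce: bytes) -> bytes:
    # Both sides sort the two addresses and the two nonces the same
    # way, so they reach the same PTK without ever sending it.
    data = (min(aa, spa) + max(aa, spa) +
            min(anonce, snonce) + max(anonce, snonce))
    return prf(pmk, b"Pairwise key expansion", data, 48)  # 384-bit CCMP PTK

pmk = bytes(32)                                          # illustration PMK
aa, spa = bytes.fromhex("0013a9001122"), bytes.fromhex("0013a9334455")
anonce, snonce = bytes(range(32)), bytes(range(32, 64))  # "random" nonces
ptk = derive_ptk(pmk, aa, spa, anonce, snonce)
# The sorting makes the result the same from either side's viewpoint.
assert ptk == derive_ptk(pmk, spa, aa, snonce, anonce)
```

Note that nothing secret crosses the air: the nonces and addresses are public, and only a device that already holds the PMK can compute the PTK, which is exactly what the MIC on Message 2 verifies.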
If Message 2 is correct, then Message 3 is sent by the access point, and is similar to Message 1 except that it too is now "signed" by the MIC. This lets the client know that the access point has the right key: at Message 2, only the access point could detect an attacker, but not the client. Also, the client can now verify that the access point is using the same security algorithms as the client—a mismatch would only occur if an attacker is injecting false RSN IEs into the network to try to get one side or both to negotiate to a weaker algorithm (say, TKIP) if a stronger algorithm (say, AES) is available. Finally, for WPA2, the client learns of the multicast key, the group temporal key(GTK), this way, as it is encrypted with the PTK and sent as the last part of the message.
Message 4 is a response from the client to the access point, and validates that the client got Message 3 and installed all of the correct keys.
The nonces exist to prove to each side that the other side is not replaying these messages— that is, that the other side is alive and is not an attacker. Imagine that the access point sends its nonce. An attacker trying to replay a previous, valid handshake for the same client could send an old Message 2, but the MIC on that Message 2 can never be correct, because it would always be based on the access point nonce recorded previously and was used in that previous handshake, and not the new one that the access point just created. Thus, the access point always can tell the difference between a client that is really there, and one that is just replayed from the past. The client can use its nonce to do the same thing. Also, if either side has the wrong PMK—which would happen with preshared keys if someone typed one of the keys wrong—the devices can catch it in the four-way handshake and not pretend to have a working connection.
Overall, the four-way handshake lets the two sides come together on a fresh connection key every time. The four-way handshake is the same for WPA and WPA2, except for minor details such as the choice of algorithm.
By the way, keep in mind that the four-way handshake is only designed to provide a new PTK every time based on the same PMK, to provide a fresh PTK and eliminate the problem of old or stale keys that WEP has. The four-way handshake is not designed to hide the PTK from attackers who have the PMK. This is an important point: if an attacker happens to know the PMK already—such as a preshared key that he or she stole or remembered—then every PTK ever generated from that PMK, in the past and in the future, can be broken with minimal effort. This is known as a lack of forward secrecy and is a major security flaw in preshared key networks.
In other words, you must keep the PMK secret. Do not share preshared keys, ever—even if you have stopped using that preshared key and moved to a new one long ago. If an attacker had been recording your past conversations, when the old preshared key was in use, and someone leaks the preshared key to this attacker, your old conversations are in jeopardy.

WEP (Wired Equivalent Privacy) | Wi-Fi Security Technologies


WEP (Wired Equivalent Privacy) was the first attempt to secure 802.11. Unfortunately, the privacy it provided was neither equivalent to wired nor very good. Its very design does not protect against replays, meaning that an attacker can record prior valid traffic and replay it later, getting the network to repeat actions (such as charging credit cards) without detecting it. Furthermore, WEP uses RC4 for encryption, an algorithm that was not designed to be used in the way WEP uses it, leading to ways of reverse-engineering and cracking the encryption without the key. Finally, WEP uses a very poor message integrity code.
All of that said, WEP is a good place to look to learn the mechanics of security in 802.11, as the later and better security additions replaced the broken pieces but did not destroy the framework.
It is the author's recommendation not to use WEP in existing or new networks, under any circumstances, because of the known flaws. Please consider the study of WEP to be an academic exercise at this point, and do not allow vendors to talk you into using it.
1) Keying
WEP starts off with an encryption key, or a piece of knowledge that is known by the access point and the client but is sufficiently complicated that outsiders—attackers, that is— shouldn't be able to guess it.
There may be one, two, or more WEP keys. These keys are each either 40 bits (WEP-40) or 104 bits (WEP-104) long, and are usually created from text passwords, although they can be entered directly as hexadecimal numbers. Manually entered keys are called preshared keys (PSK). WEP provides very little signaling to designate that encryption is in use, and there is no way to denote whether the short or long keys are being used. If any security at all is used in the network, the "Privacy" flag in the network's beacons is set. Clients that want to use WEP simply associate to the network and start sending encrypted traffic. If the keys match, the network makes forward progress and the user is happy. If the keys do not match, the user cannot do much, but otherwise has no idea what the error was. As you can see, this is not an ideal situation, and it is avoided in the modern, post-WEP protocols.
There are some more complicated possibilities, which are not worth going over, except to note that the origin of the confusing 802.11 term "authentication" for the first phase of a client's connection to the network came from an old method of using WEP to verify the key before association. This security method is completely ignored by post-WEP protocols, which use a different concept to ensure that clients have the right key. Therefore, the two Authentication frames are now considered vestigial, and carry no particularly useful information in them.
2) Encryption
The encryption key is not used directly to encrypt each packet. Instead, it is concatenated with a per-packet number, called the initialization vector (IV), to create the key that RC4 uses to encrypt the data. The initialization vector can be any number. Transmitters would start at zero, and add one for each frame sent, until it hit the end of the three-byte sequence, where it would start over at zero again. Why have a per-packet key, when the original key was supposedly secret? To answer this, let's look at the encryption algorithm for WEP, which is based on RC4.
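As a sketch, the per-frame seed handed to RC4 is just the 3-byte IV prepended to the WEP key. The key value and byte ordering here are illustrative (descriptions of WEP differ on the ordering details):

```python
WEP40_KEY = bytes.fromhex("0123456789")     # hypothetical 40-bit (WEP-40) key

def per_frame_seed(iv: int, wep_key: bytes) -> bytes:
    # The 3-byte IV is sent in the clear and prepended to the secret key.
    return iv.to_bytes(3, "big") + wep_key

iv = (1 << 24) - 1                          # last value of the 3-byte counter
seed = per_frame_seed(iv, WEP40_KEY)        # 8-byte RC4 seed for this frame
iv = (iv + 1) % (1 << 24)                   # the counter then wraps to zero
assert iv == 0 and len(seed) == 8
```

The wrap after 2^24 frames is exactly why the same seed, and thus the same keystream, eventually repeats on a busy network.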
RC4 is a stream cipher, meaning that it is designed to protect a large quantity of flowing, uninterrupted data, with minimal overhead. It is used, for example, to protect secure web traffic (HTTPS), because web traffic goes across in a stream of HTML. RC4 is really a pseudorandom number generator, with cryptographic properties to ensure that the stream of bits that comes out is hard to reverse-engineer. When given a key, RC4 generates an infinite number of bits, all appearing to be random. These bits are then matched up, bit-by-bit, with the incoming plaintext, or not-yet-encrypted, data. Each bit of the plaintext is added to each matching bit of the RC4 stream, without carry. This is also known as taking the exclusive or of the two bits, and the logic goes that the resulting "sum" bit is 1 if exactly one of the incoming bits is 1, and 0 otherwise. The mathematical operation is represented by the ⊕ symbol, and so the four possibilities for the exclusive or are as follows: 0 ⊕ 0 = 0, 0 ⊕ 1 = 1, 1 ⊕ 0 = 1, and 1 ⊕ 1 = 0. When applied to the plaintext and the RC4 stream together, the resulting stream looks as random as the original RC4 stream, but has the real data in it. Only a receiver with the right key can recreate the RC4 stream, take the same bitwise exclusive or with the encrypted data, and recover the original data. (The exclusive or operation has the property that adding any other number to a number twice gives the original number back: n ⊕ d ⊕ d = n. Therefore, applying the exclusive or of the RC4 stream twice to the original data, once by the encryption algorithm and once by the decryption algorithm, gets the plaintext data back.)
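RC4 is compact enough to sketch in full. The implementation below is for illustration only (RC4 should not be used in new designs); it shows the key-scheduling and output-generation loops, and that applying the keystream twice recovers the plaintext:

```python
def rc4_stream(key: bytes, n: int) -> bytes:
    # Key-scheduling algorithm (KSA): permute S based on the key.
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    # Pseudorandom generation algorithm (PRGA): emit n keystream bytes.
    out, i, j = [], 0, 0
    for _ in range(n):
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(S[(S[i] + S[j]) % 256])
    return bytes(out)

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

plaintext = b"hello, world"
key = b"\x01\x02\x03\x04\x05"
ciphertext = xor(plaintext, rc4_stream(key, len(plaintext)))
# n XOR d XOR d = n: the same keystream decrypts what it encrypted.
assert xor(ciphertext, rc4_stream(key, len(plaintext))) == plaintext
```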
So far, so good. However, an attacker can use the properties of the exclusive or to recover the plaintext in certain cases, as well. If two frames come using the same per-frame key— meaning the same IV and WEP key—an eavesdropper can just add the two encrypted frames together. Both frames have the same per-frame key, so they both have the same RC4 stream, causing the exclusive or of the two encrypted frames to cancel out the identical RC4 stream and leave just the exclusive or of the two original, plaintext frames. The exclusive or of two plaintext frames isn't terribly different from having the original plaintext: the attacker can usually guess at the contents of one of the frames and make quick work discovering the contents of the other.
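The keystream-reuse attack takes one line of arithmetic. A sketch, using random bytes as a stand-in for a repeated RC4 stream:

```python
import os

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

keystream = os.urandom(16)        # same IV + same WEP key => same RC4 stream
p1 = b"credit card 4111"
p2 = b"meeting at noon."
c1, c2 = xor(p1, keystream), xor(p2, keystream)

# The identical keystream cancels out, leaving the XOR of the two plaintexts:
assert xor(c1, c2) == xor(p1, p2)
# An eavesdropper who guesses p1 then recovers p2 outright:
assert xor(xor(c1, c2), p1) == p2
```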
This isn't a flaw with RC4 itself so much as it is with using any exclusive-or cipher (a type of linear cipher, because ⊕ is really addition modulo 2), as such ciphers are vulnerable to bit-by-bit attacks unless other algorithms are brought in as well.
Okay, so that explains the per-frame keying and the IV, and why this is not a good solution for security. In summary: replays are allowed, the IV wraps, and key reuse reveals the original plaintext. Finally, the per-frame key doesn't include any information about the sender or receiver. Thus, an attacker can take the encrypted content from one device and inject it as if it were from another. With that, three of the problems of WEP are exposed. But the per-frame keying concept in general is sound.
3) Integrity
To attempt to provide integrity, WEP also introduces the integrity check value (ICV). This is a checksum of the decrypted data—CRC-32, specifically—that is appended to the end of the data and encrypted with it. The idea is that an attacker might want to capture an encrypted frame, make possibly trivial modifications to it (flipping bits or setting specific bits to 0 or 1), and then send it on. Why would the attacker want to do this? Most active attacks, or those that involve an attacker sending its own frames, require some sort of iterative process. The attacker takes a legitimate frame that someone else sends, makes a slight modification, and sees if that too produces a valid frame. It discovers if the frame was valid by looking for some sort of feedback—an encrypted frame in the other direction—from the receiver. As mentioned earlier, RC4 is especially vulnerable to bit flipping, because a flipped bit in the encrypted data results in the flipping of the same bit in the decrypted data. The ICV is charged with detecting when the encrypted data has been modified, because the checksum should hopefully be different for a modified frame, and the frame could be dropped for not matching its ICV.
As mentioned before, however, WEP did not get this right, either. CRC-32 is not cryptographically secure. The effect of a bit flip on the data's CRC is known. An attacker can flip the appropriate bits in the encrypted data, and know which bits also need to be flipped in the CRC-32 ICV to arrive at another, valid CRC-32, without knowing what the original CRC-32 was. Therefore, attackers can make modifications pretty much at will and get away with it, without needing the key. But again, the concept of a per-frame message integrity code in general is sound.
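Because CRC-32 is linear over XOR, the required ICV adjustment can be computed from the bit flips alone. The sketch below (using random bytes as a stand-in for the RC4 keystream, and assuming a little-endian 4-byte ICV) forges a modified frame that still passes the integrity check, with no knowledge of the key:

```python
import os
import struct
import zlib

def protect(plaintext: bytes, keystream: bytes) -> bytes:
    # WEP-style: append the CRC-32 ICV, then XOR-encrypt data plus ICV.
    body = plaintext + struct.pack("<I", zlib.crc32(plaintext))
    return bytes(b ^ k for b, k in zip(body, keystream))

def check(ciphertext: bytes, keystream: bytes):
    body = bytes(b ^ k for b, k in zip(ciphertext, keystream))
    data, icv = body[:-4], struct.unpack("<I", body[-4:])[0]
    return data if zlib.crc32(data) == icv else None

plaintext = b"PAY $10 TO ALICE"
keystream = os.urandom(len(plaintext) + 4)     # stand-in for the RC4 stream
frame = protect(plaintext, keystream)

# The attacker chooses the bit flips (here assuming it knows the original text):
delta = bytes(a ^ b for a, b in zip(b"PAY $10 TO ALICE", b"PAY $99 TO MALLA"))
# CRC-32 linearity: crc(p ^ d) = crc(p) ^ crc(d) ^ crc(all-zeros)
icv_delta = zlib.crc32(delta) ^ zlib.crc32(bytes(len(delta)))
forged = bytes(c ^ d for c, d in
               zip(frame, delta + struct.pack("<I", icv_delta)))

assert check(forged, keystream) == b"PAY $99 TO MALLA"   # still passes the ICV
```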
4) Overall
WEP alters the data packet, then, by appending the ICV, then encrypting the data field, then prepending the unencrypted IV. Thus, the frame body is replaced with what is in Table 1.
Table 1: 802.11 Frame Body with WEP

IV        Key ID    Data           ICV
3 bytes   1 byte    n − 8 bytes    4 bytes
The issues described are not unique to RC4, and really apply to how WEP would use any linear cipher. There are also some problems with RC4 itself that come out with the way RC4 is used in WEP, which do not come out in RC4's other applications. All in all, WEP used some of the right concepts, but a perfect storm of execution errors undermined WEP's effectiveness. Researchers and attackers started publishing what became an avalanche of writings on the vulnerability of WEP. Wi-Fi was at risk of becoming known as hopelessly broken, and drastic action was needed. Thus, the industry came together and designed 802.11i.

Security for 802.11

Security is a broad subject, and there is an entire chapter dedicated to the unique challenges with security for voice mobility later. But any component of voice mobility over Wi-Fi will require some use of 802.11's built-in encryption. Keep in mind that securing the wireless link is not only critical, but, in many networks, may be the only encryption in place to prevent eavesdroppers from listening in on sensitive voice calls.
802.11 security has both a rich and somewhat checkered past. Because of the initial application of 802.11 to the home, and some critical mistakes by some of the original designers, 802.11 started out with inadequate protection for traffic. But thankfully, all Wi-Fi-certified devices today are required to support strong security mechanisms.
Nevertheless, administrators today do still need to keep in mind some of the older, less secure technologies—often because the mobile handset might not correctly support the latest security, and it may fall to you to figure out how to make an old handset work without compromising the security of the rest of the network.
A secure wireless network provides at least the following (borrowed from Chapter 8):
  • Confidentiality: No wireless device other than the intended recipient can decrypt the message.
  • Outsider Rejection: No wireless device other than a trusted sender can send a message correctly encrypted.
  • Authenticity and Forgery Protection: The recipient can prove who the original composer of the message is.
  • Integrity: The message cannot be modified by a third party without the message being detected as having been tampered with.
  • Replay Protection: An older but valid message cannot be resent by an attacker later, thus preventing attackers from replaying old transactions.
Some of these properties are contained in how the encryption keys get established or sent from device to device, and the rest are contained in how the actual encryption or decryption operates.

Collisions, Backoffs, and Retries

Multiple radios that are in range of each other and have data to transmit need to take turns. However, the particular flavor of 802.11 that is used in Wi-Fi devices does not provide for any collaboration between devices to ensure that two devices do take turns. Rather, a probabilistic scheme is used, to allow for radios to know nothing about each other at the most primitive level and yet be able to transmit.
This process is known as backing off, and is the basis of Carrier Sense Multiple Access with Collision Avoidance, or CSMA-CA. The process is somewhat involved, and is the subject of quite a bit of research, but the fundamentals are simple. Each radio that has something to send waits until the channel is free. If radios transmitted immediately once the channel became free, then any two radios with data queued would transmit simultaneously, causing a collision, and a receiver would pick up only interference. Carrier sense before transmission helps a radio avoid transmitting while another radio is already partway through its own transmission. But if two radios decide to transmit at roughly the same time (within a few microseconds), it is impossible for the two to detect each other.
To partially avoid collisions, each radio plays a particular well-scripted game. Each picks a random nonnegative integer less than a value known as the contention window (CW), a small power of 2. This value tells the radio the number of slots, or fixed microsecond delays, that it must wait before it can transmit. The goal of the random selection is that, hopefully, each transmitter will pick a different value, and thus avoid collisions. When a radio is in the process of backing off, and another radio begins to transmit during a slot, the backing-off radio will stop counting slots, wait until the channel becomes free again, and then resume where it left off. That lets each radio take turns (see Figure 1).

Figure 1: The backoff procedure for two radios
However, nothing stops two radios from picking the same value, and thus colliding. When a collision occurs, the two transmitters find out not by being able to detect a collision as Ethernet does, but by not receiving the appropriate acknowledgments. This causes the unsuccessful transmitters to double their contention window, thus reducing the likelihood that the two colliders will pick the same backoff again. Backoffs do not grow unbounded: there is a maximum contention window. Furthermore, when a transmitter with an inflated contention window does successfully transmit a frame, or gives up trying to retransmit a frame, it resets its contention window back to the initial, minimum value. The key is to remember that the backoff mechanism applies to the retransmissions only for any one given frame. Once that frame either succeeds or exceeds its retransmission limit, the backoff state is forgotten and refreshed with the most aggressive minimums.
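A toy simulation makes the scheme concrete. This sketch ignores slot timing and the freeze-and-resume rule, modeling only the random draw, the collision on equal minimum slots, and the doubling of each collider's contention window (the names and defaults are illustrative):

```python
import random

def rounds_until_success(num_stations: int, cw_min: int = 16,
                         cw_max: int = 1024) -> int:
    """Count contention rounds until exactly one station wins a slot."""
    cw = [cw_min] * num_stations
    rounds = 0
    while True:
        rounds += 1
        slots = [random.randrange(cw[i]) for i in range(num_stations)]
        winners = [i for i, s in enumerate(slots) if s == min(slots)]
        if len(winners) == 1:
            return rounds                  # a lone minimum transmits cleanly
        for i in winners:                  # colliders double their window
            cw[i] = min(cw[i] * 2, cw_max)

random.seed(7)
trials = [rounds_until_success(5, cw_min=4) for _ in range(1000)]
print(sum(trials) / len(trials))           # average rounds with a tiny CW
```

Running this with a small starting window and several stations shows collisions in most first rounds, which the exponential doubling then resolves.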
The slotted backoff scheme had its origin in the Hawaiian educational research network scheme known as Slotted ALOHA, an early network that addressed the problem of figuring out which of multiple devices should talk, without using coordination such as that which token-based networks use. This scheme became the foundation of all contention-based network schemes, including Ethernet and Wi-Fi.
However, the way contention is implemented in 802.11 has a number of negative consequences. The denser and busier the network, the more likely it is that two radios will collide. For example, with a contention window of four, if five stations each have data, a collision is assured. The idea behind doubling the contention window is to grow the window exponentially, reducing the chance of collisions accordingly by making the window large enough to handle the density. This would allow the backoffs to adapt to the density and busyness of the network. However, once a radio either succeeds or fails miserably, it resets its contention window, forgetting all of that adaptation and increasing the chance of collisions dramatically.
Furthermore, there is a direct interplay between rate adaptation (where radios drop their data rates when there is loss, assuming that the loss is because the receiver is out of range and the transmitter's choice of data rate is too aggressive) and contention avoidance. Normally, most devices do not want to transmit data at the same time. However, the busier the channel is, the more likely it is that devices that get data to send at different times are forced to wait for the same opening, increasing the contention. As contention goes up, collisions go up, and rate adaptation falsely assumes that the loss is because of range issues and drops the data rate. Dropping the data rate increases the amount of time each frame stays on air (a 1Mbps data frame takes 300 times the amount of time a 300Mbps data frame of the same number of bytes takes), thus increasing the busyness of the channel. This becomes a vicious cycle, in a process known as congestion collapse, which causes the network to spend an inordinate amount of time retransmitting old data and very little time transmitting new data. This is a major issue for voice mobility networks, because the rate of voice traffic does not change, no matter what the air is doing, and so a network that was provisioned with plenty of room left over can become extremely congested after crossing a very short tipping point.

Hidden Nodes | Wi-Fi's Approach to Wireless

Carrier sense lets the transmitter know if the channel near itself is clear. However, for one transmitter's wireless signal to be successfully received, the channel around the receiver must be clear—the transmitter's channel doesn't matter. The receiver's channel must be clear to prevent interference from multiple signals at the same time. However, the transmitter can successfully transmit with another signal in the air, because the two signals will pass through each other without harming the transmitter's signal.
So why does 802.11 require the transmitter to listen before sending? There is no way for the receiver to inform the transmitter of its channel conditions without itself transmitting. In networks that are physically very small (well under the range of Wi-Fi transmissions), the transmitter's own carrier sensing can be a good proxy for the receiver's state. Clearly, if the transmitter and receiver are immediately next to each other, they see pretty much the same channel. But as they separate, they experience different channel conditions. Far enough away, the transmitter has no ability to sense whether a third device is transmitting to or near the receiver at the same time. This is called the hidden node problem.
Figure 1 shows two transmitters and a receiver in between the two. The receiver can hear each transmitter equally, and if both transmitters are sending at the same time, the receiver will not be able to make out the two different signals and will receive interference only. Each transmitter will perform carrier sense to ensure that the channel around it is clear, but it won't matter, because the other transmitter is out of range. Hidden node problems generally appear this way, where the interfering transmitters are on the other side of the receiver, away from the transmitter in question.

Figure 1: Hidden Nodes: The receiver can hear both transmitters equally, but neither transmitter can hear the other
802.11 uses RTS/CTS as a partial solution. As mentioned when discussing the 802.11 protocol itself, a transmitter will first send an RTS, requesting from the receiver a clear channel for the entire length of the transmission. By itself, the RTS does not do anything for the transmitter or receiver, because the data frame that would otherwise have been sent has the same effect of silencing all other devices around the sender. However, what matters is what the receiver does. The CTS it sends will silence the devices on the far side from the sender, using the duration value and virtual carrier sense to cause those devices to not send, even though they cannot detect the following real data frame (see Figure 2).

Figure 2: RTS/CTS for Hidden Nodes: The CTS silences the interfering devices
This is only a partial solution, as the RTSs themselves can get lost because of hidden nodes. The advantage of the RTS, however, is that it is usually somewhat shorter than the data frame or frames that follow. For the RTS/CTS protocol to be the most effective against hidden nodes, the RTS and CTS must go out at the lowest data rate. However, many devices send RTSs at far higher rates. This is done mostly to take advantage of the RTS for determining whether the receiver is in range, not to avoid hidden nodes.
Furthermore, the RTS/CTS protocol has a very high overhead, as many data packets could be sent in the time it takes for an RTS/CTS transmission to complete.
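Some rough airtime arithmetic shows the cost. Ignoring PHY preambles and interframe spacing (which only make the overhead worse), a 20-byte RTS plus a 14-byte CTS sent at the 1 Mb/s lowest rate take longer on air than a full 1500-byte data frame at 54 Mb/s:

```python
def airtime_us(length_bytes: int, rate_mbps: float) -> float:
    # Payload serialization time only; preambles and SIFS gaps are ignored.
    return length_bytes * 8 / rate_mbps

rts = airtime_us(20, 1.0)        # 160 microseconds
cts = airtime_us(14, 1.0)        # 112 microseconds
data = airtime_us(1500, 54.0)    # about 222 microseconds
print(rts + cts, data)           # the handshake alone outlasts the data frame
```

This is why sending the RTS at a high rate is tempting, even though doing so defeats much of its hidden-node protection.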
