Chapter 7 Cryptography and the PKI

THE COMPTIA SECURITY+ EXAM OBJECTIVES COVERED IN THIS CHAPTER INCLUDE:

Domain 1.0: General Security Concepts
1.4. Explain the importance of using appropriate cryptographic solutions.
Public key infrastructure (PKI) (Public key, Private key, Key escrow)
Encryption (Level (Full-disk, Partition, File, Volume, Database, Record), Transport/communication, Asymmetric, Symmetric, Key exchange, Algorithms, Key length)
Obfuscation (Steganography)
Hashing
Salting
Digital signatures
Key stretching
Blockchain
Open public ledger
Certificates (Certificate authorities, Certificate revocation lists (CRLs), Online Certificate Status Protocol (OCSP), Self-signed, Third-party, Root of trust, Certificate signing request (CSR) generation, Wildcard)

Domain 2.0: Threats, Vulnerabilities, and Mitigations
2.3. Explain various types of vulnerabilities.
Cryptographic
2.4. Given a scenario, analyze indicators of malicious activity.
Cryptographic attacks (Downgrade, Collision, Birthday)

Cryptography is the practice of encoding information in a manner such that it cannot be decoded without access to the required decryption key. Cryptography consists of two main operations: encryption, which transforms plain-text information into ciphertext using an encryption key, and decryption, which transforms ciphertext back into plain text using a decryption key.

Cryptography has several important goals. First among these is the goal of confidentiality, which corresponds to one of the three legs of the CIA triad. Organizations use encryption to protect sensitive information from prying eyes. The second goal, integrity, also corresponds to one of the three elements of the CIA triad. Organizations use cryptography to ensure that data is not maliciously or unintentionally altered. When we get to the third goal, authentication, the goals of cryptography begin to differ from the CIA triad. Although authentication begins with the letter A, remember that the A in the CIA triad is “availability.” Authentication refers to uses of encryption to validate the identity of individuals. The fourth goal, nonrepudiation, ensures that individuals can prove to a third party that a message came from its purported sender. Different cryptographic systems are capable of achieving different goals, as you will learn in this chapter.

Many people, even many textbooks, tend to use the terms cryptography and cryptology interchangeably. You are not likely to be tested on the historical overview of cryptology. However, modern cryptography is considered a more challenging part of the exam, and real-world security practitioners use cryptography on a regular basis to keep data confidential. So, it is important that you understand its background.

An Overview of Cryptography
Cryptography is a field almost as old as humankind. The first recorded cryptographic efforts occurred 4,000 years ago. These early efforts included translating messages from one language into another or substituting characters. Since that time, cryptography has grown to include a plethora of possibilities. These early forays into cryptography focused exclusively on achieving the goal of confidentiality. Classic methods used relatively simple techniques that a human being could usually break in a reasonable amount of time. The obfuscation used in modern cryptography is much more sophisticated and can be unbreakable within a practical period of time.

Historical Cryptography
Historical methods of cryptography predate the modern computer age.
These methods did not depend on mathematics, as many modern methods do, but rather on some technique for scrambling the text. A cipher is a method used to scramble or obfuscate characters to hide their value. Ciphering is the process of using a cipher to do that type of scrambling to a message. The two primary types of nonmathematical cryptography, or ciphering methods, are substitution and transposition. We will discuss both of these methods in this section. Substitution Ciphers A substitution cipher is a type of coding or ciphering system that changes one character or symbol into another. Character substitution can be a relatively easy method of encrypting information. One of the oldest known substitution ciphers is called the Caesar cipher. It was purportedly used by Julius Caesar. The system involves simply shifting all letters a certain number of spaces in the alphabet. Supposedly, Julius Caesar used a shift of three to the right. This simply means that you turn the A's of a message into D's, the B's into E's, and so on. When you hit the end of the alphabet, you simply “wrap around” so that X's become A's, Y's become B's, and Z's become C's. Caesar was working in Latin, of course, but the same thing can be done with any language, including English. Here is an example: [I WILL PASS THE EXAM] If you shift each letter three to the right, you get the following: [L ZLOO SDVV WKH HADP] Decrypting a message encrypted with the Caesar cipher follows the reverse process. Instead of shifting each letter three places to the right, decryption shifts each letter of the ciphertext three places to the left to restore the original plain-text character. ROT13 ROT13, or “rotate 13,” is another simple substitution cipher. The ROT13 cipher works the same way as the Caesar cipher but rotates every letter 13 places in the alphabet. Thus an A becomes an N, a B becomes an O, and so forth. Because the alphabet has 26 letters, you can use the same rotation of 13 letters to decrypt the message. The Caesar cipher and ROT13 are very simple examples of substitution ciphers. They are far too simplistic to use today, as any cryptologist could break these ciphers, or any similar substitution, in a matter of seconds. However, the substitution operation forms the basis of many modern encryption algorithms. They just perform far more sophisticated substitutions and carry out those operations many times to add complexity and make the cipher harder to crack. Polyalphabetic Substitution One of the problems with substitution ciphers is that they did not change the underlying letter and word frequency of the text. One way to combat this was to have multiple substitution alphabets for the same message. Ciphers using this approach are known as polyalphabetic substitution ciphers. For example, you might shift the first letter by three to the right, the second letter by two to the right, and the third letter by one to the left; then repeat this formula with the next three letters. The most famous example of a polyalphabetic substitution from historical times was the Vigenère cipher. It used a keyword to look up the cipher text in a table, shown in Figure 7.1. The user would take the first letter in the text that they wanted to encrypt, go to the Vigenère table, and match that with the letter from the keyword in order to find the ciphertext letter. This would be repeated until the entire message was encrypted. Each letter in the keyword generated a different substitution alphabet. 
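Before working through the Vigenère example, here is a minimal Python sketch (an illustration, not the book's code) of the shift ciphers described above; a shift of 3 reproduces the Caesar cipher and a shift of 13 reproduces ROT13:

```python
def rotate(text: str, shift: int) -> str:
    """Shift each letter by `shift` positions, wrapping around the 26-letter alphabet."""
    result = []
    for ch in text.upper():
        if ch.isalpha():
            result.append(chr((ord(ch) - ord("A") + shift) % 26 + ord("A")))
        else:
            result.append(ch)  # leave spaces and punctuation untouched
    return "".join(result)

print(rotate("I WILL PASS THE EXAM", 3))    # Caesar cipher: L ZLOO SDVV WKH HADP
print(rotate("L ZLOO SDVV WKH HADP", -3))   # shifting back by 3 decrypts
print(rotate("HELLO", 13))                  # ROT13: URYYB
print(rotate("URYYB", 13))                  # applying ROT13 again restores HELLO
```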
FIGURE 7.1 Vigenère cipher table

For example, imagine that you wanted to use this cipher to encrypt the phrase “SECRET MESSAGE” using the keyword “APPLE.” You would begin by lining up the characters of the message with the characters of the keyword, repeating the keyword as many times as necessary:

S E C R E T M E S S A G E
A P P L E A P P L E A P P

Then you create the ciphertext by looking up each pair of plain-text and key characters in Figure 7.1's Vigenère table. The first letter of the plain text is “S” and the first letter of the key is “A,” so you go to the column for S in the table and then look at the row for A and find that the ciphertext value is “S.” Repeating this process for the second character, you look up the intersection of “E” and “P” in the table to get the ciphertext character “T.” As you work your way through this process, you get this encrypted message:

S T R C I T B T D W A V T

To decrypt the message, you reverse the process, finding the ciphertext character in the row for the key letter and then looking at the top of that column to find the plain text. For example, the first letter brings us to the row for “A,” where we find the ciphertext character “S” is in the “S” column. The second letter brings us to the row for “P,” where we find the ciphertext character “T” in the “E” column.

Transposition Ciphers
A transposition cipher involves transposing or scrambling the letters in a certain manner. Typically, a message is broken into blocks of equal size, and each block is then scrambled. In the simple example shown in Figure 7.2, the characters are transposed by changing the ordering of characters within each group. In this case, the letters are rotated three places in the message. You could change the way Block 1 is transposed from Block 2 and make it a little more difficult, but it would still be relatively easy to decrypt.

FIGURE 7.2 A simple transposition cipher in action

Columnar transposition is a classic example of a transposition cipher. With this cipher, you choose the number of rows in advance, which will be your encryption key. You then write your message by placing successive characters in the next row until you get to the bottom of a column. For example, if you wanted to encode the message M E E T M E I N T H E S T O R E using a key of 4, you would write the message in four rows, like this:

M M T T
E E H O
E I E R
T N S E

Then, to get the ciphertext, you read across the rows instead of down the columns, giving you:

M M T T E E H O E I E R T N S E

To decrypt this message, you must know that the message was encrypted using four rows, and then you use that information to re-create the matrix, writing the ciphertext characters across the rows. You then decrypt the message by reading down the columns instead of across the rows.

The Enigma Machine
No discussion of the history of cryptography would be complete without discussing the Enigma machine. The Enigma machine was created by the German government during World War II to provide secure communications between military and political units. The machine, shown in Figure 7.3, looked like a typewriter with some extra features. The operator was responsible for configuring the machine to use the code of the day by setting the rotary dials at the top of the machine and configuring the wires on the front of the machine. The inner workings of the machine implemented a polyalphabetic substitution, changing the substitution for each character of the message.
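Returning for a moment to the columnar transposition cipher, the short Python sketch below (again an illustration rather than the book's code) encrypts and decrypts using the row count as the key; with the message MEETMEINTHESTORE and a key of 4 it reproduces the ciphertext shown above:

```python
import math

def columnar_encrypt(message: str, rows: int) -> str:
    """Write the message down the columns of a grid with `rows` rows, then read across the rows."""
    message = message.replace(" ", "")
    cols = math.ceil(len(message) / rows)
    message = message.ljust(rows * cols, "X")  # pad the final column if needed
    grid = [[message[c * rows + r] for c in range(cols)] for r in range(rows)]
    return "".join("".join(row) for row in grid)

def columnar_decrypt(ciphertext: str, rows: int) -> str:
    """Rebuild the grid by writing the ciphertext across the rows, then read down the columns."""
    cols = len(ciphertext) // rows
    grid = [ciphertext[r * cols:(r + 1) * cols] for r in range(rows)]
    return "".join(grid[r][c] for c in range(cols) for r in range(rows))

print(columnar_encrypt("MEET ME IN THE STORE", 4))  # MMTTEEHOEIERTNSE
print(columnar_decrypt("MMTTEEHOEIERTNSE", 4))      # MEETMEINTHESTORE
```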
Once the machine was properly configured for the day, using it was straightforward. The sending operator pressed the key on the keyboard corresponding to a letter of the plain- text message. The corresponding ciphertext letter then lit up. The receiving operator followed the same process to convert back to plain text. The Enigma machine vexed Allied intelligence officers, who devoted significant time and energy to a project called Ultra designed to defeat the machine. The effort to defeat Enigma was centered at Bletchley Park in the United Kingdom and was led by pioneering computer scientist Alan Turing. The efforts led to great success in deciphering German communication, and those efforts were praised by British Prime Minister Winston Churchill himself, who reportedly told King George VI that “it is thanks to [Ultra], put into use on all the fronts, that we won the war!” FIGURE 7.3 Enigma machine from the National Security Agency's National Cryptologic Museum Source: USA.gov Steganography Steganography is the art of using cryptographic techniques to embed secret messages within another file. Steganographic algorithms work by making alterations to the least significant bits of the many bits that make up image files. The changes are so minor that there is no appreciable effect on the viewed image. This technique allows communicating parties to hide messages in plain sight—for example, they might embed a secret message within an illustration on an otherwise innocent web page. Exam Note Remember that steganography is the practice of using cryptographic techniques to embed or conceal secret messages within another file. It can be used to hide images, text, audio, video, and many other forms of digital content. Steganographers often embed their secret messages within images, video files, or audio files because these files are often so large that the secret message would easily be missed by even the most observant inspector. Steganography techniques are often used for illegal or questionable activities, such as espionage and child pornography. Steganography can also be used for legitimate purposes, however. Adding digital watermarks to documents to protect intellectual property is accomplished by means of steganography. The hidden information is known only to the file's creator. If someone later creates an unauthorized copy of the content, the watermark can be used to detect the copy and (if uniquely watermarked files are provided to each original recipient) trace the offending copy back to the source. Steganography is an extremely simple technology to use, with free tools openly available on the Internet. Figure 7.4 shows the entire interface of one such tool, OpenStego. It simply requires that you specify a text file containing your secret message and an image file that you wish to use to hide the message. Figure 7.5 shows an example of a picture with an embedded secret message; the message is impossible to detect with the human eye. FIGURE 7.4 OpenStego steganography tool Goals of Cryptography Security practitioners use cryptographic systems to meet four fundamental goals: confidentiality, integrity, authentication, and non-repudiation. Achieving each of these goals requires the satisfaction of a number of design requirements, and not all cryptosystems are intended to achieve all four goals. In the following sections, we'll examine each goal in detail and give a brief description of the technical requirements necessary to achieve it. 
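As a concrete illustration of the least-significant-bit technique described in the steganography discussion above, the following sketch hides an ASCII message in the low-order bit of each pixel's red channel. It assumes the Pillow imaging library is installed; real tools such as OpenStego use more sophisticated embedding schemes:

```python
from PIL import Image  # assumes Pillow is installed: pip install Pillow

def embed_message(cover_path: str, message: str, out_path: str) -> None:
    """Hide an ASCII message in the least significant bit of each pixel's red channel."""
    img = Image.open(cover_path).convert("RGB")
    pixels = list(img.getdata())
    # Terminate the message with a NUL byte so the extractor knows where to stop.
    bits = "".join(f"{byte:08b}" for byte in message.encode("ascii") + b"\x00")
    if len(bits) > len(pixels):
        raise ValueError("cover image is too small for this message")
    stego = []
    for i, (r, g, b) in enumerate(pixels):
        if i < len(bits):
            r = (r & ~1) | int(bits[i])  # overwrite only the lowest red bit
        stego.append((r, g, b))
    img.putdata(stego)
    img.save(out_path, "PNG")  # a lossless format keeps the hidden bits intact

def extract_message(stego_path: str) -> str:
    """Read the low-order red bits back out and stop at the NUL terminator."""
    img = Image.open(stego_path).convert("RGB")
    bits = "".join(str(r & 1) for r, g, b in img.getdata())
    chars = [chr(int(bits[i:i + 8], 2)) for i in range(0, len(bits), 8)]
    return "".join(chars).split("\x00", 1)[0]
```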
FIGURE 7.5 Image with embedded message Source: vadiml/Adobe Stock Photos Confidentiality Confidentiality ensures that data remains private in three different situations: when it is at rest, when it is in transit, and when it is in use. Confidentiality is perhaps the most widely cited goal of cryptosystems—the preservation of secrecy for stored information or for communications between individuals and groups. Two main types of cryptosystems enforce confidentiality: Symmetric cryptosystems use a shared secret key available to all users of the cryptosystem. Asymmetric cryptosystems use individual combinations of public and private keys for each user of the system. Both of these concepts are explored in the section “Modern Cryptography” later in this chapter. Exam Tip The concept of protecting data at rest, data in transit, and data in use is often covered on the Security+ exam. You should also know that data in transit is also commonly called data on the wire, referring to the network cables that carry data communications. If you're not familiar with these concepts, you might want to review their coverage in Chapter 1. When developing a cryptographic system for the purpose of providing confidentiality, you must think about three types of data: Data at rest, or stored data, is that which resides in a permanent location awaiting access. Examples of data at rest include data stored on hard drives, backup tapes, cloud storage services, USB devices, and other storage media. Data in transit, or data in transport/communication, is data being transmitted across a network between two systems. Data in transit might be traveling on a corporate network, a wireless network, or the public Internet. The most common way to protect network communications using sensitive data is with the Transport Layer Security (TLS) protocol. Data in use is data that is stored in the active memory of a computer system where it may be accessed by a process running on that system. Each of these situations poses different types of confidentiality risks that cryptography can protect against. For example, data in transit may be susceptible to eavesdropping attacks, whereas data at rest is more susceptible to the theft of physical devices. Data in use may be accessed by unauthorized processes if the operating system does not properly implement process isolation. Obfuscation is a concept closely related to confidentiality. It is the practice of making it intentionally difficult for humans to understand how code works. This technique is often used to hide the inner workings of software, particularly when it contains sensitive intellectual property. Protecting Data at Rest with Different Levels of Encryption When you are protecting data at rest, you have several different options for applying encryption to that data. Encrypting Data on Disk Data that is stored directly on a disk may be managed with full-disk encryption, partition encryption, file encryption, and volume encryption. Full-disk encryption (FDE) is a form of encryption where all the data on a hard drive is automatically encrypted, including the operating system and system files. The key advantage of FDE is that it requires no special attention from the user after initial setup. In the case of loss or theft, FDE can prevent unauthorized access to all data on the hard drive. However, once the system is booted, the entire disk is accessible, which means data is vulnerable if the system is compromised while running. 
Partition encryption is similar to FDE but targets a specific partition of a hard drive instead of the entire disk. This allows for more flexibility, as you can choose which parts of your data to encrypt and which to leave unencrypted. Partition encryption is particularly useful when dealing with dual-boot systems or when segregating sensitive data.

File-level encryption focuses on individual files. This method allows users to encrypt specific files rather than entire drives or partitions. It is generally easier to set up and manage than FDE or partition encryption but may not be as secure since unencrypted and encrypted files may coexist on the same drive.

Volume encryption involves encrypting a set “volume” on a storage device, which could contain several folders and files. This is like a middle ground between partition encryption and file-level encryption. Volume encryption is useful when you want to encrypt a large amount of data at once but don't need to encrypt an entire disk or partition.

Encrypting Database Data
Sensitive information may also be managed by a database, which is then responsible for maintaining its confidentiality. When encrypting data in a database, you may choose to perform database-level encryption and/or record-level encryption.

Database encryption targets data at the database level. It's a method used to protect sensitive information stored in a database from access by unauthorized individuals. There are two primary types of database encryption: Transparent Data Encryption (TDE), which encrypts the entire database, and Column-level Encryption (CLE), which allows for specific columns within tables to be encrypted.

Record-level encryption is a more granular form of database encryption. It allows individual records within a database to be encrypted. This can provide more precise control over who can access what data, and it can be particularly useful in shared environments, where different users or user groups need access to different subsets of data.

Exam Note
Be sure you know the differences between the various encryption levels: Full-disk, Partition, File, Volume, Database, and Record.

Integrity
Integrity ensures that data is not altered without authorization. If integrity mechanisms are in place, the recipient of a message can be certain that the message received is identical to the message that was sent. Similarly, integrity checks can ensure that stored data was not altered between the time it was created and the time it was accessed. Integrity controls protect against all forms of alteration, including intentional alteration by a third party attempting to insert false information, intentional deletion of portions of the data, and unintentional alteration by faults in the transmission process.

Message integrity is enforced through the use of encrypted message digests, known as digital signatures, created upon transmission of a message. The recipient of the message simply verifies that the message's digital signature is valid, ensuring that the message was not altered in transit. Integrity can be enforced by both public and secret key cryptosystems.

Authentication
Authentication verifies the claimed identity of system users and is a major function of cryptosystems. For example, suppose that Bob wants to establish a communications session with Alice and they are both participants in a shared secret communications system. Alice might use a challenge-response authentication technique to ensure that Bob is who he claims to be.
Figure 7.6 shows how this challenge-response protocol would work in action. In this example, the shared-secret code used by Alice and Bob is quite simple—the letters of each word are simply reversed. Bob first contacts Alice and identifies himself. Alice then sends a challenge message to Bob, asking him to encrypt a short message using the secret code known only to Alice and Bob. Bob replies with the encrypted message. After Alice verifies that the encrypted message is correct, she trusts that Bob himself is truly on the other end of the connection.

FIGURE 7.6 Challenge-response authentication protocol

Non-repudiation
Non-repudiation provides assurance to the recipient that the message was originated by the sender and not someone masquerading as the sender. It also prevents the sender from claiming that they never sent the message in the first place (also known as repudiating the message). Secret key, or symmetric key, cryptosystems (such as simple substitution ciphers) do not provide this guarantee of non-repudiation. If Jim and Bob participate in a secret key communication system, they can both produce the same encrypted message using their shared secret key. Non-repudiation is offered only by public key, or asymmetric, cryptosystems, a topic discussed later in this chapter.

Cryptographic Concepts
As with any science, you must be familiar with certain terminology before studying cryptography. Let's take a look at a few of the key terms used to describe codes and ciphers. Before a message is put into a coded form, it is known as a plain-text message and is represented by the letter P when encryption functions are described. The sender of a message uses a cryptographic algorithm to encrypt the plain-text message and produce a ciphertext message, represented by the letter C. This message is transmitted by some physical or electronic means to the recipient. The recipient then uses a predetermined algorithm to decrypt the ciphertext message and retrieve the plain-text version.

Cryptographic Keys
All cryptographic algorithms rely on keys to maintain their security. For the most part, a key is nothing more than a number. It's usually a very large binary number, but it's a number nonetheless. Every algorithm has a specific key space. The key space is the range of values that are valid for use as a key for a specific algorithm. A key space is defined by its key length. Key length is nothing more than the number of binary bits (0s and 1s) in the key. The key space is the range between the key that has all 0s and the key that has all 1s. Or to state it another way, the key space is the range of numbers from 0 to 2^n, where n is the bit size of the key. So, a 128-bit key can have a value from 0 to 2^128 (which is roughly 3.40282367 × 10^38, a very big number!). It is absolutely critical to protect the security of secret keys. In fact, all of the security you gain from cryptography rests on your ability to keep secret keys secret.

The Kerckhoffs’ Principle
All cryptography relies on algorithms. An algorithm is a set of rules, usually mathematical, that dictates how enciphering and deciphering processes are to take place. Most cryptographers follow the Kerckhoffs’ principle, a concept that makes algorithms known and public, allowing anyone to examine and test them. Specifically, the Kerckhoffs’ principle (also known as Kerckhoffs’ assumption) is that a cryptographic system should be secure even if everything about the system, except the key, is public knowledge.
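Since the key is the only secret in a system that follows Kerckhoffs’ principle, its length determines the size of the search space an attacker must exhaust. The quick calculation below (an illustration, not from the chapter) shows how the 2^n key space grows with key length:

```python
# Key space size is 2**n possible keys for an n-bit key.
for bits in (56, 128, 256):
    keyspace = 2 ** bits
    print(f"{bits}-bit key: {keyspace:.3e} possible keys")

# The 128-bit line prints roughly 3.403e+38, the value quoted above.
```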
The principle can be summed up as “The enemy knows the system.” A large number of cryptographers adhere to this principle, but not all agree. In fact, some believe that better overall security can be maintained by keeping both the algorithm and the key private. Kerckhoffs’ adherents retort that the opposite approach includes the dubious practice of “security through obscurity” and believe that public exposure produces more activity and exposes more weaknesses more readily, leading to the abandonment of insufficiently strong algorithms and quicker adoption of suitable ones.

As you'll learn in this chapter, different types of algorithms require different types of keys. In private key (or secret key) cryptosystems, all participants use a single shared key. In public key cryptosystems, each participant has their own pair of keys. Cryptographic keys are sometimes referred to as cryptovariables.

The art of creating and implementing secret codes and ciphers is known as cryptography. This practice is paralleled by the art of cryptanalysis—the study of methods to defeat codes and ciphers. Together, cryptography and cryptanalysis are commonly referred to as cryptology. Specific implementations of a code or cipher in hardware and software are known as cryptosystems.

Ciphers
Ciphers are the algorithms used to perform encryption and decryption operations. Cipher suites are the sets of ciphers and key lengths supported by a system. Modern ciphers fit into two major categories, describing their method of operation:

Block ciphers operate on “chunks,” or blocks, of a message and apply the encryption algorithm to an entire message block at the same time. The transposition ciphers are examples of block ciphers. The simple algorithm used in the challenge-response example takes an entire word and reverses its letters. The more complicated columnar transposition cipher works on an entire message (or a piece of a message) and encrypts it using the transposition algorithm and a secret keyword. Most modern encryption algorithms implement some type of block cipher.

Stream ciphers operate on one character or bit of a message (or data stream) at a time. The Caesar cipher is an example of a stream cipher. The one-time pad is also a stream cipher because the algorithm operates on each letter of the plain-text message independently. Stream ciphers can also function as a type of block cipher. In such operations, a buffer fills with real-time data, which is then encrypted as a block and transmitted to the recipient.

Modern Cryptography
Modern cryptosystems use computationally complex algorithms and long cryptographic keys to meet the cryptographic goals of confidentiality, integrity, authentication, and nonrepudiation. The following sections cover the roles cryptographic keys play in the world of data security and examine three types of algorithms commonly used today: symmetric key encryption algorithms, asymmetric key encryption algorithms, and hashing algorithms.

Cryptographic Secrecy
In the early days of cryptography, one of the predominant principles was “security through obscurity.” Some cryptographers thought the best way to keep an encryption algorithm secure was to hide the details of the algorithm from outsiders. Old cryptosystems required communicating parties to keep the algorithm used to encrypt and decrypt messages secret from third parties. Any disclosure of the algorithm could lead to compromise of the entire system by an adversary.
Modern cryptosystems do not rely on the secrecy of their algorithms. In fact, the algorithms for most cryptographic systems are widely available for public review in the accompanying literature and on the Internet. Opening algorithms to public scrutiny actually improves their security. Widespread analysis of algorithms by the computer security community allows practitioners to discover and correct potential security vulnerabilities and ensure that the algorithms they use to protect their communications are as secure as possible. Instead of relying on secret algorithms, modern cryptosystems rely on the secrecy of one or more cryptographic keys used to personalize the algorithm for specific users or groups of users. Recall from the discussion of transposition ciphers that a keyword is used with the columnar transposition to guide the encryption and decryption efforts. The algorithm used to perform columnar transposition is well known—you just read the details of it in this book! However, columnar transposition can be used to securely communicate between parties as long as a keyword is chosen that would not be guessed by an outsider. As long as the security of this keyword is maintained, it doesn't matter that third parties know the details of the algorithm. Although the public nature of the algorithm does not compromise the security of columnar transposition, the method does possess several inherent weaknesses that make it vulnerable to cryptanalysis. It is therefore an inadequate technology for use in modern secure communication. The length of a cryptographic key is an extremely important factor in determining the strength of the cryptosystem and the likelihood that the encryption will not be compromised through cryptanalytic techniques. The rapid increase in computing power allows you to use increasingly long keys in your cryptographic efforts. However, this same computing power is also in the hands of cryptanalysts attempting to defeat the algorithms you use. Therefore, it's essential that you outpace adversaries by using sufficiently long keys that will defeat contemporary cryptanalysis efforts. Additionally, if you want to improve the chance that your data will remain safe from cryptanalysis some time into the future, you must strive to use keys that will outpace the projected increase in cryptanalytic capability during the entire time period the data must be kept safe. For example, the advent of quantum computing may transform cryptography, rendering current cryptosystems insecure, as discussed later in this chapter. Several decades ago, when the Data Encryption Standard (DES) was created, a 56-bit key was considered sufficient to maintain the security of any data. However, the 56-bit DES algorithm is no longer secure because of advances in cryptanalysis techniques and supercomputing power. Modern cryptographic systems use at least a 128-bit key to protect data against prying eyes. Remember, the length of the key directly relates to the work function of the cryptosystem; for a secure cryptosystem, the longer the key, the harder it is to break the cryptosystem. Symmetric Key Algorithms Symmetric key algorithms rely on a “shared secret” encryption key that is distributed to all members who participate in the communications. This key is used by all parties to both encrypt and decrypt messages, so the sender and the receiver both possess a copy of the shared key. The sender encrypts with the shared secret key and the receiver decrypts with it. 
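As a minimal sketch of that shared-key workflow, the example below uses the Fernet recipe from the third-party Python cryptography package (my choice of tooling; the chapter does not prescribe a library). The same key both encrypts and decrypts, so every participant must hold a securely distributed copy of it:

```python
from cryptography.fernet import Fernet  # pip install cryptography

# The shared secret: anyone who holds this key can both encrypt and decrypt.
shared_key = Fernet.generate_key()
cipher = Fernet(shared_key)

ciphertext = cipher.encrypt(b"Meet me at the usual place at noon.")
print(ciphertext)                  # unreadable without the key
print(cipher.decrypt(ciphertext))  # b'Meet me at the usual place at noon.'
```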
When large-sized keys are used, symmetric encryption is very difficult to break. It is primarily employed to perform bulk encryption and provides only for the security service of confidentiality. Symmetric key cryptography can also be called secret key cryptography and private key cryptography. Figure 7.7 illustrates the symmetric key encryption and decryption processes. FIGURE 7.7 Symmetric key cryptography The use of the term private key can be tricky because it is part of three different terms that have two different meanings. The term private key by itself always means the private key from the key pair of public key cryptography (aka asymmetric). However, both private key cryptography and shared private key refer to symmetric cryptography. The meaning of the word private is stretched to refer to two people sharing a secret that they keep confidential. (The true meaning of private is that only a single person has a secret that's kept confidential.) Be sure to keep these confusing terms straight in your studies. Symmetric key cryptography has several weaknesses: Key exchange is a major problem. Parties must have a secure method of exchanging the secret key before establishing communications with a symmetric key protocol. If a secure electronic channel is not available, an offline key distribution method must often be used (that is, out-of-band exchange). Symmetric key cryptography does not implement non-repudiation. Because any communicating party can encrypt and decrypt messages with the shared secret key, there is no way to prove where a given message originated. The algorithm is not scalable. It is extremely difficult for large groups to communicate using symmetric key cryptography. Secure private communication between individuals in the group could be achieved only if each possible combination of users shared a private key. Keys must be regenerated often. Each time a participant leaves the group, all keys known by that participant must be discarded. The major strength of symmetric key cryptography is the great speed at which it can operate. Symmetric key encryption is very fast, often 1,000 to 10,000 times faster than asymmetric algorithms. By nature of the mathematics involved, symmetric key cryptography also naturally lends itself to hardware implementations, creating the opportunity for even higher-speed operations. The section “Symmetric Cryptography” later in this chapter provides a detailed look at the major secret key algorithms in use today. Asymmetric Key Algorithms Asymmetric key algorithms, also known as public key algorithms, provide a solution to the weaknesses of symmetric key encryption. In these systems, each user has two keys: a public key, which is shared with all users, and a private key, which is kept secret and known only to the owner of the key pair. But here's a twist: opposite and related keys must be used in tandem to encrypt and decrypt. In other words, if the public key encrypts a message, then only the corresponding private key can decrypt it, and vice versa. Figure 7.8 shows the algorithm used to encrypt and decrypt messages in a public key cryptosystem. Consider this example. If Alice wants to send a message to Bob using public key cryptography, she creates the message and then encrypts it using Bob's public key. The only possible way to decrypt this ciphertext is to use Bob's private key, and the only user with access to that key is Bob. Therefore, Alice can't even decrypt the message herself after she encrypts it. 
If Bob wants to send a reply to Alice, he simply encrypts the message using Alice's public key, and then Alice reads the message by decrypting it with her private key.

FIGURE 7.8 Asymmetric key cryptography

Key Requirements
In a class one of the authors of this book taught recently, a student wanted to see an illustration of the scalability issue associated with symmetric encryption algorithms. The fact that symmetric cryptosystems require each pair of potential communicators to have a shared private key makes the algorithm nonscalable. The total number of keys required to completely connect n parties using symmetric cryptography is given by the following formula:

Number of Keys = n(n–1) / 2

Now, this might not sound so bad (and it's not for small systems), but consider the following figures. Obviously, the larger the population, the less likely a symmetric cryptosystem will be suitable to meet its needs.

Number of participants   Number of symmetric keys required   Number of asymmetric keys required
2                        1                                   4
3                        3                                   6
4                        6                                   8
5                        10                                  10
10                       45                                  20
100                      4,950                               200
1,000                    499,500                             2,000
10,000                   49,995,000                          20,000

Asymmetric key algorithms also provide support for digital signature technology. Basically, if Bob wants to assure other users that a message with his name on it was actually sent by him, he first creates a message digest by using a hashing algorithm (you'll find more on hashing algorithms in the next section). Bob then encrypts that digest using his private key. Any user who wants to verify the signature simply decrypts the message digest using Bob's public key and then verifies that the decrypted message digest is accurate.

The following is a list of the major strengths of asymmetric key cryptography:

The addition of new users requires the generation of only one public-private key pair. This same key pair is used to communicate with all users of the asymmetric cryptosystem. This makes the algorithm extremely scalable.

Users can be removed far more easily from asymmetric systems. Asymmetric cryptosystems provide a key revocation mechanism that allows a key to be canceled, effectively removing a user from the system.

Key regeneration is required only when a user's private key is compromised. If a user leaves the community, the system administrator simply needs to invalidate that user's keys. No other keys are compromised and therefore key regeneration is not required for any other user.

Asymmetric key encryption can provide integrity, authentication, and non-repudiation. If a user does not share their private key with other individuals, a message signed by that user can be shown to be accurate and from a specific source and cannot be later repudiated.

Key exchange is a simple process. Users who want to participate in the system simply make their public key available to anyone with whom they want to communicate. There is no method by which the private key can be derived from the public key.

No preexisting communication link needs to exist. Two individuals can begin communicating securely from the start of their communication session. Asymmetric cryptography does not require a preexisting relationship to provide a secure mechanism for data exchange.

The major weakness of public key cryptography is its slow speed of operation. For this reason, many applications that require the secure transmission of large amounts of data use public key cryptography to establish a connection and then exchange a symmetric secret key. The remainder of the session then uses symmetric cryptography.
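The Alice-and-Bob exchange described above can be sketched with RSA primitives from the Python cryptography package (an illustrative library choice, not something the chapter specifies). The message is encrypted with the recipient's public key and recovered only with the matching private key:

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# Bob generates a key pair and publishes the public half.
bob_private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
bob_public_key = bob_private_key.public_key()

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Alice encrypts with Bob's public key; only Bob's private key can decrypt.
ciphertext = bob_public_key.encrypt(b"For Bob's eyes only", oaep)
print(bob_private_key.decrypt(ciphertext, oaep))  # b"For Bob's eyes only"
```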
Table 7.1 compares the symmetric and asymmetric cryptography systems. Close examination of this table reveals that a weakness in one system is matched by a strength in the other.

Exam Note
Exam objective 1.4 calls out Asymmetric, Symmetric, Key exchange, Algorithms, and Key length. Be sure you focus on each of these!

TABLE 7.1 Comparison of symmetric and asymmetric cryptography systems

Symmetric                    Asymmetric
Single shared key            Key pair sets
Out-of-band exchange         In-band exchange
Not scalable                 Scalable
Fast                         Slow
Bulk encryption              Small blocks of data, digital signatures, digital certificates
Confidentiality, integrity   Confidentiality, integrity, authentication, non-repudiation

Hashing Algorithms
In the previous section, you learned that public key cryptosystems can provide digital signature capability when used in conjunction with a message digest. Message digests are summaries of a message's content (not unlike a file checksum) produced by a hashing algorithm. It's extremely difficult, if not impossible, to derive a message from an ideal hash function, and it's very unlikely that two messages will produce the same hash value. Cases where a hash function produces the same value for two different messages are known as collisions, and the existence of collisions typically leads to the deprecation of a hashing algorithm.

Symmetric Cryptography
You've learned the basic concepts underlying symmetric key cryptography, asymmetric key cryptography, and hashing functions. In the following sections, we'll take an in-depth look at three common symmetric cryptosystems: the Data Encryption Standard (DES), Triple DES (3DES), and the Advanced Encryption Standard (AES).

Data Encryption Standard
The U.S. government published the Data Encryption Standard in 1977 as a proposed standard cryptosystem for all government communications. Because of flaws in the algorithm, cryptographers and the federal government no longer consider DES secure. It is widely believed that intelligence agencies routinely decrypt DES-encrypted information. DES was superseded by the Advanced Encryption Standard in December 2001. An adapted version of DES, Triple DES (3DES), uses the same algorithm three different times with three different encryption keys to produce a more secure encryption. However, even the 3DES algorithm is now considered insecure, and it is scheduled to be deprecated in December 2023.

Advanced Encryption Standard
In October 2000, the National Institute of Standards and Technology (NIST) announced that the Rijndael (pronounced “rhine-doll”) block cipher had been chosen as the replacement for DES. In November 2001, NIST released Federal Information Processing Standard (FIPS) 197, which mandated the use of AES/Rijndael for the encryption of all sensitive but unclassified data by the U.S. government. The AES cipher allows the use of three key strengths: 128 bits, 192 bits, and 256 bits. AES only allows the processing of 128-bit blocks, but Rijndael exceeded this specification, allowing cryptographers to use a block size equal to the key length. The number of encryption rounds depends on the key length chosen:

128-bit keys require 10 rounds of encryption.
192-bit keys require 12 rounds of encryption.
256-bit keys require 14 rounds of encryption.

Today, AES is one of the most widely used encryption algorithms, and it plays an essential role in wireless network security, the Transport Layer Security (TLS) protocol, file/disk encryption, and many other applications that call for strong cryptography.
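To give a feel for AES in practice, here is a brief sketch using the AES-GCM mode from the Python cryptography package with a 256-bit key (the library and parameters are my own illustrative choices):

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)  # 256-bit AES key (14 encryption rounds)
aesgcm = AESGCM(key)

nonce = os.urandom(12)  # a fresh nonce is required for every message under the same key
ciphertext = aesgcm.encrypt(nonce, b"Sensitive but unclassified data", None)
print(aesgcm.decrypt(nonce, ciphertext, None))  # b'Sensitive but unclassified data'
```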
Symmetric Key Management
Because cryptographic keys contain information essential to the security of the cryptosystem, it is incumbent upon cryptosystem users and administrators to take extraordinary measures to protect the security of the keying material. These security measures are collectively known as key management practices. They include safeguards surrounding the creation, distribution, storage, destruction, recovery, and escrow of secret keys.

Creation and Distribution of Symmetric Keys
As previously mentioned, key exchange is one of the major problems underlying symmetric encryption algorithms. Key exchange is the secure distribution of the secret keys required to operate the algorithms. The three main methods used to exchange secret keys securely are offline distribution, public key encryption, and the Diffie–Hellman key exchange algorithm.

Offline Distribution
The most technically simple method involves the physical exchange of key material. One party provides the other party with a sheet of paper or piece of storage media containing the secret key. In many hardware encryption devices, this key material comes in the form of an electronic device that resembles an actual key that is inserted into the encryption device. However, every offline key distribution method has its own inherent flaws. If keying material is sent through the mail, it might be intercepted. Telephones can be wiretapped. Papers containing keys might be lost or thrown in the trash inadvertently.

Public Key Encryption
Many communicators want to obtain the speed benefits of secret key encryption without the hassles of key distribution. For this reason, many people use public key encryption to set up an initial communications link. Once the link is established successfully and the parties are satisfied as to each other's identity, they exchange a secret key over the secure public key link. They then switch communications from the public key algorithm to the secret key algorithm and enjoy the increased processing speed. In general, secret key encryption is thousands of times faster than public key encryption.

Diffie–Hellman
In some cases, neither public key encryption nor offline distribution is sufficient. Two parties might need to communicate with each other, but they have no physical means to exchange key material, and there is no public key infrastructure in place to facilitate the exchange of secret keys. In situations like this, key exchange algorithms like the Diffie–Hellman algorithm prove to be extremely useful mechanisms.

About the Diffie–Hellman Algorithm
The Diffie–Hellman algorithm represented a major advance in the state of cryptographic science when it was released in 1976. It's still in use today. The algorithm works as follows:

1. The communicating parties (we'll call them Richard and Sue) agree on two large numbers: p (which is a prime number) and g (which is an integer) such that 1 < g < p.
2. Richard chooses a random large integer r and performs the following calculation: R = g^r mod p
3. Sue chooses a random large integer s and performs the following calculation: S = g^s mod p
4. Richard sends R to Sue and Sue sends S to Richard.
5. Richard then performs the following calculation: K = S^r mod p
6. Sue then performs the following calculation: K = R^s mod p

At this point, Richard and Sue both have the same value, K, and can use this for secret key communication between them.
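The six steps can be traced end to end with deliberately tiny numbers (real deployments use values that are thousands of bits long). A toy Python walk-through, with made-up values for p, g, r, and s:

```python
p, g = 23, 5          # public values: a small prime and a generator (toy sizes only)

r = 6                 # Richard's secret integer
s = 15                # Sue's secret integer

R = pow(g, r, p)      # Richard computes R = g^r mod p and sends it to Sue
S = pow(g, s, p)      # Sue computes S = g^s mod p and sends it to Richard

richard_key = pow(S, r, p)   # K = S^r mod p
sue_key = pow(R, s, p)       # K = R^s mod p

assert richard_key == sue_key
print(richard_key)    # both sides arrive at the same shared secret (2 in this example)
```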
Storage and Destruction of Symmetric Keys Another major challenge with the use of symmetric key cryptography is that all of the keys used in the cryptosystem must be kept secure. This includes following best practices surrounding the storage of encryption keys: Never store an encryption key on the same system where encrypted data resides. This just makes it easier for the attacker! For sensitive keys, consider providing two different individuals with half of the key. They then must collaborate to re-create the entire key. This is known as the principle of split knowledge. When a user with knowledge of a secret key leaves the organization or is no longer permitted access to material protected with that key, the keys must be changed, and all encrypted materials must be re-encrypted with the new keys. The difficulty of destroying a key to remove a user from a symmetric cryptosystem is one of the main reasons organizations turn to asymmetric algorithms. Key Escrow and Recovery While cryptography offers tremendous security benefits, it can also be a little risky. If someone uses strong cryptography to protect data and then loses the decryption key, they won't be able to access their data again! Similarly, if an employee leaves the organization unexpectedly, coworkers may be unable to decrypt data that the user encrypted with a secret key. Key escrow systems address this situation by having a third party store a protected copy of the key for use in an emergency. Organizations may have a formal key recovery policy that specifies the circumstances under which a key may be retrieved from escrow and used without a user's knowledge. Asymmetric Cryptography Recall from earlier in this chapter that public key cryptosystems rely on pairs of keys assigned to each user of the cryptosystem. Every user maintains both a public key and a private key. As the names imply, public key cryptosystem users make their public keys freely available to anyone with whom they want to communicate. The mere possession of the public key by third parties does not introduce any weaknesses into the cryptosystem. The private key, on the other hand, is reserved for the sole use of the individual who owns the keys. It is never shared with any other cryptosystem user. Normal communication between public key cryptosystem users is quite straightforward, and was illustrated in Figure 7.8. Notice that the process does not require the sharing of private keys. The sender encrypts the plain-text message (P) with the recipient's public key to create the ciphertext message (C). When the recipient opens the ciphertext message, they decrypt it using their private key to recreate the original plain-text message. Once the sender encrypts the message with the recipient's public key, no user (including the sender) can decrypt that message without knowing the recipient's private key (the second half of the public-private key pair used to generate the message). This is the beauty of public key cryptography—public keys can be freely shared using unsecured communications and then used to create secure communications channels between users previously unknown to each other. Asymmetric cryptography entails a higher degree of computational complexity than symmetric cryptography. Keys used within asymmetric systems must be longer than those used in symmetric systems to produce cryptosystems of equivalent strengths. RSA The most famous public key cryptosystem is named after its creators. 
In 1977, Ronald Rivest, Adi Shamir, and Leonard Adleman proposed the RSA public key algorithm that remains a worldwide standard today. They patented their algorithm and formed a commercial venture known as RSA Security to develop mainstream implementations of their security technology. Today, the RSA algorithm has been released into the public domain and is widely used for secure communication. The RSA algorithm depends on the computational difficulty inherent in factoring large prime numbers. Each user of the cryptosystem generates a pair of public and private keys using the algorithm. The specifics of key generation are beyond the scope of the exam, but you should remember that it is based on the complexity of factoring large prime numbers.

Key Length
The length of the cryptographic key is perhaps the most important security parameter that can be set at the discretion of the security administrator. It's important to understand the capabilities of your encryption algorithm and choose a key length that provides an appropriate level of protection. This judgment can be made by weighing the difficulty of defeating a given key length (measured in the amount of processing time required to defeat the cryptosystem) against the importance of the data. Generally speaking, the more critical your data, the stronger the key you use to protect it should be. Timeliness of the data is also an important consideration. You must take into account the rapid growth of computing power—Moore's law suggests that computing power doubles approximately every 2 years. If it takes current computers one year of processing time to break your code, it will take only 3 months if the attempt is made with contemporary technology about 4 years down the road. If you expect that your data will still be sensitive at that time, you should choose a much longer cryptographic key that will remain secure well into the future.

Also, as attackers are now able to leverage cloud computing resources, they are able to more efficiently attack encrypted data. The cloud allows attackers to rent scalable computing power, including powerful graphics processing units (GPUs) on a per-hour basis and offers significant discounts when using excess capacity during non-peak hours. This brings powerful computing well within reach of many attackers.

The strengths of various key lengths also vary greatly according to the cryptosystem you're using. For example, a 1,024-bit RSA key offers approximately the same degree of security as a 160-bit ECC key. So, why not just always use an extremely long key? Longer keys are certainly more secure, but they also require more computational overhead. It's the classic trade-off of resources versus security constraints.

Elliptic Curve
In 1985, two mathematicians, Neal Koblitz from the University of Washington, and Victor Miller from IBM, independently proposed the application of elliptic curve cryptography (ECC) theory to develop secure cryptographic systems. The mathematical concepts behind elliptic curve cryptography are quite complex and well beyond the scope of this book. However, you should be generally familiar with the elliptic curve algorithm and its potential applications when preparing for the Security+ exam. Any elliptic curve can be defined by the following equation:

y^2 = x^3 + ax + b

In this equation, x, y, a, and b are all real numbers. Each elliptic curve has a corresponding elliptic curve group made up of the points on the elliptic curve along with the point O, located at infinity.
Two points within the same elliptic curve group (P and Q) can be added together with an elliptic curve addition algorithm. This operation is expressed as:

P + Q

This problem can be extended to involve multiplication by assuming that Q is a multiple of P, meaning the following:

Q = xP

Computer scientists and mathematicians believe that it is extremely hard to find x, even if P and Q are already known. This difficult problem, known as the elliptic curve discrete logarithm problem, forms the basis of elliptic curve cryptography. It is widely believed that this problem is harder to solve than both the prime factorization problem that the RSA cryptosystem is based on and the standard discrete logarithm problem utilized by Diffie–Hellman.

Hash Functions
Later in this chapter, you'll learn how cryptosystems implement digital signatures to provide proof that a message originated from a particular user of the cryptosystem and to ensure that the message was not modified while in transit between the two parties. Before you can completely understand that concept, we must first explain the concept of hash functions, which we first visited in Chapter 4, “Social Engineering and Password Attacks.” We will explore the basics of hash functions and look at several common hash functions used in modern digital signature algorithms.

Hash functions have a very simple purpose—they take a potentially long message and generate a unique output value derived from the content of the message. This value is commonly referred to as the message digest. Message digests can be generated by the sender of a message and transmitted to the recipient along with the full message for two reasons.

First, the recipient can use the same hash function to recompute the message digest from the full message. They can then compare the computed message digest to the transmitted one to ensure that the message sent by the originator is the same one received by the recipient. If the message digests do not match, that means the message was somehow modified while in transit. It is important to note that the messages must be exactly identical for the digests to match. If the messages have even a slight difference in spacing, punctuation, or content, the message digest values will be completely different. It is not possible to tell the degree of difference between two messages by comparing the digests. Even a slight difference will generate totally different digest values.

Second, the message digest can be used to implement a digital signature algorithm. This concept is covered in the section “Digital Signatures” later in this chapter. The term message digest is used interchangeably with a wide variety of synonyms, including hash, hash value, hash total, CRC, fingerprint, checksum, and digital ID.

There are five basic requirements for a cryptographic hash function:

They accept an input of any length.
They produce an output of a fixed length, regardless of the length of the input.
The hash value is relatively easy to compute.
The hash function is one-way (meaning that it is extremely hard to determine the input when provided with the output).
A secure hash function is collision free (meaning that it is extremely hard to find two messages that produce the same hash value).

Exam Note
Remember that a hash is a one-way cryptographic function that takes an input and generates a unique and repeatable output from that input.
No two inputs should ever generate the same hash, and a hash should not be reversible so that the original input can be derived from the hash. SHA The Secure Hash Algorithm (SHA), and its successors, SHA-1, SHA-2, and SHA-3, are government standard hash functions promoted by NIST and are specified in an official government publication—the Secure Hash Standard (SHS), also known as FIPS 180. SHA-1 takes an input of virtually any length (in reality, there is an upper bound of approximately 2,097,152 terabytes on the algorithm) and produces a 160-bit message digest. The SHA-1 algorithm processes a message in 512-bit blocks. Therefore, if the message length is not a multiple of 512, the SHA algorithm pads the message with additional data until the length reaches the next highest multiple of 512. Cryptanalytic attacks demonstrated that there are weaknesses in the SHA-1 algorithm. This led to the creation of SHA-2, which has four variants: SHA-256 produces a 256-bit message digest using a 512-bit block size. SHA-224 uses a truncated version of the SHA-256 hash to produce a 224-bit message digest using a 512-bit block size. SHA-512 produces a 512-bit message digest using a 1,024-bit block size. SHA-384 uses a truncated version of the SHA-512 hash to produce a 384-bit digest using a 1,024-bit block size. The cryptographic community generally considers the SHA-2 algorithms secure, but they theoretically suffer from the same weakness as the SHA-1 algorithm. In 2015, the federal government announced the release of the Keccak algorithm as the SHA-3 standard. The SHA-3 suite was developed to serve as a drop-in replacement for the SHA-2 hash functions, offering the same variants and hash lengths using a more secure algorithm. MD5 In 1991, Ron Rivest released the next version of his message digest algorithm, which he called MD5. It also processes 512-bit blocks of the message, but it uses four distinct rounds of computation to produce a digest of the same length as the earlier MD2 and MD4 algorithms (128 bits). MD5 implements security features that reduce the speed of message digest production significantly. Unfortunately, security researchers demonstrated that the MD5 protocol is subject to collisions, preventing its use for ensuring message integrity. Digital Signatures Once you have chosen a cryptographically sound hashing algorithm, you can use it to implement a digital signature system. Digital signature infrastructures have two distinct goals: Digitally signed messages assure the recipient that the message truly came from the claimed sender. They enforce non-repudiation (that is, they preclude the sender from later claiming that the message is a forgery). Digitally signed messages assure the recipient that the message was not altered while in transit between the sender and recipient. This protects against both malicious modification (a third party altering the meaning of the message) and unintentional modification (because of faults in the communications process, such as electrical interference). Digital signature algorithms rely on a combination of the two major concepts already covered in this chapter—public key cryptography and hashing functions. If Alice wants to digitally sign a message she's sending to Bob, she performs the following actions: 1. Alice generates a message digest of the original plain-text message using one of the cryptographically sound hashing algorithms, such as SHA3-512. 2. Alice then encrypts only the message digest using her private key. 
This encrypted message digest is the digital signature. 3. Alice appends the signed message digest to the plain-text message. 4. Alice transmits the appended message to Bob. When Bob receives the digitally signed message, he reverses the procedure, as follows: 1. Bob decrypts the digital signature using Alice's public key. 2. Bob uses the same hashing function to create a message digest of the full plain-text message received from Alice. 3. Bob then compares the decrypted message digest he received from Alice with the message digest he computed himself. If the two digests match, he can be assured that the message he received was sent by Alice. If they do not match, either the message was not sent by Alice or the message was modified while in transit. Digital signatures are used for more than just messages. Software vendors often use digital signature technology to authenticate code distributions that you download from the Internet, such as applets and software patches. Note that the digital signature process does not provide any privacy in and of itself. It only ensures that the cryptographic goals of integrity, authentication, and non- repudiation are met. However, if Alice wanted to ensure the privacy of her message to Bob, she could add a step to the message creation process. After appending the signed message digest to the plain-text message, Alice could encrypt the entire message with Bob's public key. When Bob received the message, he would decrypt it with his own private key before following the steps just outlined. HMAC The Hash-Based Message Authentication Code (HMAC) algorithm implements a partial digital signature—it guarantees the integrity of a message during transmission, but it does not provide for non-repudiation. Which Key Should I Use? If you're new to public key cryptography, selecting the correct key for various applications can be quite confusing. Encryption, decryption, message signing, and signature verification all use the same algorithm with different key inputs. Here are a few simple rules to help keep these concepts straight in your mind when preparing for the exam: If you want to encrypt a message, use the recipient's public key. If you want to decrypt a message sent to you, use your private key. If you want to digitally sign a message you are sending to someone else, use your private key. If you want to verify the signature on a message sent by someone else, use the sender's public key. These four rules are the core principles of public key cryptography and digital signatures. If you understand each of them, you're off to a great start! HMAC can be combined with any standard message digest generation algorithm, such as SHA-3, by using a shared secret key. Therefore, only communicating parties who know the key can generate or verify the digital signature. If the recipient decrypts the message digest but cannot successfully compare it to a message digest generated from the plain-text message, that means the message was altered in transit. Because HMAC relies on a shared secret key, it does not provide any non-repudiation functionality (as previously mentioned). However, it operates in a more efficient manner than the digital signature standard described in the following section and may be suitable for applications in which symmetric key cryptography is appropriate. In short, it represents a halfway point between unencrypted use of a message digest algorithm and computationally expensive digital signature algorithms based on public key cryptography. 
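To make the HMAC workflow concrete, here is a minimal sketch using Python's standard hmac and hashlib modules with a SHA-3 digest, as described above. The shared key and message values are hypothetical placeholders; a real application would distribute and store the key securely rather than hard-coding it.

import hashlib
import hmac

# Shared secret known only to the two communicating parties (hypothetical value).
shared_key = b"this-is-a-hypothetical-shared-secret"
message = b"Wire 100 units to account 12345"

# Sender computes an HMAC over the message using SHA3-256 as the digest algorithm.
sender_tag = hmac.new(shared_key, message, hashlib.sha3_256).hexdigest()

# Recipient recomputes the HMAC with the same key and compares the result to the
# tag that arrived with the message. compare_digest() uses a constant-time
# comparison to avoid leaking information through timing differences.
recipient_tag = hmac.new(shared_key, message, hashlib.sha3_256).hexdigest()
if hmac.compare_digest(sender_tag, recipient_tag):
    print("Message is authentic and was not altered in transit.")
else:
    print("Message failed HMAC verification.")

Because anyone who holds the shared key could have produced the same tag, this check provides integrity and authentication between the two parties but, as noted above, no non-repudiation.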
Public Key Infrastructure The major strength of public key encryption is its ability to facilitate communication between parties previously unknown to each other. This is made possible by the public key infrastructure (PKI) hierarchy of trust relationships. These trusts permit combining asymmetric cryptography with symmetric cryptography along with hashing and digital certificates, giving us hybrid cryptography. In the following sections, you'll learn the basic components of the public key infrastructure and the cryptographic concepts that make global secure communications possible. You'll learn the composition of a digital certificate, the role of certificate authorities, and the process used to generate and destroy certificates. Certificates Digital certificates provide communicating parties with the assurance that the people they are communicating with truly are who they claim to be. Digital certificates are essentially endorsed copies of an individual's public key. When users verify that a certificate was signed by a trusted certificate authority (CA), they know that the public key is legitimate. Digital certificates contain specific identifying information, and their construction is governed by an international standard—X.509. Certificates that conform to X.509 contain the following certificate attributes: Version of X.509 to which the certificate conforms Serial number (from the certificate creator) Signature algorithm identifier (specifies the technique used by the certificate authority to digitally sign the contents of the certificate) Issuer name (identification of the certificate authority that issued the certificate) Validity period (specifies the dates and times—a starting date and time and an expiration date and time—during which the certificate is valid) Subject's Common Name (CN) that clearly describes the certificate owner (e.g., certmike.com) Certificates may optionally contain Subject Alternative Names (SANs) that allow you to specify additional items (IP addresses, domain names, and so on) to be protected by the single certificate Subject's public key (the meat of the certificate—the actual public key the certificate owner used to set up secure communications) The current version of X.509 (version 3) supports certificate extensions—customized variables containing data inserted into the certificate by the certificate authority to support tracking of certificates or various applications. Certificates may be issued for a variety of purposes. These include providing assurance for the public keys of: Computers/machines Individual users Email addresses Developers (code-signing certificates) The subject of a certificate may include a wildcard in the certificate name, indicating that the certificate is good for subdomains as well. The wildcard is designated by an asterisk character. For example, a wildcard certificate issued to *.certmike.com would be valid for all of the following domains: certmike.com www.certmike.com mail.certmike.com secure.certmike.com Wildcard certificates are only good for one level of subdomain. Therefore, the *.certmike.com certificate would not be valid for the www.cissp.certmike.com subdomain. Certificate Authorities Certificate authorities (CAs) are the glue that binds the public key infrastructure together. These neutral organizations offer notarization services for digital certificates. To obtain a digital certificate from a reputable CA, you must prove your identity to the satisfaction of the CA. 
The following list includes some of the major CAs who provide widely accepted digital certificates: IdenTrust Amazon Web Services DigiCert Group Sectigo/Comodo GlobalSign Let's Encrypt GoDaddy Nothing is preventing any organization from simply setting up shop as a CA. However, the certificates issued by a CA are only as good as the trust placed in the CA that issued them. This is an important item to consider when receiving a digital certificate from a third party. If you don't recognize and trust the name of the CA that issued the certificate, you shouldn't place any trust in the certificate at all. PKI relies on a hierarchy of trust relationships. If you configure your browser to trust a CA, it will automatically trust all of the digital certificates issued by that CA. Browser developers preconfigure browsers to trust the major CAs to avoid placing this burden on users. Registration authorities (RAs) assist CAs with the burden of verifying users' identities prior to issuing digital certificates. They do not directly issue certificates themselves, but they play an important role in the certification process, allowing CAs to remotely validate user identities. Certificate authorities must carefully protect their own private keys to preserve their trust relationships. To do this, they often use an offline CA to protect their root certificate, the top-level certificate for their entire PKI that serves as the root of trust for all certificates issued by the CA. This offline root CA is disconnected from networks and powered down until it is needed. The offline CA uses the root certificate to create subordinate intermediate CAs that serve as the online CAs used to issue certificates on a routine basis. In the CA trust model, the use of a series of intermediate CAs is known as certificate chaining. To validate a certificate, the browser verifies the identity of the intermediate CA(s) first and then traces the path of trust back to a known root CA, verifying the identity of each link in the chain of trust. Certificate authorities do not need to be third-party service providers. Many organizations operate internal CAs that provide self-signed certificates for use inside an organization. These certificates won't be trusted by the browsers of external users, but internal systems may be configured to trust the internal CA, saving the expense of obtaining certificates from a third-party CA. Certificate Generation and Destruction The technical concepts behind the public key infrastructure are relatively simple. In the following sections, we'll cover the processes used by certificate authorities to create, validate, and revoke client certificates. Enrollment When you want to obtain a digital certificate, you must first prove your identity to the CA in some manner; this process is called enrollment. As mentioned in the previous section, this sometimes involves physically appearing before an agent of the certification authority with the appropriate identification documents. Some certificate authorities provide other means of verification, including the use of credit report data and identity verification by trusted community leaders. Once you've satisfied the certificate authority regarding your identity, you provide them with your public key in the form of a Certificate Signing Request (CSR). The CA next creates an X.509 digital certificate containing your identifying information and a copy of your public key. 
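Before looking at how the CA signs that certificate, here is a minimal sketch of the requester's side of enrollment, using the third-party Python cryptography library (pyca/cryptography) to generate a key pair and a CSR. The subject name and SAN entry are hypothetical, and the exact API may vary slightly between library versions.

from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa

# Generate an RSA key pair; the private key never leaves the requester's control.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# Build a CSR containing the subject's identifying information and public key.
csr = (
    x509.CertificateSigningRequestBuilder()
    .subject_name(x509.Name([
        x509.NameAttribute(NameOID.COMMON_NAME, "www.example.com"),  # hypothetical CN
    ]))
    .add_extension(
        x509.SubjectAlternativeName([x509.DNSName("example.com")]),  # hypothetical SAN
        critical=False,
    )
    # The requester signs the CSR with the private key to prove possession of it.
    .sign(private_key, hashes.SHA256())
)

# The PEM-encoded CSR is what you submit to the certificate authority.
print(csr.public_bytes(serialization.Encoding.PEM).decode())

The CSR itself carries only the public key and the identifying information; the private key stays with the requester.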
The CA then digitally signs the certificate using the CA's private key and provides you with a copy of your signed digital certificate. You may then safely distribute this certificate to anyone with whom you want to communicate securely. Certificate authorities issue different types of certificates depending on the level of identity verification that they perform. The simplest, and most common, certificates are Domain Validation (DV) certificates, where the CA simply verifies that the certificate subject has control of the domain name. Extended Validation (EV) certificates provide a higher level of assurance, and the CA takes steps to verify that the certificate owner is a legitimate business before issuing the certificate. Verification When you receive a digital certificate from someone with whom you want to communicate, you verify the certificate by checking the CA's digital signature using the CA's public key. Next, you must check and ensure that the certificate was not revoked using a certificate revocation list (CRL) or the Online Certificate Status Protocol (OCSP). At this point, you may assume that the public key listed in the certificate is authentic, provided that it satisfies the following requirements: The digital signature of the CA is authentic. You trust the CA. The certificate is not listed on a CRL. The certificate actually contains the data you are trusting. The last point is a subtle but extremely important item. Before you trust an identifying piece of information about someone, be sure that it is actually contained within the certificate. If a certificate contains the email address ([email protected]) but not the individual's name, you can be certain only that the public key contained therein is associated with that email address. The CA is not making any assertions about the actual identity of the [email protected] email account. However, if the certificate contains the name Bill Jones along with an address and telephone number, the CA is vouching for that information as well. Digital certificate verification algorithms are built into a number of popular web browsing and email clients, so you won't often need to get involved in the particulars of the process. However, it's important to have a solid understanding of the technical details taking place behind the scenes to make appropriate security judgments for your organization. It's also the reason that, when purchasing a certificate, you choose a CA that is widely trusted. If a CA is not included in, or is later pulled from, the list of CAs trusted by a major browser, it will greatly limit the usefulness of your certificate. In 2017, a significant security failure occurred in the digital certificate industry. Symantec, through a series of affiliated companies, issued several digital certificates that did not meet industry security standards. In response, Google announced that the Chrome browser would no longer trust Symantec certificates. As a result, Symantec wound up selling off their certificate issuing business to DigiCert, who agreed to properly validate certificates prior to issuance. This demonstrates the importance of properly validating certificate requests. A series of seemingly small lapses in procedure can decimate a CA's business! Certificate pinning approaches instruct browsers to attach a certificate to a subject for an extended period of time. When sites use certificate pinning, the browser associates that site with their public key. 
This allows users or administrators to notice and intervene if a certificate changes unexpectedly. Revocation Occasionally, a certificate authority needs to revoke a certificate. This might occur for one of the following reasons: The certificate was compromised (for example, the certificate owner accidentally gave away the private key). The certificate was erroneously issued (for example, the CA mistakenly issued a certificate without proper verification). The details of the certificate changed (for example, the subject's name changed). The security association changed (for example, the subject is no longer employed by the organization sponsoring the certificate). The revocation request grace period is the maximum response time within which a CA will perform any requested revocation. This is defined in the certificate practice statement (CPS). The CPS states the practices a CA employs when issuing or managing certificates. You can use three techniques to verify the authenticity of certificates and identify revoked certificates: Certificate Revocation Lists Certificate revocation lists (CRLs) are maintained by the various certificate authorities and contain the serial numbers of certificates that have been issued by a CA and have been revoked along with the date and time the revocation went into effect. The major disadvantage to certificate revocation lists is that they must be downloaded and cross-referenced periodically, introducing a period of latency between the time a certificate is revoked and the time end users are notified of the revocation. Online Certificate Status Protocol (OCSP) This protocol eliminates the latency inherent in the use of certificate revocation lists by providing a means for real-time certificate verification. When a client receives a certificate, it sends an OCSP request to the CA's OCSP server. The server then responds with a status of good, revoked, or unknown. The browser uses this information to determine whether the certificate is valid. Certificate Stapling The primary issue with OCSP is that it places a significant burden on the OCSP servers operated by certificate authorities. These servers must process requests from every single visitor to a website or other user of a digital certificate, verifying that the certificate is valid and not revoked. Certificate stapling is an extension to the Online Certificate Status Protocol that relieves some of the burden placed upon certificate authorities by the original protocol. When a user visits a website and initiates a secure connection, the website sends its certificate to the end user, who would normally then be responsible for contacting an OCSP server to verify the certificate's validity. In certificate stapling, the web server contacts the OCSP server itself and receives a signed and timestamped response from the OCSP server, which it then attaches, or staples, to the digital certificate. Then, when a user requests a secure web connection, the web server sends the certificate with the stapled OCSP response to the user. The user's browser then verifies that the certificate is authentic and also validates that the stapled OCSP response is genuine and recent. Because the CA signed the OCSP response, the user knows that it is from the certificate authority, and the time stamp ensures the user that the CA recently validated the certificate. From there, communication may continue as normal. The time savings come when the next user visits the website. 
The web server can simply reuse the stapled certificate without recontacting the OCSP server. As long as the time stamp is recent enough, the user will accept the stapled certificate without needing to contact the CA's OCSP server again. It's common to have stapled certificates with a validity period of 24 hours. That reduces the burden on an OCSP server from handling one request per user over the course of a day, which could be millions of requests, to handling one request per certificate per day. That's a tremendous reduction.

Certificate Formats

Digital certificates are stored in files, and those files come in a variety of formats, both binary and text-based: The most common binary format is the Distinguished Encoding Rules (DER) format. DER certificates are normally stored in files with the .der, .crt, or .cer extension. The Privacy Enhanced Mail (PEM) certificate format is an ASCII text version of the DER format. PEM certificates are normally stored in files with the .pem or .crt extension. You may have picked up on the fact that the .crt file extension is used for both binary DER files and text PEM files. That's very confusing! You should remember that you can't tell whether a CRT certificate is binary or text without actually looking at the contents of the file. The Personal Information Exchange (PFX) format is commonly used by Windows systems. PFX certificates may be stored in binary form, using either the .pfx or the .p12 file extension. Windows systems also use P7B certificates, which are stored in ASCII text format. Table 7.2 provides a summary of certificate formats.

TABLE 7.2 Digital certificate formats

Standard                              Format   File extension(s)
Distinguished Encoding Rules (DER)    Binary   .der, .crt, .cer
Privacy Enhanced Mail (PEM)           Text     .pem, .crt
Personal Information Exchange (PFX)   Binary   .pfx, .p12
P7B                                   Text     .p7b

Asymmetric Key Management

When you're working within the public key infrastructure, it's important that you comply with several best practice requirements to maintain the security of your communications. First, choose your encryption system wisely. As you learned earlier, “security through obscurity” is not an appropriate approach. Choose an encryption system with an algorithm in the public domain that has been thoroughly vetted by industry experts. Be wary of systems that use a “black-box” approach and maintain that the secrecy of their algorithm is critical to the integrity of the cryptosystem. You must also select your keys in an appropriate manner. Use a key length that balances your security requirements with performance considerations. Also, ensure that your key is truly random or, in cryptographic terms, that it has sufficient entropy. Any predictability within the key increases the likelihood that an attacker will be able to break your encryption and degrade the security of your cryptosystem. You should also understand the limitations of your cryptographic algorithm and avoid the use of any known weak keys. When using public key encryption, keep your private key secret! Do not, under any circumstances, allow anyone else to gain access to your private key. Remember, allowing someone access even once permanently compromises all communications that take place (past, present, or future) using that key and allows the third party to impersonate you successfully. Retire keys when they've served a useful life. Many organizations have mandatory key rotation requirements to protect against undetected key compromise.
If you don't have a formal policy that you must follow, select an appropriate interval based on the frequency with which you use your key. Continued reuse of a key creates more encrypted material that may be used in cryptographic attacks. You might want to change your key pair every few months, if practical. Back up your key! If you lose the file containing your private key because of data corruption, disaster, or other circumstances, you'll certainly want to have a backup available. You may want to either create your own backup or use a key escrow service that maintains the backup for you. In either case, ensure that the backup is handled in a secure manner. Hardware security modules (HSMs) also provide an effective way to manage encryption keys. These hardware devices store and manage encryption keys in a secure manner that prevents humans from ever needing to work directly with the keys. HSMs range in scope and complexity from very simple devices, such as the YubiKey, that store encrypted keys on a USB drive for personal use, to more complex enterprise products that reside in a datacenter. Cloud providers, such as Amazon and Microsoft, also offer cloud-based HSMs that provide secure key management for infrastructure-as-a-service (IaaS) services.

Cryptographic Attacks

If time has taught us anything, it is that people frequently do things that other people thought were impossible. Every time a new code or process is invented that is thought to be unbreakable, someone comes up with a method of breaking it. Let's look at some common code-breaking techniques.

Brute Force This method simply involves trying every possible key. It is guaranteed to work, but it is likely to take so long that it is not usable. For example, to break a Caesar cipher, there are only 26 possible keys, which you can try in a very short time. But even DES, which has a rather weak key, would take 2^56 different attempts. That is 72,057,594,037,927,936 possible DES keys. To put that in perspective, if you try 1 million keys per second, it would take you more than 2,280 years to try them all.

Frequency Analysis Frequency analysis involves looking at the blocks of an encrypted message to determine if any common patterns exist. Initially, the analyst doesn't try to break the code but looks at the patterns in the message. In the English language, the letters e and t and words like the, and, that, it, and is are very common. Single letters that stand alone in a sentence are usually limited to a and I. A determined cryptanalyst looks for these types of patterns and, over time, may be able to deduce the method used to encrypt the data. This process can sometimes be simple, or it may take a lot of effort. This method works only on the historical ciphers that we discussed at the beginning of this chapter. It does not work on modern algorithms.

Known Plain Text This attack relies on the attacker having pairs of known plain text along with the corresponding ciphertext. This gives the attacker a place to start attempting to derive the key. With modern ciphers, it would still take many billions of such combinations to have a chance at cracking the cipher. This method was, however, successful at cracking the German Naval Enigma. The code breakers at Bletchley Park in the UK realized that all German Naval messages ended with Heil Hitler. They used this known plain-text attack to crack the key.

Chosen Plain Text In this attack, the attacker obtains the ciphertexts corresponding to a set of plain texts of their own choosing.
This allows the attacker to attempt to derive the key used and thus decrypt other messages encrypted with that key. This can be difficult, but it is not impossible. Advanced methods such as differential cryptanalysis are types of chosen plain-text attacks.

Related Key Attack This is like a chosen plain-text attack, except the attacker can obtain ciphertexts encrypted under two different keys. This is a useful attack if you can obtain the plain text and matching ciphertext.

Birthday Attack This is an attack on cryptographic hashes, based on something called the birthday theorem. The basic idea is this: How many people would you need to have in a room to have a strong likelihood that two would have the same birthday (month and day, but not year)? Obviously, if you put 367 people in a room, at least two of them must have the same birthday, since there are only 365 days in a year, plus one more in a leap year. The paradox is not asking how many people you need to guarantee a match—just how many you need to have a strong probability. Even with 23 people in the room, you have a 50 percent chance that two will have the same birthday. The probability that the first person does not share a birthday with any previous person is 100 percent, because there are no previous people in the set. That can be written as 365/365. The second person has only one preceding person, and the odds that the second person has a birthday different from the first are 364/365. The third person might share a birthday with two preceding people, so the odds of having a birthday different from either of the two preceding people are 363/365. Because each of these is independent, we can compute the probability as follows:

365/365 × 364/365 × 363/365 × ... × 343/365

(343/365 is the probability that the 23rd person does not share a birthday with any preceding person.) When we convert these to decimal values and multiply them together, it yields (truncating at the third decimal point):

1.000 × 0.997 × 0.994 × ... ≈ 0.493

This 49 percent is the probability that 23 people will not have any birthdays in common; thus, there is a 51 percent (better than even odds) chance that two of the 23 will have a birthday in common. The math works out to needing about 1.7√n inputs to get a collision. Remember, a collision is when two inputs produce the same output. So for an MD5 hash, you might think that you need 2^128 + 1 different inputs to get a collision—and for a guaranteed collision you do. That is an exceedingly large number: 3.4028236692093846346337460743177e+38. But the birthday paradox tells us that to just have a 51 percent chance of there being a collision with a hash you only need 1.7√n (n being 2^128) inputs. That number is still very large: 31,359,464,925,306,237,747.2. But it is much smaller than the brute-force approach of trying every possible input.

Downgrade Attack A downgrade attack is sometimes used against secure communications such as TLS in an attempt to get the user or system to inadvertently shift to less secure cryptographic modes. The idea is to trick the user into shifting to a less secure version of the protocol, one that might be easier to break.

Exam Note
As you prepare for the exam, be sure to understand downgrade attacks as well as the concepts of collisions and birthday attacks, as these are specifically mentioned in the exam objectives!

Hashing, Salting, and Key Stretching

Rainbow table attacks attempt to reverse hashed password values by precomputing the hashes of common passwords. The attacker takes a list of common passwords and runs them through the hash function to generate the rainbow table. They then search through lists of hashed values, looking for matches to the rainbow table. The most common approach to preventing these attacks is salting, which adds a randomly generated value to each password prior to hashing. Key stretching is used to create encryption keys from passwords in a strong manner. Key stretching algorithms, such as the Password-Based Key Derivation Function v2 (PBKDF2), use thousands of iterations of salting and hashing to generate encryption keys that are resilient against attack. A brief sketch illustrating salting and key stretching appears after the following note.

Exam Note
Hashing, salting, and key stretching are all specifically mentioned in the exam objectives. Be sure that you understand these concepts!
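As promised above, here is a minimal sketch tying together salting and key stretching, using Python's standard hashlib and os modules. The password, salt length, and iteration count shown are illustrative choices, not recommendations drawn from this chapter.

import hashlib
import os

password = b"P@ssw0rd!"   # example password (hypothetical)
salt = os.urandom(16)     # random per-password salt defeats precomputed rainbow tables

# PBKDF2 repeatedly applies an HMAC over the salted password (key stretching),
# so every guess an attacker makes becomes expensive to compute.
derived_key = hashlib.pbkdf2_hmac(
    "sha256",    # underlying hash function
    password,    # password being stretched
    salt,        # per-password random salt
    600_000,     # iteration count; higher values slow offline guessing attacks
    dklen=32,    # length of the derived key in bytes
)

# The salt is not a secret; store it alongside the derived key so the
# verification step can recompute the same value later.
print(salt.hex(), derived_key.hex())

Because each password gets its own random salt, two users with the same password produce different stored values, and a single precomputed table can no longer be reused against an entire password database.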
Exploiting Weak Keys

There are also scenarios in which someone is using a good cryptographic algorithm (like AES) but has it implemented in a weak manner—for example, using weak key generation. A classic example is the Wired Equivalent Privacy (WEP) protocol. This protocol uses an improper implementation of the RC4 encryption algorithm and has significant security vulnerabilities. That is why WEP should never be used on a modern network.

Exploiting Human Error

Human error is one of the major causes of encryption vulnerabilities. If an email is sent using an encryption scheme, someone else may send it in the clear (unencrypted). If a cryptanalyst gets ahold of both messages, the process of decoding future messages will be simplified considerably. A code key might wind up in the wrong hands, giving insights into what the key consists of. Many systems have been broken into as a result of these types of accidents. A classic example involved the transmission of a sensitive military-related message using an encryption system. Most messages have a preamble that informs the receiver who the message is for, who sent it, how many characters are in the message, the date and time it was sent, and other pertinent information. In this case, the preamble was sent in clear text, and this information was also encrypted and put into the message. As a result, the cryptanalysts gained a key insight into the message contents. They were given approximately 50 characters that were repeated in the message in code. This error caused a relatively secure system to be compromised. Another error is to use weak or deprecated algorithms. Over time, some algorithms are no longer considered appropriate. This may be due to some flaw found in the algorithm. It can also be due to increasing computing power. For example, in 1976 DES was considered very strong. But advances in computer power have made its key length too short. Although the algorithm is sound, the key size makes DES a poor choice for modern cryptography, and that algorithm has been deprecated.

Emerging Issues in Cryptography

As you prepare for the Security+ exam, you'll need to stay abreast of some emerging issues in cryptography and cryptographic applications. Let's review some of the topics covered in the Security+ exam objectives.

Tor and the Dark Web

Tor, formerly known as The Onion Router, provides a mechanism for anonymously routing traffic across the Internet using encryption and a set of relay nodes. It relies on a technology known as perfect forward secrecy, where layers of encryption prevent nodes in the relay chain from reading anything other than the specific information they need to accept and forward the traffic.
By using perfect forward secrecy in combination with a set of three or more relay nodes, Tor allows for both anonymous browsing of the standard Internet and the hosting of completely anonymous sites on the Dark Web.

Blockchain

The blockchain is, in its simplest description, a distributed and immutable open public ledger. This means that it can store records in a way that distributes those records among many different systems located around the world and do so in a manner that prevents anyone from tampering with those records. The blockchain creates a data store that nobody can tamper with or destroy. The first major application of the blockchain is cryptocurrency. The blockchain was originally invented as a foundational technology for Bitcoin, allowing the tracking of Bitcoin transactions without the use of a centralized authority. In this manner, blockchain allows the existence of a currency that has no central regulator. Authority for Bitcoin transactions is distributed among all participants in the Bitcoin blockchain. Although cryptocurrency is the blockchain application that has received the most attention, there are many other uses for a distributed immutable ledger—so much so that new applications of blockchain technology seem to be appearing every day. For example, property ownership records could benefit tremendously from a blockchain application. This approach would place those records in a transparent, public repository that is protected against intentional or accidental damage. Blockchain technology might also be used to track supply chains, providing consumers with confidence that their produce came from reputable sources and allowing regulators to easily track down the origin of recalled produce.

Lightweight Cryptography

There are many specialized use cases for cryptography that you may encounter during your career where computing power and energy might be limited. Some devices operate at extremely low power levels and put a premium on conserving energy. For example, imagine sending a satellite into space with a limited power source. Thousands of hours of engineering go into getting as much life as possible out of that power source. Similar cases happen here on Earth, where remote sensors must transmit information using solar power, a small battery, or other limited energy sources. Smartcards are another example of a low power environment. They must be able to communicate securely with smartcard readers, but only using the energy either stored on the card or transferred to it by a magnetic field. In these cases, cryptographers often design specialized hardware that is purpose-built to implement lightweight cryptographic algorithms with as little power expenditure as possible. You won't need to know the details of how these algorithms work, but you should be familiar with the concept that specialized hardware can minimize power consumption. Another specialized use case for cryptography arises when you need very low latency. That simply means that the encryption and decryption should not take a long time. Encrypting network links is a common example of low latency cryptography. The data is moving across a network quickly and the encryption should be done as quickly as possible to avoid becoming a bottleneck. Specialized encryption hardware also solves many low-latency requirements. For example, a dedicated VPN hardware device may contain cryptographic hardware that implements encryption and decryption operations in highly efficient form to maximize speed.
High resiliency requirements exist when it is extremely important that data be preserved and not destroyed accidentally during an encryption operation. In cases where resiliency is extremely important, the easiest way to address the issue is for the sender of data to retain a copy until the recipient confirms the successful receipt and decryption of the data.

Homomorphic Encryption

Privacy concerns also introduce some specialized use cases for encryption. In particular, we sometimes have applications where we want to protect the privacy of individuals but still want to perform calculations on their data. Homomorphic encryption technology allows this, encrypting data in a way that preserves the ability to perform computation on that data. When you encrypt data with a homomorphic algorithm and then perform computation on that data, you get a result that, when decrypted, matches the result you would have received if you had performed the computation on the plain-text data in the first place.

Quantum Computing

Quantum computing is an emerging field that attempts to use quantum mechanics to perform computing and communication tasks. It's still mostly a theoretical field, but if it advances to the point where that theory becomes practical to implement, quantum computers may be able to defeat cryptographic algorithms that depend on the difficulty of factoring large numbers into their prime factors. At the same time, quantum computing may be used to develop even stronger cryptographic algorithms that would be far more secure than modern approaches. We'll have to wait and see how those develop to provide us with strong quantum communications in the postquantum era.

Summary

Cryptography is one of the most important security controls in use today and it touches almost every other area of security, ranging from networking to software development. The use of cryptography supports the goals of providing confidentiality, integrity, authentication, and non-repudiation in a wide variety of applications. Symmetric encryption technology uses shared secret keys to provide security for data at rest and data in motion. As long as users are able to overcome key exchange and maintenance issues, symmetric encryption is fast and efficient. Asymmetric cryptography and the public key infrastructure (PKI) provide a scalable way to securely communicate, particularly when the communicating parties do not have a prior relationship.

Exam Essentials

Understand the goals of cryptography. The four goals of cryptography are confidentiality, integrity, authentication, and non-repudiation. Confidentiality is the use of encryption to protect sensitive information from prying eyes. Integrity is
