Chapter 20 Network Security Concepts

THE FOLLOWING COMPTIA NETWORK+ OBJECTIVES ARE COVERED IN THIS CHAPTER:

Domain 4.0 Network Security

4.1 Explain the importance of basic network security concepts.
- Logical security
  - Encryption
    - Data in transit
    - Data at rest
  - Certificates
    - Public key infrastructure (PKI)
    - Self-signed
  - Identity and access management (IAM)
    - Authentication
      - Multifactor authentication (MFA)
      - Single sign-on (SSO)
      - Remote Authentication Dial-in User Service (RADIUS)
      - LDAP
      - Security Assertion Markup Language (SAML)
      - Terminal Access Controller Access Control System Plus (TACACS+)
      - Time-based authentication
    - Authorization
      - Least privilege
      - Role-based access control
- Risk
  - Vulnerability
  - Exploit
  - Threat
- Confidentiality, Integrity, and Availability (CIA) triad
- Audits and regulatory compliance
  - Data locality
  - Payment Card Industry Data Security Standards (PCI DSS)
  - General Data Protection Regulation (GDPR)

An administrator's life was so much simpler in the days before every user had access to the Internet and the Internet had the ability to access organizational data. It was much simpler when an administrator only had to maintain a number of dumb terminals connected to a mainframe. If you didn't have physical access, you simply didn't have a means to access the organization's data. Many of the administrator's headaches have come with the inherent security risks of greater data availability as networks expand. As our world and our networks have become more connected, the need to secure data and keep it away from the eyes of those who can do harm has increased exponentially. Realizing this increasing risk, CompTIA added the domain of Network Security to the Network+ exam a number of years ago. Network security is now a topic in which every Network+ candidate must be proficient in order to pass the exam. A Network+ candidate must understand basic security concepts, common security concerns, and how to mitigate them. This chapter is one of two chapters that focus primarily on security. This particular chapter will cover a variety of security concepts that we will build upon. In this chapter, we will cover basic security terminology, and then we will explore the AAA security framework for authenticating users, authorizing access, and accounting for that access.

Network+ is not the only IT certification that CompTIA offers. Security+ is one of the more popular choices. The topics found in this chapter are a subset of what you need to know for that certification.

Common Security Terminology

Security can be intimidating for the average administrator, because the advanced topics can be confusing and carry tremendous consequences if not fully understood. An administrator could have a rude awakening if their network security is insufficient; they could even end up in the unemployment line. Fortunately, just like networking concepts, there are common security concepts that build on each other. In the following sections, I will cover some common security terminology that will help in understanding the more advanced topics of security as we progress through this chapter.

Threats and Risk

A threat is a perceived danger to the organization, and a risk is the potential that the threat will come to fruition and disrupt the organization. Threat and risk are intertwined concepts whenever we talk about any potential for disruption. You will also find these terms at the basis of risk assessments for organizations, analyzing everything from simple changes in practice to major changes.
When we think of threats, we tend to think of just cyberattack threats, where a bad actor uses the network to carry out an attack. However, threats can also be physical, such as weather or other environmental factors that have the potential to disrupt the organization. As an oversimplified example, let's use a trip to the grocery store to set the stage for understanding threats. If we purchase groceries and place them in the car, there are all kinds of potential dangers (threats) to our groceries. We could get into an accident on the way home, they could be stolen, or our ice cream could melt. For each threat, there is a certain amount of risk. Risk is the likelihood that the threat could or will happen. If you are a good driver, then the risk of an accident is low. If you live in a relatively safe neighborhood, then the risk that the groceries will be stolen is low. However, if it's a really hot day, there is a high risk of the ice cream melting. Unfortunately, an organization's network is a lot more complicated than a grocery store visit. In most organizations there are two types of threats: external and internal.

External Threats

External threats are almost always carried out by an attacker with malicious intent. By definition, an external threat is a threat that you cannot control. The risk, or potential, of these threats comes from our connections to the outside world, such as the Internet, email, and our Internet-facing servers, as well as from our proactiveness in mitigating the threats. If you mitigate an external threat, then the threat is less likely to occur and the risk is lower. Examples of external threats are denial-of-service (DoS) attacks, ransomware, and viruses, just to name a few. There are many ways of mitigating the risks associated with the aforementioned threats. For example, DoS attacks can be mitigated with firewall rules, ransomware can be mitigated with backups, and viruses can be mitigated with antimalware software. All of these mitigations lower the potential risk for the perceived threats. External threats do not need to be exclusive to security threats; they can also threaten the availability of information. For example, say you served information from your organization, had only one connection to the Internet for the server, and the link went down. This would be an example of an external threat outside of your direct control. Mitigating the threat might mean obtaining a second link to the Internet as a failover. External nonsecurity threats and risks are a much broader subject and lend themselves better to a discussion of risk assessment, which is outside the scope of this chapter.

Internal Threats

Internal threats, also known as insider threats or threats from within, are potentially carried out by an employee. The various motivations behind attacks are monetary gain, ideology, and revenge, just to name a few. A disgruntled employee can compromise your organization's network by intentionally leaking the organization's data or intentionally running malicious code. Internal threats can be much more dangerous than external threats because employees often know exactly how systems are configured and how to circumvent controls. However, because these threats emanate from within our organization, we can mitigate or control many of the potential threats. Later in this chapter, we will explore some ways to mitigate insider threats.
Unintentional Threats

An unintentional threat is a threat that is not malicious in its intent, such as a leaked password or misconfigured antimalware software. The number of unintentional threats to an organization can be expansive. As network administrators, we should focus on the threats we can directly mitigate to effectively lower our risk. For most unintentional threats, we can use system policies, human resource policies, training, or a combination of these to mitigate the potential threat.

Vulnerability

Vulnerabilities introduce weakness into the security of our organization. Therefore, vulnerabilities can contribute to the threats an organization could fall victim to. These weaknesses in security can be physical or network-based. An example of a physical vulnerability is an unlocked door to a sensitive area where data is accessible. Network vulnerabilities are typically found in applications, operating systems, and network products. Vulnerabilities are the reason we need to constantly patch and update applications and network operating systems. However, even with constant patching, we can never be assured that we have eliminated all vulnerabilities. Some vulnerabilities will always exist, and some may never be known either by the owners of the system or by the threat actors attempting to access these systems. Patching does, however, lower our risk, or potential, for an attack through known vulnerabilities.

Common Vulnerabilities and Exposures

Common Vulnerabilities and Exposures (CVE) is a system that provides a reference method for publicly known vulnerabilities and exposures. The CVE system was originally created in 1999 by a working group based upon a white paper published by David E. Mann and Steven M. Christey of the MITRE Corporation. The purpose of the CVE system is to create a common naming scheme for vulnerabilities while reducing any overlap in documentation from the various agencies documenting the same vulnerability. A typical CVE identifier follows the common nomenclature of CVE-2020-17084. This particular CVE details a buffer overflow in Microsoft Exchange Server that was discovered in 2020. By investigating this CVE on the Internet, you can see that regardless of who hosts the information, it pertains to the same vulnerability. For example, Microsoft references the same vulnerability that the National Institute of Standards and Technology (NIST) references using this CVE number. All sources have the same relative information about the vulnerability, such as affected versions, a score that explains the associated risk, the impact, and references. Keep in mind that the CVE is not a database. It is a numbering system for tracking vulnerabilities. The numbering and characterization of a CVE is up to the CVE Numbering Authority (CNA). As in the previous example, Microsoft is a CNA and therefore created the initial CVE-2020-17084 document that all other CVE repositories must reference similarly. There are CNAs all around the world, and most of them are software manufacturers such as IBM, Microsoft, and VMware. However, some of the CNAs are security organizations. The underlying theme is not who creates the CVE, but that once it is created, the CVE should be consistent across all references.

EXERCISE 20.1 Identifying Vulnerabilities

This exercise will help you identify vulnerabilities of applications installed on your operating system.
It is not a complete assessment of all vulnerabilities, but it will help you understand the application vulnerabilities that exist and how they can be mitigated.

1. On your operating system, navigate to your installed applications list. This is typically done by opening your Settings app and clicking Installed Apps.
2. Navigate to https://cve.mitre.org and select Search CVE List.
3. Research each application by submitting a search for it.
4. Review each CVE result and compare the affected version to the installed version.
5. Research how to mitigate the security vulnerability.

You will find that not every vulnerability is patchable by upgrading to the latest software. You will also find that the steps to mitigate a vulnerability are often found on the software publisher's site. The CVE is simply there to let you know that there is a security concern.

Many different antimalware software packages will scan your computer for malware, and as an added benefit, they will also identify vulnerable software installed. Typically, these antimalware packages are tailored for the end user and do not detail the CVEs associated with each product. You will find that more complex systems for the organization, such as the Microsoft 365 E5/A5 Security add-on, will report vulnerabilities across the entire organization to the administrator. The administrator can then plan to mitigate vulnerabilities and threats across the entire organization. You can read more about the Microsoft product at https://learn.microsoft.com/en-us/microsoft-365/security/defender-vulnerability-management/defender-vulnerability-management?view=o365-worldwide.

Zero-Day

Publicly known vulnerabilities usually have either a workaround or a patch, and at the very least they are known vulnerabilities. However, some vulnerabilities are discovered before a patch or workaround can be made available. These vulnerabilities are known as zero-day vulnerabilities, and they carry high risk because there is no ready way to defend against an attack. Sometimes, you might not even be aware that a vulnerability exists! A zero-day vulnerability doesn't mean that an attack is imminent; some zero-day vulnerabilities are remediated the very next day. However, it does mean that until a patch or workaround is devised, you are vulnerable. The term zero-day can be applied to both a vulnerability and an exploit. A zero-day vulnerability means that a weakness is known before the vendor has had time to publish a workaround or patch. A zero-day exploit is much more serious, since there is an automated method created to exploit a weakness not yet documented. You may come across the term zero-day used in either of these two contexts.

Exploit

A vulnerability is a weakness in security, as you have learned. An exploit is a method that acts on a weakness (vulnerability). I will use a luggage lock as an example, mainly because they are notoriously weak security solutions. A simple paper clip bent appropriately could be used to exploit the weakness and open the lock. It would be nice if all problems were as simple as getting a better lock, but networks are complicated systems that can have many complex vulnerabilities, as well as known exploits. When we talk about exploits, we generally refer to scripts, software, or sequences of commands that exploit a known vulnerability. The CVE published against a vulnerability can be used to patch or block the exploit, but it all depends on the vulnerability.
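When you need to check many applications against published CVEs, the manual search from Exercise 20.1 can be scripted. The following is a minimal sketch that queries NIST's public National Vulnerability Database (NVD) CVE API; the endpoint and JSON field names follow NVD's documented 2.0 interface, but treat them as assumptions and verify them against the current NVD documentation before relying on the script.

import requests  # third-party HTTP library

NVD_URL = "https://services.nvd.nist.gov/rest/json/cves/2.0"

def search_cves(keyword, max_results=5):
    # Query the NVD CVE API 2.0 for entries matching a product keyword.
    response = requests.get(
        NVD_URL,
        params={"keywordSearch": keyword, "resultsPerPage": max_results},
        timeout=30,
    )
    response.raise_for_status()
    for item in response.json().get("vulnerabilities", []):
        cve = item["cve"]
        # Each CVE entry carries an ID and language-tagged descriptions.
        description = next(
            (d["value"] for d in cve["descriptions"] if d["lang"] == "en"),
            "(no description)",
        )
        print(cve["id"], "-", description[:80])

search_cves("Microsoft Exchange Server")

Unauthenticated queries to the NVD API are rate-limited, so a production version of this script would batch its requests and cache the results.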
Just like zero-day vulnerabilities, where there is a weakness that is not yet documented, there are zero-day exploits, in which an attack is carried out against a vulnerability that has not yet been documented. The Stuxnet malware was used to sabotage Iranian centrifuges in a nuclear facility in 2010. The Stuxnet malware exploited a vulnerability that existed in the Microsoft Print Spooler service. The malware infected computers and altered the programmable logic controller (PLC) code that ran the centrifuges. No one knew of the vulnerability or the crafted exploit until after the attack was launched and a significant amount of equipment was damaged. This is perhaps the most famous zero-day vulnerability to date, mainly because it was never disclosed and remained in the wild until it was found. More information on Stuxnet can be found at https://spectrum.ieee.org/the-real-story-of-stuxnet.

Confidentiality, Integrity, and Availability

One of the most important and basic security concepts is the CIA triad, shown in Figure 20.1. Although the Central Intelligence Agency (CIA) has the same acronym as the CIA triad, the two have nothing to do with each other, other than that the CIA organization adheres to the CIA triad for its own information security, but I digress.

FIGURE 20.1 The CIA triad

The CIA triad stands for confidentiality, integrity, and availability, and these concepts apply to information security and the storage of information. The triad defines how we protect the confidentiality, integrity, and availability of an organization's information.

Confidentiality

The confidentiality of information focuses on limiting access to only the individuals allowed to access the information while denying access to those individuals who are restricted from accessing it. Confidentiality can be achieved with physical locks (such as locked doors), file cabinets, safes, and, in extreme cases, security guards. Confidentiality can also be achieved electronically with authentication, encryption, and firewalls. Essentially, you need to decide how secure the information needs to be, and that will dictate the level and method of protecting the data's confidentiality.

Integrity

The integrity of information focuses on its accuracy and how susceptible it is to being altered by an unauthorized individual. The integrity of data must be protected both at rest and in transit. The integrity of data at rest can be protected with file hashes to detect unauthorized altering of the data, and unauthorized altering of information can be prevented with the use of access control lists (ACLs). The integrity of data in transit can be protected with the use of digital signatures and checksums, such as the Authentication Header (AH) protocol. This ensures that the data is not altered as it is transmitted across the network.

Availability

The availability of information pertains to the uptime of the systems serving the information. The availability of data can be heightened by using redundant systems or redundant components to create highly available information systems. Information can also be backed up, which also raises the availability of the information because you are creating a point-in-time snapshot of the data. The restoration of information must be considered from two perspectives: the point in time from which you can restore the data and the time it takes to restore the data to that point.
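The file hashes mentioned under integrity are easy to demonstrate. The sketch below, using only the Python standard library, computes a SHA-256 digest for a file; comparing a stored digest against a freshly computed one reveals whether the data at rest has been altered. The file name is a hypothetical placeholder.

import hashlib

def file_sha256(path):
    # Hash the file in chunks so large files don't need to fit in memory.
    digest = hashlib.sha256()
    with open(path, "rb") as handle:
        for chunk in iter(lambda: handle.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

baseline = file_sha256("patient_records.db")  # hypothetical file

# ... later, detect unauthorized modification of the data at rest ...
if file_sha256("patient_records.db") != baseline:
    print("Integrity check failed: the file has been altered.")

A mismatch only detects a change; it cannot say who made it, which is why hashes are typically paired with ACLs and audit logging.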
When designing systems that will store and serve data, all three of these concepts must be applied to support the security of the information. Confidentiality alone cannot ensure that data is not taken offline or altered. Integrity alone cannot ensure that the information is not accessed by an unauthorized individual or taken offline. Finally, if the information is taken offline, the data is certainly secure and can't be tampered with, but it is useless to the authorized users because it is unavailable. Therefore, all three of these concepts must be applied to the information in equal combination.

Encryption

In an effort to maintain confidentiality and integrity, data encryption should be employed. Data encryption helps ensure that data loss to an unauthorized user will not occur. Although the term data loss might be construed as the actual loss of data in the form of availability, the term actually means that you have lost control of the data. Without encryption, a threat actor can obtain a copy of the data, the data could be sent unintentionally to an unauthorized user, or it could be modified in an unauthorized manner. Consider the example of a laptop with sensitive patient record information stored on it. If the laptop were stolen, the threat actor could use any number of utilities to gain unauthorized access to the patient records. However, with encryption (such as BitLocker) enabled, both the operating system and the data would remain encrypted and inaccessible to unauthorized users. There are three concepts associated with data encryption: data in transit, data at rest, and data in use, as shown in Figure 20.2.

FIGURE 20.2 Data and encryption

Data in Transit

Encryption of data in transit refers to information traversing the network, which should always be encrypted so that it cannot be intercepted. Over the past decade, just about every website and application has adopted some form of encryption, so there is no reason not to encrypt data in transit.

Data at Rest

Data at rest refers to data that is written out to disk. Encrypting it is a point of contention, because it is often believed that once the data hits the server, it's safe. However, the data is actually more vulnerable there because it's all in one spot. If a drive needs to be replaced because it went bad, then outside of physical destruction there is no way to assure the data on it is inaccessible unless it was encrypted. When backup tapes are used, encryption is not only a good idea but should be a requirement.

Data in Use

Encryption of data in use refers to data that is in an inconsistent state and/or currently resident in memory. Most of the time, you don't need to be too concerned with data in memory, since protecting it is normally a function of the operating system. However, when data is written to virtual memory such as the pagefile or swap file, it is considered data in use, and therefore it could be intercepted.

The use of encryption is not just a good idea; in a lot of information sectors, the use of encryption is a regulatory requirement. Most regulatory requirements only define that data must be transmitted and stored with strong encryption. Data-in-use encryption is not typically defined, simply because network administrators use off-the-shelf products like Windows, Linux, and SQL, just to name a few. These products typically adhere to standards that make them secure for data that resides in memory. However, some regulatory requirements might require you to tighten the security further (so to speak) on the operating system or application.
One such regulatory requirement is the Federal Information Processing Standards (FIPS), but there are many others.

When securing information with encryption, there are two basic methods of encrypting the information: symmetrical encryption and asymmetrical encryption. Figure 20.3 shows a side-by-side comparison of the two types of encryption methods.

FIGURE 20.3 Symmetrical and asymmetrical encryption

Symmetrical

Symmetrical encryption operates by using the same key to encrypt the information as is used to decrypt the information. Symmetrical encryption is the oldest type of encryption used, dating back thousands of years. It is still useful today for securing various types of information that require a shared key. One of the issues with symmetrical encryption is that it uses a shared key to encrypt and decrypt data. If the key is stolen or leaked, then the data can easily be decrypted by a threat actor. Because the same key used to encrypt the data must be shared with anyone who needs to decrypt the data, the likelihood of the key being leaked is high.

Asymmetrical

Asymmetrical encryption operates by using a different key to encrypt the data than is used to decrypt the data. These two keys are called the public and private keys, and they are created using a complex mathematical algorithm called Rivest-Shamir-Adleman (RSA), named after the original creators of the algorithm. The RSA algorithm multiplies two large prime numbers to produce a common modulus shared by the key pair; the security of the scheme rests on how difficult it is to factor that product back into its primes. One key of the pair is the public key, and the other is the private key. This explanation of the RSA process is oversimplified, and the actual algorithm is very complex, but we need only a surface understanding for day-to-day use, as well as for the exam.

Certificates

The common use of certificates became popular in the mid-1990s, at around the time the Internet became a household word. Certificate services were created for two main purposes: to encrypt and decrypt data for entities and to digitally verify entities. Keep in mind that in the 1990s when the Internet became mainstream, there was a tremendous drive for Internet commerce, and this drove the need for certificate services. Today certificate services are an integral part of the Internet, our daily lives, and the world's commerce. Certificate services were taken for granted by the public, but in June 2013, all of that changed with a former government contractor named Edward Snowden. Edward Snowden released documents showing that the government had been actively listening to Internet conversations for years. This disclosure validated why privacy and the use of encryption were important for the average person. After the disclosure, nobody made a transaction or visited a website without checking for the lock symbol in the address bar. Today, in modern web browsers, encryption is assumed, and if the page isn't encrypted, the browser will alert you. Google Chrome even actively blocks any element that is not encrypted, such as a picture, audio, or video, just to name a few. Certificates are typically used for securing data transfers to and from web browsers in day-to-day operations. We use our web browsers for accessing sensitive information at work, doing our personal banking, accessing social media, and the list goes on. So, securing data in transit is a primary concern to manage the risk of eavesdropping by threat agents. The Hypertext Transfer Protocol over Secure Sockets Layer (SSL), also known as HTTPS, is a method for securing web data communications.
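Underneath HTTPS, both methods come into play: asymmetric keys protect the initial exchange, and a symmetric key encrypts the bulk of the traffic. The following minimal sketch contrasts the two methods, and also previews the signing operation discussed under PKI later in this chapter. It assumes the third-party Python cryptography package; any modern crypto library offers equivalents.

from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

message = b"Patient records batch 42"

# Symmetrical: one shared key both encrypts and decrypts.
shared_key = Fernet.generate_key()
token = Fernet(shared_key).encrypt(message)
assert Fernet(shared_key).decrypt(token) == message

# Asymmetrical: the public key encrypts, only the private key decrypts.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
ciphertext = public_key.encrypt(message, oaep)
assert private_key.decrypt(ciphertext, oaep) == message

# Signing reverses the roles: the private key signs, and anyone can verify
# with the public key, proving both origin and integrity.
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)
signature = private_key.sign(message, pss, hashes.SHA256())
public_key.verify(signature, message, pss, hashes.SHA256())  # raises if tampered

Note the asymmetry in cost as well: RSA operations are far slower than symmetric ones, which is one reason TLS uses asymmetric keys only to establish a shared symmetric session key.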
SSL is a cryptographic suite of protocols that uses public key infrastructure (PKI) to provide secure data transfer. Although the S in HTTPS stands for secure, SSL is the protocol suite used for securing the communications. The SSL suite includes the current standard, Transport Layer Security (TLS) 1.3, which is used for securing websites as of this writing.

Public Key Infrastructure

Public key infrastructure (PKI), also referred to as certificate services, supports many different operations, such as securing websites, client authentication, and code signing, just to name a few. PKI operates on two key concepts: mutual trust and encryption. The concepts seem simple, but don't let that fool you; there are a lot of different configurations for PKI, and it can get complex quickly. Thankfully, we just need to understand the basics, and once you have the basics, your knowledge can be expanded to more advanced configurations. Before we start to learn how PKI works, let's learn some terminology that we can build from:

Certificate Authority (CA) The certificate authority is responsible for enrolling clients and issuing the key pairs. In addition, the CA stores the public key and serves it to anyone who requests it.

Authority Information Access (AIA) The location of the public keys is defined by the Authority Information Access (AIA). The AIA typically defines a path to a file share or a URL used to access the public keys.

Enrollment Policy The enrollment policy contains the requirements the client must fulfill to enroll and obtain a key pair. The enrollment policy might require specific credentials, a monetary fee, or even manual verification, just to name a few. The enrollment process will depend on the organization's needs. Windows allows for auto-enrollment of clients based upon Windows group membership. A digital notary might require physical verification that you are the person receiving the certificate.

Key Pair The public and private key pair provides asymmetrical encryption for PKI. After issuance, the client will receive the private key of the key pair. The private key is stored safely in the client's private key store. The public key of the key pair is stored in the public key store located on the CA for everyone to access. The AIA defines the location where the public key can be obtained. By default, the certificate authority will never store the client's private key.

Root CA This CA is the root of all trust for the PKI model being deployed. Both the client and the device using the key pair for encryption or signing must mutually trust the root CA. The root CA servers can be found in the Certificates MMC on Windows under Trusted Root Certification Authorities. Every operating system will have a similar method for trusting the root CA servers.

Issuing CA The CA that issues the key pair is the issuing CA. Each issuing CA can be set up with different enrollment policies, to issue a specific type of key pair. For example, some issuing CAs can be set up to only issue key pairs for people, while others can be set up to issue key pairs for computers, or even for code signing, just to name a few. For the certificate to be accepted, the issuing CA must be trusted by the client or entity using the certificate.

Certificate Revocation List (CRL) The CRL is a list of certificates (key pairs) that have been revoked for various reasons.
Typically, the trusting party will download the CRL and compare the certificate in question against the list before trusting the certificate. However, this behavior can be optional depending on the use of the certificate.

Expiration Date Each certificate has an expiration date. Certificates must be renewed before they expire, or they will no longer be valid.

Now that we have a better understanding of the terminology associated with public key infrastructure, let's review the two basic ways a key pair can be used: encryption and validating the sender of a message, also known as signing.

Encryption

In Figure 20.4, the sender needs to send an encrypted document to the receiver. We will assume the receiver already fulfilled the enrollment policy requirements, was subsequently enrolled by the issuing CA, and received its private key. The sender will retrieve the receiver's public key from the public key store located on the CA, via the AIA. The sender will then encrypt the document with the receiver's public key and transmit it to the receiver. When the receiver obtains the encrypted document, the receiver's private key will be used to decrypt the document.

Signing

In Figure 20.5, the sender needs to sign a document to assure the sender's identity for the receiver, as well as the integrity of the document. If the document is altered in the process of transfer, the signature will be invalidated. Again, we will assume the sender (in this case) already fulfilled the enrollment policy requirements, was subsequently enrolled by the issuing CA, and received its private key. The sender will use its private key, located in the key store, to encrypt the document and send it to the receiver. When the receiver obtains the document, it will be decrypted with the sender's public key obtained from the public key store via the AIA. Because only the sender's public key can decrypt a document encrypted by the sender's private key, the document is considered irrefutable.

FIGURE 20.4 Encryption with PKI

FIGURE 20.5 Signing with PKI

Public vs. Private CA

It is often misunderstood that all public key infrastructure implementations require a third-party vendor, such as GoDaddy, DigiCert, or GlobalSign, to name a few, to issue a certificate. This is untrue; you can implement a private CA to save costs and gain great flexibility. Public vendors are a great option if you need the root trust pre-established in the browser or application. When you purchase a certificate from one of the big vendors, you can be guaranteed that their root trust is in 99.9% of all web browsers and operating systems. In fact, the list of trusted root certificate authorities must be updated now and then. The Microsoft Windows platform does this with routine Windows Updates, but browsers may update their own lists with updates of their own. When creating a public web server, using a public vendor is a requirement for both compatibility and ease of use. Private CAs are an option if you plan to establish root trust prior to using the certificates. The root trust can be created via Group Policy Object (GPO), Microsoft Intune, or preinstallation of the root certificate, to name a few methods. When you use a private CA, you have the most control and flexibility, but all the work is on you to keep everything up-to-date. Using a private CA is the best option if using a public vendor is too costly.

Self-Signed

One of the key concepts of PKI is mutual trust. The parties participating in PKI must mutually trust the root CA.
In public settings, we can use public vendors, and in corporate environments, we can use private CAs. However, sometimes we just need an individual device to perform some cryptographic function, such as serving the web page for an appliance or encrypting files on your local computer, to name a few. For these instances, self-signed certificates are an option. A self-signed certificate is a certificate that is issued to and issued by the same entity. For example, if the firewall you are working on requires encryption of the web interface, it will require a certificate. You can install a self-signed certificate issued to the firewall by the firewall. Using a self-signed certificate negates the need to create a PKI infrastructure for one or two devices. It does, however, mean that you will need to explicitly trust the issuer by placing the self-signed certificate in your Trusted Root Certification Authorities certificate store.

EXERCISE 20.2 Examining Self-Signed Certificates

This exercise will help you understand how self-signed certificates are used by the Windows operating system.

1. Open the Start menu, enter mmc.exe, and select Run As Administrator.
2. Select Yes in the User Account Control dialog box.
3. Select File, then Add/Remove Snap-In.
4. Find and select Certificates in the Add Or Remove Snap-ins dialog box, and then click Add.
5. Select My User Account from the options in the Certificates Snap-In dialog box, and then click Finish.
6. Click OK in the Add Or Remove Snap-ins dialog box.
7. Open the chevron next to Certificates - Current User, then Personal, then Certificates in the navigation pane.
8. Examine the certificates issued in the results pane of the Microsoft Management Console (MMC). You will find any certificates issued to your account in the results pane. Don't worry if you don't see any certificates issued. If you do have certificates issued, look for the existence of a certificate with the Intended Purpose of Encrypting File System. If you find the cert, skip to step 13.
9. Minimize the MMC application.
10. Create a new text file on the Desktop, and then right-click it and select Properties.
11. Click Advanced in the Properties dialog box, check the box Encrypt Contents To Secure Data, click OK, and then click OK again in the Properties dialog box.
12. Maximize the MMC application and refresh the results pane.
13. Double-click the certificate with the Intended Purpose of Encrypting File System.
14. Examine the Issued To, Issued By, and Valid From fields.

The Windows Encrypting File System (EFS) uses a self-signed certificate to protect the encryption key used to encrypt files. When you encrypt your first file, a self-signed certificate is created on your behalf. Do not delete this certificate, or you will lose access to your decryption key and subsequently will not be able to decrypt your files.
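The "issued to and issued by the same entity" relationship is easy to see when you generate a certificate yourself. Here is a minimal sketch, again assuming the third-party Python cryptography package; the firewall.local name, key size, and one-year lifetime are made-up example values.

from datetime import datetime, timedelta, timezone
from cryptography import x509
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.x509.oid import NameOID

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# Issued to and issued by the same entity: subject equals issuer.
name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "firewall.local")])

cert = (
    x509.CertificateBuilder()
    .subject_name(name)
    .issuer_name(name)
    .public_key(key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(datetime.now(timezone.utc))
    .not_valid_after(datetime.now(timezone.utc) + timedelta(days=365))
    .sign(key, hashes.SHA256())  # signed with its own private key
)

print(cert.subject == cert.issuer)  # True: the defining trait of self-signed

Installing the resulting certificate in the Trusted Root Certification Authorities store is what creates the explicit trust described earlier.

AAA Model

The majority of an administrator's day-to-day job is managing users and managing and monitoring access to resources. These day-to-day tasks can be summarized with the AAA model. The AAA model is a security framework that defines the administration of authentication, authorization, and accounting. You will find the AAA model referenced when configuring network switches, routers, or RADIUS servers, just to name a few. The AAA model is referenced anywhere you need to authenticate users, authorize access, and account for the authorized or unauthorized access. In the following sections, you will learn the basic concepts of the AAA model.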
Authentication

When a user wants to access a resource, they must first prove their identity through the act of authentication: proving they are who they say they are. As shown in Figure 20.6, a user provides their authentication credentials to the identity store, where the credentials are validated. A user can provide authentication credentials using several different factors. The most common authentication factors are something you know (a password), something you have (a smartcard), and something you are (biometrics). Besides the various factors of authentication, there are several protocols that can be used to transmit credentials or aid in the authentication of a user. In the following sections, I will cover in detail all of the various protocols, as well as the various factors of authentication that can be used to authenticate a user or computer.

FIGURE 20.6 Authentication components

Multifactor Authentication

All authentication is based on something that you know, have, are, or do, or your location. A common factor of authentication is a password, but passwords can be guessed, stolen, or cracked. No one factor is secure by itself, because on its own each can be compromised easily. A fingerprint can be lifted with tape, a key can be stolen, or a location can be spoofed. Multifactor authentication (MFA) helps solve the problem of a compromised single-factor authentication method by combining authentication methods. With multifactor authentication, a single factor will no longer authenticate a user; two or more of the factors discussed in this section are required for authentication. This makes the credentials of the user more complex to compromise. One of the most common examples where MFA is used in everyday life is at an ATM. To withdraw money, a user must provide a card (one factor) and a PIN (a second factor). If you know the PIN but do not have the card, you cannot get money from the machine. If you have the card but do not have the PIN, you cannot get money from the machine. In this section we will cover the most common two-factor (2FA)/MFA methods used by protected applications. The following methods are generally used in conjunction with a traditional username and password combination. It should be assumed that 2FA provides the same functionality as MFA; MFA simply may use more than two factors of authentication. For the rest of this section, we will use the term MFA.

Something You Know

Computing has used the factor of something a person knows since computer security began. This is commonly in the form of a username and password. We can make passwords more complex by requiring uppercase, lowercase, numeric, and symbol combinations. We can also mandate the length of passwords and the frequency with which they are changed. However, the username/password combination is among the most common types of credentials to be stolen, because they can be phished or sniffed with a keylogger.

Something You Have

Authentication based on something a person has relates to physical security. When we use a key fob, RFID tag, or magnetic card to enter a building, we are using something we have. An identification badge is something we have, although technically it is also something we are if it has a picture of us on it. Credit cards have long been something we have to authenticate a transaction. Within the past two decades, they have also become the most commonly stolen credential. In response, credit cards have implemented a chip-based authentication standard called Europay, MasterCard, and Visa (EMV).
EMV makes it harder to steal and duplicate cards. However, if a card is lost, it can still be used by an unscrupulous person, because it is something you physically have.

Something You Are

A decade or so ago, authenticating a user based on something they are was science fiction. We now have biometric readers built into our phones for our convenience! All we need to do is place our finger on the reader, speak into the phone, or allow the phone to recognize our face, and we are instantly logged in. Computers can be outfitted with fingerprint readers to allow logon of users based on their fingerprint as well. When this technology entered the market, there were various ways to get around it, such as tape-lifting a print, playing back someone's voice, or displaying a picture of a person for the camera. These systems have gotten better since they entered the market by storing more points of the fingerprint, listening to other aspects of a user's voice, and looking for natural motion in the camera.

Somewhere You Are

A relatively new factor of authentication is based on somewhere you are. With the proliferation of Global Positioning System (GPS) chips, your current location can authenticate you to a system. This is performed by creating authentication rules based on location. GPS sensors are not the only method of obtaining your current location. Geographic IP information queried from Geo-IP services can also be used in the authentication process. We can restrict login to a specific IP address or to a geographic location based on the IP address provided.

Something You Do

Another relatively new factor of authentication for network systems is based on something you do. Although it has been used for hundreds of years for documents and contracts, a signature is something you do, and you don't even think about how you do it. It is unique to you and only you, because there is a specific way you sign your name. Typing your name into the computer is also something you do without thinking about it, but there is a slight hesitation in your keystrokes that you make without knowing it. Algorithms pick up on this and use the keystrokes as a form of authentication. Arguably, this can be considered biometrics, because it is something your brain does without you consciously thinking about it.

Multifactor Authentication Methods

Now that you understand the various factors of authentication, let's examine the various methods used to employ MFA. Depending on the application, you may see any number of these methods being used for MFA. In the example shown in Figure 20.7, the MFA method can be replaced with any of the following methods to provide MFA for the application.

FIGURE 20.7 Authentication methods

Email

Some applications will use email as an MFA method. However, using email as an MFA option is probably the least secure method, mainly because people reuse passwords. For example, if your banking website username and password are compromised (something you know) and you reuse the same credentials for email, the email factor provides no protection: the threat actor will quickly log into your email and use it to satisfy the factor of authentication. Ideally, the email account itself is protected with MFA in a way that requires something you have. Email is useful as a notification method when someone logs into a secure account. However, keep in mind that threat agents know this as well. If your email account is compromised, a threat agent will often create a rule in your mailbox to dump these notifications directly to the trash.
They have also been known to create forwarding rules to redirect communications directly to their own disposable email accounts.

Short Message Service

Some applications will allow the use of Short Message Service (SMS) text messages as the MFA method. When this method is used, a simple text message is sent to the user's phone number. The message will contain a random 5- to 8-digit code that the user will use to satisfy the MFA requirement. When you first set up this MFA method, the protected application will request the code before turning on MFA. This is done to verify that the phone number is correct and that you can receive text messages before the security is applied to your account.

Voice Call

Some applications that are protected by MFA will allow voice calls to be initiated to the end user. This is usually done if the person does not have a phone that accepts text messages. The voice call will recite a 5- to 8-digit code that the user will use to satisfy the MFA requirement. This process is similar to SMS, with the difference being that it is an automated voice call.

Time-Based Hardware and Software Tokens

Time-based hardware tokens are devices that enable the user to generate a one-time password (OTP) to authenticate their identity. SecurID from RSA is one of the best-known examples of a physical hardware token, as shown in Figure 20.8.

FIGURE 20.8 An RSA security key fob

Time-based hardware tokens operate by rotating a code, typically every 60 seconds. The token typically uses the time-based one-time password (TOTP) algorithm or the counter-based HMAC-based one-time password (HOTP) algorithm to calculate the rotating code. This rotating code is combined with a user's PIN or password for authentication. A time-based hardware token is considered multifactor authentication because it is something you have (the hardware token) along with something you know, such as your PIN or password. A new type of time-based token is becoming the standard, and it can be considered a time-based software token, or soft token. It operates the same as a hardware token, but it is an application on your cell phone that provides the code. Google Authenticator is one example of these types of applications. Microsoft also has an authenticator application similar to Google Authenticator. When configuring MFA on an application, you have two ways of adding an account to the authenticator application: you can take a picture of a quick response (QR) code, or you can enter a setup key into the authenticator application. If you choose to use a QR code, the application turning MFA on will present a QR code that can be scanned by the authenticator application. If you choose to use a setup key, the application turning on MFA will provide a key. There is generally a second step before the application is protected by MFA, where you will be required to enter a code from the authenticator application into the protected application. A lengthy one-time-use backup key is also generated, in case you need to turn MFA off in the event your device is lost or stolen.
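The rotating code itself is not mysterious. The following sketch, using only the Python standard library, implements the TOTP calculation from RFC 6238 with common defaults (SHA-1, 30-second steps, 6 digits); the Base32 secret is the kind of setup key an authenticator application stores, and the value shown here is a made-up example.

import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, step=30, digits=6):
    # Decode the Base32 setup key shared between the app and the server.
    key = base64.b32decode(secret_b32, casefold=True)
    # Both sides derive the same counter from the current Unix time.
    counter = int(time.time()) // step
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation (RFC 4226): pick 4 bytes based on the last nibble.
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # hypothetical setup key

Because the code depends only on the shared secret and the clock, the server can compute the same value independently; no message needs to travel between the token and the server.

Single Sign-On

One of the big problems larger networks must deal with is the need for users to access multiple systems or applications. This may require users to remember multiple accounts and passwords. An alternative is to require each application to support Active Directory authentication, but that creates other considerations. Single sign-on is a great benefit to cloud applications and can aid in implementing a zero-trust model.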
The zero-trust model is a security strategy that assumes that no one and nothing can be trusted to access a resource. The resource requires authentication each time it is accessed, to provide the greatest level of security. With typical authentication, this would require the user to log in repeatedly. Single sign-on helps by allowing the user to pass a token (cookie) each time the user accesses the resource; this process is totally unbeknownst to the user. Single sign-on (SSO) is an aspect of authentication and not a protocol. SSO assists the user in logging on by allowing the user to authenticate to the first application and then reusing the credentials for other applications. This way, the user no longer has to enter the same username and password for each application they access. Single sign-on is often used with cloud-based resources. The principle behind SSO is that the resource, also known as the service provider, will trust that the user has already been authenticated by the authentication server, also known as the identity provider (IdP). The IdP will send a claim on behalf of the user, as shown in Figure 20.9. This claim can contain any number of Active Directory user attributes, such as first name, last name, email, and username. It is important to understand that at no time during the authentication process are the username and password sent to the service provider that is requesting authentication. The service provider must trust that the user has already been authenticated and accept the claim at face value.

FIGURE 20.9 Claims-based authentication

The Security Assertion Markup Language (SAML) protocol is used to initially authenticate the user to the IdP and then to authenticate the user to the service provider. The SAML protocol is based on Extensible Markup Language (XML) and provides a universal method for obtaining the claims for the user. The SAML protocol also supports signing and encryption of claims so that the claims being passed can be trusted as originating from the IdP. Setting up SSO with SAML can be challenging, because you must coordinate the accepted claims sent by your IdP, and the claims must match the service provider's expectations. Other settings, such as the mutual use of signing certificates and encryption, must match as well. Some of the best deployments of SSO happen when both the IdP admin (typically you) and the service provider admin can get on a conference call to set up, test, and adjust settings. If you do your homework up front, you can get everything up and running within about 15 minutes, as opposed to days or weeks without proper communication and feedback from the service provider.

Remote Authentication Dial-In User Service

Remote Authentication Dial-In User Service (RADIUS) was originally proposed as an Internet Engineering Task Force (IETF) standard. It has become a widely adopted industry standard for authenticating users and computers for network systems. RADIUS creates a common authentication system, which allows for centralized authentication and accounting. The origins of RADIUS are in the original Internet service provider (ISP) dial-up days, as its name describes. Today, RADIUS is commonly used for authentication of virtual private networks (VPNs), wireless systems, and any network system that requires a common authentication system. RADIUS operates as a client-server protocol. The RADIUS server controls authentication, authorization, and accounting (AAA).
The RADIUS client can be a wireless access point, a VPN concentrator, or a wired switch. The RADIUS client will communicate with the RADIUS server via UDP port 1812 for authentication and UDP port 1813 for accounting. RADIUS is synonymous with the AAA security framework, since it provides authentication, authorization, and accounting for users and computers.

Authentication The main purpose of the RADIUS server is to provide centralized authentication of users or computers. This can be performed by means of username and password, or authentication can be extended with MFA. We can even authenticate computers based upon their attributes, such as their MAC address. All of these authentication methods are dependent on the RADIUS installation.

Authorization Authorization of a user or computer to allow a connection to a resource is based upon the attributes returned after successful authentication. The returned attributes can be used by the authenticating application to configure the user account or computer with a VLAN, configure a connection type, or even enforce restrictions, just to name a few.

Accounting Accounting is performed by recording each transaction to the data store configured with the RADIUS server. The key takeaway is that the RADIUS server accepts the RADIUS accounting packet and then stores the data.

The RADIUS server can be installed on many different operating systems, such as Linux and Windows. Microsoft Windows Server includes an installable feature, called the Network Policy Server (NPS), that provides RADIUS functionality. The authentication of users and computers is based upon Active Directory. The authorization is based upon the settings in the NPS. The accounting is directed to a logfile in the file system, but accounting can also be configured to use a Microsoft SQL database.

Terminal Access Controller Access Control System Plus

The Terminal Access Controller Access-Control System Plus (TACACS+) protocol is also an AAA method and an alternative to RADIUS. Like RADIUS, it is capable of performing authentication on behalf of multiple wireless APs, RRAS servers, or even LAN switches that are 802.1X capable. Based on its name, you would think it's an extension of the TACACS protocol (and in some ways it is), but the two definitely are not compatible. Here are two major differences between TACACS+ and RADIUS:

- RADIUS combines user authentication and authorization into one profile, but TACACS+ separates the two.
- TACACS+ utilizes the connection-based TCP port 49, but RADIUS uses UDP instead.

Even though both are commonly used today, because of these two reasons TACACS+ is considered more stable and secure than RADIUS. Figure 20.10 shows how TACACS+ works.

FIGURE 20.10 TACACS+ login and logout sequence

When a TACACS+ session is closed, the information in the following list is logged, or accounted for. This isn't a complete list; it's just meant to give you an idea of the type of accounting information TACACS+ gathers:

- Connection start time and stop time
- The number of bytes sent and received by the user
- The number of packets sent and received by the user
- The reason for the disconnection

LDAP

Lightweight Directory Access Protocol (LDAP) is an open standard directory service protocol originally defined by the IETF. LDAP is based on the earlier X.500 standard, which uses the Directory Access Protocol (DAP), but LDAP is more efficient and requires less code; therefore, it is considered lightweight in its implementation.
LDAP operates as a client-server protocol used for looking up objects in a directory service and their respective attributes. LDAP was adopted by Microsoft for Active Directory (AD) lookups of objects on domain controllers. It was released with Windows 2000 Server to compete with Novell Directory Services (NDS). Active Directory is a highly scalable directory service that can contain many different objects, including users, computers, and printers, just to name a few. Today, LDAP is an integral part of Active Directory for quick lookups of objects. It is also important to understand the components of Active Directory. Active Directory itself is not the authentication mechanism; Active Directory is only the directory for storing objects. LDAP assists in the lookup of objects stored inside of Active Directory. Kerberos is the authentication mechanism that is used with Active Directory to actually authenticate the user or computer, as well as its subsequent authentication to other servers and services.

An LDAP client sends requests to an LDAP server with a specifically formatted uniform resource identifier (URI). The URI will contain the object to search for and the attributes to be retrieved. In addition, filters can be supplied so that only specific objects are searched. LDAP uses a default protocol and port of TCP/389. When SSL is used with LDAP, called LDAPS, the protocol and port of TCP/636 is used. The following LDAP query will search an LDAP data store for the object with a user ID of rick.sanchez and return the given name (givenName), surname (sn), and common name (cn) attributes. The directory searched will be contoso.com, located on the LDAP server (DC) of dc1.contoso.com listening on port 389.

ldap://dc1.contoso.com:389/dc=contoso,dc=com?givenName,sn,cn?(uid=rick.sanchez)

As you can see, forming your own LDAP query is intimidating, and searching for a user in a directory such as AD is simpler with a right-click. However, it is important to understand what happens under the covers (so to speak). Typically, when configuring LDAP as an authentication method, you will be working with Microsoft Active Directory. Other operating systems can be configured with LDAP, but Microsoft is the dominant consumer of the technology.
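Under the covers, an LDAP client library performs the same query that the URI expresses. A minimal sketch using the third-party Python ldap3 package follows; the server name, service account, and password are hypothetical, and in production you would use LDAPS on TCP/636 rather than sending a bind password over TCP/389.

from ldap3 import ALL, Connection, Server  # third-party ldap3 package

# Hypothetical domain controller and service credentials.
server = Server("dc1.contoso.com", port=389, get_info=ALL)
conn = Connection(server, user="CONTOSO\\svc_ldap", password="...",
                  auto_bind=True)

# Same search as the URI above: base DN, filter, and requested attributes.
conn.search(
    search_base="dc=contoso,dc=com",
    search_filter="(uid=rick.sanchez)",
    attributes=["givenName", "sn", "cn"],
)

for entry in conn.entries:
    print(entry.givenName, entry.sn, entry.cn)

Authorization

In this section, we'll discuss how authorization is needed when using the following:

- Identity and access management
- Least privilege
- Role-based access control
- Geofencing

Identity and Access Management

Identity and access management (IAM) is a security framework used for the authentication and authorization of users. The IAM model has been adopted by service providers as a means of managing users within their applications. The IAM framework allows a service provider to define the authentication of users by means of a username and password, a certificate key pair, or even SSO, just to name a few methods. The framework also defines the authorization of user access to a resource within the service provider. Many service providers have adopted a model of security in which the object and its accompanying permission are defined. This allows granular control of what can be accessed by the user and what can be done with the service or object by the user. For example, if a database is created on a service provider, the specific database can be secured so that a particular user has access only to read objects within the database, while administrators might have a higher level of permissions to alter information.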
Least Privilege

The principle of least privilege is a common security concept which states that a user should be granted only the privileges they need to do their job. By applying the principle of least privilege, you can limit internal and external threats. For example, if a front-line worker has administrative access on their computer, they have the ability to circumvent security; this is an example of an internal threat. Along the same lines, if a worker has administrative access on their computer and receives a malicious email, a bad actor could gain administrative access to the computer; this is an example of an external threat. Therefore, only the permissions required for users to perform their tasks should be granted, thus providing least privilege. Security is not the only benefit of following the principle of least privilege, although it does reduce your attack surface because users have less access to sensitive data that can be leaked. When you limit workers to the least privilege they need on their computer or the network, fewer intentional or accidental misconfigurations will happen that can lead to downtime or help-desk calls. Some regulatory standards require following the principle of least privilege. By following it, an organization can improve the outcome of compliance audits by regulatory bodies.

Role-Based Access Control

As administrators, we are accustomed to file-based access controls and the granularity that accompanies these access control models. However, with today's emerging cloud-based systems, we often do not need the granularity of individual permissions. Role-based access control (RBAC) removes this complex granularity by creating roles that accumulate specific rights. The user is then given a role, or multiple roles, in which specific rights have been established for the resource, as shown in Figure 20.11.

FIGURE 20.11 Role-based access

As an example, Microsoft Teams has roles for a Teams meeting. You can be an attendee, a presenter, or an organizer. The attendee can attend a Teams meeting and share their video and audio feed. The presenter can do that, plus share a presentation, mute participants, and perform several other key presentation functions. The organizer can do everything the presenter can do, plus create breakout rooms and view attendance. By changing someone's role in the meeting, from attendee to presenter, for example, we can allow them to share a document with the meeting's other attendees. If we had to find the specific permission to allow that person to perform the function, it would take a lot longer and would be prone to error. Role-based access doesn't stop with cloud-based applications. We can use role-based access controls in our day-to-day operations by standardizing permissions based upon specific roles in our organization. When a marketing person is hired, the standardized role of marketing can be applied. This can be performed with Active Directory groups and the permission groups we include.

Geofencing

Geofencing is the process of defining the area in which an operation can be performed by using GPS or radio frequency identification (RFID) to define a geographic boundary. An example of usage involves a location-aware device of a location-based service (LBS) user entering or exiting a geofence. This activity could trigger an alert to the device's user as well as messaging to the geofence operator.
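A GPS-based geofence check reduces to a distance calculation. The sketch below, using only the Python standard library, applies the haversine formula to decide whether a device's reported coordinates fall inside a circular geofence; the coordinates and radius are made-up examples.

import math

def inside_geofence(lat, lon, fence_lat, fence_lon, radius_m):
    # Haversine great-circle distance between the device and fence center.
    earth_radius_m = 6371000.0
    phi1, phi2 = math.radians(lat), math.radians(fence_lat)
    dphi = math.radians(fence_lat - lat)
    dlam = math.radians(fence_lon - lon)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2)
    distance_m = 2 * earth_radius_m * math.asin(math.sqrt(a))
    return distance_m <= radius_m

# Is the device within 200 meters of the office?
print(inside_geofence(40.7487, -73.9857, 40.7484, -73.9857, 200))

An authentication rule could then require that this check return True before a logon from the device is authorized, which is how the location factor described earlier gets enforced.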
Accounting
Accounting is the last A in the AAA model, and after learning about authentication and authorization, this section is where we can see it all come together. Ironically, let's use the analogy of a transaction in a physical bank, in which a customer will withdraw money. In Figure 20.12, the customer (user) appears on the far left, and their money (resource) is shown on the far right.
FIGURE 20.12 AAA bank analogy
A customer (user) will provide their authentication via their account number (something they know) and identification (something they are). The bank teller can then authenticate that they are the person they say they are. Once the teller has authenticated the customer (user), authorization will be checked. With the analogy of a bank, authorization might be how much money is in your bank account. However, a better example is who in the bank is allowed to enter the vault and touch the money! I'm sure even if my bank authenticates me, they won't authorize me to count and withdraw my own money. I'm pretty sure that if I tried, I would go to jail and not collect my $200. The teller is authorized to touch the money and hand it to you. It is important to note that, in this example, the teller is also authenticated when they come into work, though that authentication process is less rigorous than your own.
Now that you have been authenticated and authorized to receive your money, an audit trail is created. If you had $400 and withdrew $200, your account would be debited $200. The audit trail in this example is the accounting process of the AAA system. Accounting allows us to trust, but audit.
In a network system, when a user logs on, they will commonly authenticate with a username and password. When the user tries to access a resource, their authorization for the resource will be checked. If they are authorized to access the resource, their access will be recorded. It is important to note that accounting can record denied access to a resource as well.
Although the bank analogy deals with money and the word accounting is synonymous with money, accounting in the IT world has nothing to do with money. Accounting in the IT world has to do with the audit trail that users and computers leave when they authenticate and access information. The only time the accounting feature has anything to do with money is if your service provider is charging you based on the amount of time you've spent logged in or the amount of data sent and received.
Regulatory Compliance
Regulations are rules imposed on your organization by an outside agency, like a certifying board or a government entity, and they're usually totally rigid and immutable. The list of possible regulations that your organization could be subjected to is so exhaustively long, there's no way I can include them all in this book. Different regulations exist for different types of organizations, depending on whether they're corporate, nonprofit, scientific, educational, legal, governmental, and so on, and they also vary by where the organization is located. For instance, US governmental regulations vary by county and state, federal regulations are piled on top of those, and many other countries have multiple regulatory bodies as well.
The Sarbanes–Oxley Act of 2002 (SOX) is an example of a regulatory system imposed on all publicly traded companies in the United States.
Its main goal was to ensure corporate responsibility and sound accounting practices, and although that may not sound like it would have much of an effect on your IT department, it does, because a lot of the provisions in this act target the retention and protection of data. Believe me, something as innocent sounding as deleting old emails could get you in trouble; if any of them could've remotely had a material impact on the organization's financial disclosures, deleting them could actually be breaking the law. So, be aware, and be careful!
One of the most commonly applied regulations is the ISO/IEC 27002 standard for information security, previously known as ISO 17799 (renamed in 2007 and updated in 2013). It was developed by the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC), and it is based on British Standard (BS) 7799-1:1999. The official title of ISO/IEC 27002 is Information technology - Security techniques - Code of practice for information security controls. Although it's beyond our scope to get into the details of this standard, know that the following items are among the topics it covers:
Risk assessment
Security policy
Organization of information security
Asset management
Human-resources security
Physical and environmental security
Communications and operations management
Access control
Information systems acquisition, development, and maintenance
Information security incident management
Business-continuity management
Compliance
The following are various regulations you may encounter while working in the IT field. They are in no way a complete list of regulations, just the most common.
Data Locality
The adoption of cloud computing by private and government entities has created new challenges for collecting, processing, and storing information about citizens or residents. Data locality, also known as data localization, refers to laws that require a citizen's data to be collected, processed, and stored within the citizen's own country before being transferred to other countries. This ensures that local privacy laws can be applied to the information of the country's citizens. In certain situations, specific data about a citizen might even need to be removed from the data set before it can be transferred. Once these requirements are met, the governed data can be moved outside of the country.
Family Educational Rights and Privacy Act
The Family Educational Rights and Privacy Act (FERPA) affects education providers and organizations that process student records. FERPA regulates the handling of student records, such as grades, report cards, and disciplinary records. It was created to protect the rights of both students and parents to educational privacy. The Department of Education enforces FERPA compliance.
Gramm–Leach–Bliley Act
The Gramm–Leach–Bliley Act (GLBA) affects providers of financial services. GLBA requires financial institutions that offer products and services such as loans, investment advice, or insurance to safeguard customer information and detail their practices for sharing consumer information. It was created to protect consumer information and prevent its loss. The Federal Trade Commission (FTC) enforces GLBA compliance.
General Data Protection Regulation
The General Data Protection Regulation (GDPR) is a European Union (EU) law governing how consumer data can be used and protected. The GDPR was created primarily to protect citizens of the European Union.
It applies to anyone involved in the processing of data about citizens of the European Union, regardless of where the organization is located. The GDPR recommends that organizations hire a data protection officer (DPO). This person is the point of contact for all GDPR compliance, as well as any other compliance obligations your organization falls under. The underlying goal is to achieve consent from the end user of your product or service. Consent to collect information must be proven by an organization beyond a shadow of a doubt. This means that if someone visits your website from the European Union, you must receive consent in clear language to even place a cookie in their web browser. The DPO is responsible for coordinating this language, as well as the life cycle of any data that is collected.
Health Insurance Portability and Accountability Act
The Health Insurance Portability and Accountability Act (HIPAA) affects healthcare providers and providers that process health records. It regulates how a patient's information is secured and processed during the patient's care. HIPAA regulations are imposed on healthcare providers to ensure patient privacy. The Department of Health & Human Services (HHS) enforces HIPAA compliance.
Payment Card Industry Data Security Standards
Payment Card Industry Data Security Standards (PCI DSS) is a standard of processes and procedures used to handle data related to transactions using payment cards. A payment card is any card that allows the transfer of money for goods or services; types of payment cards include credit cards, debit cards, and even store gift cards. PCI DSS compliance is not enforced by government entities; it is enforced by banks and creditors. Merchants must comply with the PCI DSS standard to maintain payment card services. If a merchant does not comply with PCI DSS standards and a breach occurs, the merchant can be fined by the banks. Once a breach of PCI data occurs, local, state, and federal laws can also apply to the merchant. For example, some laws require the merchant to pay for credit-monitoring services for victims after a breach.
Sarbanes–Oxley Act
The Sarbanes–Oxley Act (SOX) affects publicly traded companies. It regulates how companies maintain financial records and how they protect sensitive financial data. The Securities and Exchange Commission (SEC) enforces SOX compliance.
Policies, Processes, and Procedures
Your organization must comply with the various regulations, or you could risk fines or, in some cases, even jail time. Your organization can comply with regulations by creating internal policies. These policies will have a major influence on the processes and, ultimately, the procedures that business units in the organization will need to follow, as shown in Figure 20.13. So, to answer the question of why you need to follow a procedure: it's often the result of regulations imposed on your organization.
FIGURE 20.13 Regulations, compliance, and policies
The overall execution of policies, processes, and procedures driven by regulations is known as compliance. Ensuring compliance with regulations is often the responsibility of the compliance officer in the organization. This person is responsible for reading the regulations (laws) and interpreting how they affect the organization and the business units within it. The compliance officer will then work with the business unit directors to create a policy to internally enforce these regulations so that the organization is compliant.
Once the policy is created, the process can then be defined or modified. A process consists of numerous procedures, or direct instructions, for employees to follow. Figure 20.14 shows a typical policy for disposing of hazardous waste. The process of decommissioning network equipment might be one of the processes affected by the policy. Procedures are steps within a process, and these, too, are affected (indirectly) by the policy. As the example shows, a regulation might have been created that affects the handling of hazardous waste. To maintain compliance, a hazardous waste policy was created. The process of decommissioning equipment was affected by the policy. As a result, the procedures (steps) to decommission equipment have been affected as well.
FIGURE 20.14 Policy for disposing of hazardous waste
Audit
You can internally ensure that your organization adheres to regulatory compliance by creating policies, processes, and procedures based on the regulations imposed on your organization. However, over time, policies, processes, and procedures mature and change to fit the environment and business requirements. Therefore, we must periodically verify that these changes keep our organization in a state of compliance with the imposed regulations. An audit process is typically created so that an organization can verify it still complies with regulations. Internal audits are typically performed by the managers of each department, since they know their policies, processes, and procedures best. The compliance officer will oversee the audit and generate a report stating the compliance of the organization. These internal audits can be ongoing, quarterly, biannual, or annual, depending on the requirements and consequences of the regulations. It really all comes down to time versus money in the end.
Audits can also be performed externally by a group of auditors. An external audit is performed for several reasons, but it is typically required by the regulation you must comply with. Organizations that must comply with financial regulations are often required to undergo external audits. Some regulations might even require a different group of auditors each year; this prevents complacency or collusion. The frequency of external audits is typically annual, but again it depends on the regulations your organization must comply with. Even where compliance is not a concern, security always is, and we should apply the mantra of trust, but audit. As an administrator, you should continually audit your logs for suspicious or irregular activity.
Summary
In this chapter, you learned the basic concepts, terms, and principles that all network professionals should understand to secure an enterprise network. We covered concepts such as the CIA triad, internal and external threats, and how vulnerabilities can be classified using the CVE database. You then learned about the two common encryption methods used to protect