Information Assurance and Security 2 - Module 1 PDF
Summary
This document presents Module 1 of a course on Information Assurance and Security 2. It covers various security technology concepts, specifically access control, firewalls, remote connections, and identification, authentication, and authorization mechanisms. It delves into the theory of discretionary, nondiscretionary, and mandatory access controls. Unlike a typical past paper, it does not include questions.
Full Transcript
Module 1: Security Technology: Access Control, Firewalls, and Protecting Remote Connections

Overview

This chapter, along with Modules 2 and 3, describes the function of many common technical controls and explains how they fit into the physical design of an information security program. Students who want to acquire expertise in the configuration and maintenance of technology-based control systems will require additional education and usually specialized training.

Learning Objectives

Upon completion of this material, you should be able to:
- Discuss the role of access control in information systems, and identify and discuss the four fundamental functions of access control systems
- Define authentication and explain the three commonly used authentication factors
- Describe firewall technologies and the various categories of firewalls
- Discuss the various approaches to firewall implementation
- Identify the various approaches to controlling remote and dial-up access by authenticating and authorizing users
- Describe virtual private networks (VPNs) and discuss the technology that enables them

Access Control

Access control is the method by which systems determine whether and how to admit a user into a trusted area of the organization—that is, information systems, restricted areas such as computer rooms, and the entire physical location. Access control is achieved through a combination of policies, programs, and technologies. To understand access controls, you must first understand that they are focused on the permissions or privileges that a subject (a user or system) has on an object (a resource), including if, when, and from where a subject may access an object, and especially how the subject may use that object.
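This subject-object relationship can be sketched as a simple permission lookup. The following is a minimal illustration only; the subjects, objects, and permissions are hypothetical:

```python
# Minimal sketch of a subject-object permission check (all names hypothetical).
# Each entry maps (subject, object) to the set of operations the subject holds.
permissions = {
    ("alice", "payroll.xlsx"): {"read", "write"},
    ("bob", "payroll.xlsx"): {"read"},
}

def may_access(subject: str, obj: str, operation: str) -> bool:
    """Return True if the subject holds the requested permission on the object."""
    return operation in permissions.get((subject, obj), set())

print(may_access("bob", "payroll.xlsx", "read"))   # True
print(may_access("bob", "payroll.xlsx", "write"))  # False
```

Note that the lookup answers not just whether a subject may touch an object, but how: each operation must be granted explicitly, and anything absent is refused.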
In the early days of access controls during the 1960s and 1970s, the government defined only mandatory access controls (MACs) and discretionary access controls. These definitions were later codified in the Trusted Computer System Evaluation Criteria (TCSEC) documents from the U.S. Department of Defense (DoD). As the definitions and applications evolved, MAC became further refined as a specific type of lattice-based, nondiscretionary access control, as described in the following sections. In general, access controls can be discretionary or nondiscretionary (see Figure 6-1).

Discretionary access controls (DACs) provide the ability to share resources in a peer-to-peer configuration that allows users to control and possibly provide access to information or resources at their disposal. The users can allow general, unrestricted access, or they can allow specific people or groups of people to access these resources. For example, a user might have a hard drive that contains information to be shared with office coworkers. This user can elect to allow access to specific coworkers by providing access by name in the share control function. Figure 6-2 shows an example of a discretionary access control from Microsoft Windows 10.

Nondiscretionary access controls (NDACs) are managed by a central authority in the organization. A form of nondiscretionary access control is called lattice-based access control (LBAC), in which users are assigned a matrix of authorizations for areas of access. The authorization may vary between levels, depending on the classification of authorizations that users possess for each group of information or resources.
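A matrix of authorizations like the one just described can be sketched as follows. The subjects, objects, and access levels here are hypothetical, and real lattice implementations are considerably richer:

```python
# Sketch of a lattice-based authorization matrix (hypothetical data).
# Rows are subjects, columns are objects; each cell holds an access level.
LEVELS = {"none": 0, "read": 1, "write": 2}

matrix = {
    "alice": {"db_hr": "write", "db_sales": "read"},
    "bob":   {"db_hr": "none",  "db_sales": "write"},
}

def allowed(subject: str, obj: str, requested: str) -> bool:
    """Grant the request if the held level meets or exceeds the requested level."""
    held = matrix.get(subject, {}).get(obj, "none")
    return LEVELS[held] >= LEVELS[requested]

print(allowed("alice", "db_sales", "read"))   # True
print(allowed("alice", "db_sales", "write"))  # False
```

Because the levels are ordered, a higher authorization implies the lower ones, which is what gives the structure its lattice character.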
The lattice structure contains subjects and objects, and the boundaries associated with each pair are demarcated. Lattice-based control specifies the level of access each subject has to each object, as implemented in access control lists (ACLs) and capabilities tables. Some lattice-based controls are tied to a person's duties and responsibilities; such controls include role-based access controls (RBACs) and task-based access controls (TBACs).

Role-based controls are associated with the duties a user performs in an organization, such as a position or temporary assignment like project manager, while task-based controls are tied to a particular chore or responsibility, such as a department's printer administrator. Some consider TBACs a sub-role access control and a method of providing more detailed control over the steps or stages associated with a role or project. These controls make it easier to maintain the restrictions associated with a particular role or task, especially if different people perform the role or task. Instead of constantly assigning and revoking the privileges of employees who come and go, the administrator simply assigns access rights to the role or task. Then, when users are associated with that role or task, they automatically receive the corresponding access. When their turns are over, they are removed from the role or task and access is revoked. Roles tend to last for a longer term and be related to a position, whereas tasks are much more granular and short-term. In some organizations the terms are used synonymously.

Mandatory access controls (MACs) are also a form of lattice-based, nondiscretionary access controls that use data classification schemes; they give users and data owners limited control over access to information resources. In a data classification scheme, each collection of information is rated, and all users are rated to specify the level of information they may access.
These ratings are often referred to as sensitivity levels, and they indicate the level of confidentiality the information requires.

A newer approach to lattice-based access controls promoted by NIST is attribute-based access controls (ABACs). "There are characteristics or attributes of a subject such as name, date of birth, home address, training record, and job function that may, either individually or when combined, comprise a unique identity that distinguishes that person from all others. These characteristics are often called subject attributes." An ABAC system simply uses one of these attributes to regulate access to a particular set of data. This system is similar in concept to looking up movie times on a Web site that requires you to enter your zip code to select a particular theatre, or a home supply or electronics store that asks for your zip code to determine whether a particular discount is available at your nearest store. According to NIST, ABAC is actually the parent approach to lattice-based, MAC, and RBAC controls, as they are all based on attributes.

Access Control Mechanisms

Key Terms

access control matrix: An integration of access control lists (focusing on assets) and capabilities tables (focusing on users) that results in a matrix with organizational assets listed in the column headings and users listed in the row headings. The matrix contains ACLs in columns for a particular device or asset and capabilities tables in rows for a particular user.

accountability: The access control mechanism that ensures all actions on a system—authorized or unauthorized—can be attributed to an authenticated identity. Also known as auditability.

asynchronous token: An authentication component in the form of a token—a card or key fob that contains a computer chip and a liquid crystal display and shows a computer-generated number used to support remote login authentication.
This token does not require calibration of the central authentication server; instead, it uses a challenge/response system.

authentication: The access control mechanism that requires the validation and verification of an unauthenticated entity's purported identity.

authentication factors: Three mechanisms that provide authentication based on something an unauthenticated entity knows, something an unauthenticated entity has, and something an unauthenticated entity is.

authorization: The access control mechanism that represents the matching of an authenticated entity to a list of information assets and corresponding access levels.

dumb card: An authentication card that contains digital user data, such as a personal identification number (PIN), against which user input is compared.

identification: The access control mechanism whereby unverified or unauthenticated entities who seek access to a resource provide a label by which they are known to the system.

passphrase: A plain-language phrase, typically longer than a password, from which a virtual password is derived.

password: A secret word or combination of characters that only the user should know; a password is used to authenticate the user.

smart card: An authentication component similar to a dumb card that contains a computer chip to verify and validate several pieces of information instead of just a PIN.

strong authentication: In access control, the use of at least two different authentication mechanisms drawn from two different factors of authentication.

synchronous token: An authentication component in the form of a token—a card or key fob that contains a computer chip and a liquid crystal display and shows a computer-generated number used to support remote login authentication. This token must be calibrated with the corresponding software on the central authentication server.

virtual password: The derivative of a passphrase.
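Several of the terms above combine in practice. The following is a hedged sketch of strong authentication, pairing a password (something you know) with a synchronous-token code (something you have); the HMAC construction, shared seed, and six-digit truncation are illustrative assumptions, not a prescribed protocol:

```python
import hashlib
import hmac

def token_code(secret: bytes, time_step: int) -> str:
    """Six hex characters derived from a secret shared by token and server (illustrative)."""
    digest = hmac.new(secret, str(time_step).encode(), hashlib.sha256).hexdigest()
    return digest[:6]

def strong_authenticate(password: str, code: str,
                        stored_hash: str, secret: bytes, time_step: int) -> bool:
    """Both factors must pass: the password hash and the token code."""
    knows = hashlib.sha256(password.encode()).hexdigest() == stored_hash
    has = hmac.compare_digest(code, token_code(secret, time_step))
    return knows and has

secret = b"demo-seed"  # hypothetical seed shared between token and server
stored = hashlib.sha256(b"correct horse").hexdigest()
print(strong_authenticate("correct horse", token_code(secret, 101), stored, secret, 101))  # True
print(strong_authenticate("wrong guess", token_code(secret, 101), stored, secret, 101))    # False
```

The key point is the conjunction: failing either factor fails the login, which is what makes the scheme stronger than a password alone.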
In general, all access control approaches rely on the following four mechanisms, which represent the four fundamental functions of access control systems:
- Identification: I am a user of the system.
- Authentication: I can prove I'm a user of the system.
- Authorization: Here's what I can do with the system.
- Accountability: You can track and monitor my use of the system.

Identification

Identification is a mechanism whereby unverified or unauthenticated entities who seek access to a resource provide a label by which they are known to the system. This label is called an identifier (ID), and it must be mapped to one and only one entity within the security domain. Sometimes the unauthenticated entity supplies the label, and sometimes it is applied to the entity. Some organizations use composite identifiers by concatenating elements—department codes, random numbers, or special characters—to make unique identifiers within the security domain. Other organizations generate random IDs to protect resources from potential attackers. Most organizations use a single piece of unique information, such as a complete name or the user's first initial and surname.

Authentication

Authentication is the process of validating an unauthenticated entity's purported identity. There are three widely used authentication mechanisms, or authentication factors:
- Something you know
- Something you have
- Something you are

Something You Know

This factor of authentication relies on what the unverified user or system knows and can recall—for example, a password, passphrase, or other unique authentication code, such as a personal identification number (PIN). A password is a private word or combination of characters that only the user should know. One of the biggest debates in the information security industry concerns the complexity of passwords.
On one hand, a password should be difficult to guess, which means it cannot be a series of letters or a word that is easily associated with the user, such as the name of the user's spouse, child, or pet. By the same token, a password should not be a series of numbers easily associated with the user, such as a phone number, Social Security number, or birth date. On the other hand, the password must be easy for the user to remember, which means it should be short or easily associated with something the user can remember.

A passphrase is a series of characters that is typically longer than a password and can be used to derive a virtual password. By using the words of the passphrase as cues to create a stream of unique characters, you can create a longer, stronger password that is easy to remember. For example, while a typical password might be "23skedoo," a typical passphrase might be "MayTheForceBeWithYouAlways," represented as the virtual password "MTFBWYA."

Users increasingly employ longer passwords or passphrases to provide effective security. As a result, it is becoming increasingly difficult to track the multitude of system usernames and passwords needed to access information for a typical business or personal transaction. The credit reporting service Experian found that the average user has 26 online accounts but uses only five different passwords. For users between the ages of 25 and 34, the average number of accounts jumps to 40. A common method of keeping up with so many passwords is to write them down, which is a cardinal sin in information security. A better solution is automated password-tracking storage, like the application shown in Figure 6-3. This example shows a mobile application that can be synchronized across multiple platforms, including Apple iOS, Android, Windows, Macintosh, and Linux, to manage access control information in all its forms.
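The MTFBWYA example can be expressed as a short derivation rule. This sketch assumes the passphrase is written in camel case, as in the text's example:

```python
import re

def virtual_password(passphrase: str) -> str:
    """Take the first letter of each capitalized word, per the MTFBWYA example."""
    words = re.findall(r"[A-Z][a-z]*", passphrase)
    return "".join(word[0] for word in words)

print(virtual_password("MayTheForceBeWithYouAlways"))  # MTFBWYA
```

A real derivation rule would usually mix in digits or punctuation as well; the point is only that the passphrase serves as a memorable cue for a string that looks random to anyone else.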
Something You Have

This authentication factor relies on something an unverified user or system has and can produce when necessary. One example is dumb cards, such as ID cards or ATM cards with magnetic stripes that contain the digital (and often encrypted) user PIN, which is compared against the number the user enters. The smart card contains a computer chip that can verify and validate several pieces of information instead of just a PIN. Another common device is the token—a card or key fob with a computer chip and a liquid crystal display that shows a computer-generated number used to support remote login authentication.

Tokens are synchronous or asynchronous. Once synchronous tokens are synchronized with a server, both the server and token use the same time or a time-based database to generate a number that must be entered during the user login phase. Asynchronous tokens don't require that the server and tokens maintain the same time setting. Instead, they use a challenge/response system, in which the server challenges the unauthenticated entity during login with a numerical sequence. The unauthenticated entity places this sequence into the token and receives a response. The prospective user then enters the response into the system to gain access. Some examples of synchronous and asynchronous tokens are presented in Figure 6-4.

Something You Are or Can Produce

This authentication factor relies on individual characteristics, such as fingerprints, palm prints, hand topography, hand geometry, or retina and iris scans, or something an unverified user can produce on demand, such as voice patterns, signatures, or keyboard kinetic measurements. Some of these characteristics are known collectively as biometrics, which is covered later in this chapter.
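The two token styles above differ only in what seeds the computed number: a shared clock for synchronous tokens, a server-issued challenge for asynchronous ones. A hedged sketch follows; the HMAC construction and shared seed are illustrative assumptions, not how any particular commercial token works:

```python
import hashlib
import hmac

SECRET = b"token-seed"  # hypothetical secret shared by the token and the server

def synchronous_code(time_step: int) -> str:
    # Synchronous: token and server derive the code from the same time step.
    return hmac.new(SECRET, str(time_step).encode(), hashlib.sha256).hexdigest()[:6]

def asynchronous_response(challenge: str) -> str:
    # Asynchronous: the server's random challenge replaces the shared clock.
    return hmac.new(SECRET, challenge.encode(), hashlib.sha256).hexdigest()[:6]

# The server grants access when the submitted value matches its own computation.
submitted = synchronous_code(54321)  # what the user reads off the fob
print(hmac.compare_digest(submitted, synchronous_code(54321)))  # True
```

Either way, possession of the device holding the secret is what is actually being verified.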
Note that certain critical logical or physical areas may require the use of strong authentication—at least two authentication mechanisms drawn from two different factors of authentication, most often something you have and something you know. For example, access to a bank's ATM services requires a banking card plus a PIN. Such systems are called two-factor authentication because two separate mechanisms are used. Strong authentication requires that at least one of the mechanisms be something other than what you know.

Authorization

Authorization is the matching of an authenticated entity to a list of information assets and corresponding access levels. This list is usually an ACL or access control matrix. In general, authorization can be handled in one of three ways:
- Authorization for each authenticated user, in which the system performs an authentication process to verify each entity and then grants access to resources for only that entity. This process quickly becomes complex and resource-intensive in a computer system.
- Authorization for members of a group, in which the system matches authenticated entities to a list of group memberships and then grants access to resources based on the group's access rights. This is the most common authorization method.
- Authorization across multiple systems, in which a central authentication and authorization system verifies an entity's identity and grants it a set of credentials.

Authorization credentials, which are also called authorization tickets, are issued by an authenticator and are honored by many or all systems within the authentication domain. Sometimes called single sign-on (SSO) or reduced sign-on, authorization credentials are becoming more common and are frequently enabled using a shared directory structure such as the Lightweight Directory Access Protocol (LDAP).
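The group-based approach, the most common of the three, can be sketched as an intersection of group memberships with an asset's ACL. All group and asset names here are hypothetical:

```python
# Sketch of group-based authorization (hypothetical groups and ACL entries).
groups = {"alice": {"engineering", "staff"}, "bob": {"staff"}}
acl = {"source_code": {"engineering"}, "newsletter": {"staff"}}

def authorized(user: str, asset: str) -> bool:
    """Grant access if any of the user's groups appears in the asset's ACL."""
    return bool(groups.get(user, set()) & acl.get(asset, set()))

print(authorized("alice", "source_code"))  # True
print(authorized("bob", "source_code"))    # False
```

This is why the method scales: adding a user to a group grants every asset that trusts the group in one step, with no per-asset changes.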
Accountability

Accountability, also known as auditability, ensures that all actions on a system—authorized or unauthorized—can be attributed to an authenticated identity. Accountability is most often accomplished by means of system logs, database journals, and the auditing of these records. System logs record specific information, such as failed access attempts and system modifications. Logs have many uses, such as intrusion detection, determining the root cause of a system failure, or simply tracking the use of a particular resource.

Access Control Architecture Models

Key Terms

covert channels: Unauthorized or unintended methods of communications hidden inside a computer system.

reference monitor: Within TCB, a conceptual piece of the system that manages access controls—in other words, it mediates all access to objects by subjects.

storage channels: TCSEC-defined covert channels that communicate by modifying a stored object, such as in steganography.

timing channels: TCSEC-defined covert channels that communicate by managing the relative timing of events.

trusted computing base (TCB): Under the Trusted Computer System Evaluation Criteria (TCSEC), the combination of all hardware, firmware, and software responsible for enforcing the security policy.

Security access control architecture models, which are often referred to simply as architecture models, illustrate access control implementations and can help organizations quickly make improvements through adaptation. Formal models do not usually find their way directly into usable implementations; instead, they form the theoretical foundation that an implementation uses. These formal models are discussed here so you can become familiar with them and see how they are used in various access control approaches. When a specific implementation is put into place, noting that it is based on a formal model may lend credibility, improve its reliability, and lead to improved results.
Some models are implemented in computer hardware and software, some are implemented as policies and practices, and some are implemented in both. Some models focus on the confidentiality of information, while others focus on the information's integrity as it is being processed. The first models discussed here—specifically, the trusted computing base, the Information Technology System Evaluation Criteria, and the Common Criteria—are used as evaluation models and demonstrate the evolution of trusted system assessment, which includes evaluations of access controls. The later models—Bell-LaPadula, Biba, and others—demonstrate implementations in some computer security systems that ensure the confidentiality, integrity, and availability of information are protected by controlling the access of one part of a system to another.

TCSEC's Trusted Computing Base

The Trusted Computer System Evaluation Criteria (TCSEC) is an older DoD standard that defines the criteria for assessing the access controls in a computer system. This standard is part of a larger series of standards collectively referred to as the Rainbow Series because of the color coding used to uniquely identify each document (see Figure 6-6). TCSEC is also known as the "Orange Book" and is considered the cornerstone of the series. As described later in this chapter, this series was replaced in 2005 with a set of standards known as the Common Criteria, but information security professionals should be familiar with the terminology and concepts of this legacy approach. For example, TCSEC uses the concept of the trusted computing base (TCB) to enforce security policy. In this context, "security policy" refers to the rules of configuration for a system rather than a managerial guidance document. TCB is only as effective as its internal control mechanisms and the administration of the systems being configured.
TCB is made up of the hardware and software that has been implemented to provide security for a particular information system. This usually includes the operating system kernel and a specified set of security utilities, such as the user login subsystem. The term "trusted" can be misleading—in this context, it means that a component is part of TCB's security system, but not that it is necessarily trustworthy. The frequent discovery of flaws and delivery of patches by software vendors to remedy security vulnerabilities attest to the relative level of trust you can place in current generations of software.

Within TCB is an object known as the reference monitor, which is the piece of the system that manages access controls. Systems administrators must be able to audit or periodically review the reference monitor to ensure it is functioning effectively, without unauthorized modification.

One of the biggest challenges in TCB is the existence of covert channels. Covert channels could be used by attackers who seek to exfiltrate sensitive data without being detected. Data loss prevention technologies monitor standard and covert channels to attempt to reduce an attacker's ability to accomplish exfiltration. For example, the cryptographic technique known as steganography allows the embedding of data bits in the digital version of graphical images, allowing a user to hide a message in a picture. TCSEC defines two kinds of covert channels:
- Storage channels, which are used in steganography, as described above, and in the embedding of data in TCP or IP header fields.
- Timing channels, which are used in a system that places a long pause between packets to signify a 1 and a short pause between packets to signify a 0.

ITSEC

The Information Technology System Evaluation Criteria (ITSEC), an international set of criteria for evaluating computer systems, is very similar to TCSEC.
Under ITSEC, Targets of Evaluation (ToE) are compared to detailed security function specifications, resulting in an assessment of systems functionality and comprehensive penetration testing. Like TCSEC, ITSEC was functionally replaced for the most part by the Common Criteria, which are described in the following section. ITSEC rates products on a scale from E1 to the highest level of E6, much like the ratings of TCSEC and the Common Criteria. E1 is roughly equivalent to the EAL2 evaluation of the Common Criteria, and E6 is roughly equivalent to EAL7.

The Common Criteria

The Common Criteria for Information Technology Security Evaluation, often called the Common Criteria or just CC, is an international standard (ISO/IEC 15408) for computer security certification. It is widely considered the successor to both TCSEC and ITSEC in that it reconciles some differences between the various other standards. Most governments have discontinued their use of the other standards. CC is a combined effort of contributors from Australia, New Zealand, Canada, France, Germany, Japan, the Netherlands, Spain, the United Kingdom, and the United States. In the United States, the National Security Agency (NSA) and NIST were the primary contributors. CC and its companion, the Common Methodology for Information Technology Security Evaluation (CEM), are the technical basis for an international agreement called the Common Criteria Recognition Agreement (CCRA), which ensures that products can be evaluated to determine their particular security properties. CC seeks the widest possible mutual recognition of secure IT products. The CC process assures that the specification, implementation, and evaluation of computer security products are performed in a rigorous and standard manner.
CC terminology includes:
- Target of Evaluation (ToE): The system being evaluated
- Protection Profile (PP): User-generated specification for security requirements
- Security Target (ST): Document describing the ToE's security properties
- Security Functional Requirements (SFRs): Catalog of a product's security functions
- Evaluation Assurance Levels (EALs): The rating or grading of a ToE after evaluation

EAL is typically rated on the following scale:
- EAL1: Functionally Tested: Confidence in operation against nonserious threats
- EAL2: Structurally Tested: More confidence required but comparable with good business practices
- EAL3: Methodically Tested and Checked: Moderate level of security assurance
- EAL4: Methodically Designed, Tested, and Reviewed: Rigorous level of security assurance but still economically feasible without specialized development
- EAL5: Semiformally Designed and Tested: Certification requires specialized development above standard commercial products
- EAL6: Semiformally Verified Design and Tested: Specifically designed security ToE
- EAL7: Formally Verified Design and Tested: Developed for extremely high-risk situations or for high-value systems

Bell-LaPadula Confidentiality Model

The Bell-LaPadula (BLP) confidentiality model is a "state machine reference model"—in other words, a model of an automated system that is able to manipulate its state or status over time. BLP ensures the confidentiality of the modeled system by using MACs, data classification, and security clearances. The intent of any state machine model is to devise a conceptual approach in which the system being modeled can always be in a known secure condition; in other words, this kind of model is provably secure. A system that serves as a reference monitor compares the level of data classification with the clearance of the entity requesting access; it allows access only if the clearance is equal to or higher than the classification.
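The reference-monitor comparison just described can be sketched in a few lines. The classification levels and their ordering are illustrative:

```python
# Sketch of a BLP-style reference monitor check (levels are illustrative).
LEVELS = {"unclassified": 0, "confidential": 1, "secret": 2, "top secret": 3}

def reference_monitor(subject_clearance: str, object_classification: str) -> bool:
    """Allow access only if clearance is equal to or higher than classification."""
    return LEVELS[subject_clearance] >= LEVELS[object_classification]

print(reference_monitor("secret", "confidential"))  # True
print(reference_monitor("confidential", "top secret"))  # False
```

Because every access request funnels through this one comparison, the system's secure state can be reasoned about as a property of the function rather than of each individual request.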
BLP security rules prevent information from being moved from a level of higher security to a lower level. Access modes can be one of two types: simple security and the * (star) property. Simple security (also called the read property) prohibits a subject of lower clearance from reading an object of higher clearance, but it allows a subject with a higher clearance level to read an object at a lower level (read down). The * property (the write property), on the other hand, prohibits a high-level subject from sending messages to a lower-level object. In short, subjects can read down and write or append up. BLP uses access permission matrices and a security lattice for access control. This model can be explained by imagining a fictional interaction between General Bell, whose thoughts and actions are classified at the highest possible level, and Private LaPadula, who has the lowest security clearance in the military. It is prohibited for Private LaPadula to read anything written by General Bell and for General Bell to write in any document that Private LaPadula could read. In short, the principle is "no read up, no write down."

Biba Integrity Model

The Biba integrity model is similar to BLP. It is based on the premise that higher levels of integrity are more worthy of trust than lower ones. The intent is to provide access controls that ensure objects or subjects cannot have less integrity as a result of read/write operations. The Biba model assigns integrity levels to subjects and objects using two properties: the simple integrity (read) property and the integrity * property (write). The simple integrity property permits a subject to have read access to an object only if the subject's security level is lower than or equal to the level of the object. The integrity * property permits a subject to have write access to an object only if the subject's security level is equal to or higher than that of the object.
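The two Biba properties can be sketched directly from their definitions; the numeric integrity levels are illustrative:

```python
# Sketch of the two Biba properties; numeric integrity levels are illustrative.
LOW, MEDIUM, HIGH = 0, 1, 2

def simple_integrity_read(subject_level: int, object_level: int) -> bool:
    """Read allowed only if the subject's level is lower than or equal to the object's."""
    return subject_level <= object_level

def integrity_star_write(subject_level: int, object_level: int) -> bool:
    """Write allowed only if the subject's level is equal to or higher than the object's."""
    return subject_level >= object_level

print(simple_integrity_read(MEDIUM, HIGH))   # True: reading up is allowed
print(integrity_star_write(MEDIUM, HIGH))    # False: no write up
```

Comparing these with the BLP checks shows that the inequalities are exactly inverted, which is why Biba is often described as BLP turned upside down for integrity.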
The Biba model ensures that no information from a subject can be passed on to an object in a higher security level. This prevents contaminating data of higher integrity with data of lower integrity. This model can be illustrated by imagining fictional interactions among some priests, a monk named Biba, and some parishioners in the Middle Ages. Priests are considered holier (of greater integrity) than monks, who are in turn holier than parishioners. A priest cannot read (or offer) Masses or prayers written by Biba the Monk, who in turn cannot read items written by his parishioners. Biba the Monk is also prohibited from writing in a priest's sermon books, just as parishioners are prohibited from writing in Biba's book. These properties prevent the lower integrity of the lower level from corrupting the "holiness" or higher integrity of the upper level. On the other hand, higher-level entities can share their writings with the lower levels without compromising the integrity of the information. This example illustrates the "no write up, no read down" principle behind the Biba model.

Clark-Wilson Integrity Model

The Clark-Wilson integrity model, which is built upon principles of change control rather than integrity levels, was designed for the commercial environment. The model's change control principles are:
- No changes by unauthorized subjects
- No unauthorized changes by authorized subjects
- The maintenance of internal and external consistency

Internal consistency means that the system does what it is expected to do every time, without exception. External consistency means that the data in the system is consistent with similar data in the outside world. This model establishes a system of subject-program-object relationships so that the subject has no direct access to the object. Instead, the subject is required to access the object using a well-formed transaction via a validated program.
The intent is to provide an environment where security can be proven through the use of separated activities, each of which is provably secure. The following controls are part of the Clark-Wilson model:
- Subject authentication and identification
- Access to objects by means of well-formed transactions
- Execution by subjects on a restricted set of programs

The elements of the Clark-Wilson model are:
- Constrained data item (CDI): Data item with protected integrity
- Unconstrained data item: Data not controlled by Clark-Wilson; nonvalidated input or any output
- Integrity verification procedure (IVP): Procedure that scans data and confirms its integrity
- Transformation procedure (TP): Procedure that only allows changes to a constrained data item

All subjects and objects are labeled with TPs. The TPs operate as the intermediate layer between subjects and objects. Each data item has a set of access operations that can be performed on it. Each subject is assigned a set of access operations that it can perform. The system then compares these two parameters and either permits or denies access by the subject to the object. As an example, consider a database management system (DBMS) that sits between a database user and the actual data. The DBMS requires the user to be authenticated before accessing the data, only accepts specific inputs (such as SQL queries), and only provides a restricted set of operations, in accordance with its design. This example illustrates the Clark-Wilson model controls.

Graham-Denning Access Control Model

The Graham-Denning access control model has three parts: a set of objects, a set of subjects, and a set of rights. The subjects are composed of two things: a process and a domain. The domain is the set of constraints that control how subjects may access objects. The set of rights governs how subjects may manipulate the passive objects.
This model describes eight primitive protection rights, called commands, which subjects can execute to have an effect on other subjects or objects. Note that these commands are similar to the rights a user can assign to an entity in modern operating systems. The eight primitive protection rights are:

Create object
Create subject
Delete object
Delete subject
Read access right
Grant access right
Delete access right
Transfer access right

Harrison-Ruzzo-Ullman Model

The Harrison-Ruzzo-Ullman (HRU) model defines a method to allow changes to access rights and the addition and removal of subjects and objects, a process that the Bell-LaPadula model does not allow. Because systems change over time, their protective states need to change. HRU is built on an access control matrix and includes a set of generic rights and a specific set of commands. These include:

Create subject/create object
Enter right X into
Delete right X from
Destroy subject/destroy object

By implementing this set of rights and commands and restricting the commands to a single operation each, it is possible to determine if and when a specific subject can obtain a particular right to an object.

Brewer-Nash Model (Chinese Wall)

The Brewer-Nash model, commonly known as a Chinese Wall, is designed to prevent a conflict of interest between two parties. Imagine that a law firm represents two people who are involved in a car accident. One sues the other, and the firm has to represent both. To prevent a conflict of interest, the individual attorneys should not be able to access the private information of these two litigants. The Brewer-Nash model requires users to select one of two conflicting sets of data, after which they cannot access the conflicting data.

Firewalls

Key Terms

address restrictions: Firewall rules designed to prohibit packets with certain addresses or partial addresses from passing through the device.
dynamic packet-filtering firewall: A firewall type that can react to network traffic and create or modify configuration rules to adapt.

firewall: In information security, a combination of hardware and software that filters or prevents specific information from moving between the outside network and the inside network.

packet-filtering firewall: A networking device that examines the header information of data packets that come into a network and determines whether to drop them (deny) or forward them to the next network connection (allow), based on its configuration rules.

state table: A tabular record of the state and context of each packet in a conversation between an internal and external user or system. A state table is used to expedite traffic filtering.

stateful packet inspection (SPI) firewall: A firewall type that keeps track of each network connection between internal and external systems using a state table and that expedites the filtering of those communications. Also known as a stateful inspection firewall.

static packet-filtering firewall: A firewall type that requires the configuration rules to be manually created, sequenced, and modified within the firewall.

trusted network: The system of networks inside the organization that contains its information assets and is under the organization's control.

untrusted network: The system of networks outside the organization over which the organization has no control. The Internet is an example of an untrusted network.

In commercial and residential construction, firewalls are concrete or masonry walls that run from the basement through the roof to prevent a fire from spreading from one section of the building to another. In aircraft and automobiles, a firewall is an insulated metal barrier that keeps the hot and dangerous moving parts of the motor separate from the flammable interior where the passengers sit.
A firewall in an information security program is similar to a building's firewall in that it prevents specific types of information from moving between two different levels of networks, such as an untrusted network like the Internet and a trusted network like the organization's internal network. Some organizations place firewalls with different levels of trust between portions of their network environment, often to add extra security for the most important applications and data. The firewall may be a separate computer system, a software service running on an existing router or server, or a separate network that contains several supporting devices. Firewalls can be categorized by processing mode, development era, or structure.

Firewall Processing Modes

Firewalls fall into several major categories of processing modes: packet-filtering firewalls, application layer proxy firewalls, media access control layer firewalls, and hybrids. Hybrid firewalls use a combination of the other modes; in practice, most firewalls fall into this category because most implementations use multiple approaches.

Packet-Filtering Firewalls

The packet-filtering firewall examines the header information of data packets that come into a network. A packet-filtering firewall installed on a TCP/IP-based network typically functions at the IP layer and determines whether to deny (drop) a packet or allow (forward) it to the next network connection, based on the rules programmed into the firewall. Packet-filtering firewalls examine every incoming packet header and can selectively filter packets based on header information such as destination address, source address, packet type, and other key information. Figure 6-7 shows the structure of an IPv4 packet. Packet-filtering firewalls scan network data packets looking for compliance with the rules of the firewall's database or violations of those rules.
Filtering firewalls inspect packets at the network layer, or Layer 3, of the Open Systems Interconnection (OSI) model, which represents the seven layers of networking processes. (The OSI model is illustrated later in this chapter in Figure 6-11.) If the device finds a packet that matches a restriction, it stops the packet from traveling from one network to another. The restrictions most commonly implemented in packet-filtering firewalls are based on a combination of the following:

IP source and destination address
Direction (inbound or outbound)
Protocol, for firewalls capable of examining the IP protocol layer
Transmission Control Protocol (TCP) or User Datagram Protocol (UDP) source and destination port requests, for firewalls capable of examining the TCP/UDP layer

Packet structure varies depending on the nature of the packet. The two primary service types are TCP and UDP, as noted above. Figures 6-8 and 6-9 show the structures of these two major elements of the combined protocol known as TCP/IP. Simple firewall models examine two aspects of the packet header: the destination and source address. They enforce address restrictions through ACLs, which are created and modified by the firewall administrators. Figure 6-10 shows how a packet-filtering router can be used as a firewall to filter data packets from inbound connections and allow outbound connections unrestricted access to the public network. Dual-homed bastion host firewalls are discussed later in this chapter. To better understand an address restriction scheme, consider an example. If an administrator configured a simple rule based on the content of Table 6-2, any connection attempt made by an external computer or network device in the 192.168.x.x address range (192.168.0.0–192.168.255.255) to the Web server at 10.10.10.25 would be allowed.
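The address-restriction check just described can be sketched in Python using the standard ipaddress module. The rule mirrors the Table 6-2 example (sources in 192.168.0.0/16 may reach the Web server at 10.10.10.25); the rule layout and function names are assumptions made for this sketch, not a real firewall's configuration syntax.

```python
# A minimal sketch of a simple packet-filtering ACL check.
import ipaddress

RULES = [
    # (allowed source network, allowed destination address, action)
    (ipaddress.ip_network("192.168.0.0/16"),
     ipaddress.ip_address("10.10.10.25"),
     "allow"),
]
DEFAULT_ACTION = "deny"   # anything not expressly permitted is prohibited

def filter_packet(src, dst):
    """Return the action for a packet based on its source and destination."""
    src = ipaddress.ip_address(src)
    dst = ipaddress.ip_address(dst)
    for net, dest, action in RULES:
        if src in net and dst == dest:
            return action
    return DEFAULT_ACTION

print(filter_packet("192.168.4.7", "10.10.10.25"))   # allow
print(filter_packet("172.16.9.1",  "10.10.10.25"))   # deny
```

Note that this check sees only the addresses the packet claims, which is exactly why header modification (spoofing) defeats simple first-generation filters.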
The ability to restrict a specific service rather than just a range of IP addresses is available in a more advanced version of this first-generation firewall. Additional details on firewall rules and configuration are presented later in this chapter. The ability to restrict a specific service is now considered standard in most routers and is invisible to the user. Unfortunately, such systems are unable to detect whether packet headers have been modified, a technique used in IP spoofing and other attacks.

The three subsets of packet-filtering firewalls are static packet filtering, dynamic packet filtering, and stateful packet inspection (SPI). All three enforce address restrictions: rules designed to prohibit packets with certain addresses or partial addresses from passing through the device.

Static packet filtering requires that the filtering rules be developed and installed with the firewall. The rules are created and sequenced by a person who either directly edits the rule set or uses a programmable interface to specify the rules and the sequence. Any changes to the rules require human intervention. This type of filtering is common in network routers and gateways.

A dynamic packet-filtering firewall can react to an emergent event and update or create rules to deal with that event. This reaction could be positive, as in allowing an internal user to engage in a specific activity upon request, or negative, as in dropping all packets from a particular address when an increased presence of a particular type of malformed packet is detected. While static packet-filtering firewalls allow entire sets of one type of packet to enter in response to authorized requests, dynamic packet filtering allows only a particular packet with a particular source, destination, and port address to enter.
This filtering works by opening and closing "doors" in the firewall based on the information contained in the packet header, which makes dynamic packet filters an intermediate form between traditional static packet filters and application proxies. (These proxies are described in the next section.)

SPI firewalls, also called stateful inspection firewalls, keep track of each network connection between internal and external systems using a state table. A state table tracks the state and context of each packet in the conversation by recording which station sent what packet and when. Like first-generation firewalls, stateful inspection firewalls perform packet filtering, but they take it a step further. Whereas simple packet-filtering firewalls only allow or deny certain packets based on their address, a stateful firewall can expedite incoming packets that are responses to internal requests. If the stateful firewall receives an incoming packet that it cannot match in its state table, it refers to its ACL to determine whether to allow the packet to pass. The primary disadvantage of this type of firewall is the additional processing required to manage and verify packets against the state table. Without this processing, the system is vulnerable to a DoS or DDoS attack. In such an attack, the system receives a large number of external packets, which slows the firewall because it attempts to compare all of the incoming packets first to the state table and then to the ACL. On the positive side, these firewalls can track connectionless packet traffic, such as UDP and remote procedure call (RPC) traffic. Dynamic SPI firewalls keep a dynamic state table to make changes to the filtering rules within predefined limits, based on events as they happen. A state table looks like a firewall rule set but has additional information, as shown in Table 6-3.
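Before looking at the table's columns in detail, the stateful lookup itself can be sketched in a few lines of Python. The field names and the 60-minute inactivity timeout mirror the description in the text; everything else (function names, the dictionary layout) is an assumption made for this sketch.

```python
# A minimal sketch of a stateful packet inspection (SPI) lookup.
import time

STATE_TIMEOUT = 60 * 60  # seconds of allowed inactivity before an entry expires

# (src_ip, src_port, dst_ip, dst_port, protocol) -> time the entry was last seen
state_table = {}

def record_outbound(src_ip, src_port, dst_ip, dst_port, proto):
    """An internal request opens a state entry so the reply can be expedited."""
    state_table[(src_ip, src_port, dst_ip, dst_port, proto)] = time.time()

def inbound_allowed(src_ip, src_port, dst_ip, dst_port, proto):
    """An inbound packet matches if it reverses a tracked conversation."""
    key = (dst_ip, dst_port, src_ip, src_port, proto)  # reversed direction
    last_seen = state_table.get(key)
    if last_seen is None or time.time() - last_seen > STATE_TIMEOUT:
        return False        # no match: fall back to the ACL (not shown here)
    state_table[key] = time.time()   # refresh the entry's timer
    return True

record_outbound("10.10.10.5", 49200, "93.184.216.34", 80, "TCP")
print(inbound_allowed("93.184.216.34", 80, "10.10.10.5", 49200, "TCP"))  # True
print(inbound_allowed("203.0.113.9", 80, "10.10.10.5", 49200, "TCP"))    # False
```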
The state table contains the familiar columns for source IP, source port, destination IP, and destination port, but it adds information for the protocol used (UDP or TCP), total time in seconds, and time remaining in seconds. Many state table implementations allow a connection to remain in place for up to 60 minutes without any activity before the state entry is deleted. The example in Table 6-3 shows this value in the Total Time column.

Application Layer Proxy Firewalls

Key Terms

application firewall: See application layer proxy firewall.

application layer proxy firewall: A device capable of functioning both as a firewall and an application layer proxy server.

demilitarized zone (DMZ): An intermediate area between two networks designed to provide servers and firewall filtering between a trusted internal network and the outside, untrusted network. Traffic on the outside network carries a higher level of risk.

proxy server: A server that exists to intercept requests for information from external users and provide the requested information by retrieving it from an internal server, thus protecting and minimizing the demand on internal servers. Some proxy servers are also cache servers.

reverse proxy: A proxy server that most commonly retrieves information from inside an organization and provides it to a requesting user or system outside the organization.

Firewall Architectures

Key Terms

bastion host: A device placed between an external, untrusted network and an internal, trusted network. Also known as a sacrificial host, a bastion host serves as the sole target for attack and should therefore be thoroughly secured.

extranet: A segment of the DMZ where additional authentication and authorization controls are put into place to provide services that are not available to the general public.
Network Address Translation (NAT): A technology in which multiple real, routable external IP addresses are converted to special ranges of internal IP addresses, usually on a one-to-one basis; that is, one external valid address directly maps to one assigned internal address.

Port Address Translation (PAT): A technology in which multiple real, routable external IP addresses are converted to special ranges of internal IP addresses, usually on a one-to-many basis; that is, one external valid address is mapped dynamically to a range of internal addresses by adding a unique port number to the address when traffic leaves the private network and is placed on the public network.

sacrificial host: See bastion host.

screened host architecture: A firewall architectural model that combines the packet-filtering router with a second, dedicated device such as a proxy server or proxy firewall.

screened subnet architecture: A firewall architectural model that consists of one or more internal bastion hosts located behind a packet-filtering router on a dedicated network segment, with each host performing a role in protecting the trusted network.

single bastion host: See bastion host.

The value of a firewall comes from its ability to filter out unwanted or dangerous traffic as it enters the network perimeter of an organization. A challenge to the value proposition offered by firewalls is the changing nature of the way networks are used. As organizations implement cloud-based IT solutions, bring your own device (BYOD) options for employees, and other emerging network solutions, the network perimeter may be dissolving for them. One reaction is the use of a software-defined perimeter that uses secure VPN technology to deliver network connectivity only to verified devices, regardless of location. Regardless of the approach companies take to meet these challenges, they will often make use of expertise from other companies that offer managed security services (MSS).
These companies assist their clients with highly available monitoring services from secure network operations centers (NOCs). Many companies still rely on the defined network perimeter as their first line of network security defense. All firewall devices can be configured in several network connection architectures. These approaches are sometimes mutually exclusive, but sometimes they can be combined. The configuration that works best for a particular organization depends on three factors: the objectives of the network, the organization's ability to develop and implement the architectures, and the budget available for the function. Although hundreds of variations exist, three architectural implementations of firewalls are especially common: single bastion hosts, screened host firewalls, and screened subnet firewalls.

Single Bastion Hosts

The simplest option in firewall architecture is a single firewall that provides protection behind the organization's router. As you saw in Figure 6-10 earlier, the single bastion host architecture can be implemented as a packet-filtering router, or it could be a firewall behind a router that is not configured for packet filtering. Any system, router, or firewall that is exposed to the untrusted network can be referred to as a bastion host. The bastion host is sometimes referred to as a sacrificial host because it stands alone on the network perimeter. This architecture is simply defined as the presence of a single protection device on the network perimeter. It is commonplace in residential SOHO environments. Larger organizations typically look to implement architectures with more defense in depth, with additional security devices designed to provide a more robust defense strategy. The bastion host is usually implemented as a dual-homed host because it contains two network interfaces: one that is connected to the external network and one that is connected to the internal network.
All traffic must go through the device to move between the internal and external networks. Such an architecture lacks defense in depth, and the complexity of the ACLs used to filter the packets can grow and degrade network performance. An attacker who infiltrates the bastion host can discover the configuration of internal networks and possibly provide external sources with internal information. Implementation of the bastion host architecture often makes use of Network Address Translation (NAT). RFC 2663 uses the term network address and port translation (NAPT) to describe both NAT and Port Address Translation (PAT), which is described later in this section. NAT is a method of mapping valid, external IP addresses to special ranges of nonroutable internal IP addresses, known as private IPv4 addresses, to create another barrier to intrusion from external attackers. In IPv6 addressing, these addresses are referred to as Unique Local Addresses (ULA), as defined by RFC 4193. The internal addresses used by NAT consist of three different ranges. Organizations that need a large group of addresses for internal use will use the private IP address ranges reserved for nonpublic networks, as shown in Table 6-4. Messages sent with internal addresses within these three reserved ranges cannot be routed externally, so if a computer with one of these internal-use addresses is directly connected to the external network and avoids the NAT server, its traffic cannot be routed on the public network. Taking advantage of this, NAT prevents external attacks from reaching internal machines with addresses in specified ranges. If the NAT server is a multi-homed bastion host, it translates between the true, external IP addresses assigned to the organization by public network naming authorities and the internally assigned, nonroutable IP addresses. 
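The three reserved private ranges referenced above (Table 6-4) are easy to check programmatically. The sketch below uses Python's standard ipaddress module; it is an illustration of the address ranges themselves, not of how a NAT device is implemented.

```python
# The three RFC 1918 private IPv4 ranges reserved for internal, nonroutable use.
import ipaddress

PRIVATE_RANGES = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("172.16.0.0/12"),
    ipaddress.ip_network("192.168.0.0/16"),
]

def is_private_internal(addr):
    """Return True if the address falls in one of the reserved internal ranges."""
    a = ipaddress.ip_address(addr)
    return any(a in net for net in PRIVATE_RANGES)

print(is_private_internal("10.10.10.25"))   # True: cannot be routed externally
print(is_private_internal("172.31.0.1"))    # True: top of the 172.16.0.0/12 range
print(is_private_internal("8.8.8.8"))       # False: publicly routable
```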
NAT translates by dynamically assigning addresses to internal communications and tracking the conversations with sessions to determine which incoming message is a response to which outgoing traffic.

A variation on NAT is Port Address Translation (PAT). Where NAT performs a one-to-one mapping between assigned external IP addresses and internal private addresses, PAT performs a one-to-many assignment that allows the mapping of many internal hosts to a single assigned external IP address. The system is able to maintain the integrity of each communication by assigning a unique port number to the external IP address and mapping the address + port combination (known as a socket) to the internal IP address. Multiple communications from a single internal address would have a unique matching of the internal IP address + port to the external IP address + port, with unique port addresses for each communication. Figure 6-16 shows an example configuration of a dual-homed firewall that uses NAT to protect the internal network.

Screened Host Architecture

A screened host architecture combines the packet-filtering router with a separate, dedicated firewall, such as an application proxy server, which retrieves information on behalf of other system users and often caches copies of Web pages and other needed information on its internal drives to speed up access. This approach allows the router to prescreen packets to minimize the network traffic and load on the internal proxy. The application proxy examines an application layer protocol, such as HTTP, and performs the proxy services. Because an application proxy may retain working copies of some Web documents to improve performance, unanticipated losses can result if it is compromised and the documents were not designed for general access.
As such, the screened host firewall may present a promising target because compromise of the bastion host can lead to attacks on the proxy server that could disclose the configuration of internal networks and possibly provide attackers with internal information. To its advantage, this configuration requires an external attacker to compromise two separate systems before the attack can access internal data. In this way, the bastion host protects the data more fully than the router alone. Figure 6-17 shows a typical configuration of a screened host architecture.

Screened Subnet Architecture (with DMZ)

The dominant architecture today is the screened subnet used with a DMZ. The DMZ can be a dedicated port on the firewall device linking a single bastion host, or it can be connected to a screened subnet, as shown in Figure 6-18. Until recently, servers that provided services through an untrusted network were commonly placed in the DMZ. Examples include Web servers, file transfer protocol (FTP) servers, and certain database servers. More recent strategies using proxy servers have provided much more secure solutions. A common arrangement is a subnet firewall that consists of two or more internal bastion hosts behind a packet-filtering router, with each host protecting the trusted network. There are many variants of the screened subnet architecture. The first general model consists of two filtering routers, with one or more dual-homed bastion hosts between them. In the second general model, as illustrated in Figure 6-19, the connections are routed as follows:

Connections from the outside or untrusted network are routed through an external filtering router.
Connections from the outside or untrusted network are routed into, and then out of, a routing firewall to the separate network segment known as the DMZ.
Connections into the trusted internal network are allowed only from the DMZ bastion host servers.
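The routing policy just listed can be summarized as a small zone-to-zone permission table. This is a conceptual sketch only; the zone names and the table layout are invented for illustration, and a real screened subnet enforces these flows across multiple routers and hosts rather than in one lookup.

```python
# A toy sketch of the screened-subnet flow policy: outside traffic may reach
# only the DMZ, and only DMZ hosts may open connections into the trusted
# internal network.

ALLOWED_FLOWS = {
    ("untrusted", "dmz"),       # outside connections are routed into the DMZ
    ("dmz", "trusted"),         # only DMZ bastion hosts reach the inside
    ("trusted", "dmz"),         # internal users reach DMZ services
    ("trusted", "untrusted"),   # traffic from the trusted network is allowed out
}

def flow_permitted(src_zone, dst_zone):
    """Return True if a connection from src_zone to dst_zone is allowed."""
    return (src_zone, dst_zone) in ALLOWED_FLOWS

print(flow_permitted("untrusted", "dmz"))      # True
print(flow_permitted("untrusted", "trusted"))  # False: must traverse the DMZ
```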
The screened subnet architecture is an entire network segment that performs two functions. First, it protects the DMZ systems and information from outside threats by providing a level of intermediate security, which means the network is more secure than general public networks but less secure than the internal network. Second, the screened subnet protects the internal networks by limiting how external connections can gain access to them. Although extremely secure, the screened subnet can be expensive to implement and complex to configure and manage. The value of the information it protects must justify the cost.

Another facet of the DMZ is the creation of an area known as an extranet. An extranet is a segment of the DMZ where additional authentication and authorization controls are put into place to provide services that are not available to the general public. An example is an online retailer that allows anyone to browse the product catalog and place items into a shopping cart, but requires extra authentication and authorization when the customer is ready to check out and place an order.

Selecting the Right Firewall

When trying to determine the best firewall for an organization, you should consider the following questions:

1. Which type of firewall technology offers the right balance between protection and cost for the needs of the organization?
2. What features are included in the base price? What features are available at extra cost? Are all cost factors known?
3. How easy is it to set up and configure the firewall? Does the organization have staff on hand who are trained to configure the firewall, or would the hiring of additional employees (or contractors or managed service providers) be required?
4. Can the firewall adapt to the growing network in the target organization?
The most important factor, of course, is the extent to which the firewall design provides the required protection. The next most important factor is cost, which may keep a certain make, model, or type of firewall out of reach. As with all security decisions, certain compromises may be necessary to provide a viable solution under the budgetary constraints stipulated by management.

Configuring and Managing Firewalls

Key Term

configuration rules: The instructions a system administrator codes into a server, networking device, or security device to specify how it operates.

Once the firewall architecture and technology have been selected, the organization must provide for the initial configuration and ongoing management of the firewall(s). Good policy and practice dictate that each firewall device, whether a filtering router, bastion host, or other implementation, must have its own set of configuration rules. In theory, packet-filtering firewalls examine each incoming packet using a rule set to determine whether to allow or deny the packet. That set of rules is made up of simple statements that identify source and destination addresses and the type of requests a packet contains based on the ports specified in the packet. In practice, the configuration of firewall policies can be complex and difficult. IT professionals who are familiar with application programming can appreciate the difficulty of debugging both syntax errors and logic errors. Syntax errors in firewall policies are usually easy to identify, as the systems alert the administrator to incorrectly configured policies. However, logic errors, such as allowing instead of denying, specifying the wrong port or service type, and using the wrong switch, are another story. A myriad of simple mistakes can take a device designed to protect users' communications and turn it into one giant choke point.
A choke point that restricts all communications or an incorrectly configured rule can cause other unexpected results. For example, novice firewall administrators often improperly configure a virus-screening e-mail gateway to operate as a type of e-mail firewall. Instead of screening e-mail for malicious code, it blocks all incoming e-mail and causes a great deal of frustration among users.

Configuring firewall policies is as much an art as it is a science. Each configuration rule must be carefully crafted, debugged, tested, and placed into the firewall's rule base in the proper sequence. Good, correctly sequenced firewall rules ensure that the actions taken comply with the organization's policy. In a well-designed, efficient firewall rule set, rules that can be evaluated quickly and govern broad access are performed before rules that may take longer to evaluate and affect fewer cases. The most important thing to remember when configuring firewalls is that when security rules conflict with the performance of business, security often loses. If users can't work because of a security restriction, the security administration is usually told in no uncertain terms to remove the safeguard. In other words, organizations are much more willing to live with potential risk than certain failure.

Best Practices for Firewalls

This section outlines some of the best practices for firewall use. Note that these rules are not presented in any particular sequence. For sequencing of rules, refer to the next section.

All traffic from the trusted network is allowed out. This rule allows members of the organization to access the services they need. Filtering and logging of outbound traffic can be implemented when required by specific organizational policies.

The firewall device is never directly accessible from the public network for configuration or management purposes.
Almost all administrative access to the firewall device is denied to internal users as well. Only authorized firewall administrators access the device through secure authentication mechanisms, preferably via a method that is based on cryptographically strong authentication and uses two-factor access control techniques.

Simple Mail Transfer Protocol (SMTP) data is allowed to enter through the firewall, but is routed to a well-configured SMTP gateway to filter and route messaging traffic securely.

All Internet Control Message Protocol (ICMP) data should be denied, especially on external interfaces. ICMP, the protocol behind the ping service, is a common method for hacker reconnaissance and should be turned off to prevent snooping.

Telnet (terminal emulation) access should be blocked to all internal servers from the public networks. At the very least, Telnet access to the organization's Domain Name System (DNS) server should be blocked to prevent illegal zone transfers and to prevent attackers from taking down the organization's entire network.

If internal users need to access an organization's network from outside the firewall, the organization should enable them to use a virtual private network (VPN) client or other secure system that provides a reasonable level of authentication.

When Web services are offered outside the firewall, HTTP traffic should be blocked from reaching internal networks through the use of some form of proxy access or DMZ architecture. That way, if any employees are running Web servers for internal use on their desktops, the services are invisible to the outside Internet. If the Web server is behind the firewall, allow HTTP or HTTPS (HTTP secured with SSL/TLS) traffic so users on the Internet at large can view it.
The best solution is to place the Web servers that contain critical data inside the network and use proxy services from a DMZ (screened network segment), and to restrict Web traffic bound for internal network addresses to allow only those requests that originated from internal addresses. This restriction can be accomplished using NAT or other stateful inspection or proxy server firewalls. All other incoming HTTP traffic should be blocked. If the Web servers only contain advertising, they should be placed in the DMZ and rebuilt on a timed schedule or when—not if, but when—they are compromised.

All data that is not verifiably authentic should be denied. When attempting to convince packet-filtering firewalls to permit malicious traffic, attackers frequently put an internal address in the source field. To avoid this problem, set rules so that the external firewall blocks all inbound traffic with an organizational source address.

Firewall Rules

As you learned earlier in this chapter, firewalls operate by examining a data packet and performing a comparison with some predetermined logical rules. The logic is based on a set of guidelines programmed by a firewall administrator or created dynamically based on outgoing requests for information. This logical set is commonly referred to as firewall rules, a rule base, or firewall logic. Most firewalls use packet header information to determine whether a specific packet should be allowed to pass through or be dropped. Firewall rules operate on the principle of "that which is not permitted is prohibited," meaning that unless a rule expressly permits an action, it is denied. When your organization (or even your home network) uses certain cloud services, like backup providers or Application as a Service providers, or implements some types of device automation, such as the Internet of Things, you may have to make firewall rule adjustments.
This may include allowing remote servers access to specific on-premises systems or requiring firewall controls to block undesirable outbound traffic. When these special circumstances occur, you will need to understand how firewall rules are implemented. To better understand more complex rules, you must be able to create simple rules and understand how they interact. In the exercise that follows, many of the rules are based on the best practices outlined earlier. Note that some of the example rules may be implemented automatically by certain brands of firewalls. Therefore, it is imperative to become well trained on a particular brand of firewall before attempting to implement one in any setting outside of a lab.

For the purposes of this discussion, assume a network configuration as illustrated in Figure 6-19, with an internal and external filtering firewall. The exercise discusses the rules for both firewalls and provides a recap at the end that shows the complete rule sets for each filtering firewall. Note that separate access control lists are created for each interface on a firewall and are bound to that interface. This creates a set of unidirectional flow checks for dual-homed hosts, for example, which means that some of the rules shown here are designed for inbound traffic from the untrusted side of the firewall to the trusted side, and some rules are designed for outbound traffic from the trusted side to the untrusted side. It is important to ensure that the appropriate rule is used, as permitting certain traffic on the wrong side of the device can have unintended consequences. These examples assume that the firewall can process information beyond the IP level (TCP/UDP) and thus can access source and destination port addresses. If it could not, you could substitute the IP "Protocol" field for the source and destination port fields. Some firewalls can filter packets by protocol name as opposed to protocol port number.
For instance, Telnet protocol packets usually go to TCP port 23, but they can sometimes be redirected to another, much higher port number in an attempt to conceal the activity. The system (or well-known) port numbers are 0 through 1023, user (or registered) port numbers are 1024 through 49151, and dynamic (or private) port numbers are 49152 through 65535. See www.iana.org/assignments/port-numbers for more information. The example shown in Table 6-5 uses the port numbers associated with several well-known protocols to build a rule base.

Rule set 1: Responses to internal requests are allowed. In most firewall implementations, it is desirable to allow a response to an internal request for information. In stateful firewalls, this response is most easily accomplished by matching the incoming traffic to an outgoing request in a state table. In simple packet filtering, this response can be accomplished by setting the following rule for the external filtering router. (Note that the network address for the destination ends with .0; some firewalls use a notation of .x instead.) Use extreme caution in deploying this rule, as some attacks use port assignments above 1023. However, most modern firewalls use stateful inspection filtering, which makes this concern obsolete. The rule is shown in Table 6-6. It states that any inbound packet destined for the internal network and for a destination port greater than 1023 is allowed to enter. The inbound packets can have any source address and be from any source port. The destination address of the internal network is 10.10.10.0, and the destination port is any port beyond the range of well-known ports. Why allow all such packets? While outbound communications request information from a specific port (for example, a port 80 request for a Web page), the response is assigned a port number outside the well-known port range.
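The state-table matching just described can be sketched as a small simulation. This is a toy model, not a real firewall: the function names, addresses, and ports are invented for illustration, and it assumes that a response is simply the mirror image of a recorded outbound request arriving on a port above the well-known range.

```python
# Toy sketch of stateful inspection: an outbound request is recorded,
# and an inbound packet is admitted only if it is the reverse of a
# recorded request bound for a port above 1023.

state_table = set()  # records (src_ip, src_port, dst_ip, dst_port) of outbound requests

def record_outbound(src_ip, src_port, dst_ip, dst_port):
    """Note an outgoing request so its response can be matched later."""
    state_table.add((src_ip, src_port, dst_ip, dst_port))

def allow_inbound(src_ip, src_port, dst_ip, dst_port):
    """Admit an inbound packet only as a response to a tracked request."""
    if dst_port <= 1023:            # responses land on high ports only
        return False
    # The response's addresses are the request's addresses reversed.
    return (dst_ip, dst_port, src_ip, src_port) in state_table

# A browser at 10.10.10.50 requests a Web page from port 80...
record_outbound("10.10.10.50", 51234, "203.0.113.9", 80)
# ...so the server's reply is admitted:
print(allow_inbound("203.0.113.9", 80, "10.10.10.50", 51234))  # True
# An unsolicited packet aimed at the same high port is not:
print(allow_inbound("198.51.100.7", 80, "10.10.10.50", 51234))  # False
```

The second check is exactly what distinguishes stateful inspection from the simple packet-filtering rule in Table 6-6: the simple rule would admit both packets, because it looks only at the destination port.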
If multiple browser windows are open at the same time, each window can request a packet from a Web site, and the response is directed to a specific destination port, allowing the browser and Web server to keep each conversation separate. While this rule is sufficient for the external firewall, it is dangerous to allow any traffic in just because it is destined for a high port range. A better solution is to have the internal firewall use state tables that track connections and thus prevent dangerous packets from entering this upper port range. Again, this practice is known as stateful packet inspection. This is one of the rules allowed by default by most modern firewall systems.

Rule set 2: The firewall device is never accessible directly from the public network. If attackers can directly access the firewall, they may be able to modify or delete rules and allow unwanted traffic through. For the same reason, the firewall itself should never be allowed to access other network devices directly. If hackers compromise the firewall and then use its permissions to access other servers or clients, they may cause additional damage or mischief. The rules shown in Table 6-7 prohibit anyone from directly accessing the firewall, and prohibit the firewall from directly accessing any other devices. Note that this example is for the external filtering router and firewall only. Similar rules should be crafted for the internal router. Why are there separate rules for each IP address? The 10.10.10.1 address regulates external access to and by the firewall, while the 10.10.10.2 address regulates internal access. Not all attackers are outside the firewall! Note that if the firewall administrator needs direct access to the firewall from inside or outside the network, a permission rule allowing access from his or her IP address should preface this rule.
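Because firewalls apply the first rule that matches a packet, the administrator's permission rule must come before the blanket denial described above. A minimal first-match evaluator makes this concrete; the rules, field names, and addresses here are invented for the sketch and do not correspond to any particular firewall product.

```python
# Toy first-match rule evaluation. Rules are checked top to bottom;
# the first one that matches decides the packet's fate.

RULES = [
    # (source address, destination address, action); "*" matches anything.
    ("10.10.10.99", "10.10.10.1", "allow"),  # admin workstation may reach the firewall
    ("*",           "10.10.10.1", "deny"),   # nobody else may reach the firewall
    ("*",           "*",          "allow"),  # everything else passes (simplified)
]

def evaluate(src, dst):
    """Return the action of the first rule that matches this packet."""
    for rule_src, rule_dst, action in RULES:
        if rule_src in ("*", src) and rule_dst in ("*", dst):
            return action        # first match wins; later rules are ignored
    return "deny"                # cleanup: nothing matched, so deny

print(evaluate("10.10.10.99", "10.10.10.1"))  # allow (admin rule fires first)
print(evaluate("203.0.113.9", "10.10.10.1"))  # deny  (blanket denial fires)
```

If the two firewall rules were swapped, the blanket denial would match the administrator's packets first and lock the administrator out, which is why the permission rule must preface the denial.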
The interface can also be accessed from the opposite side of the device, as traffic would be routed through the box and "boomerang" back when it hits the first router on the far side. Thus, the rule protects the interfaces in both the inbound and outbound rule sets.

Rule set 3: All traffic from the trusted network is allowed out. As a general rule, it is wise not to restrict outbound traffic unless separate routers and firewalls are configured to handle it, to avoid overloading the firewall. If an organization wants control over outbound traffic, it should use a separate filtering device. The rule shown in Table 6-8 allows internal communications out, so it would be used on the outbound interface. Why should rule set 3 come after rule sets 1 and 2? It makes sense for rules that unambiguously affect the most traffic to be placed earlier in the list. The more rules a firewall must process to find one that applies to the current packet, the slower the firewall will run. Therefore, the most widely applicable rules should come first, because the firewall employs the first rule that applies to any given packet.

Rule set 4: The rule set for SMTP data is shown in Table 6-9. As shown, the packets governed by this rule are allowed to pass through the firewall, but are all routed to a well-configured SMTP gateway. It is important that e-mail traffic reach your e-mail server and only your e-mail server. Some attackers try to disguise dangerous packets as e-mail traffic to fool a firewall. If such packets can reach only the e-mail server, and it has been properly configured, the rest of the network ought to be safe. Note that if the organization allows home access to an internal e-mail server, it may want to implement a second, separate server to handle the POP3 protocol that retrieves mail for e-mail clients such as Outlook and Thunderbird. This is usually a low-risk operation, especially if e-mail encryption is in place.
More challenging is the transmission of e-mail using the SMTP protocol, a service that is attractive to spammers who may seek to hijack an outbound mail server.

Rule set 5: All ICMP data should be denied. Pings, formally known as ICMP Echo requests, are used by internal systems administrators to ensure that clients and servers can communicate. There is virtually no legitimate use for ICMP outside the network, except to test the perimeter routers. ICMP may be the first indicator of a malicious attack. It's best to make all directly connected networking devices "black holes" to external probes. A common networking diagnostic command in most operating systems is traceroute, which uses a variation of the ICMP Echo request, so restricting ICMP provides protection against multiple types of probes. Allowing internal users to use ICMP requires configuring two rules, as shown in Table 6-10. The first of these two rules allows internal administrators and users to use ping. Note that this rule is unnecessary if the firewall uses internal permissions rules like those in rule set 2. The second rule in Table 6-10 does not allow anyone else to use ping. Remember that rules are processed in order. If an internal user needs to ping an internal or external address, the firewall allows the packet and stops processing the rules. If the request does not come from an internal source, it bypasses the first rule and moves to the second.

Rule set 6: Telnet (terminal emulation) access should be blocked to all internal servers from the public networks. Though it is not used much in Windows environments, Telnet is still useful to systems administrators on UNIX and Linux systems. However, the presence of external requests for Telnet services can indicate an attack. Allowing internal use of Telnet requires the same type of initial permission rule you use with ping. See Table 6-11.
Again, this rule is unnecessary if the firewall uses internal permissions rules like those in rule set 2.

Rule set 7: When Web services are offered outside the firewall, HTTP and HTTPS traffic should be blocked from the internal networks via the use of some form of proxy access or DMZ architecture. With a Web server in the DMZ, you simply allow HTTP to access the Web server and then use the cleanup rule described later in rule set 8 to prevent any other access. To keep the Web server inside the internal network, direct all HTTP requests to the proxy server and configure the internal filtering router/firewall to allow only the proxy server to access the internal Web server. The rule shown in Table 6-12 illustrates the first example. This rule accomplishes two things: it allows HTTP traffic to reach the Web server, and the cleanup rule (rule 8) prevents non-HTTP traffic from reaching the Web server. If someone tries to access the Web server with non-HTTP traffic (other than port 80), the firewall skips this rule and goes to the next one.

Proxy server rules allow an organization to restrict all access to a device. The external firewall would be configured as shown in Table 6-13. The effective use of a proxy server requires that the DNS entries be configured as if the proxy server were the Web server. The proxy server is then configured to repackage any HTTP request packet into a new packet and retransmit it to the Web server inside the firewall. The retransmission of the repackaged request requires the rule shown in Table 6-14, which enables the proxy server at 10.10.10.5 to send to the internal router, assuming the IP address for the internal Web server is 10.10.10.8. Note that when an internal NAT server is used, the rule for the inbound interface uses the externally routable address, because the device performs rule filtering before it performs address translation.
For the outbound interface, however, the address is in the native 192.168.x.x format. The restriction on the source address then prevents anyone else from accessing the Web server from outside the internal filtering router/firewall.

Rule set 8: The cleanup rule. As a general practice in firewall rule construction, if a request for a service is not explicitly allowed by policy, that request should be denied by a rule. The rule shown in Table 6-15 implements this practice and blocks any requests that aren't explicitly allowed by other rules. This is another rule that is usually enabled by default on most modern firewall devices; it is included here for discussion purposes. Additional rules that restrict access to specific servers or devices can be added, but they must be sequenced before the cleanup rule. The specific sequence of the rules is crucial, because once a rule fires, its action is taken and the firewall stops processing the rest of the rules in the list. Misplacement of a particular rule can have unintended consequences and unforeseen results. One organization installed an expensive new firewall only to discover that the security it provided was too perfect—nothing was allowed in and nothing was allowed out! Not until the firewall administrators realized that the rules were out of sequence was the problem resolved.

Tables 6-16 through 6-19 show the rule sets in their proper sequences for both the external and internal firewalls. Note that the first rule prevents spoofing of internal IP addresses. The rule that allows responses to internal communications (rule 6 in Table 6-16) comes after the four rules prohibiting direct communications to or from the firewall (rules 2–5 in Table 6-16). In reality, rules 4 and 5 are redundant—rule 1 covers their actions. They are listed here for illustrative purposes. Next come the rules that govern access to the SMTP server, denial of ping and Telnet access, and access to the HTTP server.
If heavy traffic to the HTTP server is expected, move the HTTP server rule closer to the top (for example, into the position of rule 2), which would expedite rule processing for external communications. Rules 8 and 9 are actually unnecessary, because the cleanup rule would take care of their tasks. The final rule in Table 6-16 denies any other types of communications.

In the outbound rule set (see Table 6-17), the first rule allows the firewall, system, or network administrator to access any device, including the firewall. Because this rule is on the outbound side, you do not need to worry about external attackers. The next four rules prohibit access to and by the firewall itself, and the remaining rules allow outbound communications and deny all else.

Note the similarities and differences in the two firewalls' rule sets. The rule sets for the internal filtering router/firewall, shown in Tables 6-18 and 6-19, must both protect against traffic to the internal network (192.168.2.0) and allow traffic from it. Most of the rules in Tables 6-18 and 6-19 are similar to those in Tables 6-16 and 6-17: they allow responses to internal communications, deny communications to and from the firewall itself, and allow all outbound internal traffic. Because the 192.168.2.x network is an unroutable network, external communications are handled by the NAT server, which maps internal (192.168.2.0) addresses to external (10.10.10.0) addresses. This prevents an attacker from compromising one of the internal boxes and accessing the internal network with it. The exception is the proxy server, which is covered by rule 7 in Table 6-18 on the internal router's inbound interface. This proxy server should be very carefully configured. If the organization does not need it, as in cases where all externally accessible services are provided from machines in the DMZ, then rule 7 is not needed.
Note that Tables 6-18 and 6-19 have no ping and Telnet rules because the external firewall filters out these external requests. The last rule, rule 8, provides cleanup and may not be needed, depending on the firewall.

Content Filters

Key Terms

content filter A software program or hardware/software appliance that allows administrators to restrict content that comes into or leaves a network—for example, restricting user access to Web sites containing material that is not related to business, such as pornography or entertainment.

data loss prevention A strategy for gaining assurance that the users of a network do not send high-value or other critical information outside the network.

reverse firewall See content filter.

Besides firewalls, a content filter is another utility that can help protect an organization's systems from misuse and unintentional denial-of-service problems. A content filter is a software filter—technically not a firewall—that allows administrators to restrict access to content from within a network. It is essentially a set of scripts or programs that restricts user access to certain networking protocols and Internet locations, or that restricts users from receiving general types or specific examples of Internet content. Some content filters are combined with reverse proxy servers, which is why many are referred to as reverse firewalls: their primary purpose is to restrict internal access to external material. In most common implementation models, the content filter has two components: rating and filtering. The rating is like a set of firewall rules for Web sites and is common in residential content filters. The rating can be complex, with multiple access control settings for different levels of the organization, or it can be simple, with a basic allow/deny scheme like that of a firewall.
The filtering is a method used to restrict specific access requests to identified resources, which may be Web sites, servers, or other resources the content filter administrator configures. The result is like a reverse ACL (technically speaking, a capabilities table); an ACL normally records a set of users who have access to resources, whereas this control list records the resources that a user cannot access.

The first content filters were stand-alone software applications designed to restrict access to specific Web sites. These could be configured in either an exclusive or an inclusive manner. In exclusive mode, certain sites are specifically excluded; the problem with this approach is that an organization might want to exclude thousands of Web sites, and more might be added every hour. Inclusive mode works from a list of sites that are specifically permitted. In order to have a site added to the list, the user must submit a request to the content filter manager, which can be time-consuming and restrict business operations. Newer models of content filters are protocol-based, examining content as it is dynamically displayed and restricting or permitting access based on a logical interpretation of content.

The most common content filters restrict users from accessing Web sites that are obviously not related to business, such as pornography sites, or they deny incoming spam e-mail. Content filters can be small add-on software programs for the home or office, such as the ones recommended by TechRepublic: NetNanny, K9, SafeSquid, DansGuardian, and OpenDNS. Content filters can also be built into corporate firewall applications, such as Microsoft's Forefront Threat Management Gateway (TMG), which was formerly known as the Microsoft Internet Security and Acceleration (ISA) Server. Microsoft's Forefront TMG is actually a UTM firewall that has content filtering capabilities.
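The exclusive and inclusive modes described above amount to a blocklist versus an allowlist. A small sketch shows the difference; the site names are invented for illustration and do not reflect any real filter product's API.

```python
# Sketch of exclusive (blocklist) vs. inclusive (allowlist) content
# filtering modes. Site names are made up for this example.

BLOCKLIST = {"games.example", "videos.example"}       # exclusive mode list
ALLOWLIST = {"intranet.example", "supplier.example"}  # inclusive mode list

def exclusive_allows(site: str) -> bool:
    """Exclusive mode: everything is permitted except the listed sites."""
    return site not in BLOCKLIST

def inclusive_allows(site: str) -> bool:
    """Inclusive mode: nothing is permitted except the listed sites."""
    return site in ALLOWLIST

# A brand-new site slips past the blocklist but not the allowlist:
print(exclusive_allows("brand-new-site.example"))  # True
print(inclusive_allows("brand-new-site.example"))  # False
```

The last two lines capture the trade-off in the text: exclusive mode cannot keep up with sites "added every hour," while inclusive mode is safer but forces users to request additions to the permitted list.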
The benefit of implementing content filters is the assurance that employees are not distracted by nonbusiness material and cannot waste the organization's time and resources. The downside is that these systems require extensive configuration and ongoing maintenance to keep the list of unacceptable destinations, or the source addresses for restricted incoming e-mail, up to date. Some newer content filtering applications, like newer antivirus programs, come with a subscription service of downloadable files that update the database of restrictions. These applications work by matching a list of disapproved or approved Web sites and by matching key content words, such as "nude," "naked," and "sex." Of course, creators of restricted content have realized this and work to bypass the restrictions by suppressing such words, creating additional problems for networking and security professionals. One use of content filtering technology is to implement data loss prevention: network traffic is monitored and analyzed, and if patterns of use and keyword analysis reveal that high-value information is being transferred, an alert may be raised or the network connection may be interrupted.

Protecting Remote Connections

The networks that organizations create are seldom used only by people at one location. When connections are made between networks, the connections are arranged and managed carefully. Installing such network connections requires using leased lines or other data channels provided by common carriers; therefore, these connections are usually permanent and secured under the requirements of a formal service agreement. However, a more flexible option for network access must be provided for employees working in their homes, contract workers hired for specific assignments, and other workers who are traveling. In the past, organizations provided these remote connections exclusively through dial-up services like Remote Access Service (RAS).
As high-speed Internet connections have become mainstream, other options such as virtual private networks (VPNs) have become more popular. However, a 2015 CNN Money report indicates that over 2.1 million subscribers still use dial-up services through America Online. Why? According to a 2009 report from the Pew Research Center, 32 percent of dial-up users report that they can't afford the upgrade.

Remote Access

Key Terms

Kerberos An authentication system that uses symmetric key encryption to validate an individual user's access to various network resources by keeping a database containing the private keys of clients and servers that are in the authentication domain it supervises.

Remote Authentication Dial-In User Service (RADIUS) A computer connection system that centralizes the management of user authentication by placing the responsibility for authenticating each user on a central authentication server.

war dialer An automatic phone-dialing program that dials every number in a configured range (for example, 555–1000 to 555–2000) and checks whether a person, answering machine, or modem picks up.

Before the Internet emerged, organizations created their own private networks and allowed individual users and other organizations to connect to them using dial-up or leased-line connections. In the current networking environment, where high-speed Internet connections are commonplace, dial-up access and leased lines from customer networks are increasingly rare, and the connections between company networks and the Internet use firewalls to safeguard that interface. Even so, dial-up and leased-line connections have not disappeared entirely, and a widely held view is that these unsecured dial-up connection points represent a substantial exposure to attack. An attacker who suspects that an organization has dial-up lines can use a device called a war dialer to locate the connection points.
A war dialer dials every number in a configured range, such as 555–1000 to 555–2000, and checks to see if a person, answering machine, or modem picks up. If a modem answers, the war dialer program makes a note of the number and then moves to the next target number. The attacker then attempts to hack into the network via the identified modem connection using a variety of techniques. Dial-up network connectivity is usually less sophisticated than that deployed with Internet connections. For the most part, simple username and password schemes are the only means of authentication. However, some technologies, such as RADIUS systems, TACACS, and CHAP password systems, have improved the authentication process, and some systems now use strong encryption.

RADIUS, Diameter, and TACACS

RADIUS and TACACS are systems that authenticate the credentials of users who are trying to access an organization's network via a dial-up connection. Typical dial-up systems place the responsibility for user authentication on the system directly connected to the modems. If there are multiple points of entry into the dial-up system, this authentication scheme can become difficult to manage. The Remote Authentication Dial-In User Service (RADIUS) system centralizes the responsibility for authenticating each user on the RADIUS server. RADIUS was initially described in RFCs 2058 and 2059, and is currently described in RFCs 2865 and 2866, among others. When a network access server (NAS) receives a request for a network connection from a dial-up client, it passes the request and the user's credentials to the RADIUS server. RADIUS then validates the credentials and passes the resulting decision (accept or deny) back to the requesting network access server. Figure 6-20 shows the typical configuration of an NAS system. An emerging alternative derived from RADIUS is the Diameter protocol.
The Diameter protocol defines the minimum requirements for a system that provides authentication, authorization, and accounting (AAA) services, and it can go beyond these basics by adding commands and/or object attributes. Diameter security uses respected encryption standards such as Internet Protocol Security (IPSec) or Transport Layer Security (TLS); its cryptographic capabilities are extensible and will be able to use future encryption protocols as they are implemented. Diameter-capable devices are emerging in the marketplace, and this protocol is expected to become the dominant form of AAA services.

The Terminal Access Controller Access Control System (TACACS), defined in RFC 1492, is another remote access authorization system that is based on a client/server configuration. Like RADIUS, it contains a centralized database, and it validates the user's credentials at this TACACS server. The three versions of TACACS are the original version, Extended TACACS, and TACACS+. Of these, only TACACS+ is still in use. The original version combines authentication and authorization services. The extended version separates the steps needed to authenticate individual user or system access attempts from the steps needed to verify that the authenticated individual or system is allowed to make a given type of connection. The extended version also keeps records for accountability and to ensure that each access attempt is linked to a specific individual or system. The TACACS+ version uses dynamic passwords and incorporates two-factor authentication.

Kerberos

Two authentication systems can provide secure third-party authentication: Kerberos and SESAME. Kerberos—named after the three-headed dog of Greek mythology that guards the gates to the underworld—uses symmetric key encryption to validate an individual user to various network resources.
Kerberos, as described in RFC 4120, keeps a database containing the private keys of clients and servers—in the case of a client, this key is simply the client's encrypted password. Network services running on servers in the network register with Kerberos, as do the clients that use those services. The Kerberos system knows these private keys and can authenticate one network node (client or server) to another. For example, Kerberos can authenticate a user once—at the time the user logs in to a client computer—and then, later during that session, it can authorize the user to have access to a printer without requiring the user to take any additional action. Kerberos also generates temporary session keys, which are private keys given to the two parties in a conversation. The session key is used to encrypt all communications between these two parties. Typically, a user logs in to the network, is authenticated to the Kerberos system, and is then authenticated to other resources on the network by the Kerberos system itself.

Kerberos consists of three interacting services, all of which use a database library:

1. Authentication server (AS), a Kerberos server that authenticates clients and servers.

2. Key Distribution Center (KDC), which generates and issues session keys.

3. Kerberos ticket granting service (TGS), which provides tickets to clients who request services.

In Kerberos, a ticket is an identification card for a particular client that verifies to the server that the client is requesting services and that the client is a valid member of the Kerberos system and therefore authorized to receive services. The ticket consists of the client's name and network address, a ticket validation starting and ending time, and the session key, all encrypted in the private key of the server from which the client is requesting services. Kerberos is based on the following principles: The KDC knows the secret keys of all clients and servers on the network.
The KDC initially exchanges information with the client and server by using these secret keys. Kerberos authenticates a client to a requested service on a server through the TGS and by issuing temporary session keys for communications between the client and KDC, the server and KDC, and the client and server. Communications then take place between the client and server using these temporary session keys. Figures 6-21 and 6-22 illustrate this process. If the Kerberos servers are subjected to denial-of-service attacks, no client can request services. If the Kerberos servers, service providers, or clients' machines are compromised, their private key information may also be compromised.

SESAME

The Secure European System for Applications in a Multivendor Environment (SESAME) is the result of a European research and development project partly funded by the European Commission. SESAME is similar to Kerberos in that the user is first authenticated to an authentication server and receives a token. The token is then presented to a privilege attribute server, instead of a ticket-granting service as in Kerberos, as