Chapter 17: Data Center Architecture and Cloud Concepts

THE FOLLOWING COMPTIA NETWORK+ EXAM OBJECTIVES ARE COVERED IN THIS CHAPTER:

Domain 1.0 Networking Concepts

1.3 Summarize cloud concepts and connectivity options.
- Network functions virtualization (NFV)
- Virtual private cloud (VPC)
- Network security groups
- Network security lists
- Cloud gateways
  - Internet gateway
  - Network address translation (NAT) gateway
- Cloud connectivity options
  - VPN
  - Direct Connect
- Deployment models
  - Public
  - Private
  - Hybrid
- Service models
  - Software as a service (SaaS)
  - Infrastructure as a service (IaaS)
  - Platform as a service (PaaS)
- Scalability
- Elasticity
- Multitenancy

1.8 Summarize evolving use cases for modern network environments.
- Software-defined network (SDN) and software-defined wide area network (SD-WAN)
  - Application-aware
  - Zero-touch provisioning
  - Transport agnostic
  - Central policy management
- Virtual Extensible Local Area Network (VXLAN)
  - Data Center Interconnect (DCI)
  - Layer 2 encapsulation
- Zero trust architecture (ZTA)
  - Policy-based authentication
  - Authorization
  - Least privilege access
- Secure Access Secure Edge (SASE)/Security Service Edge (SSE)
- Infrastructure as code (IaC)
  - Automation
    - Playbooks/templates/reusable tasks
    - Configuration drift/compliance
    - Upgrades
    - Dynamic inventories
  - Source control
    - Version control
    - Central repository
    - Conflict identification
    - Branching

The traditional compute model was based on a one-to-one relation of application to server. However, most applications use only a fraction of the compute resources during idle periods, and all applications collectively seldom use all of the compute resources at the same time. Virtualization allows us to partition compute resources for each guest operating system (OS) supporting an application running on the host hardware. The hypervisor performs this partitioning, which allows each OS to operate as if it had exclusive control of the host hardware. Compute resources consist of the central processing unit (CPU), memory, and devices related to a physical server.

Cloud services allow us to pool together the resources of each host server providing virtualization. When the resources of compute, network, and storage are pooled together, the cloud gains fault tolerance and scale. This allows us to lose a host and still maintain the ability to compute the workload of the guest operating systems supporting our applications. It also allows us to add compute, network, and storage resources to scale the cloud out for additional workloads. This scaling of workloads is referred to as elasticity.

The cloud model is based on a many-to-many model where the exact location of the resources doesn't matter to the end user. We can create an application by allocating available resources to a guest OS from a pool of resources. The guest OS will then gain the fault tolerance of the cloud along with the added benefit of the cloud's elasticity. In this chapter, you will learn about the data center where your private cloud would be located, as well as the public cloud.

To find Todd Lammle CompTIA videos and practice questions, please see www.lammle.com.

Cloud Computing

Cloud computing is by far one of the hottest topics in today's IT world. Basically, cloud computing can provide virtualized processing, storage, and computing resources to users remotely, making the resources transparently available regardless of the user connection.
To put it simply, some people just refer to the cloud as "someone else's hard drive." This is true, of course, but the cloud is much more than just storage.

The history of the consolidation and virtualization of our servers tells us that this has become the de facto way of implementing servers because of basic resource efficiency. Two physical servers will use twice the amount of electricity as one server, but through virtualization, one physical server can host two (or more) virtual machines, which is the reason for the main thrust toward virtualization. With it, network components can simply be shared more efficiently.

Users connecting to a cloud provider's network, whether it be for storage or applications, really don't care about the underlying infrastructure because as computing becomes a service rather than a product, it's then considered an on-demand resource. Centralization/consolidation of resources, automation of services, virtualization, and standardization are just a few of the big benefits cloud services offer.

Cloud computing has several advantages over the traditional use of computer resources. The following are the advantages to a cloud service builder or provider:

- Cost reduction, standardization, and automation
- High utilization through virtualized, shared resources
- Easier administration
- Fall-in-place operations model

The following are the advantages to cloud users:

- On-demand, self-service resource provisioning
- Fast deployment cycles
- Cost effective
- Centralized appearance of resources
- Highly available, horizontally scaled application architectures
- No local backups

Having centralized resources is critical for today's workforce. For example, if you have your documents stored locally on your laptop and your laptop gets stolen, you're pretty much screwed unless you're doing constant local backups. That is so 2005! After I lost my laptop and all the files for the book I was writing at the time, I swore (yes, I did that too) to never have my files stored locally again. I started using only Google Drive, OneDrive, and Dropbox for all my files, and they became my best backup friends. If I lose my laptop now, I just need to log in from any computer from anywhere to my service provider's logical drives, and presto, I have all my files again. This is clearly a simple example of using cloud computing, specifically SaaS (which is discussed next), and it's wonderful!

So, cloud computing provides for the sharing of resources, lower cost operations passed to the cloud consumer, computing scaling, and the ability to dynamically add new servers without going through the procurement and deployment process.

Characteristics of a Cloud

The National Institute of Standards and Technology (NIST) defines cloud computing with five distinct characteristics: on-demand self-service, broad network access, resource pooling, rapid elasticity, and measured service. Any service that contains these characteristics can be considered a cloud-based service or application.

On-Demand Self-Service
A customer can provision computer capabilities and resources, such as CPU, memory, storage, the network, instances of a virtual machine, or any other component of the service, including the service itself, without any human interaction.

Broad Network Access
The capabilities are accessible over a network and are not a contrived system like the old mainframe systems, where you needed a proprietary connection. Broad access includes the device as well, such as mobile devices, laptops, and desktop computers, just to name a few.
Resource Pooling
The intent of cloud computing is to time-share a pool of resources over many virtual instances. If it is a public cloud, the resource pools can be allotted by customer or organization. If it is a private cloud, then chances are the resource pool will be allotted to virtual instances in the same organization.

Rapid Elasticity
Computer capabilities can be elastically provisioned based on the customer's requirements at the time, such as load. The same capabilities can be released when the customer requires fewer resources. An example of rapid elasticity is a web-based company that requires additional capacity during a peak busy time. The resources can be allocated during the peak and deallocated when the traffic reaches a nominal level.

Measured Service
Any cloud service should have the capability to meter the resources of CPU, network, storage, and accounts, just to name a few. In addition, most cloud services charge based on any or all of these resources. Resource usage should be monitored, reported, and ultimately controlled without the consumer ever realizing that any of these are being applied.

The five characteristics of cloud computing can be found in the NIST publication SP 800-145. This document is titled "The NIST Definition of Cloud Computing" and it sets the guidelines for cloud computing. The document can be accessed at https://csrc.nist.gov/publications/detail/sp/800-145/final.

Cloud Delivery Models

When we discuss the cloud, names like Amazon AWS and Microsoft Azure come to mind. However, anyone can own their own cloud as long as the resources meet the criteria of the NIST standard for cloud computing. We can classify the ownership of these models within the four main categories of public, private, hybrid, and community.

I often find that companies will begin entering into the cloud via a public cloud provider. Using these public clouds is like renting compute power. The costs are charged to an operational expense budget because there is no equity in the service, much like renting a house. Once companies realize the savings of virtualization, they often purchase the equipment to transform into a private cloud. The purchase of the equipment is a capital investment because we have equity in the equipment, much like owning a house.

Private
The private cloud model is defined as cloud infrastructure that is provisioned for exclusive use by a single organization, as shown in Figure 17.1. It can be owned, managed, and operated by the organization, a third party, or a combination of both. The infrastructure can also be located on- or off-premises. This makes the cloud resources exclusive to the owner.

FIGURE 17.1 A private cloud

There are several reasons to move to a private cloud deployment, such as regulations, privacy, monetary and budgetary impact, and overall control. Private clouds give the owner ultimate control of the cloud and its design. Sometimes the public cloud may not offer certain features or hardware that a private cloud can be built to support. The creation of the private cloud might not be for purposes of new technology; it could be designed to support legacy systems that may not be compatible with public cloud offerings.

The private cloud model has the advantage of ultimate control, with a price that is not immediately evident. When equipment such as compute, network, and storage is purchased, the company must forecast growth over a nominal five- to seven-year period.
In a public cloud, resources can be purchased on demand and relinquished when not needed, but in the private cloud model, we must acquire these additional resources and are burdened with their ownership. Obsolescence of the equipment must also be considered, because the average expected life of compute, network, and storage resources is usually five to seven years. Private clouds often need hardware refreshes every five to seven years because of newer features or end-of-life warranties.

Public
The public cloud model is defined as infrastructure that is provisioned for open use by the general public. It can be owned, managed, and operated by a business entity, government organization, or a combination thereof. However, the infrastructure exists on the premises of the cloud provider, as shown in Figure 17.2.

FIGURE 17.2 A public cloud

The public cloud is often a public marketplace for compute, network, and storage in which you can rent or lease compute time. This compute time, of course, is segregated from other customers, so there is a level of isolation between customers on the same infrastructure. Examples of public cloud providers are Amazon Web Services (AWS), Microsoft Azure, and Google Cloud; these are just a few providers, and the list grows every day.

A benefit of the public cloud is the pay-as-you-go utility model. You can purchase the compute power you need for a period of time. You are charged only for the compute time that you use or purchase, and there is no initial capital investment on the part of the customer. Another benefit of the public cloud is the elasticity of compute, network, and storage resources. If a customer is an online retailer and needs extra compute power for the holiday season, the customer can purchase more scale-out capacity, and when the busy period is over, they can relinquish the resources.

A disadvantage of the public cloud is the lack of control over hardware configuration. If custom hardware is required, then the public cloud is not an option. Heavily regulated industries might not be able to use the public cloud because of restrictions on where data can be stored and who can access it.

Hybrid
The hybrid cloud model is a combination of both the private and public cloud models. It is the most popular model because many businesses leverage public cloud providers while maintaining their own infrastructure, as shown in Figure 17.3.

FIGURE 17.3 A hybrid cloud

Many cloud providers now offer integration for private cloud software, such as Microsoft Hyper-V and VMware vSphere. This integration allows private clouds to gain the on-demand elasticity of the public cloud. When a private cloud uses the public cloud for elasticity of resources or additional capacity, it is called cloud bursting.

Types of Services

The term cloud has become a ubiquitous buzzword in IT, applied to anything involving hosted services. However, the National Institute of Standards and Technology (NIST) has defined three service types for cloud computing: software as a service (SaaS), platform as a service (PaaS), and infrastructure as a service (IaaS). There are many more service types than the three mentioned, but they are not defined by the NIST standards for cloud computing. That doesn't mean they are just buzzwords; it just means that NIST believes they fit into one of the three categories already. An example of this is a cloud provider that offers disaster recovery as a service (DRaaS); this service would fit into the IaaS service type.
In addition to the NIST standard model, I will cover an emerging aaS (as a service) model called desktop as a service (DaaS), which is quickly becoming a popular offering among providers.

SaaS
Software as a service is one of the oldest models of cloud computing, existing before the term was created. It dates back to the dial-up services of the 1980s and 1990s like CompuServe, AOL, and the Dow Jones stock service. Today, SaaS providers are accessed through a web browser, such as the services of Twitter, Facebook, and Gmail.

SaaS is any application that you use but do not own or maintain. The application is the provided service and is maintained by the service provider on its cloud. Facebook and Gmail are popular examples; you use their services and never have to worry about the underlying infrastructure. Social media and email are not the only examples of SaaS. There are many others that you might not even think of, such as Webex by Cisco and GitHub. The provider extends these services to you as either a pay-as-you-go, contract, or free service.

PaaS
Platform as a service is another model of cloud computing. PaaS allows a customer to generate code on the provider's platform that can be executed. Web hosting providers like GoDaddy and A Small Orange are examples of PaaS. You can purchase web hosting from these providers and set up a WordPress or custom web application using PHP or ASP.NET.

Web hosting is not the only example of PaaS. Google App Engine is a platform that allows an application to be coded in Java, Python, or PHP. The application is then executed on Google's PaaS cloud; web hosts can even provide storage services such as SQL databases. SaaS applications can be produced on a PaaS platform. Evernote is hosted as of this writing on Google's cloud platform. Evernote is a SaaS application that allows the collecting and sharing of ideas across various mobile and desktop devices. Google App Engine is not the only PaaS provider. Amazon Web Services and Microsoft Azure are other examples, and countless other providers have begun to offer PaaS as well.

Applications are isolated between customers. The processes are allotted resources by the customer and can scale out for demand. PaaS providers generally charge the customer according to CPU cycles used.

IaaS
Infrastructure as a service is the established model of computing that we generally associate with cloud computing. Amazon Web Services, Microsoft Azure, and Rackspace are just a few providers. Customers are allowed to use the provider's infrastructure of compute, network, and storage. When the customer needs IaaS, it is as simple as purchasing an instance of compute resources, choosing an operating system and region of the world for the instance, and connecting to it. The customer will not know the exact host server, network equipment, or storage the guest VM is running upon. All of the worries of the infrastructure are left up to the provider.

Computing resources are not the only services that you can purchase from a cloud provider. For example, Amazon Web Services and Microsoft Azure offer backup services. You can purchase space on the provider's cloud and back up straight to it. Any infrastructure that you would normally purchase as a capital expense (lease or own) can be converted into an operational expense (rent) via services from the provider. Whenever I am looking to purchase physical infrastructure, I incorporate IaaS into my cost analysis.
However, you must also weigh the nonmonetary savings, such as infrastructure maintenance and overall administration of the infrastructure. You must ask yourself, is it better to own this infrastructure or rent this infrastructure long term?

DaaS
Desktop as a service is the latest offering by cloud providers, such as Microsoft and VMware, just to name a couple. In a normal compute model, the end-user computer, also known as the edge device, processes the data. This also means that the edge device can retain a copy of the data and that data can be copied off onto a USB drive. The edge device can be stolen, and depending on the data, it might mean that your company has to report it as a data loss. Another emerging threat is for the end-user computer to get infected and ransom the data. These scenarios can cost a company a lot of money to remediate.

DaaS doesn't solve all the problems, but it does give the administrator a lot more control by pushing the processing of data to the cloud. Because the edge device is no longer responsible for processing the data, it can be a repurposed computer, tablet, Chromebook, or any other device. These devices are called thin clients because the only software they need to support is the client for DaaS. In many cases all that a person needs is a web browser to access the desktop in the cloud. This allows for mobility and flexibility in provisioning desktops for workers. You can even scale up or down depending on usage. Coupled with a bring your own device (BYOD) policy, a company could save some real money.

Administrators gain the greatest amount of control when an organization decides to switch to DaaS. Data can be tightly controlled by turning off clipboard sharing between the thin client and the virtual desktop. USB access can be controlled, and printing can be controlled; these are just a few examples. Security patches can be centrally controlled and quickly installed as they are released. Antivirus and antimalware can be managed and monitored to thwart ransomware attempts. The best part is that, with the right mix of policies, the data remains in the cloud and never makes it to the edge device, so there is little chance of data loss, and costly proceedings can be avoided.

Network Function Virtualization

Network functions such as firewalls and routing can all be virtualized inside the hypervisor. They operate just like their physical versions, but we don't have to worry about power supplies failing, CPUs going bad, or anything else that can cause a physical network device to fail. We do have to worry about the host that runs the virtual network functions; however, redundancy is built into many hypervisors. Personally, I prefer to virtualize as many functions as I possibly can.

The following are the most common network function virtualization (NFV) types you will encounter, but the list grows every day. If you need it for your on-premises infrastructure, then it can be virtualized and put in the cloud.

Virtual Firewall
A virtual firewall is similar to a physical firewall. It can be a firewall appliance installed as a virtual machine or a kernel mode process in the hypervisor. When installed as a firewall appliance, it performs the same functions as a traditional firewall. In fact, many of the traditional firewalls today are offered as virtual appliances.
When virtualizing a firewall, you gain the fault tolerance of the entire virtualization cluster for the firewall, compared to a physical firewall, where your only option for fault tolerance may be to purchase another unit and cluster it together. As an added benefit, when a firewall is installed as a virtual machine, it can be backed up and treated like any other VM.

A virtual firewall can also be used as a hypervisor virtual kernel module. These modules have become popular with the expansion of software-defined networking. Firewall rules can be configured for layer 2 MAC addresses or protocol along with traditional layer 3 and layer 4 rules. Virtual firewall kernel modules use policies that apply to all hosts in the cluster. The important difference between virtual firewall appliances and virtual firewall kernel modules is that the traffic never leaves the host when a kernel module is used. With a virtual firewall appliance, by contrast, the traffic might need to leave the current host to go to the host that is actively running the virtual firewall appliance.

Virtual Router
The virtual router is identical to a physical router in just about every respect. It is commonly loaded as a VM appliance to facilitate layer 3 routing. Many companies that sell network hardware have come up with unique features that run on their virtual routing appliances; these features include VPN services, BGP routing, and bandwidth management, among others. The Cisco Cloud Services Router (CSR) 1000v is a virtual router that is sold and supported by cloud providers such as Amazon and Microsoft Azure. Juniper also offers a virtual router called the vMX router, and Juniper advertises it as a carrier-grade virtual router.

Virtual Switch
A virtual switch (vSwitch) is similar to a physical switch, but it is a built-in component of your hypervisor. It differs in a few respects; the first is the number of ports. On a physical switch, you have a defined number of ports. If you need more ports, you must upgrade the switch or replace it entirely. A virtual switch is scalable compared to its physical counterpart; you can simply add more ports.

Virtual Private Cloud
A virtual private cloud (VPC) is a cloud environment that uses all virtual functions. It is private in the sense that it is available only to the tenant, although it might be located within a public cloud. By means of authentication and encryption, the organization's remote access to its VPC resources is secured from other tenants.

Connectivity Options

By default, traffic into and out of your public cloud traverses the Internet. This is a good solution in many cases, but if you require additional security when accessing your cloud resources and exchanging your data, there are two common solutions that we will discuss. The first is a virtual private network (VPN) that sends data securely over the Internet or dedicated connections. The second solution is to install a private non-Internet connection, over which a direct connection can be configured.

Virtual Private Network
Cloud providers offer site-to-site VPN options that allow you to establish a secure and protected network connection across the public Internet. The VPN connection verifies that both ends of the connection are legitimate and then establishes encrypted tunnels to route traffic from your data center to your cloud resources. If a bad actor intercepts the data, they will not be able to read it due to the encryption of the traffic.
VPNs can be configured with redundant links to back up each other or to load-balance the traffic for higher-speed interconnections. Another type of VPN allows desktops, laptops, tablets, and other devices to establish individual secure connections into your cloud deployment.

Private Direct Connection
A dedicated circuit can be ordered and installed between your data center and an interconnection provider or directly to the cloud company. This provides a secure, low-latency connection with predictable performance. Direct connection speeds usually range from 1 Gbps to 10 Gbps and can be aggregated together. For example, four 10 Gbps circuits can be installed from your data center to the cloud company for a total aggregate bandwidth of 40 Gbps. It is also a common practice to establish a VPN connection over the private link for encryption of data in transit.

There are often many options when connecting to the cloud provider that allow you to specify which geographic regions to connect to as well as which areas inside of each region, such as storage systems or your private virtual cloud. Internet exchange providers maintain dedicated high-speed connections to multiple cloud providers and will connect a dedicated circuit from your facility to the cloud providers as you specify.

There are several ways to connect to a virtual server that is in a cloud environment:

Remote Desktop: While the VPN connection connects you to the virtual network, an RDP connection can be made directly to a server. If the server is a Windows server, then you will use the Remote Desktop Connection (RDC) client. If it is a Linux server, then the connection will most likely be an SSH connection to the command line.

File Transfer Protocol (FTP) and Secure File Transfer Protocol (SFTP): The FTP or SFTP server will need to be enabled on the Windows/Linux server, and then you can use an FTP/SFTP client or work at the command line. This is best when performing bulk data downloads.

VMware Remote Console: This allows you to mount a local DVD, hard drive, or USB drive to the virtual server. This is handy for uploading ISO or installation media to the cloud server.

Cloud Gateways

A gateway, either physical or virtual, arbitrates access to a set of resources, and in the case of a cloud gateway, it connects local applications to cloud-based storage. In some cases, it also enables a legacy application that cannot speak the same language as the public cloud to interact with it by translating between the cloud and the traditional storage-area network (SAN) or network-attached storage (NAS). This section provides additional examples of gateways and the functions they can perform.

Internet Gateway
An Internet gateway is a physical or virtual system that stands between a LAN and the Internet. While in a home situation this is provided by the Internet service provider (ISP), in an enterprise network, the organizational IT team will probably configure this device. One of the options is to implement a proxy server, to which all Internet traffic is directed. The proxy server makes the connection to the web server on behalf of the source device and then returns the results to the source device. From a security standpoint, this is beneficial in that it will appear to the outside world that all traffic is coming from the proxy server and not the original device.
Network Address Translation Gateway
Network address translation (NAT) is a feature found in firewalls and many router platforms that allows for the translation of private IP addresses to public IP addresses at the network edge. While one of the driving forces behind the development of NAT was the conservation of the public IPv4 address space, it also has a security component in that the process helps to hide the interior addressing scheme. There are three types of NAT that can be implemented.

In static NAT, each private IP address is mapped to a public IP address. While this does not save any of the public IPv4 address space, it does have the benefit of hiding your internal network address scheme from the outside world.

In dynamic NAT, a pool of public IP addresses is obtained that is at least equal to the number of private IP addresses that require translation. However, rather than permanently mapping the private IP addresses to the public IP addresses, the NAT device maps the public IP addresses from the pool on a dynamic basis, much like a DHCP server does when assigning IP addresses.

Finally, port address translation (PAT) is a form of NAT in which all private IP addresses are mapped to a single public IP address. This provides both benefits: saving the IPv4 address space and hiding the network address scheme. This system is called port address translation because the ephemeral port numbers that devices choose as the source port for a connection (which are chosen randomly from the upper ranges of the port numbers) are used to identify each source computer in the network. This is required since all devices are mapped to the same public IP address.

Multitenancy

The term tenant is used to describe a group of users or devices that share a common pool of resources or common access with specific privileges. A popular example of a tenant is the Microsoft 365 platform. When you sign up your organization and associate it with Microsoft 365, each user or device you add is grouped into your tenant. You can then manage the privileges for your users across your entire organization. Another classic example of a tenant is when you create a private virtualization cloud in your organization with Hyper-V or VMware vSphere. You have a single tenant that can share the pool of resources, and you can define policies for all of the VMs across your organization.

Now that we have a broad definition of a tenant, let's get into what a multitenant platform is. Simply put, it's a platform that supports multiple tenants at the same time. Microsoft 365 is a great example of this. When your organization is created, it is scaled over many different servers. These servers also have other organizations (tenants) processing on them, so they are considered multitenant.

Another example of a multitenant platform is a virtualization cloud. It is possible to have two organizations defined on the same set of hardware. Using resource pool policies, the multiple tenants can be configured so they do not affect each other. This is common practice with ISPs that offer a hosted virtualization service. There is often middleware that allows the users to create and modify VMs without directly accessing the hypervisor platform.
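To make the resource-pool idea a bit more concrete, here is a minimal Python sketch of per-tenant limits being enforced at allocation time. The tenant names and limit values are invented for illustration and are not taken from any particular hypervisor's policy engine.

# A toy per-tenant resource pool: each tenant gets a vCPU limit, and an
# allocation that would exceed it is refused, so one tenant cannot starve
# another. Real hypervisors express this with reservations, limits, and
# shares, but the principle is the same.
tenant_limits = {"tenant-a": 32, "tenant-b": 16}   # vCPU limits (illustrative)
tenant_usage = {"tenant-a": 0, "tenant-b": 0}

def allocate_vcpus(tenant: str, vcpus: int) -> bool:
    """Grant the request only if it stays within the tenant's limit."""
    if tenant_usage[tenant] + vcpus > tenant_limits[tenant]:
        return False                      # would impact other tenants; refuse
    tenant_usage[tenant] += vcpus
    return True

print(allocate_vcpus("tenant-b", 12))     # True: within tenant-b's 16 vCPU limit
print(allocate_vcpus("tenant-b", 8))      # False: would exceed the limit
print(allocate_vcpus("tenant-a", 8))      # True: tenant-a is unaffected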
Elasticity

Elasticity is one of the five essential characteristics of cloud computing. NIST publication SP 800-145 defines rapid elasticity as follows: "Capabilities can be elastically provisioned and released, in some cases automatically, to scale rapidly outward and inward commensurate with demand. To the consumer, the capabilities available for provisioning often appear to be unlimited and can be appropriated in any quantity at any time."

Cloud providers offer configuration of virtual CPU, virtual RAM, network bandwidth down and up, storage, and many other resources for a virtual machine or virtual service. When that service needs more resources, elasticity allows the consumer to add or remove the resources without any human interaction from the hosting provider. The customer of course has to pay for these resources, but in traditional circumstances of real hardware, your only option would be to purchase new hardware or add upgrades to existing hardware. Elastic upgrades can normally be performed through a GUI, a command line, or an API for automation and orchestration. They can sometimes even be added without a reboot or disruption of service.

Scalability

Scalability follows along the same lines as elasticity. Both elasticity and scalability technically increase resources for an application or service. Elasticity is considered to be tactical in its approach, and scalability is considered to be strategic in its approach. As an example, if you see a rise in CPU utilization for a virtual machine, you can simply increase the virtual CPU count and decrease the virtual CPU count when the load has subsided. What if your company has a website in a US data center where 80% of your business is and your company wanted to expand to an overseas location? Strategically, you should place the web server across both data centers. This is where a scale-out cluster comes in handy. With the use of a load balancer, you could scale your website over several data centers all over the world.

The previous example is a classic geographic scale-out scenario. There are many other examples like this, but the concept remains pretty much the same: they are all strategic in nature. Keep in mind that the application needs to support scalability, and some applications are written so as to be optimized for scalability. Your organization needs to take a strategic approach from the design of the application to the deployment of the application to make it scalable.

Network Security Groups

One of the ways in which access to a VPC can be secured is through the use of network security groups. For example, a Microsoft Azure network security group can be used to filter network traffic to and from Azure resources in an Azure virtual network. You can use a rule to specify allowed traffic by destination, port number, or protocol number and then apply that rule to a network security group. This group can specify membership by subnet or by network interface on a virtual machine.

Network Security Lists

A network security list is a set of ingress and egress security rules that apply to all virtual network interfaces in any subnet with which the security list is associated. It acts as a virtual firewall by confining all incoming and outgoing traffic to that allowed by the security list.

Security Implications/Considerations

While an entire book could be written on the security implications of the cloud, there are some concerns that stand above the others. Among them are these:

- While clouds increasingly contain valuable data, they are just as susceptible to attacks as on-premises environments.
  Cases such as the Salesforce.com incident, in which a technician fell for a phishing attack that compromised customer passwords, remind us of this.
- Customers are failing to ensure that the provider keeps their data safe in multitenant environments. They are failing to ensure that passwords are assigned, protected, and changed with the same attention to detail they might desire.
- No specific standard has been developed to guide providers with respect to data privacy.
- Data security varies greatly from country to country, and customers have no idea where their data is located at any point in time.

Relationship Between Local and Cloud Resources

When comparing the advantages of local and cloud environments and the resources that reside in each, several things stand out:

- A cloud environment requires very little infrastructure investment on the part of the customer, while a local environment requires an investment in both the equipment and the personnel to set it up and manage it.
- A cloud environment can be extremely scalable at a moment's notice, while scaling a local environment either up or out requires an investment in both equipment and personnel.
- Investments in cloud environments involve monthly fees rather than capital expenditures as would be required in a local environment.
- While a local environment provides total control for the organization, a cloud takes some of that control away.
- While you always know where your data is in a local environment, that may not be the case in a cloud, and the location may change rapidly.

EXERCISE 17.1
Exploring Cloud Services

In this exercise, you explore the cloud service offering for Microsoft Azure.

1. Set up a free Azure account on the Azure portal via https://portal.azure.com.
2. After you have signed into the portal, click the menu in the upper-left corner of the web page and select All Services.
3. Explore the Compute, Networking, and Storage categories.

This exercise simply gives you a glimpse of the enormous cloud offering for Microsoft Azure. Other providers have equally large portfolios of services. For extra credit, research some of the services to get a better understanding of what they represent. As an example, virtual networks, firewalls, and NAT gateways are all network virtual functions. How many others are there in that category?

Infrastructure as Code

With the new hyperscale cloud data centers, it is no longer practical to configure each device in the network individually. Also, configuration changes happen so frequently that it would be impossible for a team of engineers to keep up with the manual configuration tasks. Infrastructure as code (IaC) is the managing and provisioning of infrastructure through code instead of through manual processes. The concept of infrastructure as code allows all configurations for the cloud devices and networks to be abstracted into machine-readable definition files instead of physical hardware configurations. IaC manages the provisioning through code, so manually making configuration changes is no longer required.

These configuration files contain the infrastructure requirements and specifications. They can be stored for repeatable use, distributed to other groups, and versioned as you make changes. Faster deployment speeds, fewer errors, and consistency are advantages of infrastructure as code over the older, manual process. Deploying your infrastructure as code allows you to divide your infrastructure into modular components that can be combined in different ways using automation.
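As a minimal illustration of what "infrastructure as a machine-readable definition file" can look like, the following Python sketch writes a small, entirely hypothetical environment description to JSON. Real IaC tools use their own, much richer formats, as described next; this is only meant to show the idea of a definition that can be stored, shared, and versioned.

import json

# A hypothetical, simplified description of a small environment.
# The resource names, sizes, and schema are invented for illustration.
desired_infrastructure = {
    "network": {"name": "app-vnet", "cidr": "10.10.0.0/16"},
    "subnets": [
        {"name": "web", "cidr": "10.10.1.0/24"},
        {"name": "db", "cidr": "10.10.2.0/24"},
    ],
    "virtual_machines": [
        {"name": "web01", "subnet": "web", "size": "2vcpu-4gb"},
        {"name": "db01", "subnet": "db", "size": "4vcpu-16gb"},
    ],
}

# Writing the definition to a file makes it repeatable, shareable,
# and easy to keep under version control.
with open("infrastructure.json", "w") as f:
    json.dump(desired_infrastructure, f, indent=2)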
Code formats include JSON and YAML, and they are used by tools such as Ansible, Salt, Chef, Puppet, Terraform, and AWS CloudFormation.

Automation/Orchestration

Automation and orchestration define the configuration, management, and coordination of cloud operations. Automation involves individual tasks that do not require human intervention and that are used to create workflows referred to as orchestration. This allows you to easily manage very complex and large tasks using code instead of a manual process. Automation is a single task that orchestration uses to create the workflow.

By using orchestration in the cloud, you can create a complete virtual data center that includes all compute, storage, database, networking, security, management, and any other required services. Very complex tasks can be defined in code and used to create your environment. Common automation tools used today include Puppet, Docker, Jenkins, Terraform, Ansible, Kubernetes, CloudBees, CloudFormation, Chef, and Vagrant.

Playbooks/Templates/Reusable Tasks

Automation requires creating all the details of a task that are scheduled to occur automatically. These details can be contained in playbooks or templates. A playbook is somewhat like a batch file in that it lists all tasks or commands to be executed, and then the tasks or commands are run when the playbook is scheduled to run. Automation tools such as Ansible use these playbooks to guide their actions. Templates are preconfigured playbooks that vendors of automation tools provide for common tasks. These reusable tasks can be deployed whenever desired without the need to create a new playbook.

There are two approaches to orchestrating the tasks of automation: imperative and declarative. Both come with advantages and disadvantages.

Imperative

An imperative approach is probably the route most of us would take. It's a classic approach of listing all of the steps to get to the desired state. For example, an imperative approach might look like the following pseudocode. You might think that the pitfall is configuring specifics like hostname or IP, but those can be turned into variables. The pitfall is actually the fact that the install of PHP is dependent on the Apache web server getting installed properly. If this were more complex and step 234 errored, we would need to debug the other 233 lines to see what changed.

Install Ubuntu 20.04
Configure hostname:ServerA
Configure IP:192.168.1.10
Install Apache 2.4.24
Set home directory
Install PHP 7.4.13
Enable PHP module
Install MySQL 5.7.23
Set MySQL password:Password20!
Configure firewall:open TCP 80

Declarative

A declarative approach is a desired state that you want to reach in the end. Basically, we know that when we are done we want to have an Ubuntu 20.04 server with Apache, PHP, and MySQL installed. How we get there is really irrelevant to the script or the developer. The same defaults need to be set, such as the hostname, IP address, MySQL password, and so on. The following pseudocode represents a declarative approach. The orchestration software would contain the specifics on how to install the appropriate pieces.

Ubuntu::install { '20.04':
  hostname => 'ServerA',
  ip       => '192.168.1.10',
}
apache::package install { 'dev.wiley.com':
  port    => '80',
  docroot => '/var/www/',
  module  => 'php',
  open_fw => 'true',
}
mysql::package install { 'db_app':
  password => 'Password20!',
}
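To make the declarative idea concrete in ordinary code, here is a minimal Python sketch of how a tool can reconcile declared state with observed state. The package names and the install helper are illustrative stand-ins, not the API of any real tool.

# A toy "desired state" reconciler: compare what is declared with what
# is actually present, and act only on the difference. Declarative tools
# such as Puppet or Ansible work on this principle at much larger scale.
desired_packages = {"apache2", "php", "mysql-server"}   # declared state
installed_packages = {"apache2"}                        # observed state (pretend)

def install(package: str) -> None:
    # Stand-in for the tool's real installation logic.
    print(f"installing {package}")
    installed_packages.add(package)

def reconcile() -> None:
    # Idempotent: running this twice changes nothing the second time,
    # which is exactly the property declarative tooling relies on.
    for package in sorted(desired_packages - installed_packages):
        install(package)

reconcile()   # installs php and mysql-server
reconcile()   # nothing left to do

Because the reconciler only describes the end state and computes the missing work itself, there is no long ordered script to debug when a single step fails, which is the pitfall the imperative example above runs into.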
Configuration Drift/Compliance

When configurations are handled manually, over time systems may fall out of compliance with policy because they have not been updated to reflect the new policy. This is called configuration drift. Many automation tools such as Puppet can automate the process of identifying and correcting policy compliance issues.

Upgrades

Another area where automation is used is in the application of upgrades to operating systems and applications. One of the earliest versions of this was Windows Server Update Services (WSUS), a service that downloaded updates and hot fixes over the Internet and then applied them automatically to systems over the LAN. Newer automation tools can do this as well.

Dynamic Inventories

Maintaining a proper inventory of organizational assets can be difficult, especially for large organizations. Some tools, such as Ansible, can perform what are called dynamic inventories, which use plugins to extract information from the source of inventory information. Dynamic inventory updates can be scheduled or run at the beginning of a job to get the most up-to-date information.

Source Control

When using infrastructure as code, source control of the code is important. Let's survey some of the issues and techniques involved in source control.

Version Control
A version control system is designed to track changes in source code and other text files during the development of a piece of software. This allows the user to retrieve any of the previous versions of the original source code and the changes that are stored. It is important that the latest versions are in use in all instances.

Central Repository
It is important for all code to be housed in a central repository. This model is utilized to create a single source of truth, providing significant benefits to visibility, collaboration, and consistency within data management.

Conflict Identification
Code conflicts occur when two or more developers make incompatible changes to the same file or codebase, resulting in errors, bugs, or merge failures. Conflict resolution tools can identify and resolve code conflicts when they occur. They can identify differences between conflicting files or codebases and allow you to choose or edit the correct version. Some of the most common tools with conflict resolution features are Git, GitHub, and Bitbucket.

Branching
Code branching enables development teams to work on different parts of a project without impacting each other. The codebase is often referred to as the trunk, baseline, master, or mainline. Developers create branches, originating either directly or indirectly from the mainline, to experiment in isolation. This keeps the overall product stable.

Software-Defined Networking

As modern networks have grown in complexity and size, it has become increasingly difficult to configure, manage, and control them. There has traditionally been no centralized control plane, which means that to make even the simplest of changes, many switches had to be individually accessed and configured. With the introduction of software-defined networking, a centralized controller is implemented, and all of the networking devices are managed as a complete set and not individually. This greatly reduces the number of configuration tasks required to make changes to the network and allows the network to be monitored as a single entity instead of many different independent switches and routers.

Benefits of Software-Defined Networking

Software-defined networking has a number of benefits over traditional physical networking.
Let's look at some of these benefits.

Application-Aware
Any system, including a software-defined network (SDN), that has built-in information or "awareness" about individual applications is said to be application-aware. Application awareness enables the system to better interact with these applications. Application-aware networks can take network queries from individual applications and, in some cases, can facilitate easier transaction channels. Application awareness can also improve the efficiency of network administration and maintenance.

Zero-Touch Provisioning
Zero-touch provisioning provides the ability to configure and remotely deploy multiple network devices without the need to touch each individually. It not only saves time but eliminates the human errors that can occur when done manually. Many SDN controllers, such as the Omada Cloud-Based Controller by TP-Link, use zero-touch provisioning for more efficient deployments.

Transport Agnostic
SDN systems typically can work with different types of transport mechanisms without being dependent on any one of them. Systems such as these that are not confined to a particular transport medium are said to be transport agnostic. When discussing a transport-agnostic SDN, we are usually referring to a transport-agnostic overlay network. This is a network architecture that abstracts the underlying physical network infrastructure and creates a virtual network overlay on top of it. This approach can replace a plethora of legacy and proprietary branch network and security equipment to simplify operations, lower costs, and provide greater control of the orchestration, monitoring, and visibility of the infrastructure.

Central Policy Management
Most SDN systems provide central policy management, which means IT, cloud, and security teams can gain clear, holistic visibility across all network environments (on-premises as well as private and public clouds), enabling unified management of all on-premises firewalls and cloud-security controls. Security policies can be applied consistently from a single pane of glass using a uniform set of commands and syntax without requiring disparate management tools for different deployments.

Components of Software-Defined Networking

The functions of software-defined networking operate on several layers. Figure 17.4 details a schematic of a common SDN controller. Let's look at these layers and what operations occur on the various layers.

Application Layer
The application layer contains the standard network applications or functions, such as intrusion detection/prevention appliances, load balancers, proxy servers, and firewalls, that either explicitly or programmatically communicate their desired network behavior or network requirements to the SDN controller.

Control Layer
The control layer, or management plane, translates the instructions or requirements received from the application layer devices, processes the requests, and configures the SDN-controlled devices in the infrastructure layer. The control layer also pushes to the application layer devices information received from the networking devices. The SDN controller sits in the control layer and processes configuration, monitoring, and any other application-specific information between the application layer and the infrastructure layer.

FIGURE 17.4 SDN controller schematic

The northbound interface is the connection between the controller and applications, while the southbound interface is the connection between the controller and the infrastructure layer.
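As a rough illustration of the northbound side, the following Python sketch sends a REST request to a hypothetical SDN controller. The controller address, endpoint path, and JSON fields are invented for the example and are not the API of any particular controller; northbound APIs are covered in more detail later in this chapter.

import requests

# Hypothetical controller address and endpoint; real controllers and
# their authentication schemes differ, so treat this purely as a sketch
# of a northbound REST interaction.
CONTROLLER = "https://sdn-controller.example.com"

# An application asking the controller (northbound) to permit web traffic
# to a group of servers; the controller would then program the switches
# through its southbound interface.
policy = {
    "name": "allow-web",
    "match": {"protocol": "tcp", "dst_port": 443},
    "action": "permit",
    "applies_to": "web-servers",
}

response = requests.post(
    f"{CONTROLLER}/api/policies",
    json=policy,
    timeout=10,
    verify=True,  # validate the controller's TLS certificate
)
response.raise_for_status()
print("Policy accepted:", response.json())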
Infrastructure Layer
The infrastructure layer, or forwarding plane, consists of the actual networking hardware devices that control the forwarding and processing for the network. This is where the spine/leaf switches sit and are connected to the SDN controller for configuration and operation commands. The spine and leaf switches handle packet forwarding based on the rules provided by the SDN controller. The infrastructure layer is also responsible for collecting network health and statistics, such as traffic, topology, usage, logging, errors, and analytics, and sending this information to the control layer.

SDN Planes

SDN architectures are often broken into three main functions: the management plane, the control plane, and the data plane, also known as the forwarding plane.

Management Plane
The management plane is the configuration interface to the SDN controllers and is used to configure and manage the network. The protocols commonly used are HTTP/HTTPS for web browser access, Secure Shell (SSH) for command-line programs, and application programming interfaces (APIs) for machine-to-machine communications. The management plane is responsible for monitoring, configuring, and maintaining the data center switch fabric. It is used to configure the forwarding plane. The management plane is considered to be a subset of the control plane.

Control Plane
The control plane includes the routing and switching functions and protocols used to select the path used to send the packets or frames, as well as the basic configuration of the network.

Data Plane
The data plane refers to all the functions and processes that forward packets/frames from one interface to another; it moves the bits across the fabric.

Application Programming Interfaces

Software-defined networking removes the control plane intelligence from the network devices by having a central controller manage the network instead of having a full operating system (Cisco IOS, for example) on the devices. In turn, the controller manages the network by separating the control and data (forwarding) planes, which automates configuration and the remediation of all devices. So instead of the network devices each having individual control planes, we now have a centralized control plane, which consolidates all network operations in the SDN controller. APIs allow applications to control and configure the network without human intervention. The APIs are another type of configuration interface, just like the CLI, SNMP, or GUI interfaces, which facilitate machine-to-machine operations.

Southbound APIs

Logical southbound interface (SBI) APIs (or device-to-control-plane interfaces) are used for communication between the controllers and network devices. They allow the two devices to communicate so that the controller can program the data plane forwarding tables of your routers and switches. Since all the network drawings had the network gear below the controller, the APIs that talked to the devices became known as southbound, meaning "out the southbound interface of the controller." And don't forget that with software-defined networking, the term interface is no longer referring to a physical interface!

Unlike northbound APIs, southbound APIs have many standards. Let's talk about them now:

OpenFlow Describes an industry-standard API, which the ONF (opennetworking.org) defines. It configures white label switches, meaning that they are nonproprietary, and as a result defines the flow path through the network. All the configuration is done through NETCONF.
NETCONF Although not all devices support NETCONF yet, what this provides is a network management protocol standardized by the IETF. Using RPC, you can install, manipulate, and delete the configuration of network devices using XML. NETCONF is a protocol that allows you to modify the configuration of a networking device, but if you want to modify the device's forwarding table, then the OpenFlow protocol is the way to go.

onePK A Cisco proprietary SBI that allows you to inspect or modify the network element configuration without hardware upgrades. This makes life easier for developers by providing software development kits for Java, C, and Python.

OpFlex The name of the southbound API in the Cisco ACI world is OpFlex, an open-standard, distributed control system. Understand how OpFlex differs from OpenFlow: with OpenFlow, the controller sends detailed and complex instructions to the control plane of the network elements in order to implement a new application policy, something called an imperative SDN model. OpFlex, on the other hand, uses a declarative SDN model because the controller, which Cisco calls the APIC, sends a more abstract "summary policy" to the network elements. The summary policy makes the assumption that the network elements will implement the required changes using their own control planes, since the devices will use a partially centralized control plane.

Northbound APIs

To communicate between the SDN controller and the applications running over the network, you'll use northbound interfaces (NBIs). By setting up a framework that allows the application to demand the network setup with the configuration that it needs, the NBIs allow your applications to manage and control the network. This is priceless for saving time because you no longer need to adjust and tweak your network to get a service or application running correctly.

The NBI applications include a wide variety of automated network services, from network virtualization and dynamic virtual network provisioning to more granular firewall monitoring, user identity management, and access policy control. This allows for cloud orchestration applications that tie together server provisioning, storage, and networking, enabling a complete rollout of new cloud services in minutes instead of weeks!

Sadly, as of this writing, there is no single northbound interface that you can use for communication between the controller and all applications. So instead, you use various and sundry northbound APIs, with each one working only with a specific set of applications. Most of the time, applications used by NBIs will be on the same system as the APIC controller, so the APIs don't need to send messages over the network since both programs run on the same system. However, if they don't reside on the same system, Representational State Transfer (REST) comes into play; it uses HTTP messages to transfer data over the API for applications that sit on different hosts.

Virtual Extensible Local Area Network

Virtual eXtensible Local Area Network (VXLAN) is a tunneling protocol that tunnels Ethernet (layer 2) traffic over an IP (layer 3) network. It can be used to address the scalability issues found in large cloud environments. Using VXLAN enables us to move data in the data center for a VLAN over the fastest path using layer 3. VXLAN accomplishes this by encapsulating the frame within an IP packet, which allows the frame to traverse data centers and retain VLAN information.
It also allows for the integration of virtual infrastructure without compromise of the virtual network traffic.

Layer 2 Encapsulation Limitations Addressed by VXLAN

- VLANs provide a limited number of layer 2 networks (the VLAN ID is typically a 12-bit value). VXLAN increases scalability up to 16 million logical networks (with a 24-bit VNID) and allows for layer 2 adjacency across IP networks.
- Physical pods in the data center may not have layer 2 connectivity. VXLAN extends layer 2 segments over the underlying network infrastructure so that tenant workloads can be placed across the data center in these pods.
- The Spanning Tree Protocol (STP) causes issues with layer 2 communication by blocking some potential delivery paths to prevent switching loops. VXLAN improves network utilization because VXLAN packets are transferred through the underlying network based on their layer 3 headers and can take complete advantage of layer 3 routing and link aggregation protocols to use all available paths.

Data Center Interconnect

While VXLAN provides the ability to seamlessly get layer 2 traffic across a layer 3 network, Data Center Interconnect (DCI) provides the ability to get data seamlessly from one data center to another. Typically, DCI is achieved by connecting data centers through a VPN, leased lines, or the Internet. Overlay networks such as VXLAN can be built on top of an existing physical network, enabling the creation of scalable and flexible inter-data-center connections. Overlay networks can simplify the process of connecting and managing different data centers.

Zero Trust Architecture

Zero trust architecture (ZTA), also called perimeter-less network security, is a concept that, when applied to connectivity options, means no user or device is trusted even if they have been previously authenticated. Every request to access data needs to be authenticated dynamically to ensure least privileged access to resources. Let's look at some of the concepts and techniques used to support ZTA.

Policy-Based Authentication
Policy-based authentication is an authentication system that uses rule sets called policies to manage authentication processes. Some implementations call this attribute-based access control (ABAC). Attributes are requirements placed on characteristics of the request that must be met for successful authentication. The following are some examples of attributes:

- Time of day when the request was made
- Device from which the request was sourced

By combining multiple attributes, a policy can be created that controls all requests according to the configured attributes.

Authorization
While authentication identifies the user or device, authorization determines what they can do (for example, read a document, manage a printer, etc.). Policy-based authorization can also be configured. For example, between 9 and 5, Joe may be able to edit a document, but between 5 and 9, he may only be able to read it.

Least Privilege Access
Whenever an administrator grants a user the right to do something normally done by the administrator, such as manage a printer or change permissions, it is referred to as privileged access. The granting of all rights and permissions, especially privileged access, should be guided by a principle called least privilege access, which prescribes that only the minimum rights or permissions needed to do the job should be granted. This helps support a ZTA.

Secure Access Secure Edge/Security Service Edge

Techniques that support ZTA include Secure Access Secure Edge (SASE) and Security Service Edge (SSE).
Let's look at these two connectivity options for cloud-based design.

SASE
Secure Access Secure Edge is a security framework that adheres to ZTA and supports software-defined networking. It departs from the centralized corporate data center secured by an on-premises network perimeter design and creates a converged cloud-delivered platform that securely connects users, systems, endpoints, and remote networks to apps and resources. The following are some of its traits:

- Access is granted based on the identity of users and devices.
- Both infrastructure and security solutions are cloud-delivered.
- Every physical, digital, and logical edge is protected.
- Users are secured no matter where they work.

SSE
Security Service Edge uses integrated, cloud-centric security capabilities to facilitate safe access to websites, SaaS applications, and private applications. You might think of it as a subset of SASE. SSE provides the security service elements of a comprehensive SASE. Some examples of its fundamental security capabilities include the following:

- Zero trust architecture (discussed earlier in this chapter).
- Secure web gateway (SWG) protects users from web-based threats by connecting to a website on behalf of a user, while using filtering, malicious content inspection, and other security measures.
- Cloud Access Security Broker (CASB) enforces an organization's security, governance, and compliance policies while allowing authorized users to access and consume cloud resources.
- Firewall as a service (FWaaS) provides consistent application and security enforcement of policies across all locations and users.

Summary

In this chapter, we went into great detail on cloud computing because it continues to evolve and take on more and more IT workloads. You learned about the most common service models, including infrastructure as a service, platform as a service, and software as a service. You learned about the various types of clouds, such as private, public, and hybrid clouds. Next, you learned about infrastructure as code (IaC). You learned how you can automate