AWS Certified Solutions Architect Associate (SAA-C02) Study Guide PDF

Summary

This book is a study guide for the AWS Certified Solutions Architect Associate (SAA-C02) exam. It covers topics such as cloud computing, AWS services, and security, and includes a full table of contents with page numbers.

Full Transcript

AWS Certified Solutions Architect Associate (SAA-C02) – Exam Guide Mike Gibbs MS, MBA, CCIE #7417, GPC-PCA, AWS-CSA Nathan Vontz www.gocloudarchitects.com AWS Certified Solutions Architect Associate (SAA-C02) – Exam Guide by Mike Gibbs MS, MBA, CCIE #7417, GCP- PCA, AWS-CSA & Nathan Vontz Port Saint Lucie, FL 34953 www.gocloudarchitects.com © 2021 Mike Gibbs & Nathan Vontz All rights reserved. No portion of this book may be reproduced in any form without permission from the publisher, except as permitted by U.S. copyright law. For permissions contact: [email protected] Disclaimer LIABILITY: Mike Gibbs, Nathan Vontz, GoCloudArchitects and its organizations and agents specifically DISCLAIM LIABILITY FOR INCIDENTAL OR CONSEQUENTIAL DAMAGES and assumes no responsibility or liability for any loss or damage suffered by any person as a result of the use or misuse of our these materials or any of the information or content on our website, blogs and social media posts. Mike Gibbs, Nathan Vontz, GoCloudArchitects and its organizations and agents assume or undertakes NO LIABILITY for any loss or damage suffered as a result of the use or misuse of any information or content or any reliance thereon. Table of Contents Chapter 1 Introduction to Cloud Computing 1 The History of the Enterprise Network and Data Center 1 Connections to the Cloud 5 How to Access and Manage Resources in the Cloud 9 Chapter 2 Storage Options on the AWS Cloud Platform 11 AWS Primary Storage Options 11 Amazon Simple Storage Service (S3) 12 Elastic Block Storage (EBS) 21 Elastic File System (EFS) 24 AWS Storage Gateway 25 Migrating Data to AWS 27 Storage for Collaboration 28 Amazon FSx for Windows 28 Chapter 3 Computing on the AWS Platform (EC2) 31 Amazon Machine Images (AMI) 32 Autoscaling 33 Instance Purchasing Options 33 Tenancy Options 35 Securing EC2 Access 35 IP Addresses for EC2 Instances 37 Accessing EC2 Instances 37 Chapter 4 Introduction to Databases 39 Relational Databases 46 DynamoDB 46 Data Warehousing on AWS 48 Database Storage Options 49 Backing Up the Database 50 Scaling the Database 51 Designing a High-Availability Database Architecture 55 Chapter 5 The AWS Virtual Private Cloud 58 The OSI Model 58 IP Addressing 59 Routing Tables and Routing 65 Internet Gateways 68 NAT Instances 69 NAT Gateway 70 Elastic IP Addresses (EIPs) 71 Endpoints 72 VPC Peering 74 AWS CloudHub 77 Network Access Control Lists (NACL) 79 Security Groups 80 Chapter 6 AWS Network Performance Optimizations 83 Placement Groups 83 Amazon Route 53 86 Load Balancers 89 Chapter 7 Security 94 AWS Shared Security Model 94 Principle of Least Privilege 96 Industry Compliance 97 Identity and Access Management 97 Identity Federations 102 Creating IAM Policies 107 Further Securing IAM with Multifactor Authentication 111 Multi-Account Strategies 112 Preventing Distributed Denial of Service Attacks 114 Amazon Web Application Firewall (WAF) 115 AWS Shield 116 AWS Service Catalog 117 AWS Systems Manager Parameter Store 118 Chapter 8 AWS Applications and Services 120 AWS Simple Queueing Service (SQS) 120 AWS Simple Notification Service (SNS) 124 AWS Simple Workflow Service (SWF) 126 AWS Kinesis 127 AWS Elastic Container Service (ECS) 132 AWS Elastic Kubernetes Service (EKS) 135 AWS Elastic Beanstalk 136 AWS CloudWatch 137 AWS Config 138 AWS CloudTrail 140 AWS CloudFront 141 AWS Lambda 144 AWS Lambda@Edge 146 AWS CloudFormation 147 AWS Certificate Manager (ACM) 148 Chapter 9 Cost Optimization 150 Financial Differences Between Traditional Data Centers and Cloud 
Computing 150 Optimizing Technology Costs on the AWS Cloud 150 AWS Budgets 153 AWS Trusted Advisor 154 Chapter 10 Building High Availability Architectures 156 What is Availability? 156 Building a High Availability Network 156 Chapter 11 Passing the AWS Exam 160 Chapter 12 Practice Exam 162 Chapter 1 Introduction to Cloud Computing The History of the Enterprise Network and Data Center Ever since computing resources provided a competitive advantage, organizations have been investing in their computing resources. Over time technology became not only a competitive advantage but a necessary resource to be competitive in today’s business environment. Since technology can bring extreme advances in productivity, sales, marketing, communication, and collaboration, organizations invested more and more into technology. Organizations, therefore, built large and complex data centers, and connected those data centers to an organization’s users with specialized networking, security, and computing hardware resources. The enterprise data center became huge networks, often requiring thousands of square feet of space, incredible amounts of power, cooling, hundreds, if not thousands, of servers, switches, routers, and many other technologies. Effectively, the enterprise and especially the global enterprise environments became massive networks—just like a cloud computing environment. The net result was a powerful private cloud environment. Global enterprise data centers and high-speed networks work well. However, these networks come with a high cost and high level of expertise to manage these environments. Network and data center technology is not simple, and it requires a significant staff of expensive employees to design, operate, maintain, and fix these environments. Some large enterprise technology environments take billions of dollars to create and operate. Global enterprise networks and data centers still have merit for high security, ultrahigh- performance environments requiring millisecond-level latency, and ultrahigh network and computing performance. An example environment that benefits from this traditional model are global financial environments, where shaving milliseconds off network, server, and application performance can equate to a significant competitive advantage. However, for many customers, the costs of procuring and operating the equipment are just too costly. Recent advances in network, virtualization, and processing power make the transition to the cloud computing feasible. Why Now Is the Optimal Time for Cloud Computing Let’s first look at virtualization, as it is the key enabling technology of cloud computing. Server performance has increased dramatically in recent years. It is no longer necessary to have a server or multiple servers for every application. Since today’s servers are so powerful, they can 1 be partitioned into multiple logical servers in a physical server, reducing the need for so many servers. This reduces space, power, and cooling requirements. Additionally, virtualization makes it simple to move, add, or change server environments. Previously, any time you wanted to make a server change, you had to buy a new server, which could take weeks or months, install the operating system and all the applications dependencies, and then find a time to upgrade the server when it wasn’t being used. This process was lengthy, as changes to any part of an IT environment can affect many other systems and users. 
With virtualization, to upgrade a server you have to copy only the server file to another machine, and it’s up and running. Server virtualization became so critical in improving hardware resource utilization for computing, that soon organizations explored moving the network to virtualized servers. Now routing, firewalling, and many other functions can be shifted to virtualized services with software-defined networking. For several years organizations have migrated their traditional datacenters to virtualized enterprise data centers, and it has worked well. However, network speed (bandwidth) has made such significant gains, while the cost of this high-performance networking has decreased substantially. Therefore, it is now possible to move the data center to a cloud computing environment and still achieve high performance with lower total costs. With the ability to purchase multiple 10-gigabit-per-second links to AWS, it’s now feasible to connect an organization to a cloud provider at almost the same speed as if the application is in the local data center, but with the benefits of a cloud computing environment. Hybrid Cloud A hybrid cloud combines a standard data center with outsourced cloud computing. For many organizations, the hybrid cloud is the perfect migration to the cloud. In a hybrid architecture, the organization can run its applications and systems in its local data center and offload part of the computing to the cloud. This provides an opportunity for the organization to leverage its investment in its current technology while moving to the cloud. Hybrid clouds provide an opportunity to learn and develop the optimal cloud or hybrid architecture. Applications for hybrid cloud include: Disaster recovery – Run the organization computing locally, with a backup data center in the cloud. On-demand capacity – Prepare for spikes in application traffic by routing extra traffic to the cloud. High performance – Some applications benefit from the reduced latency and higher network capacity available on-premises, and all other applications can be migrated to the cloud to reduce costs and increase flexibility. Specialized workloads – Move certain workflows to the cloud that require substantial development time, i.e., machine learning, rendering, transcoding. Backup – The cloud provides an excellent means to back up to a remote and secure location. 2 The diagram below shows an example of a hybrid cloud computing environment. Pure Cloud Computing Environment In a pure cloud computing environment, all computing resources are in the cloud. This means servers, storage, applications, databases, and load balancers are all in the cloud. The organization is connected to the cloud with a direct connection or a VPN connection. The speed and reliability of the connection to the cloud provider will be the key determinant of the performance of this environment. 3 The diagram below shows an example of a pure cloud computing environment on the AWS platform. A pure cloud computing environment has several advantages: Scalability – The cloud provides incredible scalability. Agility – Adding computing resources can occur in minutes versus weeks in a traditional environment. Pay-as-you-go pricing – Instead of purchasing equipment for maximum capacity, which may be idle 90 percent of the time, exactly what is needed is purchased when needed. This can provide tremendous savings. Professional management – Managing data centers is very complicated. 
Space, power, cooling, server management, database design, and many other components can easily overwhelm most IT organizations. With the cloud, most of these are managed for you by highly skilled individuals, which reduces the risk of configuration mistakes, security problems, and outages. Self-healing – Cloud computing can be set up with health checks that can remediate problems before they have a significant effect on users. 4 Enhanced security – Most cloud organizations provide a highly secure environment. Most enterprises would not have access to this level of security due to the costs of the technology and the individuals to manage it. Connections to the Cloud If an organization moves its computing environment to the cloud, then the connection to the cloud becomes critical. If the connection to the cloud fails, then the organization can no longer access cloud resources. The performance needs and an organization’s dependency on IT will determine the connection requirements to the cloud. For most organizations, getting a “direct” connection to the cloud will be the preferred method. A direct connection is analogous to a private line in the networking world because it is effectively a wire that connects the organization to the cloud. This means guaranteed performance, bandwidth, and latency. As long as the connection is available, performance is excellent. This is unlike a VPN connection over the internet, where congestion anywhere on the internet can negatively affect performance. Since network connections can fail, a direct connection is generally combined with a VPN backup over the internet. A VPN can send the data securely over the internet to AWS. A VPN provides data security via encryption and permits the transfer of routing information and the use of private address space. VPNs work by creating an IP security (IPsec) tunnel over the internet. 5 The diagram below shows an example of a direct connection to the AWS platform. VPN Connection to AWS The simplest and cheapest means to connect to AWS is a VPN. A VPN provides a means to “tunnel” traffic over the internet in a secure manner. Encryption is provided by IPsec, which provides a means to provide encryption (privacy), authentication (identifying of the user), data authenticity (meaning the data has not been changed), and non-repudiation (meaning, the user can’t say they didn’t send the message after the fact). However, the problem with VPN connections is that while the connection speed to the internet is guaranteed, there is no control of what happens on the internet. So, there can be substantial performance degradation based upon the availability, routing, and congestion on the internet. VPN-only connections are ideal for remote workers and small branches of a few workers, where if they lose connectivity, there will not be significant costs to the organization. 6 The diagram below shows an example of a VPN connecting to the AWS platform. 7 High-Availability Connections Connecting to the cloud with high availability is essential when an organization depends upon technology. The highest availability architectures will include at least two direct connections to the cloud. Ideally, each connection is with a separate service provider, a dedicated router, and each router connected to different power sources. This configuration provides redundancy to the network connection, power failures, and the routers connecting to the cloud. For organizations that need 99.999 percent availability, this type of configuration is essential. 
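As a point of reference, 99.999 percent availability ("five nines") allows only a few minutes of downtime per year:

525,600 minutes per year × (1 − 0.99999) ≈ 5.3 minutes of downtime per year

This is why redundant circuits, routers, and power sources are required at this level.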
For even higher availability, there can be a VPN connection as a backup to the other direct connections. High Availability at Lower Costs A lower cost means to achieve high availability is to have a dedicated connection and a VPN backup to the cloud. This will work well for most organizations, assuming they can tolerate reduced performance when using the backup environment. 8 Basic Architecture of the AWS Cloud The AWS cloud comprises multiple regions connected with the Amazon high-speed network backbone. Many regions exist, but to understand the topology, here is a simplified explanation. Think of a region as a substantial geographic space (a significant percentage of a continent). Now, each region has numerous data centers. Each data center within a region is classified as an availability zone. For high-availability purposes and to avoid single points of failure, applications and servers can be in multiple availability zones. Large global organizations can optimize performance and availability by connecting to multiple regions with multiple availability zones per region. How to Access and Manage Resources in the Cloud There are three ways to manage cloud computing resources on AWS. The methods to configure AWS cloud computing resources are the AWS Management Console, the Command Line Interface, and connecting via an API through the software development kit. AWS Management Console The AWS Management Console is a simple-to-use, browser-based method to manage configurations. There are numerous options, with guidance and help functions. Most users will use the management console to set things up and perform basic system administration, while performing other system administration functions with the command line interface. You can access the management console at this URL. https://aws.amazon.com/console/ AWS CLI The AWS Command Line Interface enables you to manage computing resources via Linux commands and JavaScript Object Notation (JSON) scripting. This is efficient but requires more knowledge, training, and organizations need to know exactly what is needed and how to configure properly. With the CLI, there is no guidance as in the AWS Management Console. You will access the CLI by using secure shell. If you’re using a Linux, Unix, or MacOS computer you can access the secure shell from a terminal window. If your using windows, you will need to install an SSH application, one popular application is putty. You can download Putty at no cost from this URL. https://www.putty.org AWS SDK The AWS Software Development Kit (SDK) provides a highly effective method to modify and provision AWS resources on demand. This can be automated and can provide the ultimate 9 scalability. This will require sophisticated knowledge to build the programming resources and is recommended for experts. 10 Chapter 2 Storage Options on the AWS Cloud Platform There are several storage options available to AWS cloud computing customers. They are: AWS Simple Storage Service (S3), Elastic Block Storage, Elastic File System, Storage Gateways, and WorkDocs. In traditional data centers, there are two kinds of storage—block storage and file storage. Block storage is used to store data files on storage area networks (SANs) or cloud-based storage environments. It is excellent for computing situations where there is a need for fast, efficient, and reliable data transportation. File storage is stored on local systems, servers, or network file systems. 
AWS Primary Storage Options In the AWS cloud environment, there are three types of storage: block storage, object storage, and file storage. Block Storage Block storage places data into blocks and then stores those blocks as separate pieces. Each block has a unique identifier. This type of storage places those blocks of data wherever it is most efficient. This enables incredible scalability and works well with numerous operating systems. Object Storage Object-based storage differs from block storage. Object storage breaks data files into pieces called objects. It then stores those objects in a single place that can be used by multiple systems that have network access to the storage. Since each object will have a unique ID, it’s easy for computing resources to access data on file-based storage. Additionally, each object has metadata or information about the data to make it easier to find when needed. File Storage File storage is traditional storage. It can be used for a systems operating system and network file systems. Examples of this are NTFS-based volumes for Windows systems and NFS volumes for Linux/UNIX systems. These volumes can be mounted and directly accessed by numerous computing resources simultaneously. 11 AWS Object Storage The AWS platform provides an efficient platform for block stage with Amazon Simple Storage Service, otherwise known as S3. Amazon Simple Storage Service (S3) Amazon S3 provides high-security, high-availability, durable, and scalable object-based storage. S3 has 99.999999999 percent durability and 99.99 percent availability. Durability refers to a file getting lost or deleted. Availability is the ability to access the system when you need access to your data. So, this means data stored on S3 is highly likely to be available when you need it. Since S3 is object-based storage, it provides a perfect opportunity to store files, backups, and even static website hosting. Computing systems cannot boot from object-based storage or block-based storage. Therefore, S3 cannot be used for the computing platforms operating system. Since block-based storage is effectively decoupled from the server’s operating system, it has near limitless storage capabilities. 1 S3 is typically used for these applications: Backup and archival for an organization’s data. Static website hosting. Distribution of content, media, or software. Disaster recovery planning. Big data analytics. Internet application hosting. Amazon S3 is organized into buckets. Each bucket is a container for files stored on S3. Buckets create a top-level name space, meaning they must be globally unique and can be accessed by their DNS name. An example of this address can be seen below: http://mybucketname.s3.amazonaws.com/file.html 12 The diagram below shows the organizational structure of S3. Since the buckets use DNS-type addressing, it is best to use names that follow standard DNS naming conventions. Bucket names can have up to sixty-three characters including letters, numbers, hyphens, and periods. It’s noteworthy that the path you use to access the files on S3 is not necessarily the location to where the file is stored. The URL used to access your file is really a pointer to the database where your files are stored. S3 functions a lot like a database behind the scenes, which enables you do incredible things with data stored on S3 like SQL queries. An organization can have up to 100 buckets per account without requesting a bucket limit increase from AWS. Buckets are placed in different geographic regions. 
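As a minimal sketch of working with buckets from the AWS CLI introduced in Chapter 1 (the bucket name and region here are placeholders; remember that bucket names must be globally unique):

# Create a bucket in a chosen region, upload a file, and list the bucket's contents
aws s3 mb s3://example-bucket-name --region us-east-1
aws s3 cp file.html s3://example-bucket-name/
aws s3 ls s3://example-bucket-name/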
When creating the bucket, to achieve the highest performance, place the bucket in a region that is closest. Additionally, this will decrease data transfer changes across the AWS network. For global enterprises, it may be necessary to place buckets in multiple geographic regions. AWS S3 provides a means for buckets to be replicated automatically between regions. This is called cross-region replication. S3 Is Object-Based Storage S3 is used with many services within the AWS cloud platform. Files stored in S3 are called objects. AWS allows most objects to be stored in S3. Every object stored in an S3 bucket has a 13 unique identifier, also known as a key. A key can be up to 1,024 bytes, which are comprised of Unicode characters and can include slashes, backslashes, dots, and dashes. Single files can be as small as zero bytes, all the way to 5 TB per file. This provides ultimate flexibility. Objects in S3 can have metadata (information about the data), which can make S3 extremely flexible and can assist with searching for data. The metadata is stored in a name- value pair environment, much like a database. Since metadata is placed in a name-value pair, S3 provides a SQL-based method to perform SQL-based queries of your data stored in S3. To promote scalability, AWS S3 uses an eventually consistent system. While this promotes scalability, it is possible that after you delete an object, it may be available for a short period. Securing Data in S3 The security of your data is paramount, and S3 storage is no exception. S3 is secured via the buckets policy and user policies. Both methods are written in JSON access-based policy language. Bucket policies are generally the preferred method to secure your data in S3. Bucket policies allow for granular control of the security of your digital assets stored in S3. Bucket policies are based on IAM users and roles. S3 bucket policies are similar to the way Microsoft authenticates users and grants access to certain resources based upon the user’s role, permissions, and groups in active directory. 2 S3 also allows for using ACLs to secure your data. However, the control afforded to your data via an ACL is much less sophisticated then IAM policies. ACL-based permissions are essentially read, write, or full control. The diagram below shows how ACL and bucket policies are used with AWS S3. 14 S3 Storage Tiers AWS S3 offers numerous storage classes to best suit an organization’s availability and financial requirements. The main storage classes can be seen below: Amazon S3 Standard Amazon S3 Infrequent Access (Standard-IA) Amazon S3 Infrequent Access (Standard-IA) – One Zone Amazon S3 Intelligent-Tiering Amazon S3 Glacier Amazon S3 Standard Amazon S3 Standard provides high-availability, high-durability, high-performance, and low- latency object storage. Given the performance of S3, it is well suited for storage of frequently accessed data. For most general-purpose storage requirements, S3 is an excellent choice. Amazon S3 Infrequent Access (Standard-IA) Amazon S3 Infrequent Access provides the same high-availability, high-durability, high- performance, and low-latency object storage as standard S3. The major difference is the cost, which is substantially lower. However, with S3 Infrequent Access, you pay a retrieval fee every time data is accessed. This makes S3 extremely cost-effective for long-term storage of data not frequently accessed. However, access fees might make it cost-prohibitive for frequently accessed data. 
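As a hedged example, an object can be written directly into the Standard-IA class from the AWS CLI (the bucket and file names are illustrative):

# Upload a backup straight into the Infrequent Access storage class
aws s3 cp backup.tar.gz s3://example-bucket-name/backups/backup.tar.gz --storage-class STANDARD_IA

Objects can also be moved between classes automatically using the lifecycle policies described later in this chapter.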
Amazon S3 Infrequent Access (Standard-IA) – One Zone Amazon S3 Infrequent Access provides the same service as Amazon S3-IA but with reduced durability. This is great for backups of storage due to its low cost. Amazon S3 Intelligent Tiering Amazon S3 Intelligent Tiering provides an excellent blend of S3 and S3-IA. Amazon keeps frequently accessed data on S3 standard and effectively moves your infrequently accessed data to S3-IA. So, you have the best and most cost-effective access to your data. Amazon S3 Glacier Amazon Glacier offers secure, reliable, and very low-cost storage for data that does not require instant access. This storage class is perfect for data archival or backups. Access to data must be requested; after three to five hours the data becomes available. Data can be accessed sooner by paying for expedited retrievals. Glacier also has a feature called vault lock. Vault lock can be used for archives that require compliance controls, i.e., medical records. With vault lock, data 15 cannot be modified but can be read when needed. Glacier, therefore, provides immutable data access, meaning it can’t be changed while in Glacier. Managing Data in S3 This next section describes how to manage and protect your data on S3. S3 Lifecycle Management The storage tiers provided by S3 provide an excellent means to have robust access to data with a variety of pricing models. S3 lifecycle management provides an effective means to automatically transition data to the best S3 tier for an organization’s storage needs. For example, let’s say you need access to your data every day for thirty days, then infrequently access your data for the next thirty days, and you may never access your data again but want to maintain for archival purposes. You can set up a lifecycle policy to automatically move your data to the optimal location. You can store your data in S3, then after thirty days, have your data automatically moved to S3- IA, and after thirty days, the data can be moved to Glacier for archival purposes. This can be seen in the diagram below. That’s the power of the lifecycle policies. Lifecycle policies can be configured to be attached to the budget or specific objects as specified by an S3 prefix. S3 Versioning To help secure S3 data from accidental deletion, S3 versioning is the AWS solution. Amazon S3 versioning protects data against accidental or malicious deletion by keeping multiple versions of each object in the bucket, identified by a unique version ID. Therefore, multiple copies of objects are stored in S3 when versioning is enabled. Every time there is a change to the object, S3 will store another version of that object. Versioning allows users to 16 preserve, retrieve, and restore every version of every object stored in their Amazon S3 bucket. If a user makes an accidental change or even maliciously deletes an object in your S3 bucket, you can restore the object to its original state simply by referencing the version ID besides the bucket and object key. Versioning is turned on at the bucket level. Once enabled, versioning cannot be removed from a bucket; it can be suspended only. The diagram below shows how S3 versioning maintains a copy of all previous versions by using a Key ID. Multifactor Authentication Delete To provide additional protection from data deletion, S3 supports multifactor authentication to delete an object from S3. When an attempt to delete an object from S3 is made, S3 will request an authentication code. 
The authentication code will be a one-time password that changes every few seconds. This one-time authentication code can be provided from a hardware key generator or a software-based authentication solution, i.e., Google Authenticator. Organizing Data in S3 As previously discussed, S3 storage is very similar to a database. Essentially the data is stored as a flat arrangement in a bucket. While this scales well, it is not necessarily the most organized structure for end users. S3 allows the user to specify a prefix and delimiter parameter so the user can organize their data in what feels like a folder. Essentially the user could use a slash / or backslash \ as a delimiter. This will make S3 storage look and feel like a traditional Windows or Linux file system organized by folders.3 For example: mike/2020/awsvideos/storage/S3.mp4 mike/2020/awsvideos/compute/ec2.mp4 mike/2020/awsvideos/database/dynamo.mp4 17 Encrypting Your Data S3 supports a variety of encryption methods to enhance the security of your data. Generally, all data containing sensitive or organization proprietary data should be encrypted. Ideally, you encrypt your data on the way to S3, as well as when the data is stored (or resting) on S3. A simple way to encrypt data on the way to S3 is to use https, which uses SSL to encrypt your data on its way to the S3 bucket. Encrypting data on S3 can be performed using client-side encryption or server-side encryption. 4,5 Client-Side Encryption Client-side encryption means encrypting the data files prior to sending to AWS. This means the files are already encrypted when transferred to S3 and will stay encrypted when stored on S3. To encrypt files using client-side encryption, there are two options available to use. Files can be encrypted with a client-side master key or a client master key using the AWS key management system (KMS). When using client-side encryption, you maintain total control of the encryption process, including the encryption keys. Server-Side Encryption Alternatively, S3 supports server-side encryption. Server-side encryption is performed using S3 and KMS. Amazon S3 automatically will encrypt when storing and decrypt when accessing your data on S3. There are several methods to perform server-side encryption and key management. These options are discussed below. SSE-KMS (Customer-Managed Keys with AWS KMS) The SSE-KMS is a complete key management solution. The user manages their own master key, but the key management system manages the data key. This solution provides extra security by having separate permissions for using a customer-managed key, which provides added protection against unauthorized access of your data in Amazon S3. Additionally, SSE-KMS helps with auditing by providing a trail of how, who and when your data was accessed. SSE-S3 (AWS-Managed Keys) SSE-S3 is a fully integrated encryption solution for data in your S3 bucket. AWS performs all key management and secure storage of your encryption keys. Using SSE-S3, every object is encrypted with a unique encryption key. All object keys are then encrypted by a separate master key. AWS automatically generates new encryption keys and automatically rotates them monthly. 18 SSE-C (Customer-Provided Keys) SSE-C is used in environments when you want total autonomy over key management. When using SSE-C, AWS S3 will perform all encryption and decryption of your data, but you have total control of your encryption keys. Tuning S3 for Your Needs There are a few tuning options to make S3 perform even better. 
They are covered below.

Presigned URLs

All objects stored in S3 are private by default, meaning only the owner has access to the objects in their bucket. Therefore, to share a file with another user or organization, you need to provide access to the object. You can generate a presigned URL to share an S3-based object with others. When you generate the presigned URL, the owner's security credentials are used to sign the URL, which allows the receiver temporary access to the object. You can use the following authentication options to generate a presigned URL. Each method has a different maximum expiration for the authorization provided by the presigned URL. These options can be seen in the table below:

Presigned URL Expiration
Method                        Expiration Time
IAM instance profile          Up to 6 hours
AWS Security Token Service    Up to 36 hours
IAM user                      Up to 7 days
Temporary token               When the token expires

Multipart Uploads

AWS S3 allows objects of up to 5 TB to be stored, but a single upload operation is limited to 5 GB. However, when uploading large files, many things can go wrong and interrupt the file transmission. It is a best practice to send files larger than 100 MB as a multipart upload. In a multipart upload, the file is broken into pieces, and each piece is sent to S3. When all the pieces are received, S3 puts the pieces back together into a single file. This helps dramatically if anything goes wrong with the transmission; only the affected part of the file needs to be re-sent instead of the whole file. This improves both the speed and reliability of transferring large files to S3.

Range Gets

S3 supports large files. Sometimes you need access to some of the data in a file, but not the entire object. A range get allows you to request a portion of the file. This is highly useful when dealing with large files or when on a slow connection, i.e., a mobile network.

Cross-Region Replication

S3 cross-region replication automatically copies the objects in an S3 bucket to a bucket in another region, so the source and destination buckets stay synchronized. After cross-region replication is turned on, all new files will be copied to the region for which cross-region replication has been enabled. Objects that were in the bucket before cross-region replication was turned on will need to be copied to the new bucket manually.

Cross-region replication is especially useful when hosting a global website's backend on S3. If website users experience latency due to their distance from the website, the website can be served from an S3 bucket in a closer region. An example would be a US company with its website hosted in New York that suddenly gains a large customer base in Japan. The S3 bucket can be manually copied to a bucket in Asia. With cross-region replication enabled, any future changes to the website's main bucket in New York will be automatically copied to the S3 bucket in Asia. This can dramatically reduce latency and provide a disaster recovery backup of the main website.

The diagram below shows cross-region replication copying files from one region to another region.

Storage for Computing Resources

The next class of storage is storage directly available to computing platforms. This includes instance storage, Elastic Block Storage, and the Elastic File System.

Instance Storage

Instance storage is essentially transient storage for an elastic computing instance. It is temporary in that when the instance is stopped or terminated, the volume is automatically deleted. This will be covered in much more depth when we discuss the EC2 platform.
The diagram below shows how EC2 instance storage is used within the AWS platform. Note the instance storage is deleted upon EC2 termination. Elastic Block Storage (EBS) Elastic block storage is a high-performance block storage platform for using EC2 instances and AWS databases. An EBS volume can be mounted and used by EC2 instances, and the data on an EBS volume is not deleted on an EC2 instance termination. EBS volumes are designed for high throughput and high-transaction workloads. This storage is ideal for databases, enterprise applications, containers and big data applications. EBS is scalable and can scale to multiple petabytes of data. EBS is highly available with the availability of 99.999 percent, so it’s perfect for mission-critical use. EBS functions like a virtual hard drive. EBS is automatically backed up to another availability zone to protect against any device failure. 6 21 The diagram below shows how EBS instances are attached to an EC2 instance. Backing Up EBS Volumes EBS volumes can be backed up to other regions in the form of EBS snapshots. An EBS snapshot is a point-in-time copy of your data. A snapshot can be shared with other users and can be copied to other regions. Snapshots can be versioned, and multiple versions can be maintained. Therefore, you can restore your data to any previous snapshots. EBS snapshots are incrementally backed up to optimize speed and storage space. Unlike traditional incremental backups, you can delete some backups and still make a complete restore of current data stored on EBS volumes.7 The diagram below shows how EBS volumes are backed up using snapshots. 22 EBS Volume Types There are several types of EBS volumes available. Choosing the right volume type can make a substantial difference in the performance of your system. This next section describes the EBS volumes available and how to choose the right EBS volume type to meet the systems requirements. The four types of EBS volumes are EBS Provisioned IOPS (io1), EBS General Purpose SSD (gp2), EBS Throughput Optimized HDD (st1), and EBS Cold HDD (sc1). EBS Provisioned IOPS (io1) EBS-provisioned IOPS is the highest performance SSD storage available on the EBS platform. This platform enables the user to purchase guaranteed speed of read and write performance, in terms of inputs and outputs per second (IOPS). EBS-provisioned IOPS is optimal for applications that require high disk input and output performance (IO). It’s designed for databases or other applications requiring low latency. The throughput speeds provided by EBS PIOPS volumes are up to 1,000 MB/second. EBS General Purpose SSD (gp2) EBS general purpose SSD (gp2) is an SSD-based storage solution. It provides a good balance of price and performance. GP2 is an excellent platform for a boot volume, as it has good performance and does not get erased upon EC2 instance shutdown. GP2 is great for transactional workloads. It is great for dev and test environments that require low latency, at a much lower cost than PIOPS based volumes used in production environments. EBS Throughput Optimized HDD (st1) EBS Throughput Optimized HDD (st1) is low-cost magnetic storage. St1 is designed for frequently accessed workloads. It supports a fairly significant throughput of 500 MB/second but with higher latency than with SSD-based volumes. St1 is designed for situations that require a low-cost option to store substantial data. It works well with applications that have sequential read and writes to the disk. 
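Whichever type is selected, the volume type is simply a parameter supplied when the volume is created; a minimal AWS CLI sketch, with illustrative sizes, IOPS, and Availability Zone:

# Provisioned IOPS SSD (io1) volume with a guaranteed IOPS rate
aws ec2 create-volume --volume-type io1 --iops 5000 --size 100 --availability-zone us-east-1a
# General purpose SSD (gp2) volume for boot or transactional workloads
aws ec2 create-volume --volume-type gp2 --size 100 --availability-zone us-east-1a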
EBS Cold HDD (sc1) EBS Cold HDD (sc1) is the lowest cost option for EBS volumes. Sc1 is for workloads that do not require frequent disk access but still need a reliable storage platform. Choosing the Correct EBS Volume Type Choosing the correct EBS volume type can make the difference between optimal performance and performance problems within the system. The primary determination of volume selection is determining whether the application requires high throughput, low latency, or both. 23 For applications requiring high throughput and low latency, choose EBS-provisioned IOPS. Applications that require moderate throughput, but low latency will perform well on EBS general purpose SSD volumes. For applications that require high throughput but are less sensitive to latency, EBS Throughput Optimized HDD are a great option, especially if the application performs sequential read and writes of data. Lastly, EBS Cold HDD is perfect for storing a large amount of infrequently accessed data that is not sensitive to higher latencies than SSD volumes. Elastic File System (EFS) The next storage for AWS is the Elastic File System (EFS). EFS is a high-performance, highly scalable file system for networked computers. It is essentially the AWS version of the network file system (NFS) originally invented by Sun Microsystems. Since EFS in a network file system, it can be accessed simultaneously by many computing instances. EFS is best used when a high- performance network file system is required. Think of corporate file shares or multiple servers needing access to the same files simultaneously. There are two versions of EFS available. The two versions of EFS are standard and infrequent access. Standard EFS is the normal version and the highest performance option. EFS-Infrequent access is a lower-cost option for files not accessed frequently. EFS has numerous benefits: Scalable – High throughput, high IOPS, high capacity, and low latency. Elastic – EFS will automatically adjust sizing to meet the required storage capacity. Pricing – Pay for what is used. POSIX compatible – This enables access from standard file servers both on premises and on the cloud. Works with traditional NFS-based file permissions and directories. 24 The diagram below shows how EFS is mounted and used by multiple EC2 instances on the AWS platform. AWS Storage Gateway Connecting on-premises environments to the cloud and achieving optimal performance often requires some tuning. This is especially true when connecting servers to object-based storage on S3. A great option to connect the on-premises data center to S3-based storage is with a storage gateway. This makes the storage look and feel like a network file system to the on- premises computing systems.9 A storage gateway is a virtual machine that runs in the data center. It is a prebuilt virtual machine from AWS available in VMware, Hyper V, or KVM. This virtual machine acts as a virtual file server. The on-premises computing resources read and write to storage gateway, which then synchronizes to S3. You can access the storage gateway with SMB- or NFS-based shares. SMB is optimal for Windows devices, while NFS is optimal for Linux machines. 25 Three types of storage gateways are available for different applications. They are volume gateways, both cached volumes and stored volumes, and tape gateways. The diagram below shows an example of a storage gateway on the AWS platform. Storage Gateway Cached Volume In a volume gateway cached volume, the organization’s data is stored on S3. 
The cache volume maintains frequently accessed data locally on the volume gateway. This provides low-latency file access for frequently accessed files to the on-premises systems. Storage Gateway Stored Volume In a storage gateway stored volume, the on-premises systems store your files to the storage gateway, just like any other file server. The storage gateway then asynchronously backs up your data to S3. The data is backed up to S3 via point in time snapshots. Since the data is on S3 via a snapshot, these snapshots can be used by EC2 instances should something happen to the on- premises data center. This provides an excellent and inexpensive offsite disaster recovery option. Tape Gateway The tape gateway provides a cloud backup solution. It essentially replaces backup tapes used by some enterprise environments for deep archival purposes. Since it is virtual, there is no need to maintain physical tapes and the infrastructure to support a tape-based backup solution. With the tape gateway, data is copied to Glacier or Deep Archive. 26 Migrating Data to AWS Moving an organization’s data center to the cloud involves setting up the cloud infrastructure as well as moving an organization’s data to AWS. Data can be copied to the cloud over a VPN or a direct connection. But, if there is a tremendous amount of data, or rapid timelines to move to the cloud, it might be necessary to move files to the cloud in a more efficient manner. Additionally, it might be cheaper to send data directly to S3, as opposed to adding additional high-performance, high-capacity direct connections to AWS. To that end, there are two methods to send data directly to AWS instead of using a network connection. These methods are the AWS Snowball and the AWS Import/Export service. AWS Snowball The AWS Snowball is a rugged computer with substantial storage that can be rented from AWS. AWS ships the Snowball to the customer. The customer places the Snowball on their network and copies the files they want to move to AWS. The files on the Snowball are encrypted. The organization then ships the Snowball to AWS. Once the Snowball is received by AWS, AWS employees move the data from the Snowball to the organization’s cloud computing environment. The diagram below shows an example of an AWS Snowball. AWS Import/Export Service The import-export service is essentially a means to ship data to AWS. Essentially, you copy your data to a hard drive or hard drives. The organization then ships the hard drives to AWS. Once the hard drives are received by AWS, they move the data from the hard drives to your cloud computing environment. 10 27 The diagram below shows how the AWS Import/Export service is used to quickly move large amounts of data to AWS. Storage for Collaboration In recent years there has been a significant move to cloud storage for collaboration. This is especially useful for working on creative projects, documents, or presentations. AWS has created Amazon WorkDocs for this purpose. Amazon WorkDocs Amazon WorkDocs is a fully managed, secure content creation, storage, and collaboration service. It is similar in functionality to Google Drive or Dropbox. WorkDocs enables collaboration across projects and facilitates things like shared document editing. It is a simple solution with pay-as-you-go pricing. WorkDocs can be accessed with a web interface or with client software. Client software is available for Windows and macOS clients. 
This is secure storage and meets all main forms of compliance standards, i.e., HIPAA, PCI, and ISO.11 Amazon FSx for Windows Some organizations heavily utilize the Windows platform for many critical services. Organizations that need native windows file servers can use Amazon FSx. Amazon FXs are hosted Microsoft Windows file servers. FSx is a Windows server, therefore it supports Microsoft features such as storage quotas and active directory integration. As a Windows file server, it uses the Server Message Block (SMB) protocol. Windows-based SMB file shares can be accessed by Windows, macOS, and Linux hosts. FSx is a high-availability service with high-availability single and multiple availability zone options. FSx provides data protection through encryption, both in transit and at rest. Additionally, to protect against data loss, FSx provides fully managed backups. 28 The diagram below shows an example of AWS FSx being used for Windows hosts. Labs 1) Create a budget to protect yourself from costly overages during your lab practice. Link on how to create a budget below. https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/budgets-create.html 2) Create an S3 bucket and upload several small files. Link on how to create an S3 bucket below. https://docs.aws.amazon.com/AmazonS3/latest/user-guide/create-bucket.html 3) Generate a presigned URL on one of the files you uploaded to the s3 bucket. Link on how to pre-sign a URL below. https://docs.aws.amazon.com/cli/latest/reference/s3/presign.html 4) Set up static website hosting. You will need a basic html page to upload to S3. You can easily save a word document as a webpage and upload to S3. Link on how to create static website hosting below. https://docs.aws.amazon.com/AmazonS3/latest/user-guide/static-website-hosting.html 5) Create an EC2 instance. Link on how to create EC2 instance below. https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EC2_GetStarted.html 6) Secure Shell (SSH) into EC2 instance. Link on how to SSH into EC2 instance below. https://docs.aws.amazon.com/quickstarts/latest/vmlaunch/step-2-connect-to- instance.html 7) Create an EBS Volume. Link on how to create an EBS volume below. https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-creating-volume.html 29 8) Mount EBS volume to EC2 Instance. Link on how to mount EBS volume is below. https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-attaching-volume.html 9) Create an EFS Volume. Link on how to create EFS volume below. https://docs.aws.amazon.com/efs/latest/ug/gs-step-two-create-efs-resources.html 10) Shut down all services and instances when you are finished this section to avoid being changed for running AWS instances. Summary AWS has a comprehensive storage platform for computing instances, backup, disaster recovery, and collaboration purposes. Now that you know the options available, you can choose the best option for your application’s requirements. 30 Chapter 3 Computing on the AWS Platform (EC2) This chapter describes the computing options available within the AWS Platform. AWS provides numerous computing options that can be sized based upon an organization’s needs. Sizing compute resources are similar to sizing virtual machines in a traditional data center. Servers should be configured based upon the CPU, memory, storage (size and performance), graphics processing unit (GPU), and network performance requirements. 12 AWS primary computing platform is called Elastic Compute Cloud (EC2). EC2 servers are virtual machines launched on the AWS platform. 
EC2 servers are called instances. EC2 instances are sized like any virtualized server environment, and AWS has numerous options to meet an organization's needs. Instance types should be chosen based upon the need for the following computing options:

CPU cores (virtual CPUs)
Memory (DRAM)
Storage (capacity and performance)
Network performance

Instances are available in a wide range of sizes and configurations to provide a means to perfectly size a system based upon an organization's needs. AWS has a multitude of possible server configurations to assist the organization with properly sizing their computing instances.13 A summary of the EC2 instance types is below:

EC2 Instance Types
Type   Specialty                        Use Case
A1     Arm-based workloads              Web servers
C5     Compute optimized                Batch processing, media transcoding
G3     GPU-based workloads              Machine learning
I3     High-speed storage               Data warehousing, high-performance databases
M5     General purpose                  Databases
M6     General purpose                  Application servers, gaming servers
R5     Memory optimized                 Caches, high-performance databases
T3     Burstable computing platform     Web apps, test environments
X1     Lowest pricing per GB of DRAM    Big data processing engines, in-memory databases

Every instance type can be ordered in various sizes. Check with AWS for sizing, as sizes are subject to change.

EC2 instances support Windows and Linux operating systems. An organization can build its own virtual machines or can start with a prebuilt virtual machine. A prebuilt virtual machine is called an Amazon Machine Image (AMI). It is important to note that AMIs will need a storage volume for the instance. The storage volume may be an instance volume or an EBS volume. It is essential to choose the correct storage volume type. Standard instance volumes will be deleted when the instance is stopped or terminated. EBS volumes are persistent, and data will be available after a reboot or instance termination.

Amazon Machine Images (AMI)

Since the AMI is the basis for the virtual machine, it's important to choose the correct AMI. AMIs can be obtained from the following sources:

Published by AWS – These images are prebuilt by Amazon for a variety of needs. They are available in a variety of operating systems.
AWS Marketplace – These are prebuilt machines by AWS partners. These machines are generally created for specific uses (i.e., licensed software from a third party).
AMIs from existing instances – This is generally a customer-created AMI. It's created from an existing server. This enables a full machine copy, with all applications and package dependencies installed.
AMIs from uploaded servers – An AMI can be created from a physical server to virtual machine conversion. Additionally, an AMI can be imported from a virtual machine conversion (i.e., a VMware or VirtualBox virtual machine).

A complete AMI has several components:

Operating system from one of the four AMI sources.
Launch permissions for the instance.
A block device mapping of storage volumes in the system.

Automating the First Boot for EC2 Instances

Starting with a full AMI is a great way to launch an EC2 instance. It is fast and efficient. However, the base AMI may not have all the necessary services installed and may need the latest software patches. There are two possible ways to take an AMI and configure it for an organization's use. The first option is to launch the EC2 instance with the premade AMI. After the instance boots, manually update the operating system and then install the necessary services.
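On an Amazon Linux instance, for example, that manual step is typically just a few shell commands over SSH (the web server below is only an illustration of a needed service):

sudo yum update -y                  # apply the latest operating system patches
sudo yum install -y httpd           # install a required service, e.g., the Apache web server
sudo systemctl enable --now httpd   # start the service and enable it at boot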
Alternatively, the instance can be automatically updated upon boot with a bootstrap script. Bootstrap scripts for Linux can be as simple as a basic shell script. Windows systems can be bootstrapped with a power shell script. 32 Autoscaling One of the key drivers to the move to the cloud is autoscaling. In the traditional environment, servers are configured for peak demand plus a growth factor to meet increased demand over time. With traditional servers, capacity must always exceed demand, because if demand were to increase beyond the server’s capacity, either the site would become unavailable or business could be lost. Oversizing can become very expensive, causing the organization to pay for resources that may never use. Autoscaling completely changes the computing paradigm. Autoscaling enables an organization to size its computing resources based upon average demand. If demand for computing resources increases, autoscaling can spin up another instance as needed. Therefore, an organization will have almost unlimited computing capacity, while paying for only what is used. Many solutions architects consider autoscaling one of best features of the cloud. The diagram below shows how autoscaling increases compute instances to handle increased load. Instance Purchasing Options AWS has numerous options for purchasing computing instances. Each type of instance is optimized for different computing requirements. The type of computing instances available can be seen below: On-demand instances Reserved instances Scheduled reserved instances Spot instances Dedicated hosts 33 On-Demand Instances On-demand computing instances are computing instances that are available when needed. On- demand instances are charged by either the second or hour of use, and facilitate autoscaling. On-demand instances are optimal when reliable computing capacity is needed without complete knowledge of how long the application will be used or its capacity requirements. Reserved Instances A reserved instance is an instance where an organization commits to purchase a specified compute capacity for a specified period of time. It is essentially a contract to purchase a computing instance and can be for one to three years. By purchasing instances based upon long-term use, the organization can receive substantial savings over on-demand pricing. Reserved instances are optimal when an organization knows how long the application will be used and its capacity requirements. Scheduled Reserved Instances A scheduled reserved instance is a special type of reserved instance. This type of instance is optimal when you have a need for a specific amount of computing power on a scheduled basis. For example, if an organization has a mission-critical batch job that runs every Saturday and Sunday. Spot Instances A spot instance is an instance that is pulled from unused AWS capacity. Spot instances are sold in an auction-like manner in which an organization places a bid. If the bid price is equal to or greater than the spot price, a spot instance is purchased. Spot instances are deeply discounted and can be a great option. The drawback of spot instances is that they can be terminated if the spot price goes above the price that was bid on the instances. Spot instances are ideal when an organization needs extra computing capacity at a great price for non-mission-critical use. Dedicated Hosts A dedicated host is a dedicated server. Dedicated hosts are bare metal servers. 
A bare metal server is a physical server without an operating system or applications installed. With a dedicated host, the organization can install any operating system or application required. Dedicated hosts are optimal when an organization needs access to system-level information, such as actual CPU usage. Dedicated hosts are an excellent option when an application that has a license that is dedicated to a physical machine. 34 Tenancy Options After the computing platform is chosen, it is necessary to determine the tenancy of the computing platform. The tenancy, or where the instances are located, can make a substantial impact on performance and availability. The tenancy options are: Shared tenancy Dedicated instances Dedicated hosts Placement groups Shared Tenancy This is the standard tenancy for an EC2 instances. With shared tenancy, the physical server hosted at AWS will contain virtual machines for several customers. Dedicated Instance This is a server that is completely dedicated to a single customer. This server can house multiple virtual machines. All virtual machines on the dedicated instance belong to the single customer who owns the instance. Dedicated Host As previously described, this is a bare metal server and is dedicated to a single customer. Placement Groups AWS uses the concept of placement groups. Placement groups are where the computing instances are located. There are three options for placement groups. The options below are covered in the VPC section: Clustered Spread Partitioned Securing EC2 Access Keeping an organization’s technology secure requires an end-to-end security posture. EC2 instances are no exception, and they should be kept as secure as possible. Keeping the instance patched with the latest security updates and turning off unnecessary services is a major part of securing EC2 instances. AWS provides a security feature called security groups. Security groups can dramatically help in locking down EC2 services. Note that security groups are an essential 35 part of many AWS services. Security groups will be covered in depth in the security section of this book. Security Groups A security group is a virtual firewall that lets an organization configure what network traffic is allowed into the server or AWS service. There are several key concepts with security groups: The default security policy is to deny traffic. An organization must explicitly permit desired traffic for the security group, or all traffic will be denied. Security groups include the source and destination IP address, protocol TCP/UDP, and port numbers. The security group is only as effective as its configuration. Be as specific as possible and allow only the traffic necessary into the system. Block all other traffic. The diagram below shows how security groups protect compute instances by keeping unwanted traffic out of the server. 36 IP Addresses for EC2 Instances In order for an EC2 instance to function on a network, the instance must have an IP address. EC2 instances can have a private IP address, public IP address, or both. Instances also come with a fully qualified domain name assigned by AWS that can be used to connect to an EC2 instance. There are several key components to addressing EC2 instances: IP addresses are assigned to network interfaces and not computing systems. Depending upon the instance type, an instance can have multiple network interfaces. Each network instance must be on a different subnet, then other interfaces on the EC2 instance. 
IP Addresses for EC2 Instances

In order for an EC2 instance to function on a network, the instance must have an IP address. EC2 instances can have a private IP address, a public IP address, or both. Instances also come with a fully qualified domain name assigned by AWS that can be used to connect to an EC2 instance. There are several key components to addressing EC2 instances:

IP addresses are assigned to network interfaces and not to computing systems.
Depending upon the instance type, an instance can have multiple network interfaces. Each network interface must be on a different subnet than the other interfaces on the EC2 instance.
Each IP address assigned to an interface must be unique within the VPC.
IP addresses can be IPv4 or IPv6. Interfaces can also be assigned a globally unique IPv6 address, which can be disabled if it is not needed.
EC2 instances with public IP addresses and an internet gateway are reachable from the internet.
EC2 instances with private IP addresses are not reachable from the internet unless a NAT instance and an internet gateway are used.
EC2 instances with private IP addresses and a NAT gateway, but without an internet gateway, will not be reachable from the internet but will be able to connect to the internet for operating system patches and other needed connectivity.

Note that the section on VPCs discusses networking and IP addressing in much more depth.

Accessing EC2 Instances

Accessing and managing EC2 instances is critically important to maintaining a high-availability architecture. EC2 instances can be accessed via the following methods:

Directly from the EC2 console.
Via Secure Shell (SSH) for Linux machines.
Via Remote Desktop Protocol (RDP) for Windows systems.
Some management can be performed over the API using the AWS SDK.

Labs

1) Launch an EC2 instance in the same manner as in the previous section. Install an application of your choice. Create an AMI from this instance. Link on how to create an AMI from an EC2 instance below.
https://docs.aws.amazon.com/toolkit-for-visual-studio/latest/user-guide/tkv-create-ami-from-instance.html

2) Create a new instance from the AMI you created. Link on how to create an EC2 instance from an AMI below.
https://aws.amazon.com/premiumsupport/knowledge-center/launch-instance-custom-ami/

3) Delete all running instances when you have completed this lab.

Chapter 4 Introduction to Databases

What Is a Database?

Databases have become a critical component of modern information systems. But what exactly is a database, and why are databases used? Databases are applications that allow for the storage of large amounts of information. Databases can help an organization find key performance metrics in its data in order to make strategic business decisions. In the modern application environment, most web applications and many enterprise applications integrate with databases for information storage, management, and access.

AWS has four forms of databases:

Relational databases
NoSQL databases
Data warehousing databases
Data lakes

Relational Databases

Relational databases are the most common form of database. Relational databases help organizations find the relationships between different aspects of their business by showing how data elements are related to each other. Relational databases store data in a manner that is very similar to a spreadsheet, with columns and rows.13 The diagram below shows how relational databases help show the relationship between different variables.

NoSQL Databases

NoSQL stands for "not only SQL." NoSQL databases facilitate enhanced flexibility and scalability by allowing a more flexible database schema. NoSQL databases can handle structured and unstructured data. Being able to work with structured and unstructured data can allow NoSQL databases to scale far beyond relational databases.14 NoSQL databases are optimal under the following circumstances:

When you need to store large amounts of unstructured data.
When the database schema may change.
When you need flexibility.
When an organization needs rapid deployment of the database.

The diagram below shows how NoSQL databases work with key value pairs.
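In code form, the key value idea looks like the hypothetical boto3 sketch below: two items share the same key attribute but otherwise carry different attributes, which a relational table could not accept without a schema change. The table name and attributes are invented for illustration.

import boto3

table = boto3.resource("dynamodb").Table("Customers")  # placeholder table name

# Two items with the same key attribute but different shapes.
table.put_item(Item={"CustomerId": "C-1001", "Name": "Ann", "Email": "[email protected]"})
table.put_item(Item={"CustomerId": "C-1002", "Name": "Raj", "LoyaltyTier": "Gold",
                     "RecentOrders": ["O-17", "O-21"]})

# Reading an item back only requires its key.
print(table.get_item(Key={"CustomerId": "C-1002"})["Item"])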
Data Warehousing Databases

Data warehouses are designed to assist with business intelligence and analytics. Data warehouses are designed to perform analysis on large amounts of historical business data. A data warehouse is composed of the following components:

A database to store data.
Tools for visualizing the data.
Tools for prepping and loading data.

The diagram below shows an example of a data warehouse on the AWS platform.

Data Lakes

A data lake is not exactly a database, but it includes database elements, so it's included in the database section of this book.15, 16

What Is a Data Lake?

A data lake is a repository that allows an organization to store structured and unstructured data in the same place. Data lakes allow an organization to store and analyze extremely large amounts of data. The diagram below shows an example of a data lake on the AWS platform.

Benefits of a Data Lake

Data lakes allow an organization to store virtually any type of data at an almost unlimited scale. Unlike a database, a data lake can store data in its native format until it is needed by another application. In a data lake, the data can be queried and searched for relevant data and solutions to current problems. Data lakes are highly adaptable and can be changed at any time to meet an organization's requirements.

Relational Databases

Relational databases are the most common type of database and are best for structured data. Relational databases show the relationships between different variables stored in the database. With relational databases, as soon as data is written, it is immediately available for query. This instant consistency is based upon relational databases following the ACID model.17 More information on the ACID model can be seen below:

Atomic – Transactions are all or nothing.
Consistent – Data is consistent immediately after writing to the database.
Isolated – Transactions do not affect each other.
Durable – Data in the database will not be lost.

In the AWS environment, multiple relational databases are supported. Supported databases include:

Amazon Aurora
MariaDB
Microsoft SQL Server
MySQL
Oracle Database
PostgreSQL

Amazon Aurora

Amazon Aurora is the Amazon-branded relational database service. It is a fully managed database service. Aurora is a MySQL- and PostgreSQL-compatible database. Aurora is a high-performance database with speeds up to five times faster than MySQL and three times faster than PostgreSQL databases. Aurora is typically used in enterprise applications and software as a service (SaaS) applications.

MariaDB

MariaDB is an open-source relational database. It was created by the developers of the MySQL database. MariaDB has additional features and advanced functionality when compared to MySQL. MariaDB supports a larger connection pool and is comparatively faster than MySQL, and it offers features such as dynamic columns that MySQL does not.

Microsoft SQL Server

Microsoft SQL Server is the Microsoft-branded relational database solution. AWS supports multiple versions of Microsoft SQL Server:

SQL Server 2008
SQL Server 2012
SQL Server 2014

Microsoft SQL Server enables organizations to bring their Windows-based workflows to the cloud. Microsoft SQL Server offers tools, including SQL Server Management Studio, to help manage the infrastructure. Microsoft SQL Server supports high-availability clustering and failover options, but in a different manner than other relational databases. Please check Microsoft documentation for the most up-to-date information when configuring Microsoft SQL databases. AWS supports four editions of Microsoft SQL Server:

Enterprise
Express
Standard
Web
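Whichever engine is selected, provisioning it through Amazon RDS follows the same basic pattern. Below is a minimal, hypothetical boto3 sketch that creates a small MySQL instance; the identifier, credentials, and sizes are placeholders and would be tailored (and properly secured) in a real deployment.

import boto3

rds = boto3.client("rds")

rds.create_db_instance(
    DBInstanceIdentifier="example-mysql-db",            # placeholder name
    Engine="mysql",                                      # could instead be aurora-mysql, postgres, sqlserver-ex, etc.
    DBInstanceClass="db.t3.micro",
    AllocatedStorage=20,                                 # GiB of EBS storage
    MasterUsername="admin",
    MasterUserPassword="ChangeMe-UseSecretsManager",     # placeholder; store real credentials securely
    BackupRetentionPeriod=7,                             # keep automated backups for 7 days
)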
MySQL

MySQL is one of the original open-source relational databases and has been around since 1995. MySQL is extremely popular and is used in a wide variety of web applications.

Oracle Databases

Oracle is one of the most popular relational databases in the world. It has an extensive feature set and functionality. Unlike open-source databases, Oracle databases are developed, licensed, and managed by Oracle. AWS RDS for Oracle supports multiple editions of the Oracle database:

Standard
Standard One
Enterprise

Each of these supported editions of AWS RDS for Oracle has different performance, flexibility, and scalability options. AWS offers two licensing options for Oracle on AWS:

License included – In this model, the Oracle license is held by AWS. Two license options are available: Standard Edition One and Standard Edition Two.
Bring your own license – You have a license for Oracle, and you host your database on AWS. This provides much more license flexibility and is available with these license options: Standard, Enterprise, Standard Edition One, and Standard Edition Two.

PostgreSQL Database

PostgreSQL is an open-source relational database. It has a very advanced feature set and enhanced functionality when compared to MySQL.

DynamoDB

Many enterprises require a database with near unlimited scalability and flexibility beyond what can be achieved with a relational database. AWS offers a fully managed NoSQL database called DynamoDB for organizations that need the scalability and flexibility of a NoSQL database.18, 19, 20, 21

DynamoDB is a fully managed, high-availability NoSQL database service. DynamoDB has multiple advantages:

Fully managed by AWS, so there is less management for the organization. Because it is serverless, there is near unlimited scalability, as the database is not bound to the capacity of a physical server or servers. Additionally, AWS manages servers, operating systems, and security.
High availability – By default, DynamoDB is placed in multiple availability zones.
High-performance storage – DynamoDB uses high-performance SSD storage.
Data protection – All data is encrypted by default in DynamoDB.
Low latency – DynamoDB can be configured for sub-millisecond latency when used with DynamoDB Accelerator (an in-memory cache for DynamoDB).
Backups – DynamoDB can be backed up with minimal or no effect on database performance.

Key Things to Know about DynamoDB

DynamoDB has a very flexible schema, as it is not bound to the rigid table-and-column schema used by relational databases. This flexibility allows for significant customization and scalability. DynamoDB works best with key/value pairs in the primary index. DynamoDB can also have secondary indexes, which give applications additional query patterns beyond the primary index. DynamoDB secondary indexes can be global or local. Global secondary indexes can span all of the table's partitions and have virtually unlimited capacity. Local secondary indexes have the same partition key as the base table, and the items sharing a partition key value in a local secondary index cannot exceed 10 gigabytes (GB).

The diagram below shows an example of the DynamoDB key value pair architecture.
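The following hedged boto3 sketch shows what a table with a global secondary index might look like; the table, key, and index names are invented for illustration.

import boto3

dynamodb = boto3.client("dynamodb")

dynamodb.create_table(
    TableName="Orders",                                        # placeholder table name
    AttributeDefinitions=[
        {"AttributeName": "OrderId", "AttributeType": "S"},
        {"AttributeName": "CustomerId", "AttributeType": "S"},
    ],
    KeySchema=[{"AttributeName": "OrderId", "KeyType": "HASH"}],  # primary index
    GlobalSecondaryIndexes=[{                                  # adds a second query pattern: orders by customer
        "IndexName": "CustomerIndex",
        "KeySchema": [{"AttributeName": "CustomerId", "KeyType": "HASH"}],
        "Projection": {"ProjectionType": "ALL"},
    }],
    BillingMode="PAY_PER_REQUEST",                             # on-demand capacity
)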
DynamoDB by default does not follow the ACID model used by SQL databases, which helps increase its scalability. DynamoDB by default uses the BASE model:

Basically Available – The system should be available for queries.
Soft State – Data in the database may change over time.
Eventually Consistent – Writes to the database are eventually consistent. This means that a read immediately after a write may not return the latest data, but it will over time.

DynamoDB with the default configuration follows the eventually consistent model. However, DynamoDB can be configured to have strongly consistent reads if needed.

DynamoDB Pricing

DynamoDB is priced based upon throughput. To achieve the best performance and pricing, it is necessary to provision read and write capacity. Read and write capacity are provisioned prior to use and should be set to accommodate the organization's needs. Autoscaling can also be used with DynamoDB. Autoscaling will watch the database and scale up as needed, but it will not scale down. Since DynamoDB autoscaling does not scale back down, this method can lead to higher long-term costs. DynamoDB can also be set up for on-demand capacity. However, on-demand capacity is more expensive than provisioned capacity. Therefore, for optimal pricing with DynamoDB, it's best to know the read and write capacity when the database is provisioned.

When to Use DynamoDB

DynamoDB is the optimal choice when near unlimited database scalability and low latency are required. Additionally, DynamoDB is an excellent choice for storing data from a large number of devices, such as internet of things (IoT) devices. Some common DynamoDB use cases are:

Gaming applications – storing game state, player data stores, and leaderboards.
Financial applications – storing a large number of user transactions.
E-commerce applications – shopping carts, inventory tracking, and customer profiles and accounts.

Data Warehousing on AWS

Data warehousing is becoming a critical business tool. Data warehouses enable businesses to get actionable insights from their data. The AWS data warehousing solution is called Amazon Redshift. Amazon Redshift is a fast, powerful, and fully managed data warehouse solution. Amazon Redshift supports petabyte-scale data warehousing. Amazon Redshift is based upon PostgreSQL and supports SQL queries. Additionally, Redshift will work with many applications that perform SQL queries.22

Redshift is a highly scalable platform. The Redshift architecture is built around clusters of computing nodes. The primary node is considered a leader node, with supporting nodes called compute nodes. Compute nodes support the leader node, and queries are directed to the leader node. The diagram below shows how data warehousing is performed with Amazon Redshift and other AWS services.

Scaling Amazon Redshift Performance

Scaling Amazon Redshift is achieved by adding additional nodes. Amazon offers two types of nodes:

Dense compute nodes – Dense compute nodes are based upon high-speed SSD RAID arrays.
Dense storage nodes – Dense storage nodes are based upon magnetic disk RAID arrays.

Generally speaking, SSD-based arrays have higher throughput than magnetic arrays. SSD-based arrays have much higher IOPS than magnetic-based RAID arrays and perform much better in applications that require high input/output (IO) performance.
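A hedged boto3 sketch of how a cluster might be provisioned is shown below; the identifier and credentials are placeholders, and the node type parameter is where the dense compute versus dense storage decision is expressed.

import boto3

redshift = boto3.client("redshift")

redshift.create_cluster(
    ClusterIdentifier="example-warehouse",               # placeholder name
    ClusterType="multi-node",
    NodeType="dc2.large",                                # a dense compute family; a dense storage family could be chosen instead
    NumberOfNodes=2,                                     # compute nodes; a leader node is added automatically for multi-node clusters
    MasterUsername="awsadmin",
    MasterUserPassword="ChangeMe-UseSecretsManager",     # placeholder credential
    DBName="analytics",
)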
Database Storage Options

After determining the optimal database for an organization's needs, it is necessary to determine the proper storage options for the database. AWS databases are stored on EBS volumes, and the database storage options are:

Provisioned IOPS (PIOPS) – highest performance, lowest latency, and highest throughput.
General purpose SSD – high performance, lower latency, and moderate throughput.
Magnetic storage – moderate performance, moderate throughput, highest latency, and lowest cost; designed for light IO requirements.

Database Management and Optimizations

There are many components to successfully architecting and scaling an enterprise-wide database. This section will cover the following topics:

Backing up the database.
Scaling the database.
Designing for high availability.
Protection of data with encryption.
Extract, transform, and load (ETL) tools.

Backing Up the Database

Databases are automatically backed up by AWS. Database backups copy the entire server, not just the data stored in the database. Backups can be retained for up to thirty-five days; the retention period is configurable from one to thirty-five days. Automated backups happen at a defined window each day. While the database is being backed up, it may be unavailable or have significantly degraded performance.23

Databases can also be backed up manually. Manual backups are in the form of a DB snapshot. DB snapshots are a point-in-time copy of the database's EBS volume. DB snapshots are maintained until manually deleted. The diagram below shows how a database is manually backed up on the AWS platform.

Restoring a Database Backup

Databases can easily be restored from backups. When an organization needs to restore a database, a new database instance is created. Since a new database is created, it will have a new IP address and DNS name. Therefore, if a database is restored, it may be necessary to update other applications with the new IP address or DNS name. The diagram below shows how a database is restored from a snapshot image.

Scaling the Database

Databases are a mission-critical application for many enterprises. As a mission-critical application, the database must scale to meet an organization's needs. There are several methods to increase the scalability of the database. Often it will take a combination of scaling methods to meet an organization's needs.

The first scaling decision is scaling up versus scaling out. Scaling up refers to simply increasing the capacity of the server housing the database. At some point scaling up is not feasible, as a database can exceed the capacity of even the most powerful servers. Scaling out, by comparison, involves adding additional compute instances. Scaling out has two options: partitioning for NoSQL databases and read replicas for relational databases.

Scaling Out for NoSQL Databases

Partitioning the database involves chopping the database into multiple logical pieces called shards. The database has the intelligence to know how to route the data and the requests to the correct shard. Effectively, sharding breaks down the database into smaller, more manageable pieces. Partitioning the database is effective for NoSQL databases like DynamoDB and Cassandra.

Scaling Out for Relational Databases

Relational databases are scaled out by adding additional servers. With relational databases, the additional servers are called read replicas. A read replica is a read-only copy of the main database instance. Read replicas are synchronized in near real time. Read replicas are used to decrease the load on the main database server by sending read requests to the read replicas as opposed to the primary server. This reduces CPU, memory, and disk IO on the main server. AWS supports up to five read replicas. Read replicas are helpful in the following scenarios:

When there is a lot of read activity.
To increase performance. Note that read replicas are for performance, not disaster recovery.
When query traffic is slowing things down.
For more capacity, as offloading read requests from the main database can save valuable resources.
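Creating a read replica is a single API call against an existing source instance. The hedged boto3 sketch below assumes a source instance named example-mysql-db (a placeholder) and places the replica in a different availability zone.

import boto3

rds = boto3.client("rds")

rds.create_db_instance_read_replica(
    DBInstanceIdentifier="example-mysql-db-replica",     # name of the new replica
    SourceDBInstanceIdentifier="example-mysql-db",       # placeholder source instance
    AvailabilityZone="us-east-1b",                       # optional: place the replica in another AZ
)

# Point read-heavy application traffic at the replica's endpoint,
# while writes continue to go to the source instance.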
Database Caching

Database caching is another means to increase the scalability of a database. Caching is a method that takes frequently accessed information and places it in memory, so the request does not need to be forwarded to the database server. Caching works by sending the first request for information to the database server. The server then responds to the request, and the cache temporarily stores the results of the request in memory. Future requests for the same information will be answered by the cache and will not be sent to the database server. Future requests for information not stored in the cache will be sent to the server. Caching data reduces requests to the server, freeing up server resources. To prevent stale data, the cache will not keep information in memory forever; instead, the cache has a timeout to expire old data and make room for fresh data. This timeout, referred to as the time to live (TTL), can be configured based upon an organization's needs.

AWS supports two caching types: ElastiCache for Memcached and ElastiCache for Redis. Memcached is designed for simplicity, while ElastiCache for Redis has a substantial feature set and functionality. Caching is an excellent method to increase scalability. Caching is beneficial only when there are frequent requests for the same information or queries; if all requests are for new information, they will all be sent to the main server, negating any benefit of the cache. The diagram below shows an example of database caching on the AWS platform.

Database Queueing

Database queueing can make a significant improvement in the write performance of a database. Additionally, queueing can significantly reduce CPU usage and other resource consumption in the database. AWS has a queueing service called Simple Queue Service (SQS). Using SQS effectively decouples the database writes from the actual database.24 The queueing process works in the following manner:

1. Data is sent to the SQS queue.
2. The queue looks at the database.
3. If the database is free, the message is sent from the queue to the database and removed from the queue.
4. If the database is busy or unavailable, the message waits in the queue until the database is available.
5. When the database is available, the message is sent to the database and removed from the SQS queue.

AWS offers two versions of the SQS queue:

AWS Standard SQS queue – This is a simple queue to temporarily store messages prior to being written to the database. With standard SQS queues, there is no guarantee of the order of messages leaving the queue. This is the default option.
First in, first out (FIFO) – This option guarantees that messages will exit the queue in the order in which they were received.

The diagram below shows an example of an SQS queue on the AWS platform.
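As a hedged illustration of the decoupling pattern, the boto3 sketch below creates a FIFO queue, enqueues a write, and shows how a consumer would pull and delete the message before applying it to the database; the queue name and message body are invented for the example.

import boto3, json

sqs = boto3.client("sqs")

# FIFO queues guarantee ordering; a standard queue is the default otherwise.
queue_url = sqs.create_queue(
    QueueName="db-writes.fifo",
    Attributes={"FifoQueue": "true", "ContentBasedDeduplication": "true"},
)["QueueUrl"]

# Producer: the application sends the write to the queue instead of the database.
sqs.send_message(
    QueueUrl=queue_url,
    MessageBody=json.dumps({"customer_id": "C-1001", "order_total": 42.50}),
    MessageGroupId="orders",
)

# Consumer: a worker reads the message, writes it to the database, then deletes it.
messages = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=1).get("Messages", [])
for msg in messages:
    # ... apply the write to the database here ...
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])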
How Does SQS Help?

SQS helps by reducing write contention on the database. SQS helps with availability as well; if the database is temporarily down, messages can be retained in the queue. Since messages are retained in the queue until the database is ready, SQS can dramatically smooth CPU, memory, and disk IO usage. Additionally, the number of messages in the SQS queue can be used as a metric to scale out the database or the computing service using the SQS queue.

When to Use SQS

SQS is optimal to use in the following circumstances:

To increase scalability when there are a lot of write requests to the system.
To decrease the load on the database behind the SQS queue.
When it's not known exactly how much performance is needed, but the organization wants to be able to account for large spikes in traffic.
When extra insurance is desired that critical messages won't be lost.
When you want to decouple your application to increase availability, modularity, and scalability.

Designing a High-Availability Database Architecture

Given the crucial nature of databases in the enterprise computing environment, making sure the database is available when needed is of utmost importance. The key to all high-availability designs is to avoid any single point of failure. As a reminder, the AWS network is divided into regions and availability zones. Regions are large geographic areas, while an availability zone (AZ) is essentially a data center inside a region. Regions may have many availability zones.

A high-availability database architecture will have database instances placed in multiple availability zones (multi-AZ). In a multi-AZ environment, there are multiple copies of the database in different availability zones. It is important to note that multi-AZ environments do not increase database performance. Multi-AZ environments are for redundancy and enhanced availability. In a multi-AZ environment, data from the primary (master) database is synchronously copied to the backup database in the other AZ. If the primary database fails, the database instance in the other AZ will take over. A failover to the backup database will be triggered in the following circumstances:

The primary database instance fails.
There is an outage in an availability zone.
The database instance type is changed.
The primary database is under maintenance (i.e., patching an operating system).
A manual failover has been initiated (i.e., reboot with failover).

The diagram below shows a high-availability application and database architecture.
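Enabling this protection on RDS is a single flag. The hedged boto3 sketch below converts the hypothetical instance from the earlier example to a Multi-AZ deployment; because applying the change immediately can cause a brief interruption, scheduling it for a maintenance window is often preferred.

import boto3

rds = boto3.client("rds")

# Convert an existing instance (placeholder name) to a Multi-AZ deployment.
rds.modify_db_instance(
    DBInstanceIdentifier="example-mysql-db",
    MultiAZ=True,
    ApplyImmediately=False,   # apply during the next maintenance window
)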
Labs

1) Create a MySQL database. Link on how to create a MySQL database below.
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_GettingStarted.CreatingConnecting.MySQL.html

2) Create a read replica of the database. Link on how to create a read replica below.
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_ReadRepl.html#USER_ReadRepl.Create

3) Promote the read replica to be the master database. Link on how to promote a read replica below.
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_ReadRepl.html

4) Set up DynamoDB. Link on how to set up DynamoDB below.
https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/SettingUp.DynamoWebService.html

5) Create a table in DynamoDB. Link on how to create a table in DynamoDB below.
https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/getting-started-step-1.html

6) Delete all databases and tables when you have completed this lab to avoid being billed for services that you are not using.

Chapter 5 The AWS Virtual Private Cloud

What Is the AWS VPC?

The AWS VPC is essentially a private virtual network inside the AWS network. While AWS is physically a shared network, each VPC is logically isolated from other AWS customers. The AWS network supports private and public addresses for each of its customers. Since AWS customers are logically separated, there is no contention for IP address space between VPCs. Each VPC will have its own routing table that is responsible for directing traffic. As with any network, a proper IP addressing scheme is essential for scalability.26 The diagram below shows an example of logically isolated customer VPCs on the AWS platform.

The OSI Model

Throughout this book, especially the VPC section, we often reference the Open Systems Interconnection (OSI) model. The OSI model divides network communications into seven layers. Each layer of this model has specific functions for network communication. Knowledge of the OSI model can be helpful in troubleshooting and understanding the components of network communication. The table below shows the seven layers of the OSI model and the associated functionality at each level.27

Open Systems Interconnection (OSI) Model
Layer 7 – Application: user interface. Examples: HTTP, DNS, SSH. Data unit: data.
Layer 6 – Presentation: presentation and data encryption. Example: TLS. Data unit: data.
Layer 5 – Session: controls the connection. Example: sockets. Data unit: data.
Layer 4 – Transport: protocol selection. Examples: TCP, UDP. Data unit: segments.
Layer 3 – Network: logical addressing. Example: IP address. Data unit: packets.
Layer 2 – Data link: hardware connection. Example: MAC address. Data unit: frames.
Layer 1 – Physical: physical connection. Examples: wire, fiber. Data unit: bits.

IP Addressing

An IP address is a logical address assigned to a computing device that identifies that device on the network. An IP address is similar to the address on a home. In order for mail to be delivered, the house must have a unique address so that the postal service can identify the correct home and deliver the mail. Every address must be unique, even if the only thing that separates similar-looking addresses is the postal code. IP addresses are no different. For any device to talk to another device on an IP network, their addresses must be unique. There are two versions of IP addresses:

IPv4 – The original IP address format used in networking; it has been in use since the 1970s and is still the dominant address space.
IPv6 – A newer IP addressing model designed to overcome the limitations of IPv4.

IP addresses operate at layer 3 of the OSI model. An IP address is a logical address in that it is assigned to a network interface. IP addresses are not hard-coded like a MAC address (layer 2), which is permanently assigned to an Ethernet interface. IPv4 addresses are 32-bit addresses, which was sufficient when the internet was formed. However, the internet grew far beyond what anyone expected, and there were not enough public IP addresses. The Internet Engineering Task Force (IETF) came up with two solutions to the IP address space shortage.28,29

The first solution is private IP address space, as specified by Request for Comments (RFC) 1918; the second is IPv6 addressing. Private IP addresses are to be used on internal networks and are not globally routable. Private IP addresses are available in the following address space:

10.0.0.0/8
172.16.0.0/16 through 172.31.0.0/16
192.168.0.0/16

IP Address Classes

IPv4 has five classes of IP address space. Address classes are legacy and not really used in today's environment. IP address classes are covered for historical purposes and so the reader can better understand classless interdomain routing (CIDR).

Class A addresses 1.0.0.0-126.255.255.255 /8
Class B addresses 128.0.0.0-191.255.255.255 /16
Class C addresses 192.0.0.0-223.255.255.255 /24
Class D addresses 224.0.0.0-239.255.255.255 (multicast)
Class E addresses 240.0.0.0-255.255.255.255 (experimental)
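When planning VPC address space, Python's standard ipaddress module is a convenient way to sanity-check CIDR math. The sketch below, using illustrative ranges only, confirms that a block falls within RFC 1918 private space and carves it into /24 subnets of the kind that might be assigned to individual VPC subnets.

import ipaddress

vpc_block = ipaddress.ip_network("10.0.0.0/16")   # illustrative VPC CIDR block

# Verify the block falls in RFC 1918 private space.
print(vpc_block.is_private)            # True
print(vpc_block.num_addresses)         # 65536 addresses in a /16

# Carve the /16 into /24 subnets (256 addresses each), e.g., one per VPC subnet.
subnets = list(vpc_block.subnets(new_prefix=24))
print(subnets[0], subnets[1])          # 10.0.0.0/24 10.0.1.0/24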
