ITSS 4371 Test 1 Review

Class 1 – Cloud Fundamentals

Cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction.

What is “the cloud?”
o According to cloud providers, it’s “the on-demand delivery of compute power, database, storage, applications, and other IT resources via the internet with pay-as-you-go pricing.”
o Ultimately, it’s data centers connected by the internet or wide area networks.

Cloud computing is a general term used to describe a form of network-based computing that takes place over the Internet.
o Basically, a step onwards from utility computing.
o Utility computing is a service provisioning model in which a service provider makes computing resources and infrastructure management available to the customer as needed, and charges them for specific usage rather than a flat rate.
o A collection/group of integrated and networked hardware, software, and Internet infrastructure (called a platform).
o Using the Internet for communication and transport provides hardware, software, and networking services to clients. These platforms hide the complexity and details of the underlying infrastructure from users and applications by providing a very simple graphical interface or API (Application Programming Interface).

Basic Cloud Characteristics
o The “no-need-to-know”: users don’t need to understand the underlying details of the infrastructure; applications interface with the infrastructure via APIs.
o The “flexibility and elasticity” allows these systems to scale up and down at will, utilizing resources such as CPU, storage, server capacity, load balancing, and databases.
o The “pay as much as used and needed” type of utility computing, and the “always on, anywhere and any place” type of network-based computing.
Cloud computing allows customers to purchase just what they need in terms of service layers, which include:
o Hardware: The physical computers and network equipment necessary to connect to the internet, plus the virtualization needed to run a virtual server instance.
o Operating System: The software that controls the basic functions of the computer; it manages hardware and software resources and provides common services for computer programs.
o Middleware: Such things as web server software and database management system software.
o Application: Software the end user interacts with that performs specific tasks unrelated to the actual operation of the computer itself.

Layers are packaged together “as a service” in the following ways:
o Software as a Service (SaaS): A complete, turn-key solution that includes the hardware, operating system, middleware, and application. Think Salesforce.com.
o Platform as a Service (PaaS): Includes the hardware, operating system, and middleware. All you add is your application. Think AWS Elastic Beanstalk.
o Infrastructure as a Service (IaaS): Includes the hardware and virtualization. You provide the operating system, middleware, and application. Think AWS, Microsoft Azure, and Google Cloud as a base offering.

Cloud Models
o Public cloud simply means customers share the same physical hardware. In the public cloud, instances run on the same pool of physical servers that every other public customer of the provider is using. That means your servers could be running on the same hardware used by your most-hated competitor.
o Private cloud is cloud infrastructure running on hardware dedicated to a given customer.
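The split of layers among SaaS, PaaS, and IaaS above can be captured as a small lookup table. A minimal Python sketch (the layer names follow these notes, not an official AWS taxonomy):

```python
# Which layers the provider manages under each service model, per the
# layer breakdown above; the remaining layers are the customer's job.
LAYERS = ["hardware", "operating system", "middleware", "application"]

PROVIDER_MANAGED = {
    "IaaS": {"hardware"},
    "PaaS": {"hardware", "operating system", "middleware"},
    "SaaS": {"hardware", "operating system", "middleware", "application"},
}

def customer_managed(model: str) -> list[str]:
    """Return the layers the customer must supply under a given model."""
    return [layer for layer in LAYERS if layer not in PROVIDER_MANAGED[model]]

print(customer_managed("PaaS"))  # ['application']
```

Reading the table top to bottom mirrors the notes: moving from IaaS to SaaS, each model hands one more layer to the provider.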
If security and privacy are a concern, or you’re philosophically opposed to having your pristine server instances running alongside the unwashed masses, cloud providers are happy to lease hardware that will be dedicated to running your instances, and only your instances… but it’ll cost ya. Private cloud is typically charged a fixed monthly fee, which negates the pay-for-only-what-you-use advantage of public cloud computing.
o Virtual private cloud combines the pay-for-only-what-you-use advantage of public cloud computing with a modicum of isolation.
▪ Instances run on the same pool of servers as public cloud customers.
▪ A block of private IP addresses keeps instances logically separated from the public instances.
o Community cloud is cloud infrastructure in which multiple organizations share resources and services based on common operational and regulatory requirements. Members of a community cloud are organizations that have common business requirements, usually driven by the need for shared data, shared services, or shared industry regulations. This means they’re typically organizations in the same industry or departments of the same organizational body.
o Hybrid cloud is a computing environment that combines an on-premises data center with a public cloud, allowing data and applications to be shared between them. For industries that work with highly sensitive data, such as banking, finance, government, and healthcare, a hybrid cloud model may be the best option. For example, some regulated industries require certain types of data to be stored on-premises while allowing less sensitive data to be stored in the cloud.

Cloud Management and Service Creation Tools
o Include three types of software:
▪ Physical and virtual infrastructure management software.
▪ Unified management software.
▪ User-access management software.
o These software components interact among themselves to automate provisioning of cloud services.

Why Cloud?
o Cloud computing offers a value proposition that is different from traditional enterprise IT environments.
o Cloud computing can offer economies of scale that would otherwise be unavailable.
▪ Exploit virtualization and aggregate computing resources.
o With minimal upfront investment, cloud computing enables global reach of services and information through an elastic utility computing environment that supports on-demand scalability.
o Cloud computing can also offer pre-built solutions and services, backed by the skills necessary to run and maintain them, potentially lowering risk and removing the need for the organization to retain a group of scarce, highly skilled staff.

Advantages of the Cloud
o Agility: The ability to respond quickly to changes in demand. Agility in cloud computing means being able to spin up a new server in minutes.
o Elasticity: The ability to dynamically increase and decrease resources programmatically.
o Lower Cost: In cloud computing, you only pay for what you use, and because you don’t own the hardware, you lower your capital expenditures and fund your IT infrastructure through operating expenditures. Not only is it a lower cost, it’s also a known cost, because you know how much it will cost to spin up new server instances.
o Continuous Availability & Survivability: Cloud providers operate multiple data centers within a given area, in multiple areas within a region, and in multiple regions around the world. An application can be distributed to run across multiple regions.
o No Obsolescence: Cloud providers continuously upgrade their network and data center infrastructure.
o Best-of-Breed Security: Cloud providers provide robust network security.
o Economies of Scale: Because cloud providers buy hardware and network access in high volume, they’re able to offer lower prices for use of that hardware.
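The “Lower Cost” and “known cost” points reduce to simple arithmetic. The figures below are invented for illustration, not real provider prices:

```python
# Hypothetical comparison of owning hardware (capex) vs. renting by the
# hour (opex). All figures are invented for illustration.
server_purchase = 12_000.0   # one-time cost of a comparable server, USD
hourly_rate = 0.10           # assumed on-demand price per instance-hour

# A bursty workload: 3 instances, 8 hours a day, 20 days a month.
hours_per_month = 3 * 8 * 20
monthly_cloud_cost = round(hours_per_month * hourly_rate, 2)

# Months of this usage before pay-as-you-go spending reaches the
# up-front purchase price of one server:
breakeven_months = round(server_purchase / monthly_cloud_cost)
print(monthly_cloud_cost, breakeven_months)  # 48.0 250
```

The point is not the specific numbers but their shape: the cloud bill is proportional to actual usage and known in advance, while the capex route pays for peak capacity whether or not it is used.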
Disadvantages of Cloud Computing
o Regulatory Requirements: Some countries require that companies operating within their national borders maintain physical control over certain kinds of data, usually for the purposes of taxation.
o Security & Privacy: When you deploy to the cloud, you’re essentially outsourcing your IT infrastructure, which leaves a third party in the driver’s seat. While cloud providers provide best-of-breed security, customers have still been hacked. Security is a shared responsibility—something cloud providers are quick to remind customers of.
o Vendor Lock-In: Moving between cloud providers isn’t as easy as copying files over from one to another. If services are built in a cloud provider using its proprietary technology, those services may need to be redesigned to use non-proprietary technology or an equivalent proprietary technology offered by the other provider.

Class 2 – Introduction to AWS

History
o Amazon Web Services was started in 2006 in response to the challenges Amazon faced implementing IT infrastructure for its e-commerce site, amazon.com.
▪ IT infrastructure is largely commodified, meaning the components of a company’s IT infrastructure offer no competitive advantage.
o The first service offered was Simple Storage Service, aka S3, which offered seemingly infinite, scalable, and secure storage for significantly below market cost at $0.15/GB.
▪ S3 currently hosts over 100 trillion objects!
o A few months later, AWS rolled out Elastic Compute Cloud, aka EC2, which offers cloud-based virtual machine servers (instances) that are programmatically scalable.
▪ There are over 400 instance types across 67 instance families.

Today
o AWS is a global company with global reach.
▪ It operates in 245 countries and territories.
▪ It operates 33 Regions, with seven more in planning.
▪ It operates 105 Availability Zones, with 21 more planned. Each Availability Zone typically has three to five data centers.
▪ It operates over 600 CloudFront POPs and 13 regional edge caches.
▪ It operates 41 Local Zones and 29 Wavelength Zones.
▪ It offers 135 Direct Connect locations.
o AWS offers over 200 cloud-based services.

AWS Management Console
o The AWS Management Console is a web-based application that contains and provides centralized access to all individual AWS service consoles.
o Features of the AWS Management Console are:
▪ Navigate to AWS service consoles: Use Unified Navigation to access recently visited service consoles, view and add services to a Favorites list, access your console settings, and access AWS User Notifications.
▪ Search for AWS services and other AWS information: Use Unified Search to search for AWS services and features, and AWS Marketplace products.
▪ Customize the console: Use Unified Settings to customize various aspects of the AWS Management Console, including the language, default Region, and more.
▪ Run CLI commands: AWS CloudShell is accessible directly from the console. Use CloudShell to run AWS CLI commands against favorite services.
▪ Access all AWS event notifications: Use the AWS Management Console to access notifications from AWS User Notifications and AWS Health.
▪ Customize AWS Console Home: Customize the AWS Console Home experience by using widgets.
▪ Create and manage AWS applications: Manage and monitor the cost, health, security posture, and performance of applications using myApplications in AWS Console Home.
▪ Chat with Amazon Q: Get answers to your AWS service questions from a generative AI assistant directly in the console. You can also get connected with a live agent for additional support.
▪ Control AWS account access in your network: Use AWS Management Console Private Access to limit access to the AWS Management Console to a specified set of known AWS accounts when the traffic originates from within a network.
AWS Service Console
o Each AWS service has its own individual service console; these service consoles are separate from the AWS Management Console.
o Settings chosen in Unified Settings, such as visual mode and default language, are applied to these consoles.
o Service consoles offer a wide range of tools for cloud computing, as well as account and billing information.
o To learn more about a specific service and its console (for example, Amazon Elastic Compute Cloud), navigate to its console using the search box in the navigation bar and access the Amazon EC2 documentation from the AWS Documentation website.
o When navigating to an individual service’s console, access features of the AWS Management Console using Unified Navigation at the top of the console.

Class 3 – Identity Access Management

What is IAM?
o AWS IAM is a web service that allows you to securely control access to AWS services.
o IAM enables you to manage user permissions, allowing or denying access to AWS resources.
o IAM is used to control who is authenticated (signed in) and authorized (has permission) to use resources.

Identities
o The initial identity that has access to all AWS services and resources is the root user.
▪ Root user credentials are required to:
   Change account settings.
   Restore IAM user permissions.
   Close an AWS account.
   Activate IAM access to the Billing & Cost Management Console.
   Sign up for AWS GovCloud.
   Register as a seller in the Reserved Instance Marketplace for EC2.
   Recovery of AWS Key Management Service.
   Link the account to an AWS MTurk Requester account.
   Configure an Amazon S3 bucket to enable MFA.
   Edit or delete an Amazon S3 bucket policy that denies all principals.
   Edit or delete an Amazon SQS policy that denies all principals.

Access Management
o Access is granted to a principal. A principal may be:
▪ an IAM user
▪ a federated user
▪ an IAM role
▪ an application
o After authentication during login, a request is made to grant the principal access to resources.
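The permissions behind these access decisions are expressed as JSON policy documents. A minimal sketch of an identity-based policy allowing read-only access to one S3 bucket; the bucket name and action list are illustrative, not taken from these notes:

```python
import json

# Minimal identity-based policy granting read access to one bucket.
# "example-bucket" and the chosen actions are hypothetical.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::example-bucket",
                "arn:aws:s3:::example-bucket/*",
            ],
        }
    ],
}

# Policies are attached to a principal (user, role, etc.) as JSON text:
policy_json = json.dumps(policy, indent=2)
```

A request succeeds only if an attached policy statement with Effect "Allow" matches the requested action and resource (and no matching "Deny" overrides it).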
Access is granted if the principal has been assigned the necessary permissions.

Service Availability
o IAM is a service that is eventually consistent. Eventual consistency, aka optimistic replication, is an availability model in distributed computing used to achieve high availability.
▪ Eventual consistency is BASE (basically available, soft-state, eventually consistent), in contrast to ACID (atomicity, consistency, isolation, durability) as used in some database replication and transactional concurrency.
▪ Eventual consistency guarantees that if no new updates are made to a given data item, eventually all accesses to that data item will return the last updated value.

Why Use IAM?
o Using IAM allows you to:
▪ Grant shared access to your AWS account.
▪ Assign granular permissions to different users for different resources.
▪ Securely provide credentials for applications that run on EC2 instances.
▪ Add two-factor authentication to your account and to individual users for extra security.
▪ Allow users who already have passwords elsewhere—e.g., your corporate network or an internet identity provider—to access your AWS account; aka identity federation.
▪ Identify who made requests to resources using AWS CloudTrail and IAM identities.
▪ Support the processing, storage, and transmission of credit card data by a merchant or service provider; IAM allows for PCI DSS Level 1 compliance.

When to Use IAM?
o You use IAM when performing different job functions.
o How you use IAM differs, depending on the work that you do in AWS.
▪ Service user – If you use an AWS service to do your job, then your administrator provides you with the credentials and permissions that you need. As you use more advanced features to do your work, you might need additional permissions. Understanding how access is managed can help you request the right permissions from your administrator.
▪ Service administrator – If you’re in charge of an AWS resource at your company, you probably have full access to IAM. It’s your job to determine which IAM features and resources your service users should access. You must then submit requests to your IAM administrator to change the permissions of your service users. Review the information on this page to understand the basic concepts of IAM.
▪ IAM administrator – If you’re an IAM administrator, you manage IAM identities and write policies to manage access to IAM.

Class 4 – Storage

What is Amazon S3?
o Amazon Simple Storage Service (S3) is an object storage service that offers industry-leading scalability, data availability, security, and performance.
o S3 provides features like:
▪ Data storage as objects within buckets
▪ Metadata tagging
▪ Data transfer acceleration
▪ Security through access control lists

S3 Storage Classes
o Standard
▪ Designed for frequently accessed data, offering high durability and availability.
o Intelligent-Tiering
▪ Automatically moves data between access tiers when access patterns change, optimizing costs.
o Standard-IA (Infrequent Access)
▪ For data that is accessed less frequently but requires rapid access when needed.
o One Zone-IA
▪ Lower-cost option for infrequently accessed data that does not require multiple-Availability-Zone resilience.
o Glacier
▪ Low-cost storage for data archiving, with retrieval times ranging from minutes to hours.
o Glacier Deep Archive
▪ Lowest-cost storage for long-term data archiving, with retrieval times of 12 hours or more.

S3 Security Features
o Access Control
▪ Use AWS Identity and Access Management (IAM) to manage access to your S3 resources.
o Encryption
▪ S3 supports both server-side encryption and client-side encryption to protect data.
o Compliance
▪ S3 meets various compliance standards, including PCI-DSS, HIPAA/HITECH, and FedRAMP.
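To see why the storage-class choice matters, compare monthly storage cost for rarely accessed data. The per-GB prices below are illustrative placeholders, not actual AWS rates:

```python
# Illustrative per-GB monthly prices (NOT actual AWS pricing) to show
# the cost spread across storage classes for cold data.
price_per_gb = {
    "Standard": 0.023,
    "Standard-IA": 0.0125,
    "Glacier Deep Archive": 0.00099,
}

def monthly_cost(gb: float, storage_class: str) -> float:
    return gb * price_per_gb[storage_class]

archive_gb = 10 * 1024  # 10 TB of cold backups, in GB
standard = monthly_cost(archive_gb, "Standard")
deep = monthly_cost(archive_gb, "Glacier Deep Archive")
print(round(standard / deep, 1))  # Standard is roughly 23x more here
```

The trade-off runs the other way on access: the cheaper the class, the slower (and costlier) retrieval becomes, which is why lifecycle policies move data down the tiers as it ages.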
S3 Data Transfer
o Transfer Acceleration
▪ Speeds up content uploads to S3 using Amazon CloudFront’s globally distributed edge locations.
o Cross-Region Replication
▪ Automatically replicates data across different AWS regions for disaster recovery (if configured!).

Amazon EBS (Elastic Block Store)
o Overview: Provides scalable, high-performance block storage for use with Amazon EC2 instances.
o Use cases include:
▪ Databases
▪ File systems
▪ Applications requiring high throughput and low latency

Amazon EFS (Elastic File System)
o Overview: A scalable file storage service for use with AWS Cloud services and on-premises resources.
o Use cases include:
▪ Content management
▪ Web serving
▪ Data sharing

AWS Snow Family
o Snowcone
▪ Small, rugged, and secure edge computing and data transfer device.
o Snowball
▪ Petabyte-scale data transport solution.
o Snowmobile
▪ Exabyte-scale data transfer service using a secure truck.

Cost Management
o Pricing models:
▪ Pay-as-you-go pricing
▪ Reserved instances
▪ Savings plans
o Cost optimization strategies:
▪ Use of different storage classes
▪ Lifecycle policies
▪ Use of monitoring tools to optimize costs

Class 5 – Compute

Introduction to EC2
o Definition: Amazon EC2 provides scalable computing capacity in the AWS cloud.
o Purpose: Allows users to run virtual servers, known as instances, on demand.
o Benefits: Flexibility, scalability, and cost-effectiveness.

Features of EC2
o Variety of Instance Types: General-purpose, compute-optimized, memory-optimized, storage-optimized, and GPU instances.
o Flexible Pricing Models: On-Demand, Reserved, and Spot Instances.

EC2 Pricing Models – On-Demand
o Definition: Pay for compute capacity by the hour or second with no long-term commitments.
o Benefits:
▪ Flexibility: Ideal for applications with unpredictable workloads or short-term requirements.
▪ No Upfront Costs: Pay only for the compute capacity you use.
▪ Ease of Use: Simple to start and stop instances as needed.
o Use Cases:
▪ Development and Testing: Suitable for development, testing, and staging environments.
▪ Short-Term Projects: Ideal for short-term, spiky, or unpredictable workloads.
▪ Startups and Small Businesses: Great for businesses that need flexibility without long-term commitments.

EC2 Pricing Models – Reserved
o Definition: Commit to using EC2 instances for a one- or three-year term in exchange for a significant discount compared to On-Demand pricing.
o Benefits:
▪ Cost Savings: Up to 75% discount compared to On-Demand pricing.
▪ Predictability: Provides cost predictability for long-term workloads.
▪ Flexibility: Convertible Reserved Instances allow you to change instance types, operating systems, and tenancies.
o Types:
▪ Standard Reserved Instances: Offer the highest discount but are less flexible.
▪ Convertible Reserved Instances: Offer a lower discount but allow changes to instance attributes.
▪ Scheduled Reserved Instances: Reserve capacity for specific time periods.
o Use Cases:
▪ Steady-State Workloads: Ideal for applications with predictable usage patterns.
▪ Enterprise Applications: Suitable for long-term, mission-critical applications.
▪ Big Data and Analytics: Great for big data processing and analytics workloads that run continuously.

EC2 Pricing Models – Spot
o Definition: Bid for unused EC2 capacity at significantly reduced rates compared to On-Demand pricing.
o Benefits:
▪ Cost Savings: Up to 90% discount compared to On-Demand pricing.
▪ Scalability: Access to additional compute capacity at a lower cost.
▪ Flexibility: Ideal for flexible, fault-tolerant, and stateless applications.
o Considerations:
▪ Interruption: Instances can be terminated by AWS with a two-minute warning if the capacity is needed for On-Demand instances.
▪ Bidding: Set a maximum price you’re willing to pay for Spot Instances.
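The quoted discounts (up to 75% for Reserved, up to 90% for Spot) translate directly into effective rates. A small arithmetic sketch with an assumed on-demand price:

```python
# Effective hourly and monthly cost under each pricing model, using the
# maximum discounts quoted above and an assumed on-demand baseline.
on_demand = 0.20  # assumed on-demand $/hour for some instance type

def discounted(rate: float, discount_pct: float) -> float:
    return rate * (1 - discount_pct / 100)

reserved = discounted(on_demand, 75)  # up to 75% off
spot = discounted(on_demand, 90)      # up to 90% off

# 720 hours in a 30-day month of continuous use:
print(round(on_demand * 720, 2),  # 144.0
      round(reserved * 720, 2),   # 36.0
      round(spot * 720, 2))       # 14.4
```

The catch, per the considerations above, is that the cheapest rate comes with the weakest availability guarantee: Spot capacity can be reclaimed with a two-minute warning.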
o Use Cases:
▪ Batch Processing: Suitable for batch jobs, data analysis, and processing tasks.
▪ Big Data and Analytics: Ideal for big data workloads that can handle interruptions.
▪ CI/CD Pipelines: Great for continuous integration and continuous deployment pipelines.
▪ Testing and Development: Useful for testing and development environments that can tolerate interruptions.

EC2 Instance Types
o General Purpose: Balanced compute, memory, and networking resources.
o Compute-Optimized: Ideal for compute-bound applications. Compute-optimized instances, such as the C5 and C6i instance families, are designed to deliver high performance for compute-intensive applications. They provide a high ratio of compute power to memory, making them ideal for tasks that require significant processing power.
▪ High-Performance Web Servers: Applications that handle a large number of requests per second, real-time data processing, and high-traffic websites.
▪ Batch Processing Workloads: Large-scale data processing tasks, image and video encoding, and scientific simulations.
▪ Machine Learning Inference: Running trained machine learning models, real-time recommendation engines, and natural language processing tasks.
▪ Gaming Servers: Hosting multiplayer online games, real-time game physics calculations, and high-frequency trading platforms.
▪ Media Transcoding: Converting video and audio files, real-time streaming, and video rendering.
▪ Scientific Computing: Computational fluid dynamics, genomic sequencing, and weather forecasting.
o Memory-Optimized: Designed for memory-intensive applications. Memory-optimized instances, such as the R5 and X1 instance families, are designed to deliver high performance for memory-intensive applications. They provide a high ratio of memory to compute power, making them ideal for tasks that require significant memory resources.
▪ In-Memory Databases: Applications like Redis, Memcached, and SAP HANA; real-time data processing and analytics; high-throughput transaction processing.
▪ Big Data Analytics: Running large-scale data analytics frameworks like Apache Spark and Hadoop; real-time data warehousing and ETL (Extract, Transform, Load) processes; data mining and machine learning model training.
▪ High-Performance Computing (HPC): Genomic research and bioinformatics; computational fluid dynamics (CFD) and finite element analysis (FEA); weather modeling and climate simulations.
▪ Enterprise Applications: Large-scale ERP (Enterprise Resource Planning) systems; CRM (Customer Relationship Management) applications; business intelligence and reporting tools.
▪ Caching Solutions: Large-scale caching for web applications; content delivery networks (CDNs) and edge caching; session storage for high-traffic websites.
▪ Real-Time Big Data Processing: Stream processing applications using Apache Flink or Apache Storm; real-time analytics and monitoring systems; IoT (Internet of Things) data processing.
o Use Cases of EC2:
▪ Web Hosting: Hosting websites and web applications.
▪ Batch Processing: Running batch jobs and processing large datasets.
▪ Big Data Analytics: Analyzing large volumes of data.

EC2 Security
o Security Groups: Act as virtual firewalls to control inbound and outbound traffic.
o Key Pairs: Securely connect to instances using SSH.
o IAM Roles: Manage permissions for EC2 instances.

EC2 Storage Options
o EBS: Elastic Block Store for persistent block storage.
o Instance Store: Temporary storage that is physically attached to the host.
o S3: Simple Storage Service for object storage.

EC2 Elastic Block Storage
o Amazon Elastic Block Store (EBS) is a network-attached storage service that provides persistent block-level storage volumes for use with Amazon EC2 instances.
Key points:
▪ Persistence: Unlike the instance store, EBS volumes persist independently of the life of an EC2 instance. This means that data stored on an EBS volume remains available even if the instance is stopped or terminated.
▪ Scalability: EBS volumes can be easily resized to meet changing storage needs without downtime.
▪ Backup and Recovery: EBS supports snapshots, which are point-in-time backups of volumes. These snapshots can be used to restore volumes or migrate data across different regions.
▪ Performance: EBS offers multiple volume types optimized for different workloads, including SSD-backed volumes for transactional workloads and HDD-backed volumes for throughput-intensive tasks.
▪ Data Protection: EBS volumes can be encrypted to protect data at rest and in transit.
o Limitations:
▪ Availability Zone Dependency: EBS volumes are tied to a specific Availability Zone and can only be attached to instances within the same zone. This means that if an Availability Zone goes down, access to the EBS volume will be lost.
▪ Volume Limits: The number of EBS volumes you can attach to an instance depends on the instance type and size. For example, most Nitro-based instances (EC2 instances that leverage the AWS Nitro System, a combination of dedicated hardware and lightweight hypervisor technology) support a maximum of 28 attachments, which includes EBS volumes, network interfaces, and NVMe (Non-Volatile Memory Express, a high-performance, scalable host controller interface designed for modern non-volatile storage devices like SSDs) instance store volumes.
▪ Performance Variability: The performance of EBS volumes can vary based on the volume type and the instance type. For consistent performance, it’s recommended to use EBS-optimized instances.
▪ Snapshot Costs: While EBS snapshots provide a way to back up data, they can be costly, especially for large volumes or frequent snapshots.
▪ Not Suitable for Shared Storage: EBS volumes cannot be shared between multiple instances. For shared storage, you would need to use services like Amazon EFS.

EC2 Instance Store
o An instance store in an AWS EC2 instance provides temporary block-level storage. This storage is physically attached to the host computer and is ideal for temporary data that changes frequently, such as buffers, caches, and scratch data.
Key points:
▪ Temporary Storage: Data in an instance store persists only during the lifetime of the instance. If the instance is stopped, terminated, or fails, the data is lost.
▪ Performance: Instance stores offer high I/O performance, making them suitable for applications that require temporary storage with fast access.
▪ Cost: There is no additional charge for using instance store volumes; they are included in the cost of the instance.
▪ Use Cases: Common use cases include temporary storage for web servers, caches, and other ephemeral data.
o Limitations:
▪ Data Persistence: Data stored in an instance store is temporary and will be lost if the instance is stopped, terminated, or fails. This makes it unsuitable for storing critical or persistent data.
▪ Resizing: Instance store volumes cannot be resized. You are limited to the storage capacity provided by the instance type.
▪ Performance Variability: The performance of instance stores can vary based on the instance type and workload.
▪ No Data Recovery: There is no way to recover data from an instance store if it is lost due to instance termination or failure.
▪ Limited Use Cases: Due to its temporary nature, the instance store is best suited for temporary data like caches, buffers, and scratch data.

EC2 Management Tools
o AWS Management Console: Web-based interface for managing AWS services.
o CLI: Command Line Interface for scripting and automation.
o SDKs: Software Development Kits for various programming languages.
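The Nitro attachment limit mentioned under the EBS limitations is a shared budget across device types, not an EBS-only quota. A quick sanity-check sketch (the limit of 28 comes from these notes; the attachment counts are hypothetical):

```python
# On most Nitro-based instances, EBS volumes, network interfaces (ENIs),
# and NVMe instance store volumes draw from one attachment budget.
ATTACHMENT_LIMIT = 28  # per the notes above; varies by instance type

def max_ebs_volumes(enis: int, nvme_store_volumes: int) -> int:
    """How many EBS volumes can still be attached, given other devices."""
    remaining = ATTACHMENT_LIMIT - enis - nvme_store_volumes
    return max(remaining, 0)

# Hypothetical instance with 2 ENIs and 1 NVMe instance store volume:
print(max_ebs_volumes(enis=2, nvme_store_volumes=1))  # 25
```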
Class 6 – Containers

Intro to DevOps
o Definition: DevOps is a set of practices, tools, and a cultural philosophy that automates and integrates the processes between software development and IT teams. It emphasizes team empowerment, cross-team communication, and collaboration. This approach aims to shorten the systems development life cycle and provide continuous delivery with high software quality.

What is DevOps?
o DevOps started as, and continues to be, a convergence of ideas to eliminate the division between development and operations departments, to streamline software development, and to facilitate IT operations and maintenance.
o Like Agile, which it liberally borrows from, DevOps breaks large projects into smaller deliverables and multiple deployments, which are easier to manage from design to deployment and operations.
o DevOps is different from Agile in that Agile ignores operations (its sole focus is development); DevOps explicitly includes operations. Note that Waterfall included operations.
o The long-term vision for DevOps is to achieve a level of efficiency and automation that can enable a continuous delivery pipeline.

How Companies Use DevOps
o Continuous Integration/Continuous Deployment (CI/CD):
▪ Continuous Integration (CI): Developers frequently merge code changes into a central repository where automated builds and tests are run. This helps catch bugs early and improves software quality.
▪ Continuous Deployment (CD): Automates the release of validated code to production environments, ensuring that new features and fixes are deployed quickly and reliably.
o Infrastructure as Code (IaC):
▪ Definition: Manages and provisions computing infrastructure through machine-readable definition files, rather than physical hardware configuration.
▪ Benefits: Ensures consistency, reduces manual errors, and allows for version control of infrastructure configurations.
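The IaC idea, infrastructure described as a machine-readable definition and applied idempotently, can be sketched in a few lines. The resource names and fields here are invented for illustration, not any real IaC tool's syntax:

```python
# Toy illustration of IaC: a declarative definition of desired
# infrastructure, and an idempotent "apply" that converges to it.
desired = {
    "web-server": {"type": "t3.micro", "count": 2},   # hypothetical
    "database": {"type": "db.t3.small", "count": 1},  # hypothetical
}

def apply(definition, current=None):
    """Make the current infrastructure match the definition."""
    current = dict(current or {})
    for name, spec in definition.items():
        current[name] = dict(spec)   # create or update to match
    for name in list(current):
        if name not in definition:   # remove anything not declared
            del current[name]
    return current

state = apply(desired)                   # first run provisions everything
assert apply(desired, state) == state    # re-applying changes nothing
```

Idempotence is the property that makes version-controlled definitions safe: applying the same file twice yields the same infrastructure, so the definition, not a sequence of manual steps, is the source of truth.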
DevOps & Containers
o Definition: Containers are lightweight, portable, and self-sufficient environments that include all necessary components to run an application, such as executables, libraries, and configuration files.
o Benefits:
▪ Consistency: Containers ensure that the application runs the same way in different environments, reducing the “it works on my machine” problem.
▪ Efficiency: Containers use fewer resources compared to traditional virtual machines, allowing for higher density and better utilization of computing resources.
▪ Scalability: Containers can be easily scaled up or down to handle varying loads, making them ideal for microservices architectures.

Intro to Docker & Kubernetes
o Overview: Docker and Kubernetes are essential tools in modern software development. Docker is a platform for containerizing applications, while Kubernetes is an orchestration system for managing containerized applications.
o Importance: They enable developers to build, deploy, and manage applications more efficiently, ensuring consistency across different environments and improving scalability and reliability.

Docker Architecture
o Components:
▪ Docker Engine: The core component that creates and runs Docker containers.
▪ Docker Hub: A cloud-based registry service for storing and sharing Docker images.
▪ Docker Compose: A tool for defining and running multi-container Docker applications.

Kubernetes Architecture
o Control Plane Components:
▪ API server: The application programming interface (API) server in Kubernetes exposes the Kubernetes API (the interface used to manage, create, and configure Kubernetes clusters) and serves as the entry point for all commands and queries.
▪ etcd: An open-source, distributed key-value store used to hold and manage the critical information that distributed systems need to keep running. In Kubernetes, etcd manages the configuration data, state data, and metadata.
▪ Scheduler: This component tracks newly created pods and selects nodes for them to run on. The scheduler considers resource availability and allocation constraints, hardware and software requirements, and more.
▪ Controller-manager: A set of built-in controllers, the Kubernetes controller-manager runs a control loop that monitors the shared state of the cluster and communicates with the API server to manage resources, pods, or service endpoints. The controller-manager consists of separate processes that are bundled together to reduce complexity and run in one process.
▪ Cloud-controller-manager: This component is similar in function to the controller-manager. It links to a cloud provider’s API and separates the components that interact with that cloud platform from those that only interact within the cluster.

Docker vs. Kubernetes
o Differences:
▪ Docker is primarily for containerization, while Kubernetes is for orchestration.
▪ Docker manages individual containers, whereas Kubernetes manages clusters of containers.
o Similarities: Both are essential for modern DevOps practices and work well together to streamline application deployment and management.

Amazon ECS (Elastic Container Service)
o Definition: A fully managed container orchestration service that makes it easy to deploy, manage, and scale containerized applications.
o Key Features:
▪ Integration with AWS Services: Deep integration with AWS services like IAM, CloudWatch, and VPC.
▪ Scalability: Automatically scales your applications up and down based on demand.
▪ Flexibility: Supports both Docker and AWS Fargate.
▪ Security: Provides robust security features, including IAM roles and security groups.

Amazon ECS Architecture
o Components:
▪ Clusters: Logical grouping of tasks or services.
▪ Tasks: Single running instances of a container.
▪ Services: Allow you to run and maintain a specified number of tasks simultaneously.
▪ Task Definitions: Blueprint for your application, specifying which Docker containers to use.

AWS Fargate
o Definition: A serverless compute engine for containers that works with both Amazon ECS and Amazon EKS.
o Key Features:
▪ Serverless: No need to manage servers or clusters.
▪ Seamless Scaling: Automatically scales your applications based on demand.
▪ Integration with ECS and EKS: Works seamlessly with both ECS and EKS.
▪ Security: Provides isolation at the task level.

AWS Fargate Architecture
o Components:
▪ Tasks: Single running instances of a container.
▪ Task Definitions: Blueprint for your application, specifying which Docker containers to use.
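A task definition is just structured data describing the blueprint above: which container image to run and what resources it needs. A minimal sketch; the field names follow ECS conventions, but the values are invented and this is not a complete or validated schema:

```python
# Illustrative shape of an ECS/Fargate task definition. The family
# name, image, and sizes are hypothetical examples.
task_definition = {
    "family": "example-web",                 # hypothetical task family
    "requiresCompatibilities": ["FARGATE"],  # target launch type
    "cpu": "256",                            # CPU units for the task
    "memory": "512",                         # memory (MiB) for the task
    "containerDefinitions": [
        {
            "name": "web",
            "image": "nginx:latest",         # container image to run
            "portMappings": [{"containerPort": 80}],
        }
    ],
}
```

A service then references a task definition like this one and keeps the specified number of tasks running; with Fargate, the capacity those tasks run on is provisioned for you.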
