Unit 04 - Introduction to Containers Orchestration.pdf


Full Transcript


SE 4455 Cloud Computing
Unit 4: Introduction to Container Orchestration
Fadi AlMahamid, Ph.D.

Contents
01 Overview of Container Orchestration
02 Introduction to Kubernetes (K8s)
03 Hands-on with Kubernetes

01 Overview of Container Orchestration

Introduction to Container Orchestration
➔ Definition: Container orchestration automates the deployment, management, scaling, and networking of containers.
➔ Importance in Cloud Computing: Essential for managing the life cycles of containers, especially in large, dynamic environments.
➔ Key Goals:
Efficient Resource Utilization: Maximizes resources while ensuring application performance.
High Availability: Ensures applications are always operational, managing failovers and backups smoothly.
Scalability: Easily adjusts to increased loads.
Load Balancing: Distributes network traffic efficiently for better performance.

Benefits of Container Orchestration
➔ Simplified Management: Automates complex tasks, reducing manual work.
➔ Automated Deployment and Management: Streamlines workflows, improving efficiency.
➔ Enhanced Security: Manages security policies and access controls.
➔ Examples of Tools: Docker Swarm for simpler use cases; Apache Mesos for datacenter-like environments.

02 Introduction to Kubernetes (K8s)

Overview of Kubernetes (K8s)
➔ Definition: Kubernetes is an open-source platform for automating deployment, scaling, and operations of application containers across clusters. It groups containers that make up an application into logical units for easy management and discovery.
➔ Why Kubernetes?
Market Share: Widely adopted in the industry.
Community Support: Robust community and ecosystem.
Flexibility: Adaptable to different environments.

Kubernetes Main Features
➔ Container Management: Automates deployment, scaling, and operations of application containers.
➔ Service Discovery and Load Balancing: Assigns DNS names or IP addresses to containers and balances loads.
➔ Storage Orchestration: Automatically mounts and manages storage systems of various types.
➔ Automated Rollouts and Rollbacks: Manages the deployment process, ensuring that only healthy containers are deployed.
➔ Automatic Bin Packing: Optimizes resource allocation to containers based on their requirements.
➔ Self-Healing: Automatically replaces or restarts failing containers to ensure application reliability.
➔ Secret and Configuration Management: Safely stores and manages sensitive information, integrating it seamlessly with containerized applications.

Common Use Cases of Kubernetes
➔ Microservices Architecture: Managing and scaling microservices efficiently.
➔ Cloud-native Applications: Building and deploying scalable applications in public, private, or hybrid clouds.
➔ CI/CD Pipelines: Integrating with tools like Jenkins for continuous development and deployment.

Kubernetes Architecture Overview
➔ Kubernetes Architecture: Kubernetes architecture is designed for distributed systems that are scalable and resilient.
➔ Key Elements of Kubernetes Architecture:
1. Cluster
2. Nodes
3. Pods
4. Services
5. Labels and Selectors
[Diagram: a cluster with one Master Node and several Worker Nodes; each Worker Node runs a Kubelet and hosts Pods containing one or more Containers, and the cluster is accessed through kubectl.]

Kubernetes Core Components
➔ Clusters: A Cluster is a collection of Nodes that run containerized applications. The Kubernetes cluster is the environment where all Kubernetes components, resources, and workloads operate.
➔ Nodes: Nodes are the worker machines (physical or virtual) that host the running applications. Each Node in a Kubernetes cluster has the necessary components to run Pods, the smallest deployable units in Kubernetes.
➔ Pods: Pods are the basic units of deployment in Kubernetes. Each Pod represents a single instance of a running process in your cluster and can contain one or more containers. Containers in the same Pod share the same network namespace and storage, allowing them to communicate and share data more easily.
➔ Services: Services are abstractions that define a logical set of Pods and a consistent method to access them. They are used to enable communication between different Pods, or between external sources and the Pods.
➔ Labels and Selectors: Labels are key-value pairs attached to Kubernetes objects, like Pods and Services, for identification and organization. Selectors are used to select a group of objects based on the labels they carry, enabling efficient resource management and organization within the cluster.
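To make the relationship between these components concrete, here is a minimal sketch that is not part of the original slides: a Pod carrying a label and a Service that selects it by that label. The names (nginx-pod, web-service, app: web) and the image are illustrative assumptions.

apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod            # illustrative name
  labels:
    app: web                 # label used by the Service selector below
spec:
  containers:
    - name: nginx
      image: nginx:1.25
      ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: web-service          # illustrative name
spec:
  selector:
    app: web                 # selects every Pod labelled app: web
  ports:
    - protocol: TCP
      port: 80               # port exposed by the Service inside the cluster
      targetPort: 80         # container port the traffic is forwarded to

Because the Service matches Pods by label rather than by name or IP, Pods can be replaced or rescheduled without changing how the application is reached.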
Additional Key Concepts
➔ Kubectl: The command-line tool that allows users to communicate with the cluster.
➔ Kubelet: An agent that runs on each node in the cluster, ensuring containers are running in a Pod.
➔ Volumes: Provide a way to store data that can be accessed by containers in a Pod.
➔ Namespaces: Enable multiple teams or projects to use the same cluster without conflict.

Kubernetes Clusters

Clusters
➔ A Kubernetes cluster forms the backbone of Kubernetes' container orchestration capabilities. It consists of a set of interconnected nodes that work together to run containerized applications.
➔ The cluster provides a unified environment that abstracts away the underlying infrastructure, allowing you to deploy and manage applications seamlessly across a fleet of machines.
➔ Each cluster contains:
1 Master Node
1 or more Worker Nodes
➔ Each Kubernetes cluster has a unique internal network and IP address range, known as ClusterIP.
➔ ClusterIP is mainly used for internal communication within the cluster. For example, it allows communication between Pods and Services within the cluster without exposing them to the external network.

Kubernetes Nodes

Understanding Kubernetes Nodes
➔ Definition: Nodes are worker machines in Kubernetes, where containers are deployed.
➔ Types of Nodes:
Master Node: Responsible for global decisions about the cluster (like scheduling), as well as detecting and responding to cluster events (like starting up a new pod).
Worker Nodes: Execute the work, running the containers in Pods.
➔ Node Management:
Joining a node to a cluster.
Node health checks.
Scaling nodes in the cluster.

Nodes Components
➔ Master Node (Control Plane):
API Server: The front end for the Kubernetes control plane.
etcd: Reliable distributed data store that saves the cluster state and configuration.
Scheduler: Watches for newly created pods and selects a node for them to run on.
Controller Managers: Run the controller processes, handling routine tasks in the cluster.
➔ Worker Node (Data Plane):
Kubelet: Communicates with the master node.
Container Runtime: Software that runs containers (e.g., Docker).
Kube-proxy: Network proxy on each node.

Kubernetes Pods

Understanding Pods
➔ Definition: The smallest deployable units in Kubernetes.
➔ Characteristics:
Hosts one or more containers.
Shares network and storage.
Lifecycle (creation, deletion).
➔ Types:
Single-container pods.
Multi-container pods.

Types of Pods
➔ Single-container Pods:
The most common Kubernetes use case; one container per Pod.
The container is tightly coupled to the Kubernetes infrastructure and its lifecycle.
Ideal for simple applications or for a single responsibility in a larger application.
➔ Multi-container Pods:
Contain multiple containers that need to work together.
Containers in a multi-container Pod are tightly coupled and share resources.
➔ Common use cases include (see the sketch below):
Sidecar containers that enhance or extend the functionality of the main container.
Helper containers that assist or are dependent on the main application container.
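As an illustration of the sidecar pattern, the sketch below (not from the slides; the names web-with-sidecar, web, log-reader and the images are illustrative assumptions) shows a multi-container Pod in which both containers mount the same emptyDir volume:

apiVersion: v1
kind: Pod
metadata:
  name: web-with-sidecar       # illustrative name
spec:
  volumes:
    - name: shared-logs        # scratch volume shared by both containers
      emptyDir: {}
  containers:
    - name: web                # main application container
      image: nginx:1.25
      volumeMounts:
        - name: shared-logs
          mountPath: /var/log/nginx
    - name: log-reader         # sidecar that periodically inspects the shared volume
      image: busybox:1.36
      command: ["sh", "-c", "while true; do ls -l /logs; sleep 30; done"]
      volumeMounts:
        - name: shared-logs
          mountPath: /logs

Both containers also share the Pod's network namespace, so they could reach each other on localhost.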
Communication and Storage in Pods
➔ Networking: Each Pod is assigned a unique IP address within the cluster, allowing it to communicate with other Pods and services.
➔ Storage: Pods can specify shared storage volumes that can be accessed by all containers in the Pod, allowing them to share data.

Pod Lifecycle and Management
➔ Creation: Pods are typically defined via YAML or JSON manifest files.
➔ Health Checks: Kubernetes can perform health checks on Pods and restart containers that fail.
➔ Scaling and Updating: Pods are often managed by Deployments or ReplicaSets, which handle scaling and rolling updates.

Kubernetes Services

Kubernetes Services
➔ Kubernetes Services are a critical concept for making applications accessible in Kubernetes.
➔ A Service in Kubernetes is an abstraction that defines a logical set of Pods and a consistent way to access them.
➔ Purpose: Exposing applications running on Pods as network services.
➔ Types of Services:
ClusterIP (default)
NodePort
LoadBalancer

Types of Services in Kubernetes - ClusterIP
➔ Default type of service.
➔ Assigns a unique IP address inside the cluster.
➔ Used for internal communication between services, meaning it can only be accessed within the cluster.
➔ Ideal for use cases where you need to expose a service within the cluster (e.g., back-end components of an application).

Types of Services in Kubernetes - NodePort
➔ Exposes the service on each Node's IP at a static port (the NodePort).
➔ Allows external traffic to access the service via a known IP address and port number.
➔ Useful when you need to expose a service to outside traffic and are not using an external load balancer.
➔ Automatically includes a ClusterIP service, to which the NodePort service routes.

Types of Services in Kubernetes - LoadBalancer
➔ Creates an external load balancer in the current cloud (if supported) and assigns a fixed, external IP to the service.
➔ Ideal for production environments, where you need to expose a service to the internet and distribute traffic efficiently.
➔ Often used in cloud environments for externally accessible web applications.

Service Discovery in Kubernetes
➔ DNS: Kubernetes supports DNS-based service discovery, which assigns a DNS name to the service. Applications within the cluster can use this DNS name to discover and communicate with the service. This simplifies the discovery process, as the DNS name remains consistent even if the underlying Pod IPs change.
➔ Environment Variables: When a Pod runs on a Node, Kubernetes sets a series of environment variables for each active service. These variables follow the pattern {SERVICE_NAME}_SERVICE_HOST and {SERVICE_NAME}_SERVICE_PORT (for example, a service named nginx yields NGINX_SERVICE_HOST and NGINX_SERVICE_PORT) and allow Pods to easily locate and connect to services.

Labels and Selectors

Labels
➔ Definition: Labels are key-value pairs that are attached to objects, like Pods, Services, and Deployments, in Kubernetes.
➔ Purpose: Labels are used to organize and select subsets of objects. They provide a flexible method of marking objects for operational purposes like deployments, service routing, and performing batch actions (like deleting).
➔ Characteristics:
They are arbitrary and set by the user to impart meaningful and relevant information about the object (e.g., "release": "stable", "environment": "dev").
Labels are not unique; multiple objects can have the same label(s).
They can be dynamically created and modified at runtime.

Selectors
➔ Definition: Selectors are a way to select a group of objects based on the labels they carry.
➔ Purposes:
Efficient Resource Allocation: They enable precise identification and grouping of cluster resources like Pods, Nodes, and Services based on label criteria.
Targeted Operations: Essential for directing operations (deployments, updates, scaling) to the correct resources, ensuring accurate and intended modifications.
Optimizing Resource Utilization: They help in optimizing the use of resources within the cluster, maintaining efficiency and performance by allocating resources based on specific requirements.

Label Best Practices
➔ Clear and Consistent Labeling Scheme: Adopt a standard convention for labeling objects to maintain clarity.
➔ Avoid Over-Labeling: Use labels thoughtfully to ensure they remain manageable and useful.
➔ Use for Affinity and Anti-Affinity: Labels can be used to influence pod scheduling decisions, e.g., scheduling pods on different nodes for high availability (see the sketch below).
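The anti-affinity point can be illustrated with a short, hypothetical manifest that is not part of the slides: the label app: web, the Pod name, and the container are illustrative; the affinity stanza is what asks the scheduler not to place two such Pods on the same node.

apiVersion: v1
kind: Pod
metadata:
  name: web-replica            # illustrative name
  labels:
    app: web                   # label referenced by the anti-affinity rule below
    environment: dev
spec:
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchLabels:
              app: web         # avoid nodes that already run a Pod labelled app: web
          topologyKey: kubernetes.io/hostname   # "same node" is the unit of separation
  containers:
    - name: web
      image: nginx:1.25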
03 Hands-on with Kubernetes

Create New Cluster
Enabling Kubernetes Engine (GKE)
Create New Cluster – Autopilot
Create New Cluster – Standard Cluster
Edit Cluster Configurations
Cluster Nodes

Node Components
➔ Run the following commands:
> docker -v
> kubelet --version
➔ The nodes contain the components and software utilities required by the Kubernetes cluster.

Using the Kubernetes Command-Line Interface (kubectl)

Deploy Application using kubectl
1. Set the default Cloud region and zone:
> gcloud config set compute/region us-central1
> gcloud config set compute/zone us-central1-a
2. Create a Kubernetes (GKE) cluster:
> gcloud container clusters create --machine-type=e2-medium lab-cluster
3. Get the cluster authentication credentials:
> gcloud container clusters get-credentials lab-cluster

Deploy Application using kubectl
➔ In the Cloud Console, the new cluster appears under Kubernetes Engine ➔ Kubernetes clusters, and its nodes under Compute Engine ➔ VM instances.

Deploy Application using kubectl
4. Deploy the application:
> kubectl create deployment nginx --image=nginx:1.10.0
5. Create a Kubernetes Service:
> kubectl expose deployment nginx --type=LoadBalancer --port 8080

Deploy Application using kubectl
➔ Important Deployment Notes:
The Deployment automatically creates the Pods.
The Deployment controller decides where to place the Pods in the cluster (which nodes to use) based on the current load and resources of the nodes.
The Deployment is controlled by GKE.

Deploy Application using kubectl
➔ You can list the created objects using:
> kubectl get deployments
> kubectl get pods
> kubectl get services

Controlled Deployment

Deploy Application using a Deployment File
➔ Create a specification file (.yaml).
➔ Deploy the file:
> kubectl create -f pod_file.yaml
> kubectl create -f deployment_file.yaml
➔ Use kubectl apply for resources that will need to be updated or modified over time, as it is more flexible and better suited for handling changes.

YAML File (Pod Specification)
➔ apiVersion: v1
➔ kind: Pod: This indicates that the file is for creating a Pod.
➔ metadata: Defines the name and labels of the Pod.
➔ spec: Details the specifications of the containers within the Pod.
containers: A list of containers to be included in the Pod.
image: The Docker image to use.
args: Arguments passed to the container at startup.
ports: The container ports to be exposed.
resources: Defines the CPU and memory resource limits.

YAML File (Pod Specification) – Example
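The example slide itself is a screenshot that is not captured in this transcript. The file below is a hedged reconstruction from the fields listed above; the name monolith, the busybox image, the args, and the resource limits are illustrative assumptions, not values taken from the slides.

apiVersion: v1
kind: Pod
metadata:
  name: monolith               # illustrative name, echoing the Service example later on
  labels:
    app: monolith
spec:
  containers:
    - name: monolith
      image: busybox:1.36      # illustrative image
      args: ["sleep", "3600"]  # arguments passed to the container at startup
      ports:
        - containerPort: 80    # container port to be exposed
      resources:
        limits:
          cpu: "0.2"           # CPU limit
          memory: "64Mi"       # memory limit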
YAML File (Deployment Specification)
➔ apiVersion: apps/v1
➔ kind: Deployment: This indicates that the file is for creating a Deployment.
➔ metadata: Metadata about the Deployment, like its name.
➔ spec: Specifications for the Deployment.
replicas: The number of Pod replicas to maintain.
selector: Selects the Pods that belong to this Deployment.
template: The template for the Pods that will be created.
metadata: Metadata for the Pods.
spec: Specifications for the containers within the Pod.

YAML File (Deployment Specification) – Example
(A reconstructed sketch of this file follows the Service specification below.)

YAML File (Service Specification)
➔ apiVersion: v1
➔ kind: Service: Defines that the file is for creating a Kubernetes Service.
➔ name: "monolith": Sets the name of the Service.
➔ spec:
selector: app: "monolith", secure: "enabled": Selects Pods with these labels to be part of this Service.
ports:
protocol: "TCP": Specifies the network protocol (TCP).
port: 443: The external port that the Service will be accessible on.
targetPort: 443: The port on the Pod that the Service routes to.
nodePort: 31000: Exposes the Service on each Node's IP at this port.
type: NodePort: Defines the type of Service, which in this case is NodePort.

YAML File (Service Specification) – Example
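The two example slides are screenshots not captured in this transcript. Below is a hedged reconstruction of both files: the Service uses the field values listed above (name monolith, selector app: monolith and secure: enabled, TCP port 443, targetPort 443, nodePort 31000, type NodePort), while the Deployment's name, replica count, image, and labels are illustrative assumptions chosen so that the Service selector matches its Pods.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: monolith               # illustrative Deployment name
spec:
  replicas: 3                  # number of Pod replicas to maintain (illustrative)
  selector:
    matchLabels:
      app: monolith            # selects the Pods that belong to this Deployment
  template:
    metadata:
      labels:
        app: monolith          # labels applied to each Pod replica
        secure: enabled        # so the Service selector below also matches
    spec:
      containers:
        - name: monolith
          image: nginx:1.10.0  # illustrative image (matches the earlier kubectl example)
          ports:
            - containerPort: 443
---
apiVersion: v1
kind: Service
metadata:
  name: monolith               # Service name from the slide
spec:
  type: NodePort               # Service type from the slide
  selector:
    app: monolith              # label selector from the slide
    secure: enabled
  ports:
    - protocol: TCP
      port: 443                # external port the Service is accessible on
      targetPort: 443          # Pod port the Service routes to
      nodePort: 31000          # static port exposed on each Node's IP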

Scaling the Deployment
➔ To manually scale the deployment, you can either update the replicas field in your YAML file and apply the changes, or use the kubectl scale command:
> kubectl scale deployment nginx-deployment --replicas=5

Deleting the Deployment
➔ If you want to remove the deployment and its associated resources from your cluster, use:
> kubectl delete deployment nginx-deployment

More on Kubernetes after Unit 05. I hope you enjoy the content!