Kubernetes and Cloud-Native Applications

Summary

This document provides an overview of Kubernetes and cloud-native applications, encompassing topics such as containerization, the path to cloud-native technologies, and Kubernetes architecture. It's a helpful guide for understanding and implementing cloud-native solutions.

Full Transcript


Kubernetes and Cloud-Native Applications ("THE CLOUDUTION")
Trajche Krstev, MSc, Solution Architect, Packet Core and Telco Cloud ([email protected])

Agenda
- Container as a Service
- The cloud evolution
- The path to cloud native
- The Cloud Native Trail Map
- Containers background
- Kubernetes – Architecture
- Kubernetes overview

Container as a Service (CaaS)
Container as a Service (CaaS) is a cloud computing model that provides a platform for managing, deploying, and scaling containerized applications. CaaS enables developers and IT teams to run containers, manage orchestration, and leverage infrastructure with minimal manual configuration. It simplifies container lifecycle management, including provisioning, scheduling, scaling, and monitoring, using platforms such as Kubernetes or Docker Swarm.

Key characteristics:
- Container management: offers tools for deploying and managing containers.
- Orchestration support: built-in support for container orchestration platforms (e.g., Kubernetes).
- Flexibility: works across hybrid, public, or private cloud environments.
- API-driven automation: provides APIs to automate container workflows and operations.

Benefits:
- Developer efficiency: simplifies container deployment and scaling.
- Portability: containers ensure consistent application performance across environments.
- Pay-as-you-go (PAYG): pay only for the compute and storage resources used, optimizing costs.
- Rapid deployment: enables continuous integration and delivery (CI/CD) practices.

THE CLOUD EVOLUTION

The path to cloud native
- The Cloud Native Computing Foundation (CNCF) is the Linux Foundation community around the container and cloud-native application ecosystem.
- It hosts the key open-source projects required to realize the cloud-native application stack; this includes Kubernetes.
- The Cloud Native Landscape Project is intended to be a resource map to help enterprises and developers through the previously uncharted terrain of cloud-native technologies.
- While there are innumerable routes for deploying a cloud-native application, CNCF projects represent a particularly well-traveled, tested, and trusted path.
- The Cloud Native Trail Map: https://github.com/cncf/trailmap?tab=readme-ov-file
- The CNCF Landscape: https://landscape.cncf.io/

The Cloud Native Trail Map – An overview for enterprises starting their cloud-native journey.

(minimal) Containers background
- Containers provide a sandboxed execution environment for processes.
- Container images are stored in a registry.
- Lightweight: the cost of instantiation and the footprint are low. Only the CPU and memory usage of the process count; filesystem layers are shared between containers.
- Many containers can run on a single host, communicating between themselves and with the external world using a bridge or similar mechanism.

(minimal) Containers background – Docker images
- Container images are stored in a registry.
- They can be retrieved by name, and multiple versions can be stored.

Containers beyond a single server
- Docker works by default on a single server.
- Container orchestration platforms expand to a cluster of servers:
  - by incorporating a control plane
  - by supporting application lifecycle management and other basic core features
  - by providing distributed networking, storage, etc.
- Container orchestration platforms include Docker Swarm, Apache Mesos, and Kubernetes.

Kubernetes – Architecture
- A container orchestration platform that is the de facto standard CaaS, initially developed by Google and released as an open-source project in 2014.
- Control plane made of master nodes: typically replicated (3+) for high availability.
- Runtime plane made of worker nodes (aka "minions"): massively scalable (5000+ nodes).

Kubernetes – Architecture (The Control Plane)
- The API server exposes the Kubernetes API to clients through REST-based APIs.
The Kubernetes API server acts as the front end for the control plane, providing APIs for users and other components to interact with the cluster. It handles all requests to the Kubernetes cluster and serves as the gateway for all control plane components.
- The etcd database is a distributed, highly reliable key-value data store used by Kubernetes to store cluster configuration and runtime state: all cluster data, including configuration settings, state information, and metadata. It ensures data consistency and availability across the control plane.
- The scheduler assigns nodes for workloads to run on, according to application and cluster policies. It is responsible for placing Pods on appropriate nodes based on resource availability, workload requirements, and policies, ensuring that applications are deployed efficiently and optimally across the cluster.
- Cluster management: the control plane maintains the desired state of the cluster. It manages the lifecycle of applications and their components, including deployment, scaling, and updates.
- The controller manager orchestrates all Kubernetes resources, including nodes, workloads, and configurations. It runs various controllers that monitor the state of the cluster and make adjustments to maintain the desired state; for example, it can manage replication, handle node health, and ensure the availability of services.
- The cloud controller manager interacts with the underlying cloud, when applicable: it integrates with OpenStack, AWS, Azure, GCE, etc.

Kubernetes – Architecture (The Control Plane): the Application Programming Interface (API)
- API server: offers a REST interface. Objects are modeled in YAML or JSON formats and consumed by command-line tools, libraries, or HTTP clients (curl).
- kubectl: command-line tool. Automatically discovers the API endpoints and credentials using the kubeconfig file.
- Namespaces: light multitenancy support; resource quota enforcement.
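The namespace and quota mechanism mentioned above can be sketched as two manifests. This is an illustrative example, not from the slides: the namespace name, quota name, and all limit figures are hypothetical.

```yaml
# Sketch: a namespace with a resource quota attached (illustrative values).
apiVersion: v1
kind: Namespace
metadata:
  name: team-a
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-a-quota
  namespace: team-a        # the quota is enforced within this namespace
spec:
  hard:
    requests.cpu: "4"      # sum of CPU requests across all Pods
    requests.memory: 8Gi   # sum of memory requests
    limits.cpu: "8"
    limits.memory: 16Gi
    pods: "20"             # maximum number of Pods in the namespace
```

Posting both objects to the API server (e.g. with kubectl apply) gives the namespace light multitenancy plus enforced resource limits.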
Kubernetes – Architecture (The User Plane)
- Kubelet: the agent that runs on every worker. It interacts with the container runtime on behalf of the control plane to manage containers (historically with the Docker runtime, using Docker proprietary interfaces).
- kube-proxy: configures and manages the networking for applications and services.
- kube-dns: provides name resolution (including services) for applications (replaced by CoreDNS).
- Container Runtime Interface (CRI): consists of APIs, specs, and libraries that allow container runtimes to be managed by kubelet.
- Container Network Interface (CNI): defines a common interface between the container runtime and the networking.
- Container runtime: provides the container execution capabilities. See https://containerd.io/

KUBERNETES
1. Kubernetes cluster
2. Pods
3. Deployments
4. Services

KUBERNETES – 1. KUBERNETES CLUSTER
Source: https://kubernetes.io/docs/concepts/overview/components/

KUBERNETES – 2. PODS
Pods are the smallest unit of deployment in Kubernetes. A Pod can contain:
- a single container
- multiple containers
All containers in a Pod are scheduled together on a worker.
Pods are ephemeral:
- they run until process termination
- they are never recreated/rescheduled

Declaring a Pod in a YAML file:
- the kind attribute identifies the object as a Pod
- the name is used to identify the Pod
This example declares a Pod with:
- a main container, named server, using a container image named "example-pod", with /bin/server as entrypoint
- a sidecar container, named sidecar, using a container image named "example-sidecar", with /bin/sidecar as entrypoint
The Pod is created by posting the YAML file to the API server using:
- kubectl apply -f example-pod.yaml

KUBERNETES – 2. PODS – Pod networking: ports
A Pod can expose different ports:
- As the services exposed are implemented by each container, ports must be declared at container level.
- The containerPort attribute defines the port number.
- The protocol attribute can be set to TCP, UDP, or SCTP.
When omitted, it defaults to TCP.

KUBERNETES – 2. PODS – Pod storage: volumes
Storage space that survives the lifecycle of individual containers in a Pod, but not the Pod itself:
- when containers are restarted, they can reattach to the volume and use the existing data
- but when the Pod is terminated, the storage is gone forever
Volumes are defined at Pod level; containers declare volume mounts.

KUBERNETES – 2. PODS – Pod storage: Persistent Volumes
Storage space that survives the lifecycle of the Pod itself.
- Persistent Volumes (PV) represent storage space available in the cluster. They are provisioned either manually by the cluster operator or automatically on demand; storage classes are used to select a storage backend.
- Persistent Volume Claims (PVC) represent storage requests by applications. Kubernetes tries to find available PVs that satisfy the claim and binds PV and PVC together.
- Pods use volumes by referencing the PVC.

KUBERNETES – 2. PODS – Pod resource usage
It is possible to control the CPU and memory utilization of individual containers in the Pod by setting:
- a minimum (requests)
- a maximum (limits)
Requests are used to reserve resources for the container: the Pod will be scheduled on workers that have the resources available.
Limits are used to control the resources a Pod can consume:
- CPU: if exceeded, execution is capped
- Memory: if exceeded for some time, the container is terminated
Requests and limits play a central role in autoscaling.

KUBERNETES – 2. PODS – Pod configuration
ConfigMaps are used to store Pod configuration:
- key-value pairs, similar to a .properties or .ini file
- can be created from existing directories, files, and YAML definitions
ConfigMaps are exposed to Pods as:
- environment variables
- volumes: when the ConfigMap is updated, the volume contents are also updated

KUBERNETES – 3. DEPLOYMENTS
A Deployment manages a set of Pods to run an application workload.
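As a recap of the Pod sections above, here is a consolidated example-pod.yaml sketch. The image names and entrypoints (server/sidecar, example-pod/example-sidecar) come from the slides; the port, resource figures, and ConfigMap name are illustrative assumptions.

```yaml
# example-pod.yaml: the server + sidecar Pod described in the slides.
# Port, resource values, and the ConfigMap name are illustrative.
apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec:
  containers:
  - name: server
    image: example-pod
    command: ["/bin/server"]    # entrypoint of the main container
    ports:
    - containerPort: 8080       # protocol defaults to TCP when omitted
    resources:
      requests:                 # reserved; used for scheduling decisions
        cpu: 250m
        memory: 128Mi
      limits:                   # CPU is throttled; memory overuse terminates
        cpu: 500m
        memory: 256Mi
    envFrom:
    - configMapRef:
        name: example-config    # hypothetical ConfigMap exposed as env vars
  - name: sidecar
    image: example-sidecar
    command: ["/bin/sidecar"]   # entrypoint of the sidecar container
```

The Pod is then created by posting the file to the API server: kubectl apply -f example-pod.yaml.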
Pods will not be rescheduled in the event of a failure, so Deployments make sure that a number of stateless replicas run at all times. This is achieved by:
- defining a number of replicas for each Pod
- ensuring that number of replicas is kept at all times
The replicas attribute indicates the number of Pod instances to be created by the Deployment. Kubernetes always tries to maintain the number of active replicas, e.g. it creates a new Pod when an existing Pod is destroyed.

KUBERNETES – 3. DEPLOYMENTS – Other workload controller use cases

KUBERNETES – 4. SERVICES
In Kubernetes, a Service is a method for exposing a network application that is running as one or more Pods in your cluster, via the Service API.
- Services are given a name that is used for service discovery.
- Endpoints for the Service are those Pods with a label matching the selector; this eliminates the need to redefine the Service as Pods are created/destroyed.
- Ports in the Pod (targetPort) are exposed by the Service, optionally under a different port number (port); several ports can be exposed in a Service.
- Services get a virtual IP automatically, although it is possible to:
  - specify a value by setting the clusterIP field to the desired IP address
  - create a "headless" Service by setting the clusterIP field to "None"
  - let Kubernetes pick a virtual IP by omitting the clusterIP field
Kubernetes uses an embedded DNS server that every Pod is configured to use for service discovery.

Kubernetes – Networking basics
- Every Pod has a single network interface and a single IP address.
- There is a single internal network, named the cluster network, shared between all the Pods and Services (of all namespaces!).
- NetworkPolicies can be used to restrict communications to a certain set of Pods.
- There is a single external network, named the ingress network, used to connect the Pods to the Internet and to receive external traffic towards the cluster.
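The Deployment and Service concepts above can be sketched together: a Deployment keeping three replicas running, and a Service that selects those Pods by label and remaps the port. All names, labels, and port numbers here are hypothetical.

```yaml
# Illustrative Deployment plus Service (names and ports are assumptions).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-deployment
spec:
  replicas: 3                 # Kubernetes keeps 3 Pods running at all times
  selector:
    matchLabels:
      app: example
  template:
    metadata:
      labels:
        app: example          # Pods are created carrying this label
    spec:
      containers:
      - name: server
        image: example-pod
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: example-service       # name used for DNS-based service discovery
spec:
  selector:
    app: example              # endpoints: all Pods with a matching label
  ports:
  - port: 80                  # port exposed on the Service's virtual IP
    targetPort: 8080          # port on the Pod
```

Because the Service matches Pods by label, replacing or rescheduling Pods does not require redefining the Service.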
Kubernetes – Receiving external traffic
- By default, Pods and Services are only accessible within the Kubernetes cluster.
- To receive external traffic, you must either:
  1. expose individual Pods (not recommended), or
  2. expose Services, plus use an Ingress resource.

Receiving external traffic – 1. Expose individual Pods (not recommended)
- Pods can be directly exposed on the worker node they run on.
- Using hostNetwork, the Pod uses the default network namespace of the worker and sees its network interfaces. This means any port exposed by the Pod (not only the declared ones) is directly accessible on the worker.
- Using hostPort, a port exposed by the Pod is directly exposed on the worker: http://<worker>:10080

Receiving external traffic – 2. Expose Services
Service exposure is governed by the type attribute:
- ClusterIP: the default mode; the Service is only accessible within the cluster.
- NodePort: the Service is automatically exposed on every worker, on a port within the 30000-32767 range.
- LoadBalancer: configures an external load balancer available in the infrastructure (typically in cloud environments).
- ExternalName: service requests are resolved to the DNS name of an external service.
Cluster IPs, load balancer IPs, and NodePort ports can be specified using the clusterIP, loadBalancerIP, and nodePort attributes, respectively. VIP addresses can be specified with the externalIPs attribute.

Kubernetes – Scaling
Deployments, StatefulSets (and others) can be scaled:
- manually, by changing the number of replicas using the kubectl scale command, or by using the kubectl edit command and changing the replicas attribute
- automatically, by defining a HorizontalPodAutoscaler based on resource metrics (CPU, memory) or custom metrics (Pod and external)

SPEAK UP: questions, comments?
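The automatic scaling path above can be sketched as a HorizontalPodAutoscaler manifest. This is an illustrative example: the HPA name, the target Deployment name, and the replica and CPU figures are all assumptions, not from the slides.

```yaml
# Illustrative HPA scaling a Deployment on average CPU utilization.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: example-hpa
spec:
  scaleTargetRef:              # the workload being scaled
    apiVersion: apps/v1
    kind: Deployment
    name: example-deployment   # hypothetical Deployment name
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 80   # scale out above 80% of requested CPU
```

Note that CPU utilization here is measured against the container resource requests, which is why requests and limits play a central role in autoscaling. Manual scaling of the same Deployment would instead use kubectl scale with a replica count.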
