SE 4455 Cloud Computing
Unit 3: Containerization and Virtualization
Fadi AlMahamid, Ph.D.

Contents
01 Introduction to Virtualization and Virtual Machines (VMs)
02 Introduction to Containers and their Advantages
03 VMs vs. Containers
04 Docker Fundamentals
05 Docker Networking and Volume Management

01 Introduction to Virtualization and Virtual Machines (VMs)

Understanding Virtualization
➔ General Definition: Virtualization is the creation of a virtual (rather than actual) version of physical resources, such as hardware platforms, storage devices, and network resources.
➔ Scope: Includes hardware virtualization (like VMs), as well as storage, network, and memory virtualization.
➔ Purpose: To centralize administrative tasks while improving scalability and hardware utilization.

What Are Virtual Machines?
➔ Virtual Machine (VM): A VM is an abstraction of a physical computer system, created through the process of virtualization. It behaves and operates as a distinct computer system, capable of running its own operating system and applications.
➔ Core Characteristics:
Abstraction: A VM abstracts the hardware of a physical computer, including the CPU, memory, storage, and network resources.
Independence: Each VM operates independently of others, even though it may share the same underlying physical hardware.
Flexibility: VMs can be configured with various operating systems and hardware environments, offering a flexible platform for a wide range of computing tasks.
➔ Components: Each VM contains its own virtual CPU, memory, hard drive, and network interface.

Virtualization, Simulation, and Emulation
➔ Virtualization: It often involves creating a virtual version of something, like computer hardware. In the context of VMs, it typically refers to the partitioning of a physical machine into multiple "virtual" machines, each with its own operating system, running on the same hardware.
➔ Simulation: Simulation, on the other hand, is about modeling the behavior or characteristics of a system. It doesn't necessarily recreate the hardware or software environment but simulates the behavior of the system. In computer science, simulation often refers to the process of modeling the behavior of a program or a process.
➔ Emulation: Emulation is the process of replicating (mimicking) the functionality of one system using another system. It's about making one system behave like another.

VMs between Simulation and Emulation
➔ Simulation vs. Emulation:
Simulation: This involves creating an abstract model of a particular system. Unlike emulation, simulation does not aim to replicate hardware or software but rather to model its behavior.
Emulation: It involves mimicking the functionality of one system using a different system. The emulator achieves this by replicating the hardware environment of the emulated system.
➔ Virtual Machines - A Case of Simulation: VMs are more accurately described as simulators rather than emulators. VMs simulate a complete hardware system, from CPU to network resources. They provide a simulated environment of a computer system which can run its own operating system and applications, as if it were a separate physical entity.

How Do VMs Simulate Hardware?
➔ A VM simulates a physical machine's resources such as CPU, memory, storage, and network interfaces.
➔ The guest OS and applications on a VM are not aware that they are running on virtualized hardware.
➔ The VM believes it has access to physical resources, but these resources are managed and allocated by the hypervisor.
Hypervisor
➔ Definition: A Hypervisor, or Virtual Machine Monitor (VMM), is a software layer that allows multiple VMs to share physical resources. It manages VMs by allocating physical resources like CPU, memory, and storage to each VM.
➔ Resource Management by Hypervisors:
CPU: The hypervisor allocates CPU resources from the host machine to the VM.
Memory: Distributes physical RAM to VMs in the form of virtual memory.
Storage: VMs use virtual hard disks, which are files on the host's physical storage.
Networking: Virtual network interfaces are created for VMs, allowing them to connect to the host's physical network.
➔ Hypervisor Types:
Bare-metal hypervisors: run directly on the host's hardware.
Hosted hypervisors: run on a conventional operating system.

Hosted vs. Bare-metal Hypervisors
[Diagram: hosted hypervisors run VMs (guest OS plus applications) on top of a host OS installed on the physical server, while bare-metal hypervisors run VMs directly on the host hardware with no host OS layer.]

VMs Key Features
➔ Key Features:
Isolation: Each VM is isolated from others.
Hardware Independence: VMs can run on any physical hardware.
Snapshotting: Capturing the state of a VM at a specific point in time.

VMs Common Uses - Server Consolidation
➔ Definition: The process of combining multiple small physical servers into fewer, larger servers.
➔ Benefits:
Cost Efficiency: Reduces hardware costs by minimizing the number of physical servers required.
Energy Savings: Fewer servers mean lower energy consumption and cooling requirements.
Improved Management: Simplifies the management of server resources.
➔ Use Case: A company with numerous underutilized servers can consolidate them into a smaller number of VMs running on a few physical servers, leading to significant cost savings and efficiency improvements.

VMs Common Uses - Development Environments
➔ Definition: Development environments, in the context of VMs, are isolated and customizable spaces on a computer system where developers can build, test, and refine applications without affecting the primary operating system or production environment.
➔ Benefits:
Flexibility: VMs provide a safe and isolated environment for testing and development without affecting the main production environment.
Replicability: Easy to replicate environments across different VMs for testing under various conditions.
Rapid Provisioning: Quick setup and teardown of environments for different projects or versions.
➔ Use Case: Developers can use VMs to create multiple test environments with different operating systems or configurations, allowing for thorough testing without risking the stability of the main system.

VMs Common Uses - Legacy Application Support
➔ Definition: Legacy Application Support refers to the use of technology (like VMs) to continue the operation and maintenance of older software applications that may not be compatible with current hardware or operating systems.
➔ Benefits:
Running Old Software: VMs can emulate older operating systems and environments required to run legacy applications.
Isolation: Keeps outdated software separated from modern systems, reducing security risks.
Cost-Effective: Eliminates the need to maintain old hardware solely for the purpose of running legacy applications.
➔ Use Case: An organization can use VMs to continue running essential legacy applications that are incompatible with modern operating systems, without the need to overhaul the software completely.
Types of Virtual Machines
➔ System Virtual Machines: System VMs provide a complete virtual platform that emulates all the hardware components of a physical machine. They are capable of running a full operating system (OS), and multiple instances can run simultaneously on a single host. They offer a simulated environment that closely mimics the physical hardware.
➔ Process Virtual Machines: Process VMs are designed to execute a single program or process. They provide a platform-independent programming environment that abstracts away details of the underlying hardware or OS. Example: The Java Virtual Machine (JVM) is a type of process VM.
➔ Hardware Virtual Machines (HVM): The main distinction from system VMs is in how they handle the execution of instructions and access to hardware resources. HVMs rely more on the physical hardware's capabilities to execute VM instructions directly on the CPU, with less overhead.
➔ Paravirtual Machines (PVM): Aware they are in a virtualized environment and interact directly with the hypervisor. PVMs use a special API for critical operations like disk and network access, which reduces the overhead of these operations.

Types of Virtual Machines – Architecture
[Diagram: layered architectures for the four VM types, each stack resting on physical hardware — a System VM runs a full guest environment on a hypervisor, a Process VM runs an application on the host OS, an HVM layer is a simulated OS with direct hardware access, and a PVM layer is an OS optimized to call the hypervisor API.]

02 Introduction to Containers and their Advantages

What are Containers?
➔ Containers: Lightweight, executable units that encapsulate an application and its dependencies (libraries, binaries, configuration files) in a single package.
➔ Containerization: The process of encapsulating an application and its environment to ensure consistency across multiple development, staging, and production environments.
➔ Containers: Neither Emulators nor Simulators: Containers are neither traditional simulators nor emulators. They provide an isolated environment for running applications, but they do not simulate or emulate an entire hardware system.

How Do Containers Work?
➔ Containers create isolated environments that simulate separate operating systems but share the host's kernel. They provide a simulated, consistent environment for applications to run in.
➔ OS-Level Simulation: Containers simulate at the operating system level, not at the hardware level. This means they replicate the environment (libraries, binaries, etc.) needed for applications.
➔ Resource Sharing:
Kernel Sharing: Containers efficiently utilize the host's OS kernel, allowing multiple containers to run on the same kernel without the need for separate OS instances.
Memory and Storage: Memory and storage resources are shared among containers, but each container has its own isolated view, ensuring they do not interfere with each other.
System Libraries/Binaries: Containers can share common binaries and libraries with the host, reducing redundancy and saving space.

How Do Containers Work? (cont.)
➔ Isolation Mechanism: Containers use advanced isolation mechanisms to create separate, secure environments within the same host OS. This isolation ensures that processes and resources within each container are segregated and operate independently, maintaining efficiency and security.
➔ Mechanisms:
Namespaces: Namespaces are a technology used to provide isolated workspaces (including process trees, network interfaces, and file systems) within the same host OS, ensuring that processes in one container cannot interfere with those in another.
Resource Management: Containers use resource management tools (similar to cgroups in Linux) to allocate and limit resources like CPU, memory, and I/O, ensuring fair resource usage and preventing resource hogging by any single container.
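As a quick illustration, Docker exposes these kernel-enforced limits through flags on docker run (a minimal sketch; the container name, image, and limit values are arbitrary choices):
docker run -d --name limited_app --memory=256m --cpus=1.5 nginx
# --memory caps the container's RAM usage at 256 MB and --cpus limits it to 1.5 CPU cores;
# both limits are enforced by the host kernel's resource-management (cgroup) mechanism.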
Containers Building Blocks
➔ Container Images: Immutable templates used to create containers.
➔ Container Registries: Centralized hubs where container images are stored and shared (e.g., Docker Hub).
➔ Container Engine: The runtime used to build, run, and manage containers (e.g., Docker Engine).
➔ Container Instances: The actual running containers that are instantiated from container images.

Advantages of Containerization
➔ Lightweight: Containers are inherently lightweight due to their efficient use of the host system's kernel, which allows them to operate without the overhead of additional operating systems, thereby minimizing resource usage.
➔ Portability: Containers can run consistently across any platform or cloud, reducing the "it works on my machine" problem.
➔ Rapid Deployment and Scaling: Due to their small size and fast startup times, containers can be quickly started, replicated, and stopped.

Advantages of Containerization (cont.)
➔ Resource Efficiency: Higher density and efficiency, as multiple containers can run on a single host machine.
➔ Isolation and Security: Containers provide process isolation, which enhances security and reduces application conflicts.
➔ Simplified Management: Containers simplify deployment, scaling, and management when used with orchestration tools like Kubernetes.

Container Use Cases
➔ Microservices: Ideal for hosting individual services in a microservices architecture due to their isolation and scalability.
➔ DevOps and Agile Development: Enhance development and deployment speeds, aligning with continuous integration/continuous deployment (CI/CD) pipelines.
➔ Application Isolation: Running multiple applications on the same server without interference.
➔ Environment Consistency: Ensuring consistency across different environments, aiding in testing and reducing bugs.

03 VMs vs. Containers

VMs vs. Containers - Architecture
➔ VMs: Run on a hypervisor; each VM includes a full copy of an operating system, the application, and the necessary binaries and libraries, taking up tens of GBs.
➔ Containers: Share the host system's kernel but isolate the application processes from the system. Containers include the application and its dependencies, but not a full OS, making them lightweight.

VMs vs. Containers - Performance and Resource Utilization
➔ VMs: Generally require more system resources and have longer startup times due to the overhead of virtualizing hardware and running separate OS instances.
➔ Containers: More efficient in terms of system resource usage, with faster startup times due to sharing the host's OS kernel.

VMs vs. Containers - Isolation and Security
➔ VMs: Provide strong isolation due to separation at the hardware level; considered more secure for running applications that require high isolation.
➔ Containers: Offer process-level isolation, which is generally sufficient for most applications but might be less secure than VMs for sensitive tasks.

VMs vs. Containers - Scalability and Portability
➔ VMs: Less scalable and portable compared to containers. Moving VMs across hosts or environments can be resource-intensive.
➔ Containers: Highly scalable and portable; can be easily moved across different platforms and cloud environments.

VMs vs. Containers - Use Cases
➔ VMs: Ideal for applications that require full isolation, extensive OS customization, or running multiple different operating systems on the same hardware.
➔ Containers: Suited for microservices architectures, application development, and CI/CD workflows.

VMs vs. Containers – Summary
Aspect               | Virtual Machines                     | Containers
Architecture         | Full OS per VM                       | Share host OS kernel
Resource Utilization | Higher (full OS)                     | Lower (no full OS)
Startup Time         | Longer                               | Shorter
Isolation            | Strong (hardware level)              | Moderate (process level)
Scalability          | Less scalable                        | Highly scalable
Portability          | Less (due to size and dependencies)  | High (lightweight)
Security             | Generally higher                     | Depends on implementation
Use Cases            | Full isolation, diverse OS needs     | Microservices, CI/CD, DevOps

Decision Factors: VMs vs. Containers
➔ Compatibility and Requirements: Consider the nature of the applications and the environment they require. Legacy applications might favor VMs, while modern, cloud-native apps are often more suited to containers.
➔ Security and Isolation Needs: For highly sensitive data or applications where security is paramount, VMs might be the better choice.
➔ Resource Availability and Efficiency: Containers are more efficient and can be a better choice for resource-constrained environments.

Environmental Sustainability
➔ Resource Efficiency: Generally, containers are more environmentally friendly due to their efficient use of resources. They require less computational power and energy compared to running multiple VMs.
➔ Data Center Utilization: Using containers can lead to better utilization of physical servers in data centers, potentially reducing the environmental impact.

04 Docker Fundamentals

What is Docker?
➔ Definition: Docker is a platform that uses containerization technology to develop, deploy, and run applications in isolated environments called containers.
➔ Core Concept: Allows applications to be packaged with all their dependencies into a container, ensuring consistency across various computing environments.

Understanding Docker's Components
➔ Docker Engine: The core part of Docker; it is a client-server application whose server is a long-running program called a daemon process.
➔ Docker Daemon: The background service that manages Docker containers, images, networks, and volumes.
➔ Docker Client: The command-line interface (CLI) tool that allows users to interact with the Docker daemon.
➔ Docker Images: Immutable templates used to create containers, built from Dockerfiles.
➔ Docker Containers: Isolated environments created from Docker images where applications run.
➔ Docker Registries:
Docker Hub: The public registry to store Docker images. It provides a centralized resource for container image discovery, distribution, and change management.
Private Registries: Users can also set up private registries to store and manage their proprietary images securely.

The Foundation of Docker Containers
➔ Docker Images: A Docker image is a lightweight, standalone, and executable software package that includes everything needed to run an application: code, runtime, libraries, environment variables, and configuration files.
➔ Characteristics:
Immutable: Once created, they do not change. Modifications create new image layers.
Layered Architecture: Docker images are built in layers, each representing a set of changes. This layering speeds up the build process and saves storage by reusing layers.
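To see this layering in practice, you can list the layers of any locally available image (a minimal sketch; nginx:latest is just a convenient example):
docker pull nginx:latest
docker history nginx:latest
# docker history lists each layer of the image along with its size
# and the instruction that created it.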
The Foundation of Docker Containers (cont.)
➔ Dockerfile: A Dockerfile is a text file containing a series of instructions and commands used to build a Docker image.

Dockerfile Key Components
➔ FROM: Specifies the base image to start building on.
➔ RUN: Executes commands in a new layer on top of the current image.
➔ COPY: Copies new files or directories into the Docker image.
➔ ADD: Similar to COPY, but with the ability to handle remote URLs and tar extraction.
➔ CMD: Provides a default command to execute when the container starts.
➔ ENTRYPOINT: Configures a container to run as an executable.
➔ ENV: Sets environment variables.
➔ EXPOSE: Informs Docker that the container listens on specified network ports at runtime.
➔ VOLUME: Creates a mount point to access and store persistent data.
➔ WORKDIR: Sets the working directory inside the container.

Building an Image with Dockerfile
➔ Write a Dockerfile with the required instructions.
➔ Use the docker build command to create an image from the Dockerfile.
➔ Example Command: docker build -t myapp:1.0 .
➔ This command builds an image from the Dockerfile in the current directory ".", tagging it as myapp:1.0.

Docker Image Best Practices
➔ Minimize Layers: Combine related commands into a single RUN instruction to reduce the number of layers.
➔ Use Official Base Images: For security and reliability, start with official base images from Docker Hub.
➔ Clean Up: Remove unnecessary tools and files to keep images as small as possible.

Dockerfile Example 1
# Use an official Nginx image as a parent image
FROM nginx:latest
# Set the working directory in the container
WORKDIR /usr/share/nginx/html
# Copy the static website files into the container
COPY ./static-html-directory /usr/share/nginx/html
# Expose port 80
EXPOSE 80
# Start Nginx when the container launches
CMD ["nginx", "-g", "daemon off;"]

Dockerfile Example 1 (cont.)
➔ FROM nginx:latest
Starts the build process from the official Nginx image found on Docker Hub. latest selects the latest version of the Nginx image.
➔ WORKDIR /usr/share/nginx/html
Sets the working directory to /usr/share/nginx/html. This is the directory Nginx serves files from.
➔ COPY ./static-html-directory /usr/share/nginx/html
Copies files from static-html-directory (a directory on your host machine) into the specified directory inside the container. This directory should contain the static HTML files you want to serve.
➔ EXPOSE 80
Informs Docker that the container listens on port 80. This is the standard port for web traffic.
➔ CMD ["nginx", "-g", "daemon off;"]
The command to start Nginx. The -g "daemon off;" argument runs Nginx in the foreground, which is required for Docker containers.

How to Use Dockerfile Example 1
➔ Create a Directory: Make a directory named static-html-directory on your host machine and add your static HTML files to it.
➔ Build the Image: Run docker build -t my-nginx-image:1.0 . in the terminal from the directory where the Dockerfile is located. This command builds the Docker image and tags it as my-nginx-image:1.0.
➔ Notice the trailing dot (.): it represents the current directory, where Docker will look for the build context and the Dockerfile (named "Dockerfile" by default).
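Once built, the image can be run and tested locally (a minimal sketch; my-nginx-image:1.0 is the tag from the build step above, and host port 8080 is an arbitrary choice):
docker run -d --name my-nginx -p 8080:80 my-nginx-image:1.0
# Maps host port 8080 to container port 80; the static site is then
# reachable at http://localhost:8080 on the host machine.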
Dockerfile Example 2
# Start with the official Python base image
FROM python:3.8-slim
# Set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
# Set the working directory in the container
WORKDIR /app
# Copy the requirements.txt file into the container at /app
COPY requirements.txt /app/
# Install any needed packages specified in requirements.txt
RUN pip install --no-cache-dir -r requirements.txt
# Copy the current directory contents into the container at /app
COPY . /app
# Make port 5000 available to the world outside this container
EXPOSE 5000
# Define environment variable
ENV NAME World
# Run app.py when the container launches
CMD ["python", "app.py"]

Dockerfile Example 2 (cont.)
➔ FROM python:3.8-slim
Uses the official Python image with Python 3.8 on a slim variant as the base image.
➔ ENV PYTHONDONTWRITEBYTECODE 1 and ENV PYTHONUNBUFFERED 1
These environment variables prevent Python from writing .pyc files to disk (which is unnecessary in a container) and ensure that Python output is sent straight to the terminal without being buffered first, making it easier to detect problems.
➔ WORKDIR /app
Sets the working directory inside the container to /app.
➔ COPY requirements.txt /app/
Copies the requirements.txt file (which lists the necessary Python packages) into /app.
➔ RUN pip install --no-cache-dir -r requirements.txt
Installs the Python packages specified in requirements.txt.
➔ COPY . /app
Copies the rest of the application's code into the container.
➔ EXPOSE 5000
Informs Docker that the container listens on port 5000 at runtime.
➔ ENV NAME World
Sets an environment variable NAME used by the application.
➔ CMD ["python", "app.py"]
The command to run the application (app.py) when the container starts.

How to Use Dockerfile Example 2
➔ Create app.py and requirements.txt: Ensure you have these files in your project directory. requirements.txt should list all dependencies.
➔ Build the Image: Run docker build -t my-python-app:1.0 . to build the image. This command builds the Docker image and tags it as my-python-app:1.0.
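As with the first example, the resulting image can be run locally (a minimal sketch; it assumes app.py starts a server listening on port 5000, as the EXPOSE instruction suggests):
docker run -d --name python_app -p 5000:5000 my-python-app:1.0
# Maps host port 5000 to container port 5000, so the application is
# reachable at http://localhost:5000 while the container is running.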
Instantiating a Container from an Image
➔ Docker Image as a Template: A Docker image is essentially a blueprint or template. It contains the application code, libraries, dependencies, and instructions for creating a container. Think of an image as a read-only template used to create containers.
➔ Creating vs. Instantiating: Creating an image from a Dockerfile does not automatically create an instance of that image. An image instance, or a container, is created when you run an image.
➔ Instantiating and Starting a Container:
Command: docker run [OPTIONS] IMAGE [COMMAND] [ARG...]
Example: docker run -d -p 80:80 nginx
Explanation: This command instantiates and runs an Nginx container in detached mode (-d), allowing the container to run in the background. The -p 80:80 option maps port 80 of the host to port 80 in the container, making the web server accessible from the host machine. As a result, the Nginx server starts and serves content as configured, accessible through the host machine's port 80.

Container Lifecycle
1. Creating a Container:
Command: docker create [OPTIONS] IMAGE [COMMAND] [ARG...]
Creates a new container but does not start it.
Example: docker create ubuntu
This command creates a new container from the Ubuntu image but does not start it.
2. Starting a Container:
Command: docker start [OPTIONS] CONTAINER
Starts one or more stopped containers.
Example: docker start 12345abcde
Starts container "12345abcde" that was previously created.
3. Running a Container:
Command: docker run [OPTIONS] IMAGE [COMMAND] [ARG...]
The docker run command is a combination of create and start.
Example: docker run -d -i -t -p 80:80 nginx /bin/bash
Runs an Nginx container in detached mode (-d), mapping port 80 on the host to port 80 in the container. The -t and -i options allocate a pseudo-TTY and keep STDIN open.
4. Stopping a Container:
Command: docker stop [OPTIONS] CONTAINER
Stops one or more running containers gracefully.
Example: docker stop 12345abcde
Stops container "12345abcde" that was previously started.

Container Lifecycle (cont.)
➔ Removing a Container:
Command: docker rm [OPTIONS] CONTAINER
Removes one or more containers. Use -f to force removal of a running container.
Example: docker rm 12345abcde
Removes container "12345abcde". Add -f to force removal even if "12345abcde" is running.
➔ Inspecting a Container:
Command: docker inspect [OPTIONS] CONTAINER
Returns detailed information on the container's configuration.
Example: docker inspect 12345abcde
Inspects container "12345abcde", returning its detailed configuration information.
➔ Viewing Logs:
Command: docker logs [OPTIONS] CONTAINER
Fetches the logs of a container.
Example: docker logs 12345abcde
Views the logs of container "12345abcde", fetching the log output produced by the container.

Orchestration with Docker
➔ Docker Compose: A tool for defining and running multi-container Docker applications. Uses YAML files to configure application services.
➔ Docker Swarm: A native clustering and scheduling tool for Docker containers. It turns a group of Docker engines into a single, virtual Docker engine.
➔ Next Unit: More orchestration using Kubernetes.

05 Docker Networking and Volume Management

Docker Networking

Understanding Docker Networking
➔ Basics of Docker Networking: Docker networking allows containers to communicate with each other and the outside world. By default, Docker creates three networks (bridge, none, and host) upon installation.
➔ Network Types:
Bridge: The default network a container is attached to. Useful for containers communicating on the same host.
None: Disables all networking for the container. Often used for security-focused containers.
Host: Removes network isolation between the container and the Docker host, sharing the host's networking namespace.

Understanding Docker Networking (cont.)
➔ Creating Custom Networks: Users can create their own networks using the docker network create command. Custom networks provide more control over network settings and inter-container communication.
➔ Connecting Containers: Containers on the same network can communicate with each other, and port mappings allow external access.

Bridge Network
➔ Bridge Network Overview: The default network when you run a container without specifying a network. It creates a private internal network on the host, and containers connected to the same bridge network can communicate with each other.
➔ Setting up a Bridge Network: Docker automatically creates a default bridge network. You can also create a custom bridge network:
docker network create --driver bridge --subnet=192.168.10.0/24 --gateway=192.168.10.1 my_bridge_network
This command creates a new bridge network named my_bridge_network. Containers attached to this network can communicate with each other.
➔ Connecting Containers to a Bridge Network: When you run a container, you can connect it to the bridge network:
docker run -d --name container1 --network my_bridge_network nginx
This attaches a container named container1 to my_bridge_network, enabling it to communicate with other containers on the same network.
➔ Connecting an Existing Container:
docker network connect my_bridge_network existing_container
This adds my_bridge_network to an already running container named existing_container.
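To confirm which containers are attached to the custom network, the network can be inspected (a minimal sketch reusing the my_bridge_network and container1 names defined above):
docker network inspect my_bridge_network
# Prints the network's configuration (driver, subnet, gateway) and lists the
# containers currently attached to it, such as container1.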
Host Network
➔ Host Network Overview: Containers use the host's networking directly. There is no network isolation between the container and the host.
➔ Using the Host Network: Containers have direct access to all of the host's ports. There is no need for port forwarding, as containers share the host's IP.
➔ Running a Container on the Host Network:
docker run -d --name host_container --network host nginx
This command starts a container from the nginx Docker image. The container is named host_container. Using --network host, the container is configured to use the host's network stack, meaning it shares the same network namespace as the Docker host. This allows the container to use the host's IP address and exposed ports directly, which is ideal for cases where direct access to the host's network is necessary, such as running a web server that needs to be accessible externally.

None Network
➔ None Network Overview: Completely disables networking for a container. It effectively isolates the container from accessing any external or internal networks, making it suitable for security-sensitive applications or for testing purposes.
➔ Using the None Network:
docker run -d --name isolated_container --network none busybox
This command runs a container named isolated_container using the busybox image with all network interfaces disabled. The --network none option ensures the container is completely isolated from any network, making it suitable for scenarios that require total network isolation.

External Access with Port Forwarding
➔ Overview of Port Forwarding: Port forwarding is a technique used primarily in conjunction with the bridge network in Docker. It involves mapping ports from the host machine to ports within a Docker container. This mapping is crucial for allowing external traffic (from outside the Docker host) to access services running inside a container.
➔ Accessing Services in Containers:
No Port Forwarding in Host Network: In host network mode, containers share the host's network namespace and have direct access to all host network interfaces and ports, so port forwarding is not necessary.
Container-Level Configuration: Port forwarding is configured at the container level and is done using the docker run command. Each container can have its own port forwarding setup, allowing for tailored network access.
➔ Running a Container with Port Forwarding:
Example: docker run -d -p 8080:80 --network my_bridge_network my_web_app
This command runs a container from the my_web_app image. -p 8080:80 forwards port 8080 on the host to port 80 inside the container, where the web application is listening. The container is connected to a user-defined bridge network (my_bridge_network), enabling it to communicate with other containers on the same network.

Docker Volume Management

Managing Data with Docker Volumes
➔ Importance of Volumes: Volumes are essential in Docker for persisting data generated and used by containers. They prevent data loss, ensuring that important data survives even after a container is destroyed.
➔ Types of Volumes:
Named Volumes
Bind Mounts
tmpfs Mounts
➔ Volumes can be specified when starting a container using the -v or --mount flag, as shown below.
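The -v and --mount flags express the same mapping in two syntaxes; for example, the following two commands are interchangeable ways to attach a named volume (a minimal sketch; the volume name my_volume and the mysql image are illustrative, and you would run one or the other, not both):
docker run -d --name db1 -v my_volume:/var/lib/mysql mysql
docker run -d --name db2 --mount type=volume,source=my_volume,target=/var/lib/mysql mysql
# -v uses the short volume syntax; --mount spells out the type, source, and target
# explicitly. Both attach the named volume my_volume at /var/lib/mysql in the container.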
Types of Volumes - Named Volumes
➔ Description: Managed by Docker and stored in a predefined area of the host filesystem (usually /var/lib/docker/volumes/).
➔ Use: Ideal for when you need persistent data but don't need to worry about exactly where it's stored.
➔ Example:
Create a named volume: docker volume create my_volume
Use the named volume: docker run -d --name db_container -v my_volume:/var/lib/mysql mysql

Types of Volumes - Bind Mounts
➔ Description: Direct mapping of a host file or directory to a container. The file or directory is referenced by its full or relative path on the host.
➔ Use: Suitable for cases where you need to store data outside of the Docker-managed area or need to share data between the host and container.
➔ Example: docker run -d --name app_container -v /path/on/host:/path/in/container nginx

Types of Volumes - tmpfs Mounts
➔ Description: Stored in the host system's memory only, and never written to the host's filesystem.
➔ Use: Useful for sensitive information you don't want to persist in a writable layer or on the host filesystem.
➔ Example: docker run -d --name tmp_container --tmpfs /path/in/container nginx
➔ Behavior: tmpfs mounts in Docker are used to create a temporary piece of storage in the host system's memory. When you specify a tmpfs mount, it allocates a portion of the host's memory to store data created by the container. The data in a tmpfs mount is ephemeral and exists only as long as the container is running. Once the container stops, the data is removed and is not written to the host's filesystem.

Use Cases for Volumes
➔ Database Storage: Store database files in a volume to ensure data persists across container restarts and updates.
➔ Logs and Backups: Keep logs and backups in a volume for later analysis and recovery purposes (a backup sketch follows this list).
➔ Configurations and Code Sharing: Share configuration files or source code between the host and the container using bind mounts.
➔ Sensitive Data: Use tmpfs mounts for sensitive information that should not be stored permanently.
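As one concrete illustration of the backup use case, a volume's contents can be archived to the host through a short-lived helper container (a minimal sketch; my_volume is the named volume created earlier, while the busybox image and archive name are arbitrary choices):
docker run --rm -v my_volume:/data -v "$(pwd)":/backup busybox tar czf /backup/my_volume_backup.tar.gz -C /data .
# Mounts the named volume at /data and the current host directory at /backup,
# then writes a compressed archive of the volume's contents to the host.
# --rm removes the helper container once the backup finishes.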