DO180-OCP4.5 Student Guide 2020 PDF
Zach Gutterman, Dan Kolepp, Eduardo Ramirez Ronco, Jordi Sola Alaball, Richard Allred
Summary
This document is a student guide for the Red Hat OpenShift I: Containers & Kubernetes course, edition 2. The guide covers topics such as container technology, Kubernetes, and OpenShift architecture, image management, and deployment. It also contains quizzes, guided exercises, and labs.
Student Workbook (ROLE)
OCP 4.5 DO180
Red Hat OpenShift I: Containers & Kubernetes
Edition 2
DO180-OCP4.5-en-2-20200911
Publication date 20200911

Authors: Zach Gutterman, Dan Kolepp, Eduardo Ramirez Ronco, Jordi Sola Alaball, Richard Allred
Editors: Seth Kenlon, Dave Sacco, Connie Petlitzer

Copyright © 2019 Red Hat, Inc. The contents of this course and all its modules and related materials, including handouts to audience members, are Copyright © 2019 Red Hat, Inc. No part of this publication may be stored in a retrieval system, transmitted or reproduced in any way, including, but not limited to, photocopy, photograph, magnetic, electronic or other record, without the prior written permission of Red Hat, Inc. This instructional program, including all material provided herein, is supplied without any guarantees from Red Hat, Inc. Red Hat, Inc. assumes no liability for damages or legal action arising from the use or misuse of contents or details contained herein. If you believe Red Hat training materials are being used, copied, or otherwise improperly distributed, please send email to [email protected] or phone toll-free (USA) +1 (866) 626-2994 or +1 (919) 754-3700.

Red Hat, Red Hat Enterprise Linux, the Red Hat logo, JBoss, Hibernate, Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries. Linux® is the registered trademark of Linus Torvalds in the United States and other countries. Java® is a registered trademark of Oracle and/or its affiliates. XFS® is a registered trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries. The OpenStack® word mark and the Square O Design, together or apart, are trademarks or registered trademarks of OpenStack Foundation in the United States and other countries, and are used with the OpenStack Foundation's permission. Red Hat, Inc. is not affiliated with, endorsed by, or sponsored by the OpenStack Foundation or the OpenStack community. All other trademarks are the property of their respective owners.

Contributors: Michael Jarrett, Forrest Taylor and Manuel Aude Morales

Table of Contents

Document Conventions
Introduction
    DO180: Red Hat OpenShift I: Containers & Kubernetes
    Orientation to the Classroom Environment
    Internationalization
1. Introducing Container Technology
    Overview of Container Technology
    Quiz: Overview of Container Technology
    Overview of Container Architecture
    Quiz: Overview of Container Architecture
    Overview of Kubernetes and OpenShift
    Quiz: Describing Kubernetes and OpenShift
    Guided Exercise: Configuring the Classroom Environment
    Summary
2. Creating Containerized Services
    Provisioning Containerized Services
    Guided Exercise: Creating a MySQL Database Instance
    Lab: Creating Containerized Services
    Summary
3. Managing Containers
    Managing the Life Cycle of Containers
    Guided Exercise: Managing a MySQL Container
    Attaching Persistent Storage to Containers
    Guided Exercise: Persisting a MySQL Database
    Accessing Containers
    Guided Exercise: Loading the Database
    Lab: Managing Containers
    Summary
4. Managing Container Images
    Accessing Registries
    Quiz: Working With Registries
    Manipulating Container Images
    Guided Exercise: Creating a Custom Apache Container Image
    Lab: Managing Images
    Summary
5. Creating Custom Container Images
    Designing Custom Container Images
    Quiz: Approaches to Container Image Design
    Building Custom Container Images with Dockerfiles
    Guided Exercise: Creating a Basic Apache Container Image
    Lab: Creating Custom Container Images
    Summary
6. Deploying Containerized Applications on OpenShift
    Describing Kubernetes and OpenShift Architecture
    Quiz: Describing Kubernetes and OpenShift
    Creating Kubernetes Resources
    Guided Exercise: Deploying a Database Server on OpenShift
    Creating Routes
    Guided Exercise: Exposing a Service as a Route
    Creating Applications with Source-to-Image
    Guided Exercise: Creating a Containerized Application with Source-to-Image
    Creating Applications with the OpenShift Web Console
    Guided Exercise: Creating an Application with the Web Console
    Lab: Deploying Containerized Applications on OpenShift
    Summary
7. Deploying Multi-Container Applications
    Considerations for Multi-Container Applications
    Guided Exercise: Deploying the Web Application and MySQL Containers
    Deploying a Multi-Container Application on OpenShift
    Guided Exercise: Creating an Application with a Template
    Lab: Deploying Multi-Container Applications
    Summary
8. Troubleshooting Containerized Applications
    Troubleshooting S2I Builds and Deployments
    Guided Exercise: Troubleshooting an OpenShift Build
    Troubleshooting Containerized Applications
    Guided Exercise: Configuring Apache Container Logs for Debugging
    Lab: Troubleshooting Containerized Applications
    Summary
9. Comprehensive Review
    Comprehensive Review
    Lab: Containerizing and Deploying a Software Application
A. Implementing Microservices Architecture
    Implementing Microservices Architectures
    Guided Exercise: Refactoring the To Do List Application
    Summary
B. Creating a GitHub Account
    Creating a GitHub Account
C. Creating a Quay Account
    Creating a Quay Account
    Repositories Visibility
D. Useful Git Commands
    Git Commands
Document Conventions

References
"References" describe where to find external documentation relevant to a subject.

Note
"Notes" are tips, shortcuts or alternative approaches to the task at hand. Ignoring a note should have no negative consequences, but you might miss out on a trick that makes your life easier.

Important
"Important" boxes detail things that are easily missed: configuration changes that only apply to the current session, or services that need restarting before an update will apply. Ignoring a box labeled "Important" will not cause data loss, but may cause irritation and frustration.

Warning
"Warnings" should not be ignored. Ignoring warnings will most likely cause data loss.

Introduction

DO180: Red Hat OpenShift I: Containers & Kubernetes

DO180: Red Hat OpenShift I: Containers & Kubernetes is a hands-on course that teaches students how to create, deploy, and manage containers using Podman, Kubernetes, and the Red Hat OpenShift Container Platform. One of the key tenets of the DevOps movement is continuous integration and continuous deployment. Containers have become a key technology for the configuration and deployment of applications and microservices. Red Hat OpenShift Container Platform is an implementation of Kubernetes, a container orchestration system.

Course Objectives
    Demonstrate knowledge of the container ecosystem.
    Manage Linux containers using Podman.
    Deploy containers on a Kubernetes cluster using the OpenShift Container Platform.
    Demonstrate basic container design and the ability to build container images.
    Implement a container-based architecture using knowledge of containers, Kubernetes, and OpenShift.

Audience
    System Administrators
    Developers
    IT Leaders and Infrastructure Architects

Prerequisites
    Students should meet one or more of the following prerequisites:
    Be able to use a Linux terminal session and issue operating system commands. An RHCSA certification is recommended but not required.
    Have experience with web application architectures and their corresponding technologies.

Orientation to the Classroom Environment

Figure 0.1: Classroom environment

In this course, the main computer system used for hands-on learning activities is workstation. This is a virtual machine (VM) named workstation.lab.example.com. All student computer systems have a standard user account, student, which has the password student. The root password on all student systems is redhat.

Classroom Machines

Machine name: content.example.com, materials.example.com, classroom.example.com
IP addresses: 172.25.252.254, 172.25.253.254, 172.25.254.254
Role: Classroom utility server

Machine name: workstation.lab.example.com
IP addresses: 172.25.250.254, 172.25.252.1
Role: Student graphical workstation

Several systems in the classroom provide supporting services. Two servers, content.example.com and materials.example.com, are sources for software and lab materials used in hands-on activities. Information on how to use these servers is provided in the instructions for those activities.

Students use the workstation machine to access a shared OpenShift cluster hosted externally in AWS. Students do not have cluster administrator privileges on the cluster, but that is not necessary to complete the DO180 content. Students are provisioned an account on a shared OpenShift 4 cluster when they provision their environments in the Red Hat Online Learning interface.
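As a point of reference, logging in to such a cluster from the workstation VM typically uses the oc command-line client. The following is only a hypothetical sketch; the API URL, user name, and password shown are placeholders, and the actual values come from the lab provisioning page as described next:

[student@workstation ~]$ oc login -u youruser -p yourpassword \
    https://api.cluster.domain.example.com:6443
Login successful.
...output omitted...
[student@workstation ~]$ oc whoami
youruser

The guided exercise at the end of the first chapter walks through recording these values with the lab-configure command.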
Cluster information such as the API endpoint and cluster ID, as well as their username and password, are presented to them when they provision their environment.

Students also have access to a MySQL and a Nexus server hosted by either the OpenShift cluster or by AWS. Hands-on activities in this course provide instructions to access these servers when required.

Hands-on activities in DO180 also require that students have personal accounts on two public, free internet services: GitHub and Quay.io. Students need to create these accounts if they do not already have them (see Appendix) and verify their access by signing in to these services before starting the class.

Controlling Your Systems

Students are assigned remote computers in a Red Hat Online Learning classroom. They are accessed through a web application hosted at rol.redhat.com [http://rol.redhat.com]. Students should log in to this site using their Red Hat Customer Portal user credentials.

Controlling the Virtual Machines

The virtual machines in your classroom environment are controlled through a web page. The state of each virtual machine in the classroom is displayed on the page under the Online Lab tab.

Machine States

STARTING: The virtual machine is in the process of booting.
STARTED: The virtual machine is running and available (or, when booting, soon will be).
STOPPING: The virtual machine is in the process of shutting down.
STOPPED: The virtual machine is completely shut down. Upon starting, the virtual machine boots into the same state as when it was shut down (the disk will have been preserved).
PUBLISHING: The initial creation of the virtual machine is being performed.
WAITING_TO_START: The virtual machine is waiting for other virtual machines to start.

Depending on the state of a machine, a selection of the following actions is available.

Classroom/Machine Actions

PROVISION LAB: Create the ROL classroom. Creates all of the virtual machines needed for the classroom and starts them. This can take several minutes to complete.
DELETE LAB: Delete the ROL classroom. Destroys all virtual machines in the classroom. Caution: Any work generated on the disks is lost.
START LAB: Start all virtual machines in the classroom.
SHUTDOWN LAB: Stop all virtual machines in the classroom.
OPEN CONSOLE: Open a new tab in the browser and connect to the console of the virtual machine. Students can log in directly to the virtual machine and run commands. In most cases, students should log in to the workstation virtual machine and use ssh to connect to the other virtual machines.
ACTION → Start: Start (power on) the virtual machine.
ACTION → Shutdown: Gracefully shut down the virtual machine, preserving the contents of its disk.
ACTION → Power Off: Forcefully shut down the virtual machine, preserving the contents of its disk. This is equivalent to removing the power from a physical machine.
ACTION → Reset: Forcefully shut down the virtual machine and reset the disk to its initial state. Caution: Any work generated on the disk is lost.

At the start of an exercise, if instructed to reset a single virtual machine node, click ACTION → Reset for only the specific virtual machine.
At the start of an exercise, if instructed to reset all virtual machines, click ACTION → Reset for each virtual machine.

If you want to return the classroom environment to its original state at the start of the course, you can click DELETE LAB to remove the entire classroom environment. After the lab has been deleted, click PROVISION LAB to provision a new set of classroom systems.

Warning
The DELETE LAB operation cannot be undone. Any work you have completed in the classroom environment up to that point will be lost.

The Autostop Timer

The Red Hat Online Learning enrollment entitles students to a certain amount of computer time. To help conserve allotted computer time, the ROL classroom has an associated countdown timer, which shuts down the classroom environment when the timer expires.

To adjust the timer, click MODIFY to display the New Autostop Time dialog box. Set the number of hours and minutes until the classroom should automatically stop. Note that there is a maximum time of ten hours. Click ADJUST TIME to apply this change to the timer settings.

Internationalization

Per-user Language Selection

Your users might prefer to use a different language for their desktop environment than the system-wide default. They might also want to use a different keyboard layout or input method for their account.

Language Settings

In the GNOME desktop environment, the user might be prompted to set their preferred language and input method on first login. If not, then the easiest way for an individual user to adjust their preferred language and input method settings is to use the Region & Language application. You can start this application in two ways. You can run the command gnome-control-center region from a terminal window, or on the top bar, from the system menu in the right corner, select the settings button (which has a crossed screwdriver and wrench for an icon) from the bottom left of the menu. In the window that opens, select Region & Language. Click the Language box and select the preferred language from the list that appears. This also updates the Formats setting to the default for that language. The next time you log in, these changes will take full effect.

These settings affect the GNOME desktop environment and any applications such as gnome-terminal that are started inside it. However, by default they do not apply to that account if accessed through an ssh login from a remote system or a text-based login on a virtual console (such as tty5).

Note
You can make your shell environment use the same LANG setting as your graphical environment, even when you log in through a text-based virtual console or over ssh. One way to do this is to place code similar to the following in your ~/.bashrc file. This example code will set the language used on a text login to match the one currently set for the user's GNOME desktop environment:

i=$(grep 'Language=' /var/lib/AccountsService/users/${USER} \
  | sed 's/Language=//')
if [ "$i" != "" ]; then
    export LANG=$i
fi

Japanese, Korean, Chinese, and other languages with a non-Latin character set might not display properly on text-based virtual consoles.

Individual commands can be made to use another language by setting the LANG variable on the command line:

[user@host ~]$ LANG=fr_FR.utf8 date
jeu. avril 25 17:55:01 CET 2019

Subsequent commands will revert to using the system's default language for output.
The locale command can be used to determine the current value of LANG and other related environment variables.

Input Method Settings

GNOME 3 in Red Hat Enterprise Linux 7 or later automatically uses the IBus input method selection system, which makes it easy to change keyboard layouts and input methods quickly.

The Region & Language application can also be used to enable alternative input methods. In the Region & Language application window, the Input Sources box shows what input methods are currently available. By default, English (US) may be the only available method. Highlight English (US) and click the keyboard icon to see the current keyboard layout.

To add another input method, click the + button at the bottom left of the Input Sources window. An Add an Input Source window will open. Select your language, and then your preferred input method or keyboard layout.

When more than one input method is configured, the user can switch between them quickly by typing Super+Space (sometimes called Windows+Space). A status indicator will also appear in the GNOME top bar, which has two functions: it indicates which input method is active, and it acts as a menu that can be used to switch between input methods or select advanced features of more complex input methods.

Some of the methods are marked with gears, which indicate that those methods have advanced configuration options and capabilities. For example, the Japanese (Kana Kanji) input method for Japanese allows the user to pre-edit text in Latin characters and use the Down Arrow and Up Arrow keys to select the correct characters to use.

US English speakers may also find this useful. For example, under English (United States) is the keyboard layout English (international AltGr dead keys), which treats AltGr (or the right Alt) on a PC 104/105-key keyboard as a "secondary shift" modifier key and dead key activation key for typing additional characters. There are also Dvorak and other alternative layouts available.

Note
Any Unicode character can be entered in the GNOME desktop environment if you know the character's Unicode code point. Type Ctrl+Shift+U, followed by the code point. After Ctrl+Shift+U has been typed, an underlined u will be displayed to indicate that the system is waiting for Unicode code point entry. For example, the lowercase Greek letter lambda has the code point U+03BB, and can be entered by typing Ctrl+Shift+U, then 03BB, then Enter.

System-wide Default Language Settings

The system's default language is set to US English, using the UTF-8 encoding of Unicode as its character set (en_US.utf8), but this can be changed during or after installation.

From the command line, the root user can change the system-wide locale settings with the localectl command. If localectl is run with no arguments, it displays the current system-wide locale settings.

To set the system-wide default language, run the command localectl set-locale LANG=locale, where locale is the appropriate value for the LANG environment variable from the "Language Codes Reference" table in this chapter. The change will take effect for users on their next login, and is stored in /etc/locale.conf.

[root@host ~]# localectl set-locale LANG=fr_FR.utf8

In GNOME, an administrative user can change this setting from Region & Language by clicking the Login Screen button at the upper-right corner of the window.
Changing the Language of the graphical login screen will also adjust the system-wide default language setting stored in the /etc/locale.conf configuration file.

Important
Text-based virtual consoles such as tty4 are more limited in the fonts they can display than terminals in a virtual console running a graphical environment, or pseudo-terminals for ssh sessions. For example, Japanese, Korean, and Chinese characters may not display as expected on a text-based virtual console. For this reason, you should consider using English or another language with a Latin character set for the system-wide default.
Likewise, text-based virtual consoles are more limited in the input methods they support, and this is managed separately from the graphical desktop environment. The available global input settings can be configured through localectl for both text-based virtual consoles and the graphical environment. See the localectl(1) and vconsole.conf(5) man pages for more information.

Language Packs

Special RPM packages called langpacks install language packages that add support for specific languages. These langpacks use dependencies to automatically install additional RPM packages containing localizations, dictionaries, and translations for other software packages on your system.

To list the langpacks that are installed and that may be installed, use yum list langpacks-*:

[root@host ~]# yum list langpacks-*
Updating Subscription Management repositories.
Updating Subscription Management repositories.
Installed Packages
langpacks-en.noarch         1.0-12.el8    @AppStream
Available Packages
langpacks-af.noarch         1.0-12.el8    rhel-8-for-x86_64-appstream-rpms
langpacks-am.noarch         1.0-12.el8    rhel-8-for-x86_64-appstream-rpms
langpacks-ar.noarch         1.0-12.el8    rhel-8-for-x86_64-appstream-rpms
langpacks-as.noarch         1.0-12.el8    rhel-8-for-x86_64-appstream-rpms
langpacks-ast.noarch        1.0-12.el8    rhel-8-for-x86_64-appstream-rpms
...output omitted...

To add language support, install the appropriate langpacks package. For example, the following command adds support for French:

[root@host ~]# yum install langpacks-fr

Use yum repoquery --whatsupplements to determine what RPM packages may be installed by a langpack:

[root@host ~]# yum repoquery --whatsupplements langpacks-fr
Updating Subscription Management repositories.
Updating Subscription Management repositories.
Last metadata expiration check: 0:01:33 ago on Wed 06 Feb 2019 10:47:24 AM CST.
glibc-langpack-fr-0:2.28-18.el8.x86_64
gnome-getting-started-docs-fr-0:3.28.2-1.el8.noarch
hunspell-fr-0:6.2-1.el8.noarch
hyphen-fr-0:3.0-1.el8.noarch
libreoffice-langpack-fr-1:6.0.6.1-9.el8.x86_64
man-pages-fr-0:3.70-16.el8.noarch
mythes-fr-0:2.3-10.el8.noarch

Important
Langpacks packages use RPM weak dependencies in order to install supplementary packages only when the core package that needs it is also installed. For example, when installing langpacks-fr as shown in the preceding examples, the mythes-fr package will only be installed if the mythes thesaurus is also installed on the system. If mythes is subsequently installed on that system, the mythes-fr package will also automatically be installed due to the weak dependency from the already installed langpacks-fr package.

References
locale(7), localectl(1), locale.conf(5), vconsole.conf(5), unicode(7), and utf-8(7) man pages
Conversions between the names of the graphical desktop environment's X11 layouts and their names in localectl can be found in the file /usr/share/X11/xkb/rules/base.lst.
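To tie the localectl commands in this section together, the following hypothetical session sketches how a root user might review the current system-wide settings and search for an available locale before changing it. The French locale is only an example, and the output shown is abbreviated:

[root@host ~]# localectl
   System Locale: LANG=en_US.utf8
       VC Keymap: us
      X11 Layout: us
[root@host ~]# localectl list-locales | grep -i fr_FR
...output omitted...
[root@host ~]# localectl set-locale LANG=fr_FR.utf8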
Language Codes Reference

Note
This table might not reflect all langpacks available on your system. Use yum info langpacks-SUFFIX to get more information about any particular langpacks package.

Language Codes

Language                  Langpacks Suffix    $LANG value
English (US)              en                  en_US.utf8
Assamese                  as                  as_IN.utf8
Bengali                   bn                  bn_IN.utf8
Chinese (Simplified)      zh_CN               zh_CN.utf8
Chinese (Traditional)     zh_TW               zh_TW.utf8
French                    fr                  fr_FR.utf8
German                    de                  de_DE.utf8
Gujarati                  gu                  gu_IN.utf8
Hindi                     hi                  hi_IN.utf8
Italian                   it                  it_IT.utf8
Japanese                  ja                  ja_JP.utf8
Kannada                   kn                  kn_IN.utf8
Korean                    ko                  ko_KR.utf8
Malayalam                 ml                  ml_IN.utf8
Marathi                   mr                  mr_IN.utf8
Odia                      or                  or_IN.utf8
Portuguese (Brazilian)    pt_BR               pt_BR.utf8
Punjabi                   pa                  pa_IN.utf8
Russian                   ru                  ru_RU.utf8
Spanish                   es                  es_ES.utf8
Tamil                     ta                  ta_IN.utf8
Telugu                    te                  te_IN.utf8

Chapter 1. Introducing Container Technology

Goal: Describe how applications run in containers orchestrated by Red Hat OpenShift Container Platform.

Objectives:
    Describe the difference between container applications and traditional deployments.
    Describe the basics of container architecture.
    Describe the benefits of orchestrating applications and OpenShift Container Platform.

Sections:
    Overview of Container Technology (and Quiz)
    Overview of Container Architecture (and Quiz)
    Overview of Kubernetes and OpenShift (and Quiz)

Overview of Container Technology

Objectives
After completing this section, students should be able to describe the difference between container applications and traditional deployments.

Containerized Applications

Software applications typically depend on other libraries, configuration files, or services that are provided by the runtime environment. The traditional runtime environment for a software application is a physical host or virtual machine, and application dependencies are installed as part of the host. For example, consider a Python application that requires access to a common shared library that implements the TLS protocol. Traditionally, a system administrator installs the required package that provides the shared library before installing the Python application.

The major drawback to traditionally deployed software applications is that their dependencies are entangled with the runtime environment. An application may break when any updates or patches are applied to the base operating system (OS). For example, an OS update to the TLS shared library removes TLS 1.0 as a supported protocol. This breaks the deployed Python application because it is written to use the TLS 1.0 protocol for network requests. This forces the system administrator to roll back the OS update to keep the application running, preventing other applications from using the benefits of the updated package. Therefore, a company developing traditional software applications may require a full set of tests to guarantee that an OS update does not affect applications running on the host.

Furthermore, a traditionally deployed application must be stopped before updating the associated dependencies. To minimize application downtime, organizations design and implement complex systems to provide high availability of their applications. Maintaining multiple applications on a single host often becomes cumbersome, and any deployment or update has the potential to break one of the organization's applications.
Figure 1.1 describes the difference between applications running as containers and applications running on the host operating system. Figure 1.1: Container versus operating system differences 2 DO180-OCP4.5-en-2-20200911 Chapter 1 | Introducing Container Technology Alternatively, a software application can be deployed using a container. A container is a set of one or more processes that are isolated from the rest of the system. Containers provide many of the same benefits as virtual machines, such as security, storage, and network isolation. Containers require far fewer hardware resources and are quick to start and terminate. They also isolate the libraries and the runtime resources (such as CPU and storage) for an application to minimize the impact of any OS update to the host OS, as described in Figure 1.1. The use of containers not only helps with the efficiency, elasticity, and reusability of the hosted applications, but also with application portability. The Open Container Initiative provides a set of industry standards that define a container runtime specification and a container image specification. The image specification defines the format for the bundle of files and metadata that form a container image. When you build an application as a container image, which complies with the OCI standard, you can use any OCI-compliant container engine to execute the application. There are many container engines available to manage and execute individual containers, including Rocket, Drawbridge, LXC, Docker, and Podman. Podman is available in Red Hat Enterprise Linux 7.6 and later, and is used in this course to start, manage, and terminate individual containers. The following are other major advantages to using containers: Low hardware footprint Containers use OS internal features to create an isolated environment where resources are managed using OS facilities such as namespaces and cgroups. This approach minimizes the amount of CPU and memory overhead compared to a virtual machine hypervisor. Running an application in a VM is a way to create isolation from the running environment, but it requires a heavy layer of services to support the same low hardware footprint isolation provided by containers. Environment isolation Containers work in a closed environment where changes made to the host OS or other applications do not affect the container. Because the libraries needed by a container are self- contained, the application can run without disruption. For example, each application can exist in its own container with its own set of libraries. An update made to one container does not affect other containers. Quick deployment Containers deploy quickly because there is no need to install the entire underlying operating system. Normally, to support the isolation, a new OS installation is required on a physical host or VM, and any simple update might require a full OS restart. A container restart does not require stopping any services on the host OS. Multiple environment deployment In a traditional deployment scenario using a single host, any environment differences could break the application. Using containers, however, all application dependencies and environment settings are encapsulated in the container image. Reusability The same container can be reused without the need to set up a full OS. For example, the same database container that provides a production database service can be used by each developer to create a development database during application development. 
Using containers, there is no longer a need to maintain separate production and development database servers. A single container image is used to create instances of the database service. Often, a software application with all of its dependent services (databases, messaging, file systems) are made to run in a single container. This can lead to the same problems associated DO180-OCP4.5-en-2-20200911 3 Chapter 1 | Introducing Container Technology with traditional software deployments to virtual machines or physical hosts. In these instances, a multicontainer deployment may be more suitable. Furthermore, containers are an ideal approach when using microservices for application development. Each service is encapsulated in a lightweight and reliable container environment that can be deployed to a production or development environment. The collection of containerized services required by an application can be hosted on a single machine, removing the need to manage a machine for each service. In contrast, many applications are not well suited for a containerized environment. For example, applications accessing low-level hardware information, such as memory, file systems, and devices may be unreliable due to container limitations. References Home - Open Containers Initiative https://www.opencontainers.org/ 4 DO180-OCP4.5-en-2-20200911 Chapter 1 | Introducing Container Technology Quiz Overview of Container Technology Choose the correct answers to the following questions: 1. Which two options are examples of software applications that might run in a container? (Choose two.) a. A database-driven Python application accessing services such as a MySQL database, a file transfer protocol (FTP) server, and a web server on a single physical host. b. A Java Enterprise Edition application, with an Oracle database, and a message broker running on a single VM. c. An I/O monitoring tool responsible for analyzing the traffic and block data transfer. d. A memory dump application tool capable of taking snapshots from all the memory CPU caches for debugging purposes. 2. Which two of the following use cases are best suited for containers? (Choose two.) a. A software provider needs to distribute software that can be reused by other companies in a fast and error-free way. b. A company is deploying applications on a physical host and would like to improve its performance by using containers. c. Developers at a company need a disposable environment that mimics the production environment so that they can quickly test the code they develop. d. A financial company is implementing a CPU-intensive risk analysis tool on their own containers to minimize the number of processors needed. 3. A company is migrating their PHP and Python applications running on the same host to a new architecture. Due to internal policies, both are using a set of custom made shared libraries from the OS, but the latest update applied to them as a result of a Python development team request broke the PHP application. Which two architectures would provide the best support for both applications? (Choose two.) a. Deploy each application to different VMs and apply the custom made shared libraries individually to each VM host. b. Deploy each application to different containers and apply the custom made shared libraries individually to each container. c. Deploy each application to different VMs and apply the custom made shared libraries to all VM hosts. d. 
Deploy each application to different containers and apply the custom made shared libraries to all containers. DO180-OCP4.5-en-2-20200911 5 Chapter 1 | Introducing Container Technology 4. Which three kinds of applications can be packaged as containers for immediate consumption? (Choose three.) a. A virtual machine hypervisor b. A blog software, such as WordPress c. A database d. A local file system recovery tool e. A web server 6 DO180-OCP4.5-en-2-20200911 Chapter 1 | Introducing Container Technology Solution Overview of Container Technology Choose the correct answers to the following questions: 1. Which two options are examples of software applications that might run in a container? (Choose two.) a. A database-driven Python application accessing services such as a MySQL database, a file transfer protocol (FTP) server, and a web server on a single physical host. b. A Java Enterprise Edition application, with an Oracle database, and a message broker running on a single VM. c. An I/O monitoring tool responsible for analyzing the traffic and block data transfer. d. A memory dump application tool capable of taking snapshots from all the memory CPU caches for debugging purposes. 2. Which two of the following use cases are best suited for containers? (Choose two.) a. A software provider needs to distribute software that can be reused by other companies in a fast and error-free way. b. A company is deploying applications on a physical host and would like to improve its performance by using containers. c. Developers at a company need a disposable environment that mimics the production environment so that they can quickly test the code they develop. d. A financial company is implementing a CPU-intensive risk analysis tool on their own containers to minimize the number of processors needed. 3. A company is migrating their PHP and Python applications running on the same host to a new architecture. Due to internal policies, both are using a set of custom made shared libraries from the OS, but the latest update applied to them as a result of a Python development team request broke the PHP application. Which two architectures would provide the best support for both applications? (Choose two.) a. Deploy each application to different VMs and apply the custom made shared libraries individually to each VM host. b. Deploy each application to different containers and apply the custom made shared libraries individually to each container. c. Deploy each application to different VMs and apply the custom made shared libraries to all VM hosts. d. Deploy each application to different containers and apply the custom made shared libraries to all containers. DO180-OCP4.5-en-2-20200911 7 Chapter 1 | Introducing Container Technology 4. Which three kinds of applications can be packaged as containers for immediate consumption? (Choose three.) a. A virtual machine hypervisor b. A blog software, such as WordPress c. A database d. A local file system recovery tool e. A web server 8 DO180-OCP4.5-en-2-20200911 Chapter 1 | Introducing Container Technology Overview of Container Architecture Objectives After completing this section, students should be able to: Describe the architecture of Linux containers. Install the podman utility to manage containers. Introducing Container History Containers have quickly gained popularity in recent years. However, the technology behind containers has been around for a relatively long time. In 2001, Linux introduced a project named VServer. 
VServer was the first attempt at running complete sets of processes inside a single server with a high degree of isolation. From VServer, the idea of isolated processes further evolved and became formalized around the following features of the Linux kernel: Namespaces The kernel can isolate specific system resources, usually visible to all processes, by placing the resources within a namespace. Inside a namespace, only processes that are members of that namespace can see those resources. Namespaces can include resources like network interfaces, the process ID list, mount points, IPC resources, and the system's host name information. Control groups (cgroups) Control groups partition sets of processes and their children into groups to manage and limit the resources they consume. Control groups place restrictions on the amount of system resources processes might use. Those restrictions keep one process from using too many resources on the host. Seccomp Developed in 2005 and introduced to containers circa 2014, Seccomp limits how processes could use system calls. Seccomp defines a security profile for processes, whitelisting the system calls, parameters and file descriptors they are allowed to use. SELinux SELinux (Security-Enhanced Linux) is a mandatory access control system for processes. Linux kernel uses SELinux to protect processes from each other and to protect the host system from its running processes. Processes run as a confined SELinux type that has limited access to host system resources. All of these innovations and features focus around a basic concept: enabling processes to run isolated while still accessing system resources. This concept is the foundation of container technology and the basis for all container implementations. Nowadays, containers are processes in Linux kernel making use of those security features to create an isolated environment. This environment forbids isolated processes from misusing system or other container resources. A common use case of containers is having several replicas of the same service (for example, a database server) in the same host. Each replica has isolated resources (file system, ports, DO180-OCP4.5-en-2-20200911 9 Chapter 1 | Introducing Container Technology memory), so there is no need for the service to handle resource sharing. Isolation guarantees that a malfunctioning or harmful service does not impact other services or containers in the same host, nor in the underlying system. Describing Linux Container Architecture From the Linux kernel perspective, a container is a process with restrictions. However, instead of running a single binary file, a container runs an image. An image is a file-system bundle that contains all dependencies required to execute a process: files in the file system, installed packages, available resources, running processes, and kernel modules. Like executable files are the foundation for running processes, images are the foundation for running containers. Running containers use an immutable view of the image, allowing multiple containers to reuse the same image simultaneously. As images are files, they can be managed by versioning systems, improving automation on container and image provisioning. Container images need to be locally available for the container runtime to execute them, but the images are usually stored and maintained in an image repository. An image repository is just a service - public or private - where images can be stored, searched and retrieved. 
Other features provided by image repositories are remote access, image metadata, authorization or image version control. There are many different image repositories available, each one offering different features: Red Hat Container Catalog [https://registry.redhat.io] Docker Hub [https://hub.docker.com] Red Hat Quay [https://quay.io/] Google Container Registry [https://cloud.google.com/container-registry/] Amazon Elastic Container Registry [https://aws.amazon.com/ecr/] This course uses the public image registry Quay, so students can operate with images without worrying about interfering with each other. Managing Containers with Podman Containers, images, and image registries need to be able to interact with each other. For example, you need to be able to build images and put them into image registries. You also need to be able to retrieve an image from the image registry and build a container from that image. Podman is an open source tool for managing containers and container images and interacting with image registries. It offers the following key features: It uses image format specified by the Open Container Initiative [https:// www.opencontainers.org] (OCI). Those specifications define an standard, community-driven, non-proprietary image format. Podman stores local images in local file-system. Doing so avoids unnecessary client/server architecture or having daemons running on local machine. Podman follows the same command patterns as the Docker CLI, so there is no need to learn a new toolset. Podman is compatible with Kubernetes. Kubernetes can use Podman to manage its containers. 10 DO180-OCP4.5-en-2-20200911 Chapter 1 | Introducing Container Technology Currently, Podman is only available on Linux systems. To install Podman in Red Hat Enterprise Linux, Fedora or similar RPM-based systems, run sudo yum install podman or sudo dnf install podman. References Red Hat Quay Container Registry https://quay.io Podman site https://podman.io/ Open Container Initiative https://www.opencontainers.org DO180-OCP4.5-en-2-20200911 11 Chapter 1 | Introducing Container Technology Quiz Overview of Container Architecture Choose the correct answers to the following questions: 1. Which three of the following Linux features are used for running containers? (Choose three.) a. Namespaces b. Integrity Management c. Security-Enhanced Linux d. Control Groups 2. Which of the following best describes a container image? a. A virtual machine image from which a container will be created. b. A container blueprint from which a container will be created. c. A runtime environment where an application will run. d. The container's index file used by a registry. 3. Which three of the following components are common across container architecture implementations? (Choose three.) a. Container runtime b. Container permissions c. Container images d. Container registries 4. What is a container in relation to the Linux kernel? a. A virtual machine. b. An isolated process with regulated resource access. c. A set of file-system layers exposed by UnionFS. d. An external service providing container images. 12 DO180-OCP4.5-en-2-20200911 Chapter 1 | Introducing Container Technology Solution Overview of Container Architecture Choose the correct answers to the following questions: 1. Which three of the following Linux features are used for running containers? (Choose three.) a. Namespaces b. Integrity Management c. Security-Enhanced Linux d. Control Groups 2. Which of the following best describes a container image? a. 
A virtual machine image from which a container will be created. b. A container blueprint from which a container will be created. c. A runtime environment where an application will run. d. The container's index file used by a registry. 3. Which three of the following components are common across container architecture implementations? (Choose three.) a. Container runtime b. Container permissions c. Container images d. Container registries 4. What is a container in relation to the Linux kernel? a. A virtual machine. b. An isolated process with regulated resource access. c. A set of file-system layers exposed by UnionFS. d. An external service providing container images. DO180-OCP4.5-en-2-20200911 13 Chapter 1 | Introducing Container Technology Overview of Kubernetes and OpenShift Objectives After completing this section, students should be able to: Identify the limitations of Linux containers and the need for container orchestration. Describe the Kubernetes container orchestration tool. Describe Red Hat OpenShift Container Platform (RHOCP). Limitations of Containers Containers provide an easy way to package and run services. As the number of containers managed by an organization grows, the work of manually starting them rises exponentially along with the need to quickly respond to external demands. When using containers in a production environment, enterprises often require: Easy communication between a large number of services. Resource limits on applications regardless of the number of containers running them. Respond to application usage spikes to increase or decrease running containers. React to service deterioration. Gradually roll out a new release to a set of users. Enterprises often require a container orchestration technology because container runtimes (such as Podman) do not adequately address the above requirements. Kubernetes Overview Kubernetes is an orchestration service that simplifies the deployment, management, and scaling of containerized applications. The smallest unit manageable in Kubernetes is a pod. A pod consists of one or more containers with its storage resources and IP address that represent a single application. Kubernetes also uses pods to orchestrate the containers inside it and to limit its resources as a single unit. Kubernetes Features Kubernetes offers the following features on top of a container infrastructure: Service discovery and load balancing Kubernetes enables inter-service communication by assigning a single DNS entry to each set of containers. This way, the requesting service only needs to know the target's DNS name, allowing the cluster to change the container's location and IP address, leaving the service unaffected. This permits load-balancing the request across the pool of containers providing the service. For example, Kubernetes can evenly split incoming requests to a MySQL service taking into account the availability of the pods. 14 DO180-OCP4.5-en-2-20200911 Chapter 1 | Introducing Container Technology Horizontal scaling Applications can scale up and down manually or automatically with configuration set either with the Kubernetes command-line interface or the web UI. Self-healing Kubernetes can use user-defined health checks to monitor containers to restart and reschedule them in case of failure. Automated rollout Kubernetes can gradually roll updates out to your application's containers while checking their status. If something goes wrong during the rollout, Kubernetes can roll back to the previous iteration of the deployment. 
Secrets and configuration management You can manage configuration settings and secrets of your applications without rebuilding containers. Application secrets can be user names, passwords, and service endpoints; any configuration settings that need to be kept private. Operators Operators are packaged Kubernetes applications that also bring the knowledge of the application's life cycle into the Kubernetes cluster. Applications packaged as Operators use the Kubernetes API to update the cluster's state reacting to changes in the application state. OpenShift Overview Red Hat OpenShift Container Platform (RHOCP) is a set of modular components and services built on top of a Kubernetes container infrastructure. RHOCP adds the capabilities to provide a production PaaS platform such as remote management, multitenancy, increased security, monitoring and auditing, application life-cycle management, and self-service interfaces for developers. Beginning with Red Hat OpenShift v4, hosts in an OpenShift cluster all use Red Hat Enterprise Linux CoreOS as the underlying operating system. Throughout this course, the terms RHOCP and OpenShift are used to refer to the Red Hat OpenShift Container Platform. OpenShift Features OpenShift adds the following features to a Kubernetes cluster: Integrated developer workflow RHOCP integrates a built-in container registry, CI/CD pipelines, and S2I; a tool to build artifacts from source repositories to container images. Routes Easily expose services to the outside world. Metrics and logging Include built-in and self-analyzing metrics service and aggregated logging. Unified UI OpenShift brings unified tools and a UI to manage all the different capabilities. DO180-OCP4.5-en-2-20200911 15 Chapter 1 | Introducing Container Technology References Production-Grade Container Orchestration - Kubernetes https://kubernetes.io/ OpenShift: Container Application Platform by Red Hat, Built on Docker and Kubernetes https://www.openshift.com/ 16 DO180-OCP4.5-en-2-20200911 Chapter 1 | Introducing Container Technology Quiz Describing Kubernetes and OpenShift Choose the correct answers to the following questions: 1. Which three of the following statements are correct regarding container limitations? (Choose three.) a. Containers are easily orchestrated in large numbers. b. Lack of automation increases response time to problems. c. Containers do not manage application failure inside them. d. Containers are not load-balanced. e. Containers are heavily isolated packaged applications. 2. Which two of the following statements are correct regarding Kubernetes? (Choose two.) a. Kubernetes is a container. b. Kubernetes can only use Docker containers. c. Kubernetes is a container orchestration system. d. Kubernetes simplifies management, deployment, and scaling of containerized applications. e. Applications managed in a Kubernetes cluster are harder to maintain. 3. Which three of the following statements are true regarding Red Hat OpenShift v4? (Choose three.) a. OpenShift provides additional features to a Kubernetes infrastructure. b. Kubernetes and OpenShift are mutually exclusive. c. OpenShift hosts use Red Hat Enterprise Linux as the base operating system. d. OpenShift simplifies development incorporating a Source-to-Image technology and CI/ CD pipelines. e. OpenShift simplifies routing and load balancing. 4. What features does OpenShift offer that extend Kubernetes capabilities? (choose two.) a. Operators and the Operator Framework. b. Routes to expose services to the outside world. c. 
An integrated development workflow. d. Self-healing and health checks. DO180-OCP4.5-en-2-20200911 17 Chapter 1 | Introducing Container Technology Solution Describing Kubernetes and OpenShift Choose the correct answers to the following questions: 1. Which three of the following statements are correct regarding container limitations? (Choose three.) a. Containers are easily orchestrated in large numbers. b. Lack of automation increases response time to problems. c. Containers do not manage application failure inside them. d. Containers are not load-balanced. e. Containers are heavily isolated packaged applications. 2. Which two of the following statements are correct regarding Kubernetes? (Choose two.) a. Kubernetes is a container. b. Kubernetes can only use Docker containers. c. Kubernetes is a container orchestration system. d. Kubernetes simplifies management, deployment, and scaling of containerized applications. e. Applications managed in a Kubernetes cluster are harder to maintain. 3. Which three of the following statements are true regarding Red Hat OpenShift v4? (Choose three.) a. OpenShift provides additional features to a Kubernetes infrastructure. b. Kubernetes and OpenShift are mutually exclusive. c. OpenShift hosts use Red Hat Enterprise Linux as the base operating system. d. OpenShift simplifies development incorporating a Source-to-Image technology and CI/ CD pipelines. e. OpenShift simplifies routing and load balancing. 4. What features does OpenShift offer that extend Kubernetes capabilities? (choose two.) a. Operators and the Operator Framework. b. Routes to expose services to the outside world. c. An integrated development workflow. d. Self-healing and health checks. 18 DO180-OCP4.5-en-2-20200911 Chapter 1 | Introducing Container Technology Guided Exercise Configuring the Classroom Environment In this exercise, you will configure the workstation to access all infrastructure used by this course. Outcomes You should be able to: Configure your workstation to access an OpenShift cluster, a container image registry, and a Git repository used throughout the course. Fork this course's sample applications repository to your personal GitHub account. Clone this course's sample applications repository from your personal GitHub account to your workstation VM. Before You Begin To perform this exercise, ensure you have: Access to the DO180 course in the Red Hat Training's Online Learning Environment. The connection parameters and a developer user account to access an OpenShift cluster managed by Red Hat Training. A personal, free GitHub account. If you need to register to GitHub, see the instructions in Appendix B, Creating a GitHub Account. A personal, free Quay.io account. If you need to register to Quay.io, see the instructions in Appendix C, Creating a Quay Account. 1. Before starting any exercise, you need to configure your workstation VM. For the following steps, use the values the Red Hat Training Online Learning environment provides to you when you provision your online lab environment: DO180-OCP4.5-en-2-20200911 19 Chapter 1 | Introducing Container Technology Open a terminal on your workstation VM and execute the following command. Answer its interactive prompts to configure your workstation before starting any other exercise in this course. If you make a mistake, you can interrupt the command at any time using Ctrl+C and start over. [student@workstation ~]$ lab-configure 1.1. 
1.1. The lab-configure command starts by displaying a series of interactive prompts, and it tries to find sensible defaults for some of them.

This script configures the connection parameters to access the OpenShift cluster for your lab scripts

· Enter the API Endpoint: https://api.cluster.domain.example.com:6443
· Enter the Username: youruser
· Enter the Password: yourpassword
· Enter the GitHub Account Name: yourgituser
· Enter the Quay.io Account Name: yourquayuser
...output omitted...

The URL to your OpenShift cluster's Master API. Type the URL as a single line, without spaces or line breaks. Red Hat Training provides this information to you when you provision your lab environment. You need this information to log in to the cluster and also to deploy containerized applications.

Your OpenShift developer user name and password. Red Hat Training provides this information to you when you provision your lab environment. You need to use this user name and password to log in to OpenShift. You will also use your user name as part of identifiers, such as route host names and project names, to avoid collisions with identifiers from other students who share the same OpenShift cluster with you.

Your personal GitHub and Quay.io account names. You need valid, free accounts on these online services to perform this course's exercises. If you have never used any of these online services, refer to Appendix B, Creating a GitHub Account and Appendix C, Creating a Quay Account for instructions about how to register.

Note
If you use two-factor authentication with your GitHub account, you may want to create a personal access token for use from the workstation VM during the course. Refer to the following documentation on how to set up a personal access token for your account:
Creating a personal access token for the command line
[https://help.github.com/en/articles/creating-a-personal-access-token-for-the-command-line]

1.2. The lab-configure command prints all the information that you entered and tries to connect to your OpenShift cluster:

...output omitted...
You entered:
· API Endpoint: https://api.cluster.domain.example.com:6443
· Username: youruser
· Password: yourpassword
· GitHub Account Name: yourgituser
· Quay.io Account Name: yourquayuser
...output omitted...

1.3. If lab-configure finds any issues, it displays an error message and exits. You will need to verify your information and run the lab-configure command again. The following listing shows an example of a verification error:

...output omitted...
Verifying your Master API URL...
ERROR: Cannot connect to an OpenShift 4.5 API using your URL. Please verify your network connectivity and that the URL does not point to an OpenShift 3.x nor to a non-OpenShift Kubernetes API.

No changes made to your lab configuration.

1.4. If everything is OK so far, the lab-configure command tries to access your public GitHub and Quay.io accounts:

...output omitted...
Verifying your GitHub account name...
Verifying your Quay.io account name...
...output omitted...

1.5. Again, lab-configure displays an error message and exits if it finds any issues. You will need to verify your information and run the lab-configure command again. The following listing shows an example of a verification error:

...output omitted...
Verifying your GitHub account name...
ERROR: Cannot find a GitHub account named: invalidusername.
No changes made to your lab configuration.

1.6. Finally, the lab-configure command verifies that your OpenShift cluster reports the expected wildcard domain.

...output omitted...
Verifying your cluster configuration...
...output omitted...

1.7. If all checks pass, the lab-configure command saves your configuration:

...output omitted...
Saving your lab configuration file...
All fine, lab config saved. You can now proceed with your exercises.

1.8. If there were no errors saving your configuration, you are almost ready to start any of this course's exercises. If there were any errors, do not try to start any exercise until you can execute the lab-configure command successfully.

2. Before starting any exercise, you need to fork this course's sample applications into your personal GitHub account. Perform the following steps:

2.1. Open a web browser and navigate to https://github.com/RedHatTraining/DO180-apps. If you are not logged in to GitHub, click Sign in in the upper-right corner.

2.2. Log in to GitHub using your personal user name and password.

2.3. Return to the RedHatTraining/DO180-apps repository and click Fork in the upper-right corner.

2.4. In the Fork DO180-apps window, click yourgituser to select your personal GitHub project.

Important
While it is possible to rename your personal fork of the https://github.com/RedHatTraining/DO180-apps repository, grading scripts, helper scripts, and the example output in this course assume that you retain the name DO180-apps when you fork the repository.

2.5. After a few minutes, the GitHub web interface displays your new repository yourgituser/DO180-apps.

3. Before starting any exercise, you also need to clone this course's sample applications from your personal GitHub account to your workstation VM. Perform the following steps:

3.1. Run the following command to clone this course's sample applications repository. Replace yourgituser with the name of your personal GitHub account:

[student@workstation ~]$ git clone https://github.com/yourgituser/DO180-apps
Cloning into 'DO180-apps'...
...output omitted...

3.2. Verify that /home/student/DO180-apps is a Git repository:

[student@workstation ~]$ cd DO180-apps
[student@workstation DO180-apps]$ git status
# On branch master
nothing to commit, working directory clean

3.3. Verify that /home/student/DO180-apps contains this course's sample applications, and change back to the student user's home folder.

[student@workstation DO180-apps]$ head README.md
# DO180-apps
...output omitted...
[student@workstation DO180-apps]$ cd ~
[student@workstation ~]$

4. Now that you have a local clone of the DO180-apps repository on your workstation VM, and you have executed the lab-configure command successfully, you are ready to start this course's exercises.

During this course, all exercises that build applications from source start from the master branch of the DO180-apps Git repository. Exercises that make changes to source code require you to create new branches to host your changes, so that the master branch always contains a known good starting point (a sketch of this branch workflow follows this exercise). If for some reason you need to pause or restart an exercise, and need to either save or discard the changes you made in your Git branches, refer to Appendix D, Useful Git Commands.

This concludes the guided exercise.
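The following commands are a minimal sketch of the branch workflow described in step 4, assuming the DO180-apps clone created in this exercise. The branch name my-exercise is only an illustrative placeholder; each exercise states the exact branch name to use.

[student@workstation ~]$ cd ~/DO180-apps
[student@workstation DO180-apps]$ git checkout master
[student@workstation DO180-apps]$ git pull
[student@workstation DO180-apps]$ git checkout -b my-exercise
[student@workstation DO180-apps]$ git push -u origin my-exercise

Because each change lives on its own branch, you can always return to a known good starting point by checking out the master branch again.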
Summary

In this chapter, you learned:

Containers are isolated application runtimes created with very little overhead.

A container image packages an application with all of its dependencies, making it easier to run the application in different environments.

Applications such as Podman create containers using features of the standard Linux kernel.

Container image registries are the preferred mechanism for distributing container images to multiple users and hosts.

OpenShift orchestrates applications composed of multiple containers using Kubernetes.

Kubernetes manages load balancing, high availability, and persistent storage for containerized applications.

OpenShift adds multitenancy, security, ease of use, and continuous integration and continuous delivery features to Kubernetes.

OpenShift routes enable external access to containerized applications in a manageable way.

Chapter 2
Creating Containerized Services

Goal
Provision a service using container technology.
Objectives
Create a database server from a container image.
Sections
Provisioning a Containerized Database Server (and Guided Exercise)
Lab
Creating Containerized Services

Provisioning Containerized Services

Objectives
After completing this section, students should be able to:
Search for and fetch container images with Podman.
Run and configure containers locally.
Use the Red Hat Container Catalog.

Fetching Container Images with Podman

Applications can run inside containers as a way to provide them with an isolated and controlled execution environment. Running a containerized application, that is, running an application inside a container, requires a container image: a file system bundle that provides all the application files, libraries, and dependencies the application needs to run.

Container images can be found in image registries: services that allow users to search for and retrieve container images. Podman users can use the search subcommand to find available images from remote or local registries:

[student@workstation ~]$ sudo podman search rhel
INDEX        NAME                              DESCRIPTION   STARS   OFFICIAL   AUTOMATED
redhat.com   registry.access.redhat.com/rhel   This plat...  0
...output omitted...

After you have found an image, you can use Podman to download it. When using the pull subcommand, Podman fetches the image and saves it locally for future use:

[student@workstation ~]$ sudo podman pull rhel
Trying to pull registry.access.redhat.com/rhel...Getting image source signatures
Copying blob sha256:...output omitted...
 72.25 MB / 72.25 MB [======================================================] 8s
Copying blob sha256:...output omitted...
 1.20 KB / 1.20 KB [========================================================] 0s
Copying config sha256:...output omitted...
 6.30 KB / 6.30 KB [========================================================] 0s
Writing manifest to image destination
Storing signatures
699d44bc6ea2b9fb23e7899bd4023d3c83894d3be64b12e65a3fe63e2c70f0ef

Container images are named based on the following syntax:

registry_name/user_name/image_name:tag

registry_name, the name of the registry storing the image, comes first. It is usually the FQDN of the registry.

user_name stands for the user or organization the image belongs to.

The image_name should be unique in the user namespace.
The tag identifies the image version. If the image name includes no image tag, latest is assumed.

Note
This classroom's Podman installation uses several publicly available registries, such as Quay.io and the Red Hat Container Catalog.

After retrieval, Podman stores images locally and you can list them with the images subcommand:

[student@workstation ~]$ sudo podman images
REPOSITORY                        TAG      IMAGE ID       CREATED      SIZE
registry.access.redhat.com/rhel   latest   699d44bc6ea2   4 days ago   214MB
...output omitted...

Running Containers

The podman run command runs a container locally based on an image. At a minimum, the command requires the name of the image to execute in the container.

The container image specifies a process that starts inside the container, known as the entry point. The podman run command uses all parameters after the image name as the entry point command for the container. The following example starts a container from a Red Hat Enterprise Linux image. It sets the entry point for this container to the echo "Hello world" command.

[student@workstation ~]$ sudo podman run ubi7/ubi:7.7 echo 'Hello world'
Hello world

To start a container image as a background process, pass the -d option to the podman run command:

[student@workstation ~]$ sudo podman run -d rhscl/httpd-24-rhel7:2.4-36.8
ff4ec6d74e9b2a7b55c49f138e56f8bc46fe2a09c23093664fea7febc3dfa1b2
[student@workstation ~]$ sudo podman inspect -l \
> -f "{{.NetworkSettings.IPAddress}}"
10.88.0.68
[student@workstation ~]$ curl http://10.88.0.68:8080
...output omitted...
...output omitted...
Test Page for the Apache HTTP Server on Red Hat Enterprise Linux
...output omitted...

The previous example runs a containerized Apache HTTP server in the background. It then uses the podman inspect command to retrieve the container's internal IP address from the container metadata. Finally, it uses that IP address to fetch the root page from the Apache HTTP server. This response proves the container is still up and running after the podman run command.

Note
Most Podman subcommands accept the -l flag (l for latest) as a replacement for the container ID. This flag applies the command to the most recently used container.

Note
If the image to be executed is not available locally when using the podman run command, Podman automatically uses pull to download the image.

When referencing a container, Podman recognizes it either by the container name or by the generated container ID. Use the --name option to set the container name when running the container with Podman. Container names must be unique. If the podman run command includes no container name, Podman generates a unique random name.

If the image requires interaction with the user through console input, Podman can redirect container input and output streams to the console. The run subcommand requires the -t and -i flags (or, in short, the -it flag) to enable interactivity.

Note
Many Podman flags also have an alternative long form; some of these are explained below.
-t is equivalent to --tty, meaning a pseudo-tty (pseudo-terminal) is allocated for the container.
-i is the same as --interactive. When used, standard input is kept open into the container.
-d, or its long form --detach, means the container runs in the background (detached). Podman then prints the container ID.
See the Podman documentation for the complete list of flags.
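As an illustration of the notes above, the following is a minimal sketch that combines the long-form options with the -l flag. The container name web-demo is an arbitrary example, the image tag matches the one used earlier in this section, and the exact output can vary between Podman versions.

[student@workstation ~]$ sudo podman run --detach --name web-demo rhscl/httpd-24-rhel7:2.4-36.8
...output omitted...
[student@workstation ~]$ sudo podman inspect -l -f "{{.Name}}"
web-demo

Here --detach and --name start a named background container, and podman inspect -l acts on that container because it is the most recently used one.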
The following example starts a Bash terminal inside the container, and interactively runs some commands in it:

[student@workstation ~]$ sudo podman run -it ubi7/ubi:7.7 /bin/bash
bash-4.2# ls
...output omitted...
bash-4.2# whoami
root
bash-4.2# exit
exit
[student@workstation ~]$

Some containers need, or can use, external parameters provided at startup. The most common approach for providing and consuming those parameters is through environment variables. Podman can inject environment variables into containers at startup by adding the -e flag to the run subcommand:

[student@workstation ~]$ sudo podman run -e GREET=Hello -e NAME=RedHat \
> rhel7:7.5 printenv GREET NAME
Hello
RedHat
[student@workstation ~]$

The previous example starts a RHEL container that prints the two environment variables provided as parameters. Another use case for environment variables is setting up the credentials for a MySQL database server:

[student@workstation ~]$ sudo podman run --name mysql-custom \
> -e MYSQL_USER=redhat -e MYSQL_PASSWORD=r3dh4t \
> -d rhmap47/mysql:5.5

Using the Red Hat Container Catalog

Red Hat maintains its own repository of finely tuned container images. Using this repository provides customers with a layer of protection and reliability against known vulnerabilities that untested images could otherwise introduce. The standard podman command is compatible with the Red Hat Container Catalog.

The Red Hat Container Catalog provides a user-friendly interface for searching and exploring container images from the Red Hat repository. The Container Catalog also serves as a single interface, providing access to different aspects of all the available container images in the repository. It is useful for determining the best image among multiple versions of a container image, based on health index grades. The health index grade indicates how current an image is, and whether it contains the latest security updates.

The Container Catalog also gives access to the errata documentation of an image, which describes the latest bug fixes and enhancements in each update. It also suggests the best technique for pulling an image on each operating system.

The following figures highlight some of the features of the Red Hat Container Catalog.

Figure 2.1: Red Hat Container Catalog search page

As displayed above, searching for Apache in the search box of the Container Catalog displays a suggested list of products and image repositories matching the search pattern. To access the Apache httpd 2.4 image page, select rhscl/httpd-24-rhel7 from the suggested list.

Figure 2.2: Apache httpd 2.4 (rhscl/httpd-24-rhel7) overview image page

The Apache httpd 2.4 panel displays image details and several tabs. This page states that Red Hat maintains the image repository. Under the Overview tab, there are other details:

Description: A summary of the image's capabilities.
Products using this container: Indicates that Red Hat Enterprise Linux uses this image repository.
Most Recent Tag: When the image received its latest update, the latest tag applied to the image, the health of the image, and more.

Figure 2.3: Apache httpd 2.4 (rhscl/httpd-24-rhel7) latest image page

The Get this image tab provides the procedure to get the most current version of the image. The page provides different options to retrieve the image. Choose your preferred procedure in the tabs, and the page provides the appropriate instructions to retrieve the image.
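For reference, the pull command that the Get this image tab generates typically looks similar to the following sketch. The full registry path and the 2.4-36.8 tag are taken from the examples earlier in this section; the exact path and tag that the Container Catalog displays may differ.

[student@workstation ~]$ sudo podman pull registry.access.redhat.com/rhscl/httpd-24-rhel7:2.4-36.8
Trying to pull registry.access.redhat.com/rhscl/httpd-24-rhel7:2.4-36.8...
...output omitted...
Writing manifest to image destination
Storing signatures
...output omitted...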
References

Red Hat Container Catalog
https://registry.redhat.io

Quay.io website
https://quay.io

Guided Exercise

Creating a MySQL Database Instance

In this exercise, you will start a MySQL database inside a container, and then create and populate a database.

Outcomes
You should be able to start a database from a container image and store information inside the database.

Before You Begin
Open a terminal on workstation as the student user and run the following command:

[student@workstation ~]$ lab container-create start

1. Create a MySQL container instance.

1.1. Start a container from the Red Hat Software Collections Library MySQL image.

[student@workstation ~]$ sudo podman run --name mysql-basic \
> -e MYSQL_USER=user1 -e MYSQL_PASSWORD=mypa55 \
> -e MYSQL_DATABASE=items -e MYSQL_ROOT_PASSWORD=r00tpa55 \
> -d rhscl/mysql-57-rhel7:5.7-3.14
Trying to pull...output omitted...
Copying blob sha256:e373541...output omitted...
 69.66 MB / 69.66 MB [===================================================] 8s
Copying blob sha256:c5d2e94...output omitted...
 1.20 KB / 1.20 KB [=====================================================] 0s
Copying blob sha256:b3949ae...output omitted...
 62.03 MB / 62.03 MB [===================================================] 8s
Writing manifest to image destination
Storing signatures
92eaa6b67da0475745b2beffa7e0895391ab34ab3bf1ded99363bb09279a24a0

This command downloads the MySQL container image with the 5.7-3.14 tag, and then starts a container based on that image. It creates a database named items, owned by a user named user1 with mypa55 as the password. The database administrator password is set to r00tpa55 and the container runs in the background.

1.2. Verify that the container started without errors.

[student@workstation ~]$ sudo podman ps --format "{{.ID}} {{.Image}} {{.Names}}"
92eaa6b67da0 registry.access.redhat.com/rhscl/mysql-57-rhel7:5.7-3.14 mysql-basic

2. Access the container sandbox by running the following command:

[student@workstation ~]$ sudo podman exec -it mysql-basic /bin/bash
bash-4.2$

This command starts a Bash shell, running as the mysql user inside the MySQL container.

3. Add data to the database.

3.1. Connect to MySQL as the database administrator user (root). Run the following command from the container terminal to connect to the database:

bash-4.2$ mysql -uroot
Welcome to the MySQL monitor. Commands end with ; or \g.
...output omitted...
mysql>

The mysql command opens the MySQL database interactive prompt. Run the following command to verify that the items database is available:

mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| items              |
| mysql              |
| performance_schema |
| sys                |
+--------------------+
5 rows in set (0.01 sec)

3.2. Create a new table in the items database. Run the following command to access the database.

mysql> use items;
Database changed

3.3. Create a table called Projects in the items database.

mysql> CREATE TABLE Projects (id int(11) NOT NULL,
    -> name varchar(255) DEFAULT NULL,
    -> code varchar(255) DEFAULT NULL,
    -> PRIMARY KEY (id));
Query OK, 0 rows affected (0.01 sec)

You can optionally use the ~/DO180/solutions/container-create/create_table.txt file to copy and paste the CREATE TABLE MySQL statement as given above.
3.4. Use the show tables command to verify that the table was created.

mysql> show tables;
+-----------------+
| Tables_in_items |
+-----------------+
| Projects        |
+-----------------+
1 row in set (0.00 sec)

3.5. Use the insert command to insert a row into the table.

mysql> insert into Projects (id, name, code) values (1,'DevOps','DO180');
Query OK, 1 row affected (0.02 sec)

3.6. Use the select command to verify that the project information was added to the table.

mysql> select * from Projects;
+----+--------+-------+
| id | name   | code  |
+----+--------+-------+
|  1 | DevOps | DO180 |
+----+--------+-------+
1 row in set (0.00 sec)

3.7. Exit from the MySQL prompt and the MySQL container:

mysql> exit
Bye
bash-4.2$ exit
exit

Finish
On workstation, run the lab container-create finish script to complete this lab.

[student@workstation ~]$ lab container-create finish

This concludes the exercise.

Lab

Creating Containerized Services

Performance Checklist
In this lab, you create an Apache HTTP Server container with a custom welcome page.

Outcomes
You should be able to start and customize a container using a container image.

Before You Begin
Open a terminal on workstation as the student user and run the following command:

[student@workstation ~]$ lab container-review start

1. Start a container named httpd-basic in the background, and forward port 8080 to port 80 in the container. Use the redhattraining/httpd-parent container image with the 2.4 tag.

Note
Use the -p 8080:80 option with the sudo podman run command to forward the port.

This command starts the Apache HTTP server in the background and returns to the Bash prompt.

2. Test the httpd-basic container. From workstation, attempt to access http://localhost:8080 using any web browser. A Hello from the httpd-parent container! message is displayed, which is the index.html page from the Apache HTTP server container running on workstation.

3. Customize the httpd-basic container to display Hello World as the message. The container's message is stored in the file /var/www/html/index.html.

3.1. Start a Bash session inside the container.

3.2. From the Bash session, verify that the index.html file exists in the /var/www/html directory using the ls -la command.

3.3. Change the index.html file to contain the text Hello World, replacing all of the existing content.

3.4. Attempt to access http://localhost:8080 again, and verify that the web page has been updated.

Evaluation
Grade your work by running the lab container-review grade command on your workstation machine. Correct any reported failures and rerun the script until successful.

[student@workstation ~]$ lab container-review grade

Finish
On workstation, run the lab container-review finish script to complete this lab.

[student@workstation ~]$ lab container-review finish

This concludes the lab.

Solution

Creating Containerized Services

Performance Checklist
In this lab, you create an Apache HTTP Server container with a custom welcome page.

Outcomes
You should be able to start and customize a container using a container image.

Before You Begin
Open a terminal on workstation as the student user and run the following command:

[student@workstation ~]$ lab container-review start
1. Start a container named httpd-basic in the background, and forward port 8080 to port 80 in the container. Use the redhattraining/httpd-parent container image with the 2.4 tag.

Note
Use the -p 8080:80 option with the sudo podman run command to forward the port.

Run the following command:

[student@workstation ~]$ sudo podman run -d -p 8080:80 \
> --name httpd-basic redhattraining/httpd-parent:2.4
...output omitted...
Copying blob sha256:743f2d6...output omitted... 2