

Installing oVirt as a self-hosted engine using the command line

Table of Contents

1. Installation Overview
2. Requirements
   ◦ 2.1. oVirt Engine Requirements
     ▪ 2.1.1. Hardware Requirements
     ▪ 2.1.2. Browser Requirements
     ▪ 2.1.3. Client Requirements
     ▪ 2.1.4. Operating System Requirements
   ◦ 2.2. Host Requirements
     ▪ 2.2.1. CPU Requirements
     ▪ 2.2.2. Memory Requirements
     ▪ 2.2.3. Storage Requirements
     ▪ 2.2.4. PCI Device Requirements
     ▪ 2.2.5. Device Assignment Requirements
     ▪ 2.2.6. vGPU Requirements
   ◦ 2.3. Networking requirements
     ▪ 2.3.1. General requirements
     ▪ 2.3.2. Network range for self-hosted engine deployment
     ▪ 2.3.3. Firewall Requirements for DNS, NTP, and IPMI Fencing
     ▪ 2.3.4. oVirt Engine Firewall Requirements
     ▪ 2.3.5. Host Firewall Requirements
     ▪ 2.3.6. Database Server Firewall Requirements
     ▪ 2.3.7. Maximum Transmission Unit Requirements
3. Preparing Storage for oVirt
   ◦ 3.1. Preparing NFS Storage
   ◦ 3.2. Preparing iSCSI Storage
   ◦ 3.3. Preparing FCP Storage
   ◦ 3.4. Preparing Gluster Storage
   ◦ 3.5. Customizing Multipath Configurations for SAN Vendors
   ◦ 3.6. Recommended Settings for Multipath.conf
4. Installing the Self-hosted Engine Deployment Host
   ◦ 4.1. Installing oVirt Nodes
   ◦ 4.2. Installing Enterprise Linux hosts
5. Installing the oVirt Engine
   ◦ 5.1. Manually installing the Engine Appliance
   ◦ 5.2. Enabling and configuring the firewall
   ◦ 5.3. Deploying the self-hosted engine using the command line
   ◦ 5.4. Enabling the oVirt Engine Repositories
   ◦ 5.5. Connecting to the Administration Portal
6. Installing Hosts for oVirt
   ◦ 6.1. oVirt Nodes
     ▪ 6.1.1. Installing oVirt Nodes
     ▪ 6.1.2. Installing a third-party package on oVirt-node
     ▪ 6.1.3. Advanced Installation
   ◦ 6.2. Enterprise Linux hosts
     ▪ 6.2.1. Installing Enterprise Linux hosts
     ▪ 6.2.2. Installing Cockpit on Enterprise Linux hosts
   ◦ 6.3. Recommended Practices for Configuring Host Networks
   ◦ 6.4. Adding Self-Hosted Engine Nodes to the oVirt Engine
   ◦ 6.5. Adding Standard Hosts to the oVirt Engine
7. Adding Storage for oVirt
   ◦ 7.1. Adding NFS Storage
   ◦ 7.2. Adding iSCSI Storage
   ◦ 7.3. Adding FCP Storage
   ◦ 7.4. Adding Gluster Storage
Appendix A: Troubleshooting a Self-hosted Engine Deployment
   ◦ Troubleshooting the Engine Virtual Machine
   ◦ Cleaning Up a Failed Self-hosted Engine Deployment
Appendix B: Customizing the Engine virtual machine using automation during deployment
Appendix C: Migrating Databases and Services to a Remote Server
   ◦ Migrating Data Warehouse to a Separate Machine
     ▪ Migrating the Data Warehouse Database to a Separate Machine
     ▪ Migrating the Data Warehouse Service to a Separate Machine
Appendix D: Configuring a Host for PCI Passthrough
Appendix E: Preventing kernel modules from loading automatically
   ◦ Removing a module temporarily
Appendix F: Legal notice

Self-hosted engine installation is automated using Ansible. The installation script ( hosted-engine --deploy ) runs on an initial deployment host, and the oVirt Engine (or "engine") is installed and configured on a virtual machine that is created on the deployment host. The Engine and Data Warehouse databases are installed on the Engine virtual machine, but can be migrated to a separate server post-installation if required.
Hosts that can run the Engine virtual machine are referred to as self-hosted engine nodes. At least two self-hosted engine nodes are required to support the high availability feature.

A storage domain dedicated to the Engine virtual machine is referred to as the self-hosted engine storage domain. This storage domain is created by the installation script, so the underlying storage must be prepared before beginning the installation.

See the Planning and Prerequisites Guide (https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.4/html-single/planning_and_prerequisites_guide/index) for information on environment options and recommended configuration. See Self-Hosted Engine Recommendations (https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.4/html-single/planning_and_prerequisites_guide/index#self-hosted-engine-recommendations) for configuration specific to a self-hosted engine environment.

oVirt Key Components

Component Name | Description
oVirt Engine | A service that provides a graphical user interface and a REST API to manage the resources in the environment. The Engine is installed on a physical or virtual machine running Enterprise Linux.
Hosts | Enterprise Linux hosts and oVirt Nodes (image-based hypervisors) are the two supported types of host. Hosts use Kernel-based Virtual Machine (KVM) technology and provide resources used to run virtual machines.
Shared Storage | A storage service is used to store the data associated with virtual machines.
Data Warehouse | A service that collects configuration information and statistical data from the Engine.

Self-Hosted Engine Architecture

The oVirt Engine runs as a virtual machine on self-hosted engine nodes (specialized hosts) in the same environment it manages. A self-hosted engine environment requires one less physical server, but requires more administrative overhead to deploy and manage. The Engine is highly available without external HA management.

The minimum setup of a self-hosted engine environment includes:

One oVirt Engine virtual machine that is hosted on the self-hosted engine nodes. The Engine Appliance is used to automate the installation of an Enterprise Linux 8 virtual machine, and the Engine on that virtual machine.

A minimum of two self-hosted engine nodes for virtual machine high availability. You can use Enterprise Linux hosts or oVirt Nodes. VDSM (the host agent) runs on all hosts to facilitate communication with the oVirt Engine. The HA services run on all self-hosted engine nodes to manage the high availability of the Engine virtual machine.

One storage service, which can be hosted locally or on a remote server, depending on the storage type used. The storage service must be accessible to all hosts.

Figure 1. Self-Hosted Engine oVirt Architecture

1. Installation Overview

The self-hosted engine installation uses Ansible and the Engine Appliance (a pre-configured Engine virtual machine image) to automate the following tasks:

Configuring the first self-hosted engine node
Installing an Enterprise Linux virtual machine on that node
Installing and configuring the oVirt Engine on that virtual machine
Configuring the self-hosted engine storage domain

 The Engine Appliance is only used during installation. It is not used to upgrade the Engine.

Installing a self-hosted engine environment involves the following steps:
1. Prepare storage to use for the self-hosted engine storage domain and for standard storage domains. You can use one of the following storage types:
   ◦ NFS
   ◦ iSCSI
   ◦ Fibre Channel (FCP)
   ◦ Gluster Storage

2. Install a deployment host to run the installation on. This host will become the first self-hosted engine node. You can use either host type:
   ◦ oVirt Node
   ◦ Enterprise Linux

3. Install and configure the oVirt Engine:
   a. Enable and configure the firewall.
   b. Install the self-hosted engine using the hosted-engine --deploy command on the deployment host.
   c. Enable the oVirt Engine repositories.
   d. Connect to the Administration Portal to add hosts and storage domains.

4. Add more self-hosted engine nodes and standard hosts to the Engine. Self-hosted engine nodes can run the Engine virtual machine and other virtual machines. Standard hosts can run all other virtual machines, but not the Engine virtual machine.
   a. Use either host type, or both:
      ▪ oVirt Node
      ▪ Enterprise Linux
   b. Add hosts to the Engine as self-hosted engine nodes.
   c. Add hosts to the Engine as standard hosts.

5. Add more storage domains to the Engine. The self-hosted engine storage domain is not recommended for use by anything other than the Engine virtual machine.

6. If you want to host any databases or services on a server separate from the Engine, you can migrate them after the installation is complete.

 Keep the environment up to date. Since bug fixes for known issues are frequently released, use scheduled tasks to update the hosts and the Engine.

2. Requirements

2.1. oVirt Engine Requirements

2.1.1. Hardware Requirements

The minimum and recommended hardware requirements outlined here are based on a typical small to medium-sized installation. The exact requirements vary between deployments based on sizing and load.

The oVirt Engine runs on Enterprise Linux operating systems like CentOS Linux (https://www.centos.org/) or Red Hat Enterprise Linux (https://www.redhat.com/en/technologies/linux-platforms/enterprise-linux).

Table 1. oVirt Engine Hardware Requirements

Resource | Minimum | Recommended
CPU | A dual core x86_64 CPU. | A quad core x86_64 CPU or multiple dual core x86_64 CPUs.
Memory | 4 GB of available system RAM if Data Warehouse is not installed and if memory is not being consumed by existing processes. | 16 GB of system RAM.
Hard Disk | 25 GB of locally accessible, writable disk space. | 50 GB of locally accessible, writable disk space. You can use the RHV Engine History Database Size Calculator (https://access.redhat.com/labs/rhevmhdsc/) to calculate the appropriate disk space for the Engine history database size.
Network Interface | 1 Network Interface Card (NIC) with bandwidth of at least 1 Gbps. | 1 Network Interface Card (NIC) with bandwidth of at least 1 Gbps.

2.1.2. Browser Requirements

The following browser versions and operating systems can be used to access the Administration Portal and the VM Portal.

Browser testing is divided into tiers:

Tier 1: Browser and operating system combinations that are fully tested.
Tier 2: Browser and operating system combinations that are partially tested, and are likely to work.
Tier 3: Browser and operating system combinations that are not tested, but may work.
Table 2. Browser Requirements

Support Tier | Operating System Family | Browser
Tier 1 | Enterprise Linux | Mozilla Firefox Extended Support Release (ESR) version
Tier 2 | Any | Most recent version of Google Chrome, Mozilla Firefox, or Microsoft Edge
Tier 3 | Any | Earlier versions of Google Chrome or Mozilla Firefox
Tier 3 | Any | Other browsers

2.1.3. Client Requirements

Virtual machine consoles can only be accessed using supported Remote Viewer ( virt-viewer ) clients on Enterprise Linux and Windows. To install virt-viewer , see Installing Supporting Components on Client Machines (https://ovirt.org/documentation/virtual_machine_management_guide/index#sect-installing_supporting_components) in the Virtual Machine Management Guide. Installing virt-viewer requires Administrator privileges.

You can access virtual machine consoles using the SPICE, VNC, or RDP (Windows only) protocols. You can install the QXLDOD graphical driver in the guest operating system to improve the functionality of SPICE. SPICE currently supports a maximum resolution of 2560x1600 pixels.

Client Operating System SPICE Support

Supported QXLDOD drivers are available on Enterprise Linux 7.2 and later, and Windows 10.

 SPICE may work with Windows 8 or 8.1 using QXLDOD drivers, but it is neither certified nor tested.

2.1.4. Operating System Requirements

The oVirt Engine must be installed on a base installation of Enterprise Linux 8.7 or later.

Do not install any additional packages after the base installation, as they may cause dependency issues when attempting to install the packages required by the Engine.

Do not enable additional repositories other than those required for the Engine installation.

2.2. Host Requirements

2.2.1. CPU Requirements

All CPUs must have support for the Intel® 64 or AMD64 CPU extensions, and the AMD-V™ or Intel VT® hardware virtualization extensions enabled. Support for the No eXecute flag (NX) is also required.

The following CPU models are supported:

AMD
◦ Opteron G4
◦ Opteron G5
◦ EPYC

Intel
◦ Nehalem
◦ Westmere
◦ SandyBridge
◦ IvyBridge
◦ Haswell
◦ Broadwell
◦ Skylake Client
◦ Skylake Server
◦ Cascadelake Server

Enterprise Linux 9 doesn't support virtualization for ppc64le: the CentOS Virtualization SIG (https://wiki.centos.org/SpecialInterestGroup/Virtualization) is working on re-introducing virtualization support, but it's not ready yet. The oVirt project also provides packages for the 64-bit ARM architecture (ARM 64), but only as a Technology Preview.

For each CPU model with security updates, the CPU Type lists a basic type and a secure type. For example:

Intel Cascadelake Server Family
Secure Intel Cascadelake Server Family

The Secure CPU type contains the latest updates. For details, see BZ#1731395 (https://bugzilla.redhat.com/1731395).

Checking if a Processor Supports the Required Flags

You must enable virtualization in the BIOS. Power off and reboot the host after this change to ensure that the change is applied.

Procedure

1. At the Enterprise Linux or oVirt Node boot screen, press any key and select the Boot or Boot with serial console entry from the list.
2. Press Tab to edit the kernel parameters for the selected option.
3. Ensure there is a space after the last kernel parameter listed, and append the parameter rescue.
4. Press Enter to boot into rescue mode.
5. At the prompt, determine that your processor has the required extensions and that they are enabled by running this command:

   # grep -E 'svm|vmx' /proc/cpuinfo | grep nx

   If any output is shown, the processor is hardware virtualization capable. If no output is shown, your processor may still support hardware virtualization; in some circumstances manufacturers disable the virtualization extensions in the BIOS. If you believe this to be the case, consult the system's BIOS and the motherboard manual provided by the manufacturer.

2.2.2. Memory Requirements

The minimum required RAM is 2 GB. For cluster levels 4.2 to 4.5, the maximum supported RAM per VM in oVirt Node is 6 TB. For cluster levels 4.6 to 4.7, the maximum supported RAM per VM in oVirt Node is 16 TB.

However, the amount of RAM required varies depending on guest operating system requirements, guest application requirements, and guest memory activity and usage. KVM can also overcommit physical RAM for virtualized guests, allowing you to provision guests with RAM requirements greater than what is physically present, on the assumption that the guests are not all working concurrently at peak load. KVM does this by only allocating RAM for guests as required and shifting underutilized guests into swap.

2.2.3. Storage Requirements

Hosts require storage to store configuration, logs, kernel dumps, and for use as swap space. Storage can be local or network-based. oVirt Node can boot with one, some, or all of its default allocations in network storage. Booting from network storage can result in a freeze if there is a network disconnect. Adding a drop-in multipath configuration file can help address losses in network connectivity. If oVirt Node boots from SAN storage and loses connectivity, the files become read-only until network connectivity restores. Using network storage might result in a performance downgrade.

The minimum storage requirements of oVirt Node are documented in this section. The storage requirements for Enterprise Linux hosts vary based on the amount of disk space used by their existing configuration, but are expected to be greater than those of oVirt Node.

The minimum storage requirements for host installation are listed below. However, use the default allocations, which use more storage space.

/ (root) - 6 GB
/home - 1 GB
/tmp - 1 GB
/boot - 1 GB
/var - 5 GB
/var/crash - 10 GB
/var/log - 8 GB
/var/log/audit - 2 GB
/var/tmp - 10 GB
swap - 1 GB. See What is the recommended swap size for Red Hat platforms? (https://access.redhat.com/solutions/15244) for details.
Anaconda reserves 20% of the thin pool size within the volume group for future metadata expansion. This is to prevent an out-of-the-box configuration from running out of space under normal usage conditions. Overprovisioning of thin pools during installation is also not supported.
Minimum Total - 64 GiB

If you are also installing the Engine Appliance for self-hosted engine installation, /var/tmp must be at least 10 GB.

If you plan to use memory overcommitment, add enough swap space to provide virtual memory for all of the virtual machines. See Memory Optimization (https://ovirt.org/documentation/administration_guide/index#Memory_Optimization).
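A quick way to compare an existing host against these minimums is to inspect the relevant mount points and swap before adding it to the environment. This is only a convenience sketch, not part of the official procedure, and it assumes the mount points listed above exist as separate file systems:

   # df -h / /home /tmp /boot /var /var/crash /var/log /var/log/audit /var/tmp
   # swapon --show
   # free -h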
2.2.4. PCI Device Requirements

Hosts must have at least one network interface with a minimum bandwidth of 1 Gbps. Each host should have two network interfaces, with one dedicated to supporting network-intensive activities, such as virtual machine migration. The performance of such operations is limited by the bandwidth available.

For information about how to use PCI Express and conventional PCI devices with Intel Q35-based virtual machines, see Using PCI Express and Conventional PCI Devices with the Q35 Virtual Machine (https://access.redhat.com/articles/3201152).

2.2.5. Device Assignment Requirements

If you plan to implement device assignment and PCI passthrough so that a virtual machine can use a specific PCIe device from a host, ensure the following requirements are met:

CPU must support IOMMU (for example, VT-d or AMD-Vi). IBM POWER8 supports IOMMU by default.
Firmware must support IOMMU.
CPU root ports used must support ACS or ACS-equivalent capability.
PCIe devices must support ACS or ACS-equivalent capability.
All PCIe switches and bridges between the PCIe device and the root port should support ACS. For example, if a switch does not support ACS, all devices behind that switch share the same IOMMU group, and can only be assigned to the same virtual machine.
For GPU support, Enterprise Linux 8 supports PCI device assignment of PCIe-based NVIDIA K-Series Quadro (model 2000 series or higher), GRID, and Tesla as non-VGA graphics devices. Currently up to two GPUs may be attached to a virtual machine in addition to one of the standard, emulated VGA interfaces. The emulated VGA is used for pre-boot and installation and the NVIDIA GPU takes over when the NVIDIA graphics drivers are loaded. Note that the NVIDIA Quadro 2000 is not supported, nor is the Quadro K420 card.

Check vendor specification and datasheets to confirm that your hardware meets these requirements. The lspci -v command can be used to print information for PCI devices already installed on a system.

2.2.6. vGPU Requirements

A host must meet the following requirements in order for virtual machines on that host to use a vGPU:

vGPU-compatible GPU
GPU-enabled host kernel
Installed GPU with correct drivers
Select a vGPU type and the number of instances that you would like to use with this virtual machine using the Manage vGPU dialog in the Administration Portal Host Devices tab of the virtual machine.
vGPU-capable drivers installed on each host in the cluster
vGPU-supported virtual machine operating system with vGPU drivers installed

2.3. Networking requirements

2.3.1. General requirements

oVirt requires IPv6 to remain enabled on the physical or virtual machine running the Engine. Do not disable IPv6 (https://access.redhat.com/solutions/8709) on the Engine machine, even if your systems do not use it.

2.3.2. Network range for self-hosted engine deployment

The self-hosted engine deployment process temporarily uses a /24 network address under 192.168. It defaults to 192.168.222.0/24 , and if this address is in use, it tries other /24 addresses under 192.168 until it finds one that is not in use. If it does not find an unused network address in this range, deployment fails.

When installing the self-hosted engine using the command line, you can set the deployment script to use an alternate /24 network range with the option --ansible-extra-vars=he_ipv4_subnet_prefix=PREFIX , where PREFIX is the prefix for the default range.
For example:

   # hosted-engine --deploy --ansible-extra-vars=he_ipv4_subnet_prefix=192.168.222

 You can only set another range by installing oVirt as a self-hosted engine using the command line.

2.3.3. Firewall Requirements for DNS, NTP, and IPMI Fencing

The firewall requirements for all of the following topics are special cases that require individual consideration.

DNS and NTP

oVirt does not create a DNS or NTP server, so the firewall does not need to have open ports for incoming traffic.

By default, Enterprise Linux allows outbound traffic to DNS and NTP on any destination address. If you disable outgoing traffic, define exceptions for requests that are sent to DNS and NTP servers.

The oVirt Engine and all hosts (oVirt Node and Enterprise Linux host) must have a fully qualified domain name and full, perfectly-aligned forward and reverse name resolution.

 Running a DNS service as a virtual machine in the oVirt environment is not supported. All DNS services the oVirt environment uses must be hosted outside of the environment.

Use DNS instead of the /etc/hosts file for name resolution. Using a hosts file typically requires more work and has a greater chance for errors.

IPMI and Other Fencing Mechanisms (optional)

For IPMI (Intelligent Platform Management Interface) and other fencing mechanisms, the firewall does not need to have open ports for incoming traffic.

By default, Enterprise Linux allows outbound IPMI traffic to ports on any destination address. If you disable outgoing traffic, make exceptions for requests being sent to your IPMI or fencing servers.

Each oVirt Node and Enterprise Linux host in the cluster must be able to connect to the fencing devices of all other hosts in the cluster. If the cluster hosts are experiencing an error (network error, storage error…) and cannot function as hosts, they must be able to connect to other hosts in the data center.

The specific port number depends on the type of the fence agent you are using and how it is configured. The firewall requirement tables in the following sections do not represent this option.

2.3.4. oVirt Engine Firewall Requirements

The oVirt Engine requires that a number of ports be opened to allow network traffic through the system's firewall.

The engine-setup script can configure the firewall automatically. The firewall configuration documented here assumes a default configuration.

Table 3. oVirt Engine Firewall Requirements

ID | Port(s) | Protocol | Source | Destination | Purpose | Encrypted by default
M1 | - | ICMP | oVirt Nodes, Enterprise Linux hosts | oVirt Engine | Optional. May help in diagnosis. | No
M2 | 22 | TCP | System(s) used for maintenance of the Engine, including backend configuration and software upgrades | oVirt Engine | Secure Shell (SSH) access. Optional. | Yes
M3 | 2222 | TCP | Clients accessing virtual machine serial consoles | oVirt Engine | Secure Shell (SSH) access to enable connection to virtual machine serial consoles. | Yes
M4 | 80, 443 | TCP | Administration Portal clients, VM Portal clients, oVirt Nodes, Enterprise Linux hosts, REST API clients | oVirt Engine | Provides HTTP (port 80, not encrypted) and HTTPS (port 443, encrypted) access to the Engine. HTTP redirects connections to HTTPS. | Yes
M5 | 6100 | TCP | Administration Portal clients, VM Portal clients | oVirt Engine | Provides websocket proxy access for a web-based console client, noVNC , when the websocket proxy is running on the Engine. | No
M6 | 7410 | UDP | oVirt Nodes, Enterprise Linux hosts | oVirt Engine | If Kdump is enabled on the hosts, open this port for the fence_kdump listener on the Engine. See fence_kdump Advanced Configuration (https://ovirt.org/documentation/administration_guide/index#sect-fence_kdump_Advanced_Configuration). fence_kdump doesn't provide a way to encrypt the connection. However, you can manually configure this port to block access from hosts that are not eligible. | No
M7 | 54323 | TCP | Administration Portal clients | oVirt Engine ( ovirt-imageio service) | Required for communication with the ovirt-imageio service. | Yes
M8 | 6642 | TCP | oVirt Nodes, Enterprise Linux hosts | Open Virtual Network (OVN) southbound database | Connect to the Open Virtual Network (OVN) database. | Yes
M9 | 9696 | TCP | Clients of external network provider for OVN | External network provider for OVN | OpenStack Networking API. | Yes, with configuration generated by engine-setup.
M10 | 35357 | TCP | Clients of external network provider for OVN | External network provider for OVN | OpenStack Identity API. | Yes, with configuration generated by engine-setup.
M11 | 53 | TCP, UDP | oVirt Engine | DNS Server | DNS lookup requests from ports above 1023 to port 53, and responses. Open by default. | No
M12 | 123 | UDP | oVirt Engine | NTP Server | NTP requests from ports above 1023 to port 123, and responses. Open by default. | No

 A port for the OVN northbound database (6641) is not listed because, in the default configuration, the only client for the OVN northbound database (6641) is ovirt-provider-ovn. Because they both run on the same host, their communication is not visible to the network.

 By default, Enterprise Linux allows outbound traffic to DNS and NTP on any destination address. If you disable outgoing traffic, make exceptions for the Engine to send requests to DNS and NTP servers. Other nodes may also require DNS and NTP. In that case, consult the requirements for those nodes and configure the firewall accordingly.
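If you are not letting engine-setup manage the firewall and want to apply the rules in Table 3 by hand, the open ports can be added with firewall-cmd. This is a minimal sketch rather than the official procedure; it assumes firewalld is running on the Engine machine, adds the ports to the default zone, and omits the OVN provider ports (M8-M10):

   # firewall-cmd --permanent --add-port=80/tcp --add-port=443/tcp
   # firewall-cmd --permanent --add-port=2222/tcp --add-port=6100/tcp --add-port=54323/tcp
   # firewall-cmd --permanent --add-port=7410/udp
   # firewall-cmd --reload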
2.3.5. Host Firewall Requirements

Enterprise Linux hosts and oVirt Nodes require a number of ports to be opened to allow network traffic through the system's firewall. The firewall rules are automatically configured by default when adding a new host to the Engine, overwriting any pre-existing firewall configuration.

To disable automatic firewall configuration when adding a new host, clear the Automatically configure host firewall check box under Advanced Parameters.
Table 4. Virtualization Host Firewall Requirements

ID | Port(s) | Protocol | Source | Destination | Purpose | Encrypted by default
H1 | 22 | TCP | oVirt Engine | oVirt Nodes, Enterprise Linux hosts | Secure Shell (SSH) access. Optional. | Yes
H2 | 2223 | TCP | oVirt Engine | oVirt Nodes, Enterprise Linux hosts | Secure Shell (SSH) access to enable connection to virtual machine serial consoles. | Yes
H3 | 161 | UDP | oVirt Nodes, Enterprise Linux hosts | oVirt Engine | Simple Network Management Protocol (SNMP). Only required if you want SNMP traps sent from the host to one or more external SNMP managers. Optional. | No
H4 | 111 | TCP | NFS storage server | oVirt Nodes, Enterprise Linux hosts | NFS connections. Optional. | No
H5 | 5900 - 6923 | TCP | Administration Portal clients, VM Portal clients | oVirt Nodes, Enterprise Linux hosts | Remote guest console access via VNC and SPICE. These ports must be open to facilitate client access to virtual machines. | Yes (optional)
H6 | 5989 | TCP, UDP | Common Information Model Object Manager (CIMOM) | oVirt Nodes, Enterprise Linux hosts | Used by Common Information Model Object Managers (CIMOM) to monitor virtual machines running on the host. Only required if you want to use a CIMOM to monitor the virtual machines in your virtualization environment. Optional. | No
H7 | 9090 | TCP | oVirt Engine, Client machines | oVirt Nodes, Enterprise Linux hosts | Required to access the Cockpit web interface, if installed. | Yes
H8 | 16514 | TCP | oVirt Nodes, Enterprise Linux hosts | oVirt Nodes, Enterprise Linux hosts | Virtual machine migration using libvirt. | Yes
H9 | 49152 - 49215 | TCP | oVirt Nodes, Enterprise Linux hosts | oVirt Nodes, Enterprise Linux hosts | Virtual machine migration and fencing using VDSM. These ports must be open to facilitate both automated and manual migration of virtual machines. | Yes. Depending on agent for fencing, migration is done through libvirt.
H10 | 54321 | TCP | oVirt Engine, oVirt Nodes, Enterprise Linux hosts | oVirt Nodes, Enterprise Linux hosts | VDSM communications with the Engine and other virtualization hosts. | Yes
H11 | 54322 | TCP | oVirt Engine ovirt-imageio service | oVirt Nodes, Enterprise Linux hosts | Required for communication with the ovirt-imageio service. | Yes
H12 | 6081 | UDP | oVirt Nodes, Enterprise Linux hosts | oVirt Nodes, Enterprise Linux hosts | Required, when Open Virtual Network (OVN) is used as a network provider, to allow OVN to create tunnels between hosts. | No
H13 | 53 | TCP, UDP | oVirt Nodes, Enterprise Linux hosts | DNS Server | DNS lookup requests from ports above 1023 to port 53, and responses. This port is required and open by default. | No
H14 | 123 | UDP | oVirt Nodes, Enterprise Linux hosts | NTP Server | NTP requests from ports above 1023 to port 123, and responses. This port is required and open by default. | No
H15 | 4500 | TCP, UDP | oVirt Nodes | oVirt Nodes | Internet Security Protocol (IPSec) | Yes
H16 | 500 | UDP | oVirt Nodes | oVirt Nodes | Internet Security Protocol (IPSec) | Yes
H17 | - | AH, ESP | oVirt Nodes | oVirt Nodes | Internet Security Protocol (IPSec) | Yes

 By default, Enterprise Linux allows outbound traffic to DNS and NTP on any destination address. If you disable outgoing traffic, make exceptions for the oVirt Nodes and Enterprise Linux hosts to send requests to DNS and NTP servers. Other nodes may also require DNS and NTP. In that case, consult the requirements for those nodes and configure the firewall accordingly.
2.3.6. Database Server Firewall Requirements

oVirt supports the use of a remote database server for the Engine database ( engine ) and the Data Warehouse database ( ovirt-engine-history ). If you plan to use a remote database server, it must allow connections from the Engine and the Data Warehouse service (which can be separate from the Engine).

Similarly, if you plan to access a local or remote Data Warehouse database from an external system, the database must allow connections from that system.

 Accessing the Engine database from external systems is not supported.

Table 5. Database Server Firewall Requirements

ID | Port(s) | Protocol | Source | Destination | Purpose | Encrypted by default
D1 | 5432 | TCP, UDP | oVirt Engine, Data Warehouse service | Engine ( engine ) database server, Data Warehouse ( ovirt-engine-history ) database server | Default port for PostgreSQL database connections. | No, but can be enabled (https://ovirt.org/documentation/installing_ovirt_as_a_self-hosted_engine_using_the_command_line/index#Migrating_the_Data_Warehouse_Database_to_a_Separate_Machine_migrate_DWH).
D2 | 5432 | TCP, UDP | External systems | Data Warehouse ( ovirt-engine-history ) database server | Default port for PostgreSQL database connections. Disabled by default. | No, but can be enabled (https://ovirt.org/documentation/installing_ovirt_as_a_self-hosted_engine_using_the_command_line/index#Migrating_the_Data_Warehouse_Database_to_a_Separate_Machine_migrate_DWH).

2.3.7. Maximum Transmission Unit Requirements

The recommended Maximum Transmission Units (MTU) setting for Hosts during deployment is 1500. It is possible to update this setting to a different MTU after the environment is set up. For more information on changing the MTU setting, see How to change the Hosted Engine VM network MTU (https://access.redhat.com/solutions/4129641).

3. Preparing Storage for oVirt

You need to prepare storage to be used for storage domains in the new environment. An oVirt environment must have at least one data storage domain, but adding more is recommended.

⚠ When installing or reinstalling the host's operating system, oVirt strongly recommends that you first detach any existing non-OS storage that is attached to the host to avoid accidental initialization of these disks, and with that, potential data loss.

A data domain holds the virtual hard disks and OVF files of all the virtual machines and templates in a data center, and cannot be shared across data centers while active (but can be migrated between data centers). Data domains of multiple storage types can be added to the same data center, provided they are all shared, rather than local, domains.

You can use one of the following storage types:

NFS
iSCSI
Fibre Channel (FCP)
Gluster Storage

Prerequisites

Self-hosted engines must have an additional data domain with at least 74 GiB dedicated to the Engine virtual machine. The self-hosted engine installer creates this domain. Prepare the storage for this domain before installation.

⚠ Extending or otherwise changing the self-hosted engine storage domain after deployment of the self-hosted engine is not supported. Any such change might prevent the self-hosted engine from booting.

When using a block storage domain, either FCP or iSCSI, a single target LUN is the only supported setup for a self-hosted engine.

If you use iSCSI storage, the self-hosted engine storage domain must use a dedicated iSCSI target. Any additional storage domains must use a different iSCSI target.

It is strongly recommended to create additional data storage domains in the same data center as the self-hosted engine storage domain. If you deploy the self-hosted engine in a data center with only one active data storage domain, and that storage domain is corrupted, you cannot add new storage domains or remove the corrupted storage domain. You must redeploy the self-hosted engine.
3.1. Preparing NFS Storage

Set up NFS shares on your file storage or remote server to serve as storage domains on Red Hat Enterprise Virtualization Host systems. After exporting the shares on the remote storage and configuring them in the Red Hat Virtualization Manager, the shares will be automatically imported on the Red Hat Virtualization hosts.

For information on setting up, configuring, mounting and exporting NFS, see Managing file systems (https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html-single/managing_file_systems/index) for Red Hat Enterprise Linux 8.

Specific system user accounts and system user groups are required by oVirt so the Engine can store data in the storage domains represented by the exported directories. The following procedure sets the permissions for one directory. You must repeat the chown and chmod steps for all of the directories you intend to use as storage domains in oVirt.

Prerequisites

1. Install the NFS utils package.

   # dnf install nfs-utils -y

2. To check the enabled versions:

   # cat /proc/fs/nfsd/versions

3. Enable the following services:

   # systemctl enable nfs-server
   # systemctl enable rpcbind

Procedure

1. Create the group kvm :

   # groupadd kvm -g 36

2. Create the user vdsm in the group kvm :

   # useradd vdsm -u 36 -g kvm

3. Create the storage directory and modify the access rights.

   # mkdir /storage
   # chmod 0755 /storage
   # chown 36:36 /storage/

4. Add the storage directory to /etc/exports with the relevant permissions.

   # vi /etc/exports
   # cat /etc/exports
   /storage *(rw)

5. Restart the following services:

   # systemctl restart rpcbind
   # systemctl restart nfs-server

6. To see which exports are available for a specific IP address:

   # exportfs
   /nfs_server/srv    10.46.11.3/24
   /nfs_server

 If changes in /etc/exports have been made after starting the services, the exportfs -ra command can be used to reload the changes.

After performing all the above stages, the exports directory should be ready and can be tested on a different host to check that it is usable.

3.2. Preparing iSCSI Storage

oVirt supports iSCSI storage, which is a storage domain created from a volume group made up of LUNs. Volume groups and LUNs cannot be attached to more than one storage domain at a time.

For information on setting up and configuring iSCSI storage, see Configuring an iSCSI target (https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html-single/managing_storage_devices/index#configuring-an-iscsi-target_managing-storage-devices) in Managing storage devices for Red Hat Enterprise Linux 8.

 If you are using block storage and intend to deploy virtual machines on raw devices or direct LUNs and manage them with the Logical Volume Manager (LVM), you must create a filter to hide guest logical volumes. This will prevent guest logical volumes from being activated when the host is booted, a situation that could lead to stale logical volumes and cause data corruption. Use the vdsm-tool config-lvm-filter command to create filters for the LVM. See Creating an LVM filter (https://ovirt.org/documentation/administration_guide/index#Creating_LVM_filter_storage_admin).

 oVirt currently does not support block storage with a block size of 4K. You must configure block storage in legacy (512b block) mode.
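Before deployment it can be useful to confirm that the deployment host can actually reach the iSCSI target that will hold the self-hosted engine storage domain. This is a minimal sketch only; the portal address and IQN are placeholders for your own values, and the deployment script performs its own discovery and login during setup:

   # iscsiadm -m discovery -t sendtargets -p 10.35.1.100:3260
   # iscsiadm -m node -T iqn.2024-01.com.example:he-target -p 10.35.1.100:3260 --login
   # iscsiadm -m session
   # iscsiadm -m node -T iqn.2024-01.com.example:he-target -p 10.35.1.100:3260 --logout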
 If your host is booting from SAN storage and loses connectivity to the storage, the storage file systems become read-only and remain in this state after connectivity is restored. To prevent this situation, add a drop-in multipath configuration file on the root file system of the SAN for the boot LUN to ensure that it is queued when there is a connection:

   # cat /etc/multipath/conf.d/host.conf
   multipaths {
       multipath {
           wwid boot_LUN_wwid
           no_path_retry queue
       }
   }

3.3. Preparing FCP Storage

oVirt supports SAN storage by creating a storage domain from a volume group made of pre-existing LUNs. Neither volume groups nor LUNs can be attached to more than one storage domain at a time.

oVirt system administrators need a working knowledge of Storage Area Networks (SAN) concepts. SAN usually uses Fibre Channel Protocol (FCP) for traffic between hosts and shared external storage. For this reason, SAN may occasionally be referred to as FCP storage.

For information on setting up and configuring FCP or multipathing on Enterprise Linux, see the Storage Administration Guide (https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/Storage_Administration_Guide/index.html) and DM Multipath Guide (https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/DM_Multipath/index.html).

 If you are using block storage and intend to deploy virtual machines on raw devices or direct LUNs and manage them with the Logical Volume Manager (LVM), you must create a filter to hide guest logical volumes. This will prevent guest logical volumes from being activated when the host is booted, a situation that could lead to stale logical volumes and cause data corruption. Use the vdsm-tool config-lvm-filter command to create filters for the LVM. See Creating an LVM filter (https://ovirt.org/documentation/administration_guide/index#Creating_LVM_filter_storage_admin).

 oVirt currently does not support block storage with a block size of 4K. You must configure block storage in legacy (512b block) mode.

 If your host is booting from SAN storage and loses connectivity to the storage, the storage file systems become read-only and remain in this state after connectivity is restored. To prevent this situation, add a drop-in multipath configuration file on the root file system of the SAN for the boot LUN to ensure that it is queued when there is a connection:

   # cat /etc/multipath/conf.d/host.conf
   multipaths {
       multipath {
           wwid boot_LUN_wwid
           no_path_retry queue
       }
   }

3.4. Preparing Gluster Storage

For information on setting up and configuring Gluster Storage, see the Gluster Storage Installation Guide (https://docs.gluster.org/en/latest/Install-Guide/Overview/).
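The referenced guide covers Gluster in detail; purely as orientation, a replica 3 volume prepared for use as an oVirt storage domain might be created along the following lines. The host names, brick paths, and volume name are placeholders, the virt settings group must be available on your Gluster version, and the 36:36 ownership matches the vdsm/kvm accounts described in the NFS section above:

   # gluster volume create engine replica 3 \
         gluster1.example.com:/gluster_bricks/engine \
         gluster2.example.com:/gluster_bricks/engine \
         gluster3.example.com:/gluster_bricks/engine
   # gluster volume set engine group virt
   # gluster volume set engine storage.owner-uid 36
   # gluster volume set engine storage.owner-gid 36
   # gluster volume start engine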
3.5. Customizing Multipath Configurations for SAN Vendors

If your RHV environment is configured to use multipath connections with SANs, you can customize the multipath configuration settings to meet requirements specified by your storage vendor. These customizations can override both the default settings and settings that are specified in /etc/multipath.conf.

To override the multipath settings, do not customize /etc/multipath.conf. Because VDSM owns /etc/multipath.conf , installing or upgrading VDSM or oVirt can overwrite this file including any customizations it contains. This overwriting can cause severe storage failures. Instead, you create a file in the /etc/multipath/conf.d directory that contains the settings you want to customize or override.

VDSM executes the files in /etc/multipath/conf.d in alphabetical order. So, to control the order of execution, you begin the filename with a number that makes it come last. For example, /etc/multipath/conf.d/90-myfile.conf.

To avoid causing severe storage failures, follow these guidelines:

Do not modify /etc/multipath.conf. If the file contains user modifications, and the file is overwritten, it can cause unexpected storage problems.
Do not override the user_friendly_names and find_multipaths settings. For details, see Recommended Settings for Multipath.conf.
Avoid overriding the no_path_retry and polling_interval settings unless a storage vendor specifically requires you to do so. For details, see Recommended Settings for Multipath.conf.

⚠ Not following these guidelines can cause catastrophic storage errors.

Prerequisites

VDSM is configured to use the multipath module. To verify this, enter:

   # vdsm-tool is-configured --module multipath

Procedure

1. Create a new configuration file in the /etc/multipath/conf.d directory.

2. Copy the individual setting you want to override from /etc/multipath.conf to the new configuration file in /etc/multipath/conf.d. Remove any comment marks, edit the setting values, and save your changes.

3. Apply the new configuration settings by entering:

   # systemctl reload multipathd

    Do not restart the multipathd service. Doing so generates errors in the VDSM logs.

Verification steps

1. Test that the new configuration performs as expected on a non-production cluster in a variety of failure scenarios. For example, disable all of the storage connections.
2. Enable one connection at a time and verify that doing so makes the storage domain reachable.

Additional resources

Recommended Settings for Multipath.conf (https://ovirt.org/documentation/installing_ovirt_as_a_self-hosted_engine_using_the_command_line/index#ref-Recommended_Settings_for_Multipath_conf_SHE_cli_deploy)
Enterprise Linux DM Multipath (https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html-single/dm_multipath/)
Configuring iSCSI Multipathing (https://ovirt.org/documentation/administration_guide/index#Configuring_iSCSI_Multipathing)
How do I customize /etc/multipath.conf on my RHVH hypervisors? What values must not change and why? (https://access.redhat.com/solutions/3234761)
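As an illustration of the drop-in approach, a vendor-required override might look like the following. The file name, vendor, product, and values are purely hypothetical placeholders; only add settings that your storage vendor explicitly documents, and keep the guidelines above in mind:

   # cat /etc/multipath/conf.d/90-vendor-overrides.conf
   devices {
       device {
           # Example values only; substitute your vendor's documented settings.
           vendor               "EXAMPLE"
           product              "EXAMPLE-ARRAY"
           path_grouping_policy "group_by_prio"
           no_path_retry        12
       }
   }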
3.6. Recommended Settings for Multipath.conf

Do not override the following settings:

user_friendly_names no

Device names must be consistent across all hypervisors. For example, /dev/mapper/{WWID}. The default value of this setting, no , prevents the assignment of arbitrary and inconsistent device names such as /dev/mapper/mpath{N} on various hypervisors, which can lead to unpredictable system behavior.

⚠ Do not change this setting to user_friendly_names yes. User-friendly names are likely to cause unpredictable system behavior or failures, and are not supported.

find_multipaths no

This setting controls whether oVirt Node tries to access devices through multipath only if more than one path is available. The current value, no , allows oVirt to access devices through multipath even if only one path is available.

⚠ Do not override this setting.

Avoid overriding the following settings unless required by the storage system vendor:

no_path_retry 4

This setting controls the number of polling attempts to retry when no paths are available. Before oVirt version 4.2, the value of no_path_retry was fail because QEMU had trouble with the I/O queuing when no paths were available. The fail value made it fail quickly and paused the virtual machine. oVirt version 4.2 changed this value to 4 so that when multipathd detects the last path has failed, it checks all of the paths four more times. Assuming the default 5-second polling interval, checking the paths takes 20 seconds. If no path is up, multipathd tells the kernel to stop queuing and fails all outstanding and future I/O until a path is restored. When a path is restored, the 20-second delay is reset for the next time all paths fail. For more details, see the commit that changed this setting (https://gerrit.ovirt.org/#/c/88082/).

polling_interval 5

This setting determines the number of seconds between polling attempts to detect whether a path is open or has failed. Unless the vendor provides a clear reason for increasing the value, keep the VDSM-generated default so the system responds to path failures sooner.

4. Installing the Self-hosted Engine Deployment Host

A self-hosted engine can be deployed from an oVirt Node or an Enterprise Linux host.

 If you plan to use bonded interfaces for high availability or VLANs to separate different types of traffic (for example, for storage or management connections), you should configure them on the host before beginning the self-hosted engine deployment. See Networking Recommendations (https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.4/html-single/planning_and_prerequisites_guide/index#networking-recommendations) in the Planning and Prerequisites Guide.
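For instance, an active-backup bond with a tagged VLAN on top of it could be created with NetworkManager before running the deployment. This is a minimal sketch rather than part of the official procedure; the interface names (eno1, eno2) and the VLAN ID 50 are placeholders for your own environment:

   # nmcli con add type bond con-name bond0 ifname bond0 bond.options "mode=active-backup,miimon=100"
   # nmcli con add type ethernet con-name bond0-port1 ifname eno1 master bond0
   # nmcli con add type ethernet con-name bond0-port2 ifname eno2 master bond0
   # nmcli con add type vlan con-name bond0.50 ifname bond0.50 dev bond0 id 50
   # nmcli con up bond0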
4.1. Installing oVirt Nodes

oVirt Node is a minimal operating system based on Enterprise Linux that is designed to provide a simple method for setting up a physical machine to act as a hypervisor in an oVirt environment. The minimal operating system contains only the packages required for the machine to act as a hypervisor, and features a Cockpit web interface for monitoring the host and performing administrative tasks. See Running Cockpit (http://cockpit-project.org/running.html) for the minimum browser requirements.

oVirt Node supports NIST 800-53 partitioning requirements to improve security. oVirt Node uses a NIST 800-53 partition layout by default.

The host must meet the minimum host requirements (https://ovirt.org/documentation/installing_ovirt_as_a_standalone_manager_with_local_databases/index.html#host-requirements).

⚠ When installing or reinstalling the host's operating system, oVirt strongly recommends that you first detach any existing non-OS storage that is attached to the host to avoid accidental initialization of these disks, and with that, potential data loss.

Procedure

1. Visit the oVirt Node Download (https://ovirt.org/download/node.html) page.
2. Choose the version of oVirt Node to download and click its Installation ISO link.
3. Write the oVirt Node Installation ISO disk image to a USB, CD, or DVD.
4. Start the machine on which you are installing oVirt Node, booting from the prepared installation media.
5. From the boot menu, select Install oVirt Node 4.5 and press Enter.

    You can also press the Tab key to edit the kernel parameters. Kernel parameters must be separated by a space, and you can boot the system using the specified kernel parameters by pressing the Enter key. Press the Esc key to clear any changes to the kernel parameters and return to the boot menu.

6. Select a language, and click Continue.
7. Select a keyboard layout from the Keyboard Layout screen and click Done.
8. Select the device on which to install oVirt Node from the Installation Destination screen. Optionally, enable encryption. Click Done.

    Use the Automatically configure partitioning option.

9. Select a time zone from the Time & Date screen and click Done.
10. Select a network from the Network & Host Name screen and click Configure… to configure the connection details. To use the connection every time the system boots, select the Connect automatically with priority check box. For more information, see Configuring network and host name options (https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/performing_a_standard_rhel_installation/graphical-installation_graphical-installation#network-hostname_configuring-system-settings) in the Enterprise Linux 8 Installation Guide. Enter a host name in the Host Name field, and click Done.
11. Optional: Configure Security Policy and Kdump. See Customizing your RHEL installation using the GUI (https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/performing_a_standard_rhel_installation/graphical-installation_graphical-installation) in Performing a standard RHEL installation for Enterprise Linux 8 for more information on each of the sections in the Installation Summary screen.
12. Click Begin Installation.
13. Set a root password and, optionally, create an additional user while oVirt Node installs.

   ⚠ Do not create untrusted users on oVirt Node, as this can lead to exploitation of local security vulnerabilities.

14. Click Reboot to complete the installation.

    When oVirt Node restarts, nodectl check performs a health check on the host and displays the result when you log in on the command line. The message node status: OK or node status: DEGRADED indicates the health status. Run nodectl check to get more information.

    If necessary, you can prevent kernel modules from loading automatically.
4.2. Installing Enterprise Linux hosts

An Enterprise Linux host is based on a standard basic installation of Enterprise Linux 8.7 or later on a physical server, with the Enterprise Linux Server and oVirt repositories enabled. The oVirt project also provides packages for Enterprise Linux 9.

For detailed installation instructions, see Performing a standard EL installation (https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/performing_a_standard_rhel_installation/index.html).

The host must meet the minimum host requirements (https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.4/html-single/planning_and_prerequisites_guide/index#host-requirements).

⚠ When installing or reinstalling the host's operating system, oVirt strongly recommends that you first detach any existing non-OS storage that is attached to the host to avoid accidental initialization of these disks, and with that, potential data loss.

 Virtualization must be enabled in your host's BIOS settings. For information on changing your host's BIOS settings, refer to your host's hardware documentation.

 Do not install third-party watchdogs on Enterprise Linux hosts. They can interfere with the watchdog daemon provided by VDSM.

5. Installing the oVirt Engine

5.1. Manually installing the Engine Appliance

When you deploy the self-hosted engine, the following sequence of events takes place:

1. The installer installs the Engine Appliance to the deployment host.
2. The appliance installs the Engine virtual machine.
3. The appliance installs the Engine on the Engine virtual machine.

However, you can install the appliance manually on the deployment host beforehand if you need to. The appliance is large and network connectivity issues might cause the appliance installation to take a long time, or possibly fail.

Procedure

1. On Enterprise Linux hosts:

   a. Reset the virt module:

      # dnf module reset virt

       If this module is already enabled in the Advanced Virtualization stream, this step is not necessary, but it has no negative impact. You can see the value of the stream by entering:

      # dnf module list virt

   b. Enable the virt module in the Advanced Virtualization stream with the following command:

      For oVirt 4.4.2:

      # dnf module enable virt:8.2

      For oVirt 4.4.3 to 4.4.5:

      # dnf module enable virt:8.3

      For oVirt 4.4.6 to 4.4.10:

      # dnf module enable virt:av

      For oVirt 4.5 and later:

      # dnf module enable virt:rhel

       Starting with EL 8.6, the Advanced Virtualization packages use the standard virt:rhel module. For EL 8.4 and 8.5, only one Advanced Virtualization stream is used, virt:av.

2. Synchronize installed packages to update them to the latest available versions:

   # dnf distro-sync --nobest

3. Install the Engine Appliance to the host manually:

   # dnf install ovirt-engine-appliance

Now, when you deploy the self-hosted engine, the installer detects that the appliance is already installed.

5.2. Enabling and configuring the firewall

firewalld must be installed and running before you run the self-hosted engine deployment script. You must also have an active zone with an interface configured.

Prerequisites

firewalld is installed. hosted-engine-setup requires the firewalld package, so you do not need to do any additional steps.

Procedure

1. Start firewalld :

   # systemctl unmask firewalld
   # systemctl start firewalld

   To ensure firewalld starts automatically at system start, enter the following command as root:

   # systemctl enable firewalld

2. Ensure that firewalld is running:

   # systemctl status firewalld

3. Ensure that your management interface is in a firewall zone:

   # firewall-cmd --get-active-zones

Now you are ready to deploy the self-hosted engine.
5.3. Deploying the self-hosted engine using the command line

You can deploy a self-hosted engine from the command line. After installing the setup package, you run the command hosted-engine --deploy , and a script collects the details of your environment and uses them to configure the host and the Engine.

You can customize the Engine virtual machine during deployment, either manually, by pausing the deployment, or using automation.

Setting the variable he_pause_host to true pauses deployment after installing the Engine and adding the deployment host to the Engine.

Setting the variable he_pause_before_engine_setup to true pauses the deployment before installing the Engine and before restoring the Engine when using he_restore_from_file.

 When the he_pause_host or he_pause_before_engine_setup variables are set to true , a lock file is created at /tmp with the suffix _he_setup_lock on the deployment host. You can then manually customize the virtual machine as needed. The deployment continues after you delete the lock file, or after 24 hours, whichever comes first.

Adding an Ansible playbook to any of the following directories on the deployment host automatically runs the playbook (a minimal sketch follows the list). Add the playbook under one of the following directories under /usr/share/ansible/collections/ansible_collections/redhat/rhv/roles/hosted_engine_setup/hooks/ :

◦ enginevm_before_engine_setup
◦ enginevm_after_engine_setup
◦ after_add_host
◦ after_setup
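For example, the enginevm_before_engine_setup hook mentioned in the prerequisites below can be used to update the appliance packages automatically before engine-setup runs. The file name and task list here are only a sketch based on the directories above, not an official example; check Customizing the Engine virtual machine using automation during deployment (Appendix B) for the exact format expected by your version:

   # cat /usr/share/ansible/collections/ansible_collections/redhat/rhv/roles/hosted_engine_setup/hooks/enginevm_before_engine_setup/10_dnf_update.yml
   # Task list executed on the Engine virtual machine before engine-setup runs.
   - name: Update all packages on the Engine VM
     ansible.builtin.dnf:
       name: "*"
       state: latest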
Prerequisites

Upgrade the appliance content to the latest product version before performing engine-setup.
◦ To do this manually, pause the deployment using he_pause_before_engine_setup and perform a dnf update.
◦ To do this automatically, apply the enginevm_before_engine_setup hook.

FQDNs prepared for your Engine and the host. Forward and reverse lookup records must both be set in the DNS.

When using a block storage domain, either FCP or iSCSI, a single target LUN is the only supported setup for a self-hosted engine.

Optional: If you want to customize the Engine virtual machine during deployment using automation, an Ansible playbook must be added. See Customizing the Engine virtual machine using automation during deployment.

The self-hosted engine setup script requires ssh public key access using 2048-bit RSA keys from the engine virtual machine to the root account of its bare metal host. In /etc/ssh/sshd_config , these values must be set as follows:
◦ PubkeyAcceptedKeyTypes must allow 2048-bit RSA keys or stronger. By default, this setting uses system-wide crypto policies. For more information, see the manual page crypto-policies(7).

    oVirt Node hosts that are registered with the Engine in versions earlier than 4.4.5.5 require RSA 2048 for backward compatibility until all the keys are migrated. oVirt Node hosts registered for 4.4.5.5 and later use the strongest algorithm that is supported by both the Engine and oVirt Node. The PubkeyAcceptedKeyTypes setting helps determine which algorithm is used.

◦ PermitRootLogin is set to without-password or yes
◦ PubkeyAuthentication is set to yes

Procedure

1. Install the deployment tool:

   # dnf install ovirt-hosted-engine-setup

2. Use the tmux window manager to run the script to avoid losing the session in case of network or terminal disruption. Install and run tmux :

   # dnf -y install tmux
   # tmux

3. Start the deployment script:

   # hosted-engine --deploy

   Alternatively, to pause the deployment after adding the deployment host to the Engine, use the command line option --ansible-extra-vars=he_pause_host=true :

   # hosted-engine --deploy --ansible-extra-vars=he_pause_host=true

    To escape the script at any time, use the Ctrl + D keyboard combination to abort deployment. In the event of session timeout or connection disruption, run tmux attach to recover the deployment session.

4. When prompted, enter Yes to begin the deployment:

   Continuing will configure this host for serving as hypervisor and will create a local VM with a running engine.
   The locally running engine will be used to configure a new storage domain and create a VM there.
   At the end the disk of the local VM will be moved to the shared storage.
   Are you sure you want to continue? (Yes, No)[Yes]:

5. Configure the network. Check that the gateway shown is correct and press Enter. Enter a pingable address on the same subnet so the script can check the host's connectivity.

   Please indicate a pingable gateway IP address [X.X.X.X]:

6. The script detects possible NICs to use as a management bridge for the environment. Enter one of them or press Enter to accept the default.

   Please indicate a nic to set ovirtmgmt bridge on: (ens1, ens0) [ens1]:

7. Specify how to check network connectivity. The default is dns.

   Please specify which way the network connectivity should be checked (ping, dns, tcp, none) [dns]:

   ping
   Attempts to ping the gateway.
   dns
   Checks the connection to the DNS server.
   tcp
   Creates a TCP connection to a host and port combination. You need to specify a destination IP address and port. Once the connection is successfully created, the network is considered to be alive. Ensure that the given host is able to accept incoming TCP connections on the given port.
   none
   The network is always considered connected.

8. Enter a name for the data center in which to deploy the host for the self-hosted engine. The default name is Default.

   Please enter the name of the data center where you want to deploy this hosted-engine host.
   Data center [Default]:

9. Enter a name for the cluster in which to deploy the host for the self-hosted engine. The default name is Default.

   Please enter the name of the cluster where you want to deploy this hosted-engine host.
   Cluster [Default]:

10. If you want to deploy with a custom Engine Appliance image, specify the path to the OVA archive. Otherwise, leave this field empty to use the Engine Appliance.

    If you want to deploy with a custom engine appliance image, please specify the path to the OVA archive you would like to use.
    Entering no value will use the image from the rhvm-appliance rpm, installing it if needed.
    Appliance image path []:

11. Enter the CPU and memory configuration for the Engine virtual machine:

    Please specify the number of virtual CPUs for the VM. The default is the appliance OVF value:
    Please specify the memory size of the VM in MB. The default is the maximum available:

12. Specify the FQDN for the Engine virtual machine, such as manager.example.com :

    Please provide the FQDN you would like to use for the engine.
    Note: This will be the FQDN of the engine VM you are now going to launch, it should not point to the base host or to any other existing machine.
    Engine VM FQDN []:

13. Specify the domain of the Engine virtual machine. For example, if the FQDN is manager.example.com , then enter example.com.

    Please provide the domain name you would like to use for the engine appliance.
    Engine VM domain: [example.com]

14. Create the root password for the Engine, and reenter it to confirm:

    Enter root password that will be used for the engine appliance:
    Confirm appliance root password:
15. Optional: Enter an SSH public key to enable you to log in to the Engine virtual machine as the root user without entering a password, and specify whether to enable SSH access for the root user:

    You may provide an SSH public key, that will be added by the deployment script to the authorized_keys file of the root user in the engine appliance.
    This should allow you passwordless login to the engine machine after deployment.
    If you provide no key, authorized_keys will not be touched.
