Workload Domain Prep - Host Commissioning PDF

Summary

This document provides notes and checklists for commissioning ESXi hosts in a VMware Cloud Foundation environment. It covers adding and validating hosts, network pool configuration, workload domain creation and expansion, and vSAN design and verification.

Full Transcript

**Workload Domain Prep**

**Host Commissioning**

- Hosts are added to an existing workload domain or used to create a new one.
- Hosts are pre-imaged with the correct ESXi version.
- DNS must resolve.
- The storage type cannot be changed after a host is commissioned.
- Network pools must exist BEFORE commissioning.
- Hosts can be commissioned individually or in bulk.
- NTP/Syslog enabled.
- SDDC Manager creates the distributed switches; do NOT create them ahead of time. Hosts should have a standard switch only.

**Host Validation**

1. Retrieve the fingerprint (certificate).
2. Compare the certificate with VCF.
3. Select the check mark for each host to confirm its certificate.
4. Alternately, turn on Confirm ALL Fingerprints.
5. Remove hosts whose fingerprints cannot be validated.
6. VALIDATE ALL.

**Bulk.commission.json (template)**

- The same changes can be made through the API.
- Maximum of 32 hosts at a time.

![Bulk commission template](media/image2.png)

**View Host Inventory**

- Hosts page in SDDC Manager.
- Assigned hosts: commissioned hosts assigned to a workload domain.

**Network Pool**

- A range of IP addresses for specific services that VCF assigns to hosts.
- Assigned for the following host network types:
  - vMotion (required)
  - vSAN (optional)
  - NFS (optional)
  - iSCSI (optional)
- Raw iSCSI is NOT a storage option; the iSCSI network is for vVols configuration.

![Network pool diagram](media/image4.png)

- MTU 9000, not 8900 (the 8900 value is lab-only).

**Implementation**

- Get the addressing right.
- The management network is added to a vSphere standard switch during initial configuration.

![Network pool implementation](media/image8.png)

**Sizing and Managing Network Pools**

Consider:

- Physical network architecture
- Number of hosts
- Type of storage
- Non-VCF hosts that share the network
- Capacity for planned growth

Best practice: allocate IP addresses in small blocks rather than reserving large blocks of IP addresses up front.

**Editing Network Pools**

- You can change the name of the network pool.
- You can add additional included IP address ranges.
- Plan your network pools to reserve what you need for a pool rather than larger blocks of addresses.
- Once an IP address range is added to a pool, those addresses are locked to that pool.
- A network pool cannot be deleted if any IP addresses in the pool are in use.
- Up to 64 nodes (at the time of the video).
- 24-bit maximum, which allows the maximum range of a Class C network (usually overkill).

**Delete Network Pools**

- Ensure the pool is unused before you delete it.
- You cannot delete the network pool that was created during the bring-up process.
- Think through network pools up front; as a best practice, treat them as something you cannot change.

**Creating Network Pools: Steps**

You create network pools using the Create Network Pool wizard:

- Enter the network pool name.
- Select the network types.
- Enter a VLAN ID between 1 and 4094.
- Enter an MTU between 1500 and 9216.
- Enter the subnet network address, subnet mask, and gateway address.
- Enter a valid IP address range from the subnet.

All VI workload domains must be configured with a vSphere vMotion network. Depending on your requirements, you can add each of the network types to the network pool.
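The wizard inputs above map onto the SDDC Manager public API (the notes mention that the same changes can be made through the API). Below is a minimal sketch, assuming the `/v1/network-pools` endpoint and a bearer token already obtained from `/v1/tokens`; the spec field names and the addressing are illustrative only, so verify them against the API reference for your VCF release or against a pool exported from the UI.

```bash
# Illustrative sketch: create a network pool through the SDDC Manager API.
# Assumptions: SDDC_MANAGER and TOKEN are already set; the field names and
# addressing below are examples only -- check the VCF API reference first.
curl -sk -X POST "https://${SDDC_MANAGER}/v1/network-pools" \
  -H "Authorization: Bearer ${TOKEN}" \
  -H "Content-Type: application/json" \
  -d '{
        "name": "wld01-np01",
        "networks": [
          {
            "type": "VMOTION",
            "vlanId": 1611,
            "mtu": 9000,
            "subnet": "172.16.11.0",
            "mask": "255.255.255.0",
            "gateway": "172.16.11.1",
            "ipPools": [ { "start": "172.16.11.10", "end": "172.16.11.50" } ]
          },
          {
            "type": "VSAN",
            "vlanId": 1612,
            "mtu": 9000,
            "subnet": "172.16.12.0",
            "mask": "255.255.255.0",
            "gateway": "172.16.12.1",
            "ipPools": [ { "start": "172.16.12.10", "end": "172.16.12.50" } ]
          }
        ]
      }'
```

Whatever the exact spec looks like in your release, the VLAN ID and MTU values must stay within the wizard's documented ranges (1-4094 and 1500-9216), and the vMotion network type is always required.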
![Create Network Pool wizard](media/image10.png)

Process to create a network pool: Administration > Network Settings, then select the network types (vSAN, NFS, iSCSI, vMotion).

![Network settings](media/image13.png)

Commission hosts through SDDC Manager in VCF.

![Commission hosts](media/image15.png)

![Host commissioning](media/image17.png)

**Workload Domain Configuration**

- Transport and sub-transport zone profiles (an NSX concept) span logical networking over the physical infrastructure.
- Zones:
  - Overlay: internal Geneve-encapsulated tunnels between ESXi hosts and the NSX Edge.
  - VLAN: NSX Edge uplinks establish northbound connectivity to the top-of-rack (ToR) switches; workload VLANs connect to the physical network.
- Rack to rack: a leaf-spine architecture is most common.
- The Edge spans the ToR and the hypervisor.

**Sub-Transport (new in 5.1)**

- Adds per-rack NSX configuration.

**vSAN Stretched Clusters (new in 5.2)**

- vSAN stretched cluster customers can use sub-transport profiles in VCF 5.2.

**Guidelines**

- A transport node can have multiple NSX virtual switches IF the transport node has more than 2 pNICs.
- A transport zone can be attached to only ONE NSX virtual switch on a given transport node.
- An NSX virtual switch can attach to only ONE overlay transport zone.
- An NSX virtual switch can attach to multiple VLAN transport zones.
- A segment can belong to only ONE transport zone.

Characteristics of a transport zone:

- One transport zone can have all types of transport nodes (NSX Edge or ESXi).
- A transport zone does NOT represent a security boundary.
- A hypervisor transport node can belong to multiple transport zones.

vSAN stretched clusters:

- vSAN stretched clusters are configured using the API or a JSON file.
- A separate network pool (not a static IP address pool) is configured for the vMotion and vSAN networks for all hosts in the secondary fault domain participating in a vSAN stretched cluster.
- ESXi host management addresses can be configured on separate networks for each fault domain if routes are created between them.

![Stretched cluster networking](media/image19.png)

**Workload Domain Management Tasks (VCF 5.1)**

- Deployed through the SDDC Manager UI.
- Enhanced workflows: a variety of switch configuration options depending on the available pNICs and traffic isolation policies.

**Creating VI Workload Domains**

- Storage type (vSAN, NFS, VMFS on FC, vVols).
- Name the domain and the lifecycle manager used.
- Name the cluster and (optionally) the cluster image for the lifecycle manager.
- vCenter settings.
- NSX Manager appliance IP addresses and cluster.
- vSAN configuration (if applicable).
- Datastore name for NFS, VMFS on FC, or vVols storage (NOT vSAN).
- ESXi host VDS (vSphere Distributed Switch) configuration.
- Host assignment and licensing.

*The management domain has a dedicated NSX management cluster.
*The first workload domain gets a dedicated NSX management cluster, which can be shared across other workload domains if you wish.

**Expanding Workload Domains**

- You must use SDDC Manager to expand an existing workload domain.
- Add clusters: clusters can use different storage types, network pools, and hardware profiles.
- New clusters must use the same lifecycle management option as the VI workload domain.
- Multiple clusters can be added in parallel.
- The number of hosts in a cluster must be within the vCenter configuration maximums (64).
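Because every expansion goes through SDDC Manager, the current domain and cluster inventory is also visible through its public API, which is handy to review before adding a cluster. A minimal read-only sketch, assuming the standard `/v1/tokens`, `/v1/domains`, and `/v1/clusters` paths; the SDDC Manager FQDN, the credentials, and the response field names used in the `jq` filters are assumptions to adapt to your environment.

```bash
# Illustrative sketch: authenticate to SDDC Manager and list domains/clusters.
# The FQDN, account, and response field names below are assumptions.
SDDC_MANAGER="sddc-manager.vcf.sddc.local"

# Request an access token from the VCF public API token endpoint.
TOKEN=$(curl -sk -X POST "https://${SDDC_MANAGER}/v1/tokens" \
  -H "Content-Type: application/json" \
  -d '{"username": "administrator@vsphere.local", "password": "<password>"}' \
  | jq -r '.accessToken')

# List workload domains, then clusters, before planning an expansion.
curl -sk -H "Authorization: Bearer ${TOKEN}" \
  "https://${SDDC_MANAGER}/v1/domains" | jq '.elements[].name'

curl -sk -H "Authorization: Bearer ${TOKEN}" \
  "https://${SDDC_MANAGER}/v1/clusters" | jq '.elements[].name'
```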
*Recommendation: keep these the same within a cluster/workload domain (this is a difference between vSphere and VCF).

**Adding Hosts**

- Hosts must use the same storage type as the existing cluster hosts.
- Hosts must use the same network pool.
- Hardware should be homogeneous across the cluster.

**Shrinking VI Workload Domains**

- Decommission hosts after removing them.

**Delete Whole Workload Domain**

- *A résumé-generating event, so make sure you know what you're doing.

![Delete workload domain](media/image24.png)

**vSAN Configuration**

1. vSAN maintains a single redundant copy of all data: minimum of 3 hosts.
2. vSAN maintains 2 redundant copies of data: minimum of 5 hosts.

(See the host-minimum formula after the design considerations below.)

![vSAN configuration](media/image26.png)

**Workload Domain Design Considerations**

ESXi design:

- *Best practice: keep hosts consistent across the cluster (all nodes). Try to keep them consistent across the workload domain as well, though that may not be realistic.
- VMware-certified CPU and memory.
- Storage devices.
- Out-of-band management interface.
- Network interfaces.
- ESXi host size: OEMs tweak sizing to their hosts -- use their guidance.

vCenter design (vCenter size):

- Tiny: 10 hosts or 100 VMs
- Small: 100 hosts or 1,000 VMs
- Medium: 400 hosts or 4,000 VMs
- Large: 1,000 hosts or 10,000 VMs
- X-Large: 2,000 hosts or 35,000 VMs

HA design and fault tolerance settings.

vSphere network design:

- How many NICs, appropriate VLANs, edges, BGP configuration.

![Network design](media/image28.png)

NSX: dedicated or shared fabric.

- *Best practice: if many VI workload domains are alike, use a shared NSX fabric; if they differ, use dedicated NSX Managers.
- Examples of dedicated: VI, Tanzu, or VDI.

![NSX fabric options](media/image30.png)

Shared storage design:

- Based on the storage you have and what you need: fault tolerance, vSAN, iSCSI, NFS, vMotion, stretched clusters (5 ms latency, 10 Gbps).
- **The management domain must be stretched to have a stretched workload domain.**

vSAN cluster design:

- vSAN datastore or vSAN cluster as the principal storage method.
  - vSAN datastore: identify the vSAN policies that each workload requires.
  - vSAN cluster: determine the level of HA for workloads in each domain cluster.
- HA and admission control.
- Maintenance mode selection: ensure accessibility or full data migration.
- Share the existing SSO domain or create a new one (new in 5.2).

![Design considerations](media/image32.png)
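The two vSAN Configuration minimums above follow the usual mirroring relationship between failures to tolerate (FTT) and host count. As a quick check (this applies to RAID-1 mirroring; erasure-coded RAID-5/6 layouts have their own minimums):

$$
N_{\text{hosts}} \;\ge\; 2 \cdot \mathrm{FTT} + 1
\qquad\Rightarrow\qquad
\mathrm{FTT}=1:\ N \ge 3,
\qquad
\mathrm{FTT}=2:\ N \ge 5
$$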
**Lab: Task List**

- In the **Hosts and Clusters** inventory in the left pane of the vSphere Client, review the ESXi hosts listed under the sa-m01-cl01 and sa-wld01-cl01 clusters.
- Click the **SDDC Manager Dashboard** browser tab to open the SDDC Manager UI.
- In the navigation pane on the left, select **Inventory** > **Workload Domains**.
- In the Domain column of the workload domains table, click **sa-m01**.
- On the sa-m01 page, click the **Clusters** tab.
- In the Clusters column, click **sa-m01-cl01**.
- On the sa-m01-cl01 page, click the **Hosts** tab.
- On the Hosts tab, select the **esxi-9.vcf.sddc.local** check box and click **REMOVE SELECTED HOSTS**. The "Remove these hosts from the cluster?" dialog box appears, but do not click **REMOVE** yet.
- Open another instance of SDDC Manager in a new browser tab.
- In the navigation pane on the left, select **Inventory** > **Workload Domains**.
- In the Domain column of the workload domains table, click **sa-wld01**.
- On the sa-wld01 page, click the **Clusters** tab.
- Click the vertical ellipsis icon next to sa-wld01-cl01 and select **Add Host**.
- On the Host Selection page of the Add Hosts wizard, select the **esxi-8.vcf.sddc.local** check box and click **NEXT**.
- On the Switch Configuration page, review the configuration details and click **NEXT**.
- On the License page, select the default license from the **VMware vSphere** drop-down menu and click **NEXT**.
- On the Review page, review the details of the host being added to the cluster and click **FINISH**. The "Addition of 1 host(s) is in progress" message appears at the top of the sa-wld01 page.
- While the host is being added to the sa-wld01-cl01 cluster, return to the browser tab with the original instance of SDDC Manager.
- In the "Remove these hosts from the cluster?" dialog box, click **REMOVE**. The "Remove of 1 host(s) for Cluster is in progress" message appears at the top of the sa-wld01 page.
- Expand the Tasks pane to monitor the progress of the "Removing host(s) from cluster" and "Adding new host(s) to cluster" tasks.
- When both tasks are complete, click the **vSphere** browser tab to return to the vSphere Client.
- Review the ESXi hosts listed under the sa-m01-cl01 and sa-wld01-cl01 clusters to confirm that the add and remove processes were successful.
- Close Google Chrome to complete the simulation.

![Lab simulation](media/image35.png)

**vSAN in VCF Overview**

- The management domain must use vSAN (ESA or OSA) as its principal storage.
- *All storage must be configured before it is brought into VCF, unless you use vSAN.
- vSAN storage is software-defined: it replaces physical arrays with virtual, host-local storage.
- VCF uses vSAN as the principal storage type.
- vSAN is enabled at the cluster level.
- All hosts must have a vSAN-enabled VMkernel interface (see the esxcli check after this section).
- Local disks on each host are pooled to create the vSAN datastore.
- The cluster requires a network dedicated to vSAN traffic.
- vSAN ESA (Express Storage Architecture, recommended): no separate cache tier is needed with all-SSD storage. A cache tier adds no performance benefit, and the write endurance of the SSDs actually improves without it. There are no disk groups; a storage pool is created instead.
- OSA (Original Storage Architecture) is the older model.

Advantages of using vSAN:

- Extends virtualization to the storage layer.
- VCF automates vSAN lifecycle management.
- Integrates storage and VM administration.
- Brings storage to the application using storage policy-based management (SPBM): a framework of rules applied to a VM or virtual disk that vSAN must follow (the performance, capacity, redundancy, and so on that the VM needs). The VM administrator effectively becomes the storage administrator for that VM.
- Everything is built into ESXi, and monitoring is on par.
- Supports modern design for application and microservices requirements.
- Increases native monitoring and visibility.

![vSAN advantages](media/image37.png)
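As noted in the overview, every host needs a vSAN-enabled VMkernel interface on a network dedicated to vSAN traffic. A quick way to confirm this from the ESXi shell, using standard esxcli namespaces (the `vmk2` interface name is only an example):

```bash
# List the VMkernel interfaces that are tagged for vSAN traffic on this host.
esxcli vsan network list

# Show the IPv4 configuration of the vSAN VMkernel interface reported above
# (vmk2 is an example name -- substitute the interface from the previous output).
esxcli network ip interface ipv4 get -i vmk2
```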
**vSAN in the Management Domain**

- Bring-up process: vSAN is created; the management domain requires a minimum of 4 ESXi hosts.
- Use the vSAN ReadyNode Sizer tool for capacity planning.
- Create VI workload domains after the management domain is created.
- Create clusters with a minimum of 3 ESXi hosts with local storage (vSAN).
- Deploy workload VMs when using a consolidated architecture.
- Deploy solutions in VCF.

**Scaling vSAN Clusters in VCF**

- Hosts must be added to the cluster using SDDC Manager.
- Hosts must be commissioned using the same network pool as the other hosts in the cluster.
- Use homogeneous hardware across the cluster.
- If you add disks or disk groups to one host, do the same for all hosts.
- Add disks and disk groups using the vSphere Client.

*Best practice:

- **Always use SDDC Manager to add hosts to the cluster, and use the same network pool.**
- **Use the vSphere Client to add disks to disk groups.**

**On the ESXi host (ESXi shell / SSH):**

- **esxcli vsan debug controller list** -- shows the number of storage controllers and how many are used by vSAN (see below).

![Controller list output](media/image40.png)

- **esxcli network vswitch dvs vmware list**
- **df -h**

![df output](media/image42.png)

- **vdq -qH** (easier-to-read format)

(A combined sketch of these host-side checks follows at the end of this section.)

**In the SDDC Manager UI:**

- Go to Workload Domains, select the management domain, open the Clusters tab, select the cluster, and verify the datastore name and free space under the vSAN section.

![SDDC Manager vSAN view](media/image44.png)

**vVols are supported as principal storage for VI workload domain deployment, NOT for management domain deployment.**

![Principal storage options](media/image46.png)
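As referenced above, the host-side checks can be run together in one pass. A minimal sketch using only the commands listed in these notes, executed in the ESXi shell (for example over SSH):

```bash
# Host-side vSAN verification pass (run on an ESXi host in the cluster).

# Storage controllers present on the host and which ones vSAN is using.
esxcli vsan debug controller list

# Distributed switches the host participates in (uplinks, MTU, port groups).
esxcli network vswitch dvs vmware list

# Mounted filesystems and free space, including the vSAN datastore.
df -h

# Per-disk vSAN eligibility and usage in an easier-to-read format.
vdq -qH
```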
