Summary

This document outlines VMware Cloud Foundation (VCF). It details VCF components, architecture, deployment procedures, and licensing. The document contains questions and answers about VCF, and is geared towards professionals.

Full Transcript

Resources: VMware Cloud Foundation FAQs, VMware Cloud Foundation Resource Center, VMware Cloud Foundation (VCF) Blog, VMware vSAN Documentation, VMware vSphere Documentation

Components Framework
- Supports a cloud operating model -- on-prem acts like a cloud
- Cloud management -- compute, storage, and networking (wrapped in security)
- Automation and configuration workflow, then validation
- Traditional hardware on-prem and cloud (hybrid cloud approach supported)

Component versions
- Cloud Builder 5.2 -- used once, to build the management domain
- SDDC Manager 5.2 -- star of the show, the brains of operations; performs initial configuration. Log in as with the vSphere Client -- everything is coordinated from here. Automates the build-out of the environment
- vCenter 8.0 Appliance and ESXi 8 -- base of resources and operations
- vSAN Witness Appliance 8.0 -- manage storage without third-party products
- NSX 4.2 -- networking: routing and switching east/west, and north/south routing peering to top-of-rack or physical routers
- Aria Suite Lifecycle 8.18 -- day-2 operations capabilities; large suite of products

Architecture and Components

Cloud Builder -- used once, to deploy the management domain (where all management components live: vCenters, NSX Managers, Aria, Identity Manager). After the initial bring-up, SDDC Manager initiates all other workflows going forward.
- Deploys and configures vCenter
- Creates a vSAN cluster using 4 ESXi hosts
- Configures host networking and cluster features
- Deploys the NSX Manager cluster
- Configures host networking to support NSX
- Creates NSX transport zones and configures tunnel endpoints (TEPs)
- The Cloud Builder VM can be removed after build-out, or kept to build other instances

SDDC Manager
- Maintains an inventory of managed objects
- Provides an HTML5 UI and API gateway
- Deploys and configures software solutions
- Orchestrates software updates and configuration tasks on managed objects
- Maintains network pools for consumption by managed objects
- Creates and manages VI workload domains
- Dashboard

vSAN
- Aggregates local storage devices on ESXi hosts into a single shared storage pool for all hosts
- Integrated with ESXi; managed using the vSphere Client
- Policy-based storage consumption
- Consistent, best performance
- Stretched cluster capability
- All-flash and hybrid configurations
- Support for OSA and ESA
- Deduplication and compression (all-flash only)
- Native health service

NSX -- E/W and N/S routing, and peering with native devices in the data center
- E/W routing; BGP to data center devices for N/S
- Firewall capabilities on tier-0
- Basic multi-tier routing

vSphere with Tanzu -- app modernization: any and all workloads in a VCF instance, with consistent networking across private clouds, public clouds, and containers
- Runs Kubernetes workloads directly on ESXi hosts and creates upstream Kubernetes clusters in dedicated resource pools
- Called Workload Management in vCenter

Aria Suite -- day-2 and beyond operations: network monitoring, log analysis, journal analysis, and automation (including automation of VMs)

ESXi/vCenter/vSAN -- the pillar basis of compute, memory, and standard resources
- vSAN builds out the management domain (bring-up from scratch; external storage is also supported)

Workload domain example: Tanzu mgmt (NSX, vCenter), mgmt,
wd-01, wd-02

vSphere and VCF Licensing (vSphere 8.0u2b and VCF 5.1.1 and later)

**[VVF]** -- vSphere Foundation (previously vSphere, and limited)
- vCenter, vSphere with Tanzu, ESXi, and vSAN (100 GB per core)
- Access to standard features in Aria Suite Lifecycle, Operations, and Logs -- not as streamlined as VCF
- Potential add-ons: Edge, load balancer, intelligence services

**[VCF]** -- access to all
- SDDC Manager, vSphere, NSX (not the security piece), Aria Suite, vSAN, AON (Aria Operations for Networks), HCX, Data Services Manager (DSM)
- Add-ons: Live Site Recovery, Live Cyber Recovery, the security part of NSX (firewall and ATP), load balancer, Tanzu, and Private AI Foundation (PIAF) -- new processor capabilities, cloud operating model

Applying Licenses -- Solution License Key
- Add directly to vCenter: ESXi, vCenter, Tanzu
- After applying the solution license key and adding vCenter to Aria, NSX, and HCX, these features auto-entitle themselves with VCF; until then they are in evaluation mode
- Look for more automation on licensing in the future

Brownfield Deployment -- stand up the management domain; the import and convert features are brand new (resources need to be there)
- Requires VCF 5.1.1 and vSphere 8.0 U2b
- Individual keys appear in the licensing portal
- If running VCF < 5.1.1, you must downgrade the solution key and manually apply individual component licenses
- License required for the VMware Load Balancer add-on for VCF

Q: Which VMware Cloud Foundation component aggregates local storage devices on ESXi hosts and creates a single shared storage pool across all hosts? A: vSAN

Planning and Preparing the VCF Design

Workload Domain Types (generic term)
1. Management workload domain -- 1 per VCF instance (all management components: SDDC Manager, vCenter, Aria, NSX Manager, etc.)
2. Workload domains -- VI (vSphere based; the standard types of vSphere clusters, also enabled for Tanzu, or specialized such as virtual desktops)
   - Purpose based -- follow the same kind of policy
   - Control role-based access
3.
Single Sign-On (isolated VI workload domain)
   - 14 workload domains limited to SSO + 1 management domain = 15 total (vCenter limit)

Architecture Types
1. Consolidated (small environments)
   - Compute workloads co-reside in the management workload domain
   - The vSphere cluster shares resource pools
   - vSphere with Tanzu and Kubernetes are supported
   - Can scale out to the Standard architecture
2. Standard
   - The management domain is DEDICATED to management components
   - VI workload domains run customer workloads

Management Domain Features
- Includes all the components required to enable centralized administration, automation, and monitoring of the entire software-defined data center (SDDC) stack
- Consists of a minimum of four hosts
- SDDC Manager, which deploys the VI workload domains
- Can optionally include VMware Aria Suite components
- Built using vSAN storage

Management Domain Decisions
- Consolidated or Standard?
- How many nodes? (minimum of 4 hosts, for vSAN)
- Sizing of ESXi hosts: disk groups, physical disk infrastructure, all-flash (best performance) -- OEMs help with this
- Business continuity

Availability Zones
- Stretched design: 10 Gb throughput and less than 5 ms latency between zones; 150 ms latency between instances
- Run management components in AZ1, for example; AZ2 is the backup -- the advantage of multiple zones

Single availability zone
- All-flash vSAN
- Optional NFS supplemental storage

Multiple availability zones
- Minimum of 8 hosts in a stretched cluster (minimum of 4 hosts in EACH availability zone)
- vSAN witness appliance in a 3rd location
- vSAN stretched cluster required as the first cluster
- All-flash
- Optional NFS supplemental storage

VMware Validated Solutions
- Framework design and detailed design
- Planning and preparation
- Implementation procedures (UI)
- Operational guidance
- Solution interoperability
For more information, see "VMware Validated Solutions" and its introductory video.
**[Isolated workload domain means SSO (single sign-on)]**

Design Decisions -- requirement, justification, implication

Planning and Preparation Workbook (Excel spreadsheet)
Q: Which tab in the VMware Cloud Foundation Planning and Preparation Workbook provides a summary of the infrastructure configuration requirements that must be met before you deploy VMware Cloud Foundation?
- **SDDC Inputs** tab
- **Infrastructure** tab
- **Management Domain Sizing Inputs** tab
- [Prerequisite Checklist tab] -- Correct!

Configuration statistics for the final environment are on the Management tab.

Q: What are the characteristics of the management domain? vSAN storage, hosts, SDDC Manager (not 3 or 5 hosts, and not NFS storage)

Deployment

Management domain -- automated, based on the workbook and Cloud Builder.

Replacing Certs
1. Install the ESXi host
2. Validate that the TSM-SSH service is running
3. Use the DCUI or SSH to log in
4. Rename the existing certs in /etc/vmware/ssl
5. Copy the new signed certs to /etc/vmware/ssl
6. Rename the new certs to replace the old certs
7. Stop SSH

Convert the deployment parameters Excel workbook to a JSON file and add cert details
1. Use SSH to access the VMware Cloud Builder appliance
2. Copy the deployment parameters workbook to /hom/adm
3. Run ./sos --jsongenerator with the input workbook and the design output location (routing info, host names, IPs of DNS and NTP servers, etc.)
4. Copy the file to the deployment system and provide the JSON
5. Edit the JSON file to include the cert details
6. Upload the deployment parameters
7. Cloud Builder implements the JSON cert for each host

ESXi hosts (4)

Deploying VCF
- vSAN image (OEM) -- vSphere Lifecycle Manager
- SSH/NTP -- turn on
- VMK port configured for management on a standard switch
- vSAN capacity (do NOT configure -- automated by the Cloud Builder appliance)
- Cloud Builder needs to run on another ESXi host; it is used to bring up the first cluster of the management domain
- The parameter workbook is deployed/imported into Cloud Builder to build out the management
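The JSON-generator step above was garbled in the notes; a sketch of the invocation is below. The flag spelling is reconstructed and the workbook and design names are placeholders, so verify against the sos help output on your Cloud Builder appliance before using it.

```shell
# Sketch of the workbook-to-JSON step on the Cloud Builder appliance.
# WORKBOOK and DESIGN are hypothetical names, not values from the notes.
WORKBOOK="./deployment-parameters.xlsx"
DESIGN="vcf-ems"

# Reconstructed command line; flag names may differ by VCF release.
CMD="./sos --jsongenerator --jsongenerator-input $WORKBOOK --jsongenerator-design $DESIGN"

# Printed rather than executed, since sos exists only on Cloud Builder:
echo "$CMD"
```

The generated JSON is what you then edit to add the certificate details before upload.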
domain
- Extensive set of validations (descriptive) -- go back, fix issues, and then deploy
- SDDC Manager gets deployed
- vCenter -- management VMs
- NSX management cluster -- networking for the management domain: 3 appliances and a VIP
- Bring-up builds the distributed switch and automatically deletes the standard switch. DO NOT PRE-BUILD -- it is AUTOMATED!

Bring-Up Process -- Cloud Builder brings up the management domain. Cloud Builder is no longer needed after this; SDDC Manager works thereafter to bring up all workload domains.

Not included in Cloud Builder:
- vSAN witness appliance (for vSAN stretched clusters)
- VMware Aria Suite Lifecycle (must be downloaded in SDDC Manager AFTER deployment)

The deployment workbook is uploaded to Cloud Builder.

vSphere Lifecycle Manager cluster image -- includes:
- ESXi software
- Drivers
- Firmware
- Configuration settings for ESXi hosts (network, security, advanced)
- Vendor add-ons
- Security patches

In the example, vSphere Lifecycle Manager cluster imaging was enabled in the Deployment Parameters Workbook, resulting in ESXi cluster homogeneity checks during validation.
Cloud Builder can be deployed on ESXi, Workstation Pro, or Fusion Pro.

Lab on Deployment

Preliminary SDDC Setup Tasks (wizard)
- Configure NSX Manager and SDDC Manager backups (backup server)
  - Register the server, configure the schedule: hourly, weekly, what time, or whenever SDDC Manager completes a task
- Connect to an identity provider
  - Types: Embedded, AD FS, Okta, Microsoft Entra ID, Ping Federate
  - Single sign-on, capable of spanning multiple vCenter single sign-on domains
  - Embedded: AD over LDAP or OpenLDAP -- identity source name, base distinguished name for users, base distinguished name for groups, domain alias, username/password, certificates (for LDAPS)
- Connect to the online depot to download software bundles (there is also an offline bundle utility)

USER MANAGEMENT
- Admin, Operator, and Viewer roles are in SDDC Manager
- Admin -- access to all functionality of the UI and API
- Operator -- cannot access user management, password management, or backup configuration settings
- Viewer -- can only view; user management and password management
are hidden.

INSIDE SDDC MANAGER
- Change host and other component passwords
- Deploy/configure NSX Edge
- Add hosts to an existing cluster
- Create a new cluster

OUTSIDE SDDC MANAGER
- Change roles and permissions for AD users and groups
- Add new resource pools
- Create port groups
- Apply updated license keys to vSphere that are added to SDDC Manager

Safeguarding VCF Components
- Restrict access so vSphere admins can only access their own assets, and educate them about potential damage
- Restrict access to actions that can affect VCF assets, and create procedures for requesting changes through SDDC Manager
- Customers must assess their current admin methodologies; take the most restrictive approach
- NSX Manager and SDDC Manager -- E/W conflict potential: restrict access for each NSX admin, and restrict access that can affect VCF

User-created service accounts (API accounts)
- Benefit: Aria Automation, Aria Orchestrator, and custom apps that make API calls to VCF

System-created service accounts
- Automatically created and removed by the system as needed
- Appear in the SDDC Manager UI; you can rotate passwords as desired
- Used between SDDC Manager and each ESXi host, Aria Suite Lifecycle and vCenter, and NSX Manager and vCenter
- Naming convention: svc- prefix, e.g., svc-vcf-esxi-1

Lab: Administration drop-down > Access Control > Global Permissions -- vclass.local\vcfadmin, SDDC Manager, VMware123!
Administration > Single Sign-On > Add user to group
- Add the Operator role to an account called VCFPriv

API commands to do specific tasks -- Developer Center > API Explorer
- Users: GET /v1/roles -- Execute runs the command for us; the actual output is below (simple)
- Copy the ID given
- POST section: expand it, Try It Out, expand the user link
- Copy the API key
- Issue a POST command with this service account
- Executing a command: Tokens section, POST /v1/tokens -- in the token creation spec, enter the API key from above

Password Management
Covered components: ESXi hosts, vCenter Server appliance, NSX Manager, NSX Edge, Aria Suite Lifecycle, other Aria products when deployed in aware mode, and the SDDC Manager backup user. Other VMware solutions that integrate with VCF should follow the same password management practices.

Best Practices
- NSX: 12-character minimum; VCF 5.2: 15-character password requirement, 1 uppercase, 1 special character, 1 numeral
- Change passwords immediately after deployment
- Rotate at least every 90 days
- Schedule password rotation in SDDC Manager (NOT available for ESXi hosts)
- Monitor for expiration
- Secure up-to-date copies according to company policy

Rotating Passwords
- Rotate All -- change passwords for all selected accounts
- Rotate Now -- immediately change the password for 1 or more selected accounts (a subset of all)
- Schedule Rotation -- automation on a 30-, 60-, or 90-day schedule from the enabled date (or disable the schedule)
  - Enables auto-rotate for vCenter service accounts; NOT available for ESXi hosts
  - Runs at midnight, per the NTP source time zone

Rotate Password default policy
- 20 characters in length
- 1 uppercase letter, 1 number, 1 special character (@, #, $, ^, *)
- No more than 2 of the same character in a row
- Auto-rotate is enabled for vCenter; may take up to 24 hours.
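The default rotation policy above can be sanity-checked with a small script. This is an illustrative sketch only -- the `check_policy` function is my own, not a VCF tool; it just mirrors the policy as stated in the notes.

```shell
# Check a candidate password against the SDDC Manager default rotation
# policy described above: 20+ chars, 1 uppercase, 1 digit, 1 special
# character from @ # $ ^ *, and no character repeated 3+ times in a row.
# NOTE: hypothetical helper for illustration; not part of VCF.
check_policy() {
  pw=$1
  [ "${#pw}" -ge 20 ]                    || { echo "too short"; return 1; }
  printf '%s' "$pw" | grep -q '[A-Z]'    || { echo "needs uppercase"; return 1; }
  printf '%s' "$pw" | grep -q '[0-9]'    || { echo "needs digit"; return 1; }
  printf '%s' "$pw" | grep -q '[@#$^*]'  || { echo "needs special"; return 1; }
  # reject any character appearing three times consecutively
  printf '%s' "$pw" | grep -q '\(.\)\1\1' && { echo "repeated chars"; return 1; }
  echo "ok"
}

check_policy 'Vmw4re$Str0ngPassw0rd'   # prints: ok
check_policy 'Short1$A' || true        # prints: too short
```

The same checks apply whether the rotation is triggered manually (Rotate Now) or on the 30/60/90-day schedule.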
Update Password -- update/sync an entity's password with SDDC Manager
Remediate Password -- push from SDDC Manager to the entity

lookup_passwords command -- SSH to SDDC Manager (useful when taking over an existing environment)

Exceptions
- Use the passwd command on the Linux command line to change passwords for the root and vcf accounts
- Use an API call to change the password used for API calls
- SDDC Manager uses the admin account for internal API calls -- change this one! (It is the key to everything.)
- Password expiry notice -- a scheduled job that runs in SDDC Manager

To change the root and vcf passwords:
- Access SDDC Manager with SSH as the vcf account
- su
- passwd root
- passwd vcf

To change the API admin password:
- Access SDDC Manager with SSH as the vcf account
- Run the su command to change to the root account
- Run **[/opt/vmware/vcf/commonsvcs/scripts/auth/set-basicauth-password.sh admin]**

API Commands
- Access token -- refreshed every hour
- Refresh token -- refreshed every 24 hours
- The pair is sent together from SDDC Manager to the external application

Uses
- Manage VCF assets
- User credential management
- Manage networking, including Application Virtual Networks (AVNs) and network pools
- Perform lifecycle management
- Create and manage vCenter clusters and workload domains
- Manage multiple instances
- List local OS user accounts

For information about how to use the complete list of available APIs, see the VMware Cloud Foundation API Reference Guide.

API Explorer tab in Developer Center -- list of components, with a search option.
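The token flow above can be sketched with curl. The hostname and key value here are placeholders, and the request body shape is an assumption based on the notes ("enter the API key" in the token creation spec); the actual curl call is left commented out so the sketch runs without a live SDDC Manager.

```shell
# Sketch: exchange a service-account API key for a token pair via
# POST /v1/tokens, then call other APIs with the access token.
# SDDC_FQDN and API_KEY are placeholders, not values from the notes.
SDDC_FQDN="sddc-manager.example.local"
API_KEY="PASTE-API-KEY-HERE"

# Body for POST /v1/tokens: yields an access token (refreshed hourly)
# plus a refresh token (refreshed every 24 hours).
BODY=$(printf '{"apiKey": "%s"}' "$API_KEY")
echo "$BODY"

# Against a live SDDC Manager you would send it like this:
# curl -sk -X POST "https://$SDDC_FQDN/v1/tokens" \
#   -H 'Content-Type: application/json' -d "$BODY"
# Subsequent calls use: -H "Authorization: Bearer <accessToken>"
```

The same bearer-token pattern covers every use listed above, from credential management to creating workload domains.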
Expand to show code.

PowerCLI -- a command-line tool. Most of the power is in the API calls, not in PowerCLI.

lookup_passwords on the command line.

API Developer Center
- API Explorer tab
- Credentials (expand): GET /v1/credentials
- Try It Out section: resourceType = ESXI, accountType = USER, then Execute
- Can also download the response body (a JSON file) to the Downloads folder

NSX Overview
- Adds Layer 3 routing capability -- previously we would have to exit the environment to a router and come back
- VM information is NOT on the physical network; it all takes place in software
- Can encapsulate traffic through physical routers to another VM
- Routing information flows from orange-network VM1 to blue VM1 (the blue routing line in the diagram)
- Route first, then a switching operation over the network -- exponentially better performance than the physical network
- Can span a Layer 2 network from Site 1 to Site 2 (purple) for failover and load balancing -- the subnet exists in both places
- NSX improves security!
Management Plane
Control Plane -- controllers that hold forwarding information (FIB)
Data Plane

3 nodes and a VIP; 1 node is primary.

Tier-0 (North/South) -- routing between VCF and the physical network (NSX Edge): top-of-rack routers, etc.
Tier-1 (East/West) -- routing between segments

An NSX overlay segment is a virtual Layer 2 network within VMware NSX that uses a software-defined overlay network to connect virtual machines across different physical hosts -- essentially a logical network on top of the existing physical infrastructure. Traffic between VMs on different hosts is encapsulated and tunneled between the hosts rather than relying on traditional VLANs on the physical switch, which allows greater flexibility in network design and scalability across multiple data centers.

vDefend

About vDefend: As you deploy your private cloud infrastructure with VMware Cloud Foundation, you need a comprehensive way to protect your applications. The vDefend license add-on offers a multilayer defense-in-depth solution to protect organizations against advanced threats.
- vDefend Gateway Firewall -- perimeter firewall protection, L4/L7 firewalls, N/S traffic
- vDefend Distributed Firewall -- micro-segmentation
- Advanced threat prevention -- IDS/IPS (intrusion detection/prevention system), network sandboxing (malware prevention), network traffic analysis (NTA), network detection and response (NDR), security intelligence

Management and Control Planes (single NSX cluster)
- Management -- REST API and web-based UI for user configuration
- Control -- computes and distributes the network run-time state
- VCF deploys NSX management clusters in the management domain
- The management domain and VI workload domains have separate management and control planes

NSX management
cluster -- 3 NSX Manager nodes for HA and redundancy
- Each NSX Manager appliance has built-in policy, manager, and controller roles
- The management plane includes the policy and manager roles
- The central control plane (CCP) includes the controller role
- Desired state is replicated in a distributed persistent database

Appliance sizes
- Extra Small -- Cloud Service Manager only
- Small -- lab or proof of concept
- Medium -- deployments of up to 128 hosts; default size for the management domain
- Large -- up to 1,024 hosts; default size for a VI workload domain

Form factors -- **[NSX 4.2 adds Extra Large (can be selected during bring-up)]**
- Small -- 4 CPUs, 16 GB memory, 300 GB disk
- Medium -- 6 CPUs, 24 GB memory, 300 GB disk
- Large -- 12 CPUs, 48 GB memory, 300 GB disk
- Extra Large -- 24 CPUs, 96 GB memory, 300 GB disk

NSX Manager is a STANDALONE appliance: management plane, control plane, policies.

NSX uses groups to collate objects for defining security policies (VMs, tags, IPs, MACs, segment ports, AD users, other groups).

Deployment
- All manager appliances are on the same VLAN or subnet
- 1 manager node is elected leader; the cluster virtual IP is attached to the leader
- Traffic is NOT load balanced across managers while using the VIP
- The cluster VIP is used for API/GUI access to NSX Manager
- If the leader fails, the 2 remaining managers elect a new leader
- Always 3 nodes

Segment (logical switch)
- Layer 2 networking
- Port-level or segment-level profiles
- VNI (similar to a VLAN ID); VNIs scale beyond the limits of VLAN IDs
- Cannot configure NSX on a standalone ESXi host -- no VDS capability
- VDS MTU must be 1600 or greater
- A VDS attaches to 1 single overlay transport zone and multiple VLAN transport zones

NSX Edge -- routing to external networks
- Acts as a host gateway (NAT)
- Terminates overlay network tunnels
- Separate routing tables for management
and overlay traffic
- Site-to-site VPN
- Purpose: computational power to deliver network and routing services (DATA PLANE)
- Appliances with pools of capacity
- Provides HA (active/active and active/standby models)
- Deployed in DMZs and multi-tenant clouds (virtual boundaries)
- VCF automates NSX Edge deployment
- VCF supports bare-metal NSX Edge nodes where high-performance, low-latency routing is required (telco)

NSX Edge Cluster
- Scale-out HA
- Must be joined as a transport node to be used

Workload Domain (Networking)

VI workload domain
- Adds a 3-node NSX Manager cluster, deployed when you create the first VI workload domain
- The cluster can be shared by other VI workload domains or dedicated to a single VI workload domain
- NSX Manager adds the VI workload domain vCenter as a compute manager
- NSX Manager creates VLAN and overlay transport zones to which all host transport nodes are added
- NSX Manager creates an uplink profile and applies it to all host transport nodes

Management domain -- dedicated NSX Manager and vCenter. A workload domain always has a dedicated vCenter.

Transport Zones and Uplink Profiles
- For each NSX Manager cluster instance, VLAN and overlay transport zones are created where transport nodes are added, defined by uplink profiles

Traffic Separation
- Default: 2 physical NICs, with an uplink associated to each NIC servicing different traffic classes
- With 40 Gb bandwidth, traffic separation is usually not needed
- Traffic that can be separated: management, storage, vMotion, overlay

Deploying
- Workload domains are deployed separately; the management domain uses Cloud Builder
- One of 3 VDS profiles is selected to segregate traffic types:
1. Default unified fabric
   - 1 VDS, NSX prepared
   - Unified traffic management -- vMotion, storage, NSX Edge, overlay networks
   - Recommended default
   - Requires 2 or 4 physical NICs
2. Storage traffic separation
   - VDS, NSX prepared, for management, vMotion, NSX Edge, and overlay
   - **Secondary VDS separates the storage network (dedicated) -- 2 NICs dedicated to storage**
   - 4 physical NICs
3. NSX traffic separation
   - VDS for management, vMotion, and storage networks
   - **Secondary VDS, NSX prepared, to separate the NSX Edge and NSX overlay networks -- 2 NICs dedicated to** networking
   - 4 physical NICs

Can use 6 NICs, but do this through the SDDC Manager API -- you specify this in the JSON file.

VI Configuration
- Switch configuration shows how many adapters can be used
- **[Do this in HARDWARE ORDER]** -- **connect and cable correctly IN ORDER**

**Rule of thumb:** one NSX Manager per like workload domain.
Ex: an NSX Manager dedicated to Tanzu; an NSX Manager dedicated to VDI.

**[New Feature]** Importing vSphere clusters into VCF as a VI workload domain

Scenario 1 (no VCF, but need VCF)
1. Identify the vSphere cluster to SDDC Manager
2. Download and deploy SDDC Manager as an appliance -- NOT built by Cloud Builder
3. Copy the import scripts to SDDC Manager
4. Use SSH
5. Run the import script (convert option)
6.
Pull it into the existing management domain

Scenario 2: SDDC Manager already exists

Migrating VLAN-backed workloads to NSX overlay
- Can migrate workloads from vSphere VLAN networks to NSX overlay networks
- The VLAN-to-overlay migration tool provides a UI-based step-by-step workflow with validation before and after each step to ensure a smooth transition

Dual DPU support in 5.2
- Active-standby mode, or maximum-performance mode (doubles network offload)

Workload domain -- deployed by SDDC Manager
- Transport zones are created to which host transport nodes are added, defined by uplink profiles
- VCF replaces segregating traffic types in a vSwitch or VDS by using a consolidated VDS (NSX), with each traffic type separated by VLANs and Geneve tunnel endpoints (TEPs)

Use case -- security
- Can map management, storage, and VM traffic to a logical switch connected to a dedicated physical NIC
- Management VMkernel adapters can be on one pair of physical NICs while other traffic (vSphere vMotion, vSAN, VM, and so on) is on another pair
- Can isolate VM traffic so one pair of physical NICs handles all management
VMkernel adapters, vMotion, and vSAN, and a 2nd pair handles overlay and VM traffic

Use case -- bandwidth limitations
- Can separate VM traffic onto a port group supported by separate physical adapters
- Guarantees bandwidth, especially when you cannot use 25 GbE adapters

Deployment
- Management domain -- Cloud Builder and the deployment parameter workbook or bring-up JSON file
- VI domain -- deployed using SDDC Manager

Deploying the Avi Load Balancer from SDDC Manager
- VCF 5.2 integrates with the Avi Load Balancer; SDDC Manager can deploy the Avi Load Balancer Controller cluster into workload domains
- Avi requires an add-on license
- Deployment of the Avi LB Controller and deployment of the service engines (SEs) is performed from the Avi admin console

NSX Edge
Automates the following:
- Deploys NSX Edge VMs
- Creates uplink profiles
- Creates uplink segments
- When using BGP, enables, configures, and verifies tier-0 BGP (Border Gateway Protocol)
- Creates and configures the initial tier-0 and tier-1 gateways
- Expands or shrinks an existing NSX Edge cluster

Deploy Workload Domain > Add Edge Cluster -- at the time of deployment, or later as above.

Prerequisites (validated): the management
network and gateway MUST be reachable from the NSX hosts and Edge instances. CANNOT deploy an Edge cluster on a vSphere cluster that is already stretched.

Wizard inputs: MTU, tier-0 and tier-1 router names, profile type (default, or Bidirectional Forwarding Detection (BFD) for BGP peering), and passwords for edge root, edge admin, and edge audit (each confirmed). BFD with BGP enhances the reliability and responsiveness of your network -- faster detection of and response to link failures.

Use Cases for NSX Edge Deployment Options
- Kubernetes
  - Used for north/south routing for a vSphere with Tanzu cluster
  - Deploys a large form-factor appliance and configures active-active HA ONLY
  - Supports the IaaS control plane
- Application Virtual Networks (NSX segments can be stretched -- Aria and Access)
  - Aria Suite
- Custom
  - Requires different appliance sizes or HA options
  - Tier-0 routing type is static or BGP
  - Form factor -- Small (2 GHz vCPU, 4 GB memory)

ASN default: 65005. OSPF is not supported with the Application or Federal options and is NOT integrated into SDDC Manager. To use OSPF: deploy using static routing, deploy the NSX Edge nodes, then configure OSPF. Otherwise, BGP or static routes only.

Steps: general info, cluster settings, edge node (IPs), summary, validation.

Can add multiple edge appliances:
- For rack failure resiliency
- When tier-0 HA is active-standby and you require > 2 edge nodes
- When tier-0 HA is active-active and you require > 8 edge nodes
- Edges in the deployment serve AVNs on the management domain

Resizing is a function of NSX: expand or shrink. Removing an NSX Edge cluster uses the shrink option; a minimum of 2 NSX Edge nodes is needed for the edge cluster.

Prescriptive deployment creates 2-tier routing: north/south (tier-0) and east/west (tier-1).

Custom deployment uses the APIs with JSON -- custom-edge-deployment.json sets the name, password, form factor, MTU, IP addresses, and node names. Must have 2 nodes to define a cluster.

Edge Cases
1. Tier-0 active-standby and no tier-1
   - Directly connected to tier-0
2. Tier-0 active-active and tier-1 distributed routing
3.
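A custom edge spec like the custom-edge-deployment.json mentioned above might look roughly like this. The field names are illustrative placeholders chosen to match the settings the notes list (name, password, form factor, MTU, IP addresses, names), not the exact VCF API schema -- check the API reference for the real spec.

```json
{
  "edgeClusterName": "ec-custom-01",
  "edgeRootPassword": "********************",
  "edgeAdminPassword": "********************",
  "edgeFormFactor": "SMALL",
  "mtu": 9000,
  "tier0RoutingType": "STATIC",
  "edgeNodeSpecs": [
    { "edgeNodeName": "edge-01", "managementIP": "10.0.0.51/24" },
    { "edgeNodeName": "edge-02", "managementIP": "10.0.0.52/24" }
  ]
}
```

Note the two node entries: as stated above, a cluster cannot be defined with fewer than 2 edge nodes.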
Tier-0 active-active and tier-1 active-standby in a separate cluster

Configuring NSX Edge
- Management IP on the management VLAN
- 1 edge tunnel endpoint (TEP) IP address for each edge node on the edge TEP VLAN
- 1 uplink IP address for each edge node on a separate uplink VLAN

NSX Edge Routing Considerations

Spine-and-leaf design
- Layer 3 traffic between the leaf switches and the spine is routed
- Equal-cost multipath routing (ECMP) is used to load balance traffic across Layer 3; BGP is used to exchange routes and peer with the NSX Edge nodes
- Other protocols require the network admin to configure them with the NSX Edge nodes
- The Layer 3/Layer 2 boundary is at the leaf switches
- Other designs are possible, such as multi-chassis link aggregation (MLAG)
- No active Spanning Tree Protocol (STP) instances
- Hierarchy is supported but not recommended

Gateways
- Tier-1: VM gateway; tier-0: physical gateway
- BGP (Border Gateway Protocol) or static routes
- **OSPF is not supported in SDDC Manager, but it is still supported in NSX**
- ASN (autonomous system number) -- iBGP routes within the same AS number (65000 to 65000, for example); eBGP routes between different AS numbers (65000 to 65001, for example); BGP is the protocol

NSX Edge Placement
When placing NSX Edge nodes in workload domain racks, all N/S traffic of the data center is broadcast to every rack. Place the edge nodes on hosts that are physically located in the edge and management
racks to limit N/S traffic to those racks.

Consolidated Design -- 1 domain
- Top-of-rack (ToR) switches for N/S traffic
- N/S traffic takes a maximum of 2 hops to reach an external router
- E/W traffic is confined to the rack
- vSphere HA can keep the rack highly available

Single rack
- Simplifies on- and off-ramp
- Workloads have consistent latency for N/S traffic
- Network pools can match
- Single point of failure: host redundancy but NOT rack redundancy

Multi-rack -- **RECOMMENDED (medium or larger data center)**
- VLANs stretched across racks using VXLAN: management, vSAN, vMotion, host overlay
- Spread resources across the racks
- Host overlay TEPs are configured using DHCP

Lab -- Deploying an Edge Cluster Using SDDC Manager
- TEP -- NSX section; management -- vSphere; uplinks tie to the upstream router
- BGP -- edge VMs tie to the physical routers (Border Gateway Protocol)
- SDDC Manager > Inventory > Workload Domains > management workload domain > Actions menu > Add Edge Cluster
- Prerequisites list: Select All
- Edge cluster use case: Kubernetes, Application Virtual Networks, or Custom
- Cluster type: L2 Uniform (all hosts in the cluster have identical management, tier-0 uplink, and edge and host TEP networks); L2 Non-uniform and L3
- eBGP shows an external AS number different from the one below (E/W); iBGP uses the same autonomous system number (ASN)
- After the first edge node is added, add the 2nd node (a minimum of 2 is required!)
- After adding the 2nd node: validation BEFORE it executes!
- Once validated, click Finish and deploy the 2 nodes. Takes about an HOUR!
Click the Task link -- expand it -- watch the subtasks as they run

Lab #2 -- Create a Custom NSX Edge Cluster Using the SDDC Manager API

Application Virtual Networking (AVN) (Aria!)
For the management domain -- Aria components and identity management (Workspace ONE Access)
1 Tier-0 gateway
Peers with the physical network
Configured with equal-cost multipath (ECMP) routing
1 Tier-1 gateway for services
2 NSX segments for VMware Aria Suite solution deployment
1 NSX Edge cluster

Deployment Details
Overlay-backed NSX segments -- Layer 2 traffic is carried by a tunnel between VMs on different hosts; this type requires a direct connection to the Tier-1 gateway
VLAN-backed NSX segments -- Layer 2 traffic is carried across the VLAN network between VMs on different hosts

\*Use case -- Deploy Aria Suite Lifecycle
Use SDDC Manager to download the Aria Suite Lifecycle install package from the depot
Use SDDC Manager to deploy
\*You must use resolvable FQDNs
\*SDDC Manager validates inputs and reports errors

\*Benefits
Simplified data mobility
Software-defined network topology for apps
Improved security and mobility of mgmt apps
Easily deployed through the SDDC Manager UI or API

Steps -- SDDC Manager -- Inventory -- Workload Domains

\*Support and Restrictions
NSX Edge cluster is hosted on the default vSphere cluster in the mgmt
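The AVN segment constraints called out in the lab below -- overlay-backed segments and an MTU of 8900 -- can be sketched as a small check. The dataclass layout and names here are assumptions made for illustration; they are not the SDDC Manager API schema.

```python
# Illustrative sketch: model the two AVN segments (Region-A and X-Region)
# and check the constraints from the notes: use OVERLAY backing to get
# the AVN advantages, and MTU 8900.

from dataclasses import dataclass

@dataclass
class AvnSegment:
    name: str
    backing: str   # "OVERLAY" or "VLAN"
    mtu: int

def validate_avn_pair(region_a: AvnSegment, x_region: AvnSegment) -> bool:
    """Both segments should be overlay-backed with the MTU from the lab."""
    for seg in (region_a, x_region):
        if seg.backing != "OVERLAY":
            raise ValueError(f"{seg.name}: use overlay-backed segments")
        if seg.mtu != 8900:
            raise ValueError(f"{seg.name}: expected MTU 8900, got {seg.mtu}")
    return True

ok = validate_avn_pair(AvnSegment("region-a", "OVERLAY", 8900),
                       AvnSegment("x-region", "OVERLAY", 8900))
print(ok)  # prints True
```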
domain
1 pair of AVNs is created
NSX Edge clusters CANNOT be extended across multiple vSphere clusters
CANNOT modify or delete AVNs through VCF

Lab --
VLAN-backed or overlay-backed -- use OVERLAY to get the advantages
MTU 8900
Region-A (local -- remote collectors, Aria Operations for Logs, cloud proxies) and X-Region (across all sites -- Workspace ONE Access, Aria Operations, Aria Automation)

Workload Domains
Pool of vSphere hosts -- compute, networking, storage
Default capabilities

Management (1 for every VCF instance)
SDDC Manager, NSX, vCenter
Deployed during the bring-up process
Uses vSAN storage and a dedicated NSX Manager cluster

VI (similar to vSphere) Workload Domains (up to 14 VI supported)
First is supported by a new NSX Manager cluster
Subsequent VI domains can share the NSX Manager cluster or use a dedicated NSX Manager cluster
Tanzu -- usually dedicated
VDI -- NSX Manager cluster running for the mgmt domain
1 SDDC Manager for the entire VCF instance
1 vCenter for each workload domain
Ex: 14 vCenters running in the mgmt domain
Ex: 5 VDI workload domains share an NSX Manager

VI Workload Domain
ESXi hosts are dedicated to a cluster
vCenter instance for each VI workload domain is deployed in the mgmt domain
NSX Manager cluster deployed in the mgmt domain to support NSX networking in the VI workload domain
**Principal storage -- vSAN**, NFS v3/4.1, VMFS on FC, or vVols
Supplemental storage -- NFS v3/4.1, VMFS on FC, or vVols

Architecture -- Consolidated
1 mgmt domain
Workload VMs run in the mgmt domain
Testing, POC, small environments
Shares a resource pool
Single rack, less complex

Standard
Mgmt domain -- dedicated to mgmt
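The sizing limits above -- exactly one management domain per VCF instance, up to 14 VI workload domains, and a supported principal storage type per VI domain -- can be sketched as a planning check. The data layout is invented for this example and is not an SDDC Manager API payload.

```python
# Illustrative sketch: validate a planned VCF instance against the
# limits in the notes. Storage type labels are shorthand for the
# supported principal storage options (vSAN, NFS v3/4.1, VMFS on FC,
# vVols), not official API constants.

SUPPORTED_PRINCIPAL_STORAGE = {"VSAN", "NFS", "VMFS_FC", "VVOL"}

def validate_instance(domains):
    """domains: list of (domain_type, principal_storage) tuples."""
    mgmt = [d for d in domains if d[0] == "MANAGEMENT"]
    vi = [d for d in domains if d[0] == "VI"]
    if len(mgmt) != 1:
        raise ValueError("each VCF instance needs exactly 1 management domain")
    if len(vi) > 14:
        raise ValueError("up to 14 VI workload domains are supported")
    for _, storage in vi:
        if storage not in SUPPORTED_PRINCIPAL_STORAGE:
            raise ValueError(f"unsupported principal storage: {storage}")
    return True

print(validate_instance([("MANAGEMENT", "VSAN"),
                         ("VI", "VSAN"),
                         ("VI", "NFS")]))  # prints True
```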
components
VI workload domains run customer workloads
Spans multiple racks and features more complex networking
Can move from consolidated to standard

Multiple Clusters -- provide:
Licensing compliance
Storage options
Sizing
Availability
Segmentation
Scalability
Targeted lifecycle mgmt

Same Clusters
Same settings (resources, etc.)

Summary --
Don't confuse workload domains and clusters
Initial segmentation is the workload domain
Sub-segmentation is the cluster within the workload domain -- but it has overarching guidance

Shared and Isolated Single Sign-On (SSO) Instances
VCF supports Enhanced Linked Mode (ELM) -- allows user access to all products without reauthentication
All VI workload domain vCenter Servers are connected using ELM
The SSO replication ring is maintained by SDDC Manager
Users are authenticated through vCenter SSO
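Since each workload domain (management included) gets its own vCenter and all of them are joined through ELM, listing the domains also tells you how many vCenter Server instances are in the SSO ring. The sketch below summarizes a response shaped roughly like the SDDC Manager `GET /v1/domains` listing; the endpoint exists in the SDDC Manager public API, but this sample payload and its field names are simplified assumptions, not a guaranteed schema.

```python
# Illustrative sketch: count vCenter instances from a simplified
# domain listing. One vCenter per workload domain, management included.

SAMPLE_RESPONSE = {
    "elements": [
        {"name": "mgmt01", "type": "MANAGEMENT"},  # hypothetical names
        {"name": "vi01", "type": "VI"},
        {"name": "vi02", "type": "VI"},
    ]
}

def count_vcenters(response):
    """Each workload domain has exactly one vCenter Server instance."""
    return len(response["elements"])

print(count_vcenters(SAMPLE_RESPONSE))  # prints 3
```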
