NSX-T Data Center 3.2 Installation Guide PDF
Summary
This document is an installation guide for VMware NSX-T Data Center 3.2. It covers workflows for vSphere, KVM, and bare metal. The guide provides system requirements, installation procedures, and troubleshooting for NSX Manager, Edge, and Transport Nodes.
Full Transcript
NSX-T Data Center Installation Guide Modified on 9 SEPT 2024 VMware NSX-T Data Center 3.2 NSX-T Data Center Installation Guide You can find the most up-to-date technical documentation on the VMware by Broadcom website at: https://docs.vmware.com/ VMware by Broadcom 3401 Hillview Ave. Palo Alto, CA 94304 www.vmware.com © Copyright 2017-2024 Broadcom. All Rights Reserved. The term “Broadcom” refers to Broadcom Inc. and/or its subsidiaries. For more information, go to https://www.broadcom.com. All trademarks, trade names, service marks, and logos referenced herein belong to their respective companies. VMware by Broadcom 2 Contents NSX-T Data Center Installation Guide 9 1 Overview of NSX-T Data Center 10 Key Concepts 11 NSX Manager 14 Configure the User Interface Settings 17 2 NSX-T Data Center Installation Workflows 19 NSX-T Data Center Workflow for vSphere 19 NSX-T Data Center Installation Workflow for KVM 20 NSX-T Data Center Configuration Workflow for Bare Metal Server 20 3 Preparing for Installation 22 System Requirements 22 NSX Manager VM and Host Transport Node System Requirements 22 NSX Edge VM System Requirements 26 NSX Edge Bare Metal Requirements 28 Bare Metal Server System Requirements 32 Bare Metal Linux Container Requirements 33 Ports and Protocols 33 4 NSX Manager Installation 34 Modifying the Default Admin Password Expiration 38 5 NSX Manager Cluster Requirements 40 Cluster Requirements for an Individual Site 41 Cluster Requirements for Multiple Sites 42 6 Installing and Configuring NSX-T using vCenter Server Plugin 44 Install NSX Manager from vSphere Client 44 Configure NSX-T for Virtual Networking from vSphere Client 48 Configuring NSX-T for Security from vSphere Client 53 Prepare Clusters for NSX-T Security 54 Add vCenter Server and NSX Manager to Distributed Firewall Exclusion List 55 Create Groups 56 Define and Publish Communication Strategies for Groups 58 7 Installing NSX Manager Cluster on vSphere 61 VMware by Broadcom 3 NSX-T Data Center Installation Guide Install NSX Manager and Available Appliances 61 Install NSX Manager on ESXi Using the Command-Line OVF Tool 65 Configure an Appliance to Display the GRUB Menu at Boot Time 72 Log In to the Newly Created NSX Manager 73 Add a Compute Manager 73 Deploy NSX Manager Nodes to Form a Cluster from the UI 77 Configure a Virtual IP Address for a Cluster 85 Configuring an External Load Balancer 87 Disable Snapshots on an NSX Appliance 90 8 Installing NSX Manager Cluster on KVM 91 Set Up KVM 91 Manage Your Guest VMs in the KVM CLI 94 Deploy an NSX Manager Cluster on KVM 95 Install NSX Manager on KVM 95 Form an NSX Manager Cluster Using the CLI 101 9 Transport Zones and Profiles 104 Create Transport Zones 104 Create an IP Pool for Tunnel Endpoint IP Addresses 106 Enhanced Data Path 108 Configuring Profiles 111 Create an Uplink Profile 111 Configuring Network I/O Control Profiles 115 Add an NSX Edge Cluster Profile 124 Add an NSX Edge Bridge Profile 125 VMkernel Migration to an N-VDS Switch 126 VMkernel Migration Errors 132 10 Host Transport Nodes 136 Manual Installation of NSX-T Data Center Kernel Modules 136 Manually Install NSX-T Data Center Kernel Modules on ESXi Hypervisors 136 Manually Install NSX-T Data Center Software Packages on Ubuntu KVM Hypervisors 140 Manually Install NSX-T Data Center Software Packages on RHEL and CentOS Linux KVM Hypervisors 141 Manually Install NSX-T Data Center Software Packages on SUSE KVM Hypervisors 142 Install Third-Party Packages on a KVM Host 143 Verify Open vSwitch Version on RHEL or CentOS 
Linux KVM Hosts 145 Verify Open vSwitch Version on SUSE KVM Hosts 146 Verify Open vSwitch Version on Ubuntu KVM Hosts 147 Preparing Physical Servers as NSX-T Data Center Transport Nodes 148 Install Third-Party Packages on a Physical Server 149 VMware by Broadcom 4 NSX-T Data Center Installation Guide Configure a Physical Server as a Transport Node from GUI 151 Ansible Server Configuration for Physical Server 158 Create Application Interface for Physical Server Workloads 159 Secure Workloads on Windows Server 2016/2019 Bare Metal Servers 159 Prepare Standalone Hosts as Transport Nodes 161 Preparing ESXi and KVM Hosts as Transport Nodes 169 Transport Node Profiles 169 Sub-TNPs and Sub-clusters 170 Prepare a vSphere Distributed Switch for NSX-T 172 Add a Transport Node Profile 173 Prepare ESXi Cluster Hosts as Transport Nodes 178 Change Sub-cluster 185 Detach Cluster TNP 185 Migrate VMkernels and Physical NICs to a vSphere Distributed Switch 186 Configure an ESXi Host Transport Node with Link Aggregation 188 Prepare Clusters for Networking and Security Using Quick Start Wizard 190 Prepare Clusters for Networking and Security 190 Install Distributed Security for vSphere Distributed Switch 192 Deploy a Fully Collapsed vSphere Cluster NSX-T on Hosts Running N-VDS Switches 193 Multiple NSX Managers Managing a Single vCenter Server 203 Troubleshoot Multi-NSX Issues 206 Cannot Enable Multiple NSX-T on Single vCenter Server 206 Override NSX-T Ownership Constraints 207 Preparing Standalone Host Cluster Results in Failure 209 Traffic Performance Issues Related to NSX Edge VMs in Multiple NSX-T Data Center Environment 209 Managing Transport Nodes 211 Switch Visualization 211 NSX Maintenance Mode 211 Migrate ESXi VMkernel and Physical Adapters 212 Health Check VLAN ID Ranges and MTU Settings 213 View Bidirectional Forwarding Detection Status 217 Verify the Transport Node Status 218 11 Installing NSX Edge 221 NSX Edge Installation Requirements 221 NSX Edge Networking Setup 224 NSX Edge Installation Methods 231 Create an NSX Edge Transport Node 232 Create an NSX Edge Cluster 238 Install an NSX Edge on ESXi Using the vSphere GUI 239 Install NSX Edge on ESXi Using the Command-Line OVF Tool 243 VMware by Broadcom 5 NSX-T Data Center Installation Guide Install NSX Edge via ISO File as a Virtual Appliance 248 Install NSX Edge on Bare Metal 251 Prepare a PXE Server for NSX Edge 253 Install NSX Edge Automatically via ISO File 257 Install NSX Edge Interactively via ISO File 261 Intel QAT Support for IPSec VPN Bulk Cryptography 263 Join NSX Edge with the Management Plane 264 Configure an NSX Edge as a Transport Node 266 Remove NSX Edge Nodes from an Edge Cluster 268 12 vSphere Lifecycle Manager with NSX-T 271 Prepare an NSX-T Cluster with vSphere Lifecycle Manager 272 Enable vSphere Lifecycle Manager on an NSX-T Cluster 274 NSX-T with vSphere Lifecycle Manager Scenarios 277 vSphere Lifecycle Manager Failed to Prepare a Host for NSX-T Networking 278 vSphere Lifecycle Manager Failed to Prepare NSX-T Cluster 279 Delete a NSX-T Depot on vCenter Server 280 13 Host Profile integration with NSX-T 281 Auto Deploy Stateless Cluster 281 High-Level Tasks to Auto Deploy Stateless Cluster 281 Prerequisites and Supported Versions 282 Create a Custom Image Profile for Stateless Hosts 283 Associate the Custom Image with the Reference and Target Hosts 284 Set Up Network Configuration on the Reference Host 285 Configure the Reference Host as a Transport Node in NSX-T 286 Extract and Verify the Host Profile 289 Verify the Host 
Profile Association with Stateless Cluster 291 Update Host Customization 291 Trigger Auto Deployment on Target Hosts 292 Troubleshoot Host Profile and Transport Node Profile 301 Stateful Servers 304 Supported NSX-T and ESXi versions 304 Prepare a Target Stateful Cluster 305 VMkernel Migration with Host Profile Applied 306 VMkernel Migration without Host Profile Applied 308 14 Getting Started with NSX Federation 309 NSX Federation Key Concepts 309 NSX Federation Requirements 310 Configuring the Global Manager and Local Managers 311 VMware by Broadcom 6 NSX-T Data Center Installation Guide Install the Active and Standby Global Manager 312 Make the Global Manager Active and Add Standby Global Manager 314 Add a Location 315 Remove a Location 321 15 Install NSX Advanced Load Balancer Appliance Cluster 323 Troubleshooting NSX Advanced Load Balancer Controller Issues 326 NSX Advanced Load Balancer does not register with NSX Manager 326 The Second NSX Advanced Load Balancer Controller Remains in Queued State 326 NSX Advanced Load Balancer Controller Password Change Caused Cluster Failure 327 Unable to Delete NSX Advanced Load Balancer Controller 327 NSX Advanced Load Balancer Cluster HA Status is Compromised 328 Credential Mismatch After Changing NSX Advanced Load Balancer Controller Password 328 Deployment of NSX Advanced Load Balancer Controller Failed 329 Cluster Unstable After Two Controllers Are Down 330 16 Getting Started with NSX Cloud 331 NSX Cloud Architecture and Components 332 Overview of Deploying NSX Cloud 333 Deploy NSX-T Data Center On-Prem Components 333 Install CSM 333 Join CSM with NSX Manager 334 Specify CSM IPs for Access by PCG 335 (Optional) Configure Proxy Servers 335 (Optional) Set Up vIDM for Cloud Service Manager 336 Connect your Public Cloud with On-prem NSX-T Data Center 337 Deploy NSX Cloud Components in Microsoft Azure using the NSX Cloud Marketplace Image 339 Deploy NSX Cloud Components in Microsoft Azure using Terraform scripts 341 Deploy NSX Cloud Components in Microsoft Azure without using Terraform scripts 346 Add your Public Cloud Account 350 Adding your Microsoft Azure Subscription 351 Adding your AWS Account 357 Managing Regions in CSM 361 NSX Public Cloud Gateway: Architecture and Modes of Deployment 362 Deploy PCG or Link to a PCG 366 Deploy PCG in a VNet 366 Deploy PCG in a VPC 369 Link to a Transit VPC or VNet 372 Auto-Configurations after PCG Deployment or Linking 373 Auto-created NSX-T Data Center Logical Entities 373 VMware by Broadcom 7 NSX-T Data Center Installation Guide Auto-created Public Cloud Configurations 377 Integrate Horizon Cloud Service with NSX Cloud 379 Auto-Created Entities After Horizon Cloud Integration 382 (Optional) Install NSX Tools on your Workload VMs 385 Un-deploy NSX Cloud 385 Undeploy or Unlink PCG 386 17 Uninstalling NSX-T Data Center from a Host Transport Node 389 Verify Host Network Mappings for Uninstall 390 Migrate Out VMkernels and Physical NICs from a vSphere Distributed Switch 391 Uninstall NSX-T Data Center from a vSphere Cluster 394 Uninstall NSX-T Data Center from a Managed Host in a vSphere Cluster 397 Uninstall NSX-T Data Center from a Standalone ESXi Host 401 Uninstall NSX-T Data Center from a Physical Host 403 Triggering Uninstallation from the vSphere Web Client 405 Uninstall NSX-T Data Center from a Physical Server 407 Uninstall NSX-T from a vSphere Lifecycle Manager cluster through NSX Manager 408 18 Troubleshooting Installation Issues 410 Installation Fails Due to Insufficient Space in Bootbank on ESXi Host 
410 Host Fails to Install as Transport Node 411 NSX Agent Times Out Communicating with NSX Manager 411 Troubleshooting Installation 412 Node Status of KVM Transport Node is Degraded 416 VMware by Broadcom 8 NSX-T Data Center Installation Guide The NSX-T Data Center Installation Guide describes how to install the VMware NSX-T Data Center™ product. The information includes step-by-step configuration instructions and suggested best practices. Intended Audience This information is intended for anyone who wants to install or use NSX-T Data Center. This information is written for experienced system administrators who are familiar with virtual machine technology and network virtualization concepts. Technical Publications Glossary VMware Technical Publications provides a glossary of terms that might be unfamiliar to you. For definitions of terms as they are used in VMware technical documentation, go to https:// www.vmware.com/topics/glossary. Related Documentation You can find the VMware NSX® Intelligence™ documentation at https://docs.vmware.com/en/ VMware-NSX-Intelligence/index.html. The NSX Intelligence 1.0 content was initially included and released with the NSX-T Data Center 2.5 documentation set. VMware by Broadcom 9 Overview of NSX-T Data Center 1 In the same way that server virtualization programmatically creates and manages virtual machines, NSX-T Data Center network virtualization programmatically creates and manages virtual networks. With network virtualization, the functional equivalent of a network hypervisor reproduces the complete set of Layer 2 through Layer 7 networking services (for example, switching, routing, access control, firewalling, QoS) in software. As a result, these services can be programmatically assembled in any arbitrary combination to produce unique, isolated virtual networks in a matter of seconds. NSX-T Data Center works by implementing three separate but integrated planes: management, control, and data. These planes are implemented as a set of processes, modules, and agents residing on two types of nodes: NSX Manager and transport nodes. n Every node hosts a management plane agent. n NSX Manager nodes host API services and the management plane cluster daemons. n NSX Controller nodes host the central control plane cluster daemons. n Transport nodes host local control plane daemons and forwarding engines. NSX Manager supports a cluster with three node, which merges policy manager, management, and central control services on a cluster of nodes. NSX Manager clustering provides high availability of the user interface and API. The convergence of management and control plane nodes reduces the number of virtual appliances that must be deployed and managed by the NSX-T Data Center administrator. The NSX Manager appliance is available in three different sizes for different deployment scenarios: n A small appliance for lab or proof-of-concept deployments. n A medium appliance for deployments up to 64 hosts. n A large appliance for customers who deploy to a large-scale environment. See NSX Manager VM and Host Transport Node System Requirements and Configuration maximums tool. Read the following topics next: VMware by Broadcom 10 NSX-T Data Center Installation Guide n Key Concepts n NSX Manager Key Concepts The common NSX-T Data Center concepts that are used in the documentation and user interface. Compute Manager A compute manager is an application that manages resources such as hosts and VMs. One example is vCenter Server. 
Control Plane Computes runtime state based on configuration from the management plane. Control plane disseminates topology information reported by the data plane elements, and pushes stateless configuration to forwarding engines. Data Plane Performs stateless forwarding or transformation of packets based on tables populated by the control plane. Data plane reports topology information to the control plane and maintains packet level statistics. External Network A physical network or VLAN not managed by NSX-T Data Center. You can link your logical network or overlay network to an external network through a Tier-0 Gateway, Tier-1 Gateway or L2 bridge. Logical Port Egress Outbound network traffic leaving the VM or logical network is called egress because traffic is leaving virtual network and entering the data center. Logical Port Ingress Inbound network traffic entering the VM is called ingress traffic. Gateway NSX-T Data Center routing entity that provide connectivity between different L2 networks. Configuring a gateway through NSX Manager instantiates a gateway on each hypervisor. Gateway Port Logical network port to which you can attach a logical switch port or an uplink port to a physical network. Segment Port VMware by Broadcom 11 NSX-T Data Center Installation Guide Logical switch attachment point to establish a connection to a virtual machine network interface, container, physical appliances or a gateway interface. The segment port reports applied switching profile, port state, and link status. Management Plane Provides single API entry point to the system, persists user configuration, handles user queries, and performs operational tasks on all of the management, control, and data plane nodes in the system. Management plane is also responsible for querying, modifying, and persisting user configuration. NSX Edge Cluster Is a collection of NSX Edge node appliances that have the same settings and provide high availability if one of the NSX Edge node fails. NSX Edge Node Edge nodes are service appliances with pools of capacity, dedicated to running network and security services that cannot be distributed to the hypervisors. NSX Managed Virtual Distributed Switch or KVM Open vSwitch The NSX managed virtual distributed switch (N-VDS, previously known as hostswitch)or OVS is used for shared NSX Edge and compute cluster. N-VDS is required for overlay traffic configuration. An N-VDS has two modes: standard and enhanced datapath. An enhanced datapath N- VDS has the performance capabilities to support NFV (Network Functions Virtualization) workloads. vSphere Distributed Switch (VDS) Starting in vSphere 7.0, NSX-T Data Center supports VDS switches. You can create segments on VDS switches. NSX Manager Node that hosts the API services, the management plane, the control plane and the agent services. NSX Manager is an appliance included in the NSX-T Data Center installation package. You can deploy the appliance in the role of NSX Manager or nsx-cloud-service- manager. Currently, the appliance only supports one role at a time. NSX Manager Cluster A cluster of NSX Managers that can provide high availability. Open vSwitch (OVS) VMware by Broadcom 12 NSX-T Data Center Installation Guide Open source software switch that acts as a virtual switch within XenServer, Xen, KVM, and other Linux-based hypervisors. Opaque Network An opaque network is a network created and managed by a separate entity outside of vSphere. 
For example, logical networks that are created and managed by N-VDS switch running on NSX-T Data Center appear in vCenter Server as opaque networks of the type nsx.LogicalSwitch. You can choose an opaque network as the backing for a VM network adapter. To manage an opaque network, use the management tools associated with the opaque network, such as NSX Manager or the NSX-T Data Center API management tools. Overlay Logical Network Logical network implemented using GENEVE encapsulation protocol as mentioned in https:// www.rfc-editor.org/rfc/rfc8926.txt. The topology seen by VMs is decoupled from that of the physical network. Physical Interface (pNIC) Network interface on a physical server that a hypervisor is installed on. Segment Previously known as logical switch. It is an entity that provides virtual Layer 2 switching for VM interfaces and Gateway interfaces. A segment gives tenant network administrators the logical equivalent of a physical Layer 2 switch, allowing them to connect a set of VMs to a common broadcast domain. A segment is a logical entity independent of the physical hypervisor infrastructure and spans many hypervisors, connecting VMs regardless of their physical location. In a multi-tenant cloud, many segments might exist side-by-side on the same hypervisor hardware, with each Layer 2 segment isolated from the others. Segments can be connected using gateways, which can provide connectivity to the external physical network. Tier-0 Gateway A Tier-0 Gateway provides north-south connectivity and connects to the physical routers. It can be configured as an active-active or active-standby cluster. The Tier-0 Gateway runs BGP and peers with physical routers. Tier-1 Gateway A Tier-1 Gateway connects to one Tier-0 Gateway for northbound connectivity to the subnetworks attached to it. It connects to one or more overlay networks for southbound connectivity to its subnetworks. A Tier-1 Gateway can be configured as an active-standby cluster. Transport Zone VMware by Broadcom 13 NSX-T Data Center Installation Guide Collection of transport nodes that defines the maximum span for logical switches. A transport zone represents a set of similarly provisioned hypervisors and the logical switches that connect VMs on those hypervisors. It also has been registered with the NSX-T Data Center management plane and has NSX-T Data Center modules installed. For a hypervisor host or NSX Edge to be part of the NSX-T Data Center overlay, it must be added to the NSX-T Data Center transport zone. Transport Node A fabric node is prepared as a transport node so that it becomes capable of participating in an NSX-T Data Center overlay or NSX-T Data Center VLAN networking. For a KVM host, you can preconfigure the N-VDS or you can have NSX Manager perform the configuration. For an ESXi host, you can configure the host on a N-VDS or a VDS switch. Uplink Profile Defines policies for the links from transport nodes to NSX-T Data Center segments or from NSX Edge nodes to top-of-rack switches. The settings defined by uplink profiles might include teaming policies, the transport VLAN ID, and the MTU setting. The transport VLAN set in the uplink profile tags overlay traffic only and the VLAN ID is used by the TEP endpoint. VM Interface (vNIC) Network interface on a virtual machine that provides connectivity between the virtual guest operating system and the standard vSwitch or NSX-T segment. The vNIC can be attached to a logical port. You can identify a vNIC based on its Unique ID (UUID). 
Tunnel Endpoint Each transport node has a Tunnel Endpoint (TEP) responsible for encapsulating the overlay VM traffic inside a VLAN header and routing the packet to a destination TEP for further processing. TEPs are the source and destination IP addresses used in the external IP header to identify the ESXi hosts that originate and end the NSX-T encapsulation of overlay frames. Traffic can be routed to another TEP on a different host or the NSX Edge gateway to access the physical network. TEPs create a GENEVE tunnel between the source and destination endpoints. NSX Manager The NSX Manager provides a web-based user interface where you can manage your NSX-T Data Center environment. It also hosts the API server that processes API calls. The NSX Manager interface provides two modes for configuring resources: n Policy mode n Manager mode VMware by Broadcom 14 NSX-T Data Center Installation Guide Accessing Policy Mode and Manager Mode If present, you can use the Policy and Manager buttons to switch between the Policy and Manager modes. Switching modes controls which menus items are available to you. n By default, if your environment contains only objects created through Policy mode, your user interface is in Policy mode and you do not see the Policy and Manager buttons. n By default, if your environment contains any objects created through Manager mode, you see the Policy and Manager buttons in the top-right corner. These defaults can be changed by modifying the user interface settings. See Configure the User Interface Settings for more information. The same System tab is used in the Policy and Manager interfaces. If you modify Edge nodes, Edge clusters, or transport zones, it can take up to 5 minutes for those changes to be visible in Policy mode. You can synchronize immediately using POST /policy/api/v1/infra/ sites/default/enforcement-points/default?action=reload. When to Use Policy Mode or Manager Mode Be consistent about which mode you use. There are a few reasons to use one mode over the other. n If you are deploying a new NSX-T Data Center environment, using Policy mode to create and manage your environment is the best choice in most situations. n Some features are not available in Policy mode. If you need these features, use Manager mode for all configurations. n If you plan to use NSX Federation, use Policy mode to create all objects. Global Manager supports only Policy mode. n If you are upgrading from an earlier version of NSX-T Data Center and your configurations were created using the Advanced Networking & Security tab, use Manager mode. The menu items and configurations that were found under the Advanced Networking & Security tab are available in NSX-T Data Center 3.0 in Manager mode. Important If you decide to use Policy mode, use it to create all objects. Do not use Manager mode to create objects. Similarly, if you need to use Manager mode, use it to create all objects. Do not use Policy mode to create objects. VMware by Broadcom 15 NSX-T Data Center Installation Guide Table 1-1. When to Use Policy Mode or Manager Mode Policy Mode Manager Mode Most new deployments should use Policy mode. Deployments which were created using the advanced NSX Federation supports only Policy mode. If you want to interface, for example, upgrades from versions before use NSX Federation, or might use it in future, use Policy Policy mode was available. mode. NSX Cloud deployments Deployments which integrate with other plugins. 
For example, NSX Container Plug-in, Openstack, and other cloud management platforms. Networking features available in Policy mode only: n DNS Services and DNS Zones n VPN n Forwarding policies for NSX Cloud Security features available in Policy mode only: Security features available in Manager mode only: n Endpoint Protection n Bridge Firewall n Network Introspection (East-West Service Insertion) n Context Profiles n L7 applications n FQDN n New Distributed Firewall and Gateway Firewall Layout n Categories n Auto service rules n Drafts Names for Objects Created in Policy Mode and Manager Mode The objects you create have different names depending on which interface was used to create them. Table 1-2. Object Names Objects Created Using Policy Mode Objects Created Using Manager Mode Segment Logical switch Tier-1 gateway Tier-1 logical router Tier-0 gateway Tier-0 logical router Group NSGroup, IP Sets, MAC Sets Security Policy Firewall section Gateway firewall Edge firewall VMware by Broadcom 16 NSX-T Data Center Installation Guide Policy and Manager APIs The NSX Manager provides two APIs: Policy and Manager. n The Policy API contains URIs that begin with /policy/api. n The Manager API contains URIs that begin with /api. For more information about using the Policy API, see the NSX Policy API: Getting Started Guide. Security NSX Manager has the following security features: n NSX Manager has a built-in user account called admin, which has access rights to all resources, but does not have rights to the operating system to install software. NSX-T upgrade files are the only files allowed for installation. You can change the username and role permissions for admin, but you cannot delete admin. n NSX Manager supports session timeout and automatic user logout. NSX Manager does not support session lock. Initiating a session lock can be a function of the workstation operating system being used to access NSX Manager. Upon session termination or user logout, users are redirected to the login page. n Authentication mechanisms implemented on NSX-T follow security best practices and are resistant to replay attacks. The secure practices are deployed systematically. For example, sessions IDs and tokens on NSX Manager for each session are unique and expire after the user logs out or after a period of inactivity. Also, every session has a time record and the session communications are encrypted to prevent session hijacking. You can view and change the session timeout value with the following CLI commands: n The command get service http displays a list of values including session timeout. n To change the session timeout value, run the following commands: set service http session-timeout restart service ui-service Configure the User Interface Settings You can configure how your users view the NSX-T Data Center user interface. These settings are valid for NSX Manager, and in NSX Federation for Global Managers and Local Managers. Prior to NSX-T Data Center 3.2, users could access two possible modes in the user interface: Policy and Manager. You can control which mode is default, and whether users can switch between them using the user interface mode buttons. The Policy mode is the default. New users of release 4.0 will not see the Manager button. If present, you can use the Policy and Manager buttons to switch between the Policy and Manager modes. Switching modes controls which menus items are available to you. 
VMware by Broadcom 17 NSX-T Data Center Installation Guide n By default, if your environment contains only objects created through Policy mode, your user interface is in Policy mode and you do not see the Policy and Manager buttons. n By default, if your environment contains any objects created through Manager mode, you see the Policy and Manager buttons in the top-right corner. You can use the User Interface settings to modify these defaults. See NSX Manager for more information about the modes. Procedure 1 With admin privileges, log in to NSX Manager. 2 Navigate to System > General System Settings. 3 Select User Interface and click Edit in the User Interface Mode Toggle pane. 4 Modify the user interface settings: Toggle Visibility and Default Mode. Toggle Visibility Description Visible to All Users If Manager mode objects are present, the mode buttons are visible to all users. Visible to Users with the Enterprise If Manager mode objects are present, the mode buttons are visible to users Admin Role with the Enterprise Admin role. Hidden from All Users Even if Manager mode objects are present, the mode buttons are hidden from all users. This displays Policy mode UI only, even if Manager mode objects are present. Default Mode Can be set to Policy or Manager, if available. VMware by Broadcom 18 NSX-T Data Center Installation Workflows 2 You can install NSX-T Data Center on vSphere or KVM hosts. You can also configure a bare metal server to use NSX-T Data Center. To install or configure any of the hypervisors or bare metal, follow the recommended tasks in the workflows. Read the following topics next: n NSX-T Data Center Workflow for vSphere n NSX-T Data Center Installation Workflow for KVM n NSX-T Data Center Configuration Workflow for Bare Metal Server NSX-T Data Center Workflow for vSphere Use the checklist to track your installation progress on a vSphere host. Follow the recommended order of procedures. 1 Review the NSX Manager installation requirements. See Chapter 4 NSX Manager Installation. 2 Configure the necessary ports and protocols. See Ports and Protocols. 3 Install the NSX Manager. See Install NSX Manager and Available Appliances. 4 Log in to the newly created NSX Manager. See Log In to the Newly Created NSX Manager. 5 Configure a compute manager. See Add a Compute Manager. 6 Deploy additional NSX Manager nodes to form a cluster. See Deploy NSX Manager Nodes to Form a Cluster from the UI. 7 Review the NSX Edge installation requirements. See NSX Edge Installation Requirements. 8 Install NSX Edges. See Install an NSX Edge on ESXi Using the vSphere GUI. 9 Create an NSX Edge cluster. See Create an NSX Edge Cluster. 10 Create transport zones. See Create Transport Zones. 11 Create host transport nodes. See Prepare ESXi Cluster Hosts as Transport Nodes. VMware by Broadcom 19 NSX-T Data Center Installation Guide NSX-T Data Center Installation Workflow for KVM Use the checklist to track your installation progress on a KVM host. Follow the recommended order of procedures. 1 Prepare your KVM environment. See Set Up KVM. 2 Review the NSX Manager installation requirements. See Chapter 4 NSX Manager Installation. 3 Configure the necessary ports and protocols. See Ports and Protocols. 4 Install the NSX Manager. See Install NSX Manager on KVM. 5 Log in to the newly created NSX Manager. See Log In to the Newly Created NSX Manager. 6 Configure third-party packages on the KVM host. See Install Third-Party Packages on a KVM Host. 7 Deploy additional NSX Manager nodes to form a cluster. 
See Form an NSX Manager Cluster Using the CLI. 8 Review the NSX Edge installation requirements. See NSX Edge Installation Requirements. 9 Install NSX Edges. See Install NSX Edge on Bare Metal. 10 Create an NSX Edge cluster. See Create an NSX Edge Cluster. 11 Create transport zones. See Create Transport Zones. 12 Create host transport nodes. See Prepare Standalone Hosts as Transport Nodes. A virtual switch is created on each host. The management plane sends the host certificates to the control plane, and the management plane pushes control plane information to the hosts. Each host connects to the control plane over SSL presenting its certificate. The control plane validates the certificate against the host certificate provided by the management plane. The controllers accept the connection upon successful validation. Post-Installation When the hosts are transport nodes, you can create transport zones, logical switches, logical routers, and other network components through the NSX Manager UI or API at any time. When NSX Edges and hosts join the management plane, the NSX-T Data Center logical entities and configuration state are pushed to the NSX Edges and hosts automatically. For more information, see the NSX-T Data Center Administration Guide. NSX-T Data Center Configuration Workflow for Bare Metal Server Use the checklist to track your progress when configuring bare metal server to use NSX-T Data Center. VMware by Broadcom 20 NSX-T Data Center Installation Guide Follow the recommended order of procedures. 1 Review the bare metal requirements. See Bare Metal Server System Requirements. 2 Configure the necessary ports and protocols. See Ports and Protocols. 3 Install the NSX Manager. See Install NSX Manager and Available Appliances. 4 Configure third-party packages on the bare metal server. See Install Third-Party Packages on a Physical Server. 5 Create host transport nodes. A virtual switch is created on each host. The management plane sends the host certificates to the control plane, and the management plane pushes control plane information to the hosts. Each host connects to the control plane over SSL presenting its certificate. The control plane validates the certificate against the host certificate provided by the management plane. The controllers accept the connection upon successful validation. 6 Create an application interface for bare metal server workload. See Create Application Interface for Physical Server Workloads. VMware by Broadcom 21 Preparing for Installation 3 Before installing NSX-T Data Center, make sure your environment is prepared. Read the following topics next: n System Requirements n Ports and Protocols System Requirements Before you install NSX-T Data Center, your environment must meet specific hardware and resource requirements. Before you configure Gateway Firewall features, make sure that the NSX Edge form factor supports the features. See Supported Gateway Firewall Features on NSX Edge topic in the NSX-T Data Center Administration Guide. NSX Manager VM and Host Transport Node System Requirements Before you install an NSX Manager or other NSX-T Data Center appliances, make sure that your environment meets the supported requirements. 
Supported Hypervisors for Host Transport Nodes Hypervisor Version CPU Cores Memory vSphere Supported vSphere version 4 16 GB CentOS Linux KVM 7.9, 8.4 4 16 GB Red Hat Enterprise Linux 7.9, 8.2, 8.4 4 16 GB (RHEL) KVM SUSE Linux Enterprise Server 12 SP4 4 16 GB KVM Ubuntu KVM 18.04.2 LTS, 20.04 LTS 4 16 GB Note To avoid memory errors on a hypervisor host running vSphere ESXi version 7.x.x, ensure that at least 16 GB is available before deploying NSX Manager. VMware by Broadcom 22 NSX-T Data Center Installation Guide Table 3-1. Supported Hosts for NSX Managers Support Description Hypervisor ESXi For supported hosts, see the VMware Product Interoperability Matrices. KVM RHEL 7.7 and Ubuntu 18.04.2 LTS For ESXi hosts, NSX-T Data Center supports the Host Profiles and Auto Deploy features on vSphere 6.7 EP6 or higher. See Understanding vSphere Auto Deploy in the VMware ESXi Installation and Setup documentation for more information. Caution On RHEL and Ubuntu, the yum update command might update the kernel version, which must not be greater than 4.19.x, and break the compatibility with NSX-T Data Center. Disable the automatic kernel update when you run yum update. Also, after running yum install, verify that NSX-T Data Center supports the kernel version. Hypervisor Host Network Requirements The NIC card used must be compatible with the ESXi version that is running NSX-T Data Center. For supported NIC card, see the VMware Compatibility Guide. To find the supported driver for the NIC card vendor, see Firmware Versions of Supported Drivers. Tip To quickly identify compatible cards in the Compatibility Guide, apply the following criteria: n Under I/O Device Type, select Network. n Optionally, to use supported GENEVE encapsulation, under Features, select the GENEVE options. n Optionally, to use Enhanced Data Path, select N-VDS Enhanced Data Path. Enhanced Data Path NIC Drivers Download the supported NIC drivers from the My VMware page. NIC Card NIC Driver Intel 82599 ixgben 1.1.0.26-1OEM.670.0.0.7535516 Intel(R) Ethernet Controller X710 for 10GbE SFP+ i40en 1.2.0.0-1OEM.670.0.0.8169922 Intel(R) Ethernet Controller XL710 for 40GbE QSFP+ Cisco VIC 1400 series nenic_ens NSX Manager VM Resource Requirements Thin virtual disk size is 3.8 GB and thick virtual disk size is 300 GB. VMware by Broadcom 23 NSX-T Data Center Installation Guide Reser Memo vation Appliance Size ry vCPU Shares s Disk Space VM Hardware Version NSX Manager Extra Small 8 GB 2 81920, 8192 300 GB 10 or later (NSX-T Data Center 3.0 Norma MB onwards) l NSX Manager Small VM 16 GB 4 16384 16384 300 GB 10 or later ( NSX-T Data Center 2.5.1 0, MB onwards) Norma l NSX Manager Medium VM 24 GB 6 24576 24576 300 GB 10 or later 0, MB Norma l NSX Manager Large VM 48 GB 12 49152 49152 300 GB 10 or later 0, MB Norma l Note NSX Manager provides multiple roles which previously required separate appliances. This includes the policy role, the management plane role and the central control plane role. The central control plane role was previously provide by the NSX Controller appliance. n You can use the Extra Small VM resource size only for the Cloud Service Manager appliance (CSM). Deploy CSM in the Extra Small VM size or higher, as required. See Overview of Deploying NSX Cloud for more information. n The NSX Manager Small VM appliance size is suitable for lab and proof-of-concept deployments, and must not be used in production. 
n The NSX Manager Medium VM appliance size is the autoselected appliance size during deployment and is suitable for typical production environments. An NSX-T management cluster formed using this appliance size can support up to 128 hypervisors. Starting with NSX-T 3.1, a single NSX Manager cluster is supported. n The NSX Manager Large VM appliance size is suitable for large-scale deployments. An NSX-T management cluster formed using this appliance size can support more than 128 hypervisors. For maximum scale using the NSX Manager Large VM appliance size, go to the VMware Configuration Maximums tool at https://configmax.vmware.com/guest and select NSX-T Data Center from the product list. Language Support NSX Manager has been localized into multiple languages: English, German, French, Japanese, Simplified Chinese, Korean, Traditional Chinese, and Spanish. NSX Manager Browser Support The following browsers are recommended for working with NSX Manager. VMware by Broadcom 24 NSX-T Data Center Installation Guide Browser Windows 10 Mac OS X 10.13, 10.14 Ubuntu 18.04 Google Chrome 80 Yes Yes Yes Mozilla Firefox 72 Yes Yes Yes Microsoft Edge 80 Yes Apple Safari 13 Yes Note n Internet Explorer is not supported. n Supported Browser minimum resolution is 1280 x 800 px. n Language support: NSX Manager has been localized into multiple languages: English, German, French, Japanese, Simplified Chinese, Korean, Traditional Chinese, and Spanish. However, because NSX Manager localization utilizes the browser language settings, ensure that your settings match the desired language. There is no language preference setting within the NSX Manager interface itself. Network Latency Requirements The maximum network latency between NSX Managers in a NSX Manager cluster is 10ms. The maximum network latency between NSX Managers and Transport Nodes is 150ms. Storage Requirements n The disk access latency is under 10ms. n It is recommended that NSX Managers be placed on shared storage. n Storage must be highly available to avoid a storage outage causing all NSX Manager file systems to be placed into read-only mode upon event of a storage failure. Please consult documentation for your storage technology on how to best design a highly available storage solution. VMware by Broadcom 25 NSX-T Data Center Installation Guide NSX Edge VM System Requirements Before you install NSX Edge, make sure that your environment meets the supported requirements. Note The following conditions apply to the hosts for the NSX Edge nodes: n NSX Edge nodes are supported only on ESXi-based hosts with Intel-based and AMD-based chipsets. Otherwise, vSphere EVC mode may prevent NSX Edge nodes from starting, showing an error message in the console. n If vSphere EVC mode is enabled for the host for the NSX Edge VM, the CPU must be Haswell or later generation. n Only VMXNET3 vNIC is supported for the NSX Edge VM. NSX Cloud Note If using NSX Cloud, note that the NSX Public Cloud Gateway(PCG) is deployed in a single default size for each supported public cloud. See NSX Public Cloud Gateway: Architecture and Modes of Deployment for details. NSX Edge VM Resource Requirements Disk VM Hardware Appliance Size Memory vCPU Space Version Notes NSX Edge Small 4 GB 2 200 GB 11 or later Proof-of-concept deployments (vSphere 6.0 or only. later) Note L7 rules for firewall, load balancing and so on are not realized on a Tier-1 gateway if you deploy a small sized NSX Edge VM. 
NSX Edge Medium 8 GB 4 200 GB 11 or later Suggested for L2 through L4 (vSphere 6.0 or features and when the total later) throughput requirement is less than 2 Gbps: n NAT n Routing n L4 firewall n L4 load balancer VMware by Broadcom 26 NSX-T Data Center Installation Guide Disk VM Hardware Appliance Size Memory vCPU Space Version Notes NSX Edge Large 32 GB 8 200 GB 11 or later Suggested for L2 through (vSphere 6.0 or L4 features and when the later) total throughput requirement is between 2 ~ 10 Gbps. n NAT n Routing n L4 Firewall n L4 Load Balancer n L7 Load Balancer (for example, when SSL offload is required) n TLS Inspection See Scaling Load Balancer Resources in the NSX-T Data Center Administration Guide. For more information about what the different load balance sizes and NSX Edge form factors can support, see https:// configmax.vmware.com. NSX Edge Extra Large 64 GB 16 200 GB 11 or later Suggested for features that (vSphere 6.0 or have a higher total throughput later) requirement than the NSX Edge Large form factor: n L7 load balancer n VPN n TLS Inspection n L7 Access Profile (URL filtering) n IDS/IPS n Malware prevention (Adavanced Threat Protection) See Scaling Load Balancer Resources in the NSX-T Data Center Administration Guide. For more information about what the different load balance sizes and NSX Edge form factors can support, see https:// configmax.vmware.com. NSX Edge VM CPU Requirements For the DPDK support, the underlaying platform needs to meet the following requirements: n CPU must have AESNI capability. VMware by Broadcom 27 NSX-T Data Center Installation Guide n CPU must have 1 GB Huge Page support. Hardware Type CPU n Intel Xeon E7-xxxx (Westmere-EX and later CPU generation) n Intel Xeon 56xx (Westmere-EP) n Intel Xeon E5-xxxx (Sandy Bridge and later CPU generation) n Intel Xeon Platinum (all generations) n Intel Xeon Gold (all generations) n Intel Xeon Silver (all generations) n Intel Xeon Bronze (all generations) n AMD EPYC Series processors NSX Edge Bare Metal Requirements Before you configure the NSX Edge bare metal, make sure that your environment meets the supported requirements. NSX Edge Bare Metal Memory, CPU, and Disk Requirements Minimum Requirements Memory CPU Cores Disk Space 32 GB 8 200 GB Recommended Requirements Memory CPU Cores Disk Space 256 GB 24 200 GB NSX Edge Bare Metal DPDK CPU Requirements For the DPDK support, the underlaying platform needs to meet the following requirements: n CPU must have AES-NI capability. n CPU must have 1 GB Huge Page support. n NSX Edge Bare Metal supports up to 64 cores for the entire system. This means that a server with a single socket, the CPU can have up to 64 cores. On a server with 2 sockets, each socket cannot have more than 32 cores. VMware by Broadcom 28 NSX-T Data Center Installation Guide Hardware Type CPU n Intel Xeon E7-xxxx (Westmere-EX and later CPU generation) n Intel Xeon 56xx (Westmere-EP) n Intel Xeon E5-xxxx (Sandy Bridge and later CPU generation) n Intel Xeon Platinum (all generations) n Intel Xeon Gold (all generations) n Intel Xeon Silver (all generations) n Intel Xeon Bronze (all generations) n AMD EPYC Series processors NSX Edge Bare Metal Hardware Requirements If you are running NSX Edge Bare Metal version 3.2 to any version prior to 3.2.3, verify that the Bare Metal NSX Edge hardware is listed in this URL https://certification.ubuntu.com/server/ models/?release=18.04%20LTS&category=Server. 
If you are running NSX Edge Bare Metal version 3.2.3, verify that the Bare Metal NSX Edge hardware is listed in this URL https://certification.ubuntu.com/server/models/? release=20.04%20LTS&category=Server. If the hardware is not listed, the storage, video adapter, or motherboard components might not work on the NSX Edge appliance. Note Starting with NSX-T 3.2, NSX Edge Bare Metal supports both UEFI and legacy BIOS modes. However, in NSX-T 3.1 and previous releases, NSX Edge Bare Metal only supports legacy BIOS mode. NSX Edge Bare Metal NIC Requirements NIC Type Description Vendor ID PCI Device ID Firmware Version Mellanox PCI_DEVICE_ID_MELL 15b3 0x1013 12.21.1000 and above ConnectX-4 EN ANOX_CONNECTX4 Mellanox PCI_DEVICE_ID_MELL 15b3 0x1015 14.21.1000 and above ConnectX-4 Lx EN ANOX_CONNECTX4LX Mellanox ConnectX-5 PCI_DEVICE_ID_MELL 15b3 0x1017 16.21.1000 and above ANOX_CONNECTX5 Mellanox ConnectX-5 PCI_DEVICE_ID_MELL 15b3 0x1019 16.21.1000 and above EX ANOX_CONNECTX5EX Intel X520/Intel IXGBE_DEV_ID_82599 8086 0x10F7 n/a 82599 _KX4 IXGBE_DEV_ID_82599 8086 0x1514 n/a _KX4_MEZZ IXGBE_DEV_ID_82599 8086 0x1517 n/a _KR VMware by Broadcom 29 NSX-T Data Center Installation Guide NIC Type Description Vendor ID PCI Device ID Firmware Version IXGBE_DEV_ID_82599 8086 0x10F8 n/a _COMBO_BACKPLANE IXGBE_SUBDEV_ID_82 8086 0x000C n/a 599_KX4_KR_MEZZ IXGBE_DEV_ID_82599 8086 0x10F9 n/a _CX4 IXGBE_DEV_ID_82599 8086 0x10FB n/a _SFP IXGBE_SUBDEV_ID_82 8086 0x11A9 n/a 599_SFP IXGBE_SUBDEV_ID_82 8086 0x1F72 n/a 599_RNDC IXGBE_SUBDEV_ID_82 8086 0x17D0 n/a 599_560FLR IXGBE_SUBDEV_ID_82 8086 0x0470 n/a 599_ECNA_DP IXGBE_DEV_ID_82599 8086 0x1507 n/a _SFP_EM IXGBE_DEV_ID_82599 8086 0x154D n/a _SFP_SF2 IXGBE_DEV_ID_82599 8086 0x154A n/a _SFP_SF_QP IXGBE_DEV_ID_82599 8086 0x1558 n/a _QSFP_SF_QP IXGBE_DEV_ID_82599 8086 0x1557 n/a EN_SFP IXGBE_DEV_ID_82599 8086 0x10FC n/a _XAUI_LOM IXGBE_DEV_ID_82599 8086 0x151C n/a _T3_LOM Intel X540 IXGBE_DEV_ID_X540T 8086 0x1528 n/a IXGBE_DEV_ID_X540T 8086 0x1560 n/a 1 Intel X550 IXGBE_DEV_ID_X550T 8086 0x1563 n/a IXGBE_DEV_ID_X550T 8086 0x15D1 n/a 1 Intel X710 I40E_DEV_ID_SFP_X71 8086 0x1572 6.80 and later versions 0 (8.x versions are not supported) I40E_DEV_ID_KX_C 8086 0x1581 6.80 and later versions (8.x versions are not supported) VMware by Broadcom 30 NSX-T Data Center Installation Guide NIC Type Description Vendor ID PCI Device ID Firmware Version I40E_DEV_ID_10G_BA 8086 0x1586 6.80 and later versions SE_T (8.x versions are not supported) I40E_DEV_ID_10G_BA 8086 0x1589 6.80 and later versions SE_T4 (8.x versions are not supported) Intel XL710 I40E_DEV_ID_KX_B 8086 0x1580 6.80 and later versions (8.x versions are not supported) I40E_DEV_ID_QSFP_A 8086 0x1583 6.80 and later versions (8.x versions are not supported) I40E_DEV_ID_QSFP_B 8086 0x1584 6.80 and later versions (8.x versions are not supported) I40E_DEV_ID_QSFP_C 8086 0x1585 6.80 and later versions (8.x versions are not supported) I40E_DEV_ID_20G_KR 8086 0x1587 6.80 and later versions 2 (8.x versions are not supported) I40E_DEV_ID_20G_KR 8086 0x1588 6.80 and later versions 2_A (8.x versions are not supported) Intel XXV710 I40E_DEV_ID_25G_B 8086 0x158A 6.80 and later versions (8.x versions are not supported) I40E_DEV_ID_25G_SF 8086 0x158B 6.80 and later versions P28 (8.x versions are not supported) Cisco VIC 1300 series Cisco UCS Virtual 1137 0x0043 n/a Interface Card 1300 Cisco VIC 1400 Cisco UCS Virtual 1137 0x0043 n/a series Interface Card 1400 Note For all the supported NICs listed above, verify that the media adapters 
and cables you use follow the vendor's supported media types. Any media adapter or cables not supported by the vendor can result in unpredictable behavior, including the inability to boot up due to an unrecognized media adapter. See the NIC vendor documentation for information about supported media adapters and cables.
Bare Metal Server System Requirements
Before you configure the bare metal server, make sure that your server meets the supported requirements.
Important The user performing the installation may require sudo command permissions for some of the procedures. See Install Third-Party Packages on a Physical Server.
Bare Metal Server Requirements
Operating System | Version | CPU Cores | Memory
CentOS Linux | 7.6 (kernel: 3.10.0-957), 7.7, 7.8, 7.9, 8.0, 8.3 | 4 | 16 GB
Red Hat Enterprise Linux (RHEL) | 7.6 (kernel: 3.10.0-957), 7.7, 7.8, 7.9, 8.0, 8.3 | 4 | 16 GB
Oracle Linux | 7.6 (kernel: 3.10.0-957), 7.7, 7.8, 7.9 | 4 | 16 GB
SUSE Linux Enterprise Server | 12 SP3, 12 SP4, 12 SP5 (starting in NSX-T Data Center 3.2.1) | 4 | 16 GB
Ubuntu | 16.04.2 LTS (kernel: 4.4.0-*), 18.04 | 4 | 16 GB
Windows Server | 2016 (minor version 14393.2248 and later), 2019 | 4 | 16 GB
- Ensure MTU is set to 1600 for jumbo frame support by NIC or OS drivers.
- Hosts running Ubuntu 18.04.2 LTS must be upgraded from 16.04 or freshly installed.
Physical NICs
For physical servers running Linux, there is no restriction on the physical NIC other than being supported by the operating system.
For physical servers running Windows on Segment-VLAN with MTU at 1500, there are also no restrictions on the physical NIC other than being supported by the operating system.
For physical servers running Windows on Segment-Overlay, or on Segment-VLAN with a large MTU (> 1500), validate that the associated driver supports jumbo packets. To verify whether jumbo packets are supported, run the following command:
$ (Get-NetAdapterAdvancedProperty -Name "<adapter-name>").DisplayName -Contains "Jumbo Packet"
where <adapter-name> must be replaced with the real adapter name of each physical NIC.
Table 3-2. Virtual NICs
NIC Type | Description | PCI BUS ID | Firmware Version
e1000e | Intel(R) 82574L Gigabit Network | 0000:1b:00 | 12.15.22.6 and later versions
vmxnet3 | vmxnet3 Ethernet Adapter | 0000:0b:00 | 1.9.2.0 and later versions
Bare Metal Linux Container Requirements
For bare metal Linux container requirements, see the NSX Container Plug-in for OpenShift - Installation and Administration Guide.
Ports and Protocols
Ports and protocols define the node-to-node communication paths in NSX-T Data Center; the paths are secured and authenticated, and a storage location for the credentials is used to establish mutual authentication.
Configure the ports and protocols required to be open on both the physical and the host hypervisor firewalls in NSX-T Data Center. Refer to https://ports.vmware.com/home/NSX-T-Data-Center for more details.
By default, all certificates are self-signed certificates. The northbound GUI and API certificates and private keys can be replaced by CA-signed certificates.
There are internal daemons that communicate over the loopback or UNIX domain sockets:
- KVM: MPA, OVS
- ESXi: nsx-cfgagent, ESX-DP (in the kernel)
Note To get access to NSX-T Data Center nodes, you must enable SSH on these nodes.
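As a reference for that note, the following is a minimal sketch of enabling SSH from the console of an NSX appliance node (NSX Manager or NSX Edge). The command names follow the NSX CLI, but verify them against the NSX CLI reference for your release; the prompt name nsx-manager-1 is only a placeholder, and SSH on ESXi or KVM transport node hosts is enabled through the hypervisor's own tools instead.

nsx-manager-1> get service ssh
nsx-manager-1> start service ssh
nsx-manager-1> set service ssh start-on-boot

The first command reports whether the SSH service is running, the second starts it for the current session, and the third keeps it enabled across reboots.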
VMware by Broadcom 33 NSX Manager Installation 4 NSX Manager provides a graphical user interface (GUI) and REST APIs for creating, configuring, and monitoring NSX-T Data Center components such as logical switches, logical routers, and firewalls. NSX Manager provides a system view and is the management component of NSX-T Data Center. For high availability, NSX-T Data Center supports a management cluster of three NSX Managers. For a production environment, deploying a management cluster is recommended. Starting with NSX-T Data Center 3.1, a single NSX Manager cluster deployment is supported. In a vSphere environment, the following functions are supported by NSX Manager: n vCenter Server can use the vMotion function to live migrate NSX Manager across hosts and clusters. n vCenter Server can use the Storage vMotion function to live migrate file system of an NSX Manager across hosts and clusters. n vCenter Server can use the Distributed Resource Scheduler function to rebalance NSX Manager across hosts and clusters. n vCenter Server can use the Anti-affinity function to manage NSX Manager across hosts and clusters. NSX Manager Deployment, Platform, and Installation Requirements The following table details the NSX Manager deployment, platform, and installation requirements Requirements Description Supported deployment methods n OVA/OVF n QCOW2 Supported platforms See NSX Manager VM and Host Transport Node System Requirements. On ESXi, it is recommended that the NSX Manager appliance be installed on shared storage. IP address An NSX Manager must have a static IP address. You can change the IP address after installation. Only IPv4 addresses are supported. VMware by Broadcom 34 NSX-T Data Center Installation Guide Requirements Description NSX-T Data Center appliance n At least 12 characters password n At least one lower-case letter n At least one upper-case letter n At least one digit n At least one special character n At least five different characters n Default password complexity rules are enforced by the following Linux PAM module arguments: n retry=3: The maximum number of times a new password can be entered, for this argument at the most 3 times, before returning with an error. n minlen=12: The minimum acceptable size for the new password. In addition to the number of characters in the new password, credit (of +1 in length) is given for each different kind of character (other, upper, lower and digit). n difok=0: The minimum number of bytes that must be different in the new password. Indicates similarity between the old and new password. With a value 0 assigned to difok, there is no requirement for any byte of the old and new password to be different. An exact match is allowed. n lcredit=1: The maximum credit for having lower case letters in the new password. If you have less than or 1 lower case letter, each letter will count +1 towards meeting the current minlen value. n ucredit=1: The maximum credit for having upper case letters in the new password. If you have less than or 1 upper case letter each letter will count +1 towards meeting the current minlen value. n dcredit=1: The maximum credit for having digits in the new password. If you have less than or 1 digit, each digit will count +1 towards meeting the current minlen value. n ocredit=1: The maximum credit for having other characters in the new password. If you have less than or 1 other characters, each character will count +1 towards meeting the current minlen value. n enforce_for_root: The password is set for the root user. 
Note For more details on Linux PAM module to check the password against dictionary words, refer to the man page. For example, avoid simple and systematic passwords such as VMware123! 123 or VMware12345. Passwords that meet complexity standards are not simple and systematic but are a combination of letters, alpahabets, special characters, and numbers, such as VMware123!45, VMware 1!2345 or VMware@1az23x. Hostname When installing NSX Manager, specify a hostname that does not contain invalid characters such as an underscore or special characters such as dot ".". If the hostname contains any invalid character or special characters, after deployment the hostname will be set to nsx-manager. For more information about hostname restrictions, see https://tools.ietf.org/html/ rfc952 and https://tools.ietf.org/html/rfc1123. VMware Tools The NSX Manager VM running on ESXi has VMTools installed. Do not remove or upgrade VMTools. VMware by Broadcom 35 NSX-T Data Center Installation Guide Requirements Description System n Verify that the system requirements are met. See System Requirements. n Verify that the required ports are open. See Ports and Protocols. n Verify that a datastore is configured and accessible on the ESXi host. n Verify that you have the IP address and gateway, DNS server IP addresses, domain search list, and the NTP Server IP or FQDN list for the NSX Manager or Cloud Service Manager to use. n If you do not already have one, create the target VM port group network. Place the NSX-T Data Center appliances on a management VM network. If you have multiple management networks, you can add static routes to the other networks from the NSX-T Data Center appliance. n Plan your NSX Manager IPv4 IP addressing scheme. OVF Privileges Verify that you have adequate privileges to deploy an OVF template on the ESXi host. A management tool that can deploy OVF templates, such as vCenter Server or the vSphere Client. The OVF deployment tool must support configuration options to allow for manual configuration. OVF tool version must be 4.0 or later. Client Plug-in The Client Integration Plug-in must be installed. Certificates If you plan to configure internal VIP on a NSX Manager cluster, you can apply a different certificate to each NSX Manager node of the cluster. See Configure a Virtual IP Address for a Cluster. If you plan to configure an external load balancer, ensure only a single certificate is applied to all NSX Manager cluster nodes. See Configuring an External Load Balancer. Note On an NSX Manager fresh install, reboot, or after an admin password change when prompted on first login, it might take several minutes for the NSX Manager to start. NSX Manager Installation Scenarios Important When you install NSX Manager from an OVA or OVF file, either from vSphere Client or the command line as a standalone host, OVA/OVF property values such as user names and passwords are not validated before the VM is powered on. However, the static IP address field is a mandatory field to install NSX Manager. When you install NSX Manager as a managed host in vCenter Server, OVA/OVF property values such as user names and passwords are validated before the VM is powered on. n If you specify a user name for any local user, the name must be unique. If you specify the same name, it is ignored and the default names (for example, admin and audit) are used. 
NSX Manager Installation Scenarios

Important When you install NSX Manager from an OVA or OVF file, either from the vSphere Client or from the command line on a standalone host, OVA/OVF property values such as user names and passwords are not validated before the VM is powered on. However, the static IP address field is mandatory to install NSX Manager. When you install NSX Manager as a managed host in vCenter Server, OVA/OVF property values such as user names and passwords are validated before the VM is powered on.

- If you specify a user name for any local user, the name must be unique. If you specify the same name, it is ignored and the default names (for example, admin and audit) are used.
- If the password for the root or admin user does not meet the complexity requirements, you must log in to NSX Manager through SSH or at the console as root with the password vmware and as admin with the password default. You are prompted to change the password.
- If the password for other local users (for example, audit) does not meet the complexity requirements, the user account is disabled. To enable the account, log in to NSX Manager through SSH or at the console as the admin user and run the command set user local_user_name to set the local user's password (the current password is an empty string). You can also reset passwords in the UI using System > User Management > Local Users.

Caution Changes made to NSX-T Data Center while logged in with the root user credentials might cause system failure and potentially impact your network. Make changes using the root user credentials only with the guidance of the VMware Support team.

Note The core services on the appliance do not start until a password with sufficient complexity is set.

After you deploy NSX Manager from an OVA file, you cannot change the VM's IP settings by powering off the VM and modifying the OVA settings from vCenter Server.

Configuring NSX Manager for Access by the DNS Server

By default, transport nodes access NSX Managers by their IP addresses. However, access can also be based on the DNS names of the NSX Managers. You enable FQDN usage by publishing the FQDNs of the NSX Managers.

Note Enabling FQDN usage (DNS) on NSX Managers is required for multisite Lite deployments. (It is optional for all other deployment types.) See Multisite Deployment of NSX Data Center in the NSX-T Data Center Administration Guide.

Publishing the FQDNs of the NSX Managers

After installing the NSX-T Data Center core components, to enable access through NAT using FQDNs, you must set up the forward and reverse lookup entries for the manager nodes on the DNS server.

Important It is highly recommended that you configure both the forward and reverse lookup entries for the NSX Managers' FQDNs with a short TTL, for example, 600 seconds.

In addition, you must also enable publishing of the NSX Manager FQDNs using the NSX-T Data Center API.

Example request:

PUT https://<nsx-mgr>/api/v1/configs/management
{
  "publish_fqdns": true,
  "_revision": 0
}

Example response:

{
  "publish_fqdns": true,
  "_revision": 1
}

See the NSX-T Data Center API Guide for details.
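For example, the request above might be issued with curl as shown in the following sketch. The manager hostname and credentials are placeholders, and the _revision value must match the current revision returned by a prior GET of the same URI.

# Read the current configuration and note its _revision
curl -k -u 'admin:<password>' https://<nsx-mgr>/api/v1/configs/management

# Publish the NSX Manager FQDNs
curl -k -u 'admin:<password>' -X PUT https://<nsx-mgr>/api/v1/configs/management \
  -H 'Content-Type: application/json' \
  -d '{"publish_fqdns": true, "_revision": 0}'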
Note After publishing the FQDNs, validate access by the transport nodes as described in the next section.

Validating Access via FQDN by Transport Nodes

After publishing the FQDNs of the NSX Managers, verify that the transport nodes are successfully accessing the NSX Managers. Using SSH, log in to a transport node, such as a hypervisor or Edge node, and run the get controllers CLI command.

Example response:

Controller IP   Port  SSL      Status     Is Physical Master  Session State  Controller FQDN
192.168.60.5    1235  enabled  connected  true                up             nsxmgr.corp.com

Read the following topics next:

- Modifying the Default Admin Password Expiration

Modifying the Default Admin Password Expiration

By default, the administrative password for the NSX Manager and NSX Edge appliances expires after 90 days. However, you can reset the expiration period after the initial installation and configuration.

If the password expires, you will be unable to log in and manage components. Additionally, any task or API call that requires the administrative password will fail. If your password expires, see Knowledge Base article 70691, NSX-T admin password expired.

Procedure

1 Use a secure program to connect to the NSX CLI console.

2 Reset the expiration period. You can set the expiration period to between 1 and 9999 days.

nsxcli> set user admin password-expiration <1-9999>

Note Alternatively, you can use API commands to set the admin password expiration period.

3 (Optional) You can disable password expiry so the password never expires.

nsxcli> clear user admin password-expiration
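As noted in step 2, the expiration period can also be set through the API. The following curl sketch assumes the node users API described in the NSX-T Data Center API Guide; the hostname and credentials are placeholders, the user ID 10000 commonly maps to the admin user but should be confirmed with the GET call first, and the exact request body fields should be verified against the API guide for your NSX-T version.

# List local node users and note the admin user's userid (commonly 10000)
curl -k -u 'admin:<password>' https://<nsx-mgr>/api/v1/node/users

# Set the admin password expiration to 9999 days
curl -k -u 'admin:<password>' -X PUT https://<nsx-mgr>/api/v1/node/users/10000 \
  -H 'Content-Type: application/json' \
  -d '{"password_change_frequency": 9999}'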
5 NSX Manager Cluster Requirements

The following subsections describe cluster requirements for NSX appliances and provide recommendations for specific site deployments.

Cluster Requirements

- In a production environment, the NSX Manager (Local Manager in an NSX Federation environment) or Global Manager cluster must have three members to avoid an outage of the management and control planes. Each cluster member should be placed on a unique hypervisor host, with three physical hypervisor hosts in total, so that a single physical hypervisor host failure does not impact the NSX control plane. It is recommended that you apply anti-affinity rules to ensure that all three cluster members run on different hosts.
  The normal production operating state is a three-node cluster of the NSX Manager (Local Manager in an NSX Federation environment) or Global Manager. However, you can add additional, temporary nodes to allow for IP address changes.
  Important The NSX Manager cluster requires a quorum to be operational, which means at least two out of three of its members must be up and running. If quorum is lost, the NSX management and control planes do not work. This results in no access to the NSX Manager UI and API for configuration updates through the management plane, and no ability to add or vMotion VMs on NSX segments. Dynamic routing on Tier-0 gateways remains operational.
- For lab and proof-of-concept deployments where there are no production workloads, you can run a single NSX Manager or Global Manager to save resources. NSX Manager or Global Manager nodes can be deployed on either ESXi or KVM. However, mixed deployments of managers on both ESXi and KVM are not supported.
- If you deploy a single NSX Manager or Global Manager in a production environment, you can also enable vSphere HA on the host cluster for the manager nodes to get automatic recovery of the manager VM in case of ESXi failure. For more information on vSphere HA, see the vSphere Availability guide in the vSphere Documentation Center.
- If one of the NSX Manager nodes goes down, the three-node NSX Manager cluster provides redundancy. However, if one of the NSX Manager nodes loses its disk, it goes into a read-only state. It might still continue to participate in the cluster if its network connectivity is up, which can cause the management plane and control plane to become unavailable. To resolve the issue, power off the impacted NSX Manager.

Recovery with vSphere HA

You can use vSphere HA (High Availability) with NSX-T Data Center to enable quick recovery if the host running an NSX Manager node fails. See Creating and Using vSphere HA Clusters in the vSphere product documentation.

Read the following topics next:

- Cluster Requirements for an Individual Site
- Cluster Requirements for Multiple Sites

Cluster Requirements for an Individual Site

Each NSX appliance cluster – Global Manager, Local Manager, or NSX Manager – must contain three VMs. Those three VMs can all be physically deployed in the same data center or in different data centers, as long as latency between the VMs in the cluster is below 10 ms.

Single Site Requirements and Recommendations

The following recommendations apply to single-site NSX-T Data Center deployments and cover cluster recommendations for Global Manager, Local Manager, and NSX Manager appliances:

- It is recommended that you place your NSX appliances on different hosts to avoid a single host failure impacting multiple managers (an anti-affinity rule sketch follows the scenario example below).
- Maximum latency between NSX appliances is 10 ms.
- You can place NSX appliances in different vSphere clusters or in a common vSphere cluster.
- It is recommended that you place NSX appliances in different management subnets or in a shared management subnet. When using vSphere HA, it is recommended to use a shared management subnet so that NSX appliances recovered by vSphere HA can preserve their IP addresses.
- It is recommended that you also place NSX appliances on shared storage. For vSphere HA, review the requirements for that solution.

You can also use vSphere HA with NSX-T to provide recovery of a lost NSX appliance when the host where the NSX appliance is running fails.

Scenario example:

- A vSphere cluster in which all three NSX Managers are deployed.
- The vSphere cluster consists of four or more hosts:
  - Host-01 with nsxmgr-01 deployed
  - Host-02 with nsxmgr-02 deployed
  - Host-03 with nsxmgr-03 deployed
  - Host-04 with no NSX Manager deployed
- vSphere HA is configured to recover any lost NSX Manager (for example, nsxmgr-01) from any host (for example, Host-01) to Host-04. Thus, upon the loss of any host where an NSX Manager is running, vSphere HA recovers the lost NSX Manager on Host-04.
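To illustrate the anti-affinity recommendation above, the following sketch shows how a VM anti-affinity rule for the three manager VMs might be created with the govc CLI. The data center, cluster, and VM names are hypothetical, and the exact flags should be verified against govc cluster.rule.create -h for your govc version; the same rule can also be created in the vSphere Client under the cluster's VM/Host Rules settings.

# Keep the three NSX Manager VMs on separate hosts (names are placeholders)
govc cluster.rule.create -cluster /DC1/host/Mgmt-Cluster \
  -name nsx-manager-anti-affinity -enable -anti-affinity \
  nsxmgr-01 nsxmgr-02 nsxmgr-03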
Cluster Requirements for Multiple Sites

The following recommendations apply to dual-site (Site A/Site B) and multiple-site (Site A/Site B/Site C) NSX-T Data Center deployments.

Dual Site Requirements and Recommendations

Note Starting from NSX-T Data Center v3.0.2 onwards, VMware Site Recovery Manager (SRM) is supported.

- Do not deploy NSX Managers in a dual-site scenario without vSphere HA or VMware SRM. In this scenario, one site requires the deployment of two NSX Managers, and the loss of that site will impact the operation of NSX-T Data Center.
- Deployment of NSX Managers in a dual-site scenario with vSphere HA or VMware SRM can be done with the following considerations:
  - A single stretched vSphere cluster contains all the hosts for the NSX Managers.
  - All three NSX Managers are deployed to a common management subnet/VLAN to allow IP address preservation upon recovery of a lost NSX Manager.
  - For latency between sites, see the storage product requirements.

Scenario example:

- A vSphere cluster in which all three NSX Managers are deployed.
- The vSphere cluster consists of six or more hosts, with three hosts in Site A and three hosts in Site B.
- The three NSX Managers are deployed to distinct hosts, with additional hosts for placement of recovered NSX Managers:
  Site A:
  - Host-01 with nsxmgr-01 deployed
  - Host-02 with nsxmgr-02 deployed
  - Host-03 with nsxmgr-03 deployed
  Site B:
  - Host-04 with no NSX Manager deployed
  - Host-05 with no NSX Manager deployed
  - Host-06 with no NSX Manager deployed
- vSphere HA or VMware SRM is configured to recover any lost NSX Manager (for example, nsxmgr-01) from any host (for example, Host-01) in Site A to one of the hosts in Site B. Thus, upon failure of Site A, vSphere HA or VMware SRM recovers all NSX Managers to hosts in Site B.

Important You must properly configure anti-affinity rules to prevent NSX Managers from being recovered to the same common host.

Multiple (Three or More) Site Requirements and Recommendations

In a scenario with three or more sites, you can deploy NSX Managers with or without vSphere HA or VMware SRM. If you deploy without vSphere HA or VMware SRM:

- It is recommended that you use separate management subnets or VLANs per site.
- Maximum latency between NSX Managers is 10 ms.

Scenario example (three sites):

- Three separate vSphere clusters, one per site.
- At least one host per site running an NSX Manager:
  - Host-01 with nsxmgr-01 deployed
  - Host-02 with nsxmgr-02 deployed
  - Host-03 with nsxmgr-03 deployed

Failure scenarios:

- Single site failure: The two remaining NSX Managers in the other sites continue to operate. NSX-T Data Center is in a degraded state but still operational. It is recommended that you manually deploy a third NSX Manager to replace the lost cluster member.
- Two site failure: Loss of quorum and therefore impact to NSX-T Data Center operations.

Recovery of NSX Managers may take as long as 20 minutes depending on environmental conditions such as CPU speed, disk performance, and other deployment factors.

6 Installing and Configuring NSX-T using vCenter Server Plugin

As a VI admin, you can install NSX Manager and NSX-T for virtual networking or security-only use cases by installing and configuring the NSX-T plugin in vCenter Server. There are two workflows - Virtual Networking and Security Only - allowed from the NSX page on the v