
Full Transcript


[]{#cvi.xhtml} ![image](images/cover.jpg) []{#tp.xhtml} Ultimate VMware NSX for Professionals {#tp.xhtml#s1.tp} ===================================== Leverage Virtualized Networking, Security, and Advanced Services of VMware NSX for Efficient Data Management and Network Excellence {#leverage-virtualized-networking-security-and-advanced-services-of-vmware-nsx-for-efficient-data-management-and-network-excellence.tp1} =================================================================================================================================== ![](images/line.jpg) Vinay Aggarwal {#vinay-aggarwal.aut} ============== [www.orangeava.com](http://www.orangeava.com) []{#cop.xhtml} Copyright © 2023 Orange Education Pvt Ltd, AVA™ All rights reserved. No part of this book may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, without the prior written permission of the publisher, except in the case of brief quotations embedded in critical articles or reviews. Every effort has been made in the preparation of this book to ensure the accuracy of the information presented. However, the information contained in this book is sold without warranty, either express or implied. Neither the author nor **Orange Education Pvt Ltd** or its dealers and distributors, will be held liable for any damages caused or alleged to have been caused directly or indirectly by this book. **Orange Education Pvt Ltd** has endeavored to provide trademark information about all of the companies and products mentioned in this book by the appropriate use of capital. However, **Orange Education Pvt Ltd** cannot guarantee the accuracy of this information. The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. **First published:** December 2023 **Published by:** Orange Education Pvt Ltd, AVA™ **Address:** 9, Daryaganj, Delhi, 110002 **ISBN:** 978-81-96782-62-7 [www.orangeava.com](http://www.orangeava.com) []{#ded.xhtml} Dedicated To {#dedicated-to.ded} ============ *My beloved Parents:* *Shri Mangal Sen Aggarwal* *Ritu Aggarwal* *and* *My wife Neha and My Son Bhavik* []{#ata.xhtml} About the Author {#about-the-author.chap2} ================ **Vinay Aggarwal,** VCIX -- NV, VCIX -- DCV, CCNP -- DC, CCS, has been working in IT with more than a decade of experience on different VMware technologies. Starting as a Windows Administrator at HCL Technologies, Noida, India, he ventured overseas to Birmingham, UK, partnering with TCS, before eventually returning to his roots in India as a Senior Consultant at VMware. Over the years, Vinay has gained knowledge and experience in Software Defined Datacenters (SDDC) and in recent years, has become more focused on Software Defined Networking (SDN) with VMware NSX portfolio. Vinay's dedication extends beyond his career, his contributions to the tech community have been subtle yet significant, with sessions at VMware Explore India and VMUG Delhi, India. You can keep up with Vinay\'s tech adventures by following him on X (formerly Twitter) at \@vmvtips or delving into his day-to-day experiences with VMware technologies on his blog site,. []{#fm.xhtml} About the Technical Reviewer {#about-the-technical-reviewer.chap2} ============================ **Sreejith** works as a Consulting Architect with VMware. 
He has more than 15 years of experience as an information technology specialist. Indian by birth, Sreejith currently resides in the United Arab Emirates. He gathers customer requirements and creates designs of VMware Cloud Provider-based solutions that span the company\'s product line in order to fit the functional and business demands of businesses of all sizes and industries. He finds great delight in assisting Cloud Providers in deploying hyper-scale cloud platforms throughout Cloud Provider\'s datacenters using a pay-as-you-go, monthly subscription model. In addition to having multiple professional and advanced certificates, he has received seven vExpert recognitions. He is also a VMTN champion and the author of VMware NSX Network Essentials, which was published in 2016. You can find his LinkedIn ID at. He would like to express his profound gratitude to the VMware community and all of its members for their contributions thus far. Additionally, he would like to thank his family and colleagues at VMware for their unwavering support throughout his career. []{#ack.xhtml} Acknowledgements {#acknowledgements.chap2} ================ There are a few people I want to thank for the continued and ongoing support they have given me during the writing of this book. First and foremost, I would like to thank my family for continuously encouraging me to write the book --- I could have never completed this book without their support. I am grateful to Mr. Sreejith C for his kind attention to details and his guidance during the development of this book, taking time out of his busy schedule. I am thankful to all our professional editors, especially Subha, Sonali, and Sreeja, for their patience and guidance at every step of the book process. A special thanks to Orange AVA team for giving me the opportunity and guidance to transform a rough word document into a comprehensive book. []{#pre.xhtml} Preface {#preface.chap2} ======= Welcome to \"Ultimate VMware NSX for Professionals,\" a step-by-step guide providing understanding and mastering VMware\'s Network Virtualization and Security platform, NSX. In today's ever-evolving IT landscape, virtualization has become a key to build scalable and agile infrastructures. VMware NSX, a network and security virtualization platform provides strong Network Virtualization capabilities in Software Defined Datacenter. This book is designed to provide you with a solid foundation in NSX, whether you are an IT professional looking to implement virtualized networks, a system administrator seeking to enhance your skills, or a network architect striving to design software-defined datacenter solutions. VMware NSX offers a dynamic approach to network and security management that enables organizations to efficiently utilize their network infrastructure while reducing costs and complexity. In the pages that follow, you will embark on a journey through the core concepts and practical applications of VMware NSX. This book has been divided into 14 chapters, with each chapter covering a topic that I believe is needed to fully understand a concept. Although this book could be read cover to cover, it is designed to be flexible and allow you to move between chapters or sections of chapters to cover topics you need. However, few chapters should be read sequentially, such as [Chapter 3](#c03.xhtml) to [Chapter 7](#c07.xhtml) to understand Routing and Switching or [Chapter 8](#c08.xhtml) to [Chapter 10](#c10.xhtml) to understand security features in NSX. 
The details are listed below for the topics covered in each chapter. **[Chapter 1](#c01.xhtml)** covers some key challenges with traditional networking and how Software Defined Networking can help overcome those challenges. It then introduces NSX, a VMware software defined networking solution, and covers in depth its architecture and the different key components that build the NSX environment. **[Chapter 2](#c02.xhtml)** focuses on deployment of the first component of NSX environment, that is, NSX Manager. It covers different methods to deploy NSX Manager in vSphere environment and create an NSX Management cluster. Then focus shifts on preparing Data Plane of NSX environment, covering architecture of Transport Nodes, different pre-requisites to prepare a Transport Node and finally promoting ESXi host as an NSX Transport Node. **[Chapter 3](#c03.xhtml)** is all about logical switching in NSX environment. This chapter deals with switching configuration in NSX environment including Geneve Encapsulation to enable Layer 2 communication on top of Layer 3 underlay networks. This chapter covers the creation and configuration of segments and segment profiles along with Layer 2 packet walks in NSX environment. **[Chapter 4](#c04.xhtml)** marks the beginning of the first step in logical routing. This chapter covers one of the most important requirements to enable Logical Routing in NSX environment which is Edge nodes and Edge Clusters. This chapter discusses different options, design and configuration available to deploy Edge nodes in NSX environment. **[Chapter 5](#c05.xhtml)** continues on NSX Logical Routing. This chapter delves into the world of NSX logical router and follows a packet to see how it travels within NSX environment. This chapter covers different routing topologies supported by NSX and how to configure them, along with dynamic routing protocols BGP and OSPF. Last but not least, it covers high-availability choices for routing services. **[Chapter 6](#c06.xhtml)** covers VRF lite and EVPN to provide end-to-end isolation for different tenants in network fabric. **[Chapter 7](#c07.xhtml)** covers logical bridging, a network service provided by NSX that can help bridge the gap between overlay networks and VLAN-backed networks to allow layer 2 communication between VMs and physical workloads or workloads running in NSX-V environment. **[Chapter 8](#c08.xhtml)** begins the journey of security features in NSX environment. This chapter covers Micro-Segmentation with the help of Distributed Firewall that helps protect East-West traffic inside a datacenter and protect workloads from lateral attacks. **[Chapter 9](#c09.xhtml)** builds on top of micro-segmentation and sheds light on the advanced security features provided by NSX. This chapter starts with Gateway Firewall to protect North-South traffic leaving the NSX environment and then focuses on the Intrusion Detection/Prevention System. This chapter covers the architecture and configuration of IDS/IPS along with Malware Prevention. **[Chapter 10](#c10.xhtml)** covers the NSX Application platform and NSX Intelligence that provides recommendations for NSX DFW and detects anomalies in the network using its suspicious detectors. Also, it covers NSX Network Detection and Response which create campaigns by taking all events from Malware Prevention, Intelligence, IDS and analyzing them, reducing the burden on security administrators. 
**[Chapter 11](#c11.xhtml)** covers different datacenter features, that is, NAT, DHCP, DNS, IPSec VPN, and Layer 2 VPN, and how to configure them in an NSX environment. Also, it talks about a new feature introduced in NSX which enables stateful services on Active-Active gateways. **[Chapter 12](#c12.xhtml)** discusses the NSX Advanced Load Balancer, its features, architecture, and how it is integrated with the NSX environment. **[Chapter 13](#c13.xhtml)** covers different NSX Multisite solution, NSX Federation, its architecture, and how it is onboarded and managed in NSX environment. **[Chapter 14](#c14.xhtml)** covers different features provided by NSX to monitor and manage the NSX environment effectively and efficiently. It explains how to enable different authentication providers in the NSX environment to provide secure access to the environment along with RBAC to control the level of access. []{#fm1.xhtml} Downloading the code bundles and colored images {#downloading-the-code-bundles-and-colored-images.chap2} =============================================== Please follow the link or scan the QR code to download the\ ***Code Bundles*** of the book: {#httpsgithub.comava-orange-educationultimate-vmware-nsx-for-professionals.chap2} =============================================================================== ![](images/fm.jpg) The code bundles and images of the book are also hosted on\ **[*https://rebrand.ly/bc45e5*](https://rebrand.ly/bc45e5)** In case there's an update to the code, it will be updated on the existing GitHub repository. Errata {#errata.chap2} ====== We take immense pride in our work at **Orange Education Pvt Ltd** and follow best practices to ensure the accuracy of our content to provide an indulging reading experience to our subscribers. Our readers are our mirrors, and we use their inputs to reflect and improve upon human errors, if any, that may have occurred during the publishing processes involved. To let us maintain the quality and help us reach out to any readers who might be having difficulties due to any unforeseen errors, please write to us at : **** Your support, suggestions, and feedback are highly appreciated. []{#fm2.xhtml} ::: {.box3} DID YOU KNOW {#did-you-know.chap3} ============ Did you know that Orange Education Pvt Ltd offers eBook versions of every book published, with PDF and ePub files available? You can upgrade to the eBook version at [**www.orangeava.com**](http://www.orangeava.com) and as a print book customer, you are entitled to a discount on the eBook copy. Get in touch with us at: **** for more details. At **[www.orangeava.com](http://www.orangeava.com)**, you can also read a collection of free technical articles, sign up for a range of free newsletters, and receive exclusive discounts and offers on AVA™ Books and eBooks. PIRACY {#piracy.chap3} ====== If you come across any illegal copies of our works in any form on the internet, we would be grateful if you would provide us with the location address or website name. Please contact us at **** with a link to the material. ARE YOU INTERESTED IN AUTHORING WITH US? {#are-you-interested-in-authoring-with-us.chap3} ======================================== If there is a topic that you have expertise in, and you are interested in either writing or contributing to a book, please write to us at [**business\@orangeava.com**](mailto:[email protected]). 
We are on a journey to help developers and tech professionals gain insights into the technological advancements and innovations happening across the globe, and to build a community that believes knowledge is best acquired by sharing and learning with others. Please reach out to us to learn what our audience demands and how you can be part of this educational reform. We also welcome ideas from tech experts and help them build learning and development content for their domains. REVIEWS {#reviews.chap3} ======= Please leave a review. Once you have read and used this book, why not leave a review on the site that you purchased it from? Potential readers can then see and use your unbiased opinion to make purchase decisions. We at Orange Education would love to know what you think about our products, and our authors can learn from your feedback. Thank you! For more information about Orange Education, please visit **[www.orangeava.com](http://www.orangeava.com)**. ::: []{#toc.xhtml} []{#c01.xhtml} []{#c01.xhtml#page1}[C[HAPTER]{.small} 1](#toc.xhtml#c01) {#chapter-1.chap} ========================================================= [Introduction to NSX Datacenter](#toc.xhtml#c01) {#introduction-to-nsx-datacenter.subchap} ================================================ [Introduction](#toc.xhtml#s1a) {#c01.xhtml#s1.sec1} ============================== Before we start with the world of **SDN (Software Defined Networking)** or VMware's offering for SDN called **NSX**, let's take a step back and understand the challenges with traditional physical networking. Imagine that you, as a Network Engineer, are approached by an application developer who wants to deploy a multi-tier application (monolithic or container-based) with each component in its own broadcast domain, a firewall applied to every component with centralized management, and the capability to provision network services on demand through automation as and when required, and who asks you to prepare the underlying network infrastructure for that application; these requirements from the application team become very tough and challenging very quickly. But what if we had a magic wand that we could wave to solve all these challenges? Well, cheesy lines aside, virtualization is the answer to all these questions. We have solved similar challenges with compute and storage in the datacenter, and by moving physical networking into a software construct, we can achieve the same level of automation and flexibility with our networking as well. So, fasten your seatbelts as we drive down the path of virtualizing networks. This chapter discusses the current challenges with physical networks, how SDN can be an answer to these challenges and, more specifically, VMware's SDN offering, NSX. []{#c01.xhtml#page2}[Structure](#toc.xhtml#s2a) {#c01.xhtml#s2.sec1} =============================================== In this chapter, we will discuss the following topics: - Challenges with Physical Networks - Introducing SDN - Different models of SDN - Introducing NSX -- A VMware SDN Solution - NSX Architecture - Management Plane - Control Plane - Data Plane [Challenges with Physical Networks](#toc.xhtml#s3a) {#c01.xhtml#s3.sec1} =================================================== Networks are analogous to veins in the human body: just as veins carry blood back and forth between different organs, networks, as their core functionality, provide connectivity between different components in a datacenter.
Apart from connecting different components together, networks also provide some other key services in a datacenter including but not limited to: - Layer 2 communication - Layer 3 connectivity - Firewall services - NAT and VPN services - Load Balancing services, and so on. Initially, we started with different applications running on various physical servers housed in a datacenter. If a developer needed to run an application on different hardware, it would take time to procure the hardware, set it up in the datacenter and get the operating system ready to run the application. In such cases, physical networks were able to keep up with the speed and requirements of the application. But with the evolution of virtualization and the rise of the **SDDC (Software Defined Datacenter),** physical networks are lagging behind. Make no mistake, we still require physical networks to connect different components together even in an SDDC, but as the example in the Introduction shows, new challenges have arisen with physical networks. []{#c01.xhtml#page3}[Traffic Hairpinning](#toc.xhtml#s4a) {#c01.xhtml#s4.sec2} ========================================================= Traffic hairpinning, or pinning, forces traffic to flow through a single point. This can result in a traffic bottleneck affecting the performance of the system. In networking, devices are assigned IP addresses so they can communicate with each other. If the IP addresses belong to the same subnet, the devices can communicate with each other directly. But if the IP addresses belong to different subnets, we need a router or L3 device, known as the default gateway, that can forward the packet to the correct subnet. For physical workloads, it is fine to keep the default gateway on one or more routers, but it becomes an issue for virtual workloads. Imagine two Virtual Machines sitting on the same physical server: in order to talk to each other, they have to traverse the whole network path because their traffic is hairpinned to the default gateway. The following *[Figure 1.1](#c01.xhtml#fig1_1)* shows an example where two virtual machines running on the same hypervisor but belonging to different subnets have their traffic hairpinned to the default gateway. **Figure 1.1:** Traffic Hairpinning in Virtual Workload Using firewalls or other security appliances such as IDS/IPS results in similar hairpinning of traffic, affecting the performance of the application in the case of a non-optimal network design. [Speed and Agility](#toc.xhtml#s5a) {#c01.xhtml#s5.sec2} =================================== Physical networks are not capable of keeping up with the speed and agility of the SDDC (Software Defined Datacenter). In today's world, there is an ever-changing requirement from businesses to roll out new features and stay on top of the competition. This requires faster provisioning of applications and a []{#c01.xhtml#page4}requirement to freely move an application from one environment to another. Server virtualization, storage virtualization and containers have enabled IT to provision applications faster and provide application mobility, but physical networks are holding them back. Let's take the same example from the introduction of the chapter. I need to create new subnet domains on demand for my application components, create new firewall rules and provide new load balancing services in order for the application to be up and running. How long do you think it would take to fulfil all these requirements: days, weeks or months?
That would not be an acceptable answer for businesses; they need it done now. What about using different devices and systems? They may use different protocols and standards, which can make it difficult for the administrator to manage all these devices. And then there is a limit to how far physical networks can expand, for example, the limit of 4096 VLANs per LAN. [Security](#toc.xhtml#s6a) {#c01.xhtml#s6.sec2} ========================== Security is the most important aspect of any network infrastructure. This is what keeps hackers away and protects your business. In traditional networks, security is provided by firewalls, which can provide Layer 3, Layer 4 and Layer 7 inspection. There are other appliances that provide advanced security features such as **IDS/IPS (Intrusion Detection System/Intrusion Prevention System).** However, there are a few inherent flaws with these systems. First, they require network traffic to be hairpinned due to their non-distributed architecture and their placement in the North-South network traffic path. This could be challenged by suggesting that firewall services be used directly on the workload itself. But different workloads have different operating systems and different consoles, and it becomes very difficult to manage rules without any centralized management. Second, most of these firewalls or intelligent systems are very good at acting as perimeter firewalls. They can stop most security attacks, but not all. Most of the cyber-attacks in recent years have a common characteristic -- Lateral Movement. Once a malicious actor gets access to a system inside a datacenter, they exploit various vulnerabilities to move malicious code from server to server, also known as East-West or lateral movement. Using a perimeter-only strategy makes it extremely difficult to prohibit lateral movement, and traditional firewalling makes it exceedingly expensive to safeguard traffic across all workloads in a datacenter. []{#c01.xhtml#page5}[Introducing SDN](#toc.xhtml#s7a) {#c01.xhtml#s7.sec1} ===================================================== SDN, or Software Defined Networking, is a networking approach where the Control Plane (the decision maker) is separated from the **Forwarding Plane or Data Plane** (which transmits the data). This approach is different from traditional networking in the sense that instead of using dedicated physical devices such as switches or routers to control traffic, it uses software either to create and control a new network infrastructure on top of the physical network, known as a Virtual Network, or to control traditional hardware. The key difference between SDN and traditional networking is the infrastructure. While traditional networking is a hardware-based approach, SDN is a software-based approach. In an SDN architecture, the control plane is centralized and implemented in software, while the data/forwarding plane is implemented in network devices or servers. This separation allows for greater flexibility and programmability, as the control plane can be easily updated or modified without requiring changes to the physical network infrastructure. This allows network engineers to manage the entire network infrastructure from a centralized user interface and enables them to control the network, change configuration settings, provision resources or increase network capacity.
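To make the separation of planes concrete, the following toy sketch (purely illustrative, and not tied to any real controller or switch API) models a centralized control plane that computes forwarding rules and programs simple data-plane switches, which then only match and forward:

```python
# Purely illustrative toy model of SDN: a central control plane computes
# forwarding rules and pushes them to data-plane switches that only forward.

class Switch:
    """Data plane: holds a flow table and forwards packets based on it."""

    def __init__(self, name):
        self.name = name
        self.flow_table = {}               # destination subnet -> egress port

    def install_flows(self, flows):        # programmed by the controller
        self.flow_table.update(flows)

    def forward(self, dst_subnet):
        return self.flow_table.get(dst_subnet, "drop")


class Controller:
    """Control plane: central view of the network, programs every switch."""

    def __init__(self, switches):
        self.switches = switches

    def push_policy(self, flows):
        for sw in self.switches:           # one place to change behavior
            sw.install_flows(flows)


fabric = [Switch("sw1"), Switch("sw2")]
Controller(fabric).push_policy({"10.1.1.0/24": "uplink1",
                                "10.1.2.0/24": "uplink2"})
print(fabric[0].forward("10.1.2.0/24"))    # uplink2 -- decided centrally
```

The point of the sketch is only the division of labour: behaviour is changed once, at the controller, instead of device by device.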
[Different models of SDN](#toc.xhtml#s8a) {#c01.xhtml#s8.sec2} ========================================= The key aspect of SDN is the separation of the Control Plane and Data Plane. This can be achieved in different ways which constitute different models of SDN. [Open SDN](#toc.xhtml#s9a) {#c01.xhtml#s9.sec3} ========================== Network administrators use a protocol like OpenFlow to control the behavior of virtual and physical switches at the data plane level. [SDN by APIs](#toc.xhtml#s10a) {#c01.xhtml#s10.sec3} ============================== Instead of using an open protocol, **APIs (Application Programming Interfaces)** controls the flow of data over the network across multiple devices. [SDN Overlay Model](#toc.xhtml#s11a) {#c01.xhtml#s11.sec3} ==================================== Another type of software-defined networking runs a virtual network on top of an existing hardware infrastructure, creating dynamic tunnels to different on-premise and remote data centers. The virtual network allocates bandwidth over []{#c01.xhtml#page6}a variety of channels and assigns devices to each channel, leaving the physical network untouched. [Hybrid SDN](#toc.xhtml#s12a) {#c01.xhtml#s12.sec3} ============================= This model combines software-defined networking with traditional networking protocols in one environment to support different functions on a network. Standard networking protocols continue to direct some traffic, while SDN takes on responsibility for other traffic, allowing network administrators to introduce SDN in stages to a legacy environment. [Introducing NSX -- A VMware SDN Solution](#toc.xhtml#s13a) {#c01.xhtml#s13.sec1} =========================================================== VMware NSX is part of a wider VMware Virtual Cloud Network framework. In essence, **VCN (Virtual Cloud Network)** is a VMware framework to apply consistent network and security policies to different types of workloads which could be virtual machines, containers or applications running on different types of platforms such as hypervisors, bare metal servers or cloud platforms. VCN is all software in nature and requires no dedicated or specialized hardware to run, as long as minimum requirements can be fulfilled VCN can be deployed as a software layer. This software layer provides connectivity between different datacenters, cloud platforms or edge infrastructure and helps in deploying consistent security policies which help in overcoming challenges posed by traditional physical networks. Apart from providing networking and security, VCN includes several other solutions which are key to providing extensibility, integration, automation and consistency. 
*[Figure 1.2](#c01.xhtml#fig1_2)* captures the key solutions that are part of the VCN portfolio. ![](images/1.2.jpg) **Figure 1.2:** VMware Virtual Cloud Network Portfolio []{#c01.xhtml#page7}Key solutions that are part of the VMware Virtual Cloud Network portfolio are: - **NSX Data Center** -- VMware solution for providing consistent networking and security policies across multiple applications and workloads running on different platforms - **NSX Advanced Load Balancer (ALB) --** VMware distributed load balancing solution for virtual machines, containers, bare metal servers or workloads running on the cloud - **Hybrid Cloud Extension (HCX) --** Provides the capability to migrate workloads from legacy infrastructure to the cloud with minimal downtime - **NSX Network Detection & Response (NDR) --** Advanced threat detection and analysis services provided by VMware for proactive response - **NSX IDS/IPS --** VMware advanced, fully distributed, Layer 7 intelligent threat detection and prevention services, part of the NSX solution - **NSX Cloud --** SaaS offering for NSX services which delivers networking and security services natively for applications running in the public cloud - **Tanzu Service Mesh --** Provides consistent networking and security management across different Kubernetes platforms from a central console - **Aria Operations for Networks --** Provides visibility and monitoring for virtual and physical networks and, based on the flows captured, provides recommendations for the distributed firewall in the NSX environment **Note:** Of the key solutions listed above, NSX IDS/IPS, NDR, and ALB are covered in this book as part of NSX and not as standalone products. The other key solutions are outside the scope of this book. [NSX Data Center](#toc.xhtml#s14a) {#c01.xhtml#s14.sec2} ================================== NSX Data Center is a VMware SDN offering for networking and security and is based on four fundamental attributes: - **Policy and Consistency** -- NSX allows the user to provide desired state configuration with the help of the API or UI, which enables automation for fast-paced business requirements. It has various controls and inventories in place to keep the system in a consistent state across different platforms. - **Connectivity** -- It provides logical switching and distributed routing capabilities separated from the management plane and control plane without being tied to a single compute domain. For example, multiple vCenter Servers can be integrated with a single NSX deployment to provide consistent networking across different clusters. It can further extend connectivity to different sites, public clouds or containers via specific implementations. - **[]{#c01.xhtml#page8}Security** -- It allows for distributed security across multiple workloads but still provides centralized management for security policies. This helps in providing a consistent security policy on different workloads, be it virtual machines, containers or applications running in a private or public cloud, to maintain the correct security posture. - **Visibility** -- It provides a wide variety of toolsets such as Traceflow, Live Traffic Analysis, Spanning, metric collection, and events, all available in a single place, greatly reducing operational complexity. [NSX Architecture](#toc.xhtml#s15a) {#c01.xhtml#s15.sec1} =================================== VMware NSX Data Center is based on the Overlay SDN model and provides consistent networking and security services in the IT environment. It implements these services, including but not limited to Switching, Routing, NAT, VPN, firewalling, etc.,
in software, hence bringing networking services into a software construct. Further, these services can be provisioned on demand in different combinations to create different network environments as per application requirements, with tremendous agility. NSX implements all these features with the help of three different functional planes. These are: - Management Plane - Control Plane - Data Plane *[Figure 1.3](#c01.xhtml#fig1_3)* shows the different components that make up these functional planes, which are covered in depth in the upcoming sections. **Figure 1.3:** NSX High-Level Architecture Components Before NSX-T 2.4 (before version 4.x, NSX was named NSX-T), the Management plane and Control plane were deployed as separate appliances. However, starting with []{#c01.xhtml#page9}NSX-T 2.4, the Management plane and Control plane are merged into a single appliance known as the Manager Appliance, and the Data Plane is implemented with the help of transport nodes -- more to be discussed in the upcoming sections. **Note**: Starting with NSX 4.0, KVM is no longer supported as a transport node. Also, VIO is the only supported OpenStack implementation with NSX 4.0 and above. [Management Plane](#toc.xhtml#s16a) {#c01.xhtml#s16.sec2} =================================== The management plane is the point of entry into the NSX environment. This is where cloud platforms, management tools or API clients interact with NSX using API queries, and where we as users can interact either via the Graphical User Interface or API queries. The primary responsibilities of the management plane are to store user configurations, handle any queries and perform operational tasks such as pushing policies to the control plane for realization, storing configuration in the database, etc. The management plane is implemented by the NSX Manager Appliance in an NSX environment. NSX Manager provides a centralized interface to view and manage NSX deployments. As the NSX Manager appliance is the single point of entry into an NSX environment, in order to avoid a single point of failure, it is deployed as a group of three nodes to form the NSX Management Cluster, which provides high availability and scalability in an NSX environment. Key functionalities of the NSX Management Cluster include: - The entry point for user configurations either via RESTful APIs or the GUI (Graphical User Interface) - Store the desired user configuration in a distributed database - Push the desired user configuration to the control plane for realization - Communicate with data nodes to retrieve metrics as well as the realized configuration [NSX Management Cluster](#toc.xhtml#s17a) {#c01.xhtml#s17.sec3} ========================================= As stated earlier, the NSX Management cluster is formed by grouping three NSX Manager appliances in a cluster. Prior to NSX-T 2.4, NSX Manager and NSX Controllers were separated by their roles and deployed as separate appliances. So, in total, four appliances were needed, one for management and three for controllers, but the management plane was still a single point of failure. Starting with NSX-T 2.4, Management Plane components and Control Plane components were merged into a single appliance, eliminating the single point of failure for the management plane. []{#c01.xhtml#page10}The NSX Manager cluster primarily runs the following functions: - Manager Role - Policy Role - Controller Role - Distributed Persistent Database The Manager and Policy roles provide management plane functionality, whereas the Controller role, as the name suggests, serves control plane functions.
All the desired configuration is saved in a distributed database which is replicated across all three nodes, providing the same configuration view to all nodes in the cluster. *[Figure 1.4](#c01.xhtml#fig1_4)* depicts the different roles that make up the Management Plane and their cluster groups within an NSX Management Cluster. ![](images/1.4.jpg) **Figure 1.4:** NSX Management Cluster Components The NSX Manager appliance is available in different sizes to be deployed based on different requirements. The following table highlights the different options available to deploy an NSX Manager appliance:

+------------------------+------------+----------+----------------+
| **Appliance Size**     | **Memory** | **CPU**  | **Disk Space** |
+------------------------+------------+----------+----------------+
| **Extra Small VM**     | 8 GB       | 2 vCPU   | 300 GB         |
+------------------------+------------+----------+----------------+
| **Small VM**           | 16 GB      | 4 vCPU   | 300 GB         |
+------------------------+------------+----------+----------------+
| **Medium VM**          | 24 GB      | 6 vCPU   | 300 GB         |
+------------------------+------------+----------+----------------+
| **Large VM**           | 48 GB      | 12 vCPU  | 300 GB         |
+------------------------+------------+----------+----------------+

**Table 1.1:** NSX Manager Appliance Size

**Note**: The Extra Small VM resource size is only for the Cloud Services Manager appliance. []{#c01.xhtml#page11}The Small VM appliance is for lab or Proof-of-Concept deployments and should not be used for production deployments. [NSX Consumption Model](#toc.xhtml#s18a) {#c01.xhtml#s18.sec3} ======================================== NSX Manager provides two different models to interact with the NSX environment or consume NSX services. These two models are handled by two different roles: - Policy Role - Manager Role Primarily, you will be working with the Policy role, which is a declarative state configuration model, whereas the Manager role is an imperative state configuration model. Let's take an example to understand more about the Policy role and Manager role. Imagine going to a pizzeria and ordering a margherita pizza. We tell the pizzeria about our choice of margherita pizza and the size of the order, whether it will be small, medium or large. We are providing the desired state of our order, and we leave it up to the pizzeria to figure out the recipe, ingredients and step-by-step details to create the desired pizza. The Policy role handles the desired end-state order from the user and then hands it over to the Manager role to figure out the step-by-step process. So, we order through the Policy role, and then the backend of the pizzeria is handled by the Manager role to figure out how to achieve the desired end state. To summarize, in the Policy model or Policy role we provide the desired state, whereas in the Manager role we need to provide step-by-step configurations. **NSX Policy:** NSX Manager's default UI mode is Policy mode. Some of the key functions provided by the NSX Policy role are: - It provides a centralized interface for configuring networking and security policies across the environment - It takes the desired state configuration from users in the NSX UI, or it can be accessed via the API URI */policy/api/* - It enables the user to specify the final desired state of the system without worrying about the current state or the underlying implementation steps
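As a minimal sketch of the declarative model (the manager address, credentials and segment values below are placeholders, and certificate verification is disabled for lab use only), a single desired-state document is sent to the Policy API and NSX works out the steps needed to realize it:

```python
import requests

NSX_MANAGER = "nsx-mgr.lab.local"          # placeholder manager FQDN
AUTH = ("admin", "VMware1!VMware1!")        # placeholder credentials

# Desired state only: what the segment should look like, not how to build it.
desired_segment = {
    "display_name": "web-segment",
    "subnets": [{"gateway_address": "172.16.10.1/24"}],
}

resp = requests.patch(
    f"https://{NSX_MANAGER}/policy/api/v1/infra/segments/web-segment",
    json=desired_segment,
    auth=AUTH,
    verify=False,      # lab only; use a trusted certificate in production
)
resp.raise_for_status()
print("Desired state accepted:", resp.status_code)
```

Note that the request describes an end state; the Policy role stores it and hands it to the Manager role to realize, which is exactly the pizzeria workflow described above.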
**NSX Manager:** The NSX Manager UI is disabled by default and has been deprecated since NSX-T 3.2. It is temporarily used to address deployments created via Manager mode or upgraded from older NSX-T versions. NSX-T 3.2 has introduced a policy promotion tool to migrate configuration from Manager mode to Policy mode. Other key functions provided by the NSX Manager role: - []{#c01.xhtml#page12}It installs and prepares the data plane components - It retrieves and validates configurations from NSX Policy and pushes the configuration to the control plane - Also, it retrieves the metrics from data plane components **Using Policy vs. Manager UI** It is always recommended to use the Policy UI for any new deployments, as the Manager UI is deprecated and new features are implemented in the Policy UI only. However, for a few use cases, the use of the Manager UI might be required. The following table highlights such use cases where the Manager UI is required.

+-----------------------------------+-----------------------------------+
| **Policy Mode**                   | **Manager Mode**                  |
+-----------------------------------+-----------------------------------+
| Any new deployment should use     | Any deployment created using      |
| Policy mode.                      | Advanced mode.                    |
|                                   |                                   |
| NSX Federation is supported in    |                                   |
| Policy mode only.                 |                                   |
+-----------------------------------+-----------------------------------+
| NSX Cloud deployments.            | Deployments where integration     |
|                                   | with another plugin is required,  |
|                                   | such as OpenStack or the NSX      |
|                                   | Container Plugin.                 |
+-----------------------------------+-----------------------------------+
| Networking features such as VPN,  |                                   |
| DNS Services and DNS Zones are    |                                   |
| supported in Policy mode only.    |                                   |
+-----------------------------------+-----------------------------------+
| Security features available in    | Security features available in    |
| Policy mode only:                 | Manager mode only:                |
|                                   |                                   |
| - Endpoint Protection             | - Bridge Firewall                 |
| - Network Introspection           |                                   |
| - Context Profiles                |                                   |
| - L7 applications                 |                                   |
| - FQDN                            |                                   |
|                                   |                                   |
| New Distributed Firewall and      |                                   |
| Gateway Firewall layout:          |                                   |
|                                   |                                   |
| - Categories                      |                                   |
| - Auto service rules              |                                   |
| - Drafts                          |                                   |
+-----------------------------------+-----------------------------------+

**Table 1.2:** Using Policy UI or Manager UI

[]{#c01.xhtml#page13}[NSX Manager Communication Workflow](#toc.xhtml#s19a) {#c01.xhtml#s19.sec3} ========================================================================== We have discussed a lot about the different roles and components in the NSX Manager appliance, so let's map them out and see how they interact with each other. *[Figure 1.5](#c01.xhtml#fig1_5)* shows the key services that are part of NSX Manager and how they communicate with each other to take the desired state configuration from the user, validate it and save it in the database. **Figure 1.5:** NSX Manager Communication Workflow As a first step, the user accesses the UI or REST API, which comes through the Reverse Proxy on NSX Manager. The Reverse Proxy is the first point of entry, with authentication and authorization capabilities. The configuration is then sent to Policy, which in turn updates CorfuDB, our persistent distributed database, to save the desired configuration. Policy then sends the configuration to Proton, which is our Manager role. Proton in turn validates all the configuration provided by Policy and updates the CorfuDB database. After validating and updating CorfuDB, Proton then sends the configuration to the Control Plane. Proton is one of the core components of NSX Manager and is responsible for various key functionalities such as logical switching, logical routing, distributed firewall, etc. Both Proton and Policy save their data in CorfuDB, which is replicated to the other nodes in the cluster to provide a consistent view to all nodes in the NSX cluster.
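The role of the Reverse Proxy as the authenticated point of entry can be illustrated with a short sketch: the first call below creates an API session, and the returned session cookie and anti-CSRF token are reused on a follow-up request, which the proxy then hands to Policy. The manager address and credentials are placeholders, and header handling may differ slightly between NSX versions:

```python
import requests

NSX_MANAGER = "nsx-mgr.lab.local"           # placeholder manager FQDN

session = requests.Session()
session.verify = False                       # lab only

# Step 1: authenticate against the reverse proxy; it returns a session
# cookie plus an anti-CSRF token that must accompany later requests.
login = session.post(
    f"https://{NSX_MANAGER}/api/session/create",
    data={"j_username": "admin", "j_password": "VMware1!VMware1!"},
)
login.raise_for_status()
session.headers["X-XSRF-TOKEN"] = login.headers["X-XSRF-TOKEN"]

# Step 2: subsequent calls pass through the proxy to Policy, for example
# reading back the desired configuration tree.
infra = session.get(f"https://{NSX_MANAGER}/policy/api/v1/infra")
print(infra.status_code)
```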
[Control Plane](#toc.xhtml#s20a) {#c01.xhtml#s20.sec2} ================================ The Control Plane consists of the Controller role, whose primary function is to maintain the realized state of the system based on the desired state configuration []{#c01.xhtml#page14}received from the Manager role or Proton. The Controller role performs this function by computing the runtime state based on the configuration from Proton and then pushing the stateless configuration to the data plane. *[Figure 1.6](#c01.xhtml#fig1_6)* highlights the components of the control plane. ![](images/1.6.jpg) **Figure 1.6:** NSX Control Plane Another key function of the Controller is to distribute topology information reported by data plane nodes and maintain the realized state configuration for the system. In NSX, the control plane functionality is achieved through a multi-tier approach, and it is divided into two main parts, the Central Control Plane and the Local Control Plane, as shown in *[Figure 1.7](#c01.xhtml#fig1_7)*: **Figure 1.7:** NSX Control Plane Architecture - **[]{#c01.xhtml#page15}Central Control Plane (CCP):** The Central Control Plane or CCP resides on the NSX Manager nodes and is part of the Controller role. The CCP is also implemented in a cluster form factor, with the Controller role running on all three NSX Manager nodes that are part of the cluster. This provides both high availability and load distribution within the control plane. As stated earlier, one of the primary functions of the Controller role is to compute the runtime state configuration, and this is handled by the CCP on the NSX Manager nodes. Also, it distributes topology information reported by the LCP running on data plane nodes to the other CCP nodes so the same realized state is maintained across the environment. As the CCP is logically separated from the data plane, any failure in the control plane doesn't affect data plane traffic; also, user traffic is never passed to the CCP, avoiding any hairpinning. - **Local Control Plane (LCP) --** The Local Control Plane or LCP exists on each and every data plane node, which can be an ESXi host, an Edge node or a bare metal server (more on data nodes is covered in the section Data Plane). Its primary function is to program or configure kernel modules on the data nodes and push stateless configuration to the forwarding engines. It monitors the link status on the local data node and provides any update in the forwarding engine to the CCP. ![](images/1.8.jpg) **Figure 1.8:** Information distribution from LCP to CCP An important point to note here is that each data plane node communicates with a single control node. Any time a transport node or data node is initialized, it is assigned to a control node. The CCP receives any configuration update from NSX Manager and pushes the information to the LCP of the transport nodes. In *[Figure 1.8](#c01.xhtml#fig1_8)*, the transport node on the left has a configuration update. How the transport node on the right receives this update is shown as a multi-step process. []{#c01.xhtml#page16}If any update or change occurs on a transport node, as a first step, the LCP running on that transport node updates its assigned control node, that is, the CCP running on a specific control node. In the second step, the CCP further distributes these changes to the other control nodes with the help of the distributed database. Also, any transport nodes assigned to the same CCP will receive the changes.
In the last step, the CCP instances running on the other control nodes push the changes to their respectively assigned LCPs. So, changes on any node are propagated to the entire NSX environment to keep it consistent. This distribution of workload is achieved with the help of a process called Sharding. [Sharding](#toc.xhtml#s21a) {#c01.xhtml#s21.sec3} =========================== Sharding is the process of distributing workload across multiple control plane nodes. The NSX Management cluster is made up of three nodes, with each node running the Controller role and, more specifically, the CCP. With the help of sharding, transport nodes or data nodes are divided equally among the different CCP nodes, and each CCP node is responsible for maintaining the state information of the data nodes that are assigned to it. This helps in dividing the load across the NSX Management cluster and avoids high load or high resource usage on any one specific node. *[Figure 1.9](#c01.xhtml#fig1_9)* shows transport nodes assigned to specific controller nodes to share the load. **Figure 1.9:** Controller Sharding On a high level: - The transport node is assigned to a specific CCP node for L2, L3 and distributed firewall rule configuration and distribution - Each CCP node receives updates from both the Manager role and the data plane nodes but maintains state configuration only for the specific transport nodes it has been assigned to. []{#c01.xhtml#page17}But a question arises here -- What if my controller node fails? What will happen to the transport nodes assigned to the failed controller node? Will it affect my data plane traffic? The answer is pretty simple; we have separated our control plane from the data plane. So even if a controller node fails, data plane traffic remains unaffected and continues to flow. There is a heartbeat running between all controller nodes, so if one controller node fails, the other controller nodes become aware of the failure and the sharding table is recalculated to redistribute the load among the remaining controller nodes. **Note**: If two controller nodes or NSX Manager appliances fail in a three-node cluster, then the Management Plane goes into a read-only state and no configuration changes can be performed.
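Conceptually, the sharding table is just a mapping from transport nodes to controller nodes that is rebuilt whenever controller membership changes. The toy sketch below is not the actual NSX algorithm, only an illustration of how the load is redistributed after a controller failure while the data plane keeps forwarding:

```python
# Toy illustration of controller sharding -- not the actual NSX algorithm.

def build_sharding_table(transport_nodes, controllers):
    """Assign each transport node to exactly one controller (CCP) node."""
    table = {}
    for i, tn in enumerate(sorted(transport_nodes)):
        table[tn] = controllers[i % len(controllers)]   # even round-robin split
    return table

controllers = ["ccp-1", "ccp-2", "ccp-3"]
transport_nodes = [f"esxi-{n:02d}" for n in range(1, 7)]

print(build_sharding_table(transport_nodes, controllers))

# A failure is detected via heartbeats between controller nodes; the table
# is simply recalculated across the survivors, and data plane traffic keeps
# flowing because packet forwarding never depends on the control plane.
controllers.remove("ccp-2")
print(build_sharding_table(transport_nodes, controllers))
```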
[Data Plane](#toc.xhtml#s22a) {#c01.xhtml#s22.sec2} ============================= The data plane is the most important component in the NSX environment, as this is where the actual packet processing is performed. It is completely distributed and stateless in nature: if any data node, for example, an ESXi host, fails, all Virtual Machines running on that ESXi host are failed over to other hosts. The data plane consists of Transport Nodes, where a TN (Transport Node) is a host running LCP (Local Control Plane) daemons and forwarding engines. NSX supports different types of transport nodes, which can be classified into two broad categories: [Host Transport Nodes](#toc.xhtml#s23a) {#c01.xhtml#s23.sec3} ======================================= Host transport nodes are either hypervisors or bare metal servers which are prepared and configured for NSX to provide networking and security to the workloads or applications running on them. The most common host transport nodes are: - **ESXi Host:** It provides data plane functions for different types of workloads such as VMs and containers. NSX implements the data plane on the ESXi host with the help of the N-VDS (NSX Virtual Distributed Switch). Starting with NSX-T 3.0, NSX can be installed directly on top of VDS v7 (vSphere Distributed Switch). Also, from NSX 4.0, N-VDS is no longer supported with ESXi. - **KVM:** Starting with NSX 4.0, KVM is no longer supported. However, in earlier releases, NSX-T installed an N-VDS based on OVS (Open vSwitch) to provide data plane functions to VMs running on KVM. - **[]{#c01.xhtml#page18}Bare Metal Server:** The NSX Agent or NSX third-party packages can be installed on bare metal servers running Windows or Linux to secure applications running on bare metal servers. Also, a bare metal server can be prepared as an Edge Transport Node to provide routing and network services to the NSX environment. [Edge Transport Nodes](#toc.xhtml#s24a) {#c01.xhtml#s24.sec3} ======================================= Edge Nodes are special NSX appliances dedicated to running stateful and centralized network services that cannot be distributed in nature, such as Gateway Firewall, N-S Routing, NAT or VPN, and so on. Edge TNs are grouped into a cluster to provide high availability and scalability for network services by abstracting and presenting compute resources as a pool of resources to the centralized network services. An Edge Transport Node can be deployed in two forms: - **Edge VM Node:** As the name suggests, the Edge transport node is deployed as a virtual machine in a vSphere environment. This is the most common deployment and can cater to most business requirements. - **Bare Metal Edge Node:** Instead of deploying a virtual machine, a bare metal server can be instantiated as an Edge transport node. Bare metal edge nodes are usually deployed where bandwidth requirements are much higher than a VM form factor can provide. Some of the key functions provided by the Data Plane are: - It forwards packets based on the configuration provided by the local control plane, which in turn is provided by the central control plane based on the user-desired configuration - It processes packets based on various flow tables and rules which are populated by the control plane - It reports the topology information back to the control plane with the help of the local control plane, which is further distributed to other data plane nodes - It monitors the status of links and tunnels and performs a failover in case of any failure - Last but not least, it maintains statistics and metrics at the packet level, which help in determining resource utilization and different events [Transport Node Communication Path](#toc.xhtml#s25a) {#c01.xhtml#s25.sec3} ==================================================== As stated earlier in the Management Plane and Control Plane sections, part of their function is to communicate with data plane nodes, collect metrics or statistics and push stateless configurations, respectively. Instead of the Manager from the Management plane and the CCP from the Control plane communicating directly with the data node, or more appropriately the transport node, they use a proxy called the APH (Appliance Proxy Hub) to communicate with the transport node. *[Figure 1.10](#c01.xhtml#fig1_10)* highlights the different ports and services responsible for communication between an NSX Manager node and a Transport node. []{#c01.xhtml#page19} ![](images/1.10.jpg) **Figure 1.10:** Transport Node communication with Manager Node APH (Appliance Proxy Hub) runs as a service on the NSX Manager node and provides secure connectivity to the transport node based on the NSX-RPC protocol. Similar to APH on the NSX Manager node, NSX-Proxy runs on the transport node and communicates with APH to provide statistics or receive configurations.
NSX preparation or initial configuration is handled by the Manager running on the NSX Manager node, and when a transport node is added to NSX, the Manager sends the configuration to APH, which in turn talks to NSX-Proxy over TCP Port 1234 to configure the data plane on the local node. Post preparation of NSX on the data node, NSX-Proxy collects stats and metrics from the local node and sends them to APH over TCP Port 1234 for further analysis by the Manager. Similarly, the CCP pushes the stateless configurations computed from the desired user configuration to APH, which in turn sends them to NSX-Proxy over TCP Port 1235 for it to apply on the forwarding engines. Also, any topology reported by a local node is communicated by NSX-Proxy to APH over TCP Port 1235, which further communicates the information to the CCP. As there are multiple NSX Manager nodes and Transport Nodes, each establishes its own communication channel over the specified ports, and all traffic passing between the different nodes is secure and encrypted in nature.
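When preparing or troubleshooting a transport node, it can be helpful to confirm that both APH channels are reachable before digging deeper. The minimal sketch below (the manager IP addresses are placeholders) simply tests TCP connectivity from a transport node to each manager node on ports 1234 and 1235:

```python
import socket

MANAGER_NODES = ["192.168.10.11", "192.168.10.12", "192.168.10.13"]  # placeholders
APH_PORTS = {1234: "management channel (Manager <-> NSX-Proxy)",
             1235: "control channel (CCP <-> NSX-Proxy)"}

# Run from a transport node: check that every manager is reachable on both
# ports used for APH/NSX-Proxy communication.
for mgr in MANAGER_NODES:
    for port, purpose in APH_PORTS.items():
        try:
            with socket.create_connection((mgr, port), timeout=3):
                state = "reachable"
        except OSError:
            state = "NOT reachable"
        print(f"{mgr}:{port} ({purpose}) -> {state}")
```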
[Conclusion](#toc.xhtml#s26a) {#c01.xhtml#s26.sec1} ============================= In this chapter, we discussed the challenges with traditional physical networks and how VMware NSX can overcome those challenges, without the need for any []{#c01.xhtml#page20}specific hardware, with the help of Software Defined Networking. Like any network device, NSX can be divided into three functional planes which are logically separated from each other so that the failure of one plane does not affect another. The Policy and Manager roles are key components of the Management Plane, whereas the CCP and LCP constitute the Control Plane. NSX supports a wide array of platforms that can be configured as data plane nodes or transport nodes to provide Network and Security services to the different workloads or applications running on them. The real fun with NSX begins with hands-on experience. In the next chapter, we will begin with the deployment of NSX Manager and cover how to prepare the infrastructure for NSX. [Key Terms](#toc.xhtml#s27a) {#c01.xhtml#s27.sec1} ============================ - **VLAN (Virtual Local Area Network):** A group of devices logically segmented into the same broadcast domain on a switched network - **NAT (Network Address Translation):** A method to map multiple or single private IP addresses to a public IP address before passing traffic over the network - **SDN (Software Defined Networking):** A networking approach that separates the control plane from the data plane with the help of software and can be implemented in different models - **Overlay:** A virtual network created on top of a physical network to create different network services with the help of encapsulation and tunnels - **NSX Manager Node:** A specialized appliance provided by VMware to implement virtual networking in the datacenter. Also, it serves management and control plane functions - **NSX Manager Cluster:** A group of three NSX Manager nodes that provides high availability and scalability to the management plane and control plane - **Policy Role:** A declarative approach which takes user input in the form of desired state configurations - **Manager Role:** An imperative approach which provides step-by-step configurations to achieve the desired state configuration - **CorfuDB:** A persistent distributed database to save configurations and inventory - **CCP (Central Control Plane):** The Controller role running on the NSX Manager node that computes stateless configurations from the desired user configurations provided by the Manager, to further push to the local control plane - **[]{#c01.xhtml#page21}LCP (Local Control Plane):** The Controller role running on the transport node that helps in pushing stateless configuration to the forwarding engine to achieve the realized state configuration - **Transport Node:** Any host, such as a hypervisor or bare metal server, prepared for NSX and running LCP daemons and forwarding engines - **APH (Appliance Proxy Hub):** A service running on the NSX Manager node responsible for communication between the NSX Manager node and the transport nodes []{#c02.xhtml} []{#c02.xhtml#page22}[C[HAPTER]{.small} 2](#toc.xhtml#c02) {#chapter-2.chap} ========================================================== [Deploying NSX Infrastructure](#toc.xhtml#c02) {#deploying-nsx-infrastructure.subchap} ============================================== [Introduction](#toc.xhtml#s28a) {#c02.xhtml#s28.sec1} =============================== In the previous chapter, we reviewed the NSX architecture at a high level along with the different key components in an NSX environment such as the **Manager**, **Controller** and **Transport Nodes.** This is all good, but the real fun begins with deployments. This chapter covers the deployment of NSX Manager, attaching it to a **vCenter** server and, most importantly, preparing data nodes. Key steps in preparing NSX data nodes are: - Creating Transport Zone - Creating uplink profiles - Preparing ESXi host for NSX This chapter covers all the preceding steps to prepare your NSX environment in addition to deploying NSX Manager. Starting with NSX 4.x, KVM is no longer supported; hence, this chapter covers NSX infrastructure deployment and preparation for the vSphere environment only. KVM preparation for the NSX environment is out of the scope of this book, but documentation for the same has been provided in the References section. [Structure](#toc.xhtml#s29a) {#c02.xhtml#s29.sec1} ============================ In this chapter, we will cover the following topics: - Deploying NSX Manager Cluster on vSphere - Deploying NSX Manager - []{#c02.xhtml#page23}NSX Manager Base Configuration - Deploying Additional NSX Manager Node - Creating NSX Management Cluster using CLI - Configuring Virtual IP for Cluster - Validating NSX Management Cluster - Preparing NSX Data Plane - Architecture of Transport Nodes - Transport Zones - Uplinks or pNICs - Uplink Profile - Transport Node Profile - Preparing ESXi Host as Transport Node [Deploying NSX Manager Cluster on vSphere](#toc.xhtml#s30a) {#c02.xhtml#s30.sec1} =========================================================== Before configuring or utilizing NSX networking in the environment, it is important to set up the management plane, as it provides entry into the NSX environment.
As covered earlier, the management plane consists of three Manager appliances grouped in a cluster, which are deployed as **Open Virtualization Appliance (OVA)** appliances either on standalone ESXi hosts or on ESXi hosts managed by a vCenter server. The recommended way to deploy NSX Manager is through the vCenter server, as the vCenter server can deploy any virtual machine that NSX Manager directs it to, thus removing manual overhead. In a nutshell, the first NSX Manager is deployed using the OVA file provided by VMware, and after its successful deployment, the vCenter server is registered with NSX Manager as a compute manager. Subsequent NSX Managers are deployed from within the NSX UI to create a three-node management cluster. As a next step, the prerequisites for Data Plane preparation are configured, such as Transport Zones and uplink profiles, and ESXi hosts are prepared as Transport Nodes. Last but not least, edge nodes are deployed and configured in an Edge Cluster to provide centralized network services. *[Figure 2.1](#c02.xhtml#fig2_1)* shows the workflow for implementing NSX in the vSphere environment.

**Figure 2.1:** Deploying NSX on vSphere Workflow

All the steps mentioned in *[Figure 2.1](#c02.xhtml#fig2_1)* are covered in detail in the subsequent sections.

**Note:** Registration of the vCenter server is required only if additional NSX Manager nodes need to be deployed from the NSX UI; otherwise, additional NSX Manager nodes can be deployed using the OVA file and joined into a management cluster with the help of the NSX CLI.

[Deploying NSX Manager](#toc.xhtml#s31a) {#c02.xhtml#s31.sec1}
========================================

NSX Manager is provided by VMware as an **OVA (Open Virtualization Appliance)** template which can be deployed in a vSphere environment using the UI or in an automated fashion through APIs or scripts. NSX Manager can be deployed either directly on a standalone ESXi host or on vCenter server-managed ESXi hosts; however, appropriate permissions are required to successfully deploy NSX Manager in either option. VMware bundles different components together in a single appliance template, so the same template can be used to deploy CSM, NSX Manager or Global Manager (required for federation, covered in detail in *[Chapter 13, NSX Multisite Deployment](#c13.xhtml)*). Once NSX Manager is deployed, Edge VM deployment or NSX VIB installation on ESXi hosts can be done from the same NSX Manager without the need for extra binaries.

The step-by-step process to deploy NSX Manager in the vSphere environment using the vCenter server is as follows:

**Step 1:** Download the `NSX Manager OVA` file from VMware using the Customer Connect Portal.

**Step 2:** Log in to the `vCenter` server with a user account assigned the proper permissions. Select the `Cluster` or `ESXi host` on which to deploy NSX Manager and either right-click or go to `Actions` and select `Deploy OVF Template`. Select `Local File` and browse to the directory containing the OVA file. *[Figure 2.2](#c02.xhtml#fig2_2)* shows the dialog box with the NSX Manager OVA selected.

![](images/2.2.jpg)

**Figure 2.2:** Select an OVF Template dialog box

**Step 3:** Click `Next` and provide an appropriate name for the NSX Manager VM which will be deployed on the ESXi host, and optionally select a VM folder in which to place the NSX Manager VM.

**Step 4:** Click `Next` and select the `compute resource`, either a Cluster or a specific ESXi host, to run the NSX Manager VM.
**Step 5:** Clicking `Next` presents the `Review` screen with details about the OVA template file such as `Vendor, Version, Product` and `Publisher` information. Review the information and click `Next`.

**Step 6:** The next dialog box, named `Configuration`, presents four different size options to select from. As covered in *[Chapter 1, Introduction to NSX Datacenter](#c01.xhtml)*, these are:

1. `ExtraSmall`: Only supported for the Cloud Service Manager role
2. `Small`: Supported for Global Manager production deployments; for NSX Manager it should only be used for POC or lab deployments
3. `Medium`: Supports up to 128 ESXi hosts (at the time of writing, with NSX version 4.0.1) in a production NSX Manager cluster deployment
4. `Large`: Supports up to 1024 ESXi hosts (at the time of writing, with NSX version 4.0.1) in a production NSX Manager cluster deployment

*[Figure 2.3](#c02.xhtml#fig2_3)* shows the `Configuration` screen with the preceding options.

**Figure 2.3:** Configuration dialog box with different size options

**Step 7:** In the subsequent options, select the desired `Datastore` and provisioning type for storage and the `vSphere Port group` for network connectivity, and click `Next.`

**Step 8:** The next screen, named `Customize Template`, is the most important screen as it takes input for all the properties of NSX Manager, which are:

1. `System GRUB Root User Password`: Sets the password for the GRUB boot menu to prevent tampering with boot options
2. `System Root User Password`: Sets the root password for the NSX Manager appliance, which provides shell access to the VM
3. `CLI admin User Password`: Sets the admin password for NSX Manager, which provides admin access to the NSX environment
4. `CLI audit User Password`: Sets the audit user password for NSX Manager, which provides read-only access to the NSX environment; this can also be set up later from within the NSX UI
5. `Hostname`: Hostname of NSX Manager accessible over the network
6. `Rolename`: Select the appropriate role to deploy: NSX Manager, Cloud Service Manager or Global Manager
7. `Management Network Details`: Provide IP details to access NSX Manager over the network
8. `DNS Details`: Set up the DNS server IP and search list to resolve hostnames to IPs within NSX Manager
9. `Service Configuration`: Select whether SSH services should be enabled, as per requirement. It is important to set up the NTP server correctly to avoid issues in the network due to time drift between different components.

**Step 9:** Click `Next` to reach the final screen. Review all details and click `Finish` to start the deployment of the first NSX Manager. *[Figure 2.4](#c02.xhtml#fig2_4)* shows the `Ready to Complete` screen.

![](images/2.4.jpg)

**Figure 2.4:** Ready to Complete screen with details entered earlier

**Step 10:** Depending upon the underlying hardware, it takes a few minutes to complete the deployment of the NSX Manager appliance VM. After the NSX Manager is deployed, either right-click on the `VM` in the vCenter server or go to `Actions`, `Power` and click `Power On`.

**Step 11:** Preconfigured first-boot scripts configure NSX Manager with the information provided earlier. *[Figure 2.5](#c02.xhtml#fig2_5)* shows such scripts in action.

**Figure 2.5:** First boot of NSX Manager Appliance

**Note:** Screenshots for configurations or deployments provided in this book are based on **vSphere 7.0 Update 3** and **NSX 4.0.1.1**. There may be minor UI differences between versions, hence reader discretion is advised.

Congratulations! You have successfully deployed your first NSX Manager.
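For repeatable lab builds, the same OVA can also be deployed from the command line with VMware's `ovftool` utility instead of the vSphere UI. The following is only a minimal sketch: the vCenter address, datacenter, cluster, datastore, port group and passwords are placeholder values, and the deployment option ID and OVF property names (`nsx_hostname`, `nsx_ip_0` and so on) are assumptions that should be verified against the OVA you downloaded (running `ovftool` against the OVA file with no target lists its valid options and properties), as they can change between NSX releases.

+-----------------------------------------------------------------------+
| `# Sketch: scripted deployment of the NSX Manager OVA with ovftool.` |
| `# All names, addresses and passwords below are placeholders; the` |
| `# deployment option and property names must match your OVA version.` |
| `ovftool --acceptAllEulas --noSSLVerify --powerOn \` |
| `  --name=sa-nsx-01 --diskMode=thin \` |
| `  --datastore=Datastore-01 --network="Management" \` |
| `  --deploymentOption=medium \` |
| `  --prop:nsx_hostname=sa-nsx-01.lab.local \` |
| `  --prop:nsx_ip_0=10.10.10.21 \` |
| `  --prop:nsx_netmask_0=255.255.255.0 \` |
| `  --prop:nsx_gateway_0=10.10.10.1 \` |
| `  --prop:nsx_dns1_0=10.10.10.10 --prop:nsx_ntp_0=10.10.10.10 \` |
| `  --prop:nsx_passwd_0='<root-password>' \` |
| `  --prop:nsx_cli_passwd_0='<admin-password>' \` |
| `  --prop:nsx_isSSHEnabled=True \` |
| `  nsx-unified-appliance-4.0.1.1.ova \` |
| `  'vi://administrator@vsphere.local@<vcenter-fqdn>/<Datacenter>/host/<Cluster>'` |
+-----------------------------------------------------------------------+

Once the appliance powers on, the preconfigured first-boot scripts run exactly as described in Step 11 above.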
However, we have only just started with the deployment and there is still a long way to go. So, grab a cup of tea or coffee, as in the next section we dive into the base configuration of our newly deployed NSX Manager.

[NSX Manager Base Configuration](#toc.xhtml#s32a) {#c02.xhtml#s32.sec2}
=================================================

After the first NSX Manager appliance is deployed, there are a few initial configurations that need to be completed before proceeding further. Some of these initial configurations are optional in nature but recommended; they include, but are not limited to, registering the vCenter server with NSX Manager, installing licenses and deploying additional NSX Manager nodes to form a cluster. Unlike NSX-V (NSX for vSphere), which supports only one vCenter server per NSX-V Manager, NSX Manager allows multiple compute managers (vCenter servers) to be registered: up to a maximum of 16 in a Large deployment and 2 in a Medium deployment. The benefit of registering the vCenter server with NSX Manager is that NSX polls the vCenter server to collect cluster information for Data Plane preparation. Also, deploying additional NSX Manager nodes or Edge nodes from NSX itself requires a compute manager. But in order to perform these configurations, access to NSX Manager is needed, which can be achieved in multiple ways.

**Tip**: Use the VMware Configuration Maximums tool to check the supported maximums for each component.

[Accessing NSX Manager](#toc.xhtml#s33a) {#c02.xhtml#s33.sec3}
========================================

NSX Manager can be accessed in the following ways:

- Through a supported web browser by logging into the NSX UI
- Through the CLI
- Through the API

The most commonly used method is a web browser, where the user can perform configuration through an easy-to-use **GUI (graphical user interface)**. This is the most hands-on method, as the configuration is completed in a step-by-step fashion.

**Accessing NSX Manager using NSX UI**

**Step 1:** Open a web browser and enter the `FQDN (Fully Qualified Domain Name)` or IP address of the NSX Manager appliance.

**Step 2:** On the `credential` prompt, provide a `username` and `password` to log in to NSX Manager. If this is the first login, provide `admin` as the username and the password set earlier during deployment. *[Figure 2.6](#c02.xhtml#fig2_6)* shows such a prompt in action.

![](images/2.6.jpg)

**Figure 2.6:** Accessing NSX Manager using a web browser

**Step 3:** When logging in to NSX Manager for the first time, accept the `EULA` presented and skip the `introductory walkthrough` of the UI. Also, either accept joining the `VMware Customer Experience Improvement Program (CEIP)` or uncheck the box to opt out of VMware CEIP. *[Figure 2.7](#c02.xhtml#fig2_7)* shows the first login screen after accepting the EULA and CEIP.

**Figure 2.7:** NSX Manager User Interface

**Accessing NSX Manager using NSX CLI**

In order to access NSX Manager using the CLI, either an SSH session or a console session needs to be opened and admin credentials need to be provided. During deployment, NSX Manager provides an option to enable SSH; however, if it is not selected during deployment, then the SSH service first needs to be started and enabled in NSX Manager using the virtual machine console from the ESXi host or vCenter server. The following console output shows the commands to start the SSH service, set it to start on boot automatically and get the status of the SSH service.
+-----------------------------------------------------------------------+
| `sa-nsx-01> start service ssh` |
| `sa-nsx-01> set service ssh start-on-boot` |
| `sa-nsx-01> get service ssh` |
| `Mon Jan 02 2023 UTC 22:47:45.910` |
| `Service name:      ssh` |
| `Service state:     running` |
| `Start on boot:     True` |
| `Root login:        enabled` |
+-----------------------------------------------------------------------+

**Table 2.1:** Console output to start the SSH service on the NSX Manager node

The NSX CLI can be accessed from an SSH client after SSH services are started on NSX Manager; use the `list` command to show all available commands to retrieve information or configure the NSX environment.

**Tip**: The NSX CLI is available across all NSX-prepared infrastructure (NSX Manager, ESXi hosts and Edge Nodes) and is consistent across all nodes.

**Accessing NSX Manager using API**

NSX Manager exposes a RESTful API, and the API drives everything; even the NSX CLI and NSX UI are API clients. Usually, the API is used when the GUI cannot be used or when certain tasks need to be automated using scripts or tools such as Terraform or Ansible. NSX Manager accepts API requests on TCP port 443 over HTTPS and supports HTTP actions such as:

- **GET**: Command to query, read and retrieve NSX objects
- **PUT, POST, PATCH**: Command to create, modify or update NSX objects
- **DELETE**: Command to delete NSX objects

A complete list of API URI paths and methods is available in the VMware NSX API documentation, which also includes sample code.

[Registering vCenter Server with NSX Manager](#toc.xhtml#s34a) {#c02.xhtml#s34.sec3}
==============================================================

After accessing the NSX Manager for the first time, one of the important tasks to complete is registering a vCenter server or, in NSX Manager terms, a compute manager. As stated earlier, registering a vCenter server with NSX Manager enables multiple functions, such as deploying additional NSX Manager nodes directly from the NSX UI, preparing ESXi hosts as Transport Nodes without manual registration and deeper integration with vSphere for Kubernetes. Following is the step-by-step process to register the vCenter server with NSX Manager.

**Step 1:** In order to register the vCenter server with NSX Manager, log in to the `NSX Manager UI` with admin privileges. Navigate to the `System` menu and in the left pane, expand `Fabric`. Click on `Compute Managers`. Any registered vCenter server will be visible here; if this is a new environment, the screen will be empty. *[Figure 2.8](#c02.xhtml#fig2_8)* shows the `Compute Managers` window with no registered vCenter server.

![](images/2.8.jpg)

**Figure 2.8:** NSX Compute Managers

**Step 2:** Click on the `+` `(Plus)` icon `Add Compute Manager` and a new dialog box named `New Compute Manager` will open, as shown in *[Figure 2.9](#c02.xhtml#fig2_9)*.

**Figure 2.9:** New Compute Manager

**Step 3:** Enter the required details and click on `Add`. A few key details to be entered here are:

1. **Name**: Name of the compute manager in the NSX UI
2. **Type**: vCenter
3. **FQDN or IP Address**: Either the DNS name or IP address of the vCenter server to be connected
4. **Port:** 443 (default); needs to be changed if vCenter is using a custom port
5. **Username**: User account name with the required privileges to access the vCenter server (a detailed list of privileges is available in the VMware documentation)
6. **Password:** User account password
7. 
**Create Service Account**: Optional; can be enabled for vSphere 7.0 and above. If using vSphere 7.0, it is recommended to create a service account
8. **Enable Trust**: Optional; can be enabled for vSphere 7.0 and above
9. **Access Level**: Can be set if Trust is enabled.
    1. **Full Access**: Full access is required for the integration of NSX Manager with vSphere for Kubernetes and for using vSphere Lifecycle Manager to manage host preparation
    2. **Limited Access**: Can be selected if only using vSphere Lifecycle Manager to manage host preparation

**Step 4:** A warning prompt will pop up alerting about a `Missing Thumbprint`. Validate the thumbprint and click on `Add`. *[Figure 2.10](#c02.xhtml#fig2_10)* shows such a pop-up.

![](images/2.10.jpg)

**Figure 2.10:** Warning pop-up for missing thumbprint

**Step 5:** Give it a minute or two and the vCenter server will be registered in NSX Manager with the connection status as `Up`. *[Figure 2.11](#c02.xhtml#fig2_11)* shows the vCenter server registered with NSX Manager.

**Figure 2.11:** vCenter server registered with NSX Manager

[Adding Licenses in NSX Manager](#toc.xhtml#s35a) {#c02.xhtml#s35.sec3}
=================================================

Another important task is to add licenses in NSX Manager, as without appropriate licenses NSX Manager will not let you prepare ESXi hosts as Transport Nodes. There is an evaluation license to test NSX free of cost, but it is not recommended for production deployments. To add licenses, navigate to the **System** menu and in the left-hand pane, select `Licenses` under `Settings`. Click on the `+` **(plus)** icon `Add License` and enter the license key. *[Figure 2.12](#c02.xhtml#fig2_12)* shows the license window with a newly added license key.

![](images/2.12.jpg)

**Figure 2.12:** Adding a new license key in NSX UI

A few other important tasks are to configure backups for NSX Manager (which can be done in Backup & Restore under the System tab), replace the self-signed certificate with a CA-signed certificate and integrate NSX Manager with identity providers such as Active Directory (AD). These tasks are covered in detail with a step-by-step process in *[Chapter 14, Monitoring and Managing NSX](#c14.xhtml)*.

[Deploying Additional NSX Manager Node](#toc.xhtml#s36a) {#c02.xhtml#s36.sec2}
========================================================

After the first NSX Manager appliance is deployed and configured and the vCenter server has been registered as a compute manager, the second and third NSX Manager nodes can be deployed directly from the NSX UI to form the NSX Management cluster, providing high availability and scalability in the NSX environment. The step-by-step process to deploy an NSX Manager node from the NSX UI is as follows:

**Step 1:** After logging into the first NSX Manager node, go to the `System` tab and click on `Appliances` in the left-hand pane under `Configuration.` The `Appliances` window shows the existing NSX Manager nodes and their status. In the same window, there are options to deploy additional NSX Manager nodes, configure a Virtual IP and add an NSX Advanced Load Balancer appliance (covered in detail in *[Chapter 12, NSX DataCenter Services -- 2](#c12.xhtml)*). *[Figure 2.13](#c02.xhtml#fig2_13)* shows details of the first NSX Manager deployed earlier.

**Figure 2.13:** Details of NSX Manager appliance

**Step 2:** Click on `Add NSX Appliance` and a new window will pop up with the following details to be filled in:

1. 
`Name`: Provide the FQDN or hostname for the NSX Manager node
2. `Management IP Details`: Provide the IP details to access the newly deployed NSX Manager over the network
3. `DNS Server, NTP Server` and `Search Domains` are pre-filled from the first NSX Manager's setup. These can be overridden if required.
4. `Node Size`: Select the appropriate appliance size as per requirement. It is recommended to keep the NSX Manager size the same for all three nodes.

After providing all details, it will look similar to *[Figure 2.14](#c02.xhtml#fig2_14)*. Click `Next` to continue the deployment.

![](images/2.14.jpg)

**Figure 2.14:** Appliance Information for additional NSX Manager Node

**Step 3:** The second step is the resource `configuration` where the NSX Manager virtual machine will be deployed. Select the appropriate `Compute Manager`, `Cluster`, `Datastore` and `Network port group` to provide compute, storage and network resources to the virtual machine. *[Figure 2.15](#c02.xhtml#fig2_15)* shows an example of such a configuration.

**Figure 2.15:** Configuration of NSX Manager Virtual Machine

**Step 4:** In the last step, provide the root password for the NSX Manager appliance and select whether SSH login will be allowed or disallowed. The audit and admin CLI passwords can be overridden if required. Click `Install Appliance` to start the deployment of NSX Manager, as visible in *[Figure 2.16](#c02.xhtml#fig2_16)*. However, once the new node joins the NSX cluster, the system root, audit and admin credentials will be overridden with the first NSX Manager node's credentials.

![](images/2.16.jpg)

**Figure 2.16:** Final step to deploy NSX Manager from UI

This will start the deployment of the NSX Manager node. As a first step, NSX Manager directs the vCenter server to deploy the OVF template on the selected resources. After successful deployment of the NSX Manager template, it is powered on and the first-boot scripts run just as for the first node. *[Figure 2.17](#c02.xhtml#fig2_17)* is a capture of the vCenter tasks directed by NSX Manager.

**Figure 2.17:** Tasks initiated by NSX Manager in the vCenter server

After the additional NSX Manager is fully deployed, it initiates a join request to the first NSX Manager to join the NSX Management cluster. After a successful join, services are distributed across all nodes in the management cluster, the cluster becomes stable as in *[Figure 2.18](#c02.xhtml#fig2_18)* and additional nodes can be deployed.

![](images/2.18.jpg)

**Figure 2.18:** Additional NSX node deployed successfully

[Creating NSX Management Cluster using CLI](#toc.xhtml#s37a) {#c02.xhtml#s37.sec2}
============================================================

Deploying NSX Manager nodes using the NSX UI takes care of forming a management cluster without the need to run any commands, but this requires a compute manager to be registered with NSX Manager. What if all three NSX Manager instances are deployed using the OVF template, or a new NSX Manager instance is deployed from the OVF template instead of the NSX UI? In that case, all NSX Manager nodes will be running independently, unaware of each other. In such cases, the NSX CLI comes to the rescue: with its help, independent instances can be joined together to form a management cluster.

**Step 1:** Open an SSH session or console session to the NSX Manager deployed first and log in using admin credentials.

**Step 2:** Run the following commands in sequence and capture their output:

`get certificate api thumbprint`

1. 
Capture the command output string, which is unique to this node.

`get cluster config`

2. Capture the Cluster ID of the first node.

*[Table 2.2](#c02.xhtml#tab2_2)* shows the console output with the thumbprint string and Cluster Id to be captured.

+-----------------------------------------------------------------------+
| `sa-nsx-01> get certificate api thumbprint` |
| `Tue Jan 03 2023 UTC 15:53:35.246` |
| `9b17d790b1bfd10799906649c609c4e0e02f503fc05b99e4623695e674b26b0a` |
| `sa-nsx-01> get cluster config` |
| `Tue Jan 03 2023 UTC 15:53:43.722` |
| `Cluster Id: 46231ef6-ef07-4f91-9c61-162fb1411b54` |
| `Cluster Configuration Version: 1` |
+-----------------------------------------------------------------------+

**Table 2.2:** Console output on the first NSX Manager node

**Step 3:** Open an SSH session or console session to the newly deployed NSX Manager node and log in using admin credentials.

**Step 4:** Run the join command in the following format to join the new NSX Manager node with the first NSX Manager instance:

`join <Manager-IP> cluster-id <Cluster-ID> username <Manager-username> password <Manager-password> thumbprint <Manager-thumbprint>`

Here,

1. `Manager-IP`: IP address of the first NSX Manager node
2. `Cluster-ID`: Cluster ID of the first NSX Manager node, as captured in Step 2
3. `Manager-username`: Admin username of the first NSX Manager
4. `Manager-password`: Admin password of the first NSX Manager
5. `Manager-thumbprint`: Thumbprint of the first NSX Manager, as captured in Step 2

*[Table 2.3](#c02.xhtml#tab2_3)* provides an example output of the preceding command when run on a newly deployed NSX Manager node that is not part of the management cluster.

+-----------------------------------------------------------------------+
| `sa-nsx-03> join 10.10.10.21 cluster-id 46231ef6-ef07-4f91-9c61-162fb1411b54 username admin password xxxxxxxxxxx thumbprint 9b17d790b1bfd10799906649c609c4e0e02f503fc05b99e4623695e674b26b0a` |
| `Data on this node will be lost. Are you sure? (yes/no): yes` |
| `Join operation successful. Services are being restarted. Cluster may take some time to stabilize.` |
+-----------------------------------------------------------------------+

**Table 2.3:** Console output on the new NSX Manager node

**Step 5:** Log in to the first NSX Manager UI and all NSX Manager nodes will be present under Appliances in the System tab.

**Tip**: If there is a need to change the size of the NSX Manager appliances, deploy a new NSX Manager instance with the desired node size. Once the new NSX Manager instance is fully deployed and part of the cluster, delete the old, smaller NSX Manager node. Repeat this process for the remaining NSX Manager instances until the node size is uniform across the NSX Management cluster.

[Configuring Virtual IP for Cluster](#toc.xhtml#s38a) {#c02.xhtml#s38.sec2}
=====================================================

Now that three NSX Manager nodes are grouped together to form a management cluster, the database is replicated to all three nodes, providing high availability of services. The NSX Management cluster serves two important functions. The first is to provide the northbound entry point for system configuration by different endpoints and for user access. The second is to establish communication between the different NSX components. As covered in *[Chapter 1, Introduction to NSX Datacenter](#c01.xhtml)*, the controller role takes care of establishing communication between the controller and Transport Nodes and of distributing Transport Nodes between the different NSX Manager nodes with the help of sharding.
Similarly, the NSX Management cluster can provide different availability modes for northbound API and user access. These different models are:

- Default Deployment with no common IP address
- Configuring NSX Manager with Cluster VIP
- NSX Manager with External Load Balancer

Let's discuss these different models in detail.

[Default Deployment with no common IP address](#toc.xhtml#s39a) {#c02.xhtml#s39.sec3}
===============================================================

In this simplest model, all three NSX Managers are deployed without any further configuration. There is no common IP address shared between the different nodes. Different endpoints (users or external systems) can connect to different NSX Manager nodes using their unique FQDNs or IP addresses. However, availability for API or GUI access is handled outside of the NSX Manager appliances. In case of failure of any node, external systems or scripts are required to point to a different, healthy node, which may require manual intervention. Similarly, administrators or users need to change the address in their web browser in order to connect to a healthy node, which is not desirable for many organizations.

[Configuring NSX Manager with Cluster VIP](#toc.xhtml#s40a) {#c02.xhtml#s40.sec3}
===========================================================

This model works on the Active/Standby availability model, where a virtual IP address is configured on the NSX management cluster. Different endpoints (users or external systems) connect to the virtual IP, which provides node-level redundancy. In this model, out of the three nodes in the management cluster, one node is assigned as the owner of the virtual IP. All requests hitting the virtual IP/FQDN are sent to the owner node. There is no load distribution mechanism with this approach, and all northbound traffic is handled by a single node. In case of failure of the owner node, a new node out of the remaining healthy nodes is selected as the owner of the virtual IP, which brings us to one key consideration for this model. Because the virtual IP remains the same, the model assumes all three nodes belong to the same subnet. The cluster VIP feature uses GARP to update the MAC address and ARP tables of the upstream network devices. Hence, it is mandatory for all NSX Manager instances in the same NSX management cluster to be in the same subnet. *[Figure 2.19](#c02.xhtml#fig2_19)* displays three NSX Manager nodes running with a cluster VIP, with one node selected as the owner node.

**Figure 2.19:** NSX Management Cluster with Cluster VIP

The cluster VIP can be configured in the NSX UI by navigating to the `System` tab and selecting `Appliances` under `Configuration`. Click on `Set Virtual IP` and then provide a free IP address from the same subnet as the NSX Manager nodes to configure it on the NSX management cluster. The NSX UI screen will refresh, and the configured cluster VIP will be visible along with the owner node IP, as captured in *[Figure 2.20](#c02.xhtml#fig2_20)*.

![](images/2.19.jpg)

**Figure 2.20:** NSX Cluster Virtual IP with owner node information

[NSX Manager with External Load Balancer](#toc.xhtml#s41a) {#c02.xhtml#s41.sec3}
==========================================================

An external load balancer can be configured with the NSX management cluster to distribute incoming requests to the different NSX Manager nodes. A VIP is configured on the load balancer, and the NSX Manager nodes are configured as physical servers in a server pool.
This model is helpful where NSX Manager nodes need to be in different subnets due to the underlying physical topology (for example, different VLANs across different racks when the NSX Manager nodes need to run in different racks). This model introduces an additional component to manage, and the configuration complexity depends on the load balancer model in use. NSX Advanced Load Balancer (formerly Avi Networks) is also supported for this type of configuration. A key point to note here is that source IP persistence is required for session-based authentication (when the NSX Manager is accessed from a web browser). Other types of authentication supported with an external load balancer are:

- HTTP Basic Authentication
- Authentication using an X.509 certificate and a Principal Identity
- Authentication in VMware Cloud on AWS (VMC)

*[Figure 2.21](#c02.xhtml#fig2_21)* presents a scenario where NSX Manager nodes are load balanced by an external LB and source persistence is in use to redirect requests from the same client to the same NSX Manager node.

**Figure 2.21:** NSX Management cluster with an external Load balancer

[Validating NSX Management Cluster](#toc.xhtml#s42a) {#c02.xhtml#s42.sec2}
====================================================

Well, hats off to you for coming this far! You have successfully deployed your first NSX management cluster and configured a cluster VIP to provide redundancy in the NSX environment. It's time to look behind the scenes and validate what we have learned and deployed. Let's open an SSH session or console session on an NSX Manager instance; the first command that can be run to validate the management cluster is:

`get cluster status`

This command provides the overall status of the management cluster, the services running in the management cluster and their individual status on each node. As visible in the example output from the NSX Manager node in *[Table 2.4](#c02.xhtml#tab2_4)*, the different roles such as Manager, HTTPS, Controller and Corfu are running in a stable state, with the overall status reported as STABLE.
+-----------------------------------------------------------------------+
| `sa-nsx-03> get cluster status` |
| `Tue Jan 03 2023 UTC 18:28:00.994` |
| `Cluster Id: 46231ef6-ef07-4f91-9c61-162fb1411b54` |
| `Overall Status: STABLE` |
| `Group Type: DATASTORE` |
| `Group Status: STABLE` |
| `Members:` |
| `    UUID                                   FQDN                  IP               STATUS` |
| `    e3101642-5ab1-c518-b5c3-24a8853f7f3c   sa-nsx-01             10.10.10.21      UP` |
| `    73f2617f-54e3-4d61-89c5-0a9ce75a860b   sa-nsx-02.lab.local   10.10.10.22      UP` |
| `    ec311642-6ec9-931b-2207-26dce73dfe70   sa-nsx-03             10.10.10.23      UP` |
| `Group Type: CLUSTER_BOOT_MANAGER` |
| `Group Status: STABLE` |
| `Group Type: CONTROLLER` |
| `Group Status: STABLE` |
| `Group Type: MANAGER` |
| `Group Status: STABLE` |
| `Group Type: HTTPS` |
| `Group Status: STABLE` |
| `Group Type: MESSAGING-MANAGER` |
| `Group Status: STABLE` |
| `Group Type: MONITORING` |
| `Group Status: STABLE` |
| `Group Type: IDPS_REPORTING` |
| `Group Status: STABLE` |
| `Group Type: SITE_MANAGER` |
| `Group Status: STABLE` |
| `Group Type: CM-INVENTORY` |
| `Group Status: STABLE` |
| `Group Type: CORFU_NONCONFIG` |
| `Group Status: STABLE` |
+-----------------------------------------------------------------------+

**Table 2.4:** Cluster Status output on the NSX Manager node

An extended version of the same command, with the **verbose** option, additionally provides the leader (owner) node information for the different services, as shown in the following example output.

+-----------------------------------------------------------------------+
| `sa-nsx-03> get cluster status verbose` |
| `Tue Jan 03 2023 UTC 18:35:41.308` |
| `Cluster Id: 46231ef6-ef07-4f91-9c61-162fb1411b54` |
| `Overall Status: STABLE` |
| `Leaders:` |
+-----------------------------------------------------------------------+
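The same health information can also be retrieved over the REST API, which is useful for monitoring scripts or external tooling. The following is a minimal sketch using the `/api/v1/cluster/status` endpoint documented in the VMware NSX API reference; the manager FQDN and credentials are placeholders, and `-k` (which skips certificate validation) should only be used in a lab.

+-----------------------------------------------------------------------+
| `# Sketch: query the management cluster status over the NSX REST API.` |
| `# Replace the FQDN/VIP and credentials with your own values.` |
| `curl -k -u admin:'<admin-password>' \` |
| `  https://sa-nsx-01.lab.local/api/v1/cluster/status` |
| `# The JSON response reports the overall cluster status and the` |
| `# per-group status, which should show STABLE on a healthy cluster.` |
+-----------------------------------------------------------------------+

The same curl pattern can be reused against the other API paths listed in the VMware NSX API documentation.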
