


Full Transcript


Nutanix AHV - 200 - Technical Overview

Nutanix AHV is a virtualization platform that powers virtual machines (VMs) and containers for applications and cloud-native workloads on-premises and in public clouds. Ten years after its release in 2014, AHV has evolved into a virtualization technology of choice, allowing IT operations to easily scale and stretch across private data centers and various public clouds. In this lesson, you will learn all about AHV, its features, and how best to position it for your prospects and customers.

INTRODUCTION AND OVERVIEW
Getting Started
NCP Overview

AHV
Invisible Virtualization with AHV
Introduction to AHV
Key Features & Use Cases – Part I
Key Features & Use Cases – Part 2
AHV Architecture
AHV Security & Migration

AHV NETWORK
AHV Networking Deep Dive
Cisco ACI

SUMMARY AND RESOURCES
Resources
Feedback

Lesson 1 of 12: Getting Started

Course Navigation
Review the following information to learn how to effectively navigate the course.

Scroll: Scroll through the course to progress through the lessons.
Select: Select the images to enlarge or reduce them. Try selecting this image.
Use: Use any smart device to proceed through the course—phone, tablet, and computer.

Navigate Sections
Select the "hamburger" in the top left to open/close the navigation menu. Be sure to scroll to the bottom of each section and, if required, select the down arrow to continue to the next lesson.

Continue Bar
Some lessons and activities are followed by a CONTINUE bar. If you see the lock icon on the bar, go back and make sure you have completed all of the activities.

Lesson 2 of 12: NCP Overview

Nutanix Cloud Platform
The Nutanix Cloud Platform is a secure, resilient, and self-healing platform for building a hybrid multicloud infrastructure that supports all kinds of workloads and use cases across public and private clouds, multiple hypervisors and containers, all with varied compute, storage, and network requirements.

The following shows a high-level architecture of the Nutanix Cloud Platform. It visually represents how the Nutanix Cloud Platform integrates various components and services to provide a comprehensive solution for managing enterprise IT applications across hybrid multicloud environments. The vision is "One platform to run applications and data anywhere."

Layers in the image represent the following elements from top to bottom:
- Applications: application types supported by the Nutanix Cloud Platform
- Nutanix Central: centralized management hub with intelligent operations, self-service, and security features
- Nutanix Data Services: services that provide unified storage, data services, and database management solutions
- Nutanix Cloud Infrastructure: the foundation of the platform; includes various components for network flow and application security, disaster recovery, security features, and cloud infrastructures
- Application and Data Portability: the platform facilitates seamless application and data movement across different environments

Building Blocks of the Nutanix Cloud Platform
The building blocks of the Nutanix Cloud Platform are Nutanix Cloud Infrastructure and Nutanix Cloud Manager. These are the products that fall under each of them to form a complete solution. Nutanix Cloud Infrastructure (NCI) is a distributed infrastructure platform for enterprise IT applications.
NCI software combines compute, storage, and networking resources from a cluster of servers into a single logical pool with integrated resiliency, security, performance, and simplified administration. With NCI, organizations can efficiently deploy and manage data and applications across data center, edge, and cloud environments without the complexity or cost of traditional infrastructure. NCI enables IT to spend less time and money on infrastructure management and empowers users to manage their workloads.

Nutanix Cloud Infrastructure (NCI)
1. AOS Scale-Out Storage
2. AHV Hypervisor
3. Virtual Networking
4. Disaster Recovery
5. Container Services
6. Data and Network Security

Nutanix Cloud Manager (NCM)
1. AI Operations (NCM Intelligent Operations)
2. Self-Service Infrastructure/App Lifecycle Management (NCM Self-Service)
3. Cost Governance
4. Security Central

NCI Inclusions
Select each checkbox as you review the list of key terms and definitions below.
- Nutanix AOS: scale-out storage technology that makes a hyperconverged infrastructure (HCI) possible
- Nutanix AHV: a lightweight cloud hypervisor built into NCI that offers enterprise-grade virtualization capabilities and built-in Kubernetes support
- Nutanix Kubernetes Platform (NKP): an enterprise Kubernetes management solution that simplifies provisioning, operations, and lifecycle management (of Kubernetes)
- Nutanix Disaster Recovery: a natively integrated DR solution that helps organizations stay prepared during a disaster while minimizing data loss and downtime
- Prism: a management UI that includes Prism Element and Prism Central for infrastructure management, authentication, and RBAC; includes REST APIs for integration with existing management systems
- Data and Network Security: comprehensive data security that uses encryption, a built-in local key manager, and software-based firewalls for networks and applications with Flow Network Security and virtual networking

CONTINUE

Lesson 3 of 12: Invisible Virtualization with AHV

Introducing AHV, the Center of Your Cloud Architecture
Nutanix AHV is our enterprise-ready virtualization platform that combines robust capabilities, ease of use, and cost-effectiveness. AHV is natively integrated and included in the Nutanix Cloud Platform. Its simplicity emanates from the single management interface of Prism Central, and it offers advanced functionality for VMs and application containerization. AHV delivers a full-featured experience, providing the most critical features you need to get up and running quickly and easily.

A virtualized environment is meant to abstract the hardware away from the operating system and to help you get the most out of your infrastructure. Realizing the full benefit of your infrastructure shouldn't come at the expense of navigating a labyrinth of little-used features or elaborate deployment options. Nutanix provides simplicity without compromise.

Select each marker below to see the benefits of AHV.
- Powers the Nutanix Cloud Platform
- Secure by Design
- Includes Native Containerization
- Managed by Prism Central
- Unlocks a Cloud Operating Model

When you combine the basic features above with the core functions of manageability, data protection, high availability, operational insight, and performance, AHV provides an enterprise-ready solution, capable of competing with the current industry veterans.
CONTINUE

Legacy Virtualization Is at Odds with the Cloud Operating Model

Questioning the Viability of the Legacy Three-Tier Platform
In today's dynamic business landscape, organizations increasingly question the viability of their three-tier, single-vendor architectures. While these structures have traditionally served well and were once the stalwarts of business infrastructure, they now struggle to keep pace with swift technological shifts. This is especially true for organizations that want to embrace more of a cloud operating model. Three-tier platforms—which segregate computing, storage, and networking into distinct silos—can introduce complexity and inefficiencies, especially when integrating with modern, cloud-native applications. As a result of these complexities and inefficiencies, legacy virtualization and three-tier platforms can now impede an organization's progression.

Legacy Three-Tier Platform

Legacy Virtualization and Three-Tier Platform Limitations
Select each hotspot (+) below to learn more about the limitations of legacy virtualization and three-tier platforms.

Inability to Scale On Demand
- Contains rigid architectures that are not suited to modern businesses and applications' dynamic, on-demand needs
- Lacks the scalability, flexibility, and speed needed to rapidly deploy and adjust to changing business applications and services

No Longer Cost-Effective
Requires cumbersome maintenance and updates, which leads to:
- Cost-prohibitive upgrades
- Longer lead times for provisioning resources and deploying applications

Complicated Capabilities
- Lacks self-service capabilities
- Contains complicated DR and business continuance capabilities
- Requires multiple management interfaces
- Can complicate application development and/or deployment

Outdated Infrastructure
As businesses strive for agility, innovation, and quick response times to market shifts, being tethered to these outdated infrastructures can:
- Stifle growth
- Curb competitive advantage
- Hinder digital transformation efforts

The escalating demand for diverse applications, coupled with the constraints of a single-vendor setup or monolithic infrastructures, is becoming a pressing concern. Consequently, enterprises are looking for infrastructure solutions that align with the cloud paradigm and accommodate ever-growing application demands.

CONTINUE

What Is a Cloud Operating Model?
A cloud operating model for IT organizations delineates a structured methodology for deploying and managing IT services via cloud-based infrastructures. It refers to the approach that organizations adopt when they transition to or design operations in a cloud computing environment. A key aspect of the cloud operating model is self-service management: application teams can request and provision services without relying on lengthy, manual setup by the infrastructure team.

The cloud operating model is tailored for cloud computing environments and encompasses:
- Infrastructure
- Applications
- Processes
- Operational governance
- Self-service management and provisioning

The Approach to the Cloud Operating Model
Scalable: Central to the cloud operating model is the on-demand provisioning of resources, scalability, and automated orchestration.
Innovative: Instead of traditional on-premises setups with their capital expenditures and static capacities, the cloud operating model takes a consumption-based approach, dynamically allocating resources based on real-time demand.
Resilient: The cloud operating model integrates practices like Infrastructure as Code (IaC), continuous integration/continuous deployment (CI/CD) pipelines, and microservices architectures.
Cost-Effective: The cloud operating model's management aspect emphasizes governance, compliance, cost monitoring, and performance optimization, enabling a holistic view of resources, ensuring security, and allowing for strategic decision-making.
Agile: By adopting and effectively managing the cloud operating model, IT organizations achieve heightened agility and optimized resource utilization and are poised at the cutting edge of technological progression.

CONTINUE

Why AHV?
AHV is the virtualization platform that has been built for the modern era and enables the cloud operating model:
- Simple, tailored operations
- Secure by design
- Enterprise-ready and built on modern open-source components
- Performance-optimized

What Does AHV Include?
AHV is our virtualization platform built on the KVM open-source hypervisor and completely integrated into our full Nutanix stack. KVM has been proven in every vertical and at every scale and is core to a majority of public cloud providers. AHV includes:
1. The best parts of KVM and QEMU, with performance-tuning optimizations unique to our stack plus functionality extensions and enhancements that make deploying, upgrading, and operating simple
2. All enterprise features, such as high availability, affinity policies, on-demand cross-cluster migrations, vGPU and GPU passthrough, and a native Kubernetes engine
3. Prism Central, which simplifies managing VMs and workloads and provides infrastructure management, monitoring, and health, all from a single view—from on-premises multi-clusters to the single cluster at the edge to the public cloud
4. Secure Boot, vTPM, and Secure Nutanix Guest Tools (NGT) to provide a security-first design approach for protection
5. Nutanix LCM (Lifecycle Management) to provide lifecycle and maintenance needs for staying up-to-date
6. 815+ validated applications and solutions, including names like Cisco, NVIDIA, Microsoft, Cohesity, Juniper, Red Hat, and many, many more

CONTINUE

Lesson 4 of 12: Introduction to AHV

Why Consider Different Virtualization?
You have likely been running some other type of virtualization over the last decade and may have a range of opinions about it. Few would dispute that vCenter is challenging to manage in HA or multi-site scenarios. The appliance eased some of the old pains, but it is still cumbersome in HA scenarios and when you have many instances to manage.

Why AHV? AHV Is a Linux-Based Virtualization Appliance and Platform for Greater Innovation
AHV:
- Uses the best from open-source software
- Is especially tailored for Nutanix integration
- Is easy to secure and upgrade
- Makes virtualization invisible
- Has a hypervisor that is simple to design and deploy, easy to manage, and invisible (as storage and other data center functions are)

AHV Provides Enterprise-Ready Virtualization
Let's review some of the leading benefits that Nutanix and over 70% of our customer deployments consider indicative of AHV's enterprise readiness. Select each card to view both sides and use the arrows to navigate through them all.

Secure
AHV is hardened by default, secured through automation, and ready for what's next.
The critical layers through the stack, such as hypervisor, storage, and data, are hardened by default. (1 of 4)

Powerful
AHV focuses on running your applications without complexity, which is why it is built and tuned to efficiently run demanding applications and work seamlessly with distributed workloads. (2 of 4)

Powers All Clouds
AHV is a powerful, hybrid virtualization platform that can power all of your applications—across the data center, edge sites, and public clouds. (3 of 4)

Fully Integrated
AHV is built on the foundation of our data center's opinionated design, so it can take advantage of a full-stack hybrid solution. (4 of 4)

CONTINUE

Today, we have an adoption rate of over 69%! This rate is based on the percentage of nodes deployed with AHV over the rolling average of the last 4 quarters.

AHV Growth
AHV adoption growth is accelerating and providing continuous enterprise value delivery. Review the AHV Adoption Growth graph to learn more. Reflecting on AHV growth, we need to state:
- There is a huge change from 3–4 years ago (~40%) to 61% today
- Our goal is not 100%
- The adoption of AHV is a journey, but we've reached the inflection point where the major features are available for many enterprise customers
- We are uniquely positioned to provide the best on-premises cloud experience that seamlessly extends to public cloud

CONTINUE

AHV Summary
In summary, we just discussed why AHV is an excellent choice for the foundation of your data center and a viable choice today and in the future for enterprises and smaller organizations. Our approach of combining the proven capabilities of the open-source offering with opinionated Nutanix features makes the platform easy to deploy, run, secure, and upgrade. We have made the hypervisor layer invisible, as we did with storage and other data center functions, because of the benefits of a simple-to-design-and-deploy hypervisor and ease of management.

CONTINUE

Simplified Management with Prism Central

Prism Central: Easier Management and Simple Deployment
Prism Central (PC) is a significant advantage here. Not only can we eliminate many independent vSphere products, but the overall management story is far less complex. PC offers a simple automated deployment method from a cluster.

Prism Central: Suited to Size and Easily Upgraded
PC deployment is sized to fit by asking a couple of simple questions:
- Do you need high availability?
- What size of environment will it need to manage?
Once PC is deployed, it is upgraded with a simple one-click upgrade method, regardless of whether it is a single small instance or a highly available XL deployment.

CONTINUE

Review the following multiple response question, choose all answers that apply, and then select Submit.
What are some benefits of choosing AHV?
- Enterprise-ready
- Performance-optimized
- Tailored operations
- Secure
- Powers the public and private cloud
SUBMIT

Lesson 5 of 12: Key Features & Use Cases – Part I

AHV Features and Use Cases

Acropolis Dynamic Scheduler (ADS)
Acropolis Dynamic Scheduler (ADS) is a key component in the AHV stack. It runs in the background and continuously monitors and optimizes the infrastructure. ADS is responsible for VM migrations and placements during many operations, including:
- Resolving host hotspots (hosts with high CPU and storage usage)
- Maintaining high availability guarantees
- Optimizing VM placement to free up resources
- Enforcing administrator-defined policies
ADS includes numerous key features.
Select each hotspot (+) below to learn more.

Initial VM placement
ADS provides initial VM placement, which involves choosing the AHV host for a VM and defragmenting the cluster (if needed) to ensure sufficient resources are available for the new VM.

Background policy enforcement (e.g., Affinity/Anti-Affinity)
ADS provides background policy enforcement, which might include moving VMs around to respect VM-host affinity policies and VM-VM anti-affinity policies.

Integrated high availability guarantees
When HA guarantees are enabled, ADS will move VMs in the running system to ensure all VMs on each host can be recovered following the failure of an individual host.

Integrated storage and CPU hotspot mitigation
ADS monitors each host, and when a hotspot is detected, it works to resolve it.

Dynamic GPU management
ADS dynamically manages NVIDIA GPUs to provide specific vGPU profiles based on the VMs running in the system.

Remediation plans based on cost of movement
ADS creates remediation plans based on the cost of movement to address each type of case mentioned above. Because ADS focuses on hotspot mitigation and cost-minimizing remediation plans, it results in fewer VM movements than an active load-balancing scenario would require.

ADS Hotspot Mitigation Example
Let's look at an example of an ADS CPU hotspot mitigation.
1. Host 1 uses 90% CPU, more than the 85% threshold. No other hosts can accommodate host 1's VMs without exceeding 85%. Therefore, ADS identifies that VM movements are required.
2. ADS computes a plan to move one of the 8 GB VMs from host 3 to host 2, making space for one of the VMs on host 1 to move over. The amount of memory on each of the VMs on host 1 is considered when deciding which VM can be moved most easily.
3. ADS can now move one of the VMs using 45% CPU to host 3. It moves the VM identified as the lowest cost, which, in this case, is the one with the least RAM.
4. The hotspots have been mitigated, and all hosts are under the threshold.
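To make the scheduling logic concrete, here is a minimal Python sketch of the threshold check and lowest-cost selection the example walks through. The 85% CPU threshold and the least-RAM cost rule come straight from the example above; the function names and data shapes are illustrative, not the actual ADS implementation.

```python
# Illustrative sketch of ADS-style hotspot detection and lowest-cost VM
# selection. Not Nutanix code: the 85% threshold and "least RAM is cheapest
# to move" rule come from the worked example; the data model is hypothetical.

CPU_THRESHOLD = 0.85  # hotspot threshold from the example above

def find_hotspots(hosts):
    """Return hosts whose CPU usage exceeds the threshold."""
    return [h for h in hosts if h["cpu_usage"] > CPU_THRESHOLD]

def cheapest_vm_to_move(host):
    """Pick the migration candidate with the lowest cost (least RAM here)."""
    return min(host["vms"], key=lambda vm: vm["ram_gb"])

hosts = [
    {"name": "host1", "cpu_usage": 0.90,
     "vms": [{"name": "vm-a", "ram_gb": 8}, {"name": "vm-b", "ram_gb": 16}]},
    {"name": "host2", "cpu_usage": 0.60, "vms": []},
    {"name": "host3", "cpu_usage": 0.70, "vms": []},
]

for hot in find_hotspots(hosts):
    vm = cheapest_vm_to_move(hot)
    print(f"{hot['name']} is a hotspot; migrate {vm['name']} "
          f"({vm['ram_gb']} GB RAM) to relieve it")
```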
CONTINUE

High Availability (HA) Preservation
AHV VM HA is a feature built to ensure VM availability in the event of a host or block outage. In the event of a host failure, the VMs previously running on that host will be restarted on other healthy nodes throughout the cluster. The Acropolis Leader is responsible for restarting the VM(s) on the healthy host(s).

High Availability (HA) Preservation – Restarts VMs on another host during host unavailability, with guaranteed or best-effort priority.

HA – Host Monitoring
Once the libvirt connection goes down, the countdown to the HA restart is initiated. Should the libvirt connection fail to be re-established within the timeout, Acropolis will restart the VMs that were running on the disconnected host. When this occurs, VMs should be restarted within 120 seconds. In the event the Acropolis Leader becomes partitioned, isolated, or fails, a new Acropolis Leader is elected on the healthy portion of the cluster. If a cluster becomes partitioned (e.g., X nodes can't talk to the other Y nodes), the side with quorum will remain up, and VM(s) will be restarted on those hosts.

There are two main modes for VM HA. Select each card to view both sides and learn more about HA – Host Monitoring.

Default
This mode requires no configuration and is included by default when installing an AHV-based Nutanix cluster. When an AHV host becomes unavailable, the VMs that were running on the failed host are restarted on other healthy hosts on a best-effort basis, subject to available resources.

Guarantee
This nondefault configuration reserves space across the AHV hosts in the cluster to guarantee that all failed VMs can restart on other hosts in the AHV cluster during a host failure. You select this mode by enabling the Enable Resource Reservations option.

Resource Reservations
When using the Guarantee mode for VM HA, the system reserves host resources for VMs. The amount of resources reserved is summarized as follows:
- If all containers are RF2 (FT1), one "host" worth of resources is reserved.
- If any containers are RF3 (FT2), two "hosts" worth of resources are reserved.
When hosts have uneven memory capacities, the system uses the largest host's memory capacity when determining how much to reserve per host.
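The reservation rule above reduces to simple arithmetic, sketched below in Python. The RF2/RF3 host counts and the largest-host sizing come from the text; the function itself is illustrative, not the Acropolis reservation algorithm.

```python
# Sketch of the Guarantee-mode reservation rule described above: reserve one
# "host" of memory for RF2 (FT1) containers, two for RF3 (FT2), sized by the
# largest host in the cluster. Illustrative only, not Nutanix code.

def reserved_memory_gb(host_memory_gb, any_rf3_container):
    """host_memory_gb: per-host memory capacities (possibly uneven)."""
    hosts_to_reserve = 2 if any_rf3_container else 1
    # With uneven hosts, the largest host's capacity drives the reservation.
    return hosts_to_reserve * max(host_memory_gb)

print(reserved_memory_gb([512, 512, 768], any_rf3_container=False))  # 768
print(reserved_memory_gb([512, 512, 768], any_rf3_container=True))   # 1536
```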
CONTINUE

AHV Templates
AHV has always had the image library. This library focused on capturing the data with a single vDisk so that it could be easily cloned, but input from the administrator was still needed to declare the CPU, memory, and network details. VM Templates take this concept to the next level of simplicity and provide a familiar construct for administrators who have utilized templates on other hypervisors. AHV VM Templates are created from existing virtual machines, inheriting the attributes of the defining VM, such as the CPU, memory, vDisks, and networking details. The template can then be configured to customize the guest OS upon deployment and can optionally provide a Windows license key.

AHV Templates – Allow for the creation of VMs on AHV from a standard template to simplify and standardize bulk VM creation.

Benefits of Templates
Templates allow for:
- Maintenance of multiple versions
- Easy updates: updates can be applied without the need to create a new template (e.g., operating system and application patches)
- Administrator choice: administrators can choose which version of the template is active, allowing updates to be staged ahead of time or the ability to switch back to a previous version if needed

Select Start, then utilize the arrows to view other key benefits of VM templates.

Template Features
VM Templates provide simplicity and a familiar construct for administrators who have utilized templates on other hypervisors.
Step 1: AHV's VM templates provide the capability to capture all aspects of the VM in the template (e.g., CPU, network, and memory).
Step 2: AHV's VM templates allow for guest customization (e.g., Windows and Linux).
Step 3: AHV's VM templates permit the joining of domains.
Step 4: AHV's VM templates can provide a license key.
Step 5: AHV's VM templates provide versioning and updates for the template lifecycle.

CONTINUE

Review the following multiple response question, choose all answers that apply, and then select Submit.
What are the two main modes of VM HA?
- Default
- Authentication
- Guarantee
- Security
SUBMIT

CONTINUE

AHV Memory Overcommit
One of virtualization's central benefits is the ability to overcommit compute resources, making it possible to provision more CPUs to VMs than are physically present on the server host. Most workloads don't need all their assigned CPUs 100% of the time, and the hypervisor can dynamically allocate CPU cycles to the workloads that need them at each point. Much like CPU or network resources, memory can also be overcommitted. At any given time, the VMs on a host may or may not use all their allocated memory, and the hypervisor can share that unused memory with other workloads.

Memory Overcommit makes it possible for administrators to provision a greater number of VMs per host by combining the unused memory and allocating it to VMs that need it. Overcommit is disabled by default and can be defined on a per-VM basis, so sharing can apply to all or just a subset of the VMs on a cluster.

Memory Overcommit:
- Shares unused memory from VMs with overcommit enabled
- Shares up to 75% of a VM's memory if unused
- Adaptive sharing: gradually shares more as the host monitors VM memory usage
- Enabled on a per-VM basis via PC, aCLI, or API (AOS 6.0.2 and PC 2021.12 for the UX option)
- Does affect performance for enabled VMs; best used for non-production workloads
- Uses balloon driver or host swap mechanisms

Memory Overcommit – Allows the administrator to mark specific VMs that can be overprovisioned with memory and have that memory reclaimed during host contention.

Elastic Memory Resource Use
AHV also offers the ability to utilize elastic memory resources. In specific scenarios or environments, it may be necessary to overcommit the amount of memory available on a host or cluster. This is most common within development and testing environments, where maximum density is more important than performance consistency.
- VMs with more aggregate virtual RAM than physical RAM
- Per-cluster or per-VM setting
- Easily turned on for existing nodes
- Balloon and backup swap techniques
- Adaptive: no user inputs

Elastic Memory Resource Use – Reclaims memory from VMs that are marked with memory overcommit during host contention.

AHV allows memory overcommit to be enabled as a per-VM setting within the VM's configuration. This function is turned off by default. The hypervisor uses ballooning and swap technologies to manage the amount of memory from each VM that can be shared, adjusting as needed. Nutanix Guest Tools must be installed to reach full capabilities.
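As a rough illustration of the sharing ceiling described above, the sketch below computes how much of a VM's memory could be reclaimed: only unused memory is shared, capped at 75% of the VM's assigned memory. The numbers mirror the bullets above; the balloon and swap mechanics themselves are not modeled.

```python
# Sketch of the sharing cap described above: at most 75% of an
# overcommit-enabled VM's memory can be shared, and only the portion that
# is actually unused. Illustrative arithmetic, not AHV's balloon/swap logic.

SHARE_CAP = 0.75

def reclaimable_mb(assigned_mb, in_use_mb, overcommit_enabled):
    if not overcommit_enabled:
        return 0  # overcommit is off by default and set per VM
    unused = max(assigned_mb - in_use_mb, 0)
    return min(unused, int(assigned_mb * SHARE_CAP))

# A 16 GB VM using 4 GB: 12 GB unused, exactly at the 75% (12 GB) cap.
print(reclaimable_mb(16384, 4096, True))   # 12288
# A 16 GB VM using 2 GB: 14 GB unused, but the cap limits sharing to 12 GB.
print(reclaimable_mb(16384, 2048, True))   # 12288
```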
CONTINUE

AHV Metro Availability and Synchronous Replication
Our goal with Metro Availability on AHV is to build a 0-RPO (zero data loss), 0-RTO (transparent failovers), and 0-Touch (Witness-driven) HA solution for all Nutanix workloads: VMs, Volumes, Files, and Calm apps. Metro Availability on AHV is the sum of three parts. Select each hotspot (+) below to learn more.

0 RPO – Synchronous Replication
Synchronous replication of VM data, metadata, and associated policies, including Flow and Calm blueprints

0 RTO – Cross-Cluster Live Migration of VMs
A live migration of VMs and Volumes across clusters, orchestrated by Recovery Plans across the associated environment

0 Touch – Witness-Orchestrated Automated Failovers
A witness that monitors for failures of fault domains and arbitrates a fault domain for business continuity upon failure

Primary Site and Secondary Site – Policy-based disaster recovery protection across sites with flexible RPO and RTO values.

HIGHLIGHTS
- Policy-based for predictable outcomes
- Protects mission-critical applications against complete site failure
- Supports bi-directional synchronous replication and failover
- Zero-touch automated witness-based failover

BENEFITS
- Easy to set up and manage, which removes the burden of specialized personnel and equipment
- 0 RPO and near-0 RTO using synchronous replication and the witness capability, providing the highest level of availability for the business
- 1-click live migration of VMs between sites, allowing greater flexibility for tasks like data center maintenance or disaster avoidance
- REST-API-driven, allowing our workflows to be included as part of a larger DR runbook

Instant Restore with Storage Migration
When recovering from any outage—whether human, application, or hacker-related—the time to recover is critical. This is why instant restores are available with backup vendors that use our APIs. In these scenarios, when using a supported product and vendor, the restored data is cloned from the backup data, mounted from the backup store, and started up. This allows the VM to be nearly instantly recovered and later migrated to a Nutanix storage container.

Instant Restore with Storage Migration: you can bring VMs up on another cluster in another location using a backup copy of their data. There are multiple ways of providing instant restores with storage migration from an external source:
1. Mounting an external data storage repository from a backup solution like Rubrik or Veeam
2. Cloning a backup-based snapshot/raw disk and mounting it to a VM on an AHV cluster
3. Supporting data migration from the external data source to an AOS container, which includes:
- Adaptive Foreground Migration: migration based on workload I/O. When a read request is received and certain criteria are met, the data is migrated to the AOS cluster.
- Background Migration: migration based on a policy that allows a background migration task from external storage to an AOS cluster.

CONTINUE

Lesson 6 of 12: Key Features & Use Cases – Part 2

AHV Features and Use Cases Continued

Comprehensive Management with Prism
Prism is a distributed resource management platform that allows users to manage and monitor objects and services across their Nutanix environment, whether hosted locally or in the cloud. Prism reinforces management efficiency: everything is done from Prism. We can even manage ESXi VMs when there are both vSphere and AHV clusters in the environment.

Prism UI
Prism's UI manages multiple clusters, VMs, and entities, all from a single interface.

Prism Capabilities and Components
Prism capabilities are broken down into two key categories. Select each hotspot (+) below to learn more.

Interfaces
HTML5 UI, REST API, CLI, PowerShell cmdlets, etc.

Management Capabilities
Platform management, VM/container CRUD, policy definition and compliance, service design and status, analytics and monitoring

A Prism service runs on every CVM with an elected Prism Leader, which is responsible for handling HTTP requests. As with other components that have a Leader, if the Prism Leader fails, a new one is elected. When a CVM that is not the Prism Leader receives an HTTP request, it permanently redirects the request to the current Prism Leader using HTTP response status code 301.
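You can observe this redirect behavior with a few lines of Python. Assumptions worth flagging: Prism typically listens on TCP port 9440 and serves its UI under /console, and the CVM address below is a placeholder; this is a diagnostic sketch, not a Nutanix tool.

```python
# Probe the behavior described above: a CVM that is not the Prism Leader
# answers with an HTTP 301 pointing at the current leader.
import requests

resp = requests.get(
    "https://10.0.0.31:9440/console",  # any CVM in the cluster (placeholder IP)
    verify=False,                      # lab sketch; use proper CA certs in production
    allow_redirects=False,             # surface the 301 instead of following it
    timeout=5,
)
if resp.status_code == 301:
    print("Not the Prism Leader; redirected to:", resp.headers.get("Location"))
else:
    print("Answered directly with HTTP", resp.status_code)
```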
Prism is broken down into two main components. Select each hotspot (+) below to learn more.

Prism Central (PC)
Multi-cluster manager responsible for managing multiple Nutanix clusters to provide a single, centralized management interface. Prism Central is an optional software appliance (VM) that can be deployed in addition to the AOS cluster (and can run on it). One-to-many cluster manager.

Prism Element (PE)
Localized cluster manager responsible for local cluster management and operations. Every Nutanix cluster has Prism Element built in. One-to-one cluster manager.

CONTINUE

Review the multiple-choice question, choose the best answer, and select Submit.
Which of the following is NOT a Prism interface?
- HTML UI
- REST API
- CLI
- Bootstrap
SUBMIT

CONTINUE

vSphere to AHV Cross-Hypervisor Disaster Recovery (CHDR)
Cross-Hypervisor Disaster Recovery (CHDR) provides the ability to migrate VMs from one hypervisor to another (ESXi to AHV or AHV to ESXi) by using the protection domain semantics of protecting VMs, generating snapshots, replicating the snapshots, and then recovering the VMs from the snapshots. To perform these operations, you must install and configure NGT on all the VMs. For more information, see Enabling NGT and Mounting the NGT Installer in a VM in the Prism Web Console Guide. NGT configures the VM with all the required drivers for VM portability. After you prepare the source VMs, you can recover them on a different hypervisor type.

Review the graphic and bullets below to learn more about CHDR. CHDR replicates across sites and hypervisors.

Overview
- U.S.-based agriculture firm with >40 sites
- Mix of AHV and ESXi hypervisors

Challenges
- Needed a single solution to back up and recover all sites centrally

Solution
- CHDR allows hypervisor choice while providing the ability to restore any of the VMs to the primary data center on AHV.

Benefits
- Lower TCO from AHV
- Ability to restore the entire site locally
- Standardization across the environment

CHDR Requirements and Limitations
Review the lists below to learn more about CHDR requirements and limitations.

CHDR Requirements
- Supports only VMs with files
- Supports IDE/SCSI and SATA disks only
- Supports CHDR on both sites with all the AOS versions that are not on EOL
- Set SAN policy to OnlineAll for all the Windows VMs and all the non-boot SCSI disks so these can automatically be brought online.
- For migration from AHV to ESXi, manually reattach the VGs to the VMs. (Note: this is essential because the automatic reattachment of iSCSI-based volume groups (VGs) fails after VMs are migrated to ESXi because of vCenter. Without the manual reattachment, the Nutanix cluster does not obtain the VMs' IP addresses.)
- Do not enable VMware vSphere Fault Tolerance (FT) on VMs that are protected with CHDR. If already enabled, disable VMware FT. (Note: VMware FT does not allow registration of VMs enabled with FT on any ESXi node after migration. This also results in a failure or crash of the Uhura service.)

CHDR Limitations
- Does not support vSphere snapshots or delta disk files (if delta disks are attached to the VM during migrations, the VMs are lost)
- Does not support VMs with attached volume groups or shared virtual disks
- Does not support PCI/NVMe disks or metro availability
- Cannot be performed on a vCenter VM
- Has limitations depending on the replication configured in the cluster

For more information, visit:
- Cross-Hypervisor Disaster Recovery documentation
- Data Protection documentation

CONTINUE
vGPU in AHV

Graphics and Compute Performance
Select each hotspot (+) below to learn more about graphics and compute performance.

NVIDIA GRID Technology
AHV supports NVIDIA GRID technology, which enables multiple guest VMs to use the same physical GPU concurrently. Concurrent use is made possible by dividing a physical GPU into discrete virtual GPUs (vGPUs) and allocating those vGPUs to guest VMs. Each vGPU is allocated a fixed range of the physical GPU's framebuffer and uses all the GPU processing cores in a time-sliced manner.

Virtual GPUs
Virtual GPUs come in different types (vGPU types are also called vGPU profiles) and differ by the amount of physical GPU resources allocated to them and the class of workload they target. Therefore, the number of vGPUs into which a single physical GPU can be divided depends on the vGPU profile used on that physical GPU.

Physical GPUs
Each physical GPU supports more than one vGPU profile, but a physical GPU cannot run multiple vGPU profiles concurrently. After a vGPU of a given profile is created on a physical GPU (that is, after a vGPU is allocated to a VM that is powered on), the GPU is restricted to that vGPU profile until it is freed up completely. To understand this behavior, consider a VM configured to use an M60-1Q vGPU. When the VM is powered on, it is allocated an M60-1Q vGPU instance only if a physical GPU that supports M60-1Q is either unused or already running the M60-1Q profile and can accommodate the requested vGPU.

Free Physical GPUs that Support M60-1Q
If an entire physical GPU that supports M60-1Q is free when the VM powers on, an M60-1Q vGPU instance is created for the VM on that GPU, and the profile is locked on the GPU. In other words, until the physical GPU is completely freed up again, only M60-1Q vGPU instances can be created on that physical GPU (that is, only VMs configured with M60-1Q vGPUs can use that physical GPU).

Review the graphic and bullets below to learn more about graphics and compute performance.

Graphics and Compute Performance
- Supports NVIDIA vGPUs
- Drives higher VDI density
- Provides a better desktop experience
- Supports high-end graphics workstations
- Supports High Performance Computing (HPC) applications
- Supports vGPU live migration
- ADS handles placement and hotspot avoidance
- vGPU and GPU passthrough are selectable when creating or updating VMs
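The profile-locking behavior just described amounts to a simple placement rule, sketched here in Python: a VM's requested profile must find a physical GPU that is either completely free or already running the same profile with room left. The data model is hypothetical; this is not NVIDIA GRID or AHV code.

```python
# Sketch of the vGPU placement rule described above. A physical GPU can host
# vGPUs of only one profile at a time, so it is eligible only when free
# (which locks in the profile) or already running the requested profile.

def pick_gpu(gpus, requested_profile):
    for gpu in gpus:
        if requested_profile not in gpu["supported"]:
            continue
        if gpu["active_profile"] is None:        # free GPU: profile gets locked here
            return gpu
        if (gpu["active_profile"] == requested_profile
                and gpu["free_slots"] > 0):      # same profile with capacity left
            return gpu
    return None  # no eligible GPU; the VM cannot power on with this profile

gpus = [
    {"name": "gpu0", "supported": {"M60-1Q", "M60-2Q"},
     "active_profile": "M60-2Q", "free_slots": 1},
    {"name": "gpu1", "supported": {"M60-1Q", "M60-2Q"},
     "active_profile": None, "free_slots": 0},
]
choice = pick_gpu(gpus, "M60-1Q")
print(choice["name"] if choice else "no GPU available")  # gpu1: free, locks M60-1Q
```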
AHV Compute-Only Use Case
AHV can run Compute-Only (CO) nodes in a greenfield cluster that includes only CO and Storage-Only (SO) nodes. Based on the use case, the optimized database solution requires the SO nodes to be HCI or NVMe-only. Database deployments are a common workload on Nutanix environments and sometimes require differing amounts of compute versus storage. As a result, many different node sizes and configurations have been offered that can be compute- or storage-heavy. In some situations, however, a workload and an environment may need additional CO nodes: adding CO nodes helps when a cluster already has enough storage capability but a growing need for compute capacity. These nodes are deployed using Foundation, like HCI nodes, but offer the choice at deployment and, when added to the cluster, will only run workload VMs. Guidelines on the balance of HCI, SO, and CO nodes within a cluster exist, and Nutanix Sizer has this intelligence coded into it to simplify the process.

Review the graphics and bullets below to learn more about CO.

Compute-Only
Compute and Storage-Only options are in Foundation and Sizer.
- Greenfield cluster with only CO and SO nodes
- SO nodes: HCI or NVMe only
- Performance similar to an HCI cluster
- Manageability support:
  - Foundation: imaging and cluster creation
  - LCM: ESXi CO upgrade (NX)
  - NCC: NCC checks for CO
  - AOS: Expand Cluster, Remove Node, network segmentation, data protection
- Primary use case is database services
- Sizer: option to size with CO and SO

AHV Compute-Only
Compute and Storage-Only nodes are for the Database Optimized Solution.
1. Foundation of the CO and SO nodes
2. Create a new storage container or use the default container
3. Create UVMs using Prism
4. For UVM boot drives, use direct-attached iSCSI disks; for non-boot (data) drives, UVMs use in-guest attached iSCSI disks using Nutanix Volume Groups or direct-attached disks

- VMs provisioned with AHV – no change
- Disk access over the HCI storage fabric, balanced across CVMs
- No CVM – all (most) cores available for UVMs
- Not for generic virtualization today – specifically approved use cases only; must meet license restrictions / maximize license spend

CONTINUE

NGT Serial Communication & Direct Installation
Nutanix Guest Tools (NGT) have existed since the beginning of AHV and are vital for several use cases involving hypervisor and storage functions. To work within more secure environments, we now default to a serial port-based communication method between the workload VM and our Controller VM. If this default port-based method cannot communicate, it falls back to the network-based method. If the network path between the VM and CVM is not available, the task will fail, and the failure will be reflected in the event logs.

Review the graphic and bullets below to learn more about NGT communication.

Improved Security for NGT Communication
- AHV feature for NGT
- Uses a serial port attached to the VM
- Eliminates the VM-to-CVM network connection
- On by default
- Falls back to IP-based communication (if fallback is needed, so is a certificate)
The NGT communication path provides improved security.

NGT Direct Installation
With NGT playing an important role, a common request is for an easy mass installation and upgrade process. We offer the ability to select VM(s) within Prism to install or upgrade NGT, but for large environments, customers typically want to use an existing software delivery method. To support this, we offer an NGT installable file that can be used with third-party software tools to perform deployments and upgrades. When installing, a full line of options can be configured via command-line switches.

Integrating with Third-Party Tools
Review the graphic and bullets below to learn more about third-party integration.
- All supported versions of Windows, RPM-based, and DEB-based guest operating systems
- Distributable by endpoint management solutions
- No need to provide login credentials to Prism Central
- Enablement, through Prism Central, can occur after installation
You can deploy NGT with your endpoint management tools.
LCM Simplifies Upgrade Complexity
LCM simplifies Nutanix IT infrastructure lifecycle operations, helps automate processes, and streamlines upgrade workflows. Select the arrows to learn more about the benefits of LCM.

Step 1: Unified Control Plane
LCM simplifies Nutanix IT infrastructure lifecycle operations by consolidating software and firmware component upgrades into a unified control plane. An infrastructure inventory shows software and firmware versions running across the environment, including any new versions available for deployment. Similar to a Linux YUM package manager, LCM enables the deployment of multiple infrastructure upgrade packages, like AOS and Prism Central, across a distributed cluster.

Step 2: Seamless Dependency Management
LCM elevates IT teams from the minutiae of software and firmware dependencies by automating the entire process. When an IT administrator selects an update package, LCM automatically adds all corresponding upgrades that are part of the dependency chain, ensuring that everything required will be installed with one click.

Step 3: Optimal Upgrade Plan
Customers benefit from streamlined upgrade workflows with minimized infrastructure maintenance windows because LCM intelligently calculates the most optimal plan for all upgrade packages to be deployed. Based on the bundle of components—software or firmware—selected for upgrade by the IT administrator, LCM creates a plan that ensures the updates are deployed with the minimal number of service or host restarts.

CONTINUE

Match the following AHV features with their definitions, then select Submit.
- Templates: created from existing virtual machines, inheriting the attributes of the defining VM
- Prism: resource management platform that allows users to manage and monitor objects and services
- Cross-hypervisor disaster recovery (CHDR): provides the ability to migrate VMs from one hypervisor to another
SUBMIT

CONTINUE

Lesson 7 of 12: AHV Architecture

Select the arrows to learn more about AHV architecture and its internal workings.

AHV Host Internals
The main base functions of the type 1 hypervisor are KVM and QEMU. These work together to do all the hardware-assisted virtualization. KVM is a kernel module and is the basis of hardware-assisted virtualization. QEMU is used to emulate the virtual machines, running different operating systems, that run on top of KVM.

Anatomy of AHV
The CVM sends commands for tasks that must be completed on a VM to libvirtd. In this example:
- The CVM sends a power-on command for the virtual machine.
- libvirtd issues the command to QEMU, and the VM is powered on.
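Because AHV builds on standard KVM/QEMU and libvirtd, the domains libvirtd manages can be listed with the stock libvirt Python bindings. A minimal read-only sketch, assuming the libvirt-python package is installed and you have root access on an AHV host; Prism and aCLI remain the supported management paths.

```python
# Read-only look at the CVM -> libvirtd -> QEMU flow described above:
# connect to the local libvirtd and list the VMs (domains) it manages.
import libvirt

conn = libvirt.openReadOnly("qemu:///system")  # the KVM/QEMU connection libvirtd serves
try:
    for dom in conn.listAllDomains():
        state, _reason = dom.state()
        running = "running" if state == libvirt.VIR_DOMAIN_RUNNING else "not running"
        print(f"{dom.name()}: {running}")
finally:
    conn.close()
```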
AHV Turbo Differentiation – Why AHV Is Better
In legacy hypervisors (left), there was a method to add more storage performance by using multiple SCSI controllers on a single VM. This process was manual, and the administrator needed to know about it, when it was beneficial, and how to execute it. In AHV (right), a function called Turbo takes a different approach but still allows for a similar, more modern end result. As the VM scales in compute resources, an additional Frodo (Turbo) data path is added for more storage capability: when additional CPU cores are added to a VM, another path is added. This process happens automatically; the VM benefits from the increase if it can utilize it, or it goes unnoticed.

AHV Storage Redirector – 1
Key benefits of our storage fabric are high storage performance and availability.

AHV Storage Redirector – 2
With AHV, if the local CVM becomes unavailable through a maintenance operation or an unexpected issue, the I/O for the workloads is automatically redirected and distributed to the other nodes within the cluster that contain the necessary data to service the I/O requests.

AHV Storage Redirector – 3
When the local CVM returns to availability, I/O changes synchronize, and reads and writes begin to be serviced locally again.

CONTINUE

Lesson 8 of 12: AHV Security & Migration

AHV Security
Our enterprise-grade security comprises features from our entire platform and products. Select each hotspot (+) below to learn more about the AHV features for maintaining robust enterprise-grade security.

Microsegmentation
Microsegmentation allows simple per-VM virtual firewall rules to be visualized and configured through policies on individual VMs or groups.

Authentication, RBAC, Auditing
Access and authentication are obtained through secure logins, second-factor options, role-based access controls, and secure auditing. AHV provides the ability to grant or deny permissions on a granular level for groups of users or individuals.

Cluster Lockdown
The cluster lockdown feature prevents remote access to the infrastructure levels of a cluster.

Configuration Management Automation
Configuration management hardens our data and control layers by default. If a change is made by mistake or maliciously, automation corrects the setting that might otherwise lower the security level.

Unstructured Data Analytics
Data analytics on unstructured data provides valuable insight into potential malicious activities and audits data usage and consumption.

Native Key Manager
The native key manager runs as a service distributed among all the nodes. You can easily activate it from Prism Element or Prism Central, so all customers can enable encryption without yet another silo to manage. Customers looking to simplify their infrastructure operations can now have one-click infrastructure for their key manager as well.

Data-at-Rest Encryption
Data-at-rest encryption is supported through software and self-encrypting drives. In the most secure environments, these can be used together.

Network Segmentation
Network segmentation enhances security, resilience, and cluster performance by isolating a subset of Nutanix cluster traffic on its own network.

Built-In Boot Security
Microsoft, as well as other operating system vendors, provides methods to further secure your workloads from bad actors. AHV supports the most popular and common methods, such as:
- Secure boot, to ensure the hypervisor and guest OS are booting a known good system
- Credential Guard (a Microsoft feature) for protecting valuable credentials
- vTPM, a necessary feature for running Windows 11

Review the bullets and graphic below to learn more about built-in boot security.

BUILT-IN BOOT SECURITY
- AHV hypervisor secure boot (5.16)
- AHV VM secure boot (5.16)
- UEFI support (5.16)
- Microsoft Credential Guard (5.19)
- vTPM/Windows 11 (6.1.2/6.6)
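For VMs, the boot protections above surface in the VM's boot configuration, which can be read through the Prism REST API mentioned earlier. A sketch under stated assumptions: the endpoint and field names (vms/{uuid}, spec.resources.boot_config.boot_type) reflect the v3 schema as I understand it, and the URL, UUID, and credentials below are placeholders.

```python
# Check a VM's boot configuration via the Prism v3 REST API (field names
# assumed from the v3 schema; endpoint, UUID, and credentials are placeholders).
import requests

PC = "https://pc.example.local:9440"                  # hypothetical Prism Central
VM_UUID = "00000000-0000-0000-0000-000000000000"      # placeholder VM UUID

resp = requests.get(
    f"{PC}/api/nutanix/v3/vms/{VM_UUID}",
    auth=("admin", "password"),  # placeholder credentials
    verify=False,                # lab sketch; verify certificates in production
    timeout=10,
)
resp.raise_for_status()
boot = resp.json()["spec"]["resources"].get("boot_config", {})
# Expect "SECURE_BOOT" (or "UEFI") for VMs using the boot protections above.
print("boot_type:", boot.get("boot_type"))
```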
Migrating to AHV
Whether you are already on Hyper-V, a public cloud, ESXi, or even bare-metal servers, the move to AHV offers multiple methods and is uncomplicated. We have every scenario covered, including Nutanix Move (the most elegant), an in-place upgrade, and cross-cluster migrations, and we may be able to leverage other tools that you prefer. What we thought was complicated in the past—moving from physical hosts to VMs, moving VMs from a virtual host to the public cloud, or migrating to AHV and between public clouds—is much easier today with sophisticated tools such as Nutanix Move. With AHV, you can live-migrate to the cloud with NC2 and back and forth between AWS, Azure, and on-premises with Nutanix Move, giving you the flexibility to put your applications and workloads where they best meet the demands of your business.

Workload Rehosting with Nutanix Move
Nutanix Move (Move) is a cross-hypervisor mobility solution to migrate virtual machines (VMs) and files with minimal downtime. All businesses depend on virtualization, and demand continues to grow, along with the need for efficient and reliable VM migration solutions. Nutanix Move offers a fast, secure, and simple way to migrate VMs from different platforms to the Nutanix Cloud Platform running AHV.

Review the bullets and graphics below to learn more about Nutanix Move.

Simplify Operations
- Individual or multiple migrations per plan
- 1-click migration cut-over
- Flexible source infrastructure

1-Click Migrations
- Migrate via UI or APIs
- No application refactoring/rearchitecting
- Easily map source-target networks

Minimize Downtime
- Pre-seed data days or weeks ahead of time
- Pre-cutover workload testing
- Fast workload cutovers

Move Benefits
1. Easy, automated migrations: Nutanix Move automates most of the migration process, reducing manual intervention and the risk of human error. IT teams can rest assured knowing that their migrations are in safe hands.
2. Wide compatibility: Nutanix Move supports various virtualization platforms (VMware ESXi, Microsoft Hyper-V, and Nutanix AHV), making it easy for businesses to migrate VMs to AHV, regardless of existing infrastructure.
3. Minimal downtime: with almost zero downtime needed for migration, Nutanix Move ensures that businesses experience minimal disruption to their operations.
4. Risk-free: Nutanix Move does not delete or make changes to origin datastores or platforms, allowing rollback if necessary.

Move Supports VM Migration
Move supports VM migration from specific source platforms to various target platforms. Select each hotspot (+) below to see the source platforms and targets supported by Move.

VMware ESXi (legacy infrastructure or Nutanix) targets:
- AHV
- VMware ESXi on Nutanix

VMware ESXi targets:
- Nutanix Cloud Clusters (NC2) on AWS
- NC2 on Microsoft Azure

Microsoft Hyper-V targets:
- AHV
- VMware ESXi on Nutanix
- NC2 on AWS

AWS EC2 targets:
- AHV
- VMware ESXi on Nutanix
- NC2 on AWS

Microsoft Azure Cloud targets:
- AHV
- VMware ESXi on Nutanix
- NC2 on Azure

Nutanix AHV targets:
- Nutanix AHV
- AWS EC2
- Microsoft Azure Cloud

NC2 on AWS/Azure targets:
- Nutanix AHV
- (On Azure) NC2 on Azure

CONTINUE

What if I Want to Run AHV Later?
A common approach for customers is to run ESXi as a natural first step and then migrate to AHV when they are ready. We provide several ways to simplify this process. You can start with Nutanix AOS running on ESXi and then move to Nutanix AOS on AHV later on. Review the table below to learn more about the tools, benefits, considerations, and other notes for migrating to AHV at a later time.

Tool: Nutanix Move
Benefits: Simple, Free, Batch VMs, Test
Considerations: Brief downtime, Second cluster
Notes: Nutanix Move is the simplest and most efficient method. It requires a target cluster to land on, and then groups of VMs can be logically grouped for replication and failover. For the most flexibility, failover can be done with a single VM or groups.

Tool: CHDR
Benefits: Integrated, Already in use
Considerations: Brief downtime, Second cluster
Notes: CHDR allows replication between ESXi and the AHV cluster, but it is mainly used for DR scenarios where AHV runs in the DR location. It has fewer features and less flexibility than Nutanix Move and is not ideal as a migration tool.
Tool: Cluster Conversion
Benefits: In-place conversion
Considerations: Brief downtime, Interoperability, Testing, Batching
Notes: Cluster Conversion takes a running cluster and converts the hypervisor to AHV on a rolling basis, similar to LCM upgrades. It is best suited for remote sites with a single small cluster.

Tool: Third-Party
Benefits: Advanced features
Considerations: Cost, Learning curve
Notes: Third-party tools will most likely provide a similar experience to Nutanix Move with the possibility of added features. However, this will come at a software cost and a learning curve.

Tool: Application Based
Benefits: Flexibility, Refactor
Considerations: Complexity, Expertise
Notes: An application-based method may offer a data replication-based approach that is best used rather than copying the VM. This is typical for things like Active Directory servers, some database scenarios, and others.

Migrating Directly from ESXi to AHV
If you don't have Nutanix at all, and instead use shared storage, you can move directly to Nutanix AOS and AHV with Move. Review the table below to learn more about the tools, benefits, considerations, and notes for migrating directly from ESXi to AHV.

Tool: Nutanix Move
Benefits: Simple, Free, Batch VMs, Test
Considerations: Brief downtime, Second cluster
Notes: Nutanix Move is the simplest and most efficient method. It requires a target cluster to land on, and then groups of VMs can be logically grouped for replication and failover. For the most flexibility, failover can be done with a single VM or groups. There will always be some downtime when converting a VM format between different hypervisors; however, a tool like Nutanix Move minimizes this downtime as much as possible.

Tool: Third-Party
Benefits: Advanced features
Considerations: Cost, Learning curve
Notes: Third-party tools will most likely provide a similar experience to Nutanix Move with the possibility of added features. However, this will come at a software cost and a learning curve.

Tool: Application Based
Benefits: Flexibility, Refactor
Considerations: Complexity, Expertise
Notes: An application-based method may offer a data replication-based approach that is best used rather than copying the VM. This is typical for things like Active Directory servers, some database scenarios, and others.

CONTINUE

Lesson 9 of 12: AHV Networking Deep Dive

AHV Networking

AHV Network Constructs
The default networking that we describe in the Nutanix AHV Best Practices Guide covers a wide range of scenarios that Nutanix administrators encounter. Use this networking guide for situations with unique VM and host networking requirements that are not covered elsewhere. The default Nutanix AHV networking configuration provides a highly available network for guest VMs and the Nutanix Controller VM (CVM). This default configuration includes IP address management and simple VM traffic control and segmentation using VLANs. Network visualization for AHV, available in Prism, also provides a view of the guest and host network configuration for troubleshooting and verification.

Nutanix AHV Networking Overview
Nutanix AHV uses Open vSwitch (OVS) to connect the CVM, the hypervisor, and guest VMs to each other and to the physical network on each node. The CVM manages the OVS inside the AHV host. Don't allow any external tool to modify the OVS. Select each hotspot (+) to review the following network constructs.

Subnet/VLAN
Nutanix AHV exposes many popular OVS features through the Prism GUI, such as VLAN tagging, uplink load balancing, and Link Aggregation Control Protocol (LACP). AHV supports VLANs for the CVM, AHV host, and guest VMs by default; Geneve overlays are possible too.
Subnets are managed through the Prism GUI, the Acropolis CLI (aCLI), or REST at the cluster or Prism Central level, with no additional host configuration.

Open vSwitch
OVS is an open-source software switch designed to work in a multiserver virtualization environment. In Nutanix AHV VLAN-backed networks, the OVS behaves like a layer 2 learning switch that maintains a MAC address table. The hypervisor host and VMs connect to virtual ports on the switch.

Virtual Switch / Bridge (br#; default br0)
In Nutanix AOS versions 5.19 and later, you can use a virtual switch to manage multiple bridges and uplinks in Prism. For more information, see the Host Network Management section in the AHV Administration Guide. We designed the virtual switch configuration to provide flexibility in configuring virtual bridge connections. A virtual switch defines a collection of AHV nodes and the uplink ports on each node. It aggregates similar OVS bridges on all the AHV nodes in a cluster. For example, vs0 is the default virtual switch and aggregates the br0 bridges and br0-up uplinks from all the nodes.
- A virtual switch defines a collection of AHV nodes and the uplink ports on each node
- Used to manage multiple bridges and uplinks
- Managed at the cluster or PC level

Bridges
A bridge switches traffic between physical and virtual network interfaces. The default Nutanix AHV configuration includes an OVS bridge called br0 and a native Linux bridge called virbr0. The virbr0 Linux bridge carries management and storage communication between the CVM and AHV host. All other storage, host, and VM network traffic flows through the br0 OVS bridge by default. The AHV host, VMs, and physical interfaces use ports for connectivity to the bridge.

Network Controller (new in AOS 6.7, PC 2023.3)
- Deployed as part of Flow Virtual Networking
- Prism Central-based network controller
- Used for VLAN and overlay (VPC) network deployment
- Managed at the PC level

Ports
Ports are logical constructs created in a bridge that represent connectivity to the virtual switch. Nutanix uses several port types:
- An internal port—with the same name as the default bridge (br0)—acts as the AHV host management interface.
- Tap ports connect VM virtual NICs (vNICs) to the bridge.
- VXLAN ports are used only for the IP address management (IPAM) functionality provided by AHV.
- Bonded ports provide NIC teaming for the physical interfaces of the AHV host.

Bond (br#-up)
Bonded ports aggregate the physical interfaces on the AHV host for fault tolerance and load balancing. By default, the system creates a bond named br0-up in bridge br0 containing all physical interfaces. Changes to the default bond (br0-up) using manage_ovs commands can rename it to bond0 when using older examples, so your system might be named differently than the following diagram. Nutanix recommends using the name br0-up to quickly identify this interface as the bridge br0 uplink. Using this naming scheme, you can also easily distinguish uplinks for additional bridges from each other. OVS bonds allow for several load-balancing modes to distribute traffic, including active-backup, balance-slb, and balance-tcp. Administrators can also activate LACP for a bond to negotiate link aggregation with a physical switch. During installation, the bond_mode defaults to active-backup, which is the configuration we recommend for ease of use.
- Aggregate of physical NICs
- Modes: Active-Backup (default), Active-Active (balance-slb), Active-Active (balance-tcp) with LACP
- Only use NICs of the same speed in the same bond
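From a CVM, the bond layout described above can be inspected with the same manage_ovs tooling this guide mentions. A small Python wrapper sketch: show_uplinks is the common subcommand for listing bridges, bonds, and member NICs, but treat the exact output format as indicative rather than exact.

```python
# Wrapper sketch around the manage_ovs tooling mentioned above; assumes it is
# run from a CVM shell where manage_ovs is on the PATH.
import subprocess

def show_uplinks():
    """Print bridge/bond/uplink layout as reported by manage_ovs on this CVM."""
    result = subprocess.run(
        ["manage_ovs", "show_uplinks"],
        capture_output=True, text=True, check=True,
    )
    print(result.stdout)

if __name__ == "__main__":
    show_uplinks()  # e.g., br0-up on br0 in active-backup mode with its member NICs
```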
AHV – VM Networking

The following diagram illustrates the networking configuration of a single host immediately after imaging. The best practice is to use only the 10 GbE or faster NICs and to disconnect the 1 GbE NICs if you don't need them. For additional information on bonds, refer to the Nutanix AHV Networking Best Practices section.

VMs and VLAN Tags

Nutanix AHV supports the use of VLANs for the CVM, AHV host, and guest VMs. In the Nutanix AHV Networking Best Practices section, we discuss the steps for assigning VLANs to the AHV host and CVM. You can easily create and manage vNIC networks for VMs using the Prism GUI, the Acropolis CLI (aCLI), or REST without any additional AHV host configuration.

Each virtual network in AHV maps to a single VLAN and bridge. You must also create each VLAN and virtual network defined in AHV on the physical top-of-rack switches, but integration between AHV and the physical switch can automate this provisioning.

CVM Networking – Host View

Each node in every Nutanix cluster hosts a Nutanix Controller Virtual Machine (CVM), where important cluster management and storage services run. This holds true for all hypervisor deployments on Nutanix.

In the default configuration, every CVM has a minimum of two interfaces: eth0 for external traffic and eth1 for internal storage traffic. The external traffic includes cluster management, disaster recovery replication, iSCSI volumes, and storage replication.

Within Nutanix Prism, it is easy to isolate traffic types by configuring network segmentation to add dedicated network interfaces that address security and performance requirements. As of AOS 5.5, you can create a dedicated interface for storage RF replication. With more recent AOS releases (5.11 and 5.17), you can further segment your CVM traffic with the ntnx0 and ntnx1 interfaces for iSCSI volumes and DR traffic, respectively. Each interface can have its own dedicated VLAN and vSwitch, so you have complete design control for segmenting your environment.

CVM NETWORK REQUIREMENTS

Default:
All public CVM traffic uses eth0
eth1 is used for internal storage

Advanced (only if needed):
Add CVM interfaces for replication, iSCSI, and DR if required
Dedicated VLAN + subnet + vSwitch
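For the host and CVM VLAN assignment mentioned above, a minimal sketch follows, using the commands described in the AHV Administration Guide. VLAN 10 is an example value; the ovs-vsctl command runs on the AHV host and the change_cvm_vlan script runs on the CVM. Schedule this work carefully, because changing these tags can interrupt connectivity to the host and CVM.

# On the AHV host: tag the internal management port br0 with VLAN 10
ovs-vsctl set port br0 tag=10

# On the CVM: place CVM traffic on the same VLAN
change_cvm_vlan 10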
AHV Bridge and Uplink Configuration

You can manage your AHV bonds either through Prism or the command line. The image below shows how you can select your preferred load-balancing algorithm and easily configure it to group all network adapters of the same speed. This makes it incredibly easy to follow our best practice recommendations.

We recognize that many customers are looking for more advanced and granular control of their host and cluster networking. Our virtual switch UI allows you to control the network configuration across the hosts in the AHV cluster.

Creating a Virtual Switch: Bond and Uplink Port Selection

Bridge and Uplink Management (AOS 5.19+)

For AHV uplink management, we're introducing virtual switches in Prism. Each virtual switch maps to an OVS bridge and a set of uplink ports. The virtual switch configuration also controls the load-balancing algorithm and MTU. Each network, or VLAN, connects to a virtual switch. In most clusters, we expect just a single virtual switch to be present, using vs0 by default.

You can configure additional virtual switches, and even create switches with no uplinks for local host-only communication. Virtual switches are created cluster-wide with a name, MTU, and description.

In this example, we select balance-tcp to configure an LACP-enabled bridge with a pair of uplinks. We can use incoming LLDP information from connected physical switches to choose the right uplinks, especially for a host that is connected to two or more switches. The desired pair of switches is selected as a filter, and the AHV interfaces plugged into those switches can be configured as uplink members for all hosts in the cluster. You can also easily configure different uplinks on a per-host basis if more advanced configuration is needed. It's now easy to configure and manage your AHV host networking in Prism.

Traffic Mirroring

Traffic Mirroring enables you to mirror traffic from the interfaces of the AHV hosts to the virtual NIC (vNIC) of guest VMs. Traffic Mirroring copies packets from a set of source ports to a set of destination ports. You can mirror inbound, outbound, or bidirectional traffic flowing on a set of source ports. You can then use the mirrored traffic for security analysis and to gain visibility into traffic flowing through the set of source ports. Traffic Mirroring is a useful tool for troubleshooting packets and is necessary for compliance.

1. Mirror traffic from the interfaces of the AHV hosts or from a VM vNIC.
2. Mirror traffic to the vNIC of guest VMs.
3. Mirror inbound, outbound, or bidirectional traffic on a set of source ports. Use mirrored traffic for security analysis and to gain visibility into traffic on the source ports.
4. The SPAN destination must be a VM on the same host.
5. A maximum of two SPAN sessions per host.

Review the graphics below to learn more about traffic mirroring from the interfaces of the AHV host and from the vNICs of VMs.

AHV Host Link Aggregation

For host-to-switch connectivity, Nutanix environments support three modes for link aggregation, or load-balancing algorithms. This allows you to configure your clusters based on your preferences and performance requirements.

Select each tab to learn more about AHV host link aggregation: Active-Backup, Active-Active with MAC Pinning, and Active-Active with LAG/LACP.

Active-Backup

This is typically the default configuration for AHV and is recommended for environments where easy setup at scale is desired. There are no special requirements for the upstream switch except to present the necessary trunked VLANs. However, it does have implications for host bandwidth, as only a single link per vSwitch is used at any given time.

Highlights:
Host and network default
Nutanix recommended for AHV

Pros:
Easy to set up
No special switch configuration

Cons:
Limited to one active uplink

Active-Active with MAC Pinning

Active-active, switch-independent mode is a great choice for aggregating bandwidth on a per-vSwitch basis across multiple links. However, your individual VMs are still limited to a single link's bandwidth at most. In AHV environments, this mode can have implications where there is a multicast requirement, so try to stick to either active-backup or LACP instead. For VMware environments, active-active has been a tried and trusted deployment for many years now, and we recommend it for those environments on Nutanix as well.

Overview:
AKA: balance-slb
Not recommended for AHV

Pros:
Good bandwidth for the host

Cons:
Each VM is limited to a single link
IGMP pruning and multicast conflicts

Active-Active with LAG/LACP

If you have applications with high-performance requirements, we recommend considering active-active with LACP, as this allows you to aggregate the host links for optimal per-VM bandwidth. This does require your upstream switches to support LACP configurations, especially if there are two or more switches; you'll need vPC or MLAG support, but it's well worth it to guarantee the best performance for your applications.

Highlights:
AKA: balance-tcp
Preferred by network admins
Nutanix recommended for high performance

Pros:
Best host and VM bandwidth

Cons:
More switch configuration
Complexity and fallback testing
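For LACP deployments, the sketch below shows one way to configure and verify the bond, assuming the default br0/br0-up names. The manage_ovs flags apply to releases where uplinks are managed from the CLI rather than the virtual switch UI, so verify them against the AHV Networking Best Practices guide for your AOS version before use.

# From the CVM: switch the bond to balance-tcp with fast LACP, falling back
# to active-backup if LACP negotiation with the switch fails
manage_ovs --bridge_name br0 --bond_name br0-up --bond_mode balance-tcp \
  --lacp_mode fast --lacp_fallback true update_uplinks

# On the AHV host: confirm bond members and LACP negotiation state
ovs-appctl bond/show br0-up
ovs-appctl lacp/show br0-up

Testing the fallback path, by disabling LACP on a switch port and confirming the bond recovers, is part of the "fallback testing" consideration noted above.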
Physical Switch Network Requirements

Physical switch requirements vary with cluster size and performance needs, from datacenter-class models to edge and out-of-band devices. Select each tab to learn more about physical switch network requirements: Enterprise, Edge/ROBO, and Out of Band Only.

Enterprise

Overview:
High performance or >= 8 nodes
Larger buffers
Datacenter/storage models
10 GbE or faster mandatory

Examples:
Cisco Nexus 9k, 7k, 5k

Edge/ROBO

Overview:
Edge/ROBO or < 8 nodes
Smaller or shared buffers
Campus or access models
10 GbE preferred, but 1 GbE possible

Examples:
Cisco Nexus 3k
Cisco Catalyst 9300, 3850

Out of Band Only

Overview:
Use only for out-of-band (OOB) management
1 GbE
Oversubscription OK

Danger zone:
Data plane on FEX
Data plane on a 1 GbE switch with 10 GbE uplinks

Physical Switch Topology Comparison

Review the pros and cons of the traditional versus the recommended physical switch topology.

Traditional: Core-Aggregation-Access

Pros:
Common network configuration
Modular design

Cons:
Scaling East-West can have bottlenecks
Spanning tree

Recommended for Scale: Spine-Leaf

Pros:
Optimized bandwidth
East-West scalability without spanning tree

Cons:
Advanced switch configuration

Review the multiple-choice question, choose the best answer, and select Submit.

Which network feature enables you to mirror traffic from the interfaces of the AHV hosts to the virtual NIC (vNIC) of guest VMs?

VM Networking
Host Link Aggregation
Traffic Mirroring
Host View

Lesson 10 of 12

Cisco ACI

VMM Integration for Nutanix AHV

Cisco ACI VMM Integration: Problem Statement

Customers using Cisco ACI want the network to be integrated with the hypervisor. They also want the source of truth to be the network layer controlled by Cisco ACI. Historically, this has been possible with other hypervisors using the Cisco ACI VMM Domain Integration. We are happy to announce Cisco ACI VMM Integration for AHV, which provisions networks and provides visibility into AHV.

Problem Statement Summary:

1. Separation of concerns: network admin vs. server admin
2. Cisco ACI as the network "source of truth"
3. Orchestrator of the network domain for AHV
4. Create all networks only in Cisco ACI
5. Sync ACI state with Nutanix Prism Central
6. ACI End Point Groups mapped to virtual networks should trigger a new network-create on AHV
7. Automated creation of VLAN networks in Prism Central

Before we discuss the Cisco integration, let's briefly define ACI.
What Is ACI?

ACI stands for Application Centric Infrastructure. Cisco came up with ACI when the industry was abuzz with software-defined networking. The underlying driver of software-defined networking was to abstract user VM (UVM) networking away from physical networks and empower infrastructure tenants to consume networks through self-service. Let's take a closer look at the physical networking.

The Day 0 Problem

There is a Day 0 problem: bringing up the physical network conforming to a certain architecture and then managing all the configuration that needs to go on those switches. This provides customers with a network for that architecture (e.g., a leaf-spine architecture). The switches are identified with certain roles, and the controller generates the necessary configuration for each. To manage all this, a centralized controller was introduced: the APIC (Application Policy Infrastructure Controller), shown below.

Review the graphic below to learn more about an application centric infrastructure.

Summary & Takeaways

1. With Cisco's ACI release 6.0(3), ACI officially supports integration with Nutanix Prism Central and AOS 6.5.
2. This is just the first release, and Cisco will keep updating this integration.
3. As part of the integration, when Cisco customers create End Point Groups on ACI, it translates to the automated creation of VLAN networks on Prism Central.
4. Inter-EPG traffic can be regulated with ACI Contracts.
5. Intra-EPG traffic regulation can be accomplished with Flow Network Security.
6. The Nexus 9000-based switching fabric is treated as a classic physical networking underlay. Our Flow Virtual Networking-based overlay network is used for secure multi-tenant networking.

Available Today

Cisco ACI Native Integration with Nutanix AHV: Use Cisco Application Policy Infrastructure Controller (APIC) v6.0(3)+ to create and work with Nutanix VLAN networking constructs.

Support for ACI EPGs to configure VLAN networking on Nutanix
Enforce inter-EPG policies as ACI Contracts (across physical hosts and VMs)
Nutanix cluster visibility from APIC
Fetch inventory data like VMs, hosts, switches, subnets, and security policies from Nutanix
Collect statistics and display them from APIC
Support for intra-EPG policies using Flow Network Security (as has always been the case)
Support for self-service overlay virtual networking using Flow Virtual Networking (as has always been the case)

Review the following question, choose the best answer, and select Submit.

True or False: The Day 0 problem is to bring up the physical network conforming to a certain architecture and then manage all the configuration that needs to go on those switches.

True
False

Lesson 11 of 12

Resources

Additional AHV resources for NCP can be found here:

Definitive Guide to HCI: This document describes how Nutanix Cloud Platform works and how HCI is the foundation to make it all happen. (nutanix.com)

HCI eBook: Top 20 HCI Questions Answered. (nutanix.com)

Test Drive: Check out the Nutanix platform yourself. Available for customers to explore.

Run AHV: Review how to run AHV. (runahv.com)

AHV Best Practices: Review AHV best practices.

AHV Networking Best Practices: Review the AHV Networking Best Practices Guide.

AHV Administration Guide: Review the AHV Administration Guide.

CHDR Documentation: Review more information on Cross-Hypervisor Disaster Recovery documentation.
Data Protection Documentation: Review more information on Data Protection documentation.

Lesson 12 of 12

Feedback

We greatly appreciate any comments or feedback you can provide to help us improve the learning experience for the Sales community. Please select the following link to provide feedback to our team.

Feedback for Learning Academy on this course
