Nutanix Cloud Clusters on AWS Deployment Guide
Summary
This document provides a guide on deploying Nutanix Cloud Clusters (NC2) on Amazon Web Services (AWS). It covers various deployment models, infrastructure requirements, and security considerations. The guide also details licensing and billing aspects.
Nutanix Cloud Clusters on AWS Deployment and User Guide
Cloud Clusters (NC2) Hosted
December 12, 2024

Contents

About This Document  6
  Document Organization  6
  Reference Information for NC2  7
  Document Revision History  8
Nutanix Cloud Clusters (NC2) Overview  12
  Use Cases  14
  AWS Infrastructure for NC2  15
  NC2 Planning Guidance  16
  NC2 on AWS Deployment Models  17
    Single Availability Zone Deployment  18
    Multiple Availability Zone Deployment  18
    Multicluster Deployment  19
  AWS Components Installed  20
    Tags for AWS Resources  23
    Viewing Cloud Resources  23
  Blockstore Support with SPDK  24
  NC2 Architecture  25
    Preventing Network Partition Errors  26
    Resolving Bad Disk Resources  26
    Maintaining Availability: Node and Rack Failure  26
    Maintaining Availability: AZ Failure  28
  NC2 Security Approach  28
Getting Started With NC2  30
  Requirements for NC2 on AWS  30
  Supported Regions and Bare-metal Instances  34
    Installing NVIDIA Grid Host Driver  37
  Supported Storage Volumes  38
  Limitations  40
  Non-Applicable On-Premises Configurations  41
  Creating My Nutanix Account  44
  Starting a Free Trial for NC2  45
EC2 Instance Tenancy Types  47
  Default Tenancy  47
    Windows License Included Instances  47
  Dedicated Host Tenancy  49
NC2 on AWS Deployment Workflow  50
Create AWS Resources Manually  54
  Creating VPC-only  54
  Creating Subnet  55
  Creating an Internet Gateway  56
  Creating a NAT Gateway  57
  Configuring Routing  58
    Associating Subnet to the Route Table  58
    Assigning Routes to Internet Gateway  59
    Assigning Routes for Management Subnet to NAT Gateway  59
Cluster Deployment  61
  Creating an Organization  61
    Updating an Organization  62
  Adding an AWS Cloud Account  62
    Deleting a Cloud Account  65
    Reconnecting a Cloud Account  66
    Adding a Cloud Account Region  66
    Updating CloudFormation Stack  67
  Creating a Cluster  72
  Proxy Server Configuration  87
    AWS VPC Private Endpoints  91
Flow Virtual Networking Configuration  94
  NAT Traffic Flow  98
    Configuring Connectivity for Flow Virtual Networking User VMs with NAT  98
  No-NAT Traffic Flow  110
    Configuring Connectivity for Flow Virtual Networking User VMs with No-NAT  111
    Configuring No-NAT Traffic Flow Through AWS Transit Gateway  118
  Limitations  119
Prism Central Configuration  120
  Deploying Prism Central  120
  Logging into Cluster Using Prism Element  121
  Logging into Cluster Using SSH  123
NC2 Licensing and Billing  125
  PAYG Subscriptions  127
    Nutanix Direct Subscription  128
    Usage Metering  130
  License Capacity Reservation  135
    Reserving License Capacity  137
    Unreserving Reserved License Capacity  139
  AWS Marketplace Private Offers  141
  Subscription Management  147
    Changing or Cancelling a Subscription Plan  148
    Viewing Billing and Usage Details  149
    Using the Usage Analytics API  150
User VM Network Management  156
  Creating a UVM Network  156
    Creating a UVM Network using Prism Element  157
    Creating a UVM Network using Prism Central  159
  Updating a UVM Network  161
    Updating a UVM Network using Prism Element  161
    Updating a UVM Network using Prism Central  162
  Using EC2 Instances on the Same Subnet as UVMs  162
  AWS Elastic Network Interfaces (ENIs) and IP Addresses  163
  Adding a Virtual Network Interface to a UVM  163
  Enabling Outbound Internet Access to UVMs  164
  Enabling Inbound Internet Access to UVMs  164
  Deploying a Load Balancer to Allow Internet Access  165
  Prism Central UI Access for Site-to-Site VPN Setup  167
Network Security using AWS Security Groups  169
  Controlling Traffic With Security Groups  170
  Default Security Groups  170
  Custom Security Groups  171
  Ports and Endpoints Requirements  175
Cluster Management  180
  Updating Cluster Capacity  180
  Manually Replacing a Host  185
  Creating a Heterogeneous Cluster  186
  Hibernate and Resume in NC2  187
    Hibernating Your NC2 Cluster  188
    Resuming an NC2 Cluster  189
    Limitations in Hibernate and Resume  190
  Terminating a Cluster  191
  Multicast Traffic Management  192
    Configuring AWS Transit Gateway for Multicast  196
  AWS Events in NC2  197
    Displaying AWS Events  198
  Viewing Licensing Details  199
  Support Log Bundle Collection  200
  Excluding Clusters From the VPC Gateway Node Election Process  200
Cluster Protect Configuration  202
  Prerequisites for Cluster Protect  204
  Limitations of Cluster Protect  205
  Protecting NC2 Clusters  206
    Creating S3 Buckets  207
    Protecting Prism Central Configuration  208
    Deploying Multicloud Snapshot Technology  209
    Protecting UVM and Volume Groups Data  210
    Disabling Cluster Protect  213
  Recovering NC2 Clusters  214
    Setting Clusters to Failed State  215
    Recreating a Cluster  217
    Recovering Prism Central and MST  222
    Recovering UVM and Volume Groups Data  224
    Reprotecting Clusters and Prism Central  227
  CLI Commands Library  228
NC2 Management Consoles  232
  NC2 Console  232
    Main Menu  232
    Navigation Menu  233
    Audit Trail  236
    Notification Center  237
    Configuring Email Notifications for Alerts  238
NC2 User Management  240
  User Roles  240
  Adding Users from the NC2 Console  242
  Managing Support Authorization  254
NC2 API Key Management  256
Cost Analytics  259
  Integrating Cost Governance with NC2  259
  Displaying Cost Analytics in the Cost Governance Console  259
File Analytics  261
Disaster Recovery  262
  Disaster Recovery Using Multicloud Snapshot Technology (MST)  262
    MST DR with Zero Compute Deployment  263
    MST DR with Pilot Light Deployment  264
  Disaster Recovery with Layer 2 Stretch  265
    Use Case  265
    Configuring Disaster Recovery with Layer 2 Stretch over VTEP  266
  Preserving UVM IP Addresses  268
Third-Party Backup Solutions  271
System Maintenance  272
  Health Check  272
  Monitoring Certificates  272
  Nutanix Software Updates  272
  Managing Nutanix Licenses  272
  System Credentials  273
  Managing Access Keys and AWS Service Limits  273
  Emergency Maintenance  273
  Troubleshooting Deployment Issues  274
  Documentation Support and Feedback  274
  Nutanix Support  275
  AWS Support  275
Release Notes  276
Copyright  277

ABOUT THIS DOCUMENT

This user guide describes the deployment processes for NC2 on AWS. The guide provides instructions for setting up the Nutanix resources required for an NC2 on AWS deployment and for subscribing to NC2 payment plans. It also provides detailed steps for UVM network management, end-to-end steps for creating a Nutanix cluster, and more.

This document is intended for users responsible for the deployment and configuration of NC2 on AWS. Readers must be familiar with AWS concepts, such as AWS EC2 instances, AWS networking and security, AWS storage, and VPN/Direct Connect. Readers must also know other Nutanix products, such as Prism Element, Prism Central, and NCM Cost Governance (formerly Beam).

Document Organization

The following table shows how this user guide is organized and helps you find the sections most relevant to the tasks that you want to perform.

Table 1: NC2 on AWS User Guide Roadmap

For information on | See the following
A high-level overview of NC2, essential concepts for NC2 on AWS, architecture, and infrastructure guidance. | Nutanix Cloud Clusters (NC2) Overview on page 12
Data encryption and network security details for NC2 clusters. | NC2 Security Approach on page 28
Details on the various subscription and payment plans and the billing workflow, and how to cancel and manage your existing subscription plan. | NC2 Licensing and Billing
Getting started with NC2 on AWS, its requirements and limitations, and creating a My Nutanix account and starting a free trial for NC2. | Getting Started With NC2 on page 30
Details on the EC2 instance tenancy types, including dedicated hosts. | EC2 Instance Tenancy Types
A high-level deployment workflow. | NC2 on AWS Deployment Workflow
To create the various AWS resources required for NC2 on AWS. | Create AWS Resources Manually
To create an organization, add your cloud account to NC2, create an NC2 cluster, and create an AWS VPC endpoint for S3. | Cluster Deployment
Details on the supported EC2 instance tenancy types. | EC2 Instance Tenancy Types
To configure a proxy server with NC2. | Proxy Server Configuration
To deploy and configure Prism Central and log into a cluster by using the Prism Element web console or SSH. | Prism Central Configuration
To create and manage UVM networks. | User VM Network Management
To identify security groups and policies for traffic control, modify default UVM security groups, and create custom security groups. | Network Security using AWS Security Groups
Details about the port and endpoint requirements for inbound and outbound communication. | Ports and Endpoints Requirements
To create a heterogeneous cluster, update cluster capacity, and hibernate, resume, and terminate a cluster. | Cluster Management on page 180
To configure and use the Cluster Protect feature to protect your NC2 clusters. | Cluster Protect Configuration
To add users to NC2, manage NC2 user roles, and manage authorization for Nutanix Support. | NC2 User Management
Planning guidance for size and cost optimization. | NC2 Planning Guidance on page 16
Supported regions for various bare-metal instances and their limitations while using NC2 on AWS. | Supported Regions and Bare-metal Instances on page 34
To analyze your cloud consumption using Cost Governance. | Cost Analytics on page 259

Cloud Clusters (NC2) | About This Document | 6
To configure Disaster Recovery and preserve IP addresses during partial and full subnet failover. | Disaster Recovery
Reference information on the system and operational features, such as health checks and routine maintenance tasks. | System Maintenance on page 272

Reference Information for NC2

The following additional documentation is available for NC2. While using NC2, you need to use several other Nutanix products, such as Prism, Flow Virtual Networking, and Nutanix Disaster Recovery. Nutanix recommends that you read the product documentation for these products and understand how you can use them.

Table 2: NC2 on AWS Reference Documents

Document | Description
Nutanix Validated Design | Illustrates an example of a typical NC2 customer deployment that has been validated by Nutanix.
Nutanix Cloud Clusters on AWS - Solution Tech Note | Explains how key NC2 features are used, such as migration to AWS, storage, high availability, capacity optimization, encryption, and so on.
Hybrid Cloud Design Guide | Describes the components, integration, and configuration for the Nutanix Validated Design (NVD) packaged hybrid cloud solution, and covers topics related to core Nutanix infrastructure, backup and disaster recovery, and self-service automation and integration with third-party applications.
Nutanix Cloud Clusters on AWS GovCloud Supplement | Provides supplementary information on running a secure Nutanix cluster on AWS GovCloud.
Nutanix Cloud Clusters on AWS Release Notes | Provides details on the new features released for NC2, supported AOS and Prism Central versions, and known and resolved issues.
Compatibility and Interoperability Matrix | Provides a compatibility and interoperability matrix for various NC2 features against specific AOS versions.
Nutanix Configuration Maximums | Lists the configuration limits for NC2.
NC2 on AWS Networking Best Practices | Covers networking concepts and best practices for networking in NC2 on AWS.
Nutanix University | Lists the Nutanix University videos on some key concepts, features, and configurations in NC2.

Table 3: Supporting Nutanix Product Documents

Document | Description
Prism Central Infrastructure Guide | Describes the infrastructure-specific attributes and operations that you can perform using Prism Central, such as monitoring system performance and configuring settings related to compute, storage, network, security, and data protection and recovery.
Prism Central Admin Center Guide | Provides information about how to configure and manage administrative tasks using Admin Center in Prism Central.
Prism Central Alerts and Events Reference Guide | Provides information about all the alerts and events generated in Prism Central across registered clusters, and how to view and manage the alerts and events in Prism Central.
Prism Element Web Console Guide | Describes how to manage your clusters through the Prism Element web console.
Flow Virtual Networking Guide | Explains Flow Virtual Networking concepts and architecture, and describes how to enable and deploy Flow Virtual Networking on Prism Central.
Nutanix Disaster Recovery Guide | Explains Nutanix Disaster Recovery concepts, architecture, and requirements. It also describes how to enable and configure disaster recovery.

Document Revision History

The document revision history describes the changes that were implemented in this document. The changes are listed by revision date, starting with the release of AOS 6.8.

Table 4: Revision history

Revision Date | Revision Description
December 12, 2024 | Added information on the minimum number of EBS volumes that can be attached to instances. See Supported Storage Volumes.
December 11, 2024 | Added information on Prism Central port requirements. See Ports and Endpoints Requirements.
December 05, 2024 | Added the following topics: Blockstore Support with SPDK; Excluding Clusters From the VPC Gateway Node Election Process; MST DR with Zero Compute Deployment.
December 03, 2024 | Added support for m6id.metal in the Asia Pacific (Singapore) and Canada (Central) regions. See Supported Regions and Bare-metal Instances.
November 28, 2024 | Added support for the Asia Pacific (Malaysia) region. See Supported Regions and Bare-metal Instances.
November 27, 2024 | Updated the following sections for improvements and fixes: Added information on Prism Central health alert notifications (see Health Check); added information on the DNS server for Prism Central (see Deploying Prism Central).
November 14, 2024 | Updated the S3 bucket object lock requirements for Cluster Protect. See Creating S3 Buckets.
October 23, 2024 | Added support for AOS 6.8 and later for running Microsoft Windows Server on an NC2 on AWS cluster. See Windows License Included Instances.
October 17, 2024 | Updated support for synchronous replication in Disaster Recovery.
September 30, 2024 | Added a checklist of all the required and optional steps that must be performed while deploying NC2 on AWS. See NC2 on AWS Deployment Workflow.
September 18, 2024 | Updated the following sections for improvements and fixes: Added support for AOS 6.8.1 for running Microsoft Windows Server on an NC2 on AWS cluster (see Windows License Included Instances); added support for user-defined tags for cluster resources (see Tags for AWS Resources); updated information related to user-defined tags, overlapping of subnets, and Flow Virtual Networking subnet CIDR (see Creating a Cluster); revised the Note to update the CloudFormation template (see Requirements for NC2 on AWS and Adding an AWS Cloud Account).
September 17, 2024 | Added support for Multicloud Snapshot Technology (MST) Disaster Recovery (DR) with Pilot Light Deployment. See MST DR with Pilot Light Deployment.
September 06, 2024 Revised the Cluster Protect Configuration chapter for Prism Central backup and restore. August 16, 2024 The following sections are updated for improvements and fixes: Added support for Synchronous replication in Disaster Recovery. Added a note about NVIDIA host driver version 16.x or 17.x in Installing NVIDIA Grid Host Driver. Added information on auto-replacement of hosts in Manually Replacing a Host. Updated information on reserving licenses during a free trial in Reserving License Capacity. Updated information on using a workspace in Creating My Nutanix Account. Added information on a proxy server in Resuming an NC2 Cluster. Added information on availability zone in Limitations of Cluster Protect and Recovering NC2 Clusters. Added information on the tags used for AWS resources in Creating a Cluster. July 31, 2024 Updated the information on the number of supported entities with the Cluster Protect feature and MST in the Limitations of Cluster Protect section. July 25, 2024 The NC2 Licensing and Billing chapter is updated. July 17, 2024 Updated information on ERPs in transit and user VPC. July 16, 2024 Updated Ports and Endpoints Requirements. June 28, 2024 Minor enhancements throughout the document, including steps to access the NC2 console from the My Nutanix dashboard, updating the CloudFormation stack, and cloud account credentials. June 12, 2024 The following sections are updated for improvements and fixes: Updating CloudFormation Stack Deploying a Load Balancer to Allow Internet Access Adding a Virtual Network Interface to a UVM May 15, 2024 Revised the document for the following features delivered as part of AOS 6.8 and Prism Central 2024.1 release: Attach EBS volumes to NC2 clusters. See Supported Storage Volumes. Enable Flow Virtual Networking on AWS. See Creating a Cluster.
Use the Point-in-Time Backup feature to back up and restore your Prism Central data. See Protecting Prism Central Configuration and Recovering Prism Central. Improvements in Cluster Protect. See Cluster Protect Configuration. Create a subnet from Prism Central using the Create Subnets option. See Creating a UVM Network using Prism Central.
NUTANIX CLOUD CLUSTERS (NC2) OVERVIEW
Nutanix Cloud Clusters (NC2) delivers a hybrid multicloud platform designed to run applications in private or multiple public clouds. NC2 operates as an extension of on-premises datacenters and provides a hybrid cloud architecture that spans private and public clouds, operated as a single cloud. NC2 extends the simplicity and ease of use of the Nutanix software stack to public clouds using a unified management console. Using the same platform on both clouds, NC2 on AWS reduces the operational complexity of extending, bursting, or migrating your applications and data between clouds. NC2 runs AOS and AHV on the public cloud instances and packages the same CLI, GUI, and APIs that cloud operators use in their on-premises environments. NC2 resources, including bare-metal hosts, are deployed in your AWS account so that you can leverage your existing cloud provider relationships, credits, commits, and discounts. Nutanix provisions the full bare-metal host for your use; every customer that deploys NC2 provisions bare-metal hosts independently of other customers, and the hosts are never shared by multiple customers or tenants.
Figure 1: Overview of the Nutanix Hybrid Multicloud Platform
NC2 on AWS places the complete Nutanix hyperconverged infrastructure (HCI) stack directly on a bare-metal instance in Amazon Elastic Compute Cloud (EC2).
This bare-metal instance runs a Controller VM (CVM) and Nutanix AHV as the hypervisor, like any on-premises Nutanix deployment, using the AWS Elastic Network Interface (ENI) to connect to the network. AHV user VMs do not require any additional configuration to access AWS services or other EC2 instances. AHV runs a native networking stack that integrates UVM networking with AWS networking. AWS allocates all UVM IP addresses from the AWS subnets in the existing VPCs. Integrating the native networking stack with AWS networking allows you to seamlessly use AWS services on AHV user VMs without encountering the complexities of a network deployment or performance loss. Starting with AOS 6.8, you can enable Flow Virtual Networking on AWS. NC2 on AWS uses the Flow Virtual Networking network controller to create overlay networks, providing granular control for Nutanix administrators while enabling seamless connectivity to AWS services. Flow Virtual Networking depends on Prism Central and requires reliable connectivity between Prism Central and its registered AHV clusters. To use the Flow Virtual Networking feature on NC2, ensure that the AOS version on your system is 6.8 or later and the Prism Central version is pc.2024.1 or later. To learn more about Flow Virtual Networking, see Flow Virtual Networking. NC2 can withstand hardware failures and software glitches and ensures that application availability and performance are managed as per the configured resilience. Combining features such as native rack awareness with AWS partition placement groups, supported with default instance tenancy, allows Nutanix to operate freely in a dynamic cloud environment.
Note: AWS EC2 dedicated hosts do not support partition placement groups. Without partition placement control, Nutanix cannot guarantee resiliency against AWS rack partition failures with Dedicated Hosts.
You must implement your own data backup strategy that can withstand AWS rack partition failures with Dedicated Hosts. In addition to the traditional resilience solutions for Prism Central, NC2 on AWS also provides the Cluster Protect feature, which helps to protect Prism Central, UVM, and volume group data in case of full cluster failures caused by scenarios such as Availability Zone (AZ) failures or users shutting down all nodes from the AWS console. For details, see Cluster Protect Configuration. NC2 on AWS gives on-premises workloads a home in the cloud, offering native access to available AWS services without requiring you to reconfigure your software. You use the NC2 console to deploy a cluster in a VPC in AWS. After you launch a Nutanix cluster in AWS by using NC2, you can operate the cluster in the same manner as you operate your on-premises Nutanix cluster, with no change in nCLI, the Prism Element and Prism Central web consoles, and APIs. You use the NC2 console to create, hibernate, resume, update, and delete your Nutanix cluster.
Figure 2: Overview of NC2 on AWS
Following are the key points about NC2 on AWS:
Runs on EC2 bare-metal instances, either with default tenancy or with dedicated host tenancy, based on your selections while creating a cluster. For more information on the supported EC2 bare-metal instances, see Supported Regions and Bare-metal Instances.
Supports three or more EC2 bare-metal instances. See the Limitations section for more information about the number of nodes supported by NC2.
Supports only the AHV hypervisor on Nutanix clusters running in AWS.
Supports both an existing on-premises Prism Central instance and a Prism Central instance deployed on NC2 on AWS.
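The key points above amount to a few hard constraints that you can sanity-check before requesting a deployment. The following is a minimal illustrative sketch; the function and its argument names are hypothetical and not part of the NC2 API:

```python
# Hypothetical pre-flight validation of an NC2 on AWS cluster request.
# Field names are illustrative and do not reflect the real NC2 console or API.

MIN_NODES = 3  # NC2 on AWS requires three or more EC2 bare-metal instances

def validate_cluster_request(num_nodes: int, hypervisor: str, tenancy: str) -> list[str]:
    """Return a list of constraint violations (an empty list means valid)."""
    errors = []
    if num_nodes < MIN_NODES:
        errors.append(f"need at least {MIN_NODES} bare-metal instances, got {num_nodes}")
    if hypervisor != "AHV":
        errors.append("NC2 on AWS supports only the AHV hypervisor")
    if tenancy not in ("default", "dedicated-host"):
        errors.append(f"unknown tenancy model: {tenancy}")
    return errors
```

A request for a three-node AHV cluster with default tenancy passes; a two-node or non-AHV request is rejected before any cloud resources are touched.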
Use Cases
NC2 on AWS is ideally suited for the following key use cases:
Disaster Recovery on AWS: Configure a Nutanix Cloud Cluster on AWS as your remote backup and data replication site to quickly recover your business-critical workloads in case of a disaster recovery (DR) event for your primary data center. Benefit from AWS' worldwide geographical presence and elasticity to create an Elastic DR configuration, and save DR costs by expanding your pilot light cluster only when the DR need arises.
Capacity Bursting for Dev/Test: Increase your developer productivity by provisioning additional capacity for Dev/Test workloads on NC2 on AWS when you are running out of capacity on-premises. Utilize a single management plane to operate and manage your workloads across your data center and NC2 on AWS environments.
Modernize Applications with AWS: Significantly accelerate your time to migrate applications to AWS with a simple lift-and-shift operation, with no need to refactor your workloads or rewrite your applications. Get your on-premises workloads to AWS faster and modernize your applications with direct integrations with AWS services.
For more information, see NC2 Use Cases.
NC2 eliminates the complexities in managing networking, using multiple infrastructure tools, and rearchitecting the applications.
NC2 offers the following key benefits:
Cluster management:
A single management console to manage private and public clouds
Built-in integration into public cloud networking
Burst into public clouds to meet a seasonal increase in demand
Modernize applications and connect natively to cloud services
Use public clouds for high availability and disaster recovery
Easy to deploy and manage
App mobility:
Lift and shift applications with no retooling or refactoring
The same performance on-premises and in public clouds
Cost management:
Flexible subscription options
Pay based on your actual usage
Use your existing Nutanix licenses for NC2
AWS Infrastructure for NC2
The NC2 console places the complete Nutanix hyperconverged infrastructure (HCI) stack directly on a bare-metal instance in Amazon Elastic Compute Cloud (EC2). This bare-metal instance runs a Controller VM (CVM) and Nutanix AHV as the hypervisor, like any on-premises Nutanix deployment, using the AWS Elastic Network Interface (ENI) to connect to the network. AHV user VMs do not require any additional configuration to access AWS services or other EC2 instances. Within the VPC where NC2 is deployed, you need the following subnets to manage inbound and outbound traffic:
One private management subnet for internal cluster management and communication between the CVMs, AHV, and so on.
One public subnet with an Internet gateway and NAT gateway to provide external connectivity to the NC2 portal.
One or more private subnets for UVM traffic, depending on your needs.
Note: All NC2 cluster deployments are single-AZ deployments. Therefore, your UVM subnets will be in the same AZ as the Management subnet. You must not add the Management subnet as a UVM subnet in Prism Element because UVMs and Management VMs must be on separate subnets.
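The subnet rules above can be expressed as a quick sanity check: the management subnet must be separate from the UVM subnets, at least one UVM subnet must exist, and everything must live in the same AZ. This is an illustrative sketch only; the dict layout is hypothetical and not part of any Nutanix or AWS interface:

```python
# Illustrative check of the NC2 VPC subnet layout described above.
# The plan dict layout is hypothetical, not a Nutanix or AWS API structure.

def check_subnet_plan(plan: dict) -> list[str]:
    """Return violations of the NC2 single-AZ subnet rules (empty means OK)."""
    problems = []
    mgmt = plan["management_subnet"]
    uvm_subnets = plan["uvm_subnets"]
    if not uvm_subnets:
        problems.append("at least one private UVM subnet is required")
    if mgmt["cidr"] in {s["cidr"] for s in uvm_subnets}:
        problems.append("the management subnet must not be reused as a UVM subnet")
    azs = {mgmt["az"]} | {s["az"] for s in uvm_subnets}
    if len(azs) > 1:
        problems.append("all NC2 subnets must be in the same availability zone")
    return problems

# Example plan (CIDRs and AZ are placeholders):
plan = {
    "management_subnet": {"cidr": "10.0.128.0/24", "az": "us-west-2a"},
    "uvm_subnets": [{"cidr": "10.0.1.0/24", "az": "us-west-2a"}],
}
```

A plan that reuses the management CIDR for UVMs, or that spreads subnets across AZs, returns the corresponding violations.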
Figure 3: AWS Infrastructure for NC2
When you deploy a Nutanix cluster in AWS by using the NC2 console, you can deploy the cluster either in a new VPC and private subnet or in an existing VPC and private subnet. If you opt to deploy the cluster in a new VPC, the NC2 console provisions a new VPC and private subnet for management traffic in AWS during the cluster creation process. You must manually create one or more separate subnets in AWS for user VMs.
NC2 Planning Guidance
This section describes how you can plan costs, sizing, and capacity for your Nutanix Cloud Clusters (NC2) infrastructure.
Costs
Costs for deploying an NC2 infrastructure include the following:
AWS EC2 bare-metal instances: AWS sets the cost for EC2 bare-metal instances. Engage with AWS or see AWS documentation about how your EC2 bare-metal instances are billed. For more information, see: EC2 Pricing AWS Pricing Calculator
Note: If you use EC2 dedicated host tenancy, the cost might be higher than for instances running with default tenancy. For more information on AWS pricing for dedicated hosts, see AWS Documentation.
NC2 on AWS: Nutanix sets the costs for running Nutanix clusters in AWS. Engage with your Nutanix sales representatives to understand the costs associated with running Nutanix clusters on AWS.
Sizing
You can use the Nutanix Sizer tool to create the optimal Nutanix solution for your needs. See the Sizer User Guide for more information.
Capacity Optimizations
The Nutanix enterprise cloud offers capacity optimization features that improve storage utilization and performance. The two key features are compression and deduplication.
Compression
Nutanix systems currently offer the following two types of compression policies:
Inline: The system compresses data synchronously as it is written to optimize capacity and to maintain high performance for sequential I/O operations. Inline compression only compresses sequential I/O to avoid degrading performance for random write I/O.
Post-Process: For random workloads, data writes to the SSD tier uncompressed for high performance. Compression occurs after cold data migrates to lower-performance storage tiers. Post-process compression acts only when data and compute resources are available, so it does not affect normal I/O operations.
Nutanix recommends that you carefully consider the advantages and disadvantages of compression for your specific applications. For further information on compression, see the Nutanix Data Efficiency tech note.
NC2 on AWS Deployment Models
NC2 on AWS supports several deployment models that help you deploy NC2 in varying customer environments. The following are the most common deployment models:
Single Availability Zone Deployment
Multiple Availability Zone Deployment
Multicluster Deployment
Several AWS components get installed as part of an NC2 on AWS deployment. For more information, see AWS Components Installed. A Nutanix cluster is deployed in AWS in approximately 30 minutes. If there are any issues with provisioning the Nutanix cluster, see the Notification Center on the NC2 console. For more information, see Notification Center. The following table lists the time taken by each stage of cluster deployment.
Table 5: Cluster Deployment Stages
Deployment Stage | Duration
AHV Download and Installation | Approximately 35 to 45 minutes. Happens in parallel on all nodes.
AOS Tar Download | Approximately 4 minutes. Happens in parallel on all nodes.
AOS Installation | Approximately 15 minutes. Happens in parallel on all nodes.
Cluster creation | Approximately 3 minutes per node. Happens in sequence on all nodes.
Restore from AWS S3 | Applicable for the Hibernate - Resume use case. Approximately 1.5 hours per 10 TB of data to hydrate the cluster once the nodes are available.
Regardless of your deployment model, there are a few general outbound requirements for deploying a Nutanix cluster in AWS on top of the existing requirements that on-premises clusters use for support services. For more information on the endpoints the Nutanix cluster needs to communicate with for a successful deployment, see Requirements for Outbound Communication.
Single Availability Zone Deployment
NC2 on AWS deploys a cluster in a single availability zone by default. Deploying a single cluster in AWS is beneficial for more ephemeral workloads where you want to take advantage of performance improvements and use the same automation pipelines you use on-premises. You can use backup products compatible with AHV, such as Veeam or HYCU, to target S3 as the backup destination, and, depending on the failure mode you want to recover from, you can also replicate that S3 bucket to a different Region. For more information, see the Nutanix Cloud Clusters on Amazon Web Services Tech Note.
Multiple Availability Zone Deployment
If you do not have an on-premises cluster available for data protection or you want to use the low-latency links between Availability Zones, you can create a second NC2 cluster in a different Availability Zone.
Figure 4: Multiple Availability Zone Deployment
You can isolate your private subnets for UVMs between clusters and use the private Nutanix management subnets to allow replication traffic between them. All private subnets can share the same routing table. You must edit the inbound access in each Availability Zone's security group as shown in the following tables to allow replication traffic.
Table 6: Availability Zone 1 NC2 on AWS Security Group Settings
Type | Protocol | Port Range | Source | Description
Custom TCP rule | TCP | 9440 | 10.88.4.0/24 | UI access
Custom TCP rule | TCP | 2020 | 10.88.4.0/24 | Replication
Custom TCP rule | TCP | 2009 | 10.88.4.0/24 | Replication
Table 7: Availability Zone 2 NC2 on AWS Security Group Settings
Type | Protocol | Port Range | Source | Description
Custom TCP rule | TCP | 9440 | 10.88.2.0/24 | UI access
Custom TCP rule | TCP | 2020 | 10.88.2.0/24 | Replication
Custom TCP rule | TCP | 2009 | 10.88.2.0/24 | Replication
If Availability Zone 1 goes down, you can activate protected VMs on the cluster in Availability Zone 2. Once Availability Zone 1 comes back online, you can redeploy a Nutanix cluster in Availability Zone 1 and reestablish data protection. New clusters require full replication.
Multicluster Deployment
To protect your Nutanix cluster if there is an Availability Zone failure, use your existing on-premises Nutanix cluster as a disaster recovery target.
Figure 5: Hybrid Deployment
The following table lists the inbound ports you need to establish replication between an on-premises cluster and a Nutanix cluster running in AWS. You can create these ports on the infrastructure subnet security group that was automatically created when you deployed NC2 on AWS. The ports must be open in both directions.
Table 8: Inbound Security Group Rules for AWS
Type | Protocol | Port Range | Source | Description
SSH | TCP | 22 | On-premises CVM subnet | SSH into the AHV node
Custom TCP rule | TCP | 2222 | On-premises CVM subnet | SSH access to the CVM
Custom TCP rule | TCP | 9440 | On-premises CVM subnet | UI access
Custom TCP rule | TCP | 2020 | On-premises CVM subnet | Replication
Custom TCP rule | TCP | 2009 | On-premises CVM subnet | Replication
Note: Make sure you set up the cluster virtual IP address for your on-premises and AWS clusters.
This IP address is the destination address for the remote site. Nutanix has built-in replication capabilities to recover from complete cluster failure. NC2 supports Asynchronous, NearSync, and Synchronous replication. You can set your Recovery Point Objective (RPO) to one hour with asynchronous replication. For more information, see Disaster Recovery.
AWS Components Installed
When deploying NC2 on AWS, several AWS components get installed. The following table lists the mandatory AWS components that are either installed automatically when you select the option to create a new VPC during NC2 on AWS deployment, or that you need to install manually when you choose to use an existing VPC.
Note: You can configure Prism to be accessible from the public network and then manually configure the AWS resources, such as Load Balancer, NAT Gateway, Public IPs, and Internet Gateway, for public access.
Table 9: Mandatory AWS Components Installed on Deployment
AWS Component | Charged by AWS | Description
Compute (Dedicated EC2 Hosts)
Bare-metal Instances | Yes | For the list of supported AWS EC2 bare-metal instances, see Supported Regions and Bare-metal Instances.
Networking and Security
Elastic Network Interfaces (ENIs) | No | ENIs are used for UVMs on each host in the cluster. Each ENI can have up to 50 IP addresses with 49 usable IPs (because the host will use 1 IP). If the IPs are exhausted by the UVMs, then another ENI will be added to the host. There can be a maximum of 14 ENIs per host.
Load Balancer | Yes | A load balancer is deployed only when deploying in a new VPC from the NC2 portal and only when Prism access from the Internet is set to public. If deploying in an existing VPC, you can leverage an existing Load Balancer. Charges are also applicable for additional items.
NAT Gateway | Yes | A NAT gateway is deployed only when deploying in a new VPC.
If deploying in an existing VPC, you can leverage an existing NAT Gateway. Charges are also applicable for data traffic.
Internet Gateway | No | An Internet Gateway is deployed only when deploying in a new VPC. If deploying in an existing VPC, you can leverage an existing Internet Gateway. Charges are only applicable for data traffic.
Public IP | Yes | A public IP is allocated when a load balancer is deployed.
Security Groups | No | You can create and associate AWS security groups with an EC2 instance to control outbound and inbound traffic for that instance. A default security group is created for every AWS VPC. You can create additional security groups for each VPC and then associate them with the resources in that VPC.
Access Control Lists (ACLs) | No | You can use the default ACL, or you can create a custom ACL with rules that are similar to the rules for your security groups.
VPC | No | A VPC is deployed only when you choose to create a new VPC at the time of cluster creation on the NC2 portal. Charges are also applicable for data traffic.
Subnets | No | When you deploy a Nutanix cluster in AWS by using the NC2 console, you can deploy the cluster either in a new VPC and private subnet or in an existing VPC and private subnet. If you opt to deploy the cluster in a new VPC, the NC2 console provisions a new VPC and private subnet for management traffic in AWS during the cluster creation process. You must manually create one or more separate subnets in AWS for user VMs.
VPC Endpoints | Yes | NC2 on AWS uses the VPC private S3 and EC2 endpoints when using a proxy server. These endpoints can connect to AWS Services privately from your VPC without going through the public Internet. There is no additional charge for using gateway endpoints. You are billed for hourly usage and data processing charges for using interface endpoints. For more information, see AWS VPC Private Endpoints.
Storage
Elastic Block Store (EBS) | Yes | Each node in the NC2 cluster has two EBS volumes attached (AHV EBS and CVM EBS). Both are encrypted gp3 volumes and are used as boot volumes for AHV and CVM, respectively. The AHV EBS volume is 100 GB and the CVM EBS volume is 150 GB; software, configs, and logs require 250 GB per host. This storage is also needed during cluster creation. Additional EBS volumes can be attached to i3.metal, i3en.metal, and i4i.metal bare-metal instances to increase the data storage capacity of each host. For more information, see Supported Storage Volumes.
EBS Snapshots | Yes | Upon hibernating a cluster, snapshots of the EBS volumes on each host are taken.
S3 | Yes | An S3 bucket is created at the time of cluster creation, which remains empty until the Hibernate feature is used. When the Hibernate feature is used, all data from your NC2 cluster is placed in the S3 bucket. Once the cluster is resumed, data is hydrated back onto hosts but also stays in the S3 bucket as a backup. Note: The S3 bucket used must not be publicly accessible. If you intend to protect your clusters using the Cluster Protect feature, you need two more S3 buckets. For more information, see Prerequisites for Cluster Protect. Charges are also applicable for data traffic.
The following table lists the optional AWS components that can be used with the NC2 on AWS deployment.
Table 10: Optional AWS Components
AWS Component | Charged by AWS | Description
Network Connectivity
VPN | Yes | A VPN or Direct Connect is needed for connectivity between on-premises and AWS.
Direct Connect | Yes | Charges are also applicable for data traffic.
Transit Gateway | Yes | Charges are also applicable for data traffic.
Network Services
AWS DNS | | Used by clusters for VMs by default. You can configure AHV to use your own DNS.
Route 53 | Yes | An NC2 cluster uses the default AWS DNS server from the VPC where the cluster is deployed to resolve the FQDN of EC2 and S3 endpoints. If you want to use your own DNS server, you can use the Route 53 service to forward DNS requests to that DNS server.
You can view all the resources allocated to a cluster running on AWS. For more information on how to view the cloud resources created by NC2, see Viewing Cloud Resources.
Tags for AWS Resources
Tags help with resource identification and management.
Default Tags
When deploying resources in AWS, NC2 applies four predefined tags to every AWS resource it creates, as listed in the following table. The values of these tags differ based on certain parameters such as cluster ID or cluster UUID. Besides the default tags, there might be additional tags specific to certain resources. For example, EC2 instances might have a tag for node ID.
Table 11: Pre-defined Tags
Key | Value
nutanix:clusters:owner | nutanix-clusters
nutanix:clusters:gateway | https://gateway-external-api.cloud.nutanix.com
nutanix:clusters:cluster-uuid | (cluster-specific)
nutanix:clusters:cluster-id | (cluster-specific)
User-Defined Tags
During cluster creation: You can apply user-defined tags to cluster resources by entering Key and Value pairs in the User Defined Tags field, located in the General tab, under Advanced Settings of the NC2 console. See Creating a Cluster on page 72.
Post cluster creation: You can add, delete, or update tags by selecting the existing cluster and navigating to Settings > Cluster Configuration > User Defined Tags.
Tags applied in the User Defined Tags field are only assigned to resources created by NC2. Manually created resources, such as a BYO-VPC, need to be tagged manually in AWS. Manual tag changes made directly in AWS are not reflected in the NC2 portal.
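The resulting tag set on an NC2-created resource is the union of your user-defined tags and the four predefined tags. The sketch below illustrates that combination; the merge function is hypothetical (NC2 applies the tags on your behalf), and the UUID/ID values are placeholders:

```python
# Illustrative sketch of how NC2's predefined tags combine with user-defined
# tags on a created resource. The merge function is hypothetical; NC2 itself
# applies these tags during deployment.

PREDEFINED_TAGS = {
    "nutanix:clusters:owner": "nutanix-clusters",
    "nutanix:clusters:gateway": "https://gateway-external-api.cloud.nutanix.com",
    # cluster-uuid and cluster-id values are generated per cluster
}

def merged_tags(cluster_uuid: str, cluster_id: str, user_tags: dict) -> dict:
    """Combine user-defined and predefined tags; predefined keys take precedence."""
    tags = dict(user_tags)
    tags.update(PREDEFINED_TAGS)
    tags["nutanix:clusters:cluster-uuid"] = cluster_uuid
    tags["nutanix:clusters:cluster-id"] = cluster_id
    return tags
```

For example, adding a user tag env=dev yields a resource carrying that tag alongside the four nutanix:clusters:* tags.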
For resources shared among multiple clusters (for example, a VPC used by several clusters), the initially applied tag remains unchanged and cannot be modified while the resource is shared.
Viewing Cloud Resources
You can view all the resources allocated to a cluster running on AWS. To view the cloud resources created by NC2, perform the following:
Procedure
1. Sign in to the NC2 console:
a. Sign in to https://my.nutanix.com using your My Nutanix credentials.
b. On the My Nutanix dashboard, ensure that you select from the Workspace list the workspace that you have used for NC2. For more information on workspaces, see Workspace Management.
c. Go to Cloud Services > Nutanix Cloud Clusters (NC2).
d. Click Launch. You are redirected to the NC2 console.
2. In the Clusters page, click the name of the cluster.
3. On the left navigation pane, click Cloud Resources. The Cloud Resources page displays all the resources associated with the cluster.
Figure 6: Viewing Cloud Resources
Blockstore Support with SPDK
When you install a new NC2 on AWS cluster with AOS 7.0 or later, Blockstore is enabled by default on storage devices such as NVMe SSDs. Storage Performance Development Kit (SPDK) is automatically enabled on a cluster when Blockstore is active on the cluster with NVMe.
Note: Blockstore support with SPDK is not enabled when you upgrade an existing cluster to AOS 7.0. This feature is only available for new clusters running AOS 7.0.
AOS uses SPDK to gain zero-copy, direct parallel access to NVMe devices, unlocking their full potential. Blockstore combined with SPDK makes the AOS data path inside the CVM very lean, delivering performance benefits to applications.
The combination of Blockstore and SPDK provides the following benefits:
Ultra-low latency
Higher performance
Greater host CPU efficiency
With Blockstore, AOS replaces the Linux-kernel file system with userspace libraries that allow direct access to storage devices within the AOS VMs. This approach enables direct communication with the underlying storage media using SPDK APIs and avoids in-kernel system calls. For more information on Blockstore support with SPDK and how to verify if Blockstore with SPDK is enabled, see the Acropolis Advanced Administration Guide.
NC2 Architecture
The bare-metal instance runs the AHV hypervisor, which, like any on-premises deployment, runs a Controller Virtual Machine (CVM) with direct access to NVMe instance storage hardware. AOS Storage uses the following three core principles for distributed systems to achieve linear performance at scale:
Must have no single points of failure (SPOF).
Must not have any bottlenecks at any scale (must be linearly scalable).
Must apply concurrency (MapReduce).
Together, a group of Nutanix nodes forms a distributed system (Nutanix cluster) responsible for providing the Prism and Acropolis capabilities. Each cluster node has two EBS volumes attached. Both are encrypted gp3 volumes and are used as boot volumes for AHV and CVM, respectively. The AHV EBS volume is 100 GB and the CVM EBS volume is 150 GB. Additional EBS volumes can be attached to i3.metal, i3en.metal, and i4i.metal bare-metal instances to increase the local host storage. For more information, see Supported Storage Volumes. All services and components are distributed across all CVMs in a cluster to provide high availability and linear performance at scale.
Figure 7: NC2 on AWS Architecture
This enables our MapReduce framework (Curator) to use the full power of the cluster to perform activities concurrently, for example, data reprotection, compression, erasure coding, and deduplication.
Figure 8: Cluster Deployment in a VPC
Preventing Network Partition Errors
AOS Storage uses the Paxos algorithm to avoid split-brain scenarios. Paxos is a proven protocol for reaching consensus or quorum among several participants in a distributed system. Before any file system metadata is written to Cassandra, Paxos ensures that all nodes in the system agree on the value. If the nodes do not reach a quorum, the operation fails in order to prevent any potential corruption or data inconsistency. This design protects against events such as network partitioning, where communication between nodes may fail or packets may become corrupt, leading to a scenario where nodes disagree on values. AOS Storage also uses time stamps to ensure that updates are applied in the proper order.
Resolving Bad Disk Resources
AOS Storage incorporates a Curator process that performs background housekeeping tasks to keep the entire cluster running smoothly. The multiple responsibilities of Curator include ensuring file system metadata consistency and combing the extent store for corrupt and under-replicated data. Also, Curator scans extents in successive passes, computes each extent's checksum, and compares it with the metadata checksum to validate consistency. If the checksums do not match, the corrupted extent is replaced with a valid extent from another node. This proactive data analysis protects against data loss and identifies bad sectors, which you can use to detect disks that are about to fail.
Maintaining Availability: Node and Rack Failure
The cluster orchestrator running in the cloud service is responsible for maintaining your intended capacity under rack and node failures. By default, NC2 runs EC2 bare-metal instances in the Default tenancy model, where EC2 instances are spread across different partitions in a Placement Group.
While node failure resiliency can be achieved with both the default and dedicated host tenancy, rack failure resiliency is only supported with the default tenancy. AWS does not support partition placement groups with dedicated hosts; therefore, Nutanix does not guarantee resiliency against AWS rack partition failures. If you want to use a dedicated host tenancy, you must implement your own rack failure resiliency strategy to prevent data loss. Instances in the cluster are deployed using a partition placement group with seven partitions. A placement group is created for each instance type, and the instances are kept well balanced within the placement group. The combination of placement group and partition number is translated into the rack ID of the node. This enables AOS Storage to place metadata and data replicas in different fault domains.
Figure 9: NC2 on AWS – Partition Placement (Multi)
Setting up a cluster with redundancy factor 2 (RF2) protects data against a single rack failure, and setting it up with RF3 protects against a two-rack failure. Also, to protect against multiple correlated failures within a data center and an entire AZ failure, Nutanix recommends that you set up synchronous replication to a second cluster in a different AZ in the same Region, or asynchronous replication to an AZ in a different Region. AWS data transfer charges may apply. AWS deploys each node of the Nutanix cluster on a separate AWS rack (also called an AWS partition) for fault tolerance. If a cluster loses rack awareness, an alert is displayed in the Alerts dashboard of the Prism Element web console and the Data Resiliency Status dashboard displays a Critical status. A cluster might lose rack awareness if you:
Update the cluster capacity. For example, if you add or remove a node.
Manually replace a host, or the replace host action is automatically triggered by the NC2 console.
Change the Replication Factor (RF), that is, from RF2 to RF3.
- Create a cluster with either 8 or 9 nodes and configure RF3 on the cluster.
- Deploy a cluster in an extremely busy AWS region. This is a rare scenario in which the cluster might be created without rack awareness due to limited availability of resources in that region.

For more details on the Nutanix metadata awareness conditions, visit the AOS Storage section in the Nutanix Bible.

The Strict Rack Awareness feature helps to maintain rack awareness in the scenarios listed above. To maintain rack awareness for clusters on AWS, you need to enable the Strict Rack Awareness feature and maintain at least three racks for RF2 or at least five racks for RF3. This feature is disabled by default. You can run the following nCLI command to enable Strict Rack Awareness:

ncli cluster enable-strict-domain-awareness

If you want to disable Strict Rack Awareness, run the following nCLI command:

ncli cluster disable-strict-domain-awareness

Contact Nutanix Support for assistance if you receive an alert in the Prism Element web console that indicates your cluster has lost rack awareness.

Maintaining Availability: AZ Failure
AWS Availability Zones are distinct locations within an AWS Region that are engineered to be isolated from failures in other Availability Zones. They provide inexpensive, low-latency network connectivity to other Availability Zones in the same AWS Region. The instance storage is also ephemeral in nature. Although rare, failures can occur that affect the availability of instances in the same AZ. If you host all your instances in a single AZ that is affected by such a failure, none of your instances are available. Deploying a production cluster in an AZ without protection, either by using disaster recovery to on-premises or Nutanix Disaster Recovery, results in data loss if there is an AZ failure.
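The minimum rack counts that Strict Rack Awareness requires, as stated above (at least three racks for RF2, at least five for RF3), can be expressed as a simple pre-check. This helper is an illustrative sketch, not part of any Nutanix tooling:

```python
# Minimum number of racks (AWS partitions) required by Strict Rack
# Awareness for each replication factor, per the guidance above.
MIN_RACKS = {2: 3, 3: 5}

def meets_strict_rack_awareness(replication_factor: int, rack_count: int) -> bool:
    # Returns True when the cluster spans enough racks to keep strict
    # rack awareness for the given replication factor.
    required = MIN_RACKS.get(replication_factor)
    if required is None:
        raise ValueError(f"Unsupported replication factor: {replication_factor}")
    return rack_count >= required
```

For example, an RF3 cluster spread across only four racks would fail this check and risk losing rack awareness.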
You can use Nutanix Disaster Recovery to protect your data to another on-premises cluster or to another NC2 cluster in a different Availability Zone. For more information, see the Nutanix Disaster Recovery Guide.

Note: If your cluster runs in a single AZ without protection, either by using disaster recovery to on-premises or Nutanix Disaster Recovery, for more than 30 days, the Nutanix Support portal displays a notification indicating that your cluster is not protected. The notification includes a list of all the clusters that are in a single AZ without protection. Hover over the notification for more details and click Acknowledge. Once you acknowledge the notification, it disappears and reappears only if another cluster exceeds 30 days in a single Availability Zone without protection.

NC2 supports Asynchronous, NearSync, and Synchronous replication. For more information, see Disaster Recovery.

NC2 Security Approach
Nutanix takes a holistic approach to security and mandates the following to deploy a secure NC2 infrastructure:

An AWS account with the IAMFullAccess and AWSCloudFormationFullAccess permissions.
Note: NC2 requires these permissions only to create the CloudFormation template and does not use them for any other purpose.
The CloudFormation stack, namely Nutanix-Clusters-High-Nc2-Cloud-Stack-Prod, creates the Nutanix-Clusters-High-Nc2-Cluster-Role-Prod and Nutanix-Clusters-High-Nc2-Orchestrator-Role-Prod IAM roles. You can view the permissions and policies attached to these IAM roles by reviewing the CloudFormation script from https://s3.us-east-1.amazonaws.com/prod-gcf-567c917002e610cce2ea/aws_cf_clusters_high.json. For more information on the IAM roles and associated permissions, see AWS Account and IAM Role Requirements. If you have already run the CloudFormation template, you must run it again so that any new permissions added to the IAM roles come into effect.
While updating the CloudFormation stack, you must not change any parameters, including GatewayAccountId and GatewayExternalId. For more information, see Updating CloudFormation Stack.

Note: Do not use the AWS root user for any deployment or operations related to NC2.

NC2 on AWS does not use AWS Secrets Manager for maintaining any stored secrets. All customer-sensitive data is stored on the customer-managed cluster. Local NVMe storage on the bare-metal instances is used for storing customer-sensitive data. Nutanix does not have any visibility into customer-sensitive data stored locally on the cluster. Any data sent to Nutanix concerning cluster health is stripped of Personally Identifiable Information (PII).

Access control and user management in the NC2 console.
Note: Nutanix recommends following the policy of least privilege for all access granted while deploying NC2. For more information, see NC2 User Management.

For more information about how security is implemented in a Nutanix cluster environment, see Network Security using AWS Security Groups.

When deploying Prism Central 2024.2, Flow Network Security (FNS) version 4.x.x is bundled with it. FNS version 4.x.x supports either the current generation or Next-Gen:
- FNS current generation works on native networking.
- FNS Next-Gen is the latest generation, designed for environments utilizing Flow Virtual Networking.

If you enable FNS for the first time, FNS Next-Gen is set by default. If you are using an earlier version of Prism Central (released before Prism Central 2024.2) and have configured FNS to operate in the current generation, you can continue to use the current generation after you upgrade to Prism Central 2024.2, as long as you are using FNS 4.x.x. However, if you want to use Flow Virtual Networking capabilities, you must migrate to FNS Next-Gen.
If you are a new user on Prism Central 2024.2 with FNS Next-Gen set as the default and want to switch to the FNS current generation to use NC2 on AWS with native networking, refer to the KB 000016799 article.

Data Encryption
To help reduce cost and complexity, Nutanix supports a native local key manager (LKM) for all clusters with three or more nodes. The LKM runs as a service distributed among all the nodes. You can activate the LKM from Prism Element to enable encryption without adding another silo to manage. If you are looking to simplify your infrastructure operations, you can also use one-click infrastructure for your key manager.

Organizations often purchase external key managers (EKMs) separately for both software and hardware. However, because the Nutanix LKM runs natively in the CVM, it is highly available and there is no variable add-on pricing based on the number of nodes. Every time you add a node, you know the final cost. When you upgrade your cluster, the key management services are also upgraded. Upgrading the infrastructure and management services in lockstep maintains your security posture and availability by staying in line with the support matrix.

Nutanix software encryption provides native AES-256 data-at-rest encryption, which can interact with any KMIP-compliant or TCG-compliant external KMS server (Vormetric, SafeNet, and so on) and the Nutanix native KMS, introduced in AOS version 5.8. The system uses Intel AES-NI acceleration for encryption and decryption processes to minimize any potential performance impacts. Nutanix software encryption also provides in-transit encryption. Note that in-transit encryption is currently applicable within a Nutanix cluster for data RF.

GETTING STARTED WITH NC2
Perform the tasks described in this section to get started with NC2 on AWS. Before you get started, ensure that you have registered for NC2 on AWS from the My Nutanix portal.
See NC2 Licensing and Billing on page 125 for more information. For more information on AOS and other software compatibility details, visit Release Notes and Software Compatibility.

After you create a cluster in AWS, set up the network and security infrastructure in AWS and the cluster for your user VMs. For more information, see User VM Network Management and Network Security using AWS Security Groups.

Requirements for NC2 on AWS
This guide assumes prior knowledge of the Nutanix stack and the AWS EC2, VPC, and CloudFormation services; familiarity with the AWS framework is highly recommended for operating significant deployments on AWS. Following are the requirements to use NC2 on AWS.

AWS Account and IAM Role Requirements
Configure an AWS account to access the AWS console. For more information, see the AWS Documentation.
Note: You must have CreateRole access to the AWS account.

Run a CloudFormation script that creates IAM roles for NC2 on AWS in your AWS account. When running the CloudFormation script, you must log into your AWS account with a user role that has the following permissions:
- IAMFullAccess: NC2 on AWS utilizes IAM roles to communicate with AWS APIs. You must have IAMFullAccess privileges to create IAM roles in your AWS account.
- AWSCloudFormationFullAccess: NC2 on AWS provides you with a CloudFormation script to create two IAM roles used by NC2. You must have AWSCloudFormationFullAccess privileges to run that CloudFormation stack in your account.

Note: These permissions are required only to run the CloudFormation template; NC2 does not use them for any other purpose.

By running the CloudFormation script provided by NC2, you create the following two IAM roles for NC2 on AWS:
- Nutanix-Clusters-High-Nc2-Cluster-Role-Prod
- Nutanix-Clusters-High-Nc2-Orchestrator-Role-Prod

One role allows the NC2 console to access your AWS account by using APIs, and the other role is assigned to each of your bare-metal instances.
You can view information about your CloudFormation stack, namely Nutanix-Clusters-High-Nc2-Cloud-Stack-Prod, on the Stacks page of the CloudFormation console.

Figure 10: Viewing CloudFormation Template

You can review the CloudFormation script from https://s3.us-east-1.amazonaws.com/prod-gcf-567c917002e610cce2ea/aws_cf_clusters_high.json and check the required permissions. Alternatively, you can view the CloudFormation script by clicking Open AWS Console while adding your cloud account. For more information, see Adding an AWS Cloud Account.

To view the permissions and policies attached to these IAM roles, sign into the AWS Management Console, open the IAM console at https://console.aws.amazon.com/iam/, and then choose Roles > Permissions.

Figure 11: Viewing IAM Roles

For more information on how to secure your AWS resources, see Security Best Practices in IAM.

Note: NC2 updated the CloudFormation template on September 18, 2024. You must run the CloudFormation template while adding your AWS cloud account. If you have already run the CloudFormation template, you must run it again so that any new permissions added to the IAM roles come into effect. For more information, see Updating CloudFormation Stack.

vCPU Limits
Review the supported regions and bare-metal instances. For details, see Supported Regions and Bare-metal Instances.

AWS supports the following vCPU limits for the bare-metal instances available for NC2 on AWS:
- i4i.metal: 128 vCPUs for each instance
- m6id.metal: 128 vCPUs for each instance
- m5d.metal: 96 vCPUs for each instance
- i3.metal: 72 vCPUs for each instance
- i3en.metal: 96 vCPUs for each instance
- z1d.metal: 48 vCPUs for each instance
- g4dn.metal: 96 vCPUs for each instance

Note: Before you deploy a cluster, check whether the EC2 instance type is supported in the Availability Zone in which you want to deploy the cluster.
Not all instance types are supported in all the Availability Zones in an AWS region. An error message is displayed if you try to deploy a cluster with an instance type that is not supported in the Availability Zone you selected.

Configure a sufficient vCPU limit for your AWS account. Cluster creation fails if a sufficient vCPU limit is not set for your AWS account. To view the service quotas for your account or to request a quota increase, use the AWS Service Quotas console. For more information, see the AWS Documentation.

Note: Each cluster node has two EBS volumes attached. Both are encrypted gp3 volumes and are used as boot volumes for AHV and the CVM, respectively. The AHV EBS volume is 100 GB and the CVM EBS volume is 150 GB. Additional EBS volumes can be attached to i3.metal, i3en.metal, and i4i.metal bare-metal instances to increase the local host storage. For more information, see Supported Storage Volumes.

IMDS Requirements
NC2 on AWS supports accessing the instance metadata from a running instance using one of the following methods:
- Instance Metadata Service Version 1 (IMDSv1): a request/response method
- Instance Metadata Service Version 2 (IMDSv2): a session-oriented method

By default, you can use either IMDSv1 or IMDSv2, or both. For more information on how to configure the instance metadata options, see the AWS Documentation.

My Nutanix Account Requirements
Configure a My Nutanix account, that is, an account to access the NC2 console. See Creating My Nutanix Account for more information.
Note: When you create a My Nutanix account, a default workspace is created for you with the Account Admin role, which is required to create an NC2 subscription and access the Admin Center and Billing Center portals. If you are invited to a workspace, you must obtain the Account Admin role so that you can subscribe to NC2 and access the Admin Center and Billing Center.
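Returning to the IMDS requirements above: the session-oriented IMDSv2 method first obtains a token with a PUT request and then presents that token on every metadata read. The endpoint and headers below are standard IMDS conventions; the helper names are illustrative, and the calls only succeed when run from an EC2 instance:

```python
import urllib.request

IMDS_BASE = "http://169.254.169.254/latest"

def imdsv2_token(ttl_seconds: int = 21600) -> str:
    # Step 1: obtain a session token (IMDSv2 is session oriented).
    req = urllib.request.Request(
        f"{IMDS_BASE}/api/token",
        method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": str(ttl_seconds)},
    )
    with urllib.request.urlopen(req, timeout=2) as resp:
        return resp.read().decode()

def imdsv2_get(path: str, token: str) -> str:
    # Step 2: read a metadata path, presenting the session token.
    req = urllib.request.Request(
        f"{IMDS_BASE}/meta-data/{path}",
        headers={"X-aws-ec2-metadata-token": token},
    )
    with urllib.request.urlopen(req, timeout=2) as resp:
        return resp.read().decode()
```

For example, `imdsv2_get("instance-id", imdsv2_token())` returns the instance ID when run on the instance itself. IMDSv1 omits the token step entirely, which is why IMDSv2 is the more defensible default.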
Cluster Protect Requirements
If you intend to protect your clusters using the Cluster Protect feature, you must meet additional requirements. For more information, see Prerequisites for Cluster Protect.

Networking Requirements
Configure connectivity between your on-premises data center and the AWS VPC by using either VPN or Direct Connect if you want to pair the clusters for data protection or other reasons.
- See AWS Site-to-Site VPN to connect to the AWS VPC by using VPN. To learn more about setting up a VPN to on-premises, see the Nutanix University video.
- See AWS Direct Connect Resiliency Toolkit to connect to the AWS VPC by using Direct Connect.

Allow outbound internet access on your AWS VPC so that the NC2 console can successfully provision and orchestrate Nutanix clusters in AWS. For more information on how to allow outbound internet access on your AWS VPC, see the AWS VPC documentation.

Configure the AWS VPC infrastructure. You can choose to create a new VPC as part of cluster creation from the NC2 portal or use an existing VPC. To learn more about setting up an AWS Virtual Private Cloud (VPC) manually, see Create AWS Resources Manually.

If you deploy AWS Directory Service in a selected VPC or subnet to resolve DNS names, ensure that AWS Directory Service resolves the following FQDN successfully to avoid deployment failure.
FQDN: gateway-external-api.cloud.nutanix.com

Ensure that both the Enable DNS resolution and Enable DNS hostnames DNS attributes are enabled for your VPC. For more information on how to view and update the DNS attributes for your VPC, see the AWS Documentation.

NC2 on AWS uses the default AWS DNS server from the VPC where the cluster is deployed. The default AWS DNS server is used to resolve the FQDNs of the EC2 and S3 private endpoints.
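A quick pre-deployment check that the required FQDN resolves from inside the VPC can be sketched with the standard library. This helper is illustrative only and simply exercises whichever DNS server the host is configured to use:

```python
import socket

# FQDN that must resolve for NC2 deployments to succeed (see above).
REQUIRED_FQDN = "gateway-external-api.cloud.nutanix.com"

def resolves(fqdn: str = REQUIRED_FQDN) -> bool:
    # Returns True when the configured DNS server can resolve the name.
    try:
        socket.gethostbyname(fqdn)
        return True
    except socket.gaierror:
        return False
```

Running `resolves()` from an instance in the target subnet before deployment can catch the AWS Directory Service misconfiguration described above early, rather than at cluster creation time.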
If you want to use your own custom DNS server in addition to the AWS DNS server, you can use the Route 53 service to forward DNS requests from the default AWS DNS server to your own custom DNS server when those requests need to be served by your DNS server. In all scenarios, the default AWS DNS server is required to resolve the private IP addresses of the S3 and EC2 endpoints.

If you use an existing VPC, you need two private subnets: one for user VMs and one for cluster management. If you choose to create a new VPC during the cluster creation workflow, the NC2 console creates the required private subnets.

Create an internet gateway and attach it to your VPC. Set the default route on your public route table to the internet gateway.

Create a public NAT gateway, associate it with the public subnet, and assign a public Elastic IP address to the NAT gateway. Set the default route for your default route table to the NAT gateway.

If you use AOS 5.20, ensure that the cluster virtual IP address is set up for your on-premises and NC2 on AWS clusters after the cluster is created. In NC2 on AWS with AOS 6.x, the orchestrator automatically assigns the cluster virtual IPs. This IP address is the destination address for the remote site.

The AHV hosts must be allowed to communicate with the Prism Central VMs over TCP port 9446. Keeping the port open enables the hosts to send the Prism Central VMs connection tracking data. Prism Central uses that data to show network flows.

CIDR Requirements
You must use the following ranges of IP addresses for the VPCs and subnets:
- VPC: between /16 and /23, including both CIDR blocks
- Private management subnet: between /16 and /23, including both CIDR blocks
- Public subnet: between /16 and /23, including both CIDR blocks
- UVM subnets: between /16 and /23, including both CIDR blocks

When using Flow Virtual Networking with NC2 on AWS: