Getting Started With NC2 On AWS

Summary

This document provides a guide to getting started with NC2 on AWS. It outlines prerequisites, including AWS account setup and IAM role requirements, along with vCPU limits. It also covers important considerations for deploying NC2 clusters on AWS, such as storage volumes and security.

Full Transcript


# Getting Started With NC2

Perform the tasks described in this section to get started with NC2 on AWS. Before you get started, ensure that you have registered for NC2 on AWS from the My Nutanix portal. See NC2 Licensing And Billing for more information. For more information on AOS and other software compatibility details, visit [Release Notes](URL) and [Software Compatibility](URL).

After you create a cluster in AWS, set up the network and security infrastructure in AWS and the cluster for your user VMs. For more information, see User VM Network Management and Network Security Using AWS Security Groups.

## Requirements for NC2 on AWS

This guide assumes prior knowledge of the Nutanix stack and the AWS EC2, VPC, and CloudFormation services; familiarity with the AWS framework is highly recommended for operating significant deployments on AWS. The following are the requirements to use NC2 on AWS.

## AWS Account and IAM Role Requirements

- Configure an AWS account to access the AWS console. For more information, see [AWS Documentation](URL).
- You must have CreateRole access to the AWS account.
- Run the CloudFormation script that creates the IAM roles for NC2 on AWS in your AWS account (a CLI sketch follows this list). When running the CloudFormation script, you must log in to your AWS account with a user role that has the following permissions:
  - IAMFullAccess: NC2 on AWS uses IAM roles to communicate with AWS APIs. You must have IAMFullAccess privileges to create IAM roles in your AWS account.
  - AWSCloudFormationFullAccess: NC2 on AWS provides a CloudFormation script that creates the two IAM roles used by NC2. You must have AWSCloudFormationFullAccess privileges to run that CloudFormation stack in your account.

  These permissions are needed only so that you can run the CloudFormation template; NC2 itself does not use them for any other purpose. Running the CloudFormation script provided by NC2 creates the following two IAM roles for NC2 on AWS:
  - Nutanix-Clusters-High-Nc2-Cluster-Role-Prod
  - Nutanix-Clusters-High-Nc2-Orchestrator-Role-Prod

  One role allows the NC2 console to access your AWS account by using APIs, and the other role is assigned to each of your bare-metal instances. You can view information about your CloudFormation stack, namely Nutanix-Clusters-High-Nc2-Cloud-Stack-Prod, on the Stacks page of the CloudFormation console. You can review the CloudFormation script from [https://s3.us-east-1.amazonaws.com/prod-gcf-567c917002e610cce2ea/aws_cf_clusters_high.json](URL) and check the required permissions. Alternatively, you can view the CloudFormation script by clicking *Open AWS Console* while adding your cloud account. For more information, see [Adding an AWS Cloud Account](URL). To view the permissions and policies attached to these IAM roles, sign in to the AWS Management Console, open the IAM console at [https://console.aws.amazon.com/iam/](URL), and choose *Roles > Permissions*. For more information on how to secure your AWS resources, see Security Best Practices in IAM.
- NC2 updated the CloudFormation template on September 18, 2024. You must run the CloudFormation template while adding your AWS cloud account. If you have already run the CloudFormation template, you must run it again so that any new permissions added to the IAM roles take effect. For more information, see Updating CloudFormation Stack.
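The NC2 console normally opens this CloudFormation template for you via *Open AWS Console*, but the stack can also be created or refreshed from the AWS CLI. The following is a minimal sketch, assuming the template URL referenced above; the region is illustrative.

```bash
# A minimal sketch: create the NC2 IAM-role stack with the AWS CLI.
# The template URL is the one referenced above; the region is illustrative and
# the NC2 console normally launches this stack for you via "Open AWS Console".
aws cloudformation create-stack \
  --stack-name Nutanix-Clusters-High-Nc2-Cloud-Stack-Prod \
  --template-url https://s3.us-east-1.amazonaws.com/prod-gcf-567c917002e610cce2ea/aws_cf_clusters_high.json \
  --capabilities CAPABILITY_NAMED_IAM \
  --region us-east-1

# If the stack already exists (for example, after the September 18, 2024 template
# update), refresh it instead so the IAM roles pick up any new permissions.
aws cloudformation update-stack \
  --stack-name Nutanix-Clusters-High-Nc2-Cloud-Stack-Prod \
  --template-url https://s3.us-east-1.amazonaws.com/prod-gcf-567c917002e610cce2ea/aws_cf_clusters_high.json \
  --capabilities CAPABILITY_NAMED_IAM \
  --region us-east-1
```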
## vCPU Limits

Review the supported regions and bare-metal instances. For details, see Supported Regions and Bare-metal Instances.

AWS supports the following vCPU limits for the bare-metal instances available for NC2 on AWS:

- i4i.metal: 128 vCPUs for each instance
- m6id.metal: 128 vCPUs for each instance
- m5d.metal: 96 vCPUs for each instance
- i3.metal: 72 vCPUs for each instance
- i3en.metal: 96 vCPUs for each instance
- z1d.metal: 48 vCPUs for each instance
- g4dn.metal: 96 vCPUs for each instance

Keep the following in mind:

- Before you deploy a cluster, check whether the EC2 instance type is supported in the Availability Zone in which you want to deploy the cluster.
- Not all instance types are supported in all the Availability Zones in an AWS region. An error message is displayed if you try to deploy a cluster with an instance type that is not supported in the Availability Zone you selected.
- Configure a sufficient vCPU limit for your AWS account. Cluster creation fails if you do not have a sufficient vCPU limit set for your AWS account.
- To view the service quotas for your account or to request a quota increase, use the AWS Service Quotas console (see the CLI sketch after this list). For more information, see [AWS Documentation](URL).
- Each cluster node has two EBS volumes attached. Both are encrypted gp3 volumes and are used as boot volumes for AHV and the CVM, respectively. The AHV EBS volume is 100 GB, and the CVM EBS volume is 150 GB.
- Additional EBS volumes can be attached to i3.metal, i3en.metal, and i4i.metal bare-metal instances to increase the local host storage. For more information, see Supported Storage Volumes.
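A minimal AWS CLI sketch of the quota check described above; the quota code is looked up from the first command's output rather than hard-coded, and the requested value of 384 vCPUs is only an example.

```bash
# A minimal sketch: list the EC2 On-Demand vCPU quotas for the current region,
# then request an increase using a quota code taken from that output.
aws service-quotas list-service-quotas \
  --service-code ec2 \
  --query "Quotas[?contains(QuotaName, 'On-Demand')].[QuotaName,QuotaCode,Value]" \
  --output table

aws service-quotas request-service-quota-increase \
  --service-code ec2 \
  --quota-code <QuotaCode-from-the-previous-output> \
  --desired-value 384
```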
## IMDS Requirements

NC2 on AWS supports accessing the instance metadata from a running instance using one of the following methods:

- Instance Metadata Service Version 1 (IMDSv1): a request/response method
- Instance Metadata Service Version 2 (IMDSv2): a session-oriented method

By default, you can use either IMDSv1 or IMDSv2, or both. For more information on how to configure the instance metadata options, see [AWS Documentation](URL).

## My Nutanix Account Requirements

Configure a My Nutanix account, that is, an account to access the NC2 console. See [Creating My Nutanix Account](URL) for more information.

- When you create a My Nutanix account, a default workspace is created for you with the Account Admin role, which is required to create an NC2 subscription and access the Admin Center and Billing Center portals. If you are invited to a workspace, you must be granted the Account Admin role so that you can subscribe to NC2 and access the Admin Center and Billing Center.

## Cluster Protect Requirements

If you intend to protect your clusters using the Cluster Protect feature, you need to meet additional requirements. For more information, see Prerequisites for Cluster Protect.

## Networking Requirements

- Configure connectivity between your on-premises datacenter and the AWS VPC by using either VPN or Direct Connect if you want to pair the clusters for data protection or other purposes.
  - See AWS Site-to-Site VPN to connect to the AWS VPC by using VPN.
  - To learn more about setting up a VPN to on-premises, see the Nutanix University video.
  - See [AWS Direct Connect Resiliency Toolkit](URL) to connect to the AWS VPC by using Direct Connect.
- Allow outbound internet access on your AWS VPC so that the NC2 console can successfully provision and orchestrate Nutanix clusters in AWS.
  - For more information on how to allow outbound internet access on your AWS VPC, see [AWS VPC documentation](URL).
- Configure the AWS VPC infrastructure. You can choose to create a new VPC as part of cluster creation from the NC2 portal or use an existing VPC.
  - To learn more about setting up an AWS Virtual Private Cloud (VPC) manually, see [Create AWS Resources Manually](URL).
- If you deploy AWS Directory Service in a selected VPC or subnet to resolve DNS names, ensure that AWS Directory Service resolves the following FQDN successfully to avoid deployment failure. FQDN: gateway-external-api.cloud.nutanix.com
- Ensure that both the Enable DNS resolution and Enable DNS hostnames DNS attributes are enabled for your VPC (see the CLI sketch after this list). For more information on how to view and update DNS attributes for your VPC, see [AWS Documentation](URL).
- NC2 on AWS uses the default AWS DNS server from the VPC where the cluster is deployed. The default AWS DNS is used to resolve the FQDN of EC2 and S3 private endpoints. If you want to use your own custom DNS server in addition to the AWS DNS server, you can use the Route 53 service to forward DNS requests from the default AWS DNS server to your own custom DNS server when those requests need to be served by your DNS server. In all scenarios, the default AWS DNS server is required to resolve the private IP addresses of the S3 and EC2 endpoints.
- If you use an existing VPC, you need two private subnets, one for user VMs and one for cluster management. If you choose to create a new VPC during the cluster creation workflow, the NC2 console creates the required private subnets.
- Create an internet gateway and attach it to your VPC. Set the default route on your public route table to the internet gateway.
- Create a public NAT gateway, associate it with the public subnet, and assign an Elastic IP address to the NAT gateway. Set the default route for your default route table to the NAT gateway.
- If you use AOS 5.20, ensure that the cluster virtual IP address is set up for your on-premises and NC2 on AWS clusters after the cluster is created. In NC2 on AWS with AOS 6.x, the orchestrator automatically assigns the cluster virtual IPs. This IP address is the destination address for the remote site.
- The AHV hosts must be allowed to communicate with the Prism Central VMs over TCP port 9446. Keeping the port open enables the hosts to send the Prism Central VMs connection tracking data. Prism Central uses that data to show network flows.
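A minimal AWS CLI sketch of the VPC plumbing described in this list, assuming an existing VPC, public subnet, and route tables; all IDs are placeholders for resources in your account, and the NC2 console creates equivalent resources when you let it create a new VPC during cluster creation.

```bash
# A minimal sketch of the DNS attributes, internet gateway, and NAT gateway setup
# described above; all IDs below are placeholders.
VPC_ID=vpc-0123456789abcdef0            # your VPC
PUBLIC_SUBNET=subnet-0aaaaaaaaaaaaaaa0  # public subnet for the NAT gateway
PUBLIC_RTB=rtb-0bbbbbbbbbbbbbbb0        # route table of the public subnet
PRIVATE_RTB=rtb-0ccccccccccccccc0       # route table of the private subnets

# Enable DNS resolution and DNS hostnames on the VPC (one attribute per call).
aws ec2 modify-vpc-attribute --vpc-id "$VPC_ID" --enable-dns-support '{"Value":true}'
aws ec2 modify-vpc-attribute --vpc-id "$VPC_ID" --enable-dns-hostnames '{"Value":true}'

# Internet gateway, attached to the VPC, as the default route of the public route table.
IGW_ID=$(aws ec2 create-internet-gateway --query 'InternetGateway.InternetGatewayId' --output text)
aws ec2 attach-internet-gateway --internet-gateway-id "$IGW_ID" --vpc-id "$VPC_ID"
aws ec2 create-route --route-table-id "$PUBLIC_RTB" --destination-cidr-block 0.0.0.0/0 --gateway-id "$IGW_ID"

# Public NAT gateway with an Elastic IP, as the default route of the private route table.
EIP_ID=$(aws ec2 allocate-address --domain vpc --query 'AllocationId' --output text)
NAT_ID=$(aws ec2 create-nat-gateway --subnet-id "$PUBLIC_SUBNET" --allocation-id "$EIP_ID" \
  --query 'NatGateway.NatGatewayId' --output text)
aws ec2 create-route --route-table-id "$PRIVATE_RTB" --destination-cidr-block 0.0.0.0/0 --nat-gateway-id "$NAT_ID"
```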
## CIDR Requirements

You must use the following ranges of IP addresses for the VPCs and subnets:

- VPC: between /16 and /23, including both CIDR blocks
- Private management subnet: between /16 and /23, including both CIDR blocks
- Public subnet: between /16 and /23, including both CIDR blocks
- UVM subnets: between /16 and /23, including both CIDR blocks
- When using Flow Virtual Networking with NC2 on AWS:
  - Prism Central subnet: between /16 and /28, including both CIDR blocks
  - Flow Virtual Networking subnet: between /16 and /24, including both CIDR blocks
- UVM subnet sizing depends on the number of UVMs to be deployed. NC2 supports the network CIDR sizing limits enforced by AWS.

## Ports and Endpoints Requirements

NC2 uses several ports and protocols for its operations. See [Ports and Endpoints Requirements](URL) for information about the ports and endpoints that are used for outbound and inbound communication, and other components, such as Microservices Infrastructure.

## Supported Regions and Bare-metal Instances

Nutanix Cloud Clusters (NC2) running on AWS supports certain EC2 bare-metal instances in various AWS regions.

NC2 supports the following AWS tenancy models:

- **Default**: This is the default EC2 tenancy option, where EC2 bare-metal instances can migrate across different hosts during an instance power-cycle operation. These bare-metal instances can be spread across different partitions using AWS Placement Groups. When using the default tenancy type, you can optionally deploy Microsoft Windows Server License Included (LI) instances. This option allows you to deploy bare-metal instances where the cost of the Windows Server licensing is included in the Amazon EC2 compute costs.
- **Dedicated Host**: In this tenancy option, EC2 bare-metal instances that run on a dedicated physical host do not migrate to another physical host even after an instance power cycle. When you choose the dedicated tenancy model, NC2 allocates a new dedicated host and then provisions the bare-metal instances on these new dedicated hosts.

For more information on the guidelines for selecting the right tenancy model and the pros and cons of each type of tenancy, see [EC2 Instance Tenancy Types](URL).

- NC2 might not support some bare-metal instance types in certain regions due to limitations in the number of partitions available. NC2 supports EC2 bare-metal instances in regions with three or more partitions. Support for the g4dn.metal instance type is available only on clusters running AOS 6.1.1 or AOS 5.20.4 and later releases.
- You can use a combination of i3.metal, i3en.metal, and i4i.metal instance types or z1d.metal, m5d.metal, and m6id.metal instance types while creating a new cluster or expanding the capacity of an already running cluster. The combination of these instance types is subject to bare-metal support from AWS in the region where the cluster is being deployed. For more details, see [Creating a Heterogeneous Cluster](URL).
- You can create only homogeneous clusters with g4dn.metal instances; they cannot be used to create a heterogeneous cluster.

The following table lists the AWS EC2 bare-metal instance types supported by Nutanix.

| Metal Types | Details |
|---|---|
| i4i.metal | 64 physical cores, 1024 GiB memory, 27.28 TiB NVMe SSD storage |
| m6id.metal | 64 physical cores, 512 GiB memory, 7.42 TiB NVMe SSD storage |
| m5d.metal | 48 physical cores, 384 GiB memory, 3.27 TiB NVMe SSD storage |
| i3.metal | 36 physical cores, 512 GiB memory, 13.82 TiB NVMe SSD storage |
| i3en.metal | 48 physical cores, 768 GiB memory, 54.57 TiB NVMe SSD storage |
| z1d.metal | 24 physical cores, 384 GiB memory, 1.64 TiB NVMe SSD storage |
| g4dn.metal | 48 physical cores, 384 GiB memory, 128 GiB GPU memory, 1.64 TiB NVMe SSD storage |

For more information, see [Hardware Platform Spec Sheets](URL). Select *NC2 on AWS* from the Select your preferred Platform Providers list.

The following table lists the detailed information for each bare-metal instance type supported in each AWS region.

- While all of these AWS regions support EC2 instances with the default tenancy, some regions do not support EC2 instances with the dedicated host tenancy.
| Region Name | i4i.metal | m6id.metal | m5d.metal | i3.metal | i3en.metal | z1d.metal | g4dn.metal |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| US-East (N.Virginia) | Yes | Yes | Yes | Yes | Yes | Yes | Yes |
| US-East (Ohio) | Yes | Yes | Yes | Yes | Yes | Yes | Yes |
| US-West (N.California) | Yes | No | Yes | Yes | Yes | Yes | Yes |
| US-West (Oregon) | Yes | Yes | Yes | Yes | Yes | Yes | Yes |
| Africa (Cape Town) | Yes | No | Yes | Yes | Yes | No | Yes** |
| Asia Pacific (Hong Kong) | Yes | No | Yes | Yes | Yes** | No | Yes |
| Asia Pacific (Jakarta) | Yes | No | Yes | No | Yes | No | No |
| Asia Pacific (Malaysia) | Yes | No | No | No | No | No | No |
| Asia Pacific (Mumbai) | Yes | No | Yes | Yes | Yes | Yes | Yes |
| Asia Pacific (Hyderabad) | Yes | No | Yes | No | Yes | No | No |
| Asia Pacific (Seoul) | Yes | No | Yes | Yes | Yes | Yes | Yes |
| Asia Pacific (Singapore) | Yes | Yes | Yes | Yes | Yes | Yes | Yes |
| Asia Pacific (Sydney) | Yes | Yes | Yes | Yes | Yes | Yes | Yes |
| Asia Pacific (Melbourne) | Yes | No | Yes | No | Yes | No | No |
| Asia Pacific (Tokyo) | Yes | Yes | Yes | Yes | Yes | Yes | Yes |
| Asia Pacific (Osaka) | Yes | No | Yes | Yes** | Yes | No | Yes |
| Canada (Central) | Yes | Yes | Yes | Yes | Yes** | No | Yes |
| Canada West (Calgary) | Yes | Yes | No | No | Yes | No | No |
| EU (Frankfurt) | Yes | Yes | Yes | Yes | Yes | Yes | Yes |
| EU (Ireland) | Yes | Yes | Yes | Yes | Yes | Yes | Yes |
| EU (London) | Yes | No | Yes | Yes | Yes | Yes | Yes |
| EU (Milan) | Yes | No | Yes | Yes | Yes** | No | Yes** |
| EU (Paris) | Yes | No | Yes | Yes | Yes** | No | Yes |
| EU (Stockholm) | Yes | No | Yes | Yes | Yes** | No | Yes** |
| EU (Spain) | No | No | Yes | No | Yes | No | No |
| EU (Zurich) | Yes | No | Yes | No | Yes | No | No |
| Israel (Tel Aviv) | No | No | Yes | No | Yes | No | No |
| Middle East (Bahrain) | Yes | No | Yes | No | Yes | No | Yes |
| Middle East (UAE) | Yes | No | Yes | No | Yes | No | No |
| South America (Sao Paulo) | Yes | No | Yes | Yes | Yes | No | Yes** |
| AWS GovCloud (US-East) | No | No | Yes | No | Yes | No | No |
| AWS GovCloud (US-West) | No | No | Yes | Yes | Yes | No | No |

\* - These regions are not auto-enabled by AWS. Ensure you first enable them in your AWS account before using them with NC2. For more information on how to enable a region, see [AWS documentation](URL). Once you have enabled these regions in your AWS console, ensure they are also selected in your NC2 portal. For more information, see the instructions about adding cloud regions to the NC2 console in [Adding an AWS Cloud Account](URL).

\*\* - This instance type and region combination does not support dedicated host tenancy.

Note: An instance type may not be supported in a region because the number of partitions is less than the minimum three partitions required by NC2 or because the instance type is not supported by AWS in the specified region.
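Because instance-type availability varies by region and Availability Zone (see the note above), it can help to confirm an offering before deploying. A minimal AWS CLI sketch; the instance type and region are examples only.

```bash
# A minimal sketch: confirm that a bare-metal instance type is offered in the
# Availability Zones of a region before deploying a cluster there.
aws ec2 describe-instance-type-offerings \
  --location-type availability-zone \
  --filters Name=instance-type,Values=i4i.metal \
  --region ap-southeast-1 \
  --query 'InstanceTypeOfferings[].Location' \
  --output text
```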
## Installing NVIDIA Grid Host Driver

NC2 on AWS supports g4dn.metal bare-metal instances from the AOS 6.1.1 (STS) and AOS 5.20.4 (LTS) releases onwards. The g4dn.metal instance, which carries NVIDIA T4 GPUs, is currently the only GPU-enabled AWS instance supported in NC2. If you want to use NC2 on AWS with g4dn.metal instances, you must install the NVIDIA grid host driver for AHV on each node running a g4dn.metal instance. To learn more about AWS g4dn.metal instances, see [AWS Documentation](URL). To find more details on the GPU PCIs, run the *lspci | grep "NVIDIA"* command from the AHV host.

- You must manually install the NVIDIA driver on each new node when you expand the cluster size. NC2 may also automatically replace nodes in your cluster if there are issues with node availability; in that scenario, you must install the NVIDIA driver on the new node procured by NC2.
- If a GPU card is present in your cluster, LCM restricts AHV updates if it does not detect a compatible NVIDIA GRID driver in its inventory. To fetch a compatible NVIDIA GRID driver for your version of AHV, see Updating the NVIDIA GRID Driver with LCM.

Follow these steps to install the NVIDIA driver on the g4dn hosts:

1. Download the NVIDIA host driver version 13.0 from the Nutanix portal at [https://portal.nutanix.com/page/downloads?product=ahv&bit=NVIDIA](URL).
   - For more information on the limitations on which NVIDIA drivers can be used with AHV, see the [AHV Administration Guide](URL).
2. Review the detailed installation instructions for the NVIDIA driver outlined in [Installing the NVIDIA grid driver](URL).
   - You must sign in to the CVM in the cluster with the SSH key pair provided during cluster creation instead of the default user credentials.
   - For more information about assigning and configuring a vGPU profile for a VM, see [Creating a VM (AHV)](URL).
3. Perform the following steps to install the NVIDIA guest driver in the guest VMs:
   - Ensure that the guest driver version and build match the host driver version and build (see the sketch after these steps). For more information about the build number and version number matrix, see [NVIDIA Virtual GPU (VGPU) Software Documentation](URL).
   - Install the NVIDIA VGPU Software Graphics Driver. For more information, see [Installing the NVIDIA VGPU Software Graphics Driver](URL) in the NVIDIA Virtual GPU Software User Guide.
   - Download the NVIDIA GRID drivers for the guest driver from the NVIDIA dashboard.
   - NVIDIA VGPU guest OS drivers for product versions 11.0 or later can be acquired from NVIDIA Licensing Software Downloads under:
     - All Available
     - Product Family = vGPU
     - Platform = Linux KVM
     - Platform Version = All Supported
     - Product Version = (match host driver version)
   - AHV-compatible host and guest drivers for older AOS versions can be found on the NVIDIA Licensing Software Downloads site under Platform = Nutanix AHV.
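The GPU check and driver-version query referenced above can be run directly on the host. A minimal sketch, assuming SSH access to a g4dn.metal AHV host and that nvidia-smi is available once the host driver is installed.

```bash
# A minimal sketch, run on a g4dn.metal AHV host over SSH; assumes the NVIDIA
# grid host driver has already been installed as described above.

# List the NVIDIA T4 GPU PCI devices (command referenced in this guide).
lspci | grep "NVIDIA"

# Report the installed host driver version so it can be matched against the
# guest driver build, as required in step 3.
nvidia-smi --query-gpu=driver_version --format=csv,noheader
```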
## Supported Storage Volumes

NC2 on AWS runs on bare-metal instances that include local non-volatile memory express (NVMe) solid-state drive (SSD) storage. The CVM running on each node in the cluster uses the local NVMe storage for its capacity tier. NVMe instance stores are local ephemeral disks that are physically connected to an instance for fast temporary storage. The data stored on the NVMe store is deleted when the instance is terminated, stopped, or hibernated.

Each node in an NC2 cluster has two EBS volumes: AHV EBS and CVM EBS. Both are encrypted gp3 volumes and are used as boot volumes for AHV and the CVM, respectively. The AHV EBS volume is 100 GB, and the CVM EBS volume is 150 GB.

With NC2 on AWS running AOS 6.8 or higher, additional EBS volumes can be attached to i3.metal, i3en.metal, and i4i.metal bare-metal instances to augment the data storage capacity of each host. NC2 uses the EBS gp3 volumes, which are software volumes on general-purpose SSDs, as a secondary storage tier. Currently, NC2 treats the EBS volumes as ephemeral and deletes the EBS volumes with the node to which they are attached. When a node with an EBS volume attached goes bad, the node is replaced along with its attached EBS volumes. When an EBS volume goes bad, only the EBS volume is replaced.

NC2 on AWS supports attaching a minimum of one EBS volume to i3.metal, i3en.metal, and i4i.metal bare-metal instances. The ability to attach additional EBS volumes enables scaling the storage capacity of NC2 clusters independently of the compute capacity.

The following are some considerations about EBS volume support:

- EBS volumes are supported only with new clusters created running AOS 6.8. You cannot attach EBS volumes to existing clusters.
- EBS capacity can be up to 80% of the total capacity.
- Currently, NC2 supports only automatic node replacements with attached EBS volumes of a similar configuration. NC2 does not support manually replacing EBS volumes. Also, you can only increase EBS storage per host and cannot decrease it.
- Features such as Hibernate/Resume, Cluster Protect, and software data-at-rest encryption are supported with EBS-attached nodes.
- EBS storage is not supported with heterogeneous clusters.
- If EBS storage was added to the bare-metal instance while creating a cluster, you can only add hosts of the same bare-metal instance type with EBS storage.
- You can increase the EBS storage per host if you want. When EBS storage is increased, it is increased on all bare-metal nodes; each instance type with EBS has uniform capacity across nodes. You cannot decrease the EBS storage after the cluster is deployed.
- EBS volume support is available only with NCI Ultimate and AOS Ultimate licenses.
- The EBS volume support feature includes changes to the permissions for the IAM roles required by NC2 to deploy a cluster with EBS volumes. If you created the IAM roles prior to the release of EBS volume support, you must update the CloudFormation stack to make sure the IAM roles have the necessary permissions.
- The EBS volumes are listed as the CLOUD-SSD tier on the Hardware dashboard of the Prism Element web console. However, they are displayed as SSD-PCIe on the Cluster > Hardware Overview page of the Prism Central UI.

The following diagram illustrates how the different storage tiers are used in NC2. The following table lists the storage volume numbers and sizes.

| Instance | Local Storage | Size of Each EBS Volume | Total EBS Storage |
|:---:|:---:|:---:|:---:|
| i3en.metal | 60 TB | 10 TB | 10 to 150 TB |
| i4i.metal | 30 TB | 7.5 TB | 7.5 to 120 TB |
| i3.metal | 15.2 TB | 7.5 TB | 7.5 to 60 TB |

AWS bills you an additional charge for the EBS volumes and S3 storage for the time the cluster is hibernated. If a node turns unhealthy and you add another node to a cluster to evacuate data or VMs, AWS also charges you for the new node. See the [AWS Pricing Calculator](URL) for information about how AWS bills you for EBS volumes.

For more information on how to add EBS volumes while deploying an NC2 cluster, see [Creating a Cluster](URL). For more information on how to increase the EBS storage, see [Updating Cluster Capacity](URL).
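A minimal AWS CLI sketch for verifying the gp3 volumes attached to one cluster node, for example when checking the boot volumes and any additional data volumes; the instance ID is a placeholder.

```bash
# A minimal sketch: list the EBS volumes attached to one cluster node so you can
# verify the boot volumes and any additional gp3 data volumes.
aws ec2 describe-volumes \
  --filters Name=attachment.instance-id,Values=i-0123456789abcdef0 \
  --query 'Volumes[].{Id:VolumeId,Type:VolumeType,SizeGiB:Size,Device:Attachments[0].Device}' \
  --output table
```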
## Limitations

The following are the limitations of NC2 in this release:

- A maximum of 28 nodes is supported in a cluster. NC2 supports 28-node cluster deployment in AWS regions that have seven placement groups.
- NC2 does not recommend using single-node clusters in production environments.
- Two-node clusters are not supported.
- There can be a maximum of 14 ENIs per host. Each ENI can have up to 50 IP addresses, with 49 usable IPs. For more information, see [AWS Elastic Network Interfaces (ENIs) and IP Addresses](URL).
- You cannot mix dedicated host tenancy and default tenancy instances in a single cluster. You cannot change the instance tenancy type after the NC2 cluster is deployed.
- Dedicated hosts cannot be hibernated. AWS EC2 dedicated hosts also do not support partition placement groups, so Nutanix cannot guarantee resiliency against AWS rack partition failures with dedicated hosts.
- You can create a heterogeneous cluster using a combination of i3.metal, i3en.metal, and i4i.metal instance types or z1d.metal, m5d.metal, and m6id.metal instance types. However, you can create only homogeneous clusters with g4dn.metal instances; they cannot be used to create a heterogeneous cluster. See [Creating a Heterogeneous Cluster](URL) for more details.
- NC2 does not support sharing of AWS subnets among multiple clusters.
- Only IPv4 is supported.
- Public CIDRs are not supported for subnets on NC2 on AWS clusters when associated with cloud subnets. You must use only private CIDRs.
- Do not use the 192.168.5.0/24 CIDR for the VPC used to deploy the NC2 on AWS cluster. All Nutanix nodes use that CIDR for communication between the CVM and the installed hypervisor.
- Unmanaged networks are not supported in this release.
- Broadcast and unknown unicast traffic are dropped.
- The default configuration for CVMs on NC2 with AOS 6.7 or earlier is 32 GB of RAM. On NC2 with AOS 6.7.1.5, the CVM memory size is set to 48 GB.
- If you choose to create a new VPC by using the NC2 console, you cannot deploy other clusters in that VPC when you are creating a cluster. However, if you choose to deploy a cluster in an existing VPC (a VPC you created using the AWS console), you can deploy multiple clusters in that VPC.
- For Prism Central scale-out deployments used in an NC2 on AWS environment, reconfiguration of Prism Central VM IP addresses is not supported.
- IP preservation is not currently supported when VMs are recovered from a Cluster Protect backup in S3 buckets.
- A proxy server cannot be used if your NC2 on AWS cluster uses Flow Virtual Networking.
- A cluster cannot be recovered if your NC2 on AWS cluster uses Flow Virtual Networking.
- The size of a VM recovered from MST might be larger than the original due to different block sizes on MST and AOS.
- If you intend to protect your clusters using the Cluster Protect feature, understand the limitations of using this feature as listed in Limitations of Cluster Protect.
- The Cluster Protect and MST DR with Pilot Light Deployment features are not compatible with each other. You can configure only one of these features for your NC2 cluster.
## Non-Applicable On-Premises Configurations

The following is a list of the configurations and settings that are supported in an on-premises Nutanix cluster but are not applicable to a cluster running in AWS.

## Prism Element and Prism Central Configurations

- VLAN ID: AWS does not support VLANs. Therefore, if you deploy a cluster on AWS, you do not need to provide the VLAN ID when you create or update a network in the cluster. The VLAN ID is replaced by the subnet ID, which uniquely identifies a given network in a VPC.
- Network Visualization: The Network Visualization feature of the on-premises Prism Element web console is a consolidated graphical representation of the network of the Nutanix cluster VMs, hosts, network components (as physical and logical interfaces), and attached first-hop switches (alternately referred to as Top-of-Rack (ToR) switches in this document). In an on-premises cluster, the information about ToR switches is configured by using a CLI command. The cluster also uses SNMP and LLDP to fetch more information from the switch. In a cluster running in AWS, you have no visibility into the actual cloud infrastructure, such as the ToR switches. API support is not available to discover the cloud infrastructure components in Nutanix clusters. Given that the cluster is deployed in a single VPC, the switch view is replaced by the VPC. Any configuration options on the network switch are disabled for clusters deployed in AWS.
- Uplink Configuration: The functionality to update the uplink configuration is disabled for a cluster running in AWS.
- Hardware Configuration: The Switch tab in the Hardware menu of the Prism Element web console is disabled for a cluster running in AWS.
- Rack Configuration: The functionality to configure racks is disabled for a cluster running in AWS. Clusters are deployed as rack-aware by default. APIs to create racks are also disabled on clusters running in AWS.
- Broadcast and LLDP: AWS does not support broadcast or any link-layer information based on protocols such as LLDP.
- Security Dashboard: The dashboard that provides a dynamic summary of the security posture across all registered clusters is not supported for NC2.
- Host NIC: Elastic Network Interfaces (ENIs) provisioned on bare-metal AWS instances are virtual interfaces provided by Nitro cards. AWS does not provide any bandwidth guarantees for each ENI, but provides an aggregate multi-flow bandwidth of 25 Gbps. Also, when clusters are deployed on AWS, ENI creation and deletion is dynamic based on UVMs, and you do not need to perform these workflows. Hence, the Prism Element web console displays only single host NIC information, that is, eth0, which is the primary ENI of the bare-metal instance. All the configuration and statistical attributes are associated with eth0.
- Hosts Only, No Blocks: Hosts are independent and not grouped together as a block. The block view is changed to the host view in the Prism Element web console.
- Field Replaceable Units: The functionality to replace and repair disks is disabled for a cluster running in AWS.
- Backplane LAN: NC2 on AWS does not support RDMA-based vNICs; hence, support for the backplane LAN is disabled.
- Firmware Upgrades: For an on-premises cluster, Life Cycle Manager (LCM) allows you to perform upgrades of the BMC, BIOS, and any hardware component firmware. However, these components are not applicable to clusters deployed in AWS. Therefore, LCM does not list these items in the upgrade inventory.
- License Updates: You cannot update your NC2 licenses by using the Prism Element web console. Update your NC2 licenses by using the NC2 console.

## Cluster Operations

Perform the following actions using the NC2 console:

- Cluster deployment and provisioning must be performed by using the NC2 console and not by using Foundation.
- Perform add-node and remove-node operations by using the NC2 console and not by using the Prism Element web console.

## aCLI Operations

The following aCLI commands are disabled in a cluster in AWS:

| Namespace | Options |
|---|---|
| net | create_cluster_vswitch, delete_cluster_vswitch, get_cluster_vswitch, list_cluster_vswitch, update_cluster_vswitch |
| host | enter_maintenance_mode, enter_maintenance_mode_check, exit_maintenance_mode |

## nCLI Operations

The following nCLI commands are disabled in a cluster in AWS:

| Entity | Options |
|---|---|
| network | add-switch-config, edit-switch-config, delete-switch-config, list-switch, list-switch-ports, get-switch-collector-config, edit-switch-collector-config |
| cluster | edit-hypervisor-lldp-params, get-hypervisor-lldp-config, edit-param disable-degraded-state-monitoring |
| disk | delete, remove-start, remove-status |
| software | download, list, remove, upload |

## API Operations

The following API calls are disabled or changed in a Nutanix cluster running in AWS:

| API | Changes |
|---|---|
| POST /hosts/{hostid}/enter_maintenance_mode | Not supported |
| POST /hosts/{hostid}/exit_maintenance_mode | Not supported |
| GET /clusters | Values for the rack and block configuration are not displayed. |
| POST /cluster/block_aware_fixer | Not supported |
| DELETE /api/nutanix/v1/cluster/rackable_units/{uuid} | Not supported |
| DELETE /api/nutanix/v3/rackable_units/{uuid} | Not supported |
| DELETE /api/nutanix/v3/disks/{id} | Not supported |
| GET /hosts | Returns the instance ID of the AWS instance instead of the serial number of the host in an on-premises cluster. Returns no values for the following attributes: ipmiAddress, ipmiPassword, ipmiUsername, backplaneIp, bmcModel (the model of the BMC present on the node), bmcVersion (the version of the BMC present on the node), controllerVmBackplaneIp (all string, optional). |
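As a hedged illustration of the GET /hosts behavior noted above, the hosts list can be queried directly from the Prism REST API; this sketch assumes the standard Prism v2.0 endpoint on the cluster virtual IP over port 9440, and the IP address and credentials are placeholders.

```bash
# A hedged sketch: query the hosts list from the Prism v2.0 REST API. On NC2 on
# AWS the host entries carry the AWS instance ID instead of a serial number, and
# the IPMI/BMC fields listed in the table above are empty.
curl -sk -u admin:'<password>' \
  "https://<cluster-virtual-ip>:9440/PrismGateway/services/rest/v2.0/hosts" | python3 -m json.tool
```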
## Creating My Nutanix Account

A My Nutanix account allows you to subscribe to, access, and manage NC2. After creating a My Nutanix account, you can access the NC2 console through the My Nutanix dashboard. You can use NC2 for a 30-day free trial period (one common free trial period for NC2 on all supported clouds) or sign up to pay for NC2 usage beyond the free trial period. You can pay for NC2 using your Nutanix licenses or with a subscription plan.

**Procedure**

1. Go to [https://my.nutanix.com](URL).
2. Click Sign up now.
3. Enter your details, including first name, last name, company name, job title, phone number, country, email, and password.
   - Follow the specified password policy while creating the password. Personal domain email addresses, such as gmail.com or yahoo.com, are not allowed. You must sign up with a company email address.
4. Click Submit.
5. Click the link in the email to verify your email address. A confirmation
