VMware ICM v8 Module 8 PDF

Summary

This document covers VMware ICM v8 Module 8 on managing virtual machines. It includes information about the types of VM migrations, configuring Enhanced vMotion Compatibility and the CPU constraints on vSphere vMotion, working with virtual machine snapshots, virtual CPU and memory concepts, and resource controls.

Full Transcript

Module 8 Managing Virtual Machines

8-2 Importance
Managing VMs effectively requires skills in migrating VMs, taking snapshots, and managing the resources of the VMs.

8-3 Module Lessons
1. Migrating VMs with vSphere vMotion
2. Configuring Enhanced vMotion Compatibility
3. Migrating VMs with vSphere Storage vMotion
4. Cross vCenter Migrations
5. Creating Virtual Machine Snapshots
6. Virtual CPU and Memory Concepts
7. Resource Controls

8-4 Lesson 1: Migrating VMs with vSphere vMotion

8-5 Learner Objectives
Recognize the types of VM migrations that you can perform within a vCenter instance. Explain how vSphere vMotion works. Verify vSphere vMotion requirements. Migrate virtual machines using vSphere vMotion.

8-6 About VM Migration
Migration means moving a VM from one host, datastore, or vCenter instance to another host, datastore, or vCenter instance. Migration can be cold or hot: a cold migration moves a powered-off or suspended VM, and a hot migration moves a powered-on VM. vCenter performs compatibility checks before migrating suspended or powered-on VMs to ensure that the VM is compatible with the target host.

8-7 Migration Types
The type of migration that you perform depends on the power state of the VM that you select in the inventory and the migration type that you select in the Migrate wizard. (Slide shows the Migrate wizard for the VM Photon-02 at the Select a migration type step.) The Migrate wizard provides the following migration options:
- Compute resource only: Move a VM, but not its storage, to another host. For a hot migration, vSphere vMotion is used to move the VM.
- Storage only: Move a VM's files or objects to a new datastore. For a hot migration, vSphere Storage vMotion is used to move the VM.
- Both compute resource and storage: Move a VM to another host and datastore. For a hot migration, vSphere vMotion and vSphere Storage vMotion are used to move the VM.
- Cross vCenter Server export: Move the VM to a host and datastore managed by a different vCenter instance that is not linked to the current SSO domain.
The purpose of the migration determines which migration technique to use. For example, you might need to shut down a host for maintenance but keep the VMs running. Use vSphere vMotion to migrate the VMs instead of performing a cold or suspended VM migration. If you must move a VM's files to another datastore to better balance the disk load or transition to another storage array, you use vSphere Storage vMotion. Some migration techniques, such as vSphere vMotion migration, have special hardware requirements that must be met. Other techniques, such as a cold migration, do not have special hardware requirements.

8-8 About vSphere vMotion
A vSphere vMotion migration moves a powered-on VM from one host (compute resource) to another. vSphere vMotion provides the following capabilities: improvement in overall hardware use, continuous VM operation while accommodating scheduled ESXi host downtime, and vSphere DRS to balance VMs across hosts. Using vSphere vMotion, you can migrate running VMs from one ESXi host to another ESXi host with no disruption or downtime. vSphere DRS uses vSphere vMotion to migrate running VMs from one host to another to ensure that the VMs have the resources that they require. With vSphere vMotion, the entire state of the VM is moved from one host to another, but the data storage remains in the same datastore. The state information includes the current memory content and all the information that defines and identifies the VM. The memory content includes transaction data and whatever bits of the operating system and applications are in memory. The definition and identification information stored in the state includes all the data that maps to the VM hardware elements, such as the BIOS or EFI, devices, CPU, and MAC addresses for the Ethernet cards.
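A vSphere vMotion migration can also be requested through the vSphere API. The following sketch uses the open-source pyVmomi SDK to perform a compute-resource-only migration; the vCenter address, credentials, VM name, and destination host name are placeholders, and the snippet illustrates the API flow under those assumptions rather than a step from the course labs.

```python
# Minimal sketch: compute-resource-only migration (vSphere vMotion) with pyVmomi.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim


def find_by_name(content, vimtype, name):
    """Return the first inventory object of the given type with the given name."""
    view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
    try:
        return next(obj for obj in view.view if obj.name == name)
    finally:
        view.Destroy()


ctx = ssl._create_unverified_context()              # lab-only: skip certificate checks
si = SmartConnect(host="vc.example.com",            # placeholder vCenter FQDN
                  user="administrator@vsphere.local",
                  pwd="VMware1!",                    # placeholder password
                  sslContext=ctx)
try:
    content = si.RetrieveContent()
    vm = find_by_name(content, vim.VirtualMachine, "Photon-02")
    dest = find_by_name(content, vim.HostSystem, "sa-esxi-02.vclass.local")

    # Compute resource only: change the host (and resource pool) but keep the
    # VM's files on the same datastore. On a powered-on VM this is a vSphere
    # vMotion migration; on a powered-off VM it is a cold migration.
    spec = vim.vm.RelocateSpec(host=dest, pool=dest.parent.resourcePool)
    WaitForTask(vm.RelocateVM_Task(spec))
    print(f"{vm.name} now runs on {vm.runtime.host.name}")
finally:
    Disconnect(si)
```

Later sketches in this module reuse the `si` connection and the `find_by_name` helper defined here.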
8-9 Configuring vSphere vMotion Networks
vSphere vMotion migrations require correctly configured VMkernel adapters on the source and destination hosts. (Slide shows the virtual switch configuration on sa-esxi-01.vclass.local and a compatibility error for VM-01 reporting that the vMotion interface is not configured, or is misconfigured, on the destination host sa-esxi-04.vclass.local.)

If validation succeeds, you can continue in the wizard. If validation does not succeed, a list of vSphere vMotion errors and warnings displays in the Compatibility pane. With warnings, you can still perform a vSphere vMotion migration, but with errors, you cannot continue. You must exit the wizard and fix all errors before retrying the migration. If a failure occurs during the vSphere vMotion migration, the VM is not migrated and continues to run on the source host.

8-16 Migrating Encrypted VMs
When powered-on encrypted VMs are migrated, encrypted vSphere vMotion is automatically used. For VMs that are not encrypted, select one of the following encrypted vSphere vMotion menu items:
- Disabled
- Opportunistic (default): Encrypted vSphere vMotion is used if the source and destination hosts support it.
- Required: If the source or destination host does not support encrypted vSphere vMotion, the migration fails.
(Slide shows the Edit Settings dialog for Win10-04 with the VM Options > Encryption settings: Encrypt VM, which requires a key management server, Encrypted vMotion, and Encrypted FT.)
Encrypted vSphere vMotion secures confidentiality, integrity, and authenticity of data that is transferred using vSphere vMotion. Encrypted vSphere vMotion supports all variants of vSphere vMotion, including migration across vCenter systems. Encrypted vSphere Storage vMotion is not supported. You cannot turn off encrypted vSphere vMotion for encrypted VMs.
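For an unencrypted VM, the encrypted vMotion policy can also be set through the API. The sketch below assumes the `si` connection and `find_by_name` helper from the earlier example and a VM named Win10-04; it uses the `migrateEncryption` property of the VM configuration spec, whose accepted values are disabled, opportunistic, and required. Treat it as an illustration and verify the property against your SDK version.

```python
from pyVim.task import WaitForTask
from pyVmomi import vim

content = si.RetrieveContent()
vm = find_by_name(content, vim.VirtualMachine, "Win10-04")   # placeholder VM name

# Require encrypted vSphere vMotion for this VM.
# Valid values: "disabled", "opportunistic", "required".
spec = vim.vm.ConfigSpec(migrateEncryption="required")
WaitForTask(vm.ReconfigVM_Task(spec))

print(vm.config.migrateEncryption)   # expected to report "required"
```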
8-17 Lab 20: vSphere vMotion Migrations
Configure vSphere vMotion networking and migrate virtual machines using vSphere vMotion:
1. Configure vSphere vMotion Networking on sa-esxi-01.vclass.local
2. Configure vSphere vMotion Networking on sa-esxi-02.vclass.local
3. Prepare Virtual Machines for vSphere vMotion Migration
4. Migrate Virtual Machines Using vSphere vMotion

8-18 Review of Learner Objectives
Recognize the types of VM migrations that you can perform within a vCenter instance. Explain how vSphere vMotion works. Verify vSphere vMotion requirements. Migrate virtual machines using vSphere vMotion.

8-19 Lesson 2: Configuring Enhanced vMotion Compatibility

8-20 Learner Objectives
Describe the role of Enhanced vMotion Compatibility in migrations. Configure EVC CPU mode on a vSphere cluster. Explain how per-VM EVC CPU mode works with vSphere vMotion. Configure EVC Graphics mode on a vSphere cluster or a VM.

8-21 CPU Constraints on vSphere vMotion Migration
CPU compatibility between source and target hosts is a vSphere vMotion requirement that must be met. Depending on the CPU characteristic, an exact match between the source and target host might or might not be required:
- Clock speeds, cache sizes, hyperthreading, and number of cores: no exact match required, because the VMkernel virtualizes these characteristics.
- Manufacturer (Intel or AMD), family, and generation (for example, Opteron4 or Intel Westmere): exact match required, because instruction sets contain many small differences.
- Presence or absence of SSE3, SSSE3, or SSE4.1 instructions: exact match required, because multimedia instructions are usable directly by applications.
- Virtualization hardware assist: for 32-bit VMs, no exact match required, because the VMkernel virtualizes this characteristic; for 64-bit VMs on Intel, exact match required, because the Intel 64-bit VMware implementation uses Intel VT.
For example, if hyperthreading is activated on the source host and deactivated on the destination host, the vSphere vMotion migration continues because the VMkernel handles this difference in characteristics. But if the source host processor supports SSE4.1 instructions and the destination host processor does not support them, the hosts are considered incompatible and the vSphere vMotion migration fails. SSE4.1 instructions are application-level instructions that bypass the virtualization layer and might cause application instability if mismatched after a migration with vSphere vMotion.

8-22 About Enhanced vMotion Compatibility
Enhanced vMotion Compatibility is a cluster feature that enables vSphere vMotion migrations between hosts without identical feature sets. The feature uses CPU baselines to configure all the processors in the cluster that are activated for Enhanced vMotion Compatibility. (Slide shows a cluster enabled for EVC.) Enhanced vMotion Compatibility verifies that all hosts in a cluster present the same CPU feature set to VMs, even if the CPUs on the hosts differ. Enhanced vMotion Compatibility facilitates safe vSphere vMotion migration across a range of CPU generations. With Enhanced vMotion Compatibility, you can use vSphere vMotion to migrate VMs among CPUs that otherwise are considered incompatible. With Enhanced vMotion Compatibility, vCenter can enforce vSphere vMotion compatibility among all hosts in a cluster by forcing hosts to expose a common set of CPU features (baseline) to VMs. A baseline is a set of CPU features that are supported by every host in the cluster. When you configure Enhanced vMotion Compatibility, you set all host processors in the cluster to present the features of a baseline processor. After the features are activated for a cluster, hosts that are added to the cluster are automatically configured to the CPU baseline. Hosts that cannot be configured to the baseline are not permitted to join the cluster. VMs in the cluster always see an identical CPU feature set, no matter on which host they run. Because the process is automatic, Enhanced vMotion Compatibility is easy to use and requires no specialized knowledge of CPU features and masks.
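The EVC baseline of a cluster, and the most capable baseline each host supports, can be read from the API. The sketch below reuses the assumed `si` connection and `find_by_name` helper and a hypothetical cluster name; the property names (`currentEVCModeKey` on the cluster summary, `maxEVCModeKey` on the host summary) are taken from the vSphere API reference, so treat this as an illustrative starting point.

```python
from pyVmomi import vim

content = si.RetrieveContent()
cluster = find_by_name(content, vim.ClusterComputeResource, "SA-Compute-01")  # placeholder cluster

# The cluster summary reports the EVC baseline that is currently enforced
# (empty when EVC is disabled).
print("Cluster EVC mode:", cluster.summary.currentEVCModeKey or "disabled")

# Each host summary reports the highest EVC mode its CPUs can accept, which
# helps you choose a baseline that every host in the cluster supports.
for host in cluster.host:
    print(f"{host.name}: max supported EVC mode = {host.summary.maxEVCModeKey}")
```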
8-23 EVC Cluster Requirements for CPU Mode
All hosts in the cluster must meet several CPU-based requirements:
- Use CPUs from a single vendor, either Intel or AMD.
- Be activated for hardware virtualization: AMD-V or Intel VT.
- Be activated for execution-disable technology: AMD No eXecute (NX) or Intel eXecute Disable (XD).
- Be configured for vSphere vMotion migration.
- Applications in VMs must be CPU ID compatible.
Before you create an Enhanced vMotion Compatibility cluster, ensure that the hosts that you intend to add to the cluster meet the requirements. Enhanced vMotion Compatibility automatically configures hosts whose CPUs have Intel FlexMigration and AMD-V Extended Migration technologies to be compatible with vSphere vMotion with hosts that use older CPUs. For Enhanced vMotion Compatibility to function properly, the applications on the VMs must be written to use the CPU ID machine instruction for discovering CPU features, as recommended by the CPU vendors. vSphere cannot support Enhanced vMotion Compatibility with applications that do not follow the CPU vendor recommendations for discovering CPU features. To determine which EVC modes are compatible with your CPU, search the VMware Compatibility Guide at http://www.vmware.com/resources/compatibility. Search for the server model or CPU family, and click the entry in the CPU Series column to display the compatible EVC modes.

8-24 Configuring EVC CPU Mode on an Existing Cluster
You configure EVC CPU mode on an existing cluster to ensure vSphere vMotion CPU compatibility between the hosts in the cluster. (Slide shows a cluster on which VMware EVC is disabled and the Change EVC Mode dialog, which lists Intel CPU baselines by generation, such as Merom, Penryn, Nehalem, Westmere, Sandy Bridge, Ivy Bridge, Haswell, Broadwell, and Skylake, validates that the selected baseline is compatible with all hosts in the cluster, and also offers a Graphics Mode (vSGA) setting.)

(Slide shows a datastore browser view of the ICM-Datastore with folders for several VMs and their snapshot files.) A VM can have one or more snapshots. For each snapshot, the following files are created:
Snapshot delta file: This file contains the changes to the virtual disk's data since the snapshot was taken.
When you take a snapshot of a VM, the state of each virtual disk is preserved. The VM stops writing to its -flat.vmdk file. Writes are redirected to -######-delta.vmdk (or -######-sesparse.vmdk) instead, where ###### is the next number in the sequence. You can exclude one or more virtual disks from a snapshot by designating them as independent disks. Configuring a virtual disk as independent is typically done when the virtual disk is created, but this option can be changed whenever the VM is powered off.

Disk descriptor file: -00000#.vmdk. This file is a small text file that contains information about the snapshot.

Configuration state file: -Snapshot#.vmsn, where # is the next number in the sequence, starting with 1. This file holds the active state of the VM at the point that the snapshot was taken, including virtual hardware, power state, and hardware version.

Memory state file: -Snapshot#.vmem. This file is created if the option to include memory state was selected during the creation of the snapshot. It contains the entire contents of the VM's memory at the time that the snapshot of the VM was taken.

Snapshot active memory file: -.vmem. This file contains the contents of the VM memory if the option to include memory is selected during the creation of the snapshot.

The .vmsd file is the snapshot list file and is created at the time that the VM is created. It maintains snapshot information for a VM so that it can create a snapshot list in the vSphere Client. This information includes the name of the snapshot .vmsn file and the name of the virtual disk file.

The snapshot state file has a .vmsn extension and is used to store the state of a VM when a snapshot is taken. A new .vmsn file is created for every snapshot that is created on a VM and is deleted when the snapshot is deleted. The size of this file varies, based on the options selected when the snapshot is created. For example, including the memory state of the VM in the snapshot increases the size of the .vmsn file.

You can exclude one or more of the VMDKs from a snapshot by designating a virtual disk in the VM as an independent disk. Placing a virtual disk in independent mode is typically done when the virtual disk is created. If the virtual disk was created without activating independent mode, you must power off the VM to activate it.

Other files might also exist, depending on the VM hardware version. For example, each snapshot of a VM that is powered on has an associated .vmem file, which contains the guest operating system main memory, saved as part of the snapshot.
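Snapshots are also exposed through the vSphere API as a tree of snapshot objects on each VM. The sketch below assumes the `si` connection and `find_by_name` helper from earlier and a hypothetical VM named Win10-01; it takes a snapshot that includes the memory state (which creates the .vmem file) and then prints the snapshot tree.

```python
from pyVim.task import WaitForTask
from pyVmomi import vim

content = si.RetrieveContent()
vm = find_by_name(content, vim.VirtualMachine, "Win10-01")   # placeholder VM name

# Take a snapshot. memory=True preserves the memory state; quiesce=True would
# instead ask VMware Tools to quiesce the guest file system.
WaitForTask(vm.CreateSnapshot_Task(name="Security Patch 1.0",
                                   description="Before applying patches",
                                   memory=True,
                                   quiesce=False))


def print_tree(snapshots, indent=0):
    """Recursively print a VM's snapshot tree."""
    for snap in snapshots:
        print(" " * indent + f"{snap.name} (created {snap.createTime})")
        print_tree(snap.childSnapshotList, indent + 2)


if vm.snapshot:                                # None when the VM has no snapshots
    print_tree(vm.snapshot.rootSnapshotList)
```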
8-61 VM Snapshot Files Example (1)
This example shows the snapshot and virtual disk files that are created when a VM has no snapshots. The VM Win10-01 has only the files Win10-01.vmsd, Win10-01-flat.vmdk, and Win10-01.vmdk, and the You are here marker points to the base disk.

8-62 VM Snapshot Files Example (2)
This example shows the snapshot and virtual disk files that are created when a VM has one snapshot. After the first snapshot (Security Patch 1.0, taken with memory state), the files Win10-01-Snapshot1.vmem, Win10-01-Snapshot1.vmsn, Win10-01-000001-sesparse.vmdk, and Win10-01-000001.vmdk are added, and the You are here marker points to the snapshot's delta disk.

8-63 VM Snapshot Files Example (3)
This example shows the snapshot and virtual disk files that exist after a second snapshot is taken: Win10-01-Snapshot2.vmsn and its associated delta disk files are added below the Security Patch 1.0 snapshot, and the You are here marker moves to the newest snapshot state.

You can perform the following actions from the Manage Snapshots window:
- Edit the snapshot: Edit the snapshot name and description.
- Delete the snapshot: Remove the snapshot from the Snapshot Manager, consolidate the snapshot files to the parent snapshot disk, and merge them with the VM base disk.
- Delete all snapshots: Commit all the intermediate snapshots before the current-state icon (You are here) to the VM and remove all snapshots for that VM.
- Revert to a snapshot: Restore, or revert to, a particular snapshot. The snapshot that you restore becomes the current snapshot.
When you revert to a snapshot, you return all these items to the state that they were in at the time that you took the snapshot. If you want the VM to be suspended, powered on, or powered off when you start it, ensure that the VM is in the correct state when you take the snapshot. Deleting a snapshot (DELETE or DELETE ALL) consolidates the changes between snapshots and previous disk states. Deleting a snapshot also writes to the parent disk all data from the delta disk that contains the information about the deleted snapshot. When you delete the base parent snapshot, all changes merge with the base VMDK.

8-65 Deleting VM Snapshots (1)
If you delete a snapshot one or more levels above the You are here level, the snapshot state is deleted. In this example, the snap01 data is committed into the parent (base disk), and the foundation for snap02 is retained. To play the animation, go to https://vmware.bravais.com/s/WhbcXR4sSwk2VI7MeaXD.

8-66 Deleting VM Snapshots (2)
If you delete the latest snapshot, the changes are committed to its parent. The snap02 data is committed into the snap01 data, and the snap02-delta.vmdk file is deleted. To play the animation, go to https://vmware.bravais.com/s/IOJYYOzMTv7pvxBqNcOp.

8-67 Deleting VM Snapshots (3)
If you delete a snapshot one or more levels below the You are here level, subsequent snapshots are deleted, and you can no longer return to those states. The snap02 data is deleted. To play the animation, go to https://vmware.bravais.com/s/NiOxPT3iycem08WYXKom.

8-68 Deleting All VM Snapshots
The delete-all-snapshots mechanism uses storage space efficiently. The size of the base disk does not increase. Snap01 is committed to the base disk before snap02 is committed. To play the animation, go to https://vmware.bravais.com/s/L3il0HlrywEhlgr5p7RP. All snapshots before the You are here point are committed all the way up to the base disk. All snapshots after You are here are discarded. Like a single snapshot deletion, changed blocks in the snapshot overwrite their counterparts in the base disk.
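These Snapshot Manager actions correspond to API calls on the snapshot objects. The following sketch, which assumes the same `si` connection, the `find_by_name` helper, and the hypothetical VM and snapshot names used earlier, reverts the VM to a named snapshot and then deletes that snapshot so that its delta data is committed to the parent.

```python
from pyVim.task import WaitForTask
from pyVmomi import vim

content = si.RetrieveContent()
vm = find_by_name(content, vim.VirtualMachine, "Win10-01")   # placeholder VM name


def find_snapshot(snapshots, name):
    """Depth-first search of the snapshot tree for a snapshot with the given name."""
    for snap in snapshots:
        if snap.name == name:
            return snap.snapshot          # the VirtualMachineSnapshot managed object
        found = find_snapshot(snap.childSnapshotList, name)
        if found:
            return found
    return None


snap = find_snapshot(vm.snapshot.rootSnapshotList, "Security Patch 1.0")

# Revert the VM to the snapshot; the snapshot becomes the current state.
WaitForTask(snap.RevertToSnapshot_Task())

# Delete only this snapshot and merge its delta disk into the parent.
# vm.RemoveAllSnapshots_Task() would instead commit everything to the base disk.
WaitForTask(snap.RemoveSnapshot_Task(removeChildren=False))
```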
8-69 About Snapshot Consolidation
Snapshot consolidation is a method for committing a chain of delta disks to the base disks when the Snapshot Manager shows that no snapshots exist but the delta disk files remain on the datastore. Snapshot consolidation resolves problems that might occur with snapshots: the snapshot descriptor file is committed correctly, and the Snapshot window shows that all the snapshots are deleted, but the snapshot files (-delta.vmdk or -sesparse.vmdk) still exist in the VM's folder on the datastore. Snapshot files can continue to expand until they reach the size of the -flat.vmdk file or until the datastore runs out of space.

Snapshot consolidation is a way to clean unneeded delta disk files from a datastore. If no snapshots are registered for a VM but delta disk files exist, snapshot consolidation commits the chain of the delta disk files and removes them. If consolidation is not performed, the delta disk files might expand to the point of consuming all the remaining space on the VM's datastore, or the delta disk file reaches its configured size. The delta disk cannot be larger than the size configured for the base disk.

8-70 Discovering When to Consolidate Snapshots
On the Monitor tab under All Issues for the VM, a warning notifies you that a consolidation is required. (Slide shows the All Issues view for a VM with a consolidation warning.) With snapshot consolidation, vCenter displays a warning when the descriptor and the snapshot files do not match. After the warning displays, you can use the vSphere Client to commit the snapshots.

8-71 Consolidating Snapshots
After the snapshot consolidation warning appears, you can use the vSphere Client to consolidate the snapshots. All snapshot delta disks are committed to the base disks. (Slide shows the Actions menu for the VM Linux-11 in the vSphere Client, with Snapshots options such as Take Snapshot, Manage Snapshots, Delete All Snapshots, and Consolidate.) For a list of best practices for using snapshots in a vSphere environment, see VMware knowledge base article 1025279 at http://kb.vmware.com/kb/1025279.
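The same consolidation warning is visible through the API as a flag on each VM's runtime information. The short sketch below (same assumed connection as in the earlier examples) checks the flag across the inventory and triggers disk consolidation where it is needed.

```python
from pyVim.task import WaitForTask
from pyVmomi import vim

content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(content.rootFolder, [vim.VirtualMachine], True)
try:
    for vm in view.view:
        # consolidationNeeded is True when delta disks remain on the datastore
        # even though the Snapshot Manager shows no snapshots.
        if vm.runtime.consolidationNeeded:
            print(f"Consolidating disks for {vm.name} ...")
            WaitForTask(vm.ConsolidateVMDisks_Task())
finally:
    view.Destroy()
```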
8-72 Lab 22: Working with Snapshots
Take VM snapshots, revert a VM to a different snapshot, and delete snapshots:
1. Take Snapshots of a Virtual Machine
2. Add Files and Take Another Snapshot of a Virtual Machine
3. Revert the Virtual Machine to a Snapshot
4. Delete a Snapshot
5. Delete All Snapshots

8-73 Review of Learner Objectives
Take a snapshot of a virtual machine. Manage multiple snapshots. Delete virtual machine snapshots. Consolidate snapshots.

8-74 Lesson 6: Virtual CPU and Memory Concepts

8-75 Learner Objectives
Describe CPU and memory concepts in relation to a virtualized environment. Recognize techniques for addressing memory resource overcommitment. Identify additional technologies that improve memory use. Describe how VMware Virtual SMP works. Explain how the VMkernel uses hyperthreading.

8-76 Memory Virtualization Basics
vSphere has the following layers of memory: guest OS virtual memory is presented to applications by the operating system, guest OS physical memory is presented to the virtual machine by the VMkernel, and host machine memory that is managed by the VMkernel provides a contiguous, addressable memory space that is used by the VM. (Slide shows the layers: application and guest OS virtual memory, guest OS physical memory, and ESXi host machine memory.) When running a virtual machine, the VMkernel creates a contiguous addressable memory space for the VM. This memory space has the same properties as the virtual memory address space presented to applications by the guest operating system. This memory space allows the VMkernel to run multiple VMs simultaneously while protecting the memory of each VM from being accessed by others. From the perspective of an application running in the VM, the VMkernel adds an extra level of address translation that maps the guest physical address to the host physical address.

8-77 VM Memory Overcommitment
Memory is overcommitted when the combined configured memory footprint of all powered-on VMs exceeds that of the host memory size. When memory is overcommitted: VMs do not always use their full allocated memory; to improve memory use, an ESXi host reclaims memory from idle VMs to allocate to VMs that need more memory; VM memory can be swapped out to the .vswp file; and VM memory overhead can be swapped out to the vmx-*.vswp file. (Slide shows a host with 32 GB of machine memory and four 12 GB VMs, three powered on and one powered off, for a total of 36 GB of powered-on VM memory; each powered-on VM has a .vswp file and a vmx-*.vswp file.)

The total configured memory sizes of all VMs might exceed the amount of available physical memory on the host. However, this condition does not necessarily mean that memory is overcommitted. Memory is overcommitted when the working memory size of all VMs exceeds that of the ESXi host's physical memory size. Because of the memory management techniques used by the ESXi host, your VMs can use more virtual RAM than the available physical RAM on the host. For example, you can have a host with 32 GB of memory and three VMs running with 12 GB of memory each. In that case, the memory is overcommitted. If all three VMs are idle, the combined consumed memory is below 32 GB. However, if all VMs are actively consuming memory, then their memory footprint might exceed 32 GB and the ESXi host becomes overcommitted. An ESXi host can run out of memory if VMs consume all reservable memory in an overcommitted memory environment. Although the powered-on VMs are not affected, a new VM might fail to power on because of lack of memory. Overcommitment makes sense because, typically, some VMs are lightly loaded whereas others are more heavily loaded, and relative activity levels vary over time. When host machine memory is overcommitted, VM memory is swapped out to disk: extra memory from a VM is gathered into a swap file with the .vswp extension. The host uses the vmx-*.vswp swap file to gather and track memory overhead. Memory overhead refers to memory used by the VMX (VM executable) process.

8-78 Memory Overcommit Techniques
An ESXi host uses memory overcommit techniques to allow the overcommitment of memory while possibly avoiding the need to page memory out to disk. The methods used by the ESXi host are:
- Transparent page sharing: This method economizes the use of physical memory pages. Pages with identical contents are stored only once.
- Ballooning: This method uses the VMware Tools balloon driver to deallocate memory from virtual machines. The ballooning mechanism becomes active when memory is scarce, sometimes forcing VMs to use their own paging areas.
- Memory compression: This method reduces a VM's memory footprint by storing memory in a compressed format.
- Host-level SSD swapping: The ESXi host can swap out memory to locally attached solid-state drives.
- VM memory paging to disk: Using VMkernel swap space is the last resort because of poor performance.
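The effect of these reclamation techniques on a running VM can be observed through its quick statistics. The sketch below (assumed connection and helper as before, hypothetical VM name) prints the ballooned, compressed, and swapped memory that vCenter reports for the VM.

```python
from pyVmomi import vim

content = si.RetrieveContent()
vm = find_by_name(content, vim.VirtualMachine, "Linux-02")    # placeholder VM name

stats = vm.summary.quickStats
print(f"Configured memory  : {vm.config.hardware.memoryMB} MB")
print(f"Active guest memory: {stats.guestMemoryUsage} MB")
print(f"Ballooned memory   : {stats.balloonedMemory} MB")     # reclaimed by the balloon driver
print(f"Compressed memory  : {stats.compressedMemory} KB")    # stored in compressed format
print(f"Swapped memory     : {stats.swappedMemory} MB")       # paged out to the .vswp file
```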
The VMkernel uses various techniques to dynamically reduce the amount of physical RAM that is required for each VM. Each technique is described in the order that the VMkernel uses it:
- Page sharing: ESXi can use a proprietary technique to transparently share memory pages between VMs, eliminating redundant copies of memory pages. Although pages are shared by default within VMs, as of vSphere 6.0, pages are no longer shared by default among VMs.
- Ballooning: If the host memory begins to get low and the VM's memory use approaches its memory target, ESXi uses ballooning to reduce that VM's memory demands. Using the VMware-supplied vmmemctl module installed in the guest operating system as part of VMware Tools, ESXi can cause the guest operating system to relinquish the memory pages it considers least valuable. Ballooning provides performance closely matching that of a native system under similar memory constraints. To use ballooning, the guest operating system must be configured with sufficient swap space.
- Memory compression: If the VM's memory use approaches the level at which host-level swapping is required, ESXi uses memory compression to reduce the number of memory pages that it must swap out. Because the decompression latency is much smaller than the swap-in latency, compressing memory pages has significantly less impact on performance than swapping out those pages.
- Swap to host cache: Host swap cache is an optional memory reclamation technique that uses local flash storage to cache a virtual machine's memory pages. By using local flash storage, the virtual machine avoids the latency associated with a storage network that might be used if it swapped memory pages to the virtual swap (.vswp) file.
- Regular host-level swapping: When memory pressure is severe and the hypervisor must swap memory pages to disk, the hypervisor swaps to a host swap cache rather than to a .vswp file. When a host runs out of space on the host cache, a virtual machine's cached memory is migrated to the virtual machine's regular .vswp file. Each host must have its own host swap cache configured.

8-79 Configuring Multicore VMs
You can build VMs with multiple virtual CPUs (vCPUs). The number of vCPUs that you configure for a single VM depends on the physical architecture of the ESXi host. (Slide shows how virtual CPUs map to physical threads, cores, and sockets on a single-core dual-socket system, a dual-core single-socket system, and a quad-core single-socket system.)

In addition to the physical host configuration, the number of vCPUs configured for a VM also depends on the guest operating system, the needs and scalability limits of the applications, and the specific use case for the VM itself. The VMkernel includes a CPU scheduler that dynamically schedules vCPUs on the physical CPUs of the host system. The VMkernel scheduler considers socket-core-thread topology when making scheduling decisions. Intel and AMD processors combine multiple processor cores into a single integrated circuit, called a socket in this discussion. A socket is a single package with one or more physical CPUs. Each core has one or more logical CPUs (LCPU in the diagram) or threads. With logical CPUs, the core can schedule one thread of execution. On the slide, the first system is a single-core, dual-socket system with two cores and, therefore, two logical CPUs. When a vCPU of a single-vCPU or multi-vCPU VM must be scheduled, the VMkernel maps the vCPU to an available logical processor.
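A VM's vCPU count and core-per-socket layout are part of its configuration and can be changed while the VM is powered off. The sketch below (assumed connection and helper, hypothetical VM name) reconfigures a VM with four vCPUs presented to the guest as two virtual sockets of two cores each.

```python
from pyVim.task import WaitForTask
from pyVmomi import vim

content = si.RetrieveContent()
vm = find_by_name(content, vim.VirtualMachine, "Photon-02")   # placeholder; VM must be powered off

# Four vCPUs in total, presented as 2 virtual sockets x 2 cores per socket.
spec = vim.vm.ConfigSpec(numCPUs=4, numCoresPerSocket=2)
WaitForTask(vm.ReconfigVM_Task(spec))

hw = vm.config.hardware
print(f"{vm.name}: {hw.numCPU} vCPUs, {hw.numCoresPerSocket} cores per socket")
```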
8-80 About Hyperthreading
With hyperthreading, a core can execute two threads or sets of instructions at the same time. Hyperthreading provides more logical CPUs on which vCPUs can be scheduled, and it is activated by default. To activate hyperthreading: verify that the host system supports hyperthreading, activate hyperthreading in the system BIOS, and ensure that hyperthreading for the ESXi host is turned on. (Slide shows a dual-core, single-socket system with hyperthreading.)

If hyperthreading is activated, ESXi can schedule two threads at the same time on each processor core (physical CPU). The drawback of hyperthreading is that it does not double the power of a core. So, if both threads of execution need the same on-chip resources at the same time, one thread has to wait. Still, on systems that use hyperthreading technology, performance is improved. An ESXi host that is activated for hyperthreading should behave almost exactly like a standard system. Logical processors on the same core have adjacent CPU numbers. Logical processors 0 and 1 are on the first core, logical processors 2 and 3 are on the second core, and so on. Consult the host system hardware documentation to verify whether the BIOS includes support for hyperthreading. Then, activate hyperthreading in the system BIOS. Some manufacturers call this option Logical Processor and others call it Enable Hyperthreading. Use the vSphere Client to ensure that hyperthreading for your host is turned on. To access the hyperthreading option, go to the host's Summary tab and select CPUs under Hardware.

8-81 CPU Load Balancing
The VMkernel balances processor time to guarantee that the load is spread smoothly across processor cores in the system. (Slide shows a hyperthreaded dual-core, dual-socket system.) The CPU scheduler can use each logical processor independently to execute VMs, providing capabilities that are similar to traditional symmetric multiprocessing (SMP) systems. The VMkernel intelligently manages processor time to guarantee that the load is spread smoothly across processor cores in the system. Every 2 milliseconds to 40 milliseconds (depending on the socket-core-thread topology), the VMkernel seeks to migrate vCPUs from one logical processor to another to keep the load balanced. The VMkernel does its best to schedule VMs with multiple vCPUs on two different cores, rather than on two logical processors on the same core. But, if necessary, the VMkernel can map two vCPUs from the same VM to threads on the same core. If a logical processor has no work, it is put into a halted state. This action frees its execution resources, and the VM running on the other logical processor on the same core can use the full execution resources of the core. Because the VMkernel scheduler accounts for this halt time, a VM running with the full resources of a core is charged more than a VM running on a half core. This approach to processor management ensures that the server does not violate the ESXi resource allocation rules.
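Host CPU topology and hyperthreading status can be read from each host's hardware and configuration objects. The sketch below (same assumed connection and helper, hypothetical host name) reports sockets, physical cores, and logical threads, and whether hyperthreading is available and active; the property names are taken from the vSphere API reference, so verify them against your environment.

```python
from pyVmomi import vim

content = si.RetrieveContent()
host = find_by_name(content, vim.HostSystem, "sa-esxi-01.vclass.local")   # placeholder host name

cpu = host.hardware.cpuInfo
print(f"Sockets        : {cpu.numCpuPackages}")
print(f"Physical cores : {cpu.numCpuCores}")
print(f"Logical threads: {cpu.numCpuThreads}")

ht = host.config.hyperThread
print(f"Hyperthreading available: {ht.available}, active: {ht.active}")
```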
8-82 Review of Learner Objectives
Describe CPU and memory concepts in relation to a virtualized environment. Recognize techniques for addressing memory resource overcommitment. Identify additional technologies that improve memory use. Describe how VMware Virtual SMP works. Explain how the VMkernel uses hyperthreading.

8-83 Lesson 7: Resource Controls

8-84 Learner Objectives
Assign share values for CPU and memory resources. Describe how virtual machines compete for resources. Define CPU and memory reservations and limits.

8-85 Reservations, Limits, and Shares
Beyond the CPU and memory configured for a VM, you can apply resource allocation settings to a VM to control the amount of resources granted: a reservation specifies the guaranteed minimum allocation for a VM, a limit specifies an upper bound for CPU or memory that can be allocated to a VM, and a share is a value that specifies the relative priority or importance of a VM's access to a given resource. (Slide shows a scale from 0 MHz/MB up to the available capacity, with the reservation as the guaranteed floor, the limit as the upper bound, and shares used to compete in the range between them.)

Because VMs simultaneously use the resources of an ESXi host, resource contention can occur. To manage resources efficiently, vSphere provides mechanisms to allow less, more, or an equal amount of access to a defined resource. vSphere also prevents a VM from consuming large amounts of a resource. vSphere grants a guaranteed amount of a resource to a VM whose performance is not adequate or that requires a certain amount of a resource to run properly. When host memory or CPU is overcommitted, a VM's allocation target is somewhere between its specified reservation and specified limit, depending on the VM's shares and the system load. vSphere uses a share-based allocation algorithm to achieve efficient resource use for all VMs and to guarantee a given resource to the VMs that need it most.

8-86 Resource Allocation Reservations: RAM
RAM reservations: Memory reserved to a VM is guaranteed never to swap or balloon. If an ESXi host does not have enough unreserved RAM to support a VM with a reservation, the VM does not power on. Reservations are measured in MB, GB, or TB. The default is 0 MB. (Slide shows the Task Console with a failed power-on task, indicating that the host does not have sufficient memory resources to satisfy the VM's reservation.)
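These per-VM controls are exposed in the VM configuration as CPU and memory allocation settings. The sketch below (assumed connection and helper, hypothetical VM name and values) sets a memory reservation and limit and gives the VM a custom CPU share value; the numbers illustrate the shape of the API rather than recommended settings.

```python
from pyVim.task import WaitForTask
from pyVmomi import vim

content = si.RetrieveContent()
vm = find_by_name(content, vim.VirtualMachine, "Linux-02")   # placeholder VM name

spec = vim.vm.ConfigSpec()

# Memory: guarantee 1024 MB (never ballooned or swapped) and cap allocation at 4096 MB.
spec.memoryAllocation = vim.ResourceAllocationInfo(reservation=1024, limit=4096)

# CPU: a custom share value so this VM receives proportionally more CPU time
# than its siblings when the host is under contention.
spec.cpuAllocation = vim.ResourceAllocationInfo(
    shares=vim.SharesInfo(level="custom", shares=2000))

WaitForTask(vm.ReconfigVM_Task(spec))
```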

Use Quizgecko on...
Browser
Browser