Hypervisor competition is really starting to heat up. VMware just released vSphere 5.1, and Microsoft has recently released Windows Server 2012 and the new version of Hyper-V. A significant new feature available now in Hyper-V / Windows Server 2012 is a new disk format, VHDX, which has a maximum size of 64TB. With the new filesystem in Windows Server 2012 (ReFS), the maximum volume size increases to 256TB (NTFS was limited to 16TB at a 4K cluster size).

As of vSphere 5, VMware supports 64TB VMFS5 datastores and 64TB Physical Mode (Pass-through) Raw Device Mappings (RDMs), but the largest single VMDK file supported on a VMFS5 volume is still 2TB minus 512 bytes (hereafter referred to as 2TB). The same 2TB limit applies to virtual mode RDMs. If larger than 2TB volumes are required for a VM, that is easily accommodated with in-guest volume managers and device concatenation of multiple 2TB disks, or by using an alternative to VMFS. However, realistically this approach can only go so far.
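As a quick illustration of what the 2TB minus 512 bytes cap means for that concatenation approach, here is a minimal Python sketch (my own illustration, not anything from VMware's tooling) that works out how many virtual disks a given guest volume would have to be carved into before an in-guest volume manager stitches them together. The 10TB target and the helper name plan_vmdk_layout are assumptions made up for the example.

```python
# Illustrative sketch: how many <=2TB VMDKs does a guest volume need before
# an in-guest volume manager concatenates them into one large volume?

TB = 1024 ** 4                     # 1TB in bytes (binary)
MAX_VMDK = 2 * TB - 512            # largest single VMDK on a VMFS5 datastore


def plan_vmdk_layout(required_bytes: int) -> list[int]:
    """Split a required capacity into the fewest, evenly sized VMDKs under the cap."""
    if required_bytes <= 0:
        raise ValueError("required_bytes must be positive")
    count = (required_bytes + MAX_VMDK - 1) // MAX_VMDK    # ceiling division: minimum disks
    per_disk = (required_bytes + count - 1) // count       # even split, never exceeds MAX_VMDK
    return [per_disk] * count


if __name__ == "__main__":
    layout = plan_vmdk_layout(10 * TB)                     # hypothetical 10TB guest volume
    print(f"VMDK cap       : {MAX_VMDK:,} bytes (2TB - 512B)")
    print(f"disks required : {len(layout)}")
    print(f"size per disk  : {layout[0] / TB:.3f} TB")
```

Because each VMDK falls 512 bytes short of a full 2TB, a nominal 10TB volume comes out needing six virtual disks rather than five; a trivial shortfall, but the kind of detail worth checking when sizing guest volumes right at the limit.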
Recently I have been having an interesting debate with some of my VCDX peers on the merits and reasons for having larger than 2TB virtual disk support in vSphere. In this debate I've been suggesting that, for now, "most" applications can be supported within the 2TB virtual disk limit. So how do vSphere 5 and 5.1 compare, and what are the key considerations and gotchas? What are the implications for business critical applications? Read on to find out.

Before we get started I'd like to say this article isn't going to cover the performance of large volumes, but rather the argument for supporting larger than 2TB individual virtual disks and large volumes. There are many considerations around performance, and I will cover some of the implications when you start to scale up volume size, but for specific performance design considerations I'd recommend you read my article titled Storage Sizing Considerations when Virtualizing Business Critical Applications.

The Case for Larger than 2TB Virtual Disks