Friday, August 20, 2010

Increase the default maximum number of NFS mounts on ESX/ESXi host

The default configuration allows only eight NFS mounts per ESX/ESXi host.



To change the default value for maximum NFS mounts on an ESX host:

  • Connect a vSphere Client to a vCenter Server or a host.

  • Select the host from the inventory panel and click Advanced Settings on the Configuration tab.

  • In the Advanced Settings dialog box, select NFS and set NFS.MaxVolumes to 32 (64 for versions 4.x).

  • Select Net and set Net.TcpipHeapSize to 30 (32 for versions 4.x).

  • Select Net and set Net.TcpipHeapMax to 120 (128 for versions 4.x).

  • Reboot the ESX host.

Note: These settings enable up to 32 (64 for versions 4.x) mounts on the ESX host.
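
The same change can also be made from the command line. A minimal sketch using esxcfg-advcfg from the ESX service console (vicfg-advcfg in the vCLI accepts similar options), shown here with the 4.x values; a reboot is still required afterwards:

    # Check the current limit
    esxcfg-advcfg -g /NFS/MaxVolumes

    # Raise the limits (use 32/30/120 instead on pre-4.x versions, as above)
    esxcfg-advcfg -s 64 /NFS/MaxVolumes
    esxcfg-advcfg -s 32 /Net/TcpipHeapSize
    esxcfg-advcfg -s 128 /Net/TcpipHeapMax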

Thursday, August 19, 2010

OS customization

For guest OS customization to work when deploying from a template, the vCenter Server must have the sysprep files for that guest operating system so it can customize its settings.

If they are missing, you will get the following error when deploying from a template:
Warning: Windows customization resources were not found on the server

Use the download links below to get the sysprep files from Microsoft and extract the contents to the correct folder within
C:\Documents and Settings\All Users\Application Data\VMware\VMware VirtualCenter\sysprep
(a sample extraction command follows the list).

Sysprep 1.1 Download
C:\Documents and Settings\All Users\Application Data\VMware\VMware VirtualCenter\sysprep\1.1
Copy the contents of the "tools" folder to "1.1".

Windows 2000 SP4 Download
C:\Documents and Settings\All Users\Application Data\VMware\VMware VirtualCenter\sysprep\2k
Copy the contents of i386\deploy.cab to "2k".

Windows 2003 SP2 Download
C:\Documents and Settings\All Users\Application Data\VMware\VMware VirtualCenter\sysprep\svr2003
Copy the contents of system32\deploy.cab to "svr2003".

Windows 2003 SP2 x64 Download
C:\Documents and Settings\All Users\Application Data\VMware\VMware VirtualCenter\sysprep\svr2003-64
Copy the contents of SP2QFE\deploy.cab to "svr2003-64".

Windows XP SP3 Download
C:\Documents and Settings\All Users\Application Data\VMware\VMware VirtualCenter\sysprep\xp
Copy the contents of the CAB file to "xp".

Windows XP x64 SP2 Download
C:\Documents and Settings\All Users\Application Data\VMware\VMware VirtualCenter\sysprep\xp-64
Copy the contents of SP2QFE\deploy.cab to "xp-64".
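
As an illustration, the contents of a deploy.cab can be extracted at a command prompt with Windows' built-in expand utility. A minimal sketch for the Windows 2003 SP2 case, assuming the deploy.cab has already been pulled out of the Microsoft download to C:\Downloads (a placeholder path):

    rem Extract every file in deploy.cab into the svr2003 sysprep folder
    cd /d "C:\Documents and Settings\All Users\Application Data\VMware\VMware VirtualCenter\sysprep\svr2003"
    expand -F:* "C:\Downloads\deploy.cab" .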


Note: Windows Server 2008, Windows Server 2008 R2, Windows Vista, and Windows 7 use a newer imaging method that does not require sysprep files on the vCenter Server.

Wednesday, August 11, 2010

VMware vSphere 4.1 released

What's new in vSphere 4.1:
Scripted Install for ESXi. Scripted installation of ESXi to local and remote disks allows rapid deployment of ESXi to many machines. You can start the scripted installation with a CD-ROM drive or over the network by using PXE booting.
vSphere Client Removal from ESX/ESXi Builds. For ESX and ESXi, the vSphere Client is available for download from the VMware Web site. It is no longer packaged with builds of ESX and ESXi.
Boot from SAN. vSphere 4.1 enables ESXi boot from SAN. iSCSI, FCoE, and Fibre Channel boot are supported.
Hardware Acceleration with vStorage APIs for Array Integration (VAAI). ESX can offload specific storage operations to compliant storage hardware. With storage hardware assistance, ESX performs these operations faster and consumes less CPU, memory, and storage fabric bandwidth.
Storage Performance Statistics. vSphere 4.1 offers enhanced visibility into storage throughput and latency of hosts and virtual machines, and aids in troubleshooting storage performance issues. NFS statistics are now available in vCenter Server performance charts, as well as esxtop. New VMDK and datastore statistics are included. All statistics are available through the vSphere SDK.
Storage I/O Control. This feature provides quality-of-service capabilities for storage I/O in the form of I/O shares and limits that are enforced across all virtual machines accessing a datastore, regardless of which host they are running on. Using Storage I/O Control, vSphere administrators can ensure that the most important virtual machines get adequate I/O resources even in times of congestion.
iSCSI Hardware Offloads. vSphere 4.1 enables 10Gb iSCSI hardware offloads (Broadcom 57711) and 1Gb iSCSI hardware offloads (Broadcom 5709).
Network I/O Control. Traffic-management controls allow flexible partitioning of physical NIC bandwidth between different traffic types, including virtual machine, vMotion, FT, and IP storage traffic (vNetwork Distributed Switch only).
IPv6 Enhancements. IPv6 in ESX supports Internet Protocol Security (IPsec) with manual keying.
Load-Based Teaming. vSphere 4.1 allows dynamic adjustment of the teaming algorithm so that the load is always balanced across a team of physical adapters on a vNetwork Distributed Switch.
E1000 vNIC Enhancements. E1000 vNIC supports jumbo frames in vSphere 4.1.
Windows Failover Clustering with VMware HA. Clustered Virtual Machines that utilize Windows Failover Clustering/Microsoft Cluster Service are now fully supported in conjunction with VMware HA.
VMware HA Scalability Improvements. VMware HA has the same limits for virtual machines per host, hosts per cluster, and virtual machines per cluster as vSphere.
VMware HA Healthcheck and Operational Status. The VMware HA dashboard in the vSphere Client provides a new detailed window called Cluster Operational Status. This window displays more information about the current VMware HA operational status, including the specific status and errors for each host in the VMware HA cluster.
VMware Fault Tolerance (FT) Enhancements. vSphere 4.1 introduces an FT-specific versioning-control mechanism that allows the Primary and Secondary VMs to run on FT-compatible hosts at different but compatible patch levels. vSphere 4.1 differentiates between events that are logged for a Primary VM and those that are logged for its Secondary VM, and reports why a host might not support FT. In addition, you can disable VMware HA when FT-enabled virtual machines are deployed in a cluster, allowing for cluster maintenance operations without turning off FT.
DRS Interoperability for VMware HA and Fault Tolerance (FT). FT-enabled virtual machines can take advantage of DRS functionality for load balancing and initial placement. In addition, VMware HA and DRS are tightly integrated, which allows VMware HA to restart virtual machines in more situations.
Enhanced Network Logging Performance. Fault Tolerance (FT) network logging performance allows improved throughput and reduced CPU usage. In addition, you can use vmxnet3 vNICs in FT-enabled virtual machines.
Concurrent VMware Data Recovery Sessions. vSphere 4.1 provides the ability to concurrently manage multiple VMware Data Recovery appliances.
vStorage APIs for Data Protection (VADP) Enhancements. VADP now offers VSS quiescing support for Windows Server 2008 and Windows Server 2008 R2 servers. This enables application-consistent backup and restore operations for Windows Server 2008 and Windows Server 2008 R2 applications.
vCLI Enhancements. vCLI adds options for SCSI, VAAI, network, and virtual machine control, including the ability to terminate an unresponsive virtual machine. In addition, vSphere 4.1 provides controls that allow you to log vCLI activity.
Lockdown Mode Enhancements. VMware ESXi 4.1 lockdown mode allows the administrator to tightly restrict access to the ESXi Direct Console User Interface (DCUI) and Tech Support Mode (TSM). When lockdown mode is enabled, DCUI access is restricted to the root user, while access to Tech Support Mode is completely disabled for all users. With lockdown mode enabled, access to the host for management or monitoring using CIM is possible only through vCenter Server. Direct access to the host using the vSphere Client is not permitted.
Access Virtual Machine Serial Ports Over the Network. You can redirect virtual machine serial ports over a standard network link in vSphere 4.1. This enables solutions such as third-party virtual serial port concentrators for virtual machine serial console management or monitoring.
vCenter Converter Hyper-V Import. vCenter Converter allows users to point to a Hyper-V machine. Converter displays the virtual machines running on the Hyper-V system, and users can select a powered-off virtual machine to import to a VMware destination.
Enhancements to Host Profiles. You can use Host Profiles to roll out administrator password changes in vSphere 4.1. Enhancements also include improved Cisco Nexus 1000V support and PCI device ordering configuration.
Unattended Authentication in vSphere Management Assistant (vMA). vMA 4.1 offers improved authentication capability, including integration with Active Directory and commands to configure the connection.
Updated Deployment Environment in vSphere Management Assistant (vMA). The updated deployment environment in vMA 4.1 is fully compatible with vMA 4.0. A significant change is the transition from RHEL to CentOS.
vCenter Orchestrator 64-bit Support. vCenter Orchestrator 4.1 provides a client and server for 64-bit installations, with an optional 32-bit client. The performance of the Orchestrator server on 64-bit installations is greatly enhanced, as compared to running the server on a 32-bit machine.
Improved Support for Handling Recalled Patches in vCenter Update Manager. Update Manager 4.1 immediately sends critical notifications about recalled ESX and related patches. In addition, Update Manager prevents you from installing a recalled patch that you might have already downloaded. This feature also helps you identify hosts where recalled patches might already be installed.
License Reporting Manager. The License Reporting Manager provides a centralized interface for all license keys for vSphere 4.1 products in a virtual IT infrastructure and their respective usage. You can view and generate reports on license keys and usage for different time periods with the License Reporting Manager. A historical record of the utilization per license key is maintained in the vCenter Server database.
Power Management Improvements. ESX 4.1 takes advantage of deep sleep states to further reduce power consumption during idle periods. The vSphere Client has a simple user interface that allows you to choose one of four host power management policies. In addition, you can view the history of host power consumption and power cap information on the vSphere Client Performance tab on newer platforms with integrated power meters.
Reduced Overhead Memory. vSphere 4.1 reduces the amount of overhead memory required, especially when running large virtual machines on systems with CPUs that provide hardware MMU support (AMD RVI or Intel EPT).
DRS Virtual Machine Host Affinity Rules. DRS provides the ability to set constraints that restrict placement of a virtual machine to a subset of hosts in a cluster. This feature is useful for enforcing host-based ISV licensing models, as well as keeping sets of virtual machines on different racks or blade systems for availability reasons.
Memory Compression. Compressed memory is a new level of the memory hierarchy, between RAM and disk. Slower than memory, but much faster than disk, compressed memory improves the performance of virtual machines when memory is under contention, because less virtual memory is swapped to disk.
vMotion Enhancements. In vSphere 4.1, vMotion enhancements significantly reduce the overall time for host evacuations, with support for more simultaneous virtual machine migrations and faster individual virtual machine migrations. The result is a performance improvement of up to 8x for an individual virtual machine migration, and support for four to eight simultaneous vMotion migrations per host, depending on the vMotion network adapter (1GbE or 10GbE respectively).
ESX/ESXi Active Directory Integration. Integration with Microsoft Active Directory allows seamless user authentication for ESX/ESXi. You can maintain users and groups in Active Directory for centralized user management and you can assign privileges to users or groups on ESX/ESXi hosts. In vSphere 4.1, integration with Active Directory allows you to roll out permission rules to hosts by using Host Profiles.
Configuring USB Device Passthrough from an ESX/ESXi Host to a Virtual Machine. You can configure a virtual machine to use USB devices that are connected to an ESX/ESXi host where the virtual machine is running. The connection is maintained even if you migrate the virtual machine using vMotion.
Improvements in Enhanced vMotion Compatibility. vSphere 4.1 includes an AMD Opteron Gen. 3 (no 3DNow!) EVC mode that prepares clusters for vMotion compatibility with future AMD processors. EVC also provides numerous usability improvements, including the display of EVC modes for virtual machines, more timely error detection, better error messages, and the reduced need to restart virtual machines.
vCenter Update Manager Support for Provisioning, Patching, and Upgrading EMC’s ESX PowerPath Module. vCenter Update Manager can provision, patch, and upgrade third-party modules that you can install on ESX, such as EMC’s PowerPath multipathing software. Using the capability of Update Manager to set policies using the Baseline construct and the comprehensive Compliance Dashboard, you can simplify provisioning, patching, and upgrade of the PowerPath module at scale.
User-configurable Number of Virtual CPUs per Virtual Socket. You can configure virtual machines to have multiple virtual CPUs reside in a single virtual socket, with each virtual CPU appearing to the guest operating system as a single core. Previously, virtual machines were restricted to having only one virtual CPU per virtual socket. (A .vmx sketch of this setting follows this list.)
Expanded List of Supported Processors. The list of supported processors has been expanded for ESX 4.1. Among the supported processors is the Intel Xeon 7500 Series processor, code-named Nehalem-EX (up to 8 sockets).
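
As an illustration of the virtual-CPUs-per-socket feature above, the setting surfaces as an advanced configuration parameter in the virtual machine's .vmx file. A minimal sketch, assuming a VM with four vCPUs that should appear to the guest as two dual-core sockets (the values are illustrative):

    numvcpus = "4"
    cpuid.coresPerSocket = "2"

With these two values the guest operating system detects two sockets with two cores each, rather than four single-core sockets.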

Wednesday, April 7, 2010

EMC Celerra/VMware TechBook

EMC has published a new version (4.0) of the VMware/Celerra TechBook, which helps with configuring EMC Celerra storage for vSphere.
The TechBook is available at the following link:
http://www.emc.com/collateral/hardware/technical-documentation/h5536-vmware-esx-srvr-using-celerra-stor-sys-wp.pdf

New release of vSphere (vSphere 4.1)

Most of the attendees were focusing on VMware Go and vCloud Express, but in the press section I noticed that the version number on vCenter Server was 4.1.0.
The updates in this new version of vSphere are mostly aimed at cloud computing. VMware made several performance and usability improvements in its vNetwork Distributed Switch (vDS) architecture: configuration changes to a vDS instance on a heavily loaded ESX/ESXi host are faster, and so is adding or removing an ESX/ESXi host to or from a vDS instance. VMware also modernized vSphere, adding support for Intel's Xeon 3400 series processor. Update 1 fixes a number of bugs found in 4.0 across a dozen different areas; some of these fixes address VMs that would fail when hardware acceleration was fully enabled, along with several changes to Cisco Discovery Protocol (CDP) support affecting vSphere in some Cisco environments.

Friday, March 5, 2010

How to configure a disk to use a PVSCSI adapter / real-world experience

I read on some blogs that, with the vSphere release, VMware showed some amazing stats regarding the increased level of I/O that can be attained in a virtual infrastructure. They explained that the test was conducted on EMC Enterprise Flash Drives, which have incredibly high throughput. However, I have done various tests with my own setup and have not been able to reproduce any throughput improvement. What I did measure was a 12% reduction in CPU cycles, which is the only advantage I have seen after migrating to the PVSCSI adapter.

To configure a disk to use a PVSCSI adapter for Windows VMs:
1. Launch a vSphere Client and log in to an ESX host.
2. Select a virtual machine, or create a new one.
3. Ensure a guest operating system that supports PVSCSI is installed on the virtual machine.
Note: Booting a Linux guest from a disk attached to a PVSCSI adapter is not supported. Booting a Microsoft Windows guest from a disk attached to a PVSCSI adapter is not supported in versions of ESX prior to ESX 4.0 Update 1. In these situations, the system software must be installed on a disk attached to an adapter that supports booting.
4. In the vSphere Client, right-click on the virtual machine and click Edit Settings.
5. Click the Hardware tab.
6. Click Add.
7. Select Hard Disk.
8. Click Next.
9. Choose any one of the available options.
10. Click Next.
11. Specify the options you require. Options vary depending on which type of disk you chose.
12. Choose a Virtual Device Node between SCSI (1:0) and SCSI (3:15) and specify whether you want to use Independent mode.
13. Click Next.
14. Click Finish to finish the process and exit the Add Hardware wizard. A new disk and controller are created.
15. Select the newly created controller and click Change Type.
16. Click VMware Paravirtual and click OK (a .vmx view of this change is sketched after these steps).
17. Click OK to exit the Virtual Machine Properties dialog.
18. Power on the virtual machine.
19. Install VMware Tools. VMware Tools includes the PVSCSI driver.
20. Scan and format the hard disk.
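
For reference, after steps 15 and 16 the new controller shows up in the virtual machine's .vmx file roughly as follows (the device numbers and disk file name are illustrative placeholders):

    scsi1.present = "TRUE"
    scsi1.virtualDev = "pvscsi"
    scsi1:0.present = "TRUE"
    scsi1:0.fileName = "example-vm_1.vmdk"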

Procedure to install a Linux VM with PVSCSI

1. Install the guest with the emulated LSI SCSI controller.
2. Install online updates for your Linux distribution.
3. Install VMware Tools (uninstall open-vm-tools and the kernel modules that ship with your distribution before installing the new Tools).
4. Add pvscsi to the INITRD_MODULES line in /etc/sysconfig/kernel.
5. Invoke mkinitrd as root (see the sketch below).
6. Shut down the virtual machine.
7. Change the SCSI controller type to "paravirtual" (via the vSphere Client).
8. Power on the Linux virtual machine.
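
A minimal sketch of steps 4 and 5 on a SUSE-style distribution (the file path and the pre-existing module names are distribution-specific placeholders; pvscsi is the relevant addition):

    # /etc/sysconfig/kernel -- append pvscsi to the initrd module list
    INITRD_MODULES="ata_piix mptspi pvscsi"

    # Then, as root, rebuild the initrd so the pvscsi driver is available at boot
    mkinitrd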

Virtualization drawbacks in performance

Degraded performance
Virtualization requires extra hardware resources, which by itself tends to degrade performance. A lot of planning is needed to reduce the performance impact of resource sharing.
Moreover, the added complexity of root-cause analysis when performance degrades is another drawback of virtualization.