Tuesday, September 27, 2011

Raw Device Mapping

ESX Server Configuration Guide page 135.
RDM is a mapping file that acts as a proxy for a raw physical storage device and resides as a file in a separate VMFS volume. An RDM allows the virtual machine to directly access and use the storage device. The RDM file contains metadata for managing and redirecting disk access to the physical device. (Fibre Channel and iSCSI only)

RDM provides some of the advantages of direct access to a physical device while keeping some advantages of a virtual disk in VMFS. As a result, it merges VMFS manageability with raw device access.

The terms mapping a raw device into a datastore, mapping a system LUN, and mapping a disk file to a physical disk volume are all used to refer to RDMs.

Situations in which to consider using raw LUNs with RDMs:
  • When SAN snapshot or other layered applications are run in the virtual machine. The RDM better enables scalable backup offloading systems by using features inherent to the SAN.
  • In any MSCS clustering scenario that spans physical hosts - virtual-to-virtual clusters as well as physical-to-virtual clusters. In this case, cluster data and quorum disks should be configured as RDMs rather than as files on a shared VMFS.
Think of an RDM as a symbolic link from a VMFS volume to a raw LUN. The mapping makes LUNs appear as files in a VMFS volume. The RDM, not the raw LUN, is referenced in the virtual machine configuration. The RDM contains a reference to the raw LUN.
Using RDMs, you can:
  • Use vMotion to migrate virtual machines using raw LUNs.
  • Add raw LUNs to virtual machines using the vSphere Client.
  • Use file system features such as distributed file locking, permissions, and naming.
If you cannot see LUNs provisioned to the host, you might need to set the advanced vCenter Server option config.vpxd.filter.rdmFilter to False.
Refer to the KB at http://kb.vmware.com/kb/1010513
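For reference, the option is a key/value pair in vCenter Server's Advanced Settings (the exact navigation path varies by vSphere Client version):

```
config.vpxd.filter.rdmFilter = false
```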
Two compatibility modes are available for RDMs:
  • Virtual compatibility mode: Allows an RDM to act exactly like a virtual disk file, including the use of snapshots.
  • Physical compatibility mode: Allows direct access to the SCSI device for applications that need lower-level control.
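As a command-line sketch, the two modes correspond to different vmkfstools flags when creating the mapping file on an ESX host (the device NAA ID and file names below are placeholders, not real values):

```
# Virtual compatibility mode (-r): the RDM behaves like a virtual disk; snapshots work.
vmkfstools -r /vmfs/devices/disks/naa.600508b4000156d70001200000b10000 myvm-rdm.vmdk

# Physical compatibility mode (-z): pass-through SCSI access; no VMware snapshots.
vmkfstools -z /vmfs/devices/disks/naa.600508b4000156d70001200000b10000 myvm-rdmp.vmdk
```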
Benefits of Raw Device Mapping
  • User-Friendly Persistent Names
  • Dynamic Name Resolution
  • Distributed File Locking
  • File Permissions
  • File System Operations
  • Snapshots
  • vMotion
  • SAN Management Agents
  • N-Port ID Virtualization(NPIV)
Limitations of Raw Device Mapping
  • Not available for block devices or certain RAID devices: RDM uses a SCSI serial number to identify the mapped device. Because block devices and some direct-attach RAID devices do not export serial numbers, they cannot be used with RDMs.
  • Available with VMFS-2 and VMFS-3 volumes only.
  • No snapshots in physical compatibility mode.
  • No partition mapping: RDM requires the mapped device to be a whole LUN.

NPIV - N-Port ID Virtualization

Fibre Channel SAN configuration guide page 53.

NPIV is an ANSI T11 standard that describes how a single Fibre Channel HBA port can register with the fabric using several worldwide port names (WWPNs). This allows a fabric-attached N_Port to claim multiple fabric addresses. Each address appears as a unique entity on the Fibre Channel fabric.

NPIV enables a single FC HBA port to register several unique WWNs with the fabric, each of which can be assigned to an individual virtual machine.

SAN objects, such as switches, HBA, storage devices, or virtual machines can be assigned World Wide Name (WWN) identifiers. WWNs uniquely identify such objects in the Fibre Channel fabric.

When virtual machines have WWN assignments:


  • They use them for all RDM traffic, so the LUNs pointed to by any of the RDMs on the virtual machine must not be masked against its WWNs.
  • The virtual machine's configuration file (.vmx) is updated to include a WWN pair (consisting of a World Wide Port Name, WWPN, and a World Wide Node Name, WWNN).
  • As the virtual machine is powered on, the VMkernel instantiates a virtual port (VPORT) on the physical HBA, which is used to access the LUN. The VPORT is a virtual HBA that appears to the FC fabric as a physical HBA with its own unique identifier: the WWN pair that was assigned to the virtual machine. Each VPORT is specific to the virtual machine.
  • When the virtual machine is powered off, the VPORT is destroyed on the host and no longer appears to the FC fabric.
  • When a virtual machine is migrated from one ESX/ESXi host to another, the VPORT is closed on the first host and opened on the destination host.
  • The number of VPORTs that are instantiated equals the number of physical HBAs present on the host. A VPORT is created on each physical HBA on which a physical path is found. Each physical path is used to determine the virtual path that will be used to access the LUN.
  • HBAs that are not NPIV-aware are skipped in the discovery process because VPORTs cannot be instantiated on them.
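The VPORT instantiation rules above can be sketched as a toy model (purely illustrative, not a VMware API; the HBA names and capability flags are made up):

```python
# Toy model of VPORT instantiation: one VPORT per NPIV-aware physical HBA,
# each carrying the WWN pair assigned to the virtual machine.
def instantiate_vports(host_hbas, vm_wwn_pair):
    return [
        {"hba": hba["name"], "wwn": vm_wwn_pair}
        for hba in host_hbas
        if hba["npiv_capable"]  # non-NPIV-aware HBAs are skipped
    ]

hbas = [
    {"name": "vmhba1", "npiv_capable": True},
    {"name": "vmhba2", "npiv_capable": True},
    {"name": "vmhba3", "npiv_capable": False},  # skipped in discovery
]
vports = instantiate_vports(hbas, ("WWNN-1", "WWPN-1"))
print(len(vports))  # 2: one VPORT per NPIV-aware HBA
```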
When virtual machines do not have WWN assignments

  • When virtual machines do not have WWN assignments (no NPIV), they access storage LUNs with the WWNs of their host's physical HBAs.
By using NPIV, a SAN administrator can monitor and route storage access on a per-virtual-machine basis.

Requirements for using NPIV.


  1. NPIV can be used only for virtual machines with RDM disks. Virtual machines with regular virtual disks use the WWNs of the host's physical HBAs.
  2. HBAs on your ESX/ESXi host must support NPIV.
  3. Use HBAs of the same type, either all QLogic or all Emulex. VMware does not support heterogeneous HBAs on the same host accessing the same LUNs.
  4. If a host uses multiple physical HBAs as paths to the storage, zone all physical paths to the virtual machine. This is required to support multipathing even though only one path at a time will be active.
  5. Make sure that physical HBAs on the host have access to all LUNs that are to be accessed by NPIV-enabled virtual machines running on that host.
  6. The switches in the fabric must be NPIV-aware.
  7. When configuring a LUN for NPIV access at the storage level, make sure that the NPIV LUN number and NPIV target ID match the physical LUN and target ID.
  8. Use the vSphere Client to manipulate virtual machines with WWNs.
NPIV Capabilities

  • NPIV supports vMotion. The assigned WWNs are retained. When the destination host does not support NPIV, the VMkernel reverts to using a physical HBA to reroute the I/O.
  • If your FC SAN environment supports concurrent I/O on the disks from an active-active array, concurrent I/O to two different NPIV ports is also supported.

Limitations

  • NPIV technology is an extension to the FC protocol; it requires an FC switch and does not work with direct-attached FC disks.
  • WWNs assigned to a virtual machine cannot be retained when you clone the virtual machine or make a template of it.
  • NPIV does not support Storage vMotion.
  • Disabling and then re-enabling the NPIV capability on a FC switch while virtual machines are running can cause a FC link to fail and I/O to stop.
Assign WWNs to the virtual machines
You can assign a WWN to a new virtual machine with an RDM disk when you create this virtual machine. You can create from 1 to 16 WWN pairs, which can be mapped to the first 1 to 16 physical HBAs on the host.

Procedure.

  1. Open the New Virtual Machine wizard.
  2. Select Custom, and click Next.
  3. Follow all steps required to create a custom virtual machine.
  4. On the Select a Disk page, select Raw Device Mapping and click Next.
  5. From a list of SAN disks or LUNs, select a raw LUN you want your virtual machine to access directly.
  6. Select a datastore for the RDM mapping file. The datastore for the RDM mapping file can be the same as, or different from, the datastore where the virtual machine files reside. For vMotion of an NPIV-enabled virtual machine, the RDM file must reside on the same datastore as the virtual machine configuration file.
  7. Follow the steps required to create a virtual machine with the RDM.
  8. On the Ready to Complete page, select the Edit the virtual machine settings before completion check box and click Continue. The Virtual Machine Properties box opens.
  9. Assign WWNs to the virtual machine.

  • Click the Options tab, and select Fibre Channel NPIV.
  • Select Generate new WWNs.
  • Specify the number of WWNNs and WWPNs. A minimum of 2 WWPNs is needed to support failover with NPIV. Typically only 1 WWNN is created for each virtual machine.
10. Click Finish.
Register the newly generated WWNs in the fabric so that the virtual machine is able to log in to the switch, and assign storage LUNs to the WWNs.
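As a toy illustration of what Generate new WWNs produces (this is not VMware's actual generator; real WWNs are 64-bit identifiers with vendor-assigned prefixes):

```python
import random

def generate_wwn_pairs(num_wwpns):
    """Toy generator: one node WWN (WWNN) plus 1-16 port WWNs (WWPNs)."""
    assert 1 <= num_wwpns <= 16  # pairs map to the first 1-16 physical HBAs
    def wwn():
        # Render a 64-bit WWN as eight colon-separated hex bytes.
        return ":".join(f"{random.randrange(256):02x}" for _ in range(8))
    return wwn(), [wwn() for _ in range(num_wwpns)]

# Two WWPNs: the minimum needed to support failover with NPIV.
wwnn, wwpns = generate_wwn_pairs(2)
print(len(wwpns))  # 2
```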
Modify WWN assignments.
Typically, you do not need to change existing WWN assignments on your virtual machine. In certain circumstances, for example, when manually assigned WWNs are causing conflicts on the SAN, you might need to change or remove WWNs.
Prerequisites.

  • Make sure to power off the virtual machine if you want to edit the existing WWNs.
  • Ensure the SAN administrator has provisioned the storage LUN ACL to allow the virtual machine's ESX/ESXi host to access it.
Procedure

  1. Open the Virtual Machine Properties dialog box by clicking the Edit Settings link for the selected virtual machine.
  2. Click the Options tab and select Fibre Channel NPIV.
  3. Edit the WWN assignment by selecting one of the following options.

  • Temporarily disable NPIV for this virtual machine
  • Leave unchanged
  • Generate new WWNs
  • Remove WWN assignment
4. Click OK to save changes.

Monday, September 26, 2011

VMDirectPath vs. Paravirtual SCSI

Below posting was referenced.
http://professionalvmware.com/2009/08/vmdirectpath-paravirtual-scsi-vsphere-vm-options-and-you/

Paravirtual SCSI is a feature supported by the guest OS, so it requires a guest OS modified to support the paravirtual SCSI adapter. See http://kb.vmware.com/kb/1010398 for supplementary information on Paravirtual SCSI.

VMDirectPath can connect up to 2 devices (for ESX 4.0) or 4 devices (from ESX 4.1). Refer to http://kb.vmware.com/kb/1010789. This feature requires CPUs that support directed I/O, such as Intel's VT-d technology or AMD's AMD-Vi, originally called IOMMU (input/output memory management unit). Refer to http://www.intel.com/technology/itj/2006/v10i3/2-io/7-conclusion.htm for more on VT-d.

Thursday, September 22, 2011

VMware Communities: ESXi 4.1 HA configure error!!!


I tried to configure an HA cluster with ESX 4.1 and ESXi 4.1 together, and there was a problem when adding ESXi 4.1 to the existing cluster.
The above VMware community post was really helpful.
In conclusion, it was a RAM size issue with ESXi 4.1. When it is managed by vCenter Server, it needs to have more than 2GB, say 3GB.

Friday, September 16, 2011

VMware virtual lab environment.....

The VMware virtual lab environment has been set up. While installing ESX 4.0, KB article id 161 was referenced to make the network work properly.
With a Thinkpad E420 running VMware Workstation 7.1.4 on Fedora 15 (Intel i3 CPU & 8GB memory), 3 VMs have been created:
1 VM for Windows 2008 Server acting as domain controller and DNS server. It will be used as an NFS server for shared storage when testing the HA cluster.
1 VM for vCenter Server.
1 VM for ESX 4.0.
Another VM for ESXi 4.0 will be created, and an HA cluster will be configured.
From the base configuration with vSphere 4.0, I will upgrade step by step to vSphere 5.0 and test various functionality.

Thursday, September 15, 2011

Photo on a newspaper, etnews.com

I was on an Internet newspaper, etnews.com, when an article on the Nextro system opening on 14 Sep. 2011 at Suhyup Bank was posted.
The photo looks like it was taken the day before the opening; in fact it was a few days before. And I was in it.

Tuesday, September 6, 2011

OSX Lion on VMware Workstation VM

VMware Workstation 7.1.4 has been installed on my Fedora 15 laptop, and I am trying to install OS X Lion because I failed to upgrade a Snow Leopard VM to Lion. Several web postings have been referenced so far, but it is not yet complete. I just followed the instructions, but it's a little bit tricky.
I'll try again tonight, hoping for a good result.......

...At last, it works!!
Too bad the sound card isn't functioning.