Thursday, September 29, 2011

Masking a LUN from ESX and ESXi using MASK_PATH plug-in

Refer to http://kb.vmware.com/kb/1009449.

1. See what plug-ins are currently installed.
[root@esx1 ~]# esxcfg-mpath -G
MASK_PATH
NMP

The output indicates that there are, at a minimum, 2 plug-ins: the VMware Native Multipath Plug-in (NMP) and the MASK_PATH plug-in, which is used for masking LUNs. There may be other plug-ins if third-party software (such as EMC PowerPath) is installed.

2. List all the claimrules currently on the ESX
[root@esx1 ~]# esxcli corestorage claimrule list
Rule Class Rule Class Type Plugin Matches
MP 0 runtime transport NMP transport=usb
MP 1 runtime transport NMP transport=sata
MP 2 runtime transport NMP transport=ide
MP 3 runtime transport NMP transport=block
MP 4 runtime transport NMP transport=unknown
MP 101 runtime vendor MASK_PATH vendor=DELL model=Universal Xport
MP 101 file vendor MASK_PATH vendor=DELL model=Universal Xport
MP 65535 runtime vendor NMP vendor=* model=*

There are 2 MASK_PATH entries: one for the runtime class and the other for the file class.
The runtime class shows the rules currently loaded in the PSA; the file class refers to the rules defined in /etc/vmware/esx.conf.

[root@esx1 vmware]# grep "PSA/MP/claimrule" /etc/vmware/esx.conf
/storage/PSA/MP/claimrule[0101]/match/model = "Universal Xport"
/storage/PSA/MP/claimrule[0101]/match/vendor = "DELL"
/storage/PSA/MP/claimrule[0101]/plugin = "MASK_PATH"
/storage/PSA/MP/claimrule[0101]/type = "vendor"

These are identical, but they can differ if you are in the process of modifying /etc/vmware/esx.conf.

3. Add a rule to hide the LUN with the command:
# esxcli corestorage claimrule add --rule <rule_number> -t location -A <hba_adapter> -C <channel> -T <target> -L <LUN> -P MASK_PATH

Find the device identifier (naa.* or t10.*) of the datastore with the command:
[root@esx1 vmware]# esxcfg-scsidevs --vmfs
mpx.vmhba0:C0:T0:L0:5 /dev/sda5 4e7361ce-9cb305f4-f136-000c29f15a51 0 Storage1
t10.F405E46494C45400259395867725D24567F444D253651786:1 /dev/sdd1 4e77768a-5e318913-6036-000c29f15a51 0 iSCSI_Shared

In this document, I will mask the iSCSI_Shared datastore. Check all of the paths that this device has (vmhba33:C0:T0:L0):

[root@esx1 vmware]# esxcfg-mpath -L| grep F405E46494C45400259395867725D24567F444D253651786
vmhba33:C0:T0:L0 state:active t10.F405E46494C45400259395867725D24567F444D253651786 vmhba33 0 0 0 NMP active san iqn.1998-01.com.vmware:esx1-0f770b14 00023d000001,iqn.2006-01.com.openfiler:tsn.b6998f7991b4,t,1
[root@esx1 vmware]#

The iSCSI_Shared datastore has only one path to the device.

Add the rule with the command:
[root@esx1 vmware]# esxcli corestorage claimrule add --rule 192 -t location -A vmhba33 -C 0 -L 0 -P MASK_PATH

The claim rules are evaluated in numerical order, starting from 0.
  • Rules 0–100 are reserved for internal use by VMware.
  • Rules 101–65435 are available for general use. Any third-party multipathing plug-ins installed on your system use claim rules in this range.
  • Rules 65436–65535 are reserved for internal use by VMware.
A quick way to see which rule numbers are already in use is sketched below.
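A minimal sketch of that check, assuming the standard service console tools (awk, sort) are available:
# esxcli corestorage claimrule list | awk '$1 == "MP" {print $2}' | sort -un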


4. Verify that the rule has taken effect with the command:
[root@esx1 vmware]# esxcli corestorage claimrule list
Rule Class Rule Class Type Plugin Matches
MP 0 runtime transport NMP transport=usb
MP 1 runtime transport NMP transport=sata
MP 2 runtime transport NMP transport=ide
MP 3 runtime transport NMP transport=block
MP 4 runtime transport NMP transport=unknown
MP 101 runtime vendor MASK_PATH vendor=DELL model=Universal Xport
MP 101 file vendor MASK_PATH vendor=DELL model=Universal Xport
MP 192 file location MASK_PATH adapter=vmhba33 channel=0 target=* lun=0
MP 65535 runtime vendor NMP vendor=* model=*

Note that rule 192 appears only in the file class at this point.

5. Reload the claim rules with the command:
[root@esx1 vmware]# esxcli corestorage claimrule load

6. Re-examine the claim rules and verify that rule 192 now appears in both the file and runtime classes.
[root@esx1 vmware]# esxcli corestorage claimrule list
Rule Class Rule Class Type Plugin Matches
MP 0 runtime transport NMP transport=usb
MP 1 runtime transport NMP transport=sata
MP 2 runtime transport NMP transport=ide
MP 3 runtime transport NMP transport=block
MP 4 runtime transport NMP transport=unknown
MP 101 runtime vendor MASK_PATH vendor=DELL model=Universal Xport
MP 101 file vendor MASK_PATH vendor=DELL model=Universal Xport
MP 192 runtime location MASK_PATH adapter=vmhba33 channel=0 target=* lun=0
MP 192 file location MASK_PATH adapter=vmhba33 channel=0 target=* lun=0
MP 65535 runtime vendor NMP vendor=* model=*


[root@esx1 vmware]# esxcfg-mpath -L
vmhba32:C0:T0:L0 state:active mpx.vmhba32:C0:T0:L0 vmhba32 0 0 0 NMP active local ide.vmhba32 ide.0:0
vmhba0:C0:T0:L0 state:active mpx.vmhba0:C0:T0:L0 vmhba0 0 0 0 NMP active local pscsi.vmhba0 pscsi.0:0
vmhba33:C0:T0:L0 state:active t10.F405E46494C45400259395867725D24567F444D253651786 vmhba33 0 0 0 NMP active san iqn.1998-01.com.vmware:esx1-0f770b14 00023d000001,iqn.2006-01.com.openfiler:tsn.b6998f7991b4,t,1
vmhba33:C0:T0:L1 state:active t10.F405E46494C45400A645A56644F6D23626A435D207248425 vmhba33 0 0 1 NMP active san iqn.1998-01.com.vmware:esx1-0f770b14 00023d000001,iqn.2006-01.com.openfiler:tsn.b6998f7991b4,t,1

7. Unclaim all paths to the device, then run the loaded claim rules on each of the paths to reclaim them:
[root@esx1 vmware]# esxcli corestorage claiming reclaim -d t10.F405E46494C45400259395867725D24567F444D253651786

8. Verify that the masked device is no longer used by the ESX host.
[root@esx1 vmware]# esxcfg-scsidevs -m
mpx.vmhba0:C0:T0:L0:5 /dev/sda5 4e7361ce-9cb305f4-f136-000c29f15a51 0 Storage1
[2011-09-29 15:43:22 'VmFileSystem' warning] Skipping extent: t10.F405E46494C45400259395867725D24567F444D253651786:1. Not a known device: t10.F405E46494C45400259395867725D24567F444D253651786


[root@esx1 vmware]# esxcfg-mpath -L
vmhba32:C0:T0:L0 state:active mpx.vmhba32:C0:T0:L0 vmhba32 0 0 0 NMP active local ide.vmhba32 ide.0:0
vmhba0:C0:T0:L0 state:active mpx.vmhba0:C0:T0:L0 vmhba0 0 0 0 NMP active local pscsi.vmhba0 pscsi.0:0
vmhba33:C0:T0:L0 state:dead (no device) vmhba33 0 0 0 MASK_PATH dead san iqn.1998-01.com.vmware:esx1-0f770b14 00023d000001,iqn.2006-01.com.openfiler:tsn.b6998f7991b4,t,1
vmhba33:C0:T0:L1 state:active t10.F405E46494C45400A645A56644F6D23626A435D207248425 vmhba33 0 0 1 NMP active san iqn.1998-01.com.vmware:esx1-0f770b14 00023d000001,iqn.2006-01.com.openfiler:tsn.b6998f7991b4,t,1
[root@esx1 vmware]#

Unclaiming the masked LUN (unmasking)

[root@esx1 vmware]# esxcfg-mpath -L
vmhba32:C0:T0:L0 state:active mpx.vmhba32:C0:T0:L0 vmhba32 0 0 0 NMP active local ide.vmhba32 ide.0:0
vmhba0:C0:T0:L0 state:active mpx.vmhba0:C0:T0:L0 vmhba0 0 0 0 NMP active local pscsi.vmhba0 pscsi.0:0
vmhba33:C0:T0:L0 state:dead (no device) vmhba33 0 0 0 MASK_PATH dead san iqn.1998-01.com.vmware:esx1-0f770b14 00023d000001,iqn.2006-01.com.openfiler:tsn.b6998f7991b4,t,1
vmhba33:C0:T0:L1 state:active t10.F405E46494C45400A645A56644F6D23626A435D207248425 vmhba33 0 0 1 NMP active san iqn.1998-01.com.vmware:esx1-0f770b14 00023d000001,iqn.2006-01.com.openfiler:tsn.b6998f7991b4,t,1


[root@esx1 vmware]# esxcli corestorage claimrule delete --rule 192


[root@esx1 vmware]# esxcli corestorage claimrule list
Rule Class Rule Class Type Plugin Matches
MP 0 runtime transport NMP transport=usb
MP 1 runtime transport NMP transport=sata
MP 2 runtime transport NMP transport=ide
MP 3 runtime transport NMP transport=block
MP 4 runtime transport NMP transport=unknown
MP 101 runtime vendor MASK_PATH vendor=DELL model=Universal Xport
MP 101 file vendor MASK_PATH vendor=DELL model=Universal Xport
MP 192 runtime location MASK_PATH adapter=vmhba33 channel=0 target=* lun=0
MP 65535 runtime vendor NMP vendor=* model=*


[root@esx1 vmware]# esxcli corestorage claimrule load


[root@esx1 vmware]# esxcli corestorage claimrule list
Rule Class Rule Class Type Plugin Matches
MP 0 runtime transport NMP transport=usb
MP 1 runtime transport NMP transport=sata
MP 2 runtime transport NMP transport=ide
MP 3 runtime transport NMP transport=block
MP 4 runtime transport NMP transport=unknown
MP 101 runtime vendor MASK_PATH vendor=DELL model=Universal Xport
MP 101 file vendor MASK_PATH vendor=DELL model=Universal Xport
MP 65535 runtime vendor NMP vendor=* model=*


[root@esx1 vmware]# esxcli corestorage claiming unclaim -t location -A vmhba33 -C 0 -T 0 -L 0


[root@esx1 vmware]# esxcfg-mpath -L
vmhba32:C0:T0:L0 state:active mpx.vmhba32:C0:T0:L0 vmhba32 0 0 0 NMP active local ide.vmhba32 ide.0:0
vmhba0:C0:T0:L0 state:active mpx.vmhba0:C0:T0:L0 vmhba0 0 0 0 NMP active local pscsi.vmhba0 pscsi.0:0
vmhba33:C0:T0:L0 state:dead (no device) vmhba33 0 0 0 (unclaimed) dead san iqn.1998-01.com.vmware:esx1-0f770b14 00023d000001,iqn.2006-01.com.openfiler:tsn.b6998f7991b4,t,1
vmhba33:C0:T0:L1 state:active t10.F405E46494C45400A645A56644F6D23626A435D207248425 vmhba33 0 0 1 NMP active san iqn.1998-01.com.vmware:esx1-0f770b14 00023d000001,iqn.2006-01.com.openfiler:tsn.b6998f7991b4,t,1
[root@esx1 vmware]#
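At this point the path is released but left unclaimed. To let NMP reclaim it and bring the datastore back, running the claim rules followed by a rescan of the adapter should be enough; a hedged sketch using the adapter from this lab:
# esxcli corestorage claimrule run
# esxcfg-rescan vmhba33
# esxcfg-scsidevs -m
After the rescan, the iSCSI_Shared datastore should be listed again by the last command.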


Wednesday, September 28, 2011

Managing Duplicate VMFS Datastores - Resignaturing

http://www.vmware.com/pdf/vsphere4/r41/vsp_41_esx_server_config.pdf page 119

When a LUN contains a VMFS datastore copy, you can mount the datastore with the existing signature or assign a new signature. Each VMFS datastore created in a LUN has a unique UUID that is stored in the file system superblock. When the LUN is replicated or snapshotted, the resulting LUN copy is identical, byte-for-byte, with the original LUN. As a result, if the original LUN contains a VMFS datastore with UUID X, the LUN copy appears to contain an identical VMFS datastore, or a VMFS datastore copy, with exactly the same UUID X. ESX can determine whether a LUN contains the VMFS datastore copy, and either mount the datastore copy with its original UUID or change the UUID, thus resignaturing the datastore.

Mounting VMFS Datastores with Existing Signatures
You might not have to resignature a VMFS datastore copy. You can mount a VMFS datastore copy without changing its signature. For example, you can maintain synchronized copies of virtual machines at a secondary site as part of a disaster recovery plan. In the event of a disaster at the primary site, you can mount the datastore copy and power on the virtual machines at the secondary site.

IMPORTANT You can mount a VMFS datastore copy only if it does not collide with the original VMFS datastore that has the same UUID. To mount the copy, the original VMFS datastore has to be offline.

When you mount the VMFS datastore, ESX allows both reads and writes to the datastore residing on the LUN copy. The LUN copy must be writable. The datastore mounts are persistent and valid across system reboots. Because ESX does not allow you to resignature the mounted datastore, unmount the datastore before resignaturing.

Mount a VMFS Datastore with an Existing Signature
If you do not need to resignature a VMFS datastore copy, you can mount it without changing its signature.

Prerequisites
Before you mount a VMFS datastore, perform a storage rescan on your host so that it updates its view of LUNs presented to it.

Procedure


  1. Log in to the vSphere Client and select the server from the inventory panel.
  2. Click the Configuration tab and click Storage in the Hardware panel.
  3. Click Add Storage.
  4. Select the Disk/LUN storage type and click Next.
  5. From the list of LUNs, select the LUN that has a datastore name displayed in the VMFS Label column and click Next. The name present in the VMFS Label column indicates that the LUN is a copy that contains a copy of an existing VMFS datastore.
  6. Under Mount Options, select Keep Existing Signature.
  7. In the Ready to Complete page, review the datastore configuration information and click Finish.
What to do next
If you later want to resignature the mounted datastore, you must unmount it first.
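The same mount can also be done from the service console with esxcfg-volume; a minimal sketch with placeholder arguments (take the UUID or label from the -l output):
# esxcfg-volume -l
# esxcfg-volume -M <VMFS_UUID|label>
The -l option lists volumes detected as snapshots/replicas, and -M mounts the copy persistently with its existing signature (-m would mount it only until the next reboot).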

Resignaturing VMFS Copies
Use datastore resignaturing to retain the data stored on the VMFS datastore copy. When resignaturing a VMFS copy, ESX assigns a new UUID and a new label to the copy, and mounts the copy as a datastore distinct from the original. The default format of the new label assigned to the datastore is snap-snapID-oldLabel, where snapID is an integer and oldLabel is the label of the original datastore. When you perform datastore resignaturing, consider the following points:


  • Datastore resignaturing is irreversible.
  • The LUN copy that contains the VMFS datastore that you resignature is no longer treated as a LUN copy.
  • A spanned datastore can be resignatured only if all its extents are online.
  • The resignaturing process is crash and fault tolerant. If the process is interrupted, you can resume it later.
  • You can mount the new VMFS datastore without a risk of its UUID colliding with UUIDs of any other datastore, such as an ancestor or child in a hierarchy of LUN snapshots.
Resignature a VMFS Datastore Copy
Use datastore resignaturing if you want to retain the data stored on the VMFS datastore copy.

Prerequisites
To resignature a mounted datastore copy, first unmount it. Before you resignature a VMFS datastore, perform a storage rescan on your host so that the host updates its view of LUNs presented to it and discovers any LUN copies.


Procedure

  1. Log in to the vSphere Client and select the server from the inventory panel.
  2. Click the Configuration tab and click Storage in the Hardware panel.
  3. Click Add Storage.
  4. Select the Disk/LUN storage type and click Next.
  5. From the list of LUNs, select the LUN that has a datastore name displayed in the VMFS Label column and click Next. The name present in the VMFS Label column indicates that the LUN is a copy that contains a copy of an existing VMFS datastore.
  6. Under Mount Options, select Assign a New Signature and click Next.
  7. In the Ready to Complete page, review the datastore configuration information and click Finish.
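As with mounting, resignaturing can also be driven from the service console; a hedged sketch with placeholder arguments:
# esxcfg-volume -l
# esxcfg-volume -r <VMFS_UUID|label>
Here -r resignatures the datastore copy identified by its VMFS UUID or label from the -l output.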

What to do next
After resignaturing, you might have to do the following:


  • If the resignatured datastore contains virtual machines, update references to the original VMFS datastore in the virtual machine files, including .vmx, .vmdk, .vmsd, and .vmsn.
  • To power on virtual machines, register them with vCenter Server.

Configure vCenter Server Storage Filters

Turn off vCenter Server Storage Filters
When you perform VMFS datastore management operations, vCenter Server uses default storage filters. The filters help you to avoid storage corruption by retrieving only the storage devices, or LUNs, that can be used for a particular operation. Unsuitable LUNs are not displayed for selection. You can turn off the filters to view all LUNs.
Before making any changes to the LUN filters, consult with the VMware support team. You can turn off the filters only if you have other methods to prevent LUN corruption.

Procedure
1. In the vSphere Client, select Administration > vCenter Server Settings.
2. In the settings list, select Advanced Settings.
3. In the Key text box, type a key.


Key                                               Filter Name
config.vpxd.filter.vmfsFilter                     VMFS Filter
config.vpxd.filter.rdmFilter                      RDM Filter
config.vpxd.filter.SameHostAndTransportsFilter    Same Host and Transports Filter
config.vpxd.filter.hostRescanFilter               Host Rescan Filter

NOTE If you turn off the Host Rescan Filter, your hosts continue to perform a rescan each time you present a new LUN to a host or a cluster.
4. In the Value text box, type False for the specified key.
5. Click Add.
6. Click OK.
You are not required to restart the vCenter Server system.

vCenter Server Storage Filtering
The storage filters that the vCenter Server provides help you avoid storage device corruption and performance degradation that can be caused by an unsupported use of LUNs. These filters are available by default.

Storage Filters

Filter: VMFS Filter
Description:
Filters out any storage devices, or LUNs, that are already used by another VMFS datastore on any host managed by the vCenter Server. Prevents LUN sharing by multiple datastores or a datastore and RDM combination.
Key: config.vpxd.filter.vmfsFilter

Filter: RDM Filter
Description :
Filters out any LUNs that are already referenced by another RDM on any host managed by the vCenter Server. Prevents LUN sharing by a datastore and RDM combination. In addition, the filter prevents virtual machines from accessing the same LUN through different RDM mapping files.

If you need virtual machines to access the same raw LUN, they must share the same RDM mapping file. For details on this type of configuration, see Setup for Failover Clustering and Microsoft Cluster Service.

Key: config.vpxd.filter.rdmFilter
Filter: Same Host and Transports Filter
Description:
Filters out LUNs ineligible for use as VMFS datastore extents due to a host or storage type incompatibility. Prevents you from adding the following LUNs as extents:
  • LUNs not exposed to all hosts that share the original VMFS datastore.
  • LUNs that use a storage type different from the one the original VMFS datastore uses. For example, you cannot add a Fibre Channel extent to a VMFS datastore on a local storage device.
Key: config.vpxd.filter.SameHostAndTransportsFilter
Filter: Host Rescan Filter
Description:
Automatically rescans and updates storage devices after you perform datastore management operations. The filter helps provide a consistent view of all storage devices and VMFS datastores on all hosts managed by the vCenter Server.
Key: config.vpxd.filter.hostRescanFilter

Tuesday, September 27, 2011

Raw Device Mapping

ESX Server Configuration Guide page 135.
An RDM is a mapping file that acts as a proxy for a raw physical storage device and resides as a file in a separate VMFS volume. An RDM allows the virtual machine to directly access and use the storage device. The RDM file contains metadata for managing and redirecting disk access to the physical device. (Fibre Channel and iSCSI only)

An RDM provides some of the advantages of direct access to a physical device while keeping some advantages of a virtual disk in VMFS. As a result, it merges VMFS manageability with raw device access.

Mapping a raw device into a datastore, mapping a system LUN, or mapping a disk file to a physical disk volume are all terms used to refer to RDMs.

Situations in which to consider using raw LUNs with RDMs:
  • When SAN snapshots or other layered applications are run in the virtual machine. The RDM better enables scalable backup offloading systems by using features inherent to the SAN.
  • In any MSCS clustering scenario that spans physical hosts - virtual-to-virtual clusters as well as physical-to-virtual clusters. In this case, cluster data and quorum disks should be configured as RDMs rather than as files on a shared VMFS.
Think of an RDM as a symbolic link from a VMFS volume to a raw LUN. The mapping makes LUNs appear as files in a VMFS volume. The RDM, not the raw LUN, is referenced in the virtual machine configuration; the RDM contains a reference to the raw LUN.
Using RDMs, you can:
  • Use vMotion to migrate virtual machines using raw LUNs.
  • Add raw LUNs to virtual machines using the vSphere Client.
  • Use file system features such as distributed file locking, permissions, and naming.
When you cannot see LUNs provisioned to the host, you might need to set the "config.vpxd.filter.rdmFilter" advanced vCenter option to "False".
Refer KB at http://kb.vmware.com/kb/1010513
Two compatibility modes are available for RDMs (a vmkfstools sketch follows this list):
  • Virtual compatibility mode: Allows an RDM to act exactly like a virtual disk file, including the use of snapshots.
  • Physical compatibility mode: Allows direct access to the SCSI device for applications that need lower-level control.
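The vmkfstools sketch mentioned above: an RDM mapping file can also be created from the service console (the device ID, datastore, and file names below are placeholders, not values from this lab):
# vmkfstools -r /vmfs/devices/disks/<device_id> /vmfs/volumes/<datastore>/<vm>/<vm>-rdm.vmdk
# vmkfstools -z /vmfs/devices/disks/<device_id> /vmfs/volumes/<datastore>/<vm>/<vm>-rdmp.vmdk
The -r form creates a virtual compatibility mode RDM, while -z creates a physical compatibility (pass-through) RDM.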
Benefits of Raw Device Mapping
  • User-Friendly Persistent Names
  • Dynamic Name Resolution
  • Distributed File Locking
  • File Permissions
  • File System Operations
  • Snapshots
  • vMotion
  • SAN Management Agents
  • N-Port ID Virtualization (NPIV)
Limitations of Raw Device Mapping
  • Not available for block devices or certain RAID devices: RDM uses a SCSI serial number to identify the mapped device. Because block devices and some direct-attach RAID devices do not export serial numbers, they cannot be used with RDMs.
  • Available with VMFS-2 and VMFS-3 volumes only.
  • No snapshots in physical compatibility mode.
  • No partition mapping: RDM requires the mapped device to be a whole LUN.

NPIV - N-Port ID Virtualization

Fibre Channel SAN configuration guide page 53.

NPIV is an ANSI T11 standard that describes how a single Fibre Channel HBA port can register with the fabric using several worldwide port names (WWPNs). This allows a fabric-attached N_Port to claim multiple fabric addresses. Each address appears as a unique entity on the Fibre Channel fabric.

NPIV enables a single FC HBA port to register several unique WWNs with the fabric, each of which can be assigned to an individual virtual machine.

SAN objects, such as switches, HBAs, storage devices, or virtual machines, can be assigned World Wide Name (WWN) identifiers. WWNs uniquely identify such objects in the Fibre Channel fabric.

When virtual machines have WWN assignments


  • They use them for all RDM traffic, so the LUNs pointed to by any of the RDMs on the virtual machine must not be masked against its WWNs.
  • The virtual machine's configuration file (.vmx) is updated to include a WWN pair (consisting of a World Wide Port Name, WWPN, and a World Wide Node Name, WWNN).
  • As the virtual machine is powered on, the VMkernel instantiates a virtual port (VPORT) on the physical HBA which is used to access the LUN. The VPORT is a virtual HBA that appears to the FC fabric as a physical HBA; it has its own unique identifier, the WWN pair that was assigned to the virtual machine. Each VPORT is specific to the virtual machine.
  • The VPORT is destroyed on the host and no longer appears to the FC fabric when the virtual machine is powered off.
  • When a virtual machine is migrated from one ESX/ESXi host to another, the VPORT is closed on the first host and opened on the destination host.
  • The number of VPORTs that are instantiated equals the number of physical HBAs present on the host. A VPORT is created on each physical HBA on which a physical path is found. Each physical path is used to determine the virtual path that will be used to access the LUN.
  • HBAs that are not NPIV-aware are skipped in the discovery process because VPORTs cannot be instantiated on them.
When virtual machines do not have WWN assignments

  • When virtual machines do not have WWN assignments (no NPIV), they access storage LUNs with the WWNs of their host's physical HBAs.
By using NPIV, the SAN administrator can monitor and route storage access on a per-virtual-machine basis.

Requirements for using NPIV.


  1. NPIV can be used only for virtual machines with RDM disks. Virtual machines with regular virtual disks use the WWNs of the host's physical HBAs.
  2. HBAs on your ESX/ESXi host must support NPIV.
  3. Use HBAs of the same type, either all QLogic or all Emulex. VMware does not support heterogeneous HBAs on the same host accessing the same LUNs.
  4. If a host uses multiple physical HBAs as paths to the storage, zone all physical paths to the virtual machine. This is required to support multipathing even though only one path at a time will be active.
  5. Make sure that physical HBAs on the host have access to all LUNs that are to be accessed by NPIV-enabled virtual machines running on that host.
  6. The switches in the fabric must be NPIV-aware.
  7. When configuring a LUN for NPIV access at the storage level, make sure that the NPIV LUN number and NPIV target ID match the physical LUN and target ID.
  8. Use the vSphere Client to manipulate virtual machines with WWNs.
NPIV Capabilities

  • NPIV supports vMotion; assigned WWNs are retained. When the destination host does not support NPIV, the VMkernel reverts to using a physical HBA to route the I/O.
  • If your FC SAN environment supports concurrent I/O on the disks from an active-active array, the concurrent I/O to two different NPIV ports is also supported.

Limitations

  • NPIV is an extension to the FC protocol; it requires an FC switch and does not work with direct-attached FC disks.
  • WWNs assigned to a virtual machine are not retained when you clone the virtual machine or make a template of it.
  • NPIV does not support Storage vMotion.
  • Disabling and then re-enabling the NPIV capability on an FC switch while virtual machines are running can cause an FC link to fail and I/O to stop.
Assign WWNs to the virtual machines
You can assign a WWN to a new virtual machine with an RDM disk when you create this virtual machine. You can create from 1 to 16 WWN pairs, which can be mapped to the first 1 to 16 physical HBAs on the host.

Procedure.

  1. Open the New Virtual Machine wizard.
  2. Select Custom and click Next.
  3. Follow all steps required to create a custom virtual machine.
  4. On the Select a Disk page, select Raw Device Mapping and click Next.
  5. From a list of SAN disks or LUNs, select a raw LUN you want your virtual machine to access directly.
  6. Select a datastore for the RDM mapping file. The datastore for the RDM mapping file can be the same as, or different from, the datastore where the virtual machine files reside. To use vMotion with an NPIV-enabled virtual machine, the RDM file must reside on the same datastore as the virtual machine configuration file.
  7. Follow the steps required to create a virtual machine with the RDM.
  8. On the Ready to Complete page, select the Edit the virtual machine settings before completion check box and click Continue. The Virtual Machine Properties box opens.
  9. Assign WWNs to the virtual machine.

  • Click the Options tab, and select Fibre Channel NPIV.
  • Select Generate new WWNs.
  • Specify the number of WWNNs and WWPNs. A minimum of 2 WWPNs are needed to support failover with NPIV. Typically only 1 WWNN is created for each virtual machine.
10. Click Finish.
Register the newly generated WWNs in the fabric so that the virtual machine is able to log in to the switch, and assign storage LUNs to the WWNs.
Modify WWN assignments.
Typically, you do not need to change existing WWN assignments on your virtual machine. In certain circumstances, for example, when manually assigned WWNs are causing conflicts on the SAN, you might need to change or remove WWNs.
Prerequisites.

  • Make sure to power off the virtual machine if you want to edit the existing WWNs.
  • Ensure that the SAN administrator has provisioned the storage LUN ACL to allow the virtual machine's ESX/ESXi host to access it.
Procedure

  1. Open the Virtual Machine Properties dialog box by clicking the Edit Settings link for the selected virtual machine.
  2. Click the Options tab and select Fibre Channel NPIV.
  3. Edit the WWN assignment by selecting one of the following options:

  • Temporarily disable NPIV for this virtual machine
  • Leave unchanged
  • Generate new WWNs
  • Remove WWN assignment
4. Click OK to save changes.

Monday, September 26, 2011

VMDirectPath vs. Paravirtual SCSI

Below posting was referenced.
http://professionalvmware.com/2009/08/vmdirectpath-paravirtual-scsi-vsphere-vm-options-and-you/

Paravirtual SCSI is a feature that depends on guest OS support, so it requires a guest OS with a driver for the paravirtual SCSI adapter. See http://kb.vmware.com/kb/1010398 for supplemental information on paravirtual SCSI.
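As a point of reference, a VM configured with the paravirtual adapter typically carries a setting like the one below in its .vmx file (scsi0 is just an example controller number):
scsi0.virtualDev = "pvscsi"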

With VMDirectPath, a virtual machine can be connected to up to 2 passthrough devices (for ESX 4.0) or 4 devices (from ESX 4.1). Refer to http://kb.vmware.com/kb/1010789. This feature requires CPUs that support directed I/O, such as Intel's VT-d technology or AMD's AMD-Vi (originally called IOMMU, the input/output memory management unit). Refer to http://www.intel.com/technology/itj/2006/v10i3/2-io/7-conclusion.htm for more on VT-d.

Thursday, September 22, 2011

VMware Communities: ESXi 4.1 HA configure error!!!

VMware Communities: ESXi 4.1 HA configure error!!!

I tried to configure an HA cluster with ESX 4.1 and ESXi 4.1 together, and there was a problem when adding ESXi 4.1 to the existing cluster.
The above VMware community post was really helpful.
In conclusion, it was a RAM size issue with ESXi 4.1. When it is managed by vCenter Server, it needs more than 2GB of RAM, say 3GB.

Friday, September 16, 2011

VMware virtual lab environment.....

A VMware virtual lab environment has been set up. While installing ESX 4.0, KB article id 161 was referenced to make the network work properly.
On a ThinkPad E420 (Intel i3 CPU, 8GB memory) running VMware Workstation 7.1.4 on Fedora 15, 3 VMs have been created:
1 VM for Windows 2008 Server acting as domain controller and DNS server; it will also be used as an NFS server for shared storage when testing the HA cluster.
1 VM for vCenter Server.
1 VM for ESX 4.0.
Another VM for ESXi 4.0 will be created and an HA cluster will be configured.
Starting from the base vSphere 4.0 configuration, I will upgrade step by step to vSphere 5.0 and test various functionality.

Thursday, September 15, 2011

Photo on a newspaper, etnews.com

I appeared in an Internet newspaper, etnews.com, when an article was posted on the Nextro system opening at Suhyup Bank on 14 Sep. 2011.
The photo looks like it was taken the day before the opening; in fact it was a few days before. And I was in it.

Tuesday, September 6, 2011

OSX Lion on VMware Workstation VM

VMware Workstation 7.1.4 has been installed on my Fedora 15 laptop, and I am trying to install OS X Lion because I failed to upgrade my Snow Leopard VM to Lion. Several web postings have been referenced so far, but the install is not yet complete. I just followed the instructions, but it's a little bit tricky.
I'll try again tonight, hoping for a good result...

...At last, it works!!
Too bad the sound card isn't functioning.