Monday, October 24, 2011

How to configure NPIV (N_Port ID Virtualization)


Step By Step NPIV configuration

For maximum path redundancy, create the virtual adapters on dual VIOS. We will consider a scenario with a POWER6/7 server, two PCI dual/single-port 8 Gb Fibre Channel cards, VIOS level 2.2 FP24 installed, and the VIOS in a shutdown state.
First we need to create a virtual Fibre Channel adapter on each VIOS, which we will later map to a physical Fibre Channel adapter after logging in to the VIOS, similar to what we do for Ethernet.

Please note: create all the LPAR clients as per requirements first, and then configure the virtual Fibre Channel adapters on the VIOS. Since we are mapping one physical Fibre Channel adapter to multiple hosts, we need to create that many virtual Fibre Channel adapters. Virtual Fibre Channel adapters can be created dynamically, but don't forget to add them to the partition profile, or you will lose the configuration at power-off.

  1. Create a virtual Fibre Channel adapter on both VIOS servers.
          HMC --> Managed System --> Manage Profile --> Virtual Adapter
Let's say I have defined the virtual Fibre Channel adapter for the AIX client Netwqa with server adapter ID 33 and client adapter ID 33.


Similarly, on VIOS2 for multipath redundancy:


If you have any more LPARs which you want to configure for NPIV, repeat the above mentioned steps with those LPAR details. 
  2. Mapping the defined virtual Fibre Channel adapters to physical HBA ports
Now activate the VIOS LPAR. Log on to the VIOS and check the status of the physical Fibre Channel ports. If the VIOS are already running, run cfgmgr to configure the newly defined virtual FC adapters on the VIOS servers.
$ lsnports
Name    physloc                      fabric tports aports swwpns awwpns
fcs0    U5802.001.008A824-P1-C9-T1   0      64     64     2048   2048
fcs1    U5802.001.008A824-P1-C9-T2   0      64     64     2048   2048
fcs2    U5877.001.0083832-P1-C9-T1   0      64     64     2048   2048
fcs3    U5877.001.0083832-P1-C9-T2   0      64     64     2048   2048

If the value of the 'fabric' parameter is 0, that HBA port is not connected to a SAN switch supporting the NPIV feature; connect a fibre cable between the physical Fibre Channel adapter and the SAN switch. If the 'fabric' value is 1, the HBA port is connected to a SAN switch that supports NPIV.

The command displays:
            Name: the adapter name
physloc: the physical location of the adapter
aports: the number of available physical ports
awwpns: the number of WWPNs the physical port still has available (swwpns is the total it supports)
After connecting the Fibre Channel cable, run lsnports again; you should now see fabric=1.

$ lsnports
Name    physloc                      fabric tports aports swwpns awwpns
fcs0    U5802.001.008A824-P1-C9-T1   1      64     64     2048   2048
fcs1    U5802.001.008A824-P1-C9-T2   1      64     64     2048   2048
fcs2    U5877.001.0083832-P1-C9-T1   1      64     64     2048   2048
fcs3    U5877.001.0083832-P1-C9-T2   1      64     64     2048   2048
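The fabric-column check can be scripted. The sketch below parses saved lsnports output and flags any port still showing fabric=0; the sample lines are hypothetical, and on the VIOS you would capture real output with `lsnports > /tmp/lsnports.out`.

```shell
# Hypothetical lsnports output (data rows only), saved for parsing
cat > /tmp/lsnports.out <<'EOF'
fcs0 U5802.001.008A824-P1-C9-T1 1 64 64 2048 2048
fcs1 U5802.001.008A824-P1-C9-T2 0 64 64 2048 2048
EOF

# Column 3 is the fabric flag; report any port not yet on an NPIV fabric
awk '$3 == 0 {print $1 ": not on an NPIV-capable fabric"}' /tmp/lsnports.out
```

For the sample above this prints `fcs1: not on an NPIV-capable fabric`, telling you which physical port still needs cabling or an NPIV-enabled switch port.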
Run `lsdev -vpd | grep vfchost` to see which device represents the virtual FC adapter in a specific slot, or run `lsmap -npiv -all` to list the virtual FC adapters and their mappings to physical adapters.
Here we are interested in vfchost2, as the example connects vfchost2.
Check the Status and Flags fields:

Status:LOGGED_IN, Flags: a<LOGGED_IN,STRIP_MERGE>
-> The vfchost adapter is mapped to a physical adapter, and the associated client is up and running.
Status: NOT_LOGGED_IN, Flags:1<NOT_MAPPED,NOT_CONNECTED>
-> The vfchost adapter is not mapped to a physical adapter

Status: NOT_LOGGED_IN, Flags:4<NOT_LOGGED>
-> The vfchost adapter is mapped to a physical adapter, but the associated client is not running. If you suspect a problem, check for VFC_HOST errors.

ClntName: displayed only when the mapped VIO client is booted and in a running state.

ClntOS: displayed only when the mapped VIO client is booted and in a running state.

Now we need to map the device vfchost2 to the physical HBA port fcs1 using `vfcmap -vadapter vfchost2 -fcp fcs1`. Once it is mapped, check the status of the mapping with `lsmap -vadapter vfchost2 -npiv`. Note that the status of the port shows NOT_LOGGED_IN; this is because the client configuration is not yet complete, so it cannot log in to the fabric.

$ vfcmap -vadapter vfchost2 -fcp fcs1

List the adapter using `lsmap -vadapter vfchost2 -npiv`.
Since the AIX client is not yet configured and mapped, the status is NOT_LOGGED_IN, and ClntName, ClntOS, the VFC client name and the DRC are not displayed.

Repeat the above steps on the second VIOS LPAR as well. If you have more client LPARs, repeat the steps for all of them on both VIOS LPARs.

  3. AIX Client Configuration
Create the virtual FC client adapter on the AIX LPAR by navigating in the HMC:
HMC --> VIO Client (NETWQA) --> Manage Profile --> Virtual Adapter --> Action --> Create
Create the second virtual FC client adapter with the matching slot number details. Make sure the slot numbers match those we entered in the second VIOS LPAR while creating the virtual FC server adapter.
Now activate the AIX LPAR and install AIX; note that the minimum level required to support NPIV is AIX 5.3 TL9 or AIX 6.1 TL2. Once the AIX installation is complete, install and configure the necessary subsystem driver for your SAN storage box. If AIX is already running, issue `cfgmgr`.
Install the SDDPCM driver for multipathing, depending on the storage you have.

You can now check the status of the Virtual FC Server Adapter ports in both the VIOS to check whether the ports are successfully logged in to the SAN fabric.
VIOS2
4. Allocating SAN Storage
You can now assign storage to the AIX LPAR. Do proper zoning between the SAN storage and the WWPNs of the AIX client's virtual FC adapters. Use the command below to check the WWPN of a virtual Fibre Channel adapter on the AIX client.
#lscfg -vpl fcs*
You can also get the WWPN from the AIX client's partition profile through the HMC.
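The WWPN can also be pulled straight out of the lscfg output. The sketch below parses a saved, hypothetical lscfg listing (the adapter location and WWPN value are made up for illustration); on a real client you would simply run `lscfg -vpl fcs0 | grep 'Network Address'`.

```shell
# Hypothetical lscfg -vpl fcs0 output saved for parsing
cat > /tmp/lscfg.out <<'EOF'
  fcs0   U9117.MMA.XXXXXXX-V3-C33-T1  Virtual Fibre Channel Client Adapter
        Network Address.............C05076012345678A
EOF

# Strip everything up to and including the "Network Address" label and its
# dot leader, leaving just the WWPN
sed -n 's/.*Network Address\.*//p' /tmp/lscfg.out
```

This prints the bare WWPN (here the hypothetical `C05076012345678A`), which is what you hand to the SAN team for zoning.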

NOTE: When viewing the properties of the virtual FC client adapter from the HMC, it shows two WWPNs for each virtual FC client adapter. The second WWPN is not used until a live migration is performed on this LPAR through Live Partition Mobility. When a live migration happens, the migrated LPAR accesses SAN storage using the second WWPN of the virtual FC client adapter, so make sure the second WWPN is also configured in zoning and access control.

Use `lspath`, `pcmpath query adapter`, `datapath query adapter`, `datapath query device`, `lsvpcfg`, `pcmpath query essmap`, etc., to check that multipathing and the hdisks are configured properly.

You should see four separate paths for the disk hdisk2, going through two separate virtual FC adapters, since I have connected my DS storage to the fibre switch through four cables per fibre card.
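The per-disk path count can be tallied from the lspath output columns. The sample lines below are hypothetical; on the client you would just run `lspath` and feed its output to the same awk.

```shell
# Hypothetical lspath output: status, disk, parent adapter
cat > /tmp/lspath.out <<'EOF'
Enabled hdisk2 fscsi0
Enabled hdisk2 fscsi0
Enabled hdisk2 fscsi1
Enabled hdisk2 fscsi1
EOF

# Count paths per disk (column 2 is the disk name)
awk '{n[$2]++} END {for (d in n) print d, n[d] " paths"}' /tmp/lspath.out
```

For the sample this prints `hdisk2 4 paths`; fewer paths than expected usually means a zoning or cabling gap on one of the VIOS.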
**Zoning on the SAN switch is out of scope for this document; if you want to know how to do zoning, you can drop a comment or mail me.

Limitations:-
§  NPIV is only supported on 8Gb FC adapters on p6 hosts. The FC switch needs to support NPIV, but does not need to be 8 Gb (the 8 Gb adapter can negotiate down to 2 and 4 Gb).
§  Maximum number of 64 NPIV adapters per physical adapter (see lsnports)
§  16 virtual fibre channel adapters per client
§  No support for IP over FC (FCNET)
§  Optical devices attached via virtual fibre channel are not supported at this time
§  Diagnostics not supported for virtual fibre channel adapters

Important NPIV Commands
$ lsnports
Displays information about the physical ports of the physical Fibre Channel adapters
$ lsmap -npiv -all
Displays the virtual Fibre Channel adapters created on the VIO Server and their status
$ lsmap -npiv -vadapter vfchost0
Displays the attributes of a virtual Fibre Channel adapter
$ vfcmap -vadapter vfchost0 -fcp fcs0
Maps a virtual Fibre Channel adapter to a physical Fibre Channel adapter
$ vfcmap -vadapter vfchost0 -fcp
Unmaps a virtual Fibre Channel adapter
$ portcfgnpivport ------> on an IBM Brocade SAN switch
0 - disables the NPIV capability on the port
1 - enables the NPIV capability on the port
Usage: $ portcfgnpivport 10 1
Enables NPIV functionality on port 10 of the SAN switch
Also configure the Fibre Channel devices with dyntrk=yes and fc_err_recov=fast_fail on the AIX LPAR.
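These two attributes live on the fscsi protocol devices of the AIX client. A minimal sketch for one adapter follows; fscsi0 is an example device name, and -P writes the change to the ODM only, so it takes effect at the next reboot or device reconfiguration.

```shell
# Enable dynamic tracking and fast-fail error recovery on the virtual FC
# protocol device (fscsi0 is an example name; repeat per fscsi device).
# -P defers the change until reboot or until the device is reconfigured.
chdev -l fscsi0 -a dyntrk=yes -a fc_err_recov=fast_fail -P

# Verify the attribute values
lsattr -El fscsi0 -a dyntrk -a fc_err_recov
```

fast_fail makes path failures surface quickly to the multipath driver, and dyntrk lets AIX follow SAN changes (such as a moved switch port) without reconfiguration.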

What is NPIV (N_Port ID Virtualization)

N_Port ID Virtualization or NPIV is a Fibre Channel facility allowing multiple N_Port IDs to share a single physical N_Port. This allows multiple Fibre Channel initiators to occupy a single physical port, easing hardware requirements in Storage Area Network design. An NPIV-capable fibre channel HBA can have multiple N_Port IDs, each with a unique identity and world wide port name.

In simple words, we virtualize the physical Fibre Channel card just as we do for the network card in VIOS. A single Fibre Channel port can be shared across multiple VIO clients with unique WWPNs, which means you can connect each VIO client to independent physical storage on a SAN.

On System p the Virtual I/O Server (VIOS) allows sharing of physical resources between LPARs including virtual SCSI, virtual Fibre Channel (NPIV) and virtual networking. NPIV allows System p logical partitions (LPARs) to have dedicated N_Port IDs, giving the OS a unique identity to the SAN, just as if it had a dedicated physical HBA(s).
For maximum redundancy of the LPARs' SAN paths, the paths have to go through two different HBAs, and those HBAs should be assigned to two different VIO Servers.

Pre-requisites/Basic Requirements

• IBM POWER6 or above processor based hardware
• Firmware level: EM340_039
• 8Gb PCIe Dual-port FC Adapter (Feature Code #5735)
• SAN switch supporting the NPIV feature
• Virtual I/O Server version 2.1.0.10-FP-20.1 or later
• AIX 5.3 Technology Level 09 or above (required filesets devices.vdevice.vfc-client and devices.vdevice.IBM.vfc-client)
• AIX 6.1 Technology Level 02 or above (required filesets devices.vdevice.vfc-client and devices.vdevice.IBM.vfc-client)
• SDD 1.7.2.0 + PTF 1.7.2.2
• SDDPCM 2.2.0.0 + PTF v2.2.0.6 or 2.4.0.0 + PTF v2.4.0.1
• For HMC managed systems HMC release 7.3.4.0 with MH01152 or later is required

NPIV Model:



To enable NPIV on the managed system, you must create a Virtual I/O Server logical partition (version 2.1 or later) that provides virtual resources to client logical partitions. You assign the physical fibre channel adapters (that support NPIV) to the Virtual I/O Server logical partition. Then, you connect virtual fibre channel adapters on the client logical partitions to virtual fibre channel adapters on the Virtual I/O Server logical partition. A virtual fibre channel adapter is a virtual adapter that provides client logical partitions with a fibre channel connection to a storage area network through the Virtual I/O Server logical partition. The Virtual I/O Server logical partition provides the connection between the virtual fibre channel adapters on the Virtual I/O Server logical partition and the physical fibre channel adapters on the managed system.

There is always a one-to-one relationship between virtual fibre channel adapters on the client logical partitions and the virtual fibre channel adapters on the Virtual I/O Server logical partition. That is, each virtual fibre channel adapter on a client logical partition must connect to only one virtual fibre channel adapter on the Virtual I/O Server logical partition, and each virtual fibre channel on the Virtual I/O Server logical partition must connect to only one virtual fibre channel adapter on a client logical partition.
On systems that are managed by the Hardware Management Console (HMC), you can dynamically add and remove virtual fibre channel adapters to and from the Virtual I/O Server logical partition and each client logical partition. You can also view information about the virtual and physical fibre channel adapters and the worldwide port names (WWPNs) by using Virtual I/O Server commands.
To enable N_Port ID Virtualization (NPIV) on the managed system, you create the required virtual fibre channel adapters and connections as follows:
---You use the HMC to create virtual fibre channel adapters on the Virtual I/O Server logical partition and associate them with virtual fibre channel adapters on the client logical partitions.
---You use the HMC to create virtual fibre channel adapters on each client logical partition and associate them with virtual fibre channel adapters on the Virtual I/O Server logical partition. When you create a virtual fibre channel adapter on a client logical partition, the HMC generates a pair of unique WWPNs for the client virtual fibre channel adapter.
---You connect the virtual fibre channel adapters on the Virtual I/O Server to the physical ports of the physical fibre channel adapter by running the vfcmap command on the Virtual I/O Server.

The HMC generates WWPNs based on the range of names available for use with the prefix in the vital product data on the managed system. This six digit prefix comes with the purchase of the managed system and includes 32,000 pairs of WWPNs. When you remove a virtual fibre channel adapter from a client logical partition, the hypervisor deletes the WWPNs that are assigned to the virtual fibre channel adapter on the client logical partition. The HMC does not reuse the deleted WWPNs when generating WWPNs for virtual fibre channel adapters in the future. If you run out of WWPNs, you must obtain an activation code that includes another prefix with another 32,000 pairs of WWPNs. To avoid configuring the physical fibre channel adapter to be a single point of failure for the connection between the client logical partition and its physical storage on the SAN, do not connect two virtual fibre channel adapters from the same client logical partition to the same physical fibre channel adapter. Instead, connect virtual fibre channel adapters from multiple client logical partitions to one physical fibre channel adapter.

Check the next article: How to configure NPIV.

Sunday, October 16, 2011

When root password was last updated in Aix server

 It is cumbersome to find out when the root password was last updated on an AIX system, especially at audit time; calculating the date from the info in /etc/security/passwd by hand is really a madness. Here is how to check when the root password was last updated.

1) Check lastupdate in /etc/security/passwd, or run pwdadm -q root
root:
        lastupdate = 1316984479

 
2) Then run this command
perl -le 'print scalar localtime 1316984479'


Mon Sep 26 02:31:19 2011
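The two steps can be combined into one pipeline. The sketch below works on a sample copy of /etc/security/passwd (the stanza shown is hypothetical; on a real server point the awk at /etc/security/passwd itself):

```shell
# Hypothetical /etc/security/passwd stanza, copied for illustration
cat > /tmp/passwd.sample <<'EOF'
root:
        password = x
        lastupdate = 1316984479
EOF

# Pull the lastupdate value from the root: stanza only, then convert it
epoch=$(awk '/^root:/{f=1; next} /^[^ \t]/{f=0} f && $1=="lastupdate"{print $3; exit}' /tmp/passwd.sample)
perl -le "print scalar localtime $epoch"
```

The awk keeps state so that a lastupdate line belonging to some other user's stanza is not picked up by mistake.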

 
That's it!

How to disable TCB on running Aix Server


Many AIX admins believe that the Trusted Computing Base (TCB), once enabled, cannot be disabled on a running system, and that you need to reinstall the OS to deactivate it. What a joke! Everything is becoming dynamic and we are still standing at the same level. Let's step forward:

 MYTH: TCB can't be disabled once it is enabled. Here is how to disable TCB on the fly.

TCB, if enabled in AIX, can be disabled without rebooting or, rather, reinstalling the OS. Here is the process:
 
Don't need any application downtime.

*Playing with ODM is dangerous, so keep your hands safe ;)))

1) Take an ODM backup

Back up /usr/lib/objrepos, /usr/share/lib/objrepos and /etc/objrepos recursively.
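A minimal backup sketch, assuming /tmp/odmbackup as the destination (any safe location will do):

```shell
# Copy the three ODM directories somewhere safe before editing the ODM.
# /tmp/odmbackup is an example destination, not a required path.
mkdir -p /tmp/odmbackup
for d in /etc/objrepos /usr/lib/objrepos /usr/share/lib/objrepos; do
    if [ -d "$d" ]; then
        mkdir -p "/tmp/odmbackup$(dirname "$d")"
        cp -Rp "$d" "/tmp/odmbackup$d"
    fi
done
```

Keeping the original directory layout under the backup root makes a restore a straight copy back.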

2) Check the TCB in odm

# odmget -q attribute=TCB_STATE PdAt

PdAt:
        uniquetype = ""
        attribute = "TCB_STATE"
        deflt = "tcb_enabled"
        values = ""
        width = ""
        type = ""
        generic = ""
        rep = ""
        nls_index = 0
#

3) Disable TCB
odmget -q attribute=TCB_STATE PdAt | sed 's/tcb_enabled/tcb_disabled/' | odmchange -o PdAt -q attribute=TCB_STATE

4) Now TCB is disabled

# odmget -q attribute=TCB_STATE PdAt

PdAt:
        uniquetype = ""
        attribute = "TCB_STATE"
        deflt = "tcb_disabled"
        values = ""
        width = ""
        type = ""
        generic = ""
        rep = ""
        nls_index = 0
#

5) If you want to enable TCB again

odmget -q attribute=TCB_STATE PdAt | sed 's/tcb_disabled/tcb_enabled/' | odmchange -o PdAt -q attribute=TCB_STATE