#AIX#SDD#SDDPCM#Persistent Reserve – Reservation issue while migrating a server

With each new day come new challenges and new opportunities for learning.

I want to share the challenges I faced while migrating a server from old hardware to new hardware.

  1. The rootvg of the server was on physical disk, and alt_disk cloning to a SAN disk was not working (possibly a limitation of SDD).
  2. I took a mksysb and restored it on the new hardware with a new profile, then removed SDD and installed SDDPCM there.
  3. The application VG was on SAN disks, so I thought it would not cause any problem and the move would just be a series of mkvdiskhostmap/rmvdiskhostmap commands on the SVC. I brought down the old server and mapped the application VG disks to the new LPAR, but they did not show up with any PVID, when ideally they should have come up with the PVIDs they had on the old server. I tried to set the PVID on a disk manually by running chdev -l <diskname> -a pv=yes, but it threw an error. I then looked at the VGDA information on the disk using readvgda -o <diskname>, and it showed no errors. At first I was scared the disk data could have been corrupted, but then it struck me that all the disks could not have gone bad at the same time. After a lot of troubleshooting I suspected it could be related to disk reservation. I instantly ran the command to check the reservation state of the disk:

devrsrv -c query -l hdisk35

and it gave me the output below:

Device Reservation State Information
==================================================
Device Name                     :  hdisk35
Device Open On Current Host?    :  NO
ODM Reservation Policy          :  NO RESERVE
Device Reservation State        :  PR EXCLUSIVE

I quickly ran the same command on another server and found the Device Reservation State field there showed NO RESERVE, which confirmed my doubt. I then ran:

devrsrv -f -l hdisk35

to force-release the reservation, and querying the disk again showed:

Device Reservation State Information
==================================================
Device Name                     :  hdisk35
Device Open On Current Host?    :  NO
ODM Reservation Policy          :  NO RESERVE

The PR EXCLUSIVE reserve was cleared, and I was able to see the PVIDs for all the disks.

I tried importvg -y <vgname> <pvid> and voilà, it worked.

This link explains SCSI reserves in a detailed manner.

So in my case, the likely explanation is that a PR EXCLUSIVE reservation had been set by the old server and was never released.
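For next time, a quick way to scan every disk on a host for leftover reservations is a loop like the one below (a minimal sketch, assuming the devrsrv output format matches the above; verify on one disk first):

# Report the reservation state of every disk known to the host
for d in $(lspv | awk '{print $1}')
do
  echo "== $d =="
  devrsrv -c query -l $d 2>/dev/null | grep "Device Reservation State.*:"
done

Any disk reporting PR EXCLUSIVE (or another reserve) that is not actually in use elsewhere is a candidate for devrsrv -f.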

Please do share your insights.


AIX# Using the tools iptrace, snoop, tcpdump, wireshark, and nettl to trace packets

Creating, formatting, and reading packet traces is sometimes required to resolve problems with IBM® WebSphere® Edge Server. However, the most appropriate tool varies depending on the operating system.

Resolving the problem

Available for multiple operating systems
Wireshark is a useful, freely available tool that can read trace files and capture packets on almost any operating system.

Using iptrace on AIX®
You can use any combination of these options; you do not need to use them all:

-a Do NOT print out ARPs. Useful for cleaning up traces.
-s Limit the trace to the source/client IP address, if known.
-d Limit the trace to the destination IP, if known.
-b Capture bidirectional traffic (send and response packets).
-p Specify the port to be traced.
Example:

Run iptrace on AIX interface en1 to capture port 80 traffic from a single client IP to a server IP:
iptrace -a -i en1 -s clientip -b -d serverip -p 80 trace.out

This trace will capture both directions of the port 80 traffic on interface en1 between clientip and serverip and write it to the raw file trace.out.

Reproduce the problem, then stop the trace:
ps -ef | grep iptrace
kill -15 <iptrace_PID>
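
If you prefer a one-liner, something like this finds and stops the iptrace process in one step (a sketch; the bracketed pattern keeps awk from matching its own process):

# Send SIGTERM to the running iptrace daemon
kill -15 $(ps -ef | awk '/[i]ptrace/ {print $2}')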

Trace tools like Wireshark can read the trace.out files created by iptrace.

Exception: it is not possible to collect a packet capture on AIX when using IBM Load Balancer for IPv4 and IPv6.

Using snoop on Solaris™

-v Include verbose output. Commonly used when dumping to pre-formatted output.
-o Dump in binary format. Output is written to a binary file that is readable by Ethereal/Wireshark.
Example scenario:
snoop -d hme0 -v > snoop.out
snoop -d hme0 -o snoop.out

These commands capture all traffic on the hme0 interface. Use combinations of snoop options to meet your needs.

Warning: with some options, snoop may corrupt packets.

Using tcpdump on Linux®
tcpdump has many options and a comprehensive man page.

A simple way to capture all packets to a binary file readable by Ethereal/Wireshark:

Example:
tcpdump -s 2000 -w filename.out

For a simple packet trace that is pre-formatted and readable by any text editor, the following listens on the default interface for all port 80 traffic.

Example:
tcpdump port 80 >filename.out

This will watch only the eth1 interface.

Example:
tcpdump -i eth1 >filename.out
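
Combining the options above, a capture of only port 80 traffic on eth1, written to a binary file readable by Wireshark, might look like this (a sketch; the interface name and snap length are examples only):

# Capture port 80 traffic on eth1 with a 2000-byte snap length
tcpdump -i eth1 -s 2000 -w web80.out port 80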

Using Network Monitor with Microsoft® Windows®

1. Start Network Monitor.
2. Select the interface to listen on and click Start.
3. Once the needed traffic has been captured, click Stop.
4. Save the resulting file, which can be read by Network Monitor or Ethereal.
For additional information, visit the technote, How to capture network traffic with Network Monitor

Using nettl on HP-UX
The nettl tool provides control of network tracing and logging.

Command synopsis:
/usr/sbin/nettl -start
/usr/sbin/nettl -stop
/usr/sbin/nettl -firmlog 0|1|2 -card dev_name …
/usr/sbin/nettl -log class … -entity subsystem …
/usr/sbin/nettl -status [log |trace |all]
/usr/sbin/nettl -traceon kind … -entity subsystem …
[-card dev_name …] [-file tracename] [-m bytes] [-size portsize]
[-tracemax maxsize] [-n num_files]
/usr/sbin/nettl -traceoff -entity subsystem …
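
As a concrete example, a common pattern for tracing inbound and outbound IP PDUs looks like this (a sketch, assuming the ns_ls_ip subsystem; nettl appends a .TRC0 suffix to the trace file, and netfmt formats the binary trace):

/usr/sbin/nettl -traceon pduin pduout -entity ns_ls_ip -file /tmp/iptrace
(reproduce the problem)
/usr/sbin/nettl -traceoff -entity ns_ls_ip
/usr/sbin/netfmt -f /tmp/iptrace.TRC0 > /tmp/iptrace.txt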

Linux# Red Hat# Creating a user account to allow FTP but not login

1. Create the required user account:

# adduser ucode

2. Give the user a password:

# passwd ucode
(you will be asked for the password twice)

3. The default entry in the /etc/passwd file for this user will look
something like:

ucode:x:4347:4347::/home/ucode:/bin/bash

To stop the user ucode from logging in with telnet, ssh, or rlogin, change
the shell (/bin/bash) to /bin/false (and therefore no shell or login
will be given):

ucode:x:4347:4347::/home/ucode:/bin/false

4. To allow the FTP access to work, ftp expects a valid shell (what it gets
back from getusershell(); see “Ftpd authenticates users according to
four rules” in the ftpd man page). So we need to add our “shell”
/bin/false to /etc/shells. Simply add a new line:

/bin/false

You should now be able to log in with FTP but not get a shell login.
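
For repeatability, the same setup can be scripted (a minimal sketch, assuming a stock Red Hat layout; passwd still prompts interactively):

# Create the FTP-only account and deny shell logins
adduser ucode
passwd ucode
usermod -s /bin/false ucode
# Make /bin/false an acceptable "shell" for ftpd
grep -qx /bin/false /etc/shells || echo /bin/false >> /etc/shells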

SECURITY STEPS FOR WU-FTPD
————————–
The user ucode set up above, when logging in with FTP, is able to “cd ..”
out of their own home directory. This allows them to move into other
directories, which is not a safe thing to allow. To restrict
the user to their home directory and below, change
the “guestgroup” entry in the /etc/ftpaccess file. The line will look
something like:

guestgroup ftpchroot ucode

As user “ucode” is also in the ucode group, the FTP access will be
restricted. Remember, if you have any other user ID in the “ucode” group,
they will also have restricted FTP access.

If you want to allow a user to be in the “ucode” group (for write access to
ucode’s directories, maybe), then create a new group (such as restrictftp)
and add only the user “ucode” to this group in the /etc/group file.

SECURITY STEPS FOR VSFTPD
————————-
You can restrict an FTP user to access only their own home directory by setting
various options in the vsftpd configuration file. You can keep a small list of
those allowed access anywhere, the more secure method; or you can allow
everyone free access but restrict just a few people to their home
directories (in my view less secure).

To use the method with a list of those ALLOWED full access, do:

1. Edit the /etc/vsftpd/vsftpd.conf to include the lines:

chroot_local_user=YES
chroot_list_enable=YES
chroot_list_file=/etc/vsftpd.chroot_list

2. In the file /etc/vsftpd.chroot_list, put the names of the users you
WANT to allow access all over the file systems. For example, if you want
only bryk and fred to have full access, add their names to the file:

bryk
fred

All other users will ONLY have access to their home directories.

To use the method where ALL users have full access and a few are restricted
to their home directories, do:

1. Edit the /etc/vsftpd/vsftpd.conf to include the lines:

#chroot_local_user=YES (NOTE: the hash, this is important)
chroot_list_enable=YES
chroot_list_file=/etc/vsftpd.chroot_list

2. In the file /etc/vsftpd.chroot_list, put the names of the users you
WANT to restrict to their home directories only. For example, if you want
only bryk and ucode to be restricted:

bryk
ucode

All other users will have full access to other directories on the system.

The first method is more secure.
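
Whichever method you choose, restart vsftpd so the configuration changes take effect (a sketch; the service name follows the Red Hat default):

# Restart vsftpd and confirm it came back up
service vsftpd restart
service vsftpd status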

SAN # IBM SVC vs EMC VPLEX

Why IBM SVC and EMC VPLEX are not the same.

Whilst EMC VPLEX and IBM SAN Volume Controller (SVC) may, after a cursory glance, appear to be similar technologies with similar use cases, it is not so black and white; the reality is that these solutions were conceived with very different aims in mind and with very different architectures.

IBM SVC

IBM SVC was designed as a single-site storage virtualisation solution that enabled less capable storage arrays to be pooled behind the virtualisation layer and enhanced with additional write caching and non-disruptive data mobility features. SVC development has more recently added features such as auto-tiering and thin provisioning, transforming SVC into both a storage-controller-type solution with embedded features (e.g. V7000) and a standalone solution that can enable storage commoditisation use cases across heterogeneous storage arrays.

The architecture is based on I/O Groups of two SVC Nodes in an active/passive arrangement, with an SVC Cluster consisting of up to four I/O Groups and eight SVC Nodes. An SVC Cluster can be configured in a ‘split node’ architecture where the two-Node I/O Group is split across physically separate sites using Fibre Channel and ISLs.


EDIT 25.10.2012: In fairness to a comment posted by IBM I have added the picture below specifically depicting an IBM Split Node SVC solution.

[Image: IBM Split-Node SVC solution]

EMC VPLEX

EMC’s heritage and its position in the marketplace have been based on the development of storage platforms and inherent software features such as thin provisioning, snaps, clones and FAST within those platforms. EMC VPLEX was conceived as a storage virtualisation and multi-site federation solution that could further augment those existing technologies and enable greater levels of data and application availability and mobility.

An EMC VPLEX Cluster consists of up to eight VPLEX Directors participating in a single I/O group. Each Director has four front-end and four back-end Fibre Channel ports, with completely separate connectivity for inter-Director, inter-Engine and inter-Site communication. A completely separate, resilient VPLEX cluster can be created within a second datacentre and connected to the VPLEX Cluster within the first datacentre over either Fibre Channel or Ethernet, with no requirement for merged fabrics. Once connected, VPLEX enables the creation of local and distributed virtual volumes; in the case of the latter, the volumes are concurrently read/write accessible across both datacentres in a true active/active architecture in which resources from both datacentres are utilised for production I/O.

Whilst VPLEX Local does not necessarily have all of the storage commoditisation features associated with IBM SVC, it does enable the key virtualisation use cases of storage abstraction, storage pooling, data mobility and storage mirroring. For storage commoditisation use cases, EMC proposes the use of Federated Tiered Storage to enable the use of Tier 1 EMC storage software features across third-party arrays, which can then also, if required, additionally benefit from the use of VPLEX.


EMC VPLEX Local vs IBM SVC

  • VPLEX: All storage array features and functionality remain available behind VPLEX.
    SVC: Eliminates the ability to use back-end array features and functionality, due to write caching.

  • VPLEX: An active/active architecture where a virtual volume can be active across multiple VPLEX Directors.
    SVC: An active/passive two-Node I/O Group where a volume can only be active on one SVC Node.

  • VPLEX: Maintains back-end active/active array functionality.
    SVC: Forces an active/active back-end array into an active/passive solution, due to the active/passive SVC Nodes.

  • VPLEX: Directors can be added non-disruptively to an existing VPLEX Cluster, and the new resources (e.g. read cache, bandwidth, IOPS) are used across the Cluster.
    SVC: I/O Groups are discrete silos, and free resources in one I/O Group cannot be leveraged in another. With SVC 6.4, IBM claims non-disruptive mobility between I/O Groups; this is not 100% accurate, as the feature is limited to Windows and Linux, does not support clustered servers, does not support VMware, and even Windows requires a reboot.

  • VPLEX: Directors have four front-end and four back-end ports that are dedicated to front-end and back-end I/O.
    SVC: A Node has only four HBAs, and two of them are required for cache mirroring, leaving only one HBA per fabric for production. Losing one of those HBAs is equivalent to losing a complete fabric.

  • VPLEX: Can use RecoverPoint CDP to protect virtualised volumes against corruption, replicating to non-virtualised disks that can be used for recovery even in the event of a total loss of the virtualisation technology. Alternatively, snap and clone functionality at the array level can be fully utilised, as VPLEX does not do write caching.
    SVC: Snaps and clones can be completed using SVC and used to recover in case of an outage, but if the virtualisation layer itself is gone, all the snaps and clones are gone with it, forcing a restore of the data from tape.

  • VPLEX: Non-disruptive data mobility across heterogeneous storage arrays can be paused, stopped and backed out, without any risk of data loss and with little performance impact.
    SVC: Data mobility can have performance impacts during the actual mobility phase, and the method of migration can lead to data loss if a storage or site failure occurs during the migration.

VPLEX Metro vs Split-Node SVC Cluster

  • VPLEX: Independent, resilient VPLEX Clusters are federated across datacentres, using Fibre Channel or IP, with no requirement for merged fabrics.
    SVC: Two-Node I/O Groups are physically ‘split’ between datacentres, reducing resiliency within each datacentre. Fabrics must be merged between datacentres, and multiple ISLs must be used to dual-path all hosts across both SVC Nodes of the I/O Group, adding fabric complexity and cost.

  • VPLEX: With the minimum requirement of two Directors within a cluster, the loss of one Director does not require I/O to be redirected to the remote site, as there is local redundancy.
    SVC: A split-node configuration forces a multi-controller array with multiple redundancies into a two-Node solution with no redundancy per site. Losing one Node is similar to losing a site; I/O will have to cross the ISLs to the remote site.

  • VPLEX: A write to a distributed virtual volume from either site is written once to the remote site, irrespective of which location the production application runs from.
    SVC: For each write I/O, the appliance must do one write to the local back end, one write to the remote SVC Node and one write to the remote array, at least one round trip more than VPLEX. If production runs on the non-preferred site, the write goes to the primary Node, mirrors cache, then writes to the local array and the remote array, two round trips more than VPLEX.

SAN # Brocade Switch# FOS CLI Commands

Useful Brocade SAN Switch CLI commands

Here is a list of commonly used Brocade SAN switch CLI commands.

Show Commands

psshow
    Displays the status of the power supplies
fanshow
    Displays the status of the fans
tempshow
    Displays the temperature readings
sensorshow
    Displays the sensor readings
nsshow
    Displays information in the name server
nsshow -t
    Displays information in the name server, including the device type
nsshow -r
    Displays information in the name server along with the state change registration details
nscamshow
    Displays detailed information on all the devices connected to all the switches in the fabric (remote name servers)
nsallshow
    Displays the 24-bit address of all devices that are in the fabric
licenseshow
    Displays all the licenses that have been added to the switch
date
    Displays the current date set on the switch
bannershow
    Displays the banner that appears when logging in using the CLI or webtools
httpcfgshow
    Displays the Java version the switch expects at the management console
switchname
    Displays the switch name
fabricshow
    Displays information on all the switches in the fabric
userconfig --show -a
    Displays account information such as role, description, password expiry date and locked status
switchstatusshow
    Displays the overall status of the switch
switchstatuspolicyshow
    Displays the policy set for the switch regarding Marginal (Yellow) or Down (Red) error status
portshow
    Shows the port status
portcfgshow
    Displays the speed set for all ports on all slots and other detailed port information
configshow fabric.ops
    Displays the parameters of the switch; ensure all switches in a fabric have the same parameters in order to communicate
configshow fabric.ops.pidFormat
    Displays the PID format set for the switch: Core, Native or Extended Edge
switchuptime (or uptime)
    Displays the uptime for the switch
firmwareshow
    Displays the firmware on the switch
version
    Displays the current firmware version on the switch
hashow
    Displays the status of the local and remote CPs: high availability, heartbeat and synchronization

Port Settings

portcfgshow
    Displays the port settings
portcfg rscnsupr [slot/port] --enable
    A registered state change notification is suppressed when a state change occurs on the port
portcfg rscnsupr [slot/port] --disable
    A registered state change notification is sent when a state change occurs on the port
portname
    Assigns a name to a port
portdisable
    Disables a port or slot
portenable
    Enables a port or slot
portcfgpersistentdisable
    Disables a port; the status does not change even after rebooting the switch
portcfgpersistentenable
    Enables a port; the status does not change even after rebooting the switch
portshow
    Shows the port status
portcfgspeed
    Sets the speed for a port. Note: 0 = auto-negotiated 1/2/4 Gbit/sec, 1 = 1 Gbit/sec, 2 = 2 Gbit/sec, 4 = 4 Gbit/sec
switchcfgspeed
    Sets the speed for all the ports on the switch. Note: 0 = auto-negotiated 1/2/4 Gbit/sec, 1 = 1 Gbit/sec, 2 = 2 Gbit/sec, 4 = 4 Gbit/sec
portcfgshow
    Displays the speed set for all ports on all slots and other detailed port information
portcfgdefault
    Resets the port settings to default
portcfglongdistance
    Sets the long distance mode. Default is L0 (normal); by distance: LE <= 10 km, L0.5 <= 25 km, L1 <= 50 km, L2 <= 100 km, LD = auto, LS = static
portcfgeport
    Disables a port from being an E_Port

Setting commands

ipaddrset
    Sets the IP address for the switch
bannerset
    Sets the banner that appears when logging in using the CLI or webtools

Time and Date Settings

date
    Displays the current date set on the switch
tsclockserver 10.10.1.1
    Instructs the principal switch to synchronize time with the NTP server (specify the IP address of the NTP server)
tsclockserver LOCL
    Stops NTP server synchronization (uses the local time of the switch)
date mmddhhmmyy
    Sets the time of the switch when NTP server synchronization is cancelled
tstimezone -5
    Sets the time zone for individual switches
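
Putting the time commands together, pointing a fabric at an NTP server might look like this (a sketch; the server address and time zone offset are examples only):

tsclockserver 10.10.1.1
tstimezone -5
date

The first command makes the principal switch synchronize with the given NTP server, and the final date confirms the resulting time.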

License Commands

licenseshow
    Displays all the licenses that have been added to the switch
licenseadd
    Adds a new license to the switch
licenseremove
    Removes a license from the switch
licenseidshow
    Displays the license ID, which is based on the switch WWN

Banner Commands

bannershow
    Displays the banner that appears when logging in using the CLI or webtools
bannerset
    Sets the banner that appears when logging in using the CLI or webtools
bannerset ""
    Removes the banner

Password commands

passwd
    Changes the password for the current login
passwdcfg --set -lowercase 3 -uppercase 1 -digits 2 -punctuation 2 -minlength 10 -history 3
    Sets the password rules
passwdcfg --set -minpasswordage 1
    Sets the minimum password age in days
passwdcfg --set -maxpasswordage 30
    Sets the maximum password age in days
passwdcfg --set -warning 23
    Sets a warning for the number of days remaining before expiration
passwdcfg --set -lockoutthreshold 5
    Sets the account lockout threshold
passwdcfg --set -lockoutduration 30
    Sets the account lockout duration in minutes
passwdcfg --setdefault
    Restores the password policy to factory settings (minlength 8, history 1, lockoutduration 30)

User Configuration (commands to administer Accounts)

userconfig --show -a (or userconfig --show)
    Displays all account information such as role, description, password expiry date and locked status
userconfig --add jdoe -r admin -d "Jane Doe"
    Adds a new account (-r = role, -d = description)
userconfig --show jdoe
    Displays all the information for the account jdoe
userconfig --change <username> -e no
    Disables an account, usually default accounts like admin and user. Before disabling the admin account, ensure there is another account with admin rights
userconfig --change <username> -e yes
    Enables an account

NPIV Commands

portcfgnpivport
    Enables NPIV functionality on a port (enabled by default on Condor-based switches)
configure
    Increases the number of port logins allowed (default is 126, maximum 255)

SNMP

snmpconfig
    SNMP configuration for FOS 5.0 and above
agtcfgset
    SNMP configuration for FOS below 5.0
snmpmibcapset
    Chooses the MIBs for the SNMP settings

Zoning

alicreate "Name", "domain,port no"
    Creates an alias
alicreate "Name", "portname1; portname2"
    Creates an alias containing more than one port
alidelete "Name"
    Deletes an alias
aliadd "Name", "domain,port no"
    Adds additional ports to an alias
aliremove "Name", "domain,port no"
    Removes a port from an alias
alishow "AliName"
    Shows the alias configuration on the switch
zonecreate "ZoneName", "alias1; alias2"
    Creates a zone from aliases
zonedelete "ZoneName"
    Deletes a zone
zoneadd "ZoneName", "alias name"
    Adds an additional alias to a zone
zoneremove "ZoneName", "alias name"
    Removes an alias from a zone
zoneshow "ZoneName"
    Shows the zone configuration information
cfgcreate "ConfigName", "Zone1; Zone2"
    Creates a configuration by adding zones
cfgdelete "ConfigName"
    Deletes a configuration
cfgadd "ConfigName", "Zone3"
    Adds additional zones to a configuration
cfgremove "ConfigName", "Zone3"
    Removes a zone from a configuration
cfgshow "ConfigName"
    Shows the details of a configuration
cfgenable "ConfigName"
    Enables a configuration on the switch
cfgsave
    Writes the effective configuration to flash memory
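
As an end-to-end illustration, zoning one host HBA to one array port could look like the sequence below (a sketch; the alias names and WWN members are hypothetical, and your fabric may use "domain,port" members instead):

alicreate "HOST1_HBA0", "10:00:00:00:c9:aa:bb:cc"
alicreate "ARRAY1_P0", "50:05:07:68:01:10:20:30"
zonecreate "HOST1_ARRAY1", "HOST1_HBA0; ARRAY1_P0"
cfgcreate "PROD_CFG", "HOST1_ARRAY1"
cfgenable "PROD_CFG"
cfgsave

If a configuration is already active on the fabric, use cfgadd to append the new zone to it rather than cfgcreate.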

Firmware commands

configupload
    Saves the switch configuration as an ASCII text file to an FTP server
configdownload
    Restores a switch configuration from an ASCII text file. Note: the switch must be disabled before downloading the configuration file
configure => cfgload attributes: [y] => Ensure secure config upload/download: [y]
    Fabric OS v4.4 and above provides Secure File Copy Protocol (SCP) during upload or download of configurations
firmwaredownload
    Downloads the firmware to be installed on the switch
firmwareshow
    Run after installing the firmware to display what is on the switch
version
    Displays the current firmware version on the switch
fastboot
    Needs to be run after installing the firmware; this does not include POST
reboot
    Needs to be run after installing the firmware; this includes POST

Other commands

killtelnet
    Kills a particular telnet session
configure
    Configures the switch
quietmode
    Switches off quiet mode
quietmode 1
    Suppresses messages to the console
switchname
    Displays the switch name
switchname "EXAMPLE"
    Assigns a name to the switch
configure
    Disables/enables TELNETD
timeout
    Displays the timeout set for telnet sessions on the switch
timeout 10
    Sets a specific timeout for telnet sessions
switchuptime (or uptime)
    Displays the uptime for the switch
switchcfgspeed
    Sets the speed for all the ports on the switch. Note: 0 = auto-negotiated 1/2/4 Gbit/sec, 1 = 1 Gbit/sec, 2 = 2 Gbit/sec, 4 = 4 Gbit/sec
fastboot
    Reboots the switch without POST
reboot
    Reboots the switch with POST
switchstatusshow
    Displays the overall status of the switch
switchstatuspolicyshow
    Displays the policy set for the switch regarding Marginal (Yellow) or Down (Red) error status
switchstatuspolicyset
    Changes the policy set for the switch regarding Marginal (Yellow) or Down (Red) error status

SAN # EMC # What is VPLEX?

What is VPLEX?

VPLEX at its core is a storage virtualization appliance. It sits between your arrays and hosts and virtualizes the presentation of storage arrays, including non-EMC arrays.  Instead of presenting storage to the host directly you present it to the VPLEX. You then configure that storage from within the VPLEX and then zone the VPLEX to the host.  Basically, you attach any storage to it, and like in-band virtualization devices, it virtualizes and abstracts them.

There are three VPLEX product offerings: Local, Metro, and Geo.

Local.  VPLEX Local manages multiple heterogeneous arrays from a single interface within a single data center location. VPLEX Local allows increased availability, simplified management, and improved utilization across multiple arrays.

Metro.  VPLEX Metro with AccessAnywhere enables active-active, block-level access to data between two sites within synchronous distances. Host application stability needs to be considered; depending on the application, it is recommended that Metro be deployed at <= 5 ms latency. The combination of virtual storage with VPLEX Metro and virtual servers allows for the transparent movement of VMs and storage across longer distances and improves utilization across heterogeneous arrays and multiple sites.

Geo.  VPLEX Geo with AccessAnywhere enables active-active, block level access to data between two sites within asynchronous distances. Geo improves the cost efficiency of resources and power.  It provides the same distributed device flexibility as Metro but extends the distance up to 50ms of network latency. 

What are some advantages of using VPLEX? 

1. Extra Cache and Increased IO.  VPLEX has a large cache (64GB per node) that sits in-between the host and the array. It offers additional read cache that can greatly improve read performance on databases because the additional cache is offloaded from the individual arrays.

2. Enhanced options for DR with RecoverPoint. The DR benefits are increased when integrating RecoverPoint with VPLEX Metro or Geo to replicate the data using real-time replication. It includes a capacity-based journal for very granular rollback capabilities (think of it as a DVR for the data center).  You can also use the native bandwidth reduction features (compression & deduplication) or disable them if you have WAN optimization devices installed like those from Riverbed.  If you want active/active read/write access to data across a large distance, VPLEX is your only option.  NetApp’s V-Series and HDS USP-V can’t do it unless they are in the same data center. Here are a few more advantages:

  • DVR-like recovery to any point in time
  • Dynamic synchronous and asynchronous replication
  • Customized recovery point objectives that support any-to-any storage arrays
  • WAN bandwidth reduction of up to 90% of changed data
  • Non-disruptive DR testing

3. Non-disruptive data mobility & reduced maintenance costs. One of the biggest benefits of virtualizing storage is that you’ll never have to take downtime for a migration again. It can take months to migrate production systems, and without virtualization downtime is almost always required. Migration is also expensive: it takes a great deal of resources from multiple groups, as well as the cost of keeping the older array on the floor during the process. Overlapping maintenance costs are expensive too.  By shortening the migration timeframe, hardware maintenance costs will drop, saving money.  Maintenance can be a significant part of the storage TCO, especially if the arrays are older or are going to be used for a longer period of time.  Virtualization can be a great way to reduce those costs and improve the return on assets over time.

4. Flexibility based on application IO.  The ability to move and balance LUN I/O among multiple smaller arrays non-disruptively allows you to balance workloads and increases your ability to respond to performance demands quickly.  Note that underlying LUNs can be aggregated or simply passed through the VPLEX.

5. Simplified management and vendor neutrality.  Implementing VPLEX for all storage-related provisioning tasks reduces complexity with multiple vendor arrays.  It allows you to manage multiple heterogeneous arrays from a single interface.  It also makes zoning easier, as all hosts only need to be zoned to the VPLEX rather than to every array on the floor, which makes it faster and easier to provision new storage to a new host.

6. Increased leverage among vendors.  This advantage would be true of any virtualization device.  When controller-based storage virtualization is employed, there is more flexibility to pit vendors against each other to get the best hardware, software and maintenance costs.  Older arrays could be commoditized, which could allow for increased leverage to negotiate the best rates.

7. Use older arrays for archiving. Data can be seamlessly demoted or promoted to different arrays based on an array’s age, its performance levels and its related maintenance costs.  Older arrays can be retained for capacity and demoted to a lower tier of service, and even with the increased maintenance costs this can still save money.

8. Scale.  You can scale it out and add more nodes for more performance when needed.  With a VPLEX Metro configuration, you can configure VPLEX with up to 16 nodes in the cluster across the two sites.

What are some possible disadvantages of VPLEX?

1. Licensing costs. VPLEX is not cheap.  Also, it can be licensed per frame on VNX but must be licensed per TB on the CX series.  Your large, older CX arrays will cost you a lot more to license.

2. It’s one more device to manage.  The VPLEX is an appliance, and it’s one more thing (or several things) that has to be managed and paid for.

3. Added complexity to infrastructure.  Depending on the configuration, there could be multiple VPLEX appliances at every site, adding considerable complexity to the environment.

4. Managing mixed workloads in virtual environments.  When heavy workloads are all mixed together on the same array there is no way to isolate them, and the ability to migrate a workload non-disruptively to another array is one of the reasons to implement a VPLEX.  In practice, however, those VMs may end up being moved to another array with the same storage limitations as the one they came from.  The VPLEX may simply be solving a problem temporarily by moving that problem to a different location.

5. Lack of advanced features. The VPLEX has no advanced storage features such as snapshots, deduplication, replication, or thin provisioning.  It relies on the underlying storage array for those types of features.  As an example, you may want to utilize block-based deduplication with an HDS array by placing a NetApp V-Series in front of it and using NetApp’s dedupe to enable it.  That is only possible with a NetApp V-Series or HDS USP-V type device; the VPLEX can’t do it.

6. Write cache performance is not improved.  The VPLEX uses write-through caching, while its competitors’ storage virtualization devices use write-back caching. When there is a write I/O in a VPLEX environment, the I/O is cached on the VPLEX, but it is passed all the way back to the virtualized storage array before an ACK is sent to the host.  The NetApp V-Series and HDS USP-V store the I/O in their own cache and immediately return an ACK to the host; the I/Os are then flushed to the back-end storage array using their respective write-coalescing and cache-flushing algorithms.  Because of that write-back behavior, a performance gain above and beyond the performance of the underlying storage arrays is possible due to the caching on those controllers.  With VPLEX’s write-through cache design, there is no performance gain for write I/O beyond that of the existing storage.

AIX # How to migrate from SDD to SDDPCM/MPIO for non-HACMP nodes

Please upgrade all the nodes to AIX 5.3 TL05 CSP before migrating to SDDPCM, unless specified otherwise.

a. Ensure the node has a valid, current mksysb.

b. Ensure that you have collected all the information about the system.

If either item is not current or valid, run a mksysb or snap from cron prior to change night.

c. For each SAN volume group you are migrating, record the output of the commands below.

ls -al /dev/SANVG        (records the major number of SANVG)

lsvg -p SANVG            (records the vpaths in SANVG)

d. For each SAN volume group, record the vpath number of one vpath in your SAN VG, as well as its PVID. Also record each SAN VG’s name.

e. Record the number of vpaths on the server and their PVIDs.

datapath query device

lspv | grep vpath

f. Capture a full lspv snapshot.

lspv > /u/$homedir/lspv.snap

g. If this is a Domino Notes node, check to see if there is a local /notes directory with links.

Here is an example from yyyy:
yyyy # ls -al /notes
total 8
drwxrwxr-x 2 notes notes 256 Jan 20 12:06 .
drwxr-xr-x 37 root system 4096 Jan 20 12:06 ..
lrwxrwxrwx 1 notesa notes 13 Jan 20 12:06 xxxxx -> /xxxxx

If this directory exists with similar links, record the output. The /notes directory and
links will need to be recreated after the migration.

h. For any SAN VGs with raw LVs, record their device permissions.

ls -al /dev/rlvname

1. After the apps have been completely stopped, unmount every SAN filesystem.

2. Vary off and export all SAN VGs.

varyoffvg SANVG

exportvg SANVG

e.g. a way of unmounting all the filesystems in SANVG (step 1):

for i in $(lsvg -l SANVG | awk '{print $7}')
do
  umount $i
done

3. Stop the SDD server daemon.
stopsrc -s sddsrv
4. Remove the SDD vpath devices

rmdev -dl dpo -R

5. Remove fibre adapters

rmdev -dl fcsX -R

6. Remove hdisk devices.

lsdev -C -t 2105* -F name | xargs -n1 rmdev -dl     (for 2105 devices)

lsdev -C -t 2145* -F name | xargs -n1 rmdev -dl     (for 2145 devices)

lsdev -C -t 2107* -F name | xargs -n1 rmdev -dl     (for 2107 devices)

lsdev -C -t 1750* -F name | xargs -n1 rmdev -dl     (for 1750 devices)
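
The four removals can be collapsed into one loop (a minimal sketch; run it only after the vpath devices and fibre adapters have been removed in steps 4 and 5):

# Remove SDD-backed hdisk definitions for each supported device type
for t in 2105 2145 2107 1750
do
  lsdev -C -t "${t}*" -F name | xargs -n1 rmdev -dl
done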

7. Remove SDD and related filesets.

Filesets to remove:

SDDFSET=devices.sdd.${OS}.rte    (AIX 5.2: devices.sdd.52.rte, AIX 5.3: devices.sdd.53.rte)
IBM2105=ibm2105.rte
FCPFSET=devices.fcp.disk.ibm.rte

### At this point the system should be clean, without SDD and its disks.

8. Install filesets for MPIO & SDDPCM

Filesets to install

PCMFSET=devices.sddpcm.53.rte

MPIOFSET=devices.fcp.disk.ibm.mpio.rte

9. Verify install was successful.

10. Now reboot the server.

shutdown -Fr

11. Count the MPIO disks. Check that you have the same number of MPIO disks as the number of vpaths before the migration.

lsdev -Cc disk | grep -i mpio | wc -l

12. Now import the SAN VGs.

importvg -y SANVG hdiskX     (hdiskX is the disk with the same PVID as one of the vpaths present in SANVG before the migration)

13. Mount all the filesystems that were unmounted in step 1, or reboot the LPAR if applicable.

14. Run the command below to verify that each MPIO disk has 4 paths.
pcmpath query device

15. If this is a DB2 node with raw logical volumes, after the DB2 start script has run,
list the raw LV device permissions. Compare them to what you recorded prior to the change.
If they have reverted to root:system, change them back to how they were
before.

16. On Domino Notes nodes that had local /notes directories with links, recreate the links
as recorded previously, as in the sketch below.
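
Using the earlier example from yyyy, recreating the directory and link might look like this (a sketch; the /xxxxx path and notesa:notes ownership are the placeholders from the recorded output):

# Recreate /notes and the recorded symlink, then restore its ownership
mkdir -p /notes
ln -s /xxxxx /notes/xxxxx
chown -h notesa:notes /notes/xxxxx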