The Twelve-Factors Kubernetes


“Kubernetes is the Linux of the cloud.” This quote by Kelsey Hightower during KubeCon 2017 in Austin emphasizes the rise of Kubernetes among modern cloud infrastructures.

This rise is partly driven by the developer community, but also by the web giants such as Google, Amazon, Alibaba or Red Hat, who have invested a lot in this technology and keep contributing to its improvement and smoothing its integration into their respective ecosystems. EKS for AWS, GKE for Google and AKS for Azure are good illustrations of that.

This article lists 12 basic rules and good practices to know in order to start using Kubernetes optimally. The list is relevant to anyone, developer or sysadmin, who uses K8s daily.

I. 1 Pod = 1 or n containers

The naming is important to make sure everyone is on the same page about what’s what. In the Kubernetes world, a Pod is the smallest computing unit deployable on a cluster. It’s made of one or more containers. The containers within a Pod share the same IP, the same storage and are co-located on the same node of the cluster.
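
To make this concrete, here is a minimal sketch of a two-container Pod (the names and images are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  containers:
    - name: app              # main application container
      image: nginx
      ports:
        - containerPort: 80
    - name: log-agent        # sidecar sharing the Pod's network and storage
      image: busybox
      command: ["sh", "-c", "tail -f /dev/null"]   # placeholder long-running process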

To go further: https://kubernetes.io/docs/concepts/workloads/pods/pod/

II. Labels everywhere

Most Kubernetes resources can be labelled: Pods, Nodes, Namespaces, etc. Labelling is done by injecting a key-value pair into the metadata of the resource. Labelling components allows for two things:

  • A technical use, as many inter-dependent resources use labels to identify one another. For instance, I can label part of my Nodes “zone: front” because these nodes are likely to host web applications. I then assign an affinity to my frontend Pods so that they get hosted by the nodes labelled “zone: front”.
  • An organisational use: assigning labels allows you to easily identify resources and query them efficiently. For instance, to retrieve all nodes in the front zone, I can run:

$> kubectl get nodes -l zone=front
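
On the Pod side, the simplest way to express that affinity is a nodeSelector matching the node label; a minimal sketch (the names and image are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: frontend
  labels:
    app: frontend
spec:
  nodeSelector:
    zone: front        # schedule only on nodes labelled "zone: front"
  containers:
    - name: web
      image: nginx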

To go further: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/

III. Infrastructure as Code and versioning

All Kubernetes resources can be written as YAML or JSON files. Creating a resource from a file is done with the command line:

$> kubectl apply -f MYFILE.{yaml,json}

The apply command does a smart diff, so it only creates the resource if it wasn’t already there, updates it if the file was changed, and does nothing otherwise.

The use of files allows you to track, version and reproduce the complete system at any time. It is therefore a commonly adopted practice to version the K8s resource description files with the same rigor as the code.

To go further: https://kubernetes.io/docs/concepts/cluster-administration/manage-deployment/#kubectl-apply

IV. A Service to expose

Pods never communicate directly with one another; they go through a Service, because Pods are volatile and short-lived across the cluster. During some maintenance operations, Pods may migrate from one node to another. These same Pods may also restart, scale out, or even be destroyed, when upgrading for instance. In each of these cases, the Pod IP changes, as well as its name. The Service is a Kubernetes resource located in front of the Pods that exposes some of their ports on the network. Services have a fixed name and a fixed dedicated IP, so you can access your Pods whatever their IPs or names. The matching between Services and Pods relies on labels. When a Service matches several Pods, it load-balances the traffic among them with a round-robin algorithm.
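
As an illustration, a minimal Service sketch that matches Pods labelled “app: frontend” and load-balances port 80 across them (the names are assumptions):

apiVersion: v1
kind: Service
metadata:
  name: frontend-svc
spec:
  selector:
    app: frontend        # matches Pods carrying this label
  ports:
    - port: 80           # fixed port exposed by the Service
      targetPort: 80     # port the container listens on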

To go further: https://kubernetes.io/docs/tutorials/kubernetes-basics/expose-intro/

V. ConfigMap and Secret to configure

ConfigMaps and Secrets are Kubernetes resources for managing Pod configuration. The configuration is described as a set of key-value pairs. These configurations are then injected into the Pods as environment variables or as configuration files mounted into the containers.
The use of these resources decouples the Pod description from its configuration. Whether they are written in YAML or JSON, the configurations are versionable (except for Secrets, which hold sensitive information).
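
A minimal sketch of a ConfigMap and its injection as an environment variable (the names and key are illustrative):

apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  DB_HOST: db.example.com
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
    - name: app
      image: nginx
      env:
        - name: DB_HOST              # exposed to the process as $DB_HOST
          valueFrom:
            configMapKeyRef:
              name: app-config
              key: DB_HOST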

To go further: https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/

VI. Limit and Request to control resource utilization

Among the many Pod configuration options, it is possible to define the resources requested and usable by the Pod (CPU and memory):

  • requests: this configuration is applicable to CPU and memory. It defines the minimum resources the Pod needs to run. These values are used by the scheduler at the time of node allocation. It also enables auto-scaling: the target CPU utilisation is based on the requested CPU, and the Pod autoscaler – which is also a Kubernetes resource – will automatically scale the number of Pods up or down to reach it.
  • limits: just like requests, this configuration is applicable to CPU and memory. It defines the maximum amount of resources usable by the Pod. Defining these parameters prevents a failing Pod from compromising the whole cluster by consuming all its resources.

If the Kubernetes cluster administrator has defined resource quotas on a Namespace, defining requests and limits becomes mandatory, or the Pod won’t be scheduled. For the cases where these values aren’t defined, the administrator may also define default values in a K8s resource named LimitRange.
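
In a Pod spec, requests and limits are declared per container; a minimal sketch (the values are arbitrary):

apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
    - name: app
      image: nginx
      resources:
        requests:
          cpu: "250m"        # minimum guaranteed: a quarter of a core
          memory: "64Mi"
        limits:
          cpu: "500m"        # hard ceiling for the container
          memory: "128Mi"    # exceeding this gets the container killed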

To go further: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/

VII. Think of the Pods lifecycle

It is possible to deploy a Pod by describing its configuration in a YAML/JSON file and injecting it into K8s with the kubectl client. Be careful: this method does not benefit from the Pod resilience K8s offers by design. If the Pod crashes, it won’t be automatically replaced. It is recommended to use a Deployment instead. This K8s object lets you describe a Pod along with its configuration while hiding the resilience complexity. Indeed, the Deployment generates a ReplicaSet, whose only goal is to make sure that the number of running Pods matches the desired number of Pods. It also provides the ability to scale Pods at will. The Deployment also allows you to configure deployment strategies; it is for instance possible to define a rolling-update strategy for rolling out a new version of a Pod’s container.

The following command starts Pods (for instance, an Nginx):

$> kubectl run nginx --image=nginx --replicas=2

This command generates a Deployment with a Pod running the Nginx container. The same Deployment also generates the ReplicaSet, which ensures 2 Pods are running at any time.
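
The declarative equivalent of that command is a Deployment manifest; a minimal sketch:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 2                  # the generated ReplicaSet keeps 2 Pods running
  selector:
    matchLabels:
      app: nginx
  template:                    # the Pod template
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx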

To go further: https://kubernetes.io/docs/concepts/workloads/controllers/deployment/

VIII. LivenessProbe to monitor

As we saw, the ReplicaSet ensures the number of running Pods matches the number of desired Pods, restarting any failing Pod. It is also possible to configure the Pod’s resiliency at the functional level, with the LivenessProbe option: it defines a check run periodically against the container, and provides the ability to automatically restart the container if the check fails.

Just like the LivenessProbe monitors the health of an application, the ReadinessProbe monitors when an application is ready to receive traffic after starting. This is useful for an application that runs tasks before it actually starts serving (e.g. data injection).
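
Both probes are declared on the container; a minimal sketch, assuming the application answers HTTP on port 80 and exposes a /healthz endpoint (an assumption):

apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
    - name: app
      image: nginx
      livenessProbe:             # container is restarted if this check fails
        httpGet:
          path: /healthz
          port: 80
        initialDelaySeconds: 15
        periodSeconds: 10
      readinessProbe:            # no traffic is routed until this check passes
        httpGet:
          path: /healthz
          port: 80
        initialDelaySeconds: 5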

To go further: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/

IX. Latest is not a version

K8s is a container orchestrator, and the image of the container to deploy is specified in the Pod configuration. The naming of an image is composed as such:

<Registry name>/<Container name>:<tag or version>

It is a common practice to increment the image version just like you increment the version of a code base, but also to assign the tag “latest” to the most recently built image.
Configuring a Pod to deploy an image with the tag “latest” is not a good practice, for several reasons:

  • No control over the deployed version, and possible side effects related to new versions of the image components.
  • Latest may be buggy.
  • It is possible to configure the Pods’ image “pull” strategy. It’s the “imagePullPolicy” option, which can have 3 values:
    • IfNotPresent: pull the image only if it isn’t locally available on the node
    • Always: always pull the image
    • Never: never pull the image

Note that with the “latest” tag, imagePullPolicy defaults to Always; for any other tag it defaults to IfNotPresent. If you combine the “latest” tag with an explicit IfNotPresent policy, Kubernetes will fetch the “latest” image at the first deployment only; as the image is then locally present on the node, it won’t be downloaded again from the registry for subsequent deployments, even if a new “latest” image was pushed.
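
The safe combination is an explicit version tag, optionally with an explicit pull policy; a minimal sketch (the registry and version are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
    - name: app
      image: registry.example.com/app:1.4.2   # pinned version, not "latest"
      imagePullPolicy: IfNotPresent           # safe, since the tag never changes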

To go further: https://kubernetes.io/docs/concepts/configuration/overview/#container-images

X. Pods are stateless

Pods are short-lived and volatile: they can be moved to other nodes in case of maintenance, deployments or reboots. They can also – and that’s a big perk of K8s-like systems – scale on demand. The inbound flow to the Pods is load-balanced by the Service in front of them. That’s why applications hosted on K8s must use a third-party service to store data. For instance, an e-commerce website storing session information (say, a shopping cart) as files within a container will lose that data when the Pod scales or restarts.

The solutions to address this issue vary depending on the use case. For instance, a key-value store (Redis, Memcached) can be considered for session data. For a file hosting application, an object storage solution such as AWS S3 will be favored.

To go further: https://kubernetes.io/docs/tasks/run-application/run-stateless-application-deployment/

XI. Distributed Storage Volumes

As we have seen, your applications should be stateless. You may however need to deploy stateful components requiring a storage layer, such as a database. Kubernetes provides the ability to mount volumes within the Pods. It then becomes possible to mount a volume provided by AWS, Azure or Google’s storage services. The storage is then external to your cluster and remains attached to the Pod in case of a redeployment to a different node. It is also possible to mount a volume from the node hosting the Pod into the Pod itself, but this solution should be avoided: if the Pod is migrated to another node, it loses access to all the data stored on the host volume.
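
The usual pattern is a PersistentVolumeClaim that the Pod mounts, letting the cluster provision the external volume; a minimal sketch (storage size and image are assumptions):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: db-data
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: db
spec:
  containers:
    - name: db
      image: postgres
      volumeMounts:
        - name: data
          mountPath: /var/lib/postgresql/data   # survives rescheduling to another node
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: db-data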

To go further: https://kubernetes.io/docs/concepts/storage/volumes/

XII. My applications are 12-factor apps

The application code that will eventually be deployed to a Kubernetes cluster has to respect a set of rules. The 12-factor app methodology is a set of advice/good practices created by Heroku. Heroku is a PaaS provider hosting applications as containers, and these principles are a way to best operate code meant to be containerized.

The main recommendations are:

  • Code versioning (GIT)
  • Providing a health check URL
  • Stateless application
  • Environment variable based configuration
  • Log output to standard output or standard error
  • Degraded mode management
  • Graceful start/stop


SUPERVISORD # Controlling SOLR


Supervisord or Supervisor daemon is an open source process management system. In a nutshell: if a process crashes for any reason, Supervisor restarts it. Supervisor is a client/server system that allows its users to monitor and control a number of processes on UNIX-like operating systems.

I already published a post a few days back about controlling a Node.js program. I came across another situation where a customer asked to control Solr using supervisord. The customer was managing Solr start/stop as a systemd unit, so the task at hand was to use /etc/init.d/solr start as the command. I had difficulty using /etc/init.d/solr start as the command in /etc/supervisord.d/solr.conf: when I tried to control Solr this way, Solr got started, but the exact state of the process was not passed back to supervisord, so supervisord was under the impression that the process had not started and kept restarting it again and again.
The error I was getting on running the supervisorctl status solr command:

Solr            BackOff   Exited too quickly (process log may have details)

which then turned to:

Solr            Fatal     Exited too quickly (process log may have details)

So in my log files I kept getting this message, and when the maximum number of retries was reached, the state turned fatal. The process was actually still running at first, but now if someone killed it or it crashed, it would not be restarted. So Solr was in an unmanaged state, which was useless for supervisor control.

I researched a bit further and found this statement in the supervisord documentation:

“Programs meant to be run under supervisor should not daemonize themselves. Instead, they should run in the foreground. They should not detach from the terminal from which they are started. The easiest way to tell if a program will run in the foreground is to run the command that invokes the program from a shell prompt.”

So instead of using /etc/init.d/solr start, I used the actual solr command with -f to run it in the foreground.

My configuration after the update looked like this:

[program:solr]
user=apache
command=/usr/local/solr/bin/solr start -f
directory=/usr/local/solr/bin/
autostart=true
autorestart=true
startsecs=30
startretries=3
numprocs=1
redirect_stderr=false
stdout_logfile=/var/log/solr-out
stdout_logfile_maxbytes=10MB
stdout_logfile_backups=10
stdout_events_enabled=true
stderr_logfile=/var/log/solr-err
stderr_logfile_maxbytes=10MB
stderr_logfile_backups=5
stderr_events_enabled=true
environment=SOLR_INCLUDE=/etc/default/solr.in.sh

Now when I ran supervisorctl status solr, I got the correct output. I tested it by explicitly killing Solr again and again; it got restarted as expected.

# supervisorctl status solr

solr RUNNING pid 17341, uptime 1:10:32

I hope this will be a useful read for all of you.


#SAP #SAP HANA – Core Architecture

SAP HANA was initially developed in Java and C++ and designed to run only on the SUSE Linux Enterprise Server 11 operating system. The SAP HANA system consists of multiple components that together provide the computing power of the HANA system.

  • The most important component of the SAP HANA system is the Index Server, which contains the SQL/MDX processor that handles query statements for the database.
  • The HANA system also contains the Name Server, Preprocessor Server, Statistics Server and XS Engine, which is used to communicate with and host small web applications, plus various other components.

SAP Hana Core Architecture

Index Server

The Index Server is the heart of the SAP HANA database system. It contains the actual data and the engines for processing that data. When SQL or MDX statements are fired at the SAP HANA system, the Index Server takes care of these requests and processes them. All HANA processing takes place in the Index Server.

The Index Server contains the data engines that handle all SQL/MDX statements that come to the HANA database system. It also has the Persistence Layer, which is responsible for the durability of the HANA system and ensures the HANA system is restored to its most recent state after a restart or system failure.

The Index Server also has the Session and Transaction Manager, which manages transactions and keeps track of all running and closed transactions.


Index Server − Architecture

SQL/MDX Processor

It is responsible for processing SQL/MDX transactions with the data engines responsible for running queries. It segments all query requests and directs them to the correct engine for performance optimization.

It also ensures that all SQL/MDX requests are authorized and provides error handling for efficient processing of these statements. It contains several engines and processors for query execution:

  • MDX (MultiDimensional Expressions) is the query language for OLAP systems, like SQL is for relational databases. The MDX Engine is responsible for handling queries and manipulating the multidimensional data stored in OLAP cubes.
  • The Planning Engine is responsible for running planning operations within the SAP HANA database.
  • The Calculation Engine converts data into calculation models to create a logical execution plan supporting the parallel processing of statements.
  • The Stored Procedure Processor executes procedure calls for optimized processing; it converts OLAP cubes into HANA-optimized cubes.

Transaction and Session Management

It is responsible for coordinating all database transactions and keeping track of all running and closed transactions.

When a transaction is executed or fails, the Transaction Manager notifies the relevant data engine to take the necessary actions.

The Session Management component is responsible for initializing and managing sessions and connections to the SAP HANA system using predefined session parameters.

Persistence Layer

It is responsible for the durability and atomicity of transactions in the HANA system. The Persistence Layer provides a built-in disaster recovery system for the HANA database.

It ensures the database is restored to its most recent state and that all transactions are completed or undone in case of a system failure or restart.

It also manages the data and transaction logs, and contains the data backup, log backup and configuration backup of the HANA system. Savepoints are written to the Data Volumes via a Savepoint Coordinator, which is normally set to take a savepoint every 5-10 minutes.

Preprocessor Server

The Preprocessor Server in the SAP HANA system is used for text data analysis.

The Index Server uses the Preprocessor Server for analyzing text data and extracting information from it when text search capabilities are used.

Name Server

The Name Server contains the System Landscape information of the HANA system. In a distributed environment there are multiple nodes, each with multiple CPUs; the Name Server holds the topology of the HANA system and has information about all the running components and how data is spread across them.

  • The topology of the SAP HANA system is recorded here.
  • It decreases re-indexing time, as it holds which data is on which server in a distributed environment.

Statistics Server

This server checks and analyzes the health of all components in the HANA system. The Statistics Server is responsible for collecting data related to system resources, their allocation and consumption, and the overall performance of the HANA system.

It also provides historical data related to system performance for analysis purposes, to check and fix performance-related issues in the HANA system.

XS Engine

The XS Engine helps external Java- and HTML-based applications to access the HANA system with the help of XS clients, as the SAP HANA system contains a web server which can be used to host small Java/HTML-based applications.


The XS Engine transforms the persistence model stored in the database into a consumption model for clients, exposed via HTTP/HTTPS.

SAP Host Agent

The SAP Host Agent should be installed on all machines that are part of the SAP HANA system landscape. The SAP Host Agent is used by the Software Update Manager (SUM) to install automatic updates to all components of the HANA system in a distributed environment.

LM Structure

The LM structure of the SAP HANA system contains information about the current installation details. This information is used by the Software Update Manager to install automatic updates on HANA system components.

SAP Solution Manager (SAP SOLMAN) diagnostic Agent

This diagnostic agent provides all data to SAP Solution Manager to monitor the SAP HANA system. The agent provides all information about the HANA database, including its current state and general information.

It provides configuration details of the HANA system when SAP SOLMAN is integrated with the SAP HANA system.

SAP HANA Studio Repository

The SAP HANA Studio Repository helps HANA developers update the current version of HANA Studio to later versions. The Studio Repository holds the code which does this update.

Software Update Manager for SAP HANA

The SAP Marketplace is used to install updates for SAP systems. The Software Update Manager for the HANA system helps in updating the HANA system from the SAP Marketplace.

It is used for software downloads, customer messages, SAP Notes and requesting license keys for the HANA system. It is also used to distribute HANA Studio to end users’ systems.

#Monitoring #Linux #processes Monitoring Processes with Supervisord


If you are a Linux administrator, at some point you’ll likely find yourself writing a script which needs to run all the time – a “long-running script”. These are scripts that shouldn’t stay down if there’s an error, and that should restart when the system reboots.

To accomplish this, we need something to watch these scripts. Such tools are process watchers: they watch processes, restart them if they fail, and ensure they start on system boot.

What might such a script be? Well, most things we install already have mechanisms in place for process watching. For example, Upstart or Systemd. These are tools used by many systems to watch over important processes. When we install PHP5-FPM, Apache and Nginx with our package managers, they often integrate with such systems so that they are much less likely to fail without notice.

However, we might find that we need some simpler solutions. For example, I often make use of a NodeJS script to listen to web hooks (often from Github) and take actions based on them. NodeJS can handle HTTP requests and take action on them all in the same process, making it a good fit for a small, quick one-off service for listening to web hooks.

These smaller scripts might not merit working through Upstart and Systemd (although the two are worth learning about).

Here’s an example script – we’ll make a quick service in Node. This NodeJS script will live at /srv/http.js:

var http = require('http');

function serve(ip, port)
{
        http.createServer(function (req, res) {
            res.writeHead(200, {'Content-Type': 'text/plain'});
            res.write("\nSome Secrets:");
            res.write("\n"+process.env.SECRET_PASSPHRASE);
            res.write("\n"+process.env.SECRET_TWO);
            res.end("\nThere's no place like "+ip+":"+port+"\n");
        }).listen(port, ip);
        console.log('Server running at http://'+ip+':'+port+'/');
}

// Create a server listening on all networks
serve('0.0.0.0', 9000);

All this service does is take a web request and print out a message. It’s not useful in reality, but good for our purposes. We just want a service to run and monitor.

Note that the service prints out two environmental variables: “SECRET_PASSPHRASE” and “SECRET_TWO”. We’ll see how we can pass these into a watched process.

Supervisord

Supervisord is a simple and popular choice for process monitoring. Let’s check out the package on CentOS:

Installation

To install Supervisord, we can simply run the following:

yum install supervisor

Installing it as a package gives us the ability to treat it as a service:

systemctl start supervisord
systemctl enable supervisord

Configuration

Configuration for Supervisord is found in /etc/. If we look at the configuration file supervisord.conf, we’ll see the following at the bottom:

[include]
files = supervisord.d/*.ini

So, any files found in /etc/supervisord.d/ and ending in .ini will be included. This is where we can add configurations for our services.

Now we need to tell Supervisord how to run and monitor our Node script. We’ll create a configuration file that tells Supervisord how to start and monitor it.

Let’s create a configuration for it called webhooks.ini. This file will be created at /etc/supervisord.d/webhooks.ini:

[program:nodehook]
command=/usr/bin/node /srv/http.js
directory=/srv
autostart=true
autorestart=true
startretries=3
stderr_logfile=/var/log/webhook/nodehook.err.log
stdout_logfile=/var/log/webhook/nodehook.out.log
user=www-data
environment=SECRET_PASSPHRASE='this is secret',SECRET_TWO='another secret'

As usual, we’ll cover the options set here:

  • [program:nodehook] – Define the program to monitor. We’ll call it “nodehook”.
  • command – This is the command to run that kicks off the monitored process. We use “node” and run the “http.js” file. If you needed to pass any command line arguments or other data, you could do so here.
  • directory – Set a directory for Supervisord to “cd” into before running the process, useful for cases where the process assumes a directory structure relative to the location of the executed script.
  • autostart – Setting this “true” means the process will start when Supervisord starts (essentially on system boot).
  • autorestart – If this is “true”, the program will be restarted if it exits unexpectedly.
  • startretries – The number of retries to attempt before the process is considered “failed”.
  • stderr_logfile – The file to write any error output to.
  • stdout_logfile – The file to write any regular output to.
  • user – The user the process is run as.
  • environment – Environment variables to pass to the process.

Note that we’ve specified some log files to be created inside the /var/log/webhook directory. Supervisord won’t create the log directory if it does not exist, so we need to create it before running Supervisord:

sudo mkdir /var/log/webhook

Controlling Processes

Now that we’ve configured Supervisord to monitor our Node process, we can read the configuration in and then reload Supervisord, using the supervisorctl tool:

supervisorctl reread
supervisorctl update

Our Node process should be running now. We can check this by simply running supervisorctl:

$ supervisorctl
nodehook               RUNNING    pid 444, uptime 0:02:45

We can double check this with the ps command:

$ ps aux | grep node
www-data   444  0.0  2.0 659620 10520 ?  Sl   00:57   0:00 /usr/bin/node /srv/http.js

It’s running! If we check our localhost at port 9000, we’ll see the output written out by the NodeJS script, including the environment variables. The environmental variables are useful if we need to pass information or credentials to our script.

If your process is not running, try explicitly telling Supervisord to start process “nodehook” via supervisorctl start nodehook.

There are other things we can do with supervisorctl as well. Enter the control tool by running supervisorctl:

$ supervisorctl
nodehook     RUNNING    pid 444, uptime 0:15:42

We can try some more commands:

Get a menu of available commands:

supervisor> help
# Available commands output here

Let’s stop the process:

supervisor> stop nodehook
nodehook: stopped

Then we can start it back up:

supervisor> start nodehook
nodehook: started

We can use <ctrl+c> or type “exit” to get out of the supervisorctl tool.

These commands can also be run directly:

$ supervisorctl stop nodehook
$ supervisorctl start nodehook

Web Interface

We can configure a web interface which comes with Supervisord. This lets us see a list of all processes being monitored, as well as take action on them (restarting, stopping, clearing logs and checking output).

Inside of /etc/supervisord.conf, add this:

[inet_http_server]
port = *:9001    ; listen on all interfaces, port 9001
username = user  ; basic auth username
password = pass  ; basic auth password

If we access our server in a web browser at port 9001, we’ll see the web interface:

Clicking into the process name (“nodehook” in this case) will show the logs for that process.

#Containerization #Kubernetes: Introduction to Kubernetes #1

What is Kubernetes?

  • Project that was spun out of Google as an open source container orchestration platform.
  • Built from the lessons learned in the experiences of developing and running Google’s Borg and Omega.
  • Designed from the ground-up as a loosely coupled collection of components centered around deploying, maintaining and scaling workloads.


What does it do?

  • Known as the Linux kernel of distributed systems.
  • Abstracts away the underlying hardware of the nodes and provides a uniform interface for workloads to both be deployed on and consume the shared pool of resources.
  • Works as an engine for resolving state by converging the actual and the desired state of the system.


Decouples Infrastructure and Scaling

  • All services within Kubernetes are natively Load Balanced.
  • Can scale up and down dynamically.
  • Used both to enable self-healing and seamless upgrading or rollback of applications.

Kubernetes will ALWAYS try to steer the cluster to its desired state (see the manifest sketch after this exchange).

  • Me: “I want 3 healthy instances of redis to always be running.”
  • Kubernetes: “Okay, I’ll ensure there are always 3 instances up and running.”
  • Kubernetes: “Oh look, one has died. I’m going to attempt to spin up a new one.”
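
That “3 healthy instances of redis” request is nothing more than a desired-state declaration; a minimal sketch:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis
spec:
  replicas: 3            # Kubernetes converges the cluster towards 3 running Pods
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
        - name: redis
          image: redis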


What can Kubernetes REALLY do?

  • Autoscale Workloads
  • Blue/Green Deployments
  • Fire off jobs and scheduled cronjobs
  • Manage Stateless and Stateful Applications
  • Provide native methods of service discovery
  • Easily integrate and support 3rd party apps
  • Most importantly, it can use the same API across bare metal and every cloud provider.


Who “Manages” Kubernetes?


The governing body of the Kubernetes project is the Cloud Native Computing Foundation (CNCF). It is a child entity of the Linux Foundation and operates as a vendor-neutral governance group.

Kubernetes Terminology

#Pods

  • The atomic unit or smallest “unit of work” of Kubernetes.
  • Pods are one or MORE containers that share volumes, a network namespace, and are a part of a single context.


Generally, Pods have a short lifespan, unlike virtual machines, i.e. they are ephemeral.


#AIX #SDD #SDDPCM #Persistent Reserve – Reservation issue while migrating a server

With a new day come new challenges and new opportunities for new learnings.

I want to share with you the challenges I faced while migrating a server from old hardware to new hardware.

  1. The rootvg of the server was on a physical disk; alt_clone to a SAN disk was not working (possibly a limitation of SDD).
  2. I took a mksysb and restored it on the new hardware with a new profile, removed SDD and installed SDDPCM there.
  3. The application VG was on SAN disks, so I thought it wouldn’t cause any problem and the move would just be running a series of mkvdiskhostmap/rmvdiskhostmap commands on the SVC. I brought down the old server and mapped the application VG disks to the new LPAR, but they did not show up with any PVID; ideally they should have come up with the PVIDs reflected on the old server. I tried to set the PVID on a disk manually by running chdev -l <diskname> -a pv=yes, but it threw an error. I then looked at the VGDA information of the disk using readvgda -o <diskname>, and it showed no error. At first it struck me that the disk data could have been corrupted, but then all the disks could not have gone bad at the same time. After a lot of troubleshooting I suspected it could be related to disk reservation, so I instantly ran the command to check the reservation on one of the disks:

devrsrv -c query -l hdisk35, and it gave me the below output:

Device Reservation State Information
==================================================
Device Name                     :  hdisk35
Device Open On Current Host?    :  NO
ODM Reservation Policy          :  NO RESERVE
Device Reservation State        :  PR EXCLUSIVE

I quickly checked the same command on another server and found the Device Reservation State field reflecting NO RESERVE, which confirmed my doubt. I then ran devrsrv -f -l hdisk35 to break the reservation; querying again gave:

Device Reservation State Information
==================================================
Device Name                     :  hdisk35
Device Open On Current Host?    :  NO
ODM Reservation Policy          :  NO RESERVE

This cleared the PR EXCLUSIVE reserve, and I was able to see the PVID for all the disks.

I tried importvg -y <vgname> pvid, and voilà, it worked.

This link explains the SCSI reserves in a detailed manner.

So in my case the likely explanation is that the reservation was set to PR Exclusive on the old server.

Please do share your insights.

AIX# Using tools iptrace, snoop, tcpdump, wireshark, and nettl to trace packets

Creating, formatting, and reading packet traces is sometimes required to resolve problems with IBM® WebSphere® Edge Server. However, the most appropriate tool varies depending on the operating system.

Resolving the problem

Available for multiple operating systems
Wireshark is a useful and freely available tool that can read trace files and capture packets on almost any operating system.

Using iptrace on AIX®
You can use any combination of these options; you do not need to use them all:

-a Do NOT print out ARPs. Useful to clean up traces.
-s Limit trace to a source/client IP address, if known.
-d Limit trace to a destination IP, if known.
-b Capture bidirectional traffic (send and response packets).
-p Specify the port to be traced.
Example:

Run iptrace on AIX interface en1 to capture port 80 traffic from a single client IP to a server IP:
iptrace -a -i en1 -s clientip -b -d serverip -p 80 trace.out

This trace will capture both directions of the port 80 traffic on interface en1 between the clientip and serverip and send this to the raw file of trace.out.

Reproduce the problem, then run the following:
ps -ef | grep iptrace
kill -15 <iptrace PID>

Trace tools like Wireshark can read trace.out files created by iptrace.

Exception: it is not possible to collect a packet capture on AIX when using IBM Load Balancer for IPv4 and IPv6.

Using snoop on Solaris™

-v Include verbose output. Commonly used when dumping to pre-formatted output.
-o Dump in binary format. Output written to a binary file that is readable by Ethereal.
Example scenario:
snoop hme0 -v >snoop.out
snoop -o snoop.out

These commands capture all traffic on the hme0 interface. Use combinations of snoop options to meet your needs.

Warning: Using some options, packets may be corrupted by snoop.

Using tcpdump on Linux®
tcpdump has many options and a comprehensive man page.

A simple way to capture all packets to a binary file which is readable with Ethereal:

Example:
tcpdump -s 2000 -w filename.out

For a simple packet trace that is formatted and readable by any text editor, the following will listen on the default interface for all port 80 traffic.

Example:
tcpdump port 80 >filename.out

This will watch only the eth1 interface.

Example:
tcpdump -i eth1 >filename.out

Using Network Monitor with Microsoft® Windows®

  1. Start Network Monitor.
  2. Select the interface to listen on and click start.
  3. Once the needed traffic has been captured, click stop.
  4. Save the resulting file, which can be read by Network Monitor or Ethereal.

For additional information, see the technote “How to capture network traffic with Network Monitor”.

Using nettl on HP-UX
The nettl tool provides control of network tracing and logging.

Scenario:
/usr/sbin/nettl -start
/usr/sbin/nettl -stop
/usr/sbin/nettl -firmlog 0|1|2 -card dev_name …
/usr/sbin/nettl -log class … -entity subsystem …
/usr/sbin/nettl -status [log |trace |all]
/usr/sbin/nettl -traceon kind … -entity subsystem …
[-card dev_name …] [-file tracename] [-m bytes] [-size portsize]
[-tracemax maxsize] [-n num_files]
/usr/sbin/nettl -traceoff -entity subsystem …