
Tuesday 26 October 2010

Mounting LVM volume after reinstalling Linux

 

Today I reinstalled my Linux box with the latest Red Hat release, but I could not mount the LVs from two application VGs that live on SAN storage. Below are the steps I used to resolve the issue.

First, list and check the status of the available SAN disks (PVs, physical volumes). Here all three disks are visible, including the one backing VolGroup00 (the OS volume group):

[root@test3 ]# pvs
  PV         VG         Fmt  Attr PSize   PFree
  /dev/sda2  VolGroup00 lvm2 a-    29.88G       0
  /dev/sdb1  VolBsl     lvm2 a-    69.97G    2.72G
  /dev/sdc   OraEai     lvm2 a-   170.00G 1020.00M
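
As a side note, if any of the SAN disks had not shown up in pvs at all, a SCSI bus rescan followed by pvscan would usually make them visible. A minimal sketch, assuming the HBA is host0 (the host number varies per system):

[root@test3 ]# echo "- - -" > /sys/class/scsi_host/host0/scan
[root@test3 ]# pvscan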

The command below lists the existing volume groups (VGs) on each disk. Notice the status "Found exported volume group" in the output, which means these VGs are in an exported state and must be imported before they can be used. Let us import them in the next steps.
[root@test3 ]# vgscan
  Reading all physical volumes.  This may take a while...
  Found exported volume group "OraEai" using metadata type lvm2
  Found exported volume group "VolBsl" using metadata type lvm2
  Found volume group "VolGroup00" using metadata type lvm2
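
For reference, this exported state is typically left behind by a vgexport run on the previous installation (or on whichever host last owned the LUNs) before the SAN disks were handed over. A sketch of that step, not part of this recovery, would look like:

[root@test3 ]# vgexport OraEai
[root@test3 ]# vgexport VolBsl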

Now let us import them.
[root@test3 ]# vgimport OraEai
  Volume group "OraEai" successfully imported
[root@sdl003 mapper]# vgimport VolBsl
  Volume group "VolBsl" successfully imported

The volume groups are now imported; see the status below.

[root@test3 ]# vgscan
  Reading all physical volumes.  This may take a while...
  Found volume group "OraEai" using metadata type lvm2
  Found volume group "VolBsl" using metadata type lvm2
  Found volume group "VolGroup00" using metadata type lvm2

Now let us check the status of the logical volumes (LVs). All the volumes from the two imported VGs are inactive, and a volume must be in ACTIVE status before it can be mounted.
[root@test3 ]# lvscan
  inactive          '/dev/OraEai/LvOracle' [10.00 GB] inherit
  inactive          '/dev/OraEai/LvOraEai' [149.00 GB] inherit
  inactive          '/dev/OraEai/LvOraArchive' [10.00 GB] inherit
  inactive          '/dev/VolBsl/LogVol02' [48.81 GB] inherit
  inactive          '/dev/VolBsl/LogVol00' [4.53 GB] inherit
  inactive          '/dev/VolBsl/LogVol01' [3.91 GB] inherit
  inactive          '/dev/VolBsl/home_wbimbprd' [5.00 GB] inherit
  inactive          '/dev/VolBsl/var_mqsi' [5.00 GB] inherit
  ACTIVE            '/dev/VolGroup00/LogVol00' [25.97 GB] inherit
  ACTIVE            '/dev/VolGroup00/LogVol01' [3.91 GB] inherit

Let us activate each volume using lvchange -ay <lvname>. Here is a one-line loop that does this for every inactive LV.
[root@test3 ]# for i in `lvscan|grep inactive|awk -F\' {'print $2'}`; do lvchange -ay $i; done
[root@test3 ]# lvscan
  ACTIVE            '/dev/OraEai/LvOracle' [10.00 GB] inherit
  ACTIVE            '/dev/OraEai/LvOraEai' [149.00 GB] inherit
  ACTIVE            '/dev/OraEai/LvOraArchive' [10.00 GB] inherit
  ACTIVE            '/dev/VolBsl/LogVol02' [48.81 GB] inherit
  ACTIVE            '/dev/VolBsl/LogVol00' [4.53 GB] inherit
  ACTIVE            '/dev/VolBsl/LogVol01' [3.91 GB] inherit
  ACTIVE            '/dev/VolBsl/home_wbimbprd' [5.00 GB] inherit
  ACTIVE            '/dev/VolBsl/var_mqsi' [5.00 GB] inherit
  ACTIVE            '/dev/VolGroup00/LogVol00' [25.97 GB] inherit
  ACTIVE            '/dev/VolGroup00/LogVol01' [3.91 GB] inherit
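
As an aside, the same result can usually be achieved per volume group with vgchange, which activates every LV in the group in one shot; a sketch using the two VG names above:

[root@test3 ]# vgchange -ay OraEai
[root@test3 ]# vgchange -ay VolBsl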

Great, now I am able to mount the volumes.

[root@test3 ]# mount /dev/mapper/OraEai-LvOraArchive /oracle/eaitest/archive
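
To make these mounts persistent across reboots, the corresponding entries can be added to /etc/fstab. A minimal sketch for the archive volume above, assuming an ext3 filesystem (adjust the type and options to match your setup):

/dev/mapper/OraEai-LvOraArchive  /oracle/eaitest/archive  ext3  defaults  1 2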

Tuesday 19 October 2010

Importing and exporting zpool

For migrating a zpool from one system to another, below are the steps. 
Here, we use two systems (white and black) as the source and destination hosts. 
1. Creating a zpool named testpool on white using five disks. 

root@white # zpool create -f testpool  /dev/dsk/c4t6005076305FFC08C000000000000100Ad0  /dev/dsk/c4t6005076305FFC08C000000000000100Bd0  /dev/dsk/c4t6005076305FFC08C0000000000001012d0 /dev/dsk/c4t6005076305FFC08C0000000000001013d0  /dev/dsk/c4t6005076305FFC08C0000000000001014d0 
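
Before going further, a quick sanity check with zpool list and zpool status confirms the pool came up with all five disks online (output omitted here):

root@white # zpool list testpool
root@white # zpool status testpool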
2. Creating a ZFS file system named testfs.
root@white # zfs create testpool/testfs

3. Making a 9 GB file for testing purposes.
root@white # cd /testpool/testfs 
root@white # df . 
Filesystem             size   used  avail capacity  Mounted on 
testpool/testfs        9.8G    24K   9.8G     1%    /testpool/testfs 
root@white # mkfile 9G testfile 
root@white # df -h|grep testpool 
testpool               9.8G    25K   782M     1%    /testpool 
testpool/testfs        9.8G   9.0G   782M    93%    /testpool/testfs 
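At this point, zfs list gives a quick view of the datasets (and the space used by the test file) that will travel with the pool once it is exported:

root@white # zfs list -r testpool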
4. Now we can export the zpool from white. Before the export, it is mandatory that nothing on white is still using the datasets or devices belonging to testpool.
root@white # zpool export testpool

Once the zpool is exported, it disappears from the df output on the source server (white):

root@white # df -h|grep testpool
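
If the export had failed with a "pool is busy" style error, it would usually mean some process still had files open inside the pool; on Solaris, fuser can identify such processes. A sketch, assuming the default mount points:

root@white # fuser -cu /testpool/testfs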

Below are the steps to import the pool on the destination host (here, black). We assume that all five disks are accessible from both hosts.

1. To list all importable zpools and their status, run the command below. 
root@black # zpool import 
  pool: testpool 
    id: 15485920734056515199 
 state: ONLINE 
action: The pool can be imported using its name or numeric identifier. 
config: 
        testpool                                 ONLINE 
          c4t6005076305FFC08C000000000000100Ad0  ONLINE 
          c4t6005076305FFC08C000000000000100Bd0  ONLINE 
          c4t6005076305FFC08C0000000000001012d0  ONLINE 
          c4t6005076305FFC08C0000000000001013d0  ONLINE 
          c4t6005076305FFC08C0000000000001014d0  ONLINE 
The above output shows that all the disks belonging to testpool are available and the pool can be imported. 
2. Now it can be imported.
root@black # zpool import testpool
3. Verifying the imported zpool and the test file created on the white server.
root@black# df -h|grep testpool 
testpool               9.8G    25K   782M     1%    /testpool 
testpool/testfs        9.8G   9.0G   782M    93%    /testpool/testfs 
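A zpool status on black should likewise show the pool and all five disks ONLINE after the import (output omitted here):

root@black # zpool status testpool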
4. Example of importing a zpool under a different name. Here the testpool from white is imported as testpool1 on black (see the note on exporting first, after the listing below).
root@black # zpool import testpool testpool1 
root@white # df -h|grep testpool 
testpool1              9.8G    25K   782M     1%    /testpool1 
testpool1/testfs       9.8G   9.0G   782M    93%    /testpool1/testfs 
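Note that zpool import only works on pools that are not currently imported, so if testpool is already active on black it has to be exported before re-importing it under the new name; a sketch:

root@black # zpool export testpool
root@black # zpool import testpool testpool1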
5. Example for importing/recovering a destroyed zpool.

root@white # zpool destroy testpool1 
The zpool testpool1 is destroyed by the above command for testing purposes. To list destroyed zpools, the -D option must be used.

root@white # zpool import -D 
  pool: testpool1 
    id: 15485920734056515199 
 state: ONLINE (DESTROYED) 
action: The pool can be imported using its name or numeric identifier. 
        The pool was destroyed, but can be imported using the '-Df' flags. 
config: 
        testpool1                                ONLINE 
          c4t6005076305FFC08C000000000000100Ad0  ONLINE 
          c4t6005076305FFC08C000000000000100Bd0  ONLINE 
          c4t6005076305FFC08C0000000000001012d0  ONLINE 
          c4t6005076305FFC08C0000000000001013d0  ONLINE 
          c4t6005076305FFC08C0000000000001014d0  ONLINE 

Below is the command for importing (recovering) the destroyed zpool. Notice the -D and -f options used with the import command.

root@white # zpool import -Df testpool1 
root@white # df -h|grep testpool 
testpool1              9.8G    25K   782M     1%    /testpool1 
testpool1/testfs       9.8G   9.0G   782M    93%    /testpool1/testfs
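
If the original name is preferred, the recovered pool can be renamed back the same way, by exporting it and importing it again under its old name; a sketch:

root@white # zpool export testpool1
root@white # zpool import testpool1 testpool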