
Saturday, 19 March 2011

Configuring a Zpool with a Solaris Zone

Setup details:
Server: globalzone
Zone name: testzone
1. Creating a pool named testpool with two disks in the global zone

root@global # zpool create testpool c4t60050768018E8327B800000000000005d0s0 \
c4t60050768018E8327B800000000000008d0s0

2. Checking the status of created pool testpool 

root@global # zpool status -v testpool
  pool: testpool
 state: ONLINE
 scrub: none requested
config:


        NAME                                       STATE     READ WRITE CKSUM
        testpool                                   ONLINE       0     0     0
          c4t60050768018E8327B800000000000005d0s0  ONLINE       0     0     0
          c4t60050768018E8327B800000000000008d0s0  ONLINE       0     0     0


errors: No known data errors
3. Adding the pool to the zone
root@global # zoneadm -z testzone halt
root@global # zonecfg -z testzone
zonecfg:testzone> add dataset
zonecfg:testzone:dataset> set name=testpool
zonecfg:testzone:dataset> end
zonecfg:testzone> commit
zonecfg:testzone> exit
root@global # zoneadm -z testzone boot
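
As a quick check, the delegated pool should now be visible from inside the zone; zlogin can run the command without a full login (a sketch, output omitted):
root@global # zlogin testzone zfs list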

4. Creating the first file system, named mqm
root@testzone# zfs create testpool/mqm
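Step 5 below also sets quotas on testpool/apps and testpool/oraapps; assuming those file systems are created the same way:
root@testzone# zfs create testpool/apps
root@testzone# zfs create testpool/oraapps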
5. Setting a quota for each file system
root@testzone# zfs set quota=110G testpool/apps
root@testzone# zfs set quota=1G testpool/oraapps          
root@testzone# zfs set quota=4G testpool/mqm
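To confirm the quotas took effect, the quota property can be listed per file system (a quick check, output omitted):
root@testzone# zfs list -o name,quota -r testpool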
6. Changing the mount points from the non-global zone
bash-3.00# zfs set mountpoint=/apps testpool/apps
bash-3.00# zfs set mountpoint=/var/mqm  testpool/mqm
bash-3.00# zfs set mountpoint=/oracle/orapp testpool/oraapps
7. Since the pool itself does not need to be mounted, setting its mountpoint to legacy
bash-3.00# zfs set mountpoint=legacy testpool
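A final listing inside the zone should now show apps, mqm and oraapps at their new mount points (a quick check, output omitted):
bash-3.00# zfs list -o name,mountpoint -r testpool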

Monday, 16 August 2010

Solaris zfs raw device

Here I am creating a ZFS raw device to use as a raw partition for a Sybase database.
First, the list of available zpools:

bash-3.00# zpool list
NAME     SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT
dbpool   19.9G  1.32G  18.6G     6%  ONLINE  -

Creating a test volume of 10 MB. (Notice the -V option to the zfs command, which creates device nodes for the volume under /dev/zvol. This option is required for the device to appear under /dev/zvol.)

bash-3.00# zfs create -V 10M dbpool/test
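The -V option should have created both block and raw device nodes for the volume (a quick look, output omitted):
bash-3.00# ls -l /dev/zvol/dsk/dbpool /dev/zvol/rdsk/dbpool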
Creating a UFS file system on the new device
bash-3.00# newfs /dev/zvol/rdsk/dbpool/test
newfs: construct a new file system /dev/zvol/rdsk/dbpool/test: (y/n)? y
Warning: 4130 sector(s) in last cylinder unallocated
/dev/zvol/rdsk/dbpool/test:      20446 sectors in 4 cylinders of 48 tracks, 128 sectors
        10.0MB in 1 cyl groups (14 c/g, 42.00MB/g, 20160 i/g)
super-block backups (for fsck -F ufs -o b=#) at:
32,

Now the newly created ZFS volume appears in the list below:

bash-3.00# zfs list
NAME              USED  AVAIL  REFER  MOUNTPOINT
dbpool            1.33G  18.2G    21K  legacy
dbpool/apps         24K  10.0G    24K  /app
dbpool/lotus      1.32G  18.2G  1.32G  /opt/lotus
dbpool/notesdata    21K  18.2G    21K  /notesdata
dbpool/noteslogs    21K  18.2G    21K  /noteslogs
dbpool/test         10M  18.2G    24K  -

It's ready to mount now.
bash-3.00# mount /dev/zvol/dsk/dbpool/test /mnt
bash-3.00# df -h |grep dbpool
dbpool/lotus             20G   1.3G    18G     7%    /opt/lotus
dbpool/notesdata         20G    21K    18G     1%    /notesdata
dbpool/noteslogs         20G    21K    18G     1%    /noteslogs
dbpool/apps              10G    24K    10G     1%    /app
/dev/zvol/dsk/dbpool/test   7.5M   1.0M   5.7M    16%    /mnt

Below are the paths for the ZFS block device and raw device.

Block Device=/dev/zvol/dsk/dbpool/test
Raw Device=/dev/zvol/rdsk/dbpool/test
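
To have the file system mounted automatically at boot, an entry can be added to /etc/vfstab (a sketch; /mnt is just the example mount point used above):

/dev/zvol/dsk/dbpool/test  /dev/zvol/rdsk/dbpool/test  /mnt  ufs  2  yes  -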


Ref:
Solaris 10 ZFS Essentials