
Monday 16 August 2010

Solaris ZFS raw device

Here I am creating a ZFS volume to use as a raw partition for a Sybase database.
First, list the available zpools:

bash-3.00# zpool list
NAME     SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT
dbpool   19.9G  1.32G  18.6G     6%  ONLINE  -

Create a test volume of 10 MB. (Note the -V option to the zfs command, which creates a device tree for the volume under /dev/zvol. This option is required only when you want the device to appear under /dev/zvol.)

bash-3.00# zfs create -V 10M dbpool/test
Create a UFS file system on the new device:
bash-3.00# newfs /dev/zvol/rdsk/dbpool/test
newfs: construct a new file system /dev/zvol/rdsk/dbpool/test: (y/n)? y
Warning: 4130 sector(s) in last cylinder unallocated
/dev/zvol/rdsk/dbpool/test:      20446 sectors in 4 cylinders of 48 tracks, 128 sectors
        10.0MB in 1 cyl groups (14 c/g, 42.00MB/g, 20160 i/g)
super-block backups (for fsck -F ufs -o b=#) at:
32,

The newly created ZFS volume now appears in the zfs list output:

bash-3.00# zfs list
NAME              USED  AVAIL  REFER  MOUNTPOINT
dbpool            1.33G  18.2G    21K  legacy
dbpool/apps         24K  10.0G    24K  /app
dbpool/lotus      1.32G  18.2G  1.32G  /opt/lotus
dbpool/notesdata    21K  18.2G    21K  /notesdata
dbpool/noteslogs    21K  18.2G    21K  /noteslogs
dbpool/test         10M  18.2G    24K  -

It is ready to mount now.
bash-3.00# mount /dev/zvol/dsk/dbpool/test /mnt
bash-3.00# df -h |grep dbpool
dbpool/lotus             20G   1.3G    18G     7%    /opt/lotus
dbpool/notesdata         20G    21K    18G     1%    /notesdata
dbpool/noteslogs         20G    21K    18G     1%    /noteslogs
dbpool/apps              10G    24K    10G     1%    /app
/dev/zvol/dsk/dbpool/test   7.5M   1.0M   5.7M    16%    /mnt

Below are the paths for the ZFS block device and raw device.

Block Device=/dev/zvol/dsk/dbpool/test
Raw Device=/dev/zvol/rdsk/dbpool/test
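For the Sybase use case mentioned at the top, it is the raw device that the database gets pointed at, bypassing UFS entirely. A hedged sketch of what that looks like: the device name "testdev" and the size are made-up examples, not from a real Sybase setup. The snippet just prints the disk init SQL; in practice a DBA would pipe it to isql against a running ASE server.

```shell
# Print the Sybase "disk init" statement for the raw zvol created above.
# testdev and the 10M size are illustrative assumptions.
RAW=/dev/zvol/rdsk/dbpool/test
cat <<EOF
disk init
    name = "testdev",
    physname = "$RAW",
    size = "10M"
go
EOF
```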


Ref:
Solaris 10 ZFS Essentials

Monday 9 August 2010

WebDAV configuration with the Apache web server

Web-based Distributed Authoring and Versioning (WebDAV) is a set of methods based on the Hypertext Transfer Protocol (HTTP) that facilitates collaboration between users in editing and managing documents and files stored on World Wide Web servers.

The WebDAV protocol makes the Web a readable and writable medium. It provides a framework for users to create, change and move documents on a server (typically a web server or "web share").

Apache configuration for WebDAV:

1. Make sure mod_dav_fs is loaded with Apache

[root@shimna ]# apachectl -t -D DUMP_MODULES|grep  dav_fs_module
dav_fs_module (shared)
Syntax OK

The above command lists all the modules loaded by Apache and greps for dav_fs_module.

2. Create a lock file owned by the httpd user, because the DAVLockDB must be writable by the web server process.

To find the httpd user and group names, grep for User and Group in httpd.conf as below. Here both the username and the group name are apache.

[root@shimna]# egrep 'User |Group ' /etc/httpd/conf/httpd.conf|grep -v ^#

User apache
Group apache
Now we can create the lock file.
#mkdir /var/lib/dav/
#touch /var/lib/dav/lockdb
#chown -R apache:apache /var/lib/dav/

3. Create an Apache password file for authentication. Below are the steps to add a user latheefp.

[root@shimna dav]#  htpasswd -c /etc/httpasswd latheefp
New password:
Re-type new password:
Adding password for user latheefp

4. Edit the Apache configuration file.
This is the default WebDAV-related line in the configuration file:
    # Location of the WebDAV lock database.
    DAVLockDB /var/lib/dav/lockdb

 

We have to edit the above to suit our configuration, like below:

<IfModule mod_dav_fs.c>
# Location of the WebDAV lock database.
DAVLockDB /var/lib/dav/lockdb
# Set an alias so that /locker/audit/web/ is served as /webdav
Alias /webdav /locker/audit/web/
<Directory /locker/audit/web/>
DAV On
<Limit PUT POST DELETE PROPFIND PROPPATCH MKCOL COPY MOVE LOCK UNLOCK>
AuthName "WebDAV for unixindia"
AuthType Basic
# Replace the lines below accordingly
AuthUserFile /etc/httpasswd
Require user latheefp
</Limit>
</Directory>
</IfModule>

 

5. Restart Apache

[root@shimna ]# /etc/init.d/httpd restart

Stopping httpd:                    [  OK  ]

Starting httpd                     [  OK  ]
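Before moving to Windows, the share can be sanity-checked from any client with curl: a DAV-enabled URL answers a PROPFIND request with a 207 Multi-Status XML body. The sketch below only prints the command (it assumes the server is reachable as shimna, the hostname used in this post); run the printed line from a client.

```shell
# Build the verification command rather than running it here, since it
# needs the live server. PROPFIND with "Depth: 1" lists the collection.
URL="http://shimna/webdav/"
echo "curl -u latheefp -X PROPFIND -H 'Depth: 1' -i $URL"
```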

 

We have completed all the required settings on the server side. Now let us see how to access it from a Windows system.

 

From Windows, open My Network Places and click "Add a network place".

Click Next.

Click "Choose another network location".

Type the path to the WebDAV directory (here http://shimna/webdav, the alias configured above) and click Next.

Provide the username and password (here latheefp and the password created with the htpasswd command during the server configuration).

Follow the remaining screens (name the network place, click Finish, then browse it). Provide the same username and password when you open the WebDAV share as a Windows drive for the first time.

Now the remote WebDAV share is ready to access as a Windows directory. As a quick test, I created a test dir in it.

How to write your own Ganglia gmetric monitors

 
To define a new metric to monitor, below is an example.
This metric reads the number of users currently logged in to the system and displays a corresponding graph in the Ganglia front end.
Path for the gmetric command:
Solaris: /usr/local/bin/gmetric
AIX: /opt/freeware/bin/gmetric
Linux: /usr/bin/gmetric
This command is available to all users.

Below is the command to monitor the number of users and display the corresponding graph in the Ganglia front end:
/usr/local/bin/gmetric --name Current_Users --value `who |wc -l` --type int32 --unit current_users
Here:
/usr/local/bin/gmetric -> the Ganglia client command
--name Current_Users -> this will be the name of the graph
--value `who |wc -l` -> the value (the command should return a single number)
--type int32 -> the type of the value (since it is a number, int32)
--unit current_users -> the unit label shown on the graph
Now you can either run this command from crontab or just loop it as below.
#while true; do /usr/local/bin/gmetric --name Current_Users --value `who |wc -l` --type int32 --unit current_users; sleep 10; done
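A slightly tidier variant is to wrap the call in a small script that cron can run; the report helper below is my own naming, not part of Ganglia. It falls back to echo when gmetric is not installed, so the sketch can be dry-run on any machine.

```shell
#!/bin/sh
# Pick the per-OS gmetric path (see the list above); fall back to a
# dry-run echo on machines without Ganglia installed.
GMETRIC=/usr/bin/gmetric
[ -x "$GMETRIC" ] || GMETRIC="echo gmetric"

# report <name> <value> <unit>: push one int32 metric to Ganglia.
report() {
    $GMETRIC --name "$1" --value "$2" --type int32 --unit "$3"
}

report Current_Users "$(who | wc -l)" current_users
```

More metrics can then be added as extra report lines in the same script.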
Now the graph is visible on the Ganglia portal.


Ref:
The Art of Capacity Planning: Scaling Web Resources

Thursday 5 August 2010

Creating more than 8 loopback devices in Linux

By default Linux supports only 8 loopback devices, which means we can mount at most 8 loopback devices (e.g. 8 ISO images); the maximum the OS supports is 64. If you try to mount a 9th device, you may get a "mount: could not find a spare loop device" error. The script below creates the loopback devices from 8 to 63:

for ((i=8;i<64;i++)); do
[ -e /dev/loop$i ] || mknod -m 0600 /dev/loop$i b 7 $i
done
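For reference, the mknod arguments decode as: b for a block device, major number 7 for the Linux loop driver, and the minor number selecting the individual device. A dry-run variant that only prints what would be created (drop the echo and run as root to apply):

```shell
# Print the mknod commands for loop devices 8 through 63.
for i in $(seq 8 63); do
    echo mknod -m 0600 /dev/loop$i b 7 $i
done
```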


Enjoy... Now you can mount up to 64 loopback devices.