
Thursday 25 February 2010

Autofs in Red Hat/Fedora Linux

This configuration is for sharing a single home directory across multiple servers.

Server Side Configuration

Here the directories are shared with only one host (tiger).

[root@lion /]# cat /etc/exports

/home tiger(rw,root_squash)

/home/chimmu tiger(rw,root_squash)

/data3 tiger(rw,root_squash)

/entertainment tiger(rw,root_squash)

Starting the NFS server

[root@lion /]# service nfs start

Starting NFS services: [ OK ]

Starting NFS quotas: [ OK ]

Starting NFS daemon: [ OK ]

Starting NFS mountd: [ OK ]

Configuring the NFS server to come up automatically on the next reboot

[root@lion /]# chkconfig nfs on

Verifying the changes took effect

[root@lion /]# chkconfig --list |grep nfs

nfs 0:off 1:off 2:on 3:on 4:on 5:on 6:off

nfslock 0:off 1:off 2:off 3:on 4:on 5:on 6:off
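If /etc/exports is edited later while NFS is already running, the new entries can be picked up without restarting the service and verified from the server itself; both commands below are standard nfs-utils tools:

[root@lion /]# exportfs -ra

[root@lion /]# showmount -e lion

exportfs -ra re-reads /etc/exports and re-exports everything in it, and showmount -e lists what the server is currently exporting.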

Client Side Configuration.

Adding entries in /etc/auto.master (each line is: mount-point map-file options)

[root@tiger ~]# cat /etc/auto.master

/home /etc/auto.home --timeout 600

/entr /etc/auto.misc --timeout 600

/home /etc/auto.per --timeout 600

/pri /etc/auto.samba --timeout 600

[root@tiger ~]# cat /etc/auto.misc

entr lion:/entertainment

per lion:/home/per

[root@tiger ~]# cat /etc/auto.home

* -fstype=nfs,soft,intr,rsize=8192,wsize=8192,nosuid,tcp lion:/home/&

[root@tiger ~]# cat /etc/auto.per

per lion:/home/per

Starting autofs on the client and making it permanent across reboots

[root@tiger ~]# /etc/init.d/autofs start

Starting automount: [ OK ]

[root@tiger ~]# chkconfig autofs on

[root@tiger ~]# chkconfig --list|grep autofs

autofs 0:off 1:off 2:on 3:on 4:on 5:on 6:off

Checking that it is working fine.

See below: latheef's home directory is mounted from lion via the wildcard entry in auto.home (the * matches the name latheef and & substitutes it, so lion:/home/latheef is mounted on demand).

[root@tiger ~]# su - latheef

-bash-4.0$ df -h .

Filesystem Size Used Avail Use% Mounted on

lion:/home/latheef 184G 91G 84G 53% /home/latheef

The entertainment directory is mounted here from lion (available to all users, via auto.misc).

[root@tiger ~]# cd /entr/entr/

[root@tiger entr]# df -h .

Filesystem Size Used Avail Use% Mounted on

lion:/entertainment 459G 220G 216G 51% /entr/entr
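If the map files are changed later, Red Hat's autofs init script supports a reload action that re-reads the maps without unmounting filesystems that are in use:

[root@tiger ~]# /etc/init.d/autofs reload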

Wednesday 24 February 2010

How to create or check the md5 of a file

This program can be useful when developing shell scripts or Perl programs for software installation, file comparison, and detection of file corruption and tampering.
root@test1 # digest -a md5 latheef.tar
4173355258b4c8ce399686cc9a4ba868
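For the corruption and tampering checks mentioned above, a script can simply compare a stored checksum against a freshly computed one. A minimal sketch using the sum printed above (the file name and stored value are just this example's):

#!/bin/sh
known="4173355258b4c8ce399686cc9a4ba868"
current=`digest -a md5 latheef.tar`
if [ "$current" = "$known" ]; then
  echo "latheef.tar is intact"
else
  echo "latheef.tar has changed"
fi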

Tuesday 23 February 2010

Ganglia monitoring tool for UNIX Data centers - Installing and configuring

Ganglia is a scalable distributed monitoring system for high-performance computing systems such as clusters and Grids. It is based on a hierarchical design targeted at federations of clusters. It leverages widely used technologies such as XML for data representation, XDR for compact, portable data transport, and RRDtool for data storage and visualization.

Server Side configuration:
1. Installation of required packages.
The packages and dependencies below are required on the ganglia server (an installation example follows the list).
Packages
CSKamp_1.3.1_sparc.pkg -> CoolStack Apache server
ganglia-3.0.7-sol10-sparc-local -> ganglia package (required on both client and server)
rrdtool-1.2.19-sol10-sparc-local -> high-performance data logging and graphing system
Dependencies and libraries
libart_lgpl-2.3.19-sol10-sparc-local
libgcc-3.4.6-sol10-sparc-local
libpng-1.2.41-sol10-sparc-local
pkgutil-1.2.1,REV=2008.11.28-SunOS5.8-sparc-CSW.pkg
freetype-2.3.9-sol10-sparc-local
CSKruntime_1.3.1_sparc.pkg
zlib-1.2.3-sol10-sparc-local
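These are SVR4 package streams, so each one installs with pkgadd, for example:

root@server#pkgadd -d ganglia-3.0.7-sol10-sparc-local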
2. Configure for the first time
Before starting ganglia for the first time, create the default configuration file with the command below:
root@server#/usr/local/sbin/gmond --default >/etc/gmond.conf
Next, configure the ports that listen for client data.
Edit /etc/gmond.conf on the server as below. Here I use SULAY as my cluster name; udp_recv_channel receives the metric packets sent by the clients, and tcp_accept_channel is the port gmetad polls for the XML summary.
cluster {
  name = "SULAY"
  owner = "unixadmin"
  latlong = "unspecified"
  url = "unspecified"
}
host {
  location = "SULAY"
}
udp_recv_channel {
  port = 8649
  family = inet4
}
tcp_accept_channel {
  port = 8649
}
3. Create a file named /etc/gmetad.conf with the data below:
data_source "SULAY" localhost
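The data_source format is: data_source "cluster name" [polling interval in seconds] host[:port] [host[:port] ...]; gmetad polls the listed gmond(s) for that cluster's data. For example, to poll every 60 seconds and name a second node as a fallback source (node2 here is only a placeholder):

data_source "SULAY" 60 localhost:8649 node2:8649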
4. Configuring Apache.
By default the CoolStack Apache is installed under /opt/coolstack/apache2.
Change Apache's default page to the ganglia web pages by editing /opt/coolstack/apache2/conf/httpd.conf. The DocumentRoot directive tells Apache where to look for the default page:
DocumentRoot "/usr/local/doc/ganglia/web"
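Besides DocumentRoot, a stock Apache 2.2 httpd.conf usually also needs its <Directory> block pointed at the same path so the content is readable. A sketch with the default Apache 2.2 access directives (adjust to your own httpd.conf):

<Directory "/usr/local/doc/ganglia/web">
  Options Indexes FollowSymLinks
  AllowOverride None
  Order allow,deny
  Allow from all
</Directory>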
5. Restart/enable apache
root@server#svcadm restart svc:/network/http:apache22-csk
root@server#svcadm enable svc:/network/http:apache22-csk
6. Starting gmond and gmetad
root@server#/usr/local/sbin/gmond
root@server#/usr/local/sbin/gmetad
7. Verify the running daemons; you can see both daemons are running. If there is any issue starting them, execute the above commands with the --debug=9 option (e.g. /usr/local/sbin/gmond --debug=9) for verbose output.
root@server#ps -ef|grep gm
nobody 19037 1 0 Feb 02 ? 94:12 /usr/local/sbin/gmond
nobody 19090 1 1 Feb 02 ? 1416:22 /usr/local/sbin/gmetad
8. Create a startup script so both daemons start when the machine comes up.
Create /etc/rc3.d/S99ganglia with the content below:
#!/sbin/sh
/usr/local/sbin/gmond
/usr/local/sbin/gmetad
9. Make the file executable:
root@server#chmod a+x /etc/rc3.d/S99ganglia
10. Now you can browse the ganglia portal by typing the IP or host name of the ganglia server in the browser.
You will get an overview page with per-cluster graphs; clicking any graph drills down to the individual host.



Client Side configuration.
1. Install required package.
ganglia-3.0.7-sol10-sparc-local
2. Creating configuration file
/usr/local/sbin/gmond --default >/etc/gmond.conf
3. Edit /etc/gmond.conf so the client sends its packets to the server.
The accept and receive channel settings are not needed on the client side. Edit the lines below in the client configuration file, setting host in udp_send_channel to the ganglia server's address.
cluster {
  name = "SULAY"
  owner = "unixadmin"
  latlong = "unspecified"
  url = "unspecified"
}
host {
  location = "SULAY"
}
udp_send_channel {
  host =
  port = 8649
  ttl = 1
}
4. Create a startup file for the client.
Create /etc/rc3.d/S99ganglia with the content below:
#!/sbin/sh
/usr/local/sbin/gmond
5. Make the file executable:
chmod a+x /etc/rc3.d/S99ganglia
6. Verify that the client gmond is running:
root@client# ps -ef|grep gm
nobody 5152 1 0 Feb 06 ? 3:28 /usr/local/sbin/gmond
As soon as the application is started on a client, that client becomes visible in the ganglia portal.
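A quick way to confirm the server is actually receiving data is to pull the XML that gmond serves on its tcp_accept_channel port; every reporting host shows up as a HOST element, for example:

root@server# telnet localhost 8649 | grep HOST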

Menu based ufsdump script

This is a simple script for the actions below:
1. Take a backup
2. Erase Tape
3. List content of tape
4. Offline tape
============ you can customise this script to add your own backup folders ============
#!/bin/bash
while :
do
clear
echo " M A I N - M E N U"
echo "1. Erase Tape"
echo "2. Rewind Tape"
echo "3. Full backup"
echo "4. List next file count content "
echo "5. Print Tape Status "
echo "6. Eject Tape "
echo "7. Exit Menu "
echo -n "Please enter option [1 - 7]"
read opt
case $opt in
1) echo "************ Erasing Tape *************";
sleep 2
mt -f /dev/rmt/0 erase ;;
2) echo "*********** Rewinding Tape*************";
sleep 2
mt -f /dev/rmt/0 rewind;;
3) echo "**********Running Full backup**********";
sleep 2
echo "Do you want to start the full system backup? *ALL DATA ON THE TAPE WILL BE ERASED* [y/n]"
read ans;
if [ "$ans" == "y" ]
then
mt -f /dev/rmt/0 rewind
ufsdump -0uf /dev/rmt/0n /
if [ $? != 0 ] ; then
echo "error during ufsdump of /"
fi
ufsdump -0uf /dev/rmt/0n /opt
ufsdump -0uf /dev/rmt/0n /export
ufsdump -0uf /dev/rmt/0n /global/.devices/node@1
ufsdump -0uf /dev/rmt/0n /global/.devices/node@2
mount |grep /global/backup && ufsdump -0uf /dev/rmt/0n /global/backup # these two lines check whether the global FS is mounted on this node (in a cluster it is mounted in only one place at a time)
mount |grep /global/oracle && ufsdump -0uf /dev/rmt/0n /global/oracle
mt -f /dev/rmt/0n rewind
if [ $? != 0 ] ;
then
echo "Unable to rewind the tape after finishing the backup"
fi
fi
;;
4) echo "Skipping to next file count ";
sleep 2
ufsrestore tvf /dev/rmt/0n|more ;;
5) echo "Checking for tape status ";
sleep 2
mt -f /dev/rmt/0 status ;;
6) echo "Ejecting Tape ";
sleep 2
mt -f /dev/rmt/0 offline ;;
7) echo "Thanks for using, Have a nice day";
sleep 2
exit;;
*) echo "$opt is an invalid option. Please select an option between 1 and 7 only";
echo "Press [enter] key to continue. . .";
read enterKey;;
esac
done
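To restore from one of the dumps written by option 3, position the tape to the right file first; each ufsdump becomes a separate file on the tape, in the order the script writes them (/, /opt, /export, and so on). For example, to restore interactively from the /opt dump (the second file):

mt -f /dev/rmt/0n rewind
mt -f /dev/rmt/0n fsf 1
ufsrestore ivf /dev/rmt/0n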

Saturday 20 February 2010

Project for Solaris 10 Oracle installation

This is a simple default project entry for a Solaris 10 Oracle installation.
Add the line below to the /etc/project file:
group.dba:100:oracle setting::dba:process.max-sem-nsems=(privileged,2048,deny);project.max-sem-ids=(priv,100,deny);project.max-shm-ids=(priv,100,deny);project.max-shm-memory=(priv,11811160064,deny)
Here:
group.dba makes it effective for all users in the dba system group
100 is the ID for this project
oracle setting is the description for this project
you can verify these values using the projects -l command, as shown below
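A quick check once Oracle is running under the project (assuming the oracle user's primary group is dba, so group.dba is picked up at login):

root@server# projects -l group.dba
root@server# prctl -n project.max-shm-memory -i project group.dba

projects -l prints the project definition from /etc/project, and prctl shows the resource control as the kernel sees it for that project.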

Wednesday 17 February 2010

XSCF snap command in M Series server

Procedure for generating an XSCF snapshot on M-Series servers:
XSCF> snapshot -t alatheef@10.6.49.196:/export/home/alatheef
Downloading Public Key from '10.6.49.196'...
Public Key Fingerprint: a5:19:54:33:71:54:3c:ab:55:af:89:67:f9:18:bc:b1
Accept this public key (yes/no)? yes
Enter ssh password for user 'alatheef' on host '10.6.49.196':
Setting up ssh connection to alatheef@10.6.49.196...
Collecting data into alatheef@10.6.49.196:/export/home/alatheef/rabigh-sc_10.6.148.77_2010-02-17T09-29-20.zip
Data collection complete
XSCF>

Sunday 14 February 2010

Setting up IP in M4000/M5000 servers


These are the simple steps to configure the IP address, netmask, and default route for both XSCF consoles.

setnetwork xscf#0-lan#0 -m 255.255.255.0 192.168.1.25
setroute -c add -n 0.0.0.0 -m 0.0.0.0 -g 10.64.49.1 xscf#0-lan#0
setnetwork xscf#1-lan#0 -m 255.255.255.0 192.168.1.26
setroute -c add -n 0.0.0.0 -m 0.0.0.0 -g 192.168.1. xscf#1-lan#0
setnetwork lan#0 -m 255.255.255.0 10.64.48.50
applynetwork
rebootxscf
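The values can be reviewed with shownetwork and showroute, either before committing them with applynetwork or after the XSCF comes back up, for example:

XSCF> shownetwork -a
XSCF> showroute -a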




