What? Running a SuSE Linux cluster on VMware ESX? Basically, if you have VMware HA and DRS enabled, your virtual machines are already running in a cluster of sorts. So why run a Linux cluster on VMware? Who cares? It’s just for fun and for personal testing purposes only.

1st, create two SuSE Linux virtual machines (I’m running SLES 10 SP2) on VMware, on the same network, say cluster1 (192.168.1.1) and cluster2 (192.168.1.2).

2nd, add a new hard disk in the VMware Infrastructure Client. Please remember to choose ‘Use an existing virtual disk‘ on the second node, so that both virtual machines point at the same shared disk.

[Screenshot: adding the shared hard disk in the VMware Infrastructure Client]
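If you’d rather wire up the shared disk by editing the .vmx files directly, something along these lines has worked for me in lab setups. This is only a sketch: it assumes the shared vmdk sits on a second SCSI controller (scsi1), and the datastore path /vmfs/volumes/datastore1/cluster1/shared.vmdk is made up, so adjust it to wherever your existing disk really lives. Add the same lines to both virtual machines:

scsi1.present = "TRUE"
scsi1.virtualDev = "lsilogic"
scsi1.sharedBus = "virtual"
scsi1:0.present = "TRUE"
scsi1:0.fileName = "/vmfs/volumes/datastore1/cluster1/shared.vmdk"
disk.locking = "FALSE"

sharedBus = "virtual" and disk.locking = "FALSE" are what let both VMs open the same vmdk at once; fine for a fun/testing cluster like this one, but don’t treat it as a supported production configuration.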

3rd, install and configure Heartbeat, the Linux cluster software.

Make sure you have the Heartbeat RPMs installed:
cluster1:~ # rpm -qa|grep heartbeat
sles-heartbeat_en-10.1-0.20
heartbeat-pils-2.1.3-0.9
heartbeat-stonith-2.1.3-0.9
yast2-heartbeat-2.13.13-0.3
heartbeat-2.1.3-0.9

cluster2:~ # rpm -qa|grep heartbeat
sles-heartbeat_en-10.1-0.20
heartbeat-pils-2.1.3-0.9
heartbeat-stonith-2.1.3-0.9
yast2-heartbeat-2.13.13-0.3
heartbeat-2.1.3-0.9

Install the Linux cluster Heartbeat RPM packages if you haven’t installed them yet:
cluster1:~ # rpm -ivh heartbeat-xxxxxx
cluster1:~ # rpm -ivh heartbeat-stonith-xxxxxx
and so on.
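If the SLES installation source is registered, you can also let YaST pull the packages (and their dependencies) in one go; assuming the package names shown above, something like this should do it:

cluster1:~ # yast -i heartbeat heartbeat-pils heartbeat-stonith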

Edit the authentication file (same on both cluster1 and cluster2)
cluster1:~ # vi /etc/ha.d/authkeys
auth 2
2 sha1 MVM_CLUS2!
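Heartbeat will refuse to start if authkeys is readable by anyone other than root, so tighten the permissions on both nodes:

cluster1:~ # chmod 600 /etc/ha.d/authkeys
cluster2:~ # chmod 600 /etc/ha.d/authkeys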


Edit the HA configuration file
On cluster1:
cluster1:~ # vi /etc/ha.d/ha.cf
logfile /var/log/ha-log
logfacility local0
# cluster members (names must match `uname -n` on each node)
node cluster1
node cluster2
# heartbeat every 1s, warn after 30s, declare the peer dead after 60s;
# initdead gives the peer extra time right after a reboot
keepalive 1
warntime 30
deadtime 60
initdead 120
# unicast heartbeats over eth0 on UDP port 694
udpport 694
ucast eth0 192.168.1.1
ucast eth0 192.168.1.2
# don't move resources back automatically when the failed node returns
auto_failback off
# ping an external reference IP so ipfail can detect network failure
ping_group group1 192.168.1.254
respawn hacluster /usr/lib/heartbeat/ipfail
# reboot the node via the software watchdog if heartbeat itself hangs
watchdog /dev/watchdog

On cluster2:
cluster2:~ # vi /etc/ha.d/ha.cf
logfile /var/log/ha-log
logfacility local0
node cluster2
node cluster1
keepalive 1
warntime 30
deadtime 60
initdead 120
udpport 694
ucast eth0 192.168.1.2
ucast eth0 192.168.1.1
auto_failback off
ping_group group1 192.168.1.254
respawn hacluster /usr/lib/heartbeat/ipfail
watchdog /dev/watchdog

Edit the HA resources file (same on both cluster1 and cluster2)
cluster1:~ # vi /etc/ha.d/haresources
cluster1 192.168.1.100 \
Filesystem::/dev/sdb1::/db::ext3
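In haresources, the first field is the node that normally owns the resources; everything after it is started left to right (and stopped in reverse order) on whichever node is active. If you want to pin the service IP to a specific netmask and interface instead of letting Heartbeat pick one, the longer IPaddr form also works, e.g.:

cluster1 IPaddr::192.168.1.100/24/eth0 \
Filesystem::/dev/sdb1::/db::ext3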

Make sure you can see the hard disk partition on both servers
cluster1:~ # cat /proc/partitions
8 17 20964793 sdb1
cluster2:~ # cat /proc/partitions
8 17 20964793 sdb1
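You can also double-check with fdisk that both servers see the same disk:

cluster1:~ # fdisk -l /dev/sdb
cluster2:~ # fdisk -l /dev/sdb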

Partition and format the hard disk (on cluster1 only)
cluster1:~ # fdisk /dev/sdb
press n to add a new partition
press p for a primary partition
press 1 for the partition number
First cylinder: press ENTER
Last cylinder or +size or +sizeM or +sizeK: press ENTER
press p to print the partition table
press w to write the table to disk and exit

cluster1:~ # mkfs.ext3 /dev/sdb1
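The partition table was only written on cluster1, so if cluster2 still shows the old table in /proc/partitions, ask its kernel to re-read it (partprobe ships with the parted package, as far as I remember) or just reboot cluster2:

cluster2:~ # partprobe /dev/sdb
cluster2:~ # cat /proc/partitions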

Create the mount point on both servers
cluster1:~ # mkdir /db
cluster2:~ # mkdir /db
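Before handing the filesystem over to Heartbeat, it doesn’t hurt to test-mount it once by hand on cluster1 and unmount it again (from now on Heartbeat does the mounting):

cluster1:~ # mount /dev/sdb1 /db
cluster1:~ # df -h /db
cluster1:~ # umount /db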

Start up the Heartbeat service on both servers
cluster1:~ # chkconfig heartbeat on
cluster1:~ # /etc/init.d/heartbeat start
cluster2:~ # chkconfig heartbeat on
cluster2:~ # /etc/init.d/heartbeat start

You should see /db mounted and IP address 192.168.1.100 up on cluster1 (the active node). If you turn OFF the cluster1 server, you should see cluster2 take over and become the active node.
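A quick way to check which node is active: the floating IP and the /db mount only exist on the active node, and /var/log/ha-log shows what Heartbeat is doing during the takeover:

cluster1:~ # ip addr show eth0 | grep 192.168.1.100
cluster1:~ # df -h /db
cluster1:~ # tail -f /var/log/ha-log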

Congratulations! You’re done! Enjoy!