Linux Cluster using DRBD and Heartbeat

Here I discuss how to set up a cluster of two Linux servers. serverg1 and serverg2 are the two servers, with IP addresses 10.254.254.54 and 10.254.254.94 respectively. Flush the iptables rules before the setup. I am using the floating IP address 10.254.254.55 for this cluster. Before starting the installation, make sure that a partition of equal size is available on each server for this setup. Here I am creating a partition /dev/sdb1 of 100 MB.

fdisk /dev/sdb
n     # create a new partition
p     # primary
1     # partition number 1 (accept the default first sector, then enter +100M as the size)
t     # change the partition type
1     # partition 1 (older fdisk versions ask; newer ones select it automatically)
83    # type 83 = Linux
w     # write the partition table and exit
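
To confirm the partition came out as expected, you can list the partition table:

fdisk -l /dev/sdb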
Restart the servers (or run partprobe) so the kernel re-reads the new partition table, then zero out the partition. This wipes any stale filesystem signature so that DRBD can use the raw device; the filesystem itself is created later, on /dev/drbd0:

dd if=/dev/zero bs=1M count=100 of=/dev/sdb1; sync

Now the partition is ready. We can start installing the packages:
yum install drbd84-utils.x86_64
yum install kmod-drbd84.x86_64
yum install heartbeat
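
Note that the drbd84 packages are not in the stock CentOS/RHEL repositories; they usually come from a third-party repository such as ELRepo, so enable one of those first if the yum commands above fail. After installation, confirm that the kernel module loads:

modprobe drbd
lsmod | grep drbd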
Update the /etc/hosts file on both servers with the hostnames for easy name resolution.

10.254.254.94 serverg2
10.254.254.54 serverg1
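
A quick ping from each side confirms that the names resolve:

ping -c 2 serverg2    # from serverg1
ping -c 2 serverg1    # from serverg2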


DRBD Configuration

Now edit /etc/drbd.conf with the values below:

global { usage-count no; }

resource r0 {
  protocol C;
  startup { wfc-timeout 10; degr-wfc-timeout 30; }  # change the timers to your need
  disk { on-io-error detach; }                      # or panic, ...
  net {
    after-sb-0pri discard-least-changes;
    after-sb-1pri discard-secondary;
    after-sb-2pri call-pri-lost-after-sb;
    cram-hmac-alg "sha1";
    shared-secret "my_secret_password_G";
  }
  syncer { rate 5M; }
  on serverg1 {
    device    /dev/drbd0;
    disk      /dev/sdb1;
    address   10.254.254.54:7788;
    meta-disk internal;
  }
  on serverg2 {
    device    /dev/drbd0;
    disk      /dev/sdb1;
    address   10.254.254.94:7788;
    meta-disk internal;
  }
}
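
Before going further, you can let drbdadm parse the file and print the resource back, which catches most syntax mistakes:

drbdadm dump r0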

Copy the configuration file /etc/drbd.conf to the second server:

scp /etc/drbd.conf root@serverg2:/etc/drbd.conf

Now create the DRBD metadata on both servers:

drbdadm create-md r0

Start DRBD on both servers:

service drbd start

Verify that both servers are in the Secondary role:

cat /proc/drbd
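
The output should contain a resource status line roughly like the following (the exact fields vary between versions); the parts to look for are cs:Connected, ro:Secondary/Secondary and ds:Inconsistent/Inconsistent:

0: cs:Connected ro:Secondary/Secondary ds:Inconsistent/Inconsistent C r-----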

Both nodes show up as Secondary, which is expected at this point. Now we need to start the initial replication from the master server (here serverg1):

drbdadm -- --overwrite-data-of-peer primary r0

Watch the DRBD status until it shows UpToDate:

watch -n 1 cat /proc/drbd
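
Once the initial sync finishes, the status line on serverg1 should look roughly like this:

0: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r-----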

Now format the DRBD device and mount it on the master server:

mkfs.ext4 /dev/drbd0
mkdir /replication
mount /dev/drbd0 /replication
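
To make replication easy to verify after a switch, you can also drop a test file on the mounted partition (the file name here is just an example):

echo "written on serverg1" > /replication/drbd-test.txt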

At any time, you can check the server role by using the command below:

drbdadm role r0

The primary server should return:

[root@serverg1 ~]# drbdadm role r0
Primary/Secondary

If you want to switch to the second server, run the below on the current primary:

1. umount /replication
2. drbdadm secondary r0

Then run the below on the current secondary server:

1. mkdir /replication (only needed if it does not exist yet)
2. drbdadm primary r0
3. mount /dev/drbd0 /replication
Once you have switched the servers, confirm the status with:

df -h
drbdadm role r0
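
If you created the test file earlier, it should now be readable on the new primary, confirming that the data was replicated:

cat /replication/drbd-test.txt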

So we have gone through how to switch manually. We can automate the same using Heartbeat.

Heartbeat configuration:
Create a configuration file /etc/ha.d/ha.cf on the primary server serverg1:

debugfile /var/log/ha-debug
logfile /var/log/ha-log
logfacility local0
keepalive 2       # seconds between heartbeats
deadtime 30       # seconds of silence before a node is declared dead
warntime 10       # seconds before logging a late-heartbeat warning
initdead 120      # extra grace period at first start
udpport 694
bcast eth0        # interface used for heartbeat broadcasts
auto_failback on
node serverg1
node serverg2

Set auto_failback to 'off' if you do not want resources switched back to the primary automatically once it rejoins after a failover.
Create the /etc/ha.d/authkeys file and add the below:

auth 1
1 sha1 MySecret

Change its permissions to 600:

chmod 600 /etc/ha.d/authkeys

Edit /etc/ha.d/haresources as below:

serverg1 drbddisk::r0 Filesystem::/dev/drbd0::/replication::ext4 IPaddr::10.254.254.55/24/eth0 mysqld
serverg1 MailTo::firstaddress@gmail.com,secondaddress@gmail.com::DRBD/HA-ALERT
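
For reference, each token on these lines maps to a resource script in /etc/ha.d/resource.d/ (or an init script in /etc/init.d/), with its arguments separated by ::. On takeover the resources are started left to right; on release they are stopped in reverse order:

serverg1                                     # preferred node for this resource group
drbddisk::r0                                 # make DRBD resource r0 primary
Filesystem::/dev/drbd0::/replication::ext4   # mount /dev/drbd0 on /replication as ext4
IPaddr::10.254.254.55/24/eth0                # bring up the floating IP on eth0
mysqld                                       # start the mysqld init script
MailTo::address1,address2::DRBD/HA-ALERT     # send an alert mail with the given subject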

It is important to keep each resource group on its own line. Now start the heartbeat service on serverg1:
service heartbeat start
Now copy all the heartbeat files to the second server:

scp /etc/ha.d/ha.cf /etc/ha.d/authkeys /etc/ha.d/haresources root@serverg2:/etc/ha.d/

Now start the heartbeat service on the second server (serverg2), and enable it at boot time (run the chkconfig commands on both servers so heartbeat survives a reboot):

service heartbeat start
chkconfig --add heartbeat
chkconfig heartbeat on

Now verify the primary and secondary roles using the commands below:

drbdadm role r0
df -h

If you stop the heartbeat service on one server or shut the server down, the partition will automatically be mounted on the other server and the services listed in the haresources file will start there.
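
A simple way to test this with the setup above is to stop heartbeat on the current primary and watch the resources arrive on the other node. On serverg1:

service heartbeat stop

Then, on serverg2 shortly after:

drbdadm role r0       # should now report Primary/Secondary
df -h /replication    # /dev/drbd0 should be mounted
ip addr show eth0     # should list the floating IP 10.254.254.55

Since auto_failback is on, starting heartbeat again on serverg1 will move the resources back.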
