Storage Clustering using GNBD & GFS
October 6, 2008
This is a quick howto on building a simple GNBD server with one GFS partition exported to two nodes. The two nodes will use this GFS partition as the document root for Apache. I will use three XEN machines for the setup, running CentOS 5.2.

You can create this setup on RHEL 5.2 servers as well; there are no differences between CentOS and RHEL, but RHEL needs an Advanced Platform subscription to get some of the clustering packages, which I don't have, so when I used RHEL I ran rpmbuild for some packages that don't exist on RHN. I will mention them later. Also, I used GFS 1 instead of GFS 2: reading the Red Hat linux-cluster mailing list, I noticed that GFS 2 is still too unstable to use on production servers.

About networking, I used a private subnet (10.0.0.0/24) locally. Before starting, please make sure you have a working DNS, or at least put entries for your machines with their IPs in the /etc/hosts file. I did that in /etc/hosts on all three machines as follows:

10.0.0.1 n1.sqawasmi.com
10.0.0.2 n2.sqawasmi.com
10.0.0.7 gnbd1.sqawasmi.com

For disk partitioning, on the GNBD server I used a 1G partition named xvdb1 as the GFS partition; the nodes will import it.

Okay, here we go. The picture below shows my GNBD server, named gnbd1.sqawasmi.com, exporting the GFS partition over the local network to n1.sqawasmi.com and n2.sqawasmi.com. There is a load balancer in front of n1 and n2 that balances incoming load over the two nodes.

Installing the needed RHCS packages:
yum install cman gnbd kmod-gnbd-xen lvm2-cluster kmod-gfs-xen gfs-utils
Note that I installed the kmod-gnbd-xen and kmod-gfs-xen packages, which contain the kernel modules built for the XEN kernel. If you are doing your setup on real (non-XEN) machines, install kmod-gnbd and kmod-gfs instead. Also, I remember there was a difference in these package names between CentOS and RHEL; I think in RHEL they are named gfs-kmod and gnbd-kmod.
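The XEN-versus-bare-metal package choice above can be scripted as a small sanity check. This is a minimal sketch: the function name is mine, and the package names are the ones I saw in CentOS 5.2 (RHEL may use gfs-kmod / gnbd-kmod instead, as noted above).

```shell
#!/bin/sh
# Sketch: pick the kmod package names that match the running kernel flavour.
# Package names are the CentOS 5.2 ones; adjust for RHEL if needed.
kmods_for_kernel() {
  case "$1" in
    *xen*) echo "kmod-gnbd-xen kmod-gfs-xen" ;;  # XEN kernel
    *)     echo "kmod-gnbd kmod-gfs" ;;          # bare-metal kernel
  esac
}

kmods_for_kernel "$(uname -r)"
```

You could feed its output straight into the yum line above, e.g. `yum install cman gnbd lvm2-cluster gfs-utils $(kmods_for_kernel "$(uname -r)")`.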
Configuring cluster.conf file:
The first step is to configure your /etc/cluster/cluster.conf file. This file is the heart of the setup: it lists our nodes (n1, n2, gnbd1), specifies the fencing method we will use, and defines our resources (in our setup, the exported GFS partition).
I used the following configuration:
<?xml version="1.0"?>
<cluster name="test-cluster" config_version="1">
  <cman expected_votes="1"/>
  <fence_daemon post_join_delay="60"/>
  <clusternodes>
    <clusternode name="n1.sqawasmi.com" nodeid="1">
      <fence>
        <method name="single">
          <device name="gnbd" ipaddr="n1.sqawasmi.com"/>
        </method>
      </fence>
    </clusternode>
    <clusternode name="n2.sqawasmi.com" nodeid="2">
      <fence>
        <method name="single">
          <device name="gnbd" ipaddr="n2.sqawasmi.com"/>
        </method>
      </fence>
    </clusternode>
    <clusternode name="gnbd1.sqawasmi.com" nodeid="3">
      <fence>
        <method name="single">
          <device name="gnbd" ipaddr="gnbd1.sqawasmi.com"/>
        </method>
      </fence>
    </clusternode>
  </clusternodes>
  <fencedevices>
    <fencedevice name="gnbd" agent="fence_gnbd" servers="gnbd1.sqawasmi.com"/>
  </fencedevices>
  <rm>
    <resources>
      <clusterfs device="/dev/xvdb1" force_unmount="0" fsid="5391" fstype="gfs" mountpoint="/www/www-data" name="www" options=""/>
    </resources>
  </rm>
</cluster>

For an explanation of the cluster.conf schema, refer to this page.

To be continued... 😛
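Until the follow-up post is written, here is a rough sketch of the remaining steps, based on the standard GNBD/GFS tooling shipped with RHEL/CentOS 5. The export name "www", the journal count, and the mount sequence are my assumptions filled in from the cluster.conf above, not steps the article has confirmed yet.

```shell
# Rough sketch of the remaining steps (assumptions: export named "www",
# one journal per mounting node, devices as described in the setup above).

# On all three machines: join the cluster.
service cman start

# On the GNBD server (gnbd1): make the GFS filesystem and export the partition.
# -t is <clustername>:<fsname>; -j 2 creates a journal for each mounting node.
gfs_mkfs -p lock_dlm -t test-cluster:www -j 2 /dev/xvdb1
gnbd_serv                          # start the GNBD server daemon
gnbd_export -d /dev/xvdb1 -e www   # export the partition as "www"

# On each node (n1, n2): import the device and mount it.
modprobe gnbd
gnbd_import -i gnbd1.sqawasmi.com  # imported device appears as /dev/gnbd/www
mkdir -p /www/www-data
mount -t gfs /dev/gnbd/www /www/www-data
```

These commands need root on a real cluster, so treat them as an outline to adapt, not a copy-paste recipe.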
April 15th, 2009 at 1:58 pm
Dear sir,
Can you continue this topic? I want to try this mechanism.
I think this is a good idea for my storage problem.
Very, very interesting...
Thanks
April 16th, 2009 at 7:29 pm
Sorry about being late, I was so busy lately; I'll continue this ASAP.
Also, if you have a storage problem then maybe I can help, email me (shaker [at] sqawasmi [dot] com).
You're welcome.
May 1st, 2010 at 1:45 am
Please Shaker, continue this writing, it's exactly what I'm looking for!!!
Thanks
Cris