{"id":37,"date":"2008-05-23T14:54:05","date_gmt":"2008-05-23T12:54:05","guid":{"rendered":"http:\/\/blog.sqawasmi.com\/?p=37"},"modified":"2008-10-04T10:06:50","modified_gmt":"2008-10-04T08:06:50","slug":"drbd-primary-primary-using-gfs","status":"publish","type":"post","link":"https:\/\/blog.sqawasmi.com\/index.php\/2008\/05\/23\/drbd-primary-primary-using-gfs\/","title":{"rendered":"DRBD Primary\/Primary using GFS"},"content":{"rendered":"<p><strong>My goal by using DRBD as Primary\/Primary with GFS is to load balance a http service, my servers looks like the following:<\/strong><\/p>\n<p><strong><a href=\"http:\/\/blog.sqawasmi.com\/wp-content\/uploads\/2008\/05\/drbd-loadbalancer-gfs-primary-primary.jpg\"><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter size-medium wp-image-38\" title=\"drbd-LoadBalancer-gfs-primary-primary\" src=\"http:\/\/blog.sqawasmi.com\/wp-content\/uploads\/2008\/05\/drbd-loadbalancer-gfs-primary-primary-300x207.jpg\" alt=\"Load Balancer - GFS - Primary-Primary\" width=\"300\" height=\"207\" srcset=\"https:\/\/blog.sqawasmi.com\/wp-content\/uploads\/2008\/05\/drbd-loadbalancer-gfs-primary-primary-300x207.jpg 300w, https:\/\/blog.sqawasmi.com\/wp-content\/uploads\/2008\/05\/drbd-loadbalancer-gfs-primary-primary.jpg 355w\" sizes=\"auto, (max-width: 300px) 100vw, 300px\" \/><\/a><\/strong><\/p>\n<p><strong>i use the GFS partition as document-root for my webserver (Apache).<br \/>\nmaybe it&#8217;s better to use SAN as storage but it&#8217;s so expensive, another solutions maybe iSCSI or GNBD but also it&#8217;s need more servers which needs extra money \ud83d\ude42<br \/>\nmaybe in the future i will implement it using SAN, iSCSI or GNBD but for now it&#8217;s good with DRBD and GFS as two nodes with load balancer and it&#8217;s fast enough.<\/strong><\/p>\n<p><strong>for testing and preparing this quick howto i used Xen to create 2 virtual machine and centos 5 as OS. 
The partition I want to use for GFS is named xvdb1; make sure your partition doesn&#8217;t contain any data you need (it will be destroyed).<br \/>\nTo destroy the partition I used this command on both nodes:<br \/>\n<\/strong><span style=\"color: #333399;\"> dd if=\/dev\/zero of=\/dev\/xvdb1<\/span><\/p>\n<p><strong>Change \/dev\/xvdb1 to your own partition (again, make sure it doesn&#8217;t contain any needed data).<\/strong><\/p>\n<p><strong>The following commands have to be run on both nodes; for simplicity I show the output of one machine.<br \/>\n* download DRBD on node1 &amp; node2:<br \/>\n<\/strong><span style=\"color: #333399;\"> [root@node1 ~]# mkdir downloads<br \/>\n[root@node1 ~]# cd downloads\/<br \/>\n[root@node1 downloads]# wget -c http:\/\/oss.linbit.com\/drbd\/8.2\/drbd-8.2.5.tar.gz<\/span><\/p>\n<p><strong>* untar it:<br \/>\n<\/strong><span style=\"color: #333399;\"> [root@node1 downloads]# time tar -xzpf drbd-8.2.5.tar.gz -C \/usr\/src\/<\/span><\/p>\n<p><span style=\"color: #333399;\">real    0m0.162s<br \/>\nuser    0m0.016s<br \/>\nsys     0m0.028s<br \/>\n[root@node1 downloads]# ls \/usr\/src\/<br \/>\ndrbd-8.2.5    redhat<\/span><\/p>\n<p><strong>* before building DRBD:<br \/>\nBefore you start, make sure the following are installed on your system:<br \/>\n&#8211; make, gcc, the glibc development libraries, and the flex scanner generator<br \/>\n&#8211; kernel-headers and kernel-devel:<br \/>\n<\/strong><span style=\"color: #333399;\"> [root@node1 downloads]# yum list kernel-*<br \/>\nLoading &#8220;installonlyn&#8221; plugin<br \/>\nSetting up repositories<br \/>\nReading repository metadata in from local files<br \/>\nInstalled Packages<br \/>\nkernel.i686                              2.6.18-8.el5           installed<br \/>\nkernel-headers.i386                      2.6.18-8.el5           installed<br \/>\nkernel-xen.i686                          2.6.18-8.el5           installed<br \/>\nkernel-xen-devel.i686        
            2.6.18-8.el5           installed<br \/>\nAvailable Packages<br \/>\nkernel-PAE.i686                          2.6.18-8.el5           local<br \/>\nkernel-PAE-devel.i686                    2.6.18-8.el5           local<br \/>\nkernel-devel.i686                        2.6.18-8.el5           local<br \/>\nkernel-doc.noarch                        2.6.18-8.el5           local<\/span><strong><br \/>\nremember that i use Xen kernel.<\/strong><\/p>\n<p><strong>* building DRBD:<br \/>\n&#8211; building DRBD kernel module:<br \/>\n<\/strong><span style=\"color: #333399;\"> [root@node1 downloads]# cd \/usr\/src\/drbd-8.2.5\/drbd<br \/>\n[root@node1 drbd]# make clean all<br \/>\n.<br \/>\n.<br \/>\n.<br \/>\nmv .drbd_kernelrelease.new .drbd_kernelrelease<br \/>\nMemorizing module configuration &#8230; done.<br \/>\n[root@node1 drbd]#<\/span><\/p>\n<p><strong>&#8211; checking the new kernel module:<br \/>\n<\/strong><span style=\"color: #333399;\"> [root@node1 drbd]# modinfo drbd.ko<br \/>\nfilename:       drbd.ko<br \/>\nalias:          block-major-147-*<br \/>\nlicense:        GPL<br \/>\ndescription:    drbd &#8211; Distributed Replicated Block Device v8.2.5<br \/>\nauthor:         Philipp Reisner &lt;phil@linbit.com&gt;, Lars Ellenberg &lt;lars@linbit.com&gt;<br \/>\nsrcversion:     E325FBFE020C804C4FABA31<br \/>\ndepends:<br \/>\nvermagic:       2.6.18-8.el5xen SMP mod_unload 686 REGPARM 4KSTACKS gcc-4.1<br \/>\nparm:           minor_count:Maximum number of drbd devices (1-255) (int)<br \/>\nparm:           allow_oos:DONT USE! 
(bool)<br \/>\nparm:           enable_faults:int<br \/>\nparm:           fault_rate:int<br \/>\nparm:           fault_count:int<br \/>\nparm:           fault_devs:int<br \/>\nparm:           trace_level:int<br \/>\nparm:           trace_type:int<br \/>\nparm:           trace_devs:int<br \/>\nparm:           usermode_helper:string<br \/>\n[root@node1 drbd]#<\/span><\/p>\n<p><strong>&#8211; Building a DRBD RPM package:<br \/>\n<\/strong><span style=\"color: #333399;\"> [root@node1 drbd]# cd \/usr\/src\/drbd-8.2.5\/<br \/>\n[root@node1 drbd-8.2.5]# make rpm<br \/>\n.<br \/>\n.<br \/>\n.<br \/>\nYou have now:<br \/>\n-rw-r&#8211;r&#8211; 1 root root 142722 May 23 11:45 dist\/RPMS\/i386\/drbd-8.2.5-3.i386.rpm<br \/>\n-rw-r&#8211;r&#8211; 1 root root 232238 May 23 11:45 dist\/RPMS\/i386\/drbd-debuginfo-8.2.5-3.i386.rpm<br \/>\n-rw-r&#8211;r&#8211; 1 root root 851602 May 23 11:45 dist\/RPMS\/i386\/drbd-km-2.6.18_8.el5xen-8.2.5-3.i386.rpm<br \/>\n[root@node1 drbd-8.2.5]#<\/span><\/p>\n<p><strong>&#8211; Installing DRBD:<br \/>\n<\/strong><span style=\"color: #333399;\"> [root@node1 drbd-8.2.5]# cd dist\/RPMS\/i386\/<br \/>\n[root@node1 i386]# rpm -ihv drbd-8.2.5-3.i386.rpm drbd-km-2.6.18_8.el5xen-8.2.5-3.i386.rpm<br \/>\nPreparing&#8230;                ########################################### [100%]<br \/>\n1:drbd                   ########################################### [ 50%]<br \/>\n2:drbd-km-2.6.18_8.el5xen########################################### [100%]<\/span><\/p>\n<p><strong>* Configuring DRBD:<br \/>\n&#8211; for lower-level storage I use a simple setup: both hosts have a free (currently unused) partition named \/dev\/xvdb1, and I use internal meta data.<br \/>\n&#8211; for \/etc\/drbd.conf I use this configuration:<br \/>\n<\/strong> resource r0 {<br \/>\nprotocol C;<br \/>\nstartup {<br \/>\nbecome-primary-on both;<br \/>\n}<br \/>\nnet {<br \/>\nallow-two-primaries;<br \/>\ncram-hmac-alg &quot;sha1&quot;;<br \/>\nshared-secret &quot;123456&quot;;<br \/>\nafter-sb-0pri discard-least-changes;<br \/>\nafter-sb-1pri violently-as0p;<br \/>\nafter-sb-2pri violently-as0p;<br \/>\nrr-conflict violently;<br \/>\n}<br \/>\nsyncer {<br \/>\nrate 44M;<br \/>\n}<\/p>\n<p>on node1.test.lab {<br \/>\ndevice  \/dev\/drbd0;<br \/>\ndisk    \/dev\/xvdb1;<br \/>\naddress 192.168.1.1:7789;<br \/>\nmeta-disk internal;<br \/>\n}<br \/>\non node2.test.lab {<br \/>\ndevice    \/dev\/drbd0;<br \/>\ndisk    \/dev\/xvdb1;<br \/>\naddress 192.168.1.2:7789;<br \/>\nmeta-disk internal;<br \/>\n}<br \/>\n}<\/p>\n<p><strong>Note that the &#8220;become-primary-on both&#8221; startup option is needed in a Primary\/Primary configuration.<\/strong><\/p>\n<p><strong>* starting DRBD for the first time:<br \/>\nThe following steps must be performed on both nodes:<br \/>\n&#8211; Create device metadata:<br \/>\n<\/strong><span style=\"color: #333399;\"> [root@node1 i386]# drbdadm create-md r0<br \/>\nv08 Magic number not found<br \/>\nv07 Magic number not found<br \/>\nv07 Magic number not found<br \/>\nv08 Magic number not found<br \/>\nWriting meta data&#8230;<br \/>\ninitialising activity log<br \/>\nNOT initialized bitmap<br \/>\nNew drbd meta data block sucessfully created.<\/span><\/p>\n<p><span style=\"color: #333399;\">&#8211;== Creating metadata ==&#8211;<br \/>\nAs with nodes we count the total number of devices mirrored by DRBD at<br \/>\nat http:\/\/usage.drbd.org.<\/span><\/p>\n<p><span style=\"color: #333399;\">The counter works completely anonymous. A random number gets created for<br \/>\nthis device, and that randomer number and the devices size will be sent.<\/span><\/p>\n<p><span style=\"color: #333399;\">http:\/\/usage.drbd.org\/cgi-bin\/insert_usage.pl?nu=18231616900827588600&amp;ru=15113975333795790860&amp;rs=2147483648<\/span><\/p>\n<p><span style=\"color: #333399;\">Enter &#8216;no&#8217; to opt out, or just press [return] to continue:<br \/>\nsuccess<\/span><\/p>\n<p><strong>&#8211; Attach. 
This step associates the DRBD resource with its backing device:<br \/>\n<\/strong><span style=\"color: #333399;\"> [root@node1 i386]# modprobe drbd<br \/>\n[root@node1 i386]# drbdadm attach r0<\/span><\/p>\n<p><strong>&#8211; verify running DRBD:<br \/>\non node1:<br \/>\n<\/strong><span style=\"color: #333399;\"> [root@node1 i386]# cat \/proc\/drbd<br \/>\nversion: 8.2.5 (api:88\/proto:86-88)<br \/>\nGIT-hash: 9faf052fdae5ef0c61b4d03890e2d2eab550610c build by root@node1.test.lab, 2008-05-23 11:45:23<br \/>\n0: cs:StandAlone st:Secondary\/Unknown ds:Inconsistent\/Outdated   r&#8212;<br \/>\nns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0<br \/>\nresync: used:0\/31 hits:0 misses:0 starving:0 dirty:0 changed:0<br \/>\nact_log: used:0\/127 hits:0 misses:0 starving:0 dirty:0 changed:0<\/span><\/p>\n<p><strong>on node2:<br \/>\n<\/strong><span style=\"color: #333399;\"> [root@node2 i386]#  cat \/proc\/drbd<br \/>\nversion: 8.2.5 (api:88\/proto:86-88)<br \/>\nGIT-hash: 9faf052fdae5ef0c61b4d03890e2d2eab550610c build by root@node2.test.lab, 2008-05-23 12:58:18<br \/>\n0: cs:StandAlone st:Secondary\/Unknown ds:Inconsistent\/Outdated   r&#8212;<br \/>\nns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0<br \/>\nresync: used:0\/31 hits:0 misses:0 starving:0 dirty:0 changed:0<br \/>\nact_log: used:0\/127 hits:0 misses:0 starving:0 dirty:0 changed:0<\/span><\/p>\n<p><strong>&#8211; Connect. 
This step connects the DRBD resource with its counterpart on the peer node:<br \/>\n<\/strong><span style=\"color: #333399;\"> [root@node1 i386]# drbdadm connect r0<br \/>\n[root@node1 i386]# cat \/proc\/drbd<br \/>\nversion: 8.2.5 (api:88\/proto:86-88)<br \/>\nGIT-hash: 9faf052fdae5ef0c61b4d03890e2d2eab550610c build by root@node1.test.lab, 2008-05-23 11:45:23<br \/>\n0: cs:WFConnection st:Secondary\/Unknown ds:Inconsistent\/Outdated C r&#8212;<br \/>\nns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0<br \/>\nresync: used:0\/31 hits:0 misses:0 starving:0 dirty:0 changed:0<br \/>\nact_log: used:0\/127 hits:0 misses:0 starving:0 dirty:0 changed:0<\/span><\/p>\n<p><strong>&#8211; initial device synchronization:<br \/>\nThe following step must be done on just one node; I used node1:<br \/>\n<\/strong><span style=\"color: #333399;\"> [root@node1 i386]# drbdadm -- --overwrite-data-of-peer primary r0<\/span><\/p>\n<p><strong>&#8211; verify:<\/strong><\/p>\n<p><span style=\"color: #333399;\">[root@node1 i386]# cat \/proc\/drbd<br \/>\nversion: 8.2.5 (api:88\/proto:86-88)<br \/>\nGIT-hash: 9faf052fdae5ef0c61b4d03890e2d2eab550610c build by root@node1.test.lab, 2008-05-23 11:45:23<br \/>\n0: cs:SyncSource st:Primary\/Secondary ds:UpToDate\/Inconsistent C r&#8212;<br \/>\nns:792 nr:0 dw:0 dr:792 al:0 bm:0 lo:0 pe:0 ua:0 ap:0<br \/>\n[&gt;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;..] 
sync&#8217;ed:  0.2% (2096260\/2097052)K<br \/>\nfinish: 2:11:00 speed: 264 (264) K\/sec<br \/>\nresync: used:0\/31 hits:395 misses:1 starving:0 dirty:0 changed:1<br \/>\nact_log: used:0\/127 hits:0 misses:0 starving:0 dirty:0 changed:0<\/span><\/p>\n<p><span style=\"color: #333399;\">[root@node2 i386]#  cat \/proc\/drbd<br \/>\nversion: 8.2.5 (api:88\/proto:86-88)<br \/>\nGIT-hash: 9faf052fdae5ef0c61b4d03890e2d2eab550610c build by root@node2.test.lab, 2008-05-23 12:58:18<br \/>\n0: cs:SyncTarget st:Secondary\/Primary ds:Inconsistent\/UpToDate C r&#8212;<br \/>\nns:0 nr:1896 dw:1896 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0<br \/>\n[&gt;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;..] sync&#8217;ed:  0.2% (2095156\/2097052)K<br \/>\nfinish: 2:02:12 speed: 268 (268) K\/sec<br \/>\nresync: used:0\/31 hits:947 misses:1 starving:0 dirty:0 changed:1<br \/>\nact_log: used:0\/127 hits:0 misses:0 starving:0 dirty:0 changed:0<\/span><\/p>\n<p><strong>By now, our DRBD device is fully operational, even before the initial synchronization has completed. 
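Since a later step has to wait for this synchronization anyway, here is a small helper of my own (an addition, not from the original howto): a shell function that greps \/proc\/drbd for the ds:UpToDate\/UpToDate state, so a script can wait for the initial sync instead of eyeballing the output. The status-file argument defaults to \/proc\/drbd and exists only so the function can be exercised against a saved copy:

```shell
# drbd_synced [STATUS_FILE] -- succeed (exit 0) once the DRBD status shows
# both disks UpToDate, i.e. the initial synchronization has finished.
drbd_synced() {
    grep -q 'ds:UpToDate/UpToDate' "${1:-/proc/drbd}"
}

# usage: block until the resource is fully synced, polling every 10 seconds
# until drbd_synced; do sleep 10; done
```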
We can now continue to configure GFS&#8230;<\/strong><\/p>\n<p><strong>&#8211; Configuring your nodes to support GFS<br \/>\nBefore we can configure GFS, we need a little help from RHCS; the following packages need to be installed on both systems:<br \/>\n&#8211; &#8220;cman&#8221; (Red Hat Cluster Manager)<br \/>\n&#8211; &#8220;lvm2-cluster&#8221; (LVM with cluster support)<br \/>\n&#8211; &#8220;gfs-utils&#8221; or &#8220;gfs2-utils&#8221; (GFS1 or GFS2 utilities; as of this writing, I prefer GFS1)<br \/>\n&#8211; &#8220;kmod-gfs&#8221;, or &#8220;kmod-gfs-xen&#8221; for Xen (GFS kernel module)<\/strong><\/p>\n<p><strong>* we must enable and start the following system services on both nodes:<br \/>\n&#8211; cman : it will run ccsd, fenced, dlm and openais.<\/strong><strong><br \/>\n&#8211; clvmd.<\/strong><strong><br \/>\n&#8211; gfs.<\/strong><\/p>\n<p><strong>starting cman:<br \/>\nBefore we can start cman, we have to configure \/etc\/cluster\/cluster.conf; I use the following configuration:<\/strong><\/p>\n<p>&lt;?xml version=&quot;1.0&quot;?&gt;<br \/>\n&lt;cluster name=&quot;my-cluster&quot; config_version=&quot;1&quot;&gt;<br \/>\n&lt;cman two_node=&quot;1&quot; expected_votes=&quot;1&quot;&gt;<br \/>\n&lt;\/cman&gt;<br \/>\n&lt;clusternodes&gt;<br \/>\n&lt;clusternode name=&quot;node1.test.lab&quot; votes=&quot;1&quot; nodeid=&quot;1&quot;&gt;<br \/>\n&lt;fence&gt;<br \/>\n&lt;method name=&quot;single&quot;&gt;<br \/>\n&lt;device name=&quot;human&quot; ipaddr=&quot;192.168.1.1&quot;\/&gt;<br \/>\n&lt;\/method&gt;<br \/>\n&lt;\/fence&gt;<br \/>\n&lt;\/clusternode&gt;<br \/>\n&lt;clusternode name=&quot;node2.test.lab&quot; votes=&quot;1&quot; nodeid=&quot;2&quot;&gt;<br \/>\n&lt;fence&gt;<br \/>\n&lt;method name=&quot;single&quot;&gt;<br \/>\n&lt;device name=&quot;human&quot; ipaddr=&quot;192.168.1.2&quot;\/&gt;<br \/>\n&lt;\/method&gt;<br \/>\n&lt;\/fence&gt;<br \/>\n&lt;\/clusternode&gt;<br
\/>\n&lt;\/clusternodes&gt;<br \/>\n&lt;fencedevices&gt;<br \/>\n&lt;fencedevice name=&quot;human&quot; agent=&quot;fence_manual&quot;\/&gt;<br \/>\n&lt;\/fencedevices&gt;<br \/>\n&lt;\/cluster&gt;<\/p>\n<p><strong>After editing \/etc\/cluster\/cluster.conf we have to start cman on both nodes at the same time:<br \/>\non node1:<br \/>\n<\/strong><span style=\"color: #333399;\"> [root@node1 i386]# \/etc\/init.d\/cman start<br \/>\nStarting cluster:<br \/>\nLoading modules&#8230; done<br \/>\nMounting configfs&#8230; done<br \/>\nStarting ccsd&#8230; done<br \/>\nStarting cman&#8230; done<br \/>\nStarting daemons&#8230; done<br \/>\nStarting fencing&#8230; done<br \/>\n[  OK  ]<\/span><\/p>\n<p><strong>on node2:<br \/>\n<\/strong><span style=\"color: #333399;\"> [root@node2 i386]# \/etc\/init.d\/cman start<br \/>\nStarting cluster:<br \/>\nLoading modules&#8230; done<br \/>\nMounting configfs&#8230; done<br \/>\nStarting ccsd&#8230; done<br \/>\nStarting cman&#8230; done<br \/>\nStarting daemons&#8230; done<br \/>\nStarting fencing&#8230; done<br \/>\n[  OK  ]<\/span><\/p>\n<p><strong>check the nodes:<br \/>\n<\/strong><span style=\"color: #333399;\"> [root@node1 i386]# cman_tool nodes<br \/>\nNode  Sts   Inc   Joined               Name<br \/>\n1   M      4   2008-05-23 14:33:25  node1.test.lab<br \/>\n2   M    316   2008-05-23 14:41:34  node2.test.lab<\/span><\/p>\n<p><strong>In the &#8216;Sts&#8217; column, &#8216;M&#8217; means that everything is fine; if it is &#8216;X&#8217; then a problem has happened.<\/strong><\/p>\n<p><strong>&#8211; starting CLVMD:<\/strong><\/p>\n<p><strong>First we need to change the locking type in \/etc\/lvm\/lvm.conf to 3 on both nodes:<br \/>\nvi \/etc\/lvm\/lvm.conf<br \/>\nchange <\/strong><span style=\"color: #333399;\">locking_type = 1<\/span><strong> to <\/strong><span style=\"color: #333399;\">locking_type = 3<\/span><strong><br \/>\nWe also need to change the filter option so that vgscan doesn&#8217;t see the duplicated PV (a duplicate PV will appear because our xvdb1 is the backing device for drbd0); I changed the filter like this:<br \/>\n<\/strong><span style=\"color: #333399;\"> #filter = [ &quot;a\/.*\/&quot; ]<br \/>\nfilter = [ &quot;a|xvda.*|&quot;, &quot;a|drbd.*|&quot;, &quot;r|xvdb.*|&quot; ]<\/span><\/p>\n<p><strong>In my filter, &quot;a|xvda.*|&quot; means accept all xvda partitions, &quot;a|drbd.*|&quot; means accept all drbd devices, and &quot;r|xvdb.*|&quot; means reject (ignore) all xvdb partitions (one of which is our partition xvdb1).<\/strong><\/p>\n<p><strong>Save and exit.<br \/>\nThe first thing to do is run vgscan so it reads the new configuration:<br \/>\n<\/strong><span style=\"color: #333399;\"> [root@node1 i386]# vgscan<br \/>\nReading all physical volumes.  This may take a while&#8230;<br \/>\nFound volume group &#8220;VolGroup00&#8221; using metadata type lvm2<\/span><\/p>\n<p><strong>&#8211; the following commands must be done on one node only; I used node1 &#8211;<br \/>\nNow create our PV:<br \/>\n<\/strong><span style=\"color: #333399;\"> [root@node1 i386]# pvcreate \/dev\/drbd0<br \/>\nPhysical volume &#8220;\/dev\/drbd0&#8221; successfully created<\/span><\/p>\n<p><strong>creating our volume group:<br \/>\n<\/strong><span style=\"color: #333399;\"> [root@node1 i386]# vgcreate my-vol \/dev\/drbd0<br \/>\nVolume group &#8220;my-vol&#8221; successfully created<br \/>\n[root@node1 i386]# vgdisplay<br \/>\n&#8212; Volume group &#8212;<br \/>\nVG Name               my-vol<br \/>\nSystem ID<br \/>\nFormat                lvm2<br \/>\nMetadata Areas        1<br \/>\nMetadata Sequence No  1<br \/>\nVG Access             read\/write<br \/>\nVG Status             resizable<br \/>\n<strong> Clustered             yes<\/strong><br \/>\nOpen LV               0<br \/>\nMax PV                0<br \/>\nCur PV                1<br \/>\nAct PV                1<br \/>\nVG Size               2.00 GB<br \/>\nPE Size               
4.00 MB<br \/>\nTotal PE              511<br \/>\nAlloc PE \/ Size       0 \/ 0<br \/>\nFree  PE \/ Size       511 \/ 2.00 GB<br \/>\nVG UUID               UaUK5v-P3aX-nmCn-Oj3F-XQox-AgxB-UsM0xS<\/span><\/p>\n<p><strong>Did you notice <span style=\"color: #333399;\"> Clustered             yes<\/span>?<\/strong><\/p>\n<p><strong>creating our LV:<br \/>\n<\/strong><span style=\"color: #333399;\"> [root@node1 i386]# lvcreate -L1.9G --name my-lv my-vol<br \/>\nRounding up size to full physical extent 1.90 GB<br \/>\n<span style=\"color: #ff0000;\"> Error locking on node node2.test.lab: device-mapper: reload ioctl failed: Invalid argument<\/span><br \/>\nFailed to activate new LV.<\/span><\/p>\n<p><strong>(Don&#8217;t worry about this locking error for now; we will deal with it below, when we mount the filesystem on node2.)<\/strong><\/p>\n<p><strong>creating the GFS:<br \/>\n<\/strong><span style=\"color: #333399;\"> [root@node1 i386]# gfs_mkfs -p lock_dlm -t my-cluster:www -j 2 \/dev\/my-vol\/my-lv<br \/>\nThis will destroy any data on \/dev\/my-vol\/my-lv.<\/span><\/p>\n<p><span style=\"color: #333399;\">Are you sure you want to proceed? 
[y\/n] y<\/span><\/p>\n<p><span style=\"color: #333399;\">Device:                    \/dev\/my-vol\/my-lv<br \/>\nBlocksize:                 4096<br \/>\nFilesystem Size:           433092<br \/>\nJournals:                  2<br \/>\nResource Groups:           8<br \/>\nLocking Protocol:          lock_dlm<br \/>\nLock Table:                my-cluster:www<\/span><\/p>\n<p><span style=\"color: #333399;\">Syncing&#8230;<br \/>\nAll Done<\/span><\/p>\n<p><strong>start the gfs service:<br \/>\n<\/strong><span style=\"color: #333399;\"> [root@node1 i386]# \/etc\/init.d\/gfs start<\/span><\/p>\n<p><span style=\"color: #333399;\">mount it on the first node:<br \/>\n[root@node1 i386]# mount -t gfs \/dev\/my-vol\/my-lv \/www<br \/>\n[root@node1 i386]# df -h<br \/>\nFilesystem            Size  Used Avail Use% Mounted on<br \/>\n\/dev\/mapper\/VolGroup00-LogVol00<br \/>\n9.1G  3.4G  5.3G  40% \/<br \/>\n\/dev\/xvda1             99M   17M   78M  18% \/boot<br \/>\ntmpfs                 129M     0  129M   0% \/dev\/shm<br \/>\n\/dev\/my-vol\/my-lv     1.7G   20K  1.7G   1% \/www<br \/>\n[root@node1 i386]# ls -lth \/www\/<br \/>\ntotal 0<\/span><\/p>\n<p><strong>mount it on the second node:<br \/>\nNow you have to wait until the initial device synchronization finishes; to check:<br \/>\n<\/strong><span style=\"color: #333399;\"> [root@node2 i386]# cat \/proc\/drbd<br \/>\nversion: 8.2.5 (api:88\/proto:86-88)<br \/>\nGIT-hash: 9faf052fdae5ef0c61b4d03890e2d2eab550610c build by root@node2, 2008-05-23 12:58:18<br \/>\n0: cs:SyncTarget st:Secondary\/Primary ds:Inconsistent\/UpToDate C r&#8212;<br \/>\nns:0 nr:1970404 dw:1970404 dr:0 al:0 bm:119 lo:0 pe:0 ua:0 ap:0<br \/>\n[=================&gt;..] 
sync&#8217;ed: 93.4% (143276\/2097052)K<br \/>\nfinish: 0:08:57 speed: 252 (232) K\/sec<br \/>\nresync: used:0\/31 hits:976756 misses:120 starving:0 dirty:0 changed:120<br \/>\nact_log: used:0\/127 hits:0 misses:0 starving:0 dirty:0 changed:0<\/span><\/p>\n<p><strong>After it finishes, we need to promote the device to primary before we can mount it:<br \/>\n<\/strong><span style=\"color: #333399;\"> [root@node2 i386]# drbdadm primary r0<br \/>\n[root@node2 i386]# cat \/proc\/drbd<br \/>\nversion: 8.2.5 (api:88\/proto:86-88)<br \/>\nGIT-hash: 9faf052fdae5ef0c61b4d03890e2d2eab550610c build by root@node2, 2008-05-23 12:58:18<br \/>\n0: cs:Connected st:Primary\/Primary ds:UpToDate\/UpToDate C r&#8212;<br \/>\nns:0 nr:2113680 dw:2113680 dr:0 al:0 bm:128 lo:0 pe:0 ua:0 ap:0<br \/>\nresync: used:0\/31 hits:1048386 misses:128 starving:0 dirty:0 changed:128<br \/>\nact_log: used:0\/127 hits:0 misses:0 starving:0 dirty:0 changed:0<\/span><\/p>\n<p><strong>Notice &#8220;<span style=\"color: #333399;\">st:Primary\/Primary<\/span>&#8221;: it&#8217;s exactly what we want! \ud83d\ude42<\/strong><\/p>\n<p><strong>Now check the volume group:<\/strong><br \/>\n<span style=\"color: #333399;\"> [root@node2 ~]# vgscan<br \/>\nReading all physical volumes.  
This may take a while&#8230;<br \/>\nFound volume group &#8220;VolGroup00&#8221; using metadata type lvm2<br \/>\nFound volume group &#8220;my-vol&#8221; using metadata type lvm2<\/span><\/p>\n<p><strong>mount it!<br \/>\n<\/strong><span style=\"color: #333399;\"> [root@node2 i386]# \/etc\/init.d\/gfs start<br \/>\n[root@node2 i386]# mkdir \/www<br \/>\n[root@node2 i386]# mount -t gfs \/dev\/my-vol\/my-lv \/www<br \/>\n\/sbin\/mount.gfs: can&#8217;t open \/dev\/my-vol\/my-lv: No such file or directory<\/span><\/p>\n<p><strong>Oops! Do you remember the error &#8220;<\/strong><span style=\"color: #ff0000;\">Error locking on node node2.test.lab: device-mapper: reload ioctl failed: Invalid argument<\/span><strong>&#8221; from when we created our LV on the first node? The fix is easy: restart clvmd on node2 and try mounting again:<\/strong><\/p>\n<p><span style=\"color: #333399;\">[root@node2 i386]# \/etc\/init.d\/clvmd restart<br \/>\nDeactivating VG my-vol:   0 logical volume(s) in volume group &#8220;my-vol&#8221; now active<br \/>\n[  OK  ]<br \/>\nStopping clvm:                                             [  OK  ]<br \/>\nStarting clvmd:                                            [  OK  ]<br \/>\nActivating VGs  2 logical volume(s) in volume group &#8220;VolGroup00&#8221; now active<br \/>\n1 logical volume(s) in volume group &#8220;my-vol&#8221; now active<br \/>\n[  OK  ]<br \/>\n[root@node2 i386]# mount -t gfs \/dev\/my-vol\/my-lv \/www<\/span><\/p>\n<p><strong>Now let&#8217;s touch some data:<br \/>\n<\/strong><span style=\"color: #333399;\"> [root@node2 i386]# touch \/www\/hi<br \/>\n[root@node2 i386]# ls -lth \/www\/<br \/>\ntotal 8.0K<br \/>\n-rw-r&#8211;r&#8211; 1 root root 0 May 23 16:35 hi<br \/>\nand from node1:<br \/>\n[root@node1 i386]# ls -lth \/www\/<br \/>\ntotal 8.0K<br \/>\n-rw-r&#8211;r&#8211; 1 root root 0 May 23 16:35 hi<\/span><\/p>\n<p><strong>Cool, right? 
Try it yourself&#8230;<\/strong><\/p>","protected":false},"excerpt":{"rendered":"<p>My goal by using DRBD as Primary\/Primary with GFS is to load balance a http service, my servers looks like the following:<br \/>\n[IMAGE]<br \/>\nLoad Balancer &#8211; GFS &#8211; Primary-Primary<\/p>\n<p>i use the GFS partition as document-root for my webserver (Apache).<br \/>\nmaybe it&#8217;s better to use SAN as storage but it&#8217;s so expensive, another solutions maybe iSCSI or GNBD but also it&#8217;s need more servers which needs extra money \ud83d\ude42<br \/>\nmaybe in the future i will implement it using SAN, iSCSI or GNBD but for now it&#8217;s good with DRBD and GFS as two nodes with load balancer and it&#8217;s fast enough.<\/p>\n<p>for testing and preparing this quick howto i used Xen to create 2 virtual machine and centos 5 as OS. the partition that i want to use as GFS is named xvdb1, make sure that your partition don&#8217;t contain any data you want (it will be destroyed)<br \/>\nto destroy the partition i 
used<\/p>\n","protected":false},"author":2,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_monsterinsights_skip_tracking":false,"_monsterinsights_sitenote_active":false,"_monsterinsights_sitenote_note":"","_monsterinsights_sitenote_category":0,"ngg_post_thumbnail":0,"footnotes":""},"categories":[19,4],"tags":[],"class_list":["post-37","post","type-post","status-publish","format-standard","hentry","category-drbd","category-linux-cluster"],"_links":{"self":[{"href":"https:\/\/blog.sqawasmi.com\/index.php\/wp-json\/wp\/v2\/posts\/37","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/blog.sqawasmi.com\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/blog.sqawasmi.com\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/blog.sqawasmi.com\/index.php\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/blog.sqawasmi.com\/index.php\/wp-json\/wp\/v2\/comments?post=37"}],"version-history":[{"count":2,"href":"https:\/\/blog.sqawasmi.com\/index.php\/wp-json\/wp\/v2\/posts\/37\/revisions"}],"predecessor-version":[{"id":53,"href":"https:\/\/blog.sqawasmi.com\/index.php\/wp-json\/wp\/v2\/posts\/37\/revisions\/53"}],"wp:attachment":[{"href":"https:\/\/blog.sqawasmi.com\/index.php\/wp-json\/wp\/v2\/media?parent=37"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/blog.sqawasmi.com\/index.php\/wp-json\/wp\/v2\/categories?post=37"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/blog.sqawasmi.com\/index.php\/wp-json\/wp\/v2\/tags?post=37"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}