How to configure a GFS2 partition in Red Hat Cluster

I assume you are already familiar with High Availability Clusters and their architecture. In another article I have shared a step-by-step guide to configure a High Availability Cluster on Linux.

GFS2 is based on a 64-bit architecture, which can theoretically accommodate an 8 EB file system. However, the current supported maximum size of a GFS2 file system is 100 TB on 64-bit hardware and 16 TB on 32-bit hardware.

NOTE:

Red Hat does not support using GFS2 for cluster file system deployments greater than 16 nodes.

The package required for setting up a GFS2 file system on RHEL 5 and RHEL 6 is gfs2-utils:

# yum -y install gfs2-utils
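You can confirm the package is installed by querying the rpm database:

# rpm -q gfs2-utils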
NOTE:

Although a GFS2 file system can be implemented in a standalone system or as part of a cluster configuration, for the Red Hat Enterprise Linux 6 release Red Hat does not support the use of GFS2 as a single-node file system. Red Hat does support a number of high-performance single-node file systems which are optimized for single-node use and thus generally have lower overhead than a cluster file system. Red Hat recommends using these file systems in preference to GFS2 in cases where only a single node needs to mount the file system.

Proceed with the steps below on the shared storage. First, identify the shared disk:

# fdisk -l
Disk /dev/sdc: 11.1 GB, 11106516992 bytes
64 heads, 32 sectors/track, 10592 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000


Before setting up GFS2

File system name (Using -t argument)
Determine a unique name for each file system. The name must be unique among all lock_dlm file systems across the cluster, and it is passed to mkfs.gfs2 as part of the lock table name in the form clustername:fsname (as shown in the formatting example below).
 
Journals (Using -j argument)
Determine the number of journals for your GFS2 file systems. One journal is required for each node that mounts a GFS2 file system. GFS2 allows you to add journals dynamically at a later point as additional servers mount a file system.
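For example, if a third node later joins the cluster, a journal can be added to the mounted file system with gfs2_jadd (a sketch using the /GFS mount point configured later in this article):

# gfs2_jadd -j 1 /GFS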
 
Journal Size (Using -J argument; default 128 MB)
When you run the mkfs.gfs2 command to create a GFS2 file system, you may specify the size of the journals. If you do not specify a size, it will default to 128 MB, which should be optimal for most applications.
Some system administrators might think that 128 MB is excessive and be tempted to reduce the size of the journal to the minimum of 8 MB or a more conservative 32 MB. While that might work, it can severely impact performance. Like many journaling file systems, every time GFS2 writes metadata, the metadata is committed to the journal before it is put into place. This ensures that if the system crashes or loses power, you will recover all of the metadata when the journal is automatically replayed at mount time. However, it does not take much file system activity to fill an 8 MB journal, and when the journal is full, performance slows because GFS2 has to wait for writes to the storage.
It is generally recommended to use the default journal size of 128 MB. If your file system is very small (for example, 5 GB), having a 128 MB journal might be impractical. If you have a larger file system and can afford the space, using 256 MB journals might improve performance.
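For example, to format with 256 MB journals instead of the default, pass -J at creation time (a sketch using the same cluster, file system and device names as the formatting example below):

# mkfs.gfs2 -p lock_dlm -t cluster1:GFS -j 2 -J 256 /dev/sdc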
 
Block Size (Using -b argument)
The mkfs.gfs2 command attempts to estimate an optimal block size based on device topology. In general, 4K blocks are the preferred block size because 4K is the default page size (memory) for Linux. If your block size is 4K, the kernel has to do less work to manipulate the buffers.
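If you prefer to pin the block size rather than rely on the estimate, it can be set explicitly with -b (again a sketch based on the formatting example below):

# mkfs.gfs2 -p lock_dlm -t cluster1:GFS -j 2 -b 4096 /dev/sdc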
 
Size and Number of Resource Groups (Using -r argument)
When a GFS2 file system is created with the mkfs.gfs2 command, it divides the storage into uniform slices known as resource groups. It attempts to estimate an optimal resource group size (ranging from 32MB to 2GB). You can override the default with the -r option of the mkfs.gfs2 command.
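For example, to override the estimate with 512 MB resource groups, which falls within the 32 MB to 2 GB range mentioned above (a sketch based on the formatting example below):

# mkfs.gfs2 -p lock_dlm -t cluster1:GFS -j 2 -r 512 /dev/sdc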
 
Locking Mode (Using -p argument)
LockProtoName is the name of the locking protocol to use. Acceptable locking protocols are lock_dlm (for shared storage in a cluster) and lock_nolock (if you are using GFS2 as a local file system on one node only). If this option is not specified, the lock_dlm protocol will be assumed.
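As a sketch, a single-node (local) GFS2 file system could be formatted with lock_nolock and a single journal, keeping in mind the Red Hat support note above:

# mkfs.gfs2 -p lock_nolock -j 1 /dev/sdc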

Formatting a partition with GFS2

Run the command below from any one node of the cluster on the partition you want to configure as GFS2. The file system only needs to be created once; since the storage is shared, all nodes will see it.
Formatting filesystem: GFS2
Locking Protocol: lock_dlm
Cluster Name: cluster1
FileSystem name: GFS
Journals: 2
Partition: /dev/sdc

[root@node1 ~]# mkfs.gfs2 -p lock_dlm -t cluster1:GFS -j 2  /dev/sdc
This will destroy any data on /dev/sdc.
It appears to contain: Linux GFS2 Filesystem (blocksize 4096, lockproto lock_dlm)
Are you sure you want to proceed? [y/n] y
Device:                    /dev/sdc
Blocksize:                 4096
Device Size                10.34 GB (2711552 blocks)
Filesystem Size:           10.34 GB (2711552 blocks)
Journals:                  2
Resource Groups:           42
Locking Protocol:          "lock_dlm"
Lock Table:                "cluster1:GFS"
UUID:                      82e1e74f-74d4-213a-acb5-d371526e0d34

Add the mount details to /etc/fstab on all the nodes of the cluster, because as soon as the gfs2 service starts it looks for GFS2 entries inside fstab and mounts them.

# less /etc/fstab
/dev/sdc                /GFS                    gfs2    defaults        0 0
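Device names such as /dev/sdc can change across reboots on shared storage, so you may prefer to mount by UUID instead. The UUID is shown in the mkfs.gfs2 output above and can be confirmed with blkid:

# blkid /dev/sdc

The equivalent fstab entry would then look like this:

UUID=82e1e74f-74d4-213a-acb5-d371526e0d34 /GFS                    gfs2    defaults        0 0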


Start the gfs2 service on all the nodes of the cluster

# /etc/init.d/gfs2 start
Mounting GFS2 filesystem (/GFS):                           [  OK  ]
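To have the file system mounted automatically at boot, you can also enable the init script on every node:

# chkconfig gfs2 on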


Verify the setup on both nodes of the cluster

[root@node1 ~]# cd /GFS/
[root@node1 GFS]# ls
[root@node1 GFS]# touch test
[root@node2 ~]# cd /GFS/
[root@node2 GFS]# ls
test

As you can see, the changes are reflected on both nodes of the cluster.
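You can additionally confirm the file system type and mount options on each node:

# mount | grep gfs2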
References:
Global File System 2
 
Related Articles
Overview of services used in Red Hat Cluster
Configure Red Hat Cluster using VMware, Quorum Disk, GFS2, Openfiler