
    Wednesday, October 08, 2014

    How to configure a Clustered Samba share using ctdb in Red Hat Cluster

    As of the Red Hat Enterprise Linux 6.2 release, the Red Hat High Availability Add-On provides support for running Clustered Samba in an active/active configuration.

    This requires that you install and configure CTDB on all nodes in a cluster, which you use in conjunction with GFS2 clustered file systems.

    NOTE: Red Hat Enterprise Linux 6 supports a maximum of four nodes running clustered Samba.

    Course of action to be performed
    • Create a multi-node cluster (for this article I will be using 2 nodes)
    • Create 2 logical volumes, one each for samba and ctdb, formatted with GFS2
    • Create one partition for qdisk
    • Start the cluster services
    • Install Pre-requisite Packages
    • Configure ctdb and samba
    • Start ctdb services
    ....and you are all set

    Overall pre-requisites (server)
    • 1 server for openfiler (192.168.1.8)
    • 2 servers for CentOS 6.5 (192.168.1.5 and 192.168.1.6)
    • 1 server for Conga server (192.168.1.7)
    Overall pre-requisites (rpms)
    • High Availability Management (rpm group)
    • High Availability (rpm group)
    • iSCSI Storage Client (rpm group)
    • gfs2-utils
    • ctdb
    • samba
    • samba-client
    • samba-winbind
    • samba-winbind-clients
    This was a brief overview of what we are going to perform, but it is going to take a while to reach the end of this article, so let's start with creating our cluster.

    Creating 2 node cluster

    For this article I am using CentOS 6.5, which is fully compatible with Red Hat, so the packages and commands used here are the same as in Red Hat Linux.

    Node 1: 192.168.1.5 (node1.example)
    Node 2: 192.168.1.6 (node2.example)
    Mgmt Node: 192.168.1.7 (node3.mgmt)
    Openfiler: 192.168.1.8 (of.storage)


    IMPORTANT NOTE: In this article I will try to stick to the topic of configuring clustered Samba, so I might skip a few steps of the Red Hat Cluster configuration. In case you face any difficulty understanding any of the cluster setup steps, please follow the below article
    Configure Red Hat Cluster using VMware, Quorum Disk, GFS2, Openfiler

    Use the above link to "Configure iSCSI Target using Openfiler"

    Next, continue by installing the below package groups on both the nodes
    [root@node1 ~]# yum groupinstall "iSCSI Storage Client" "High Availability"

    [root@node2 ~]# yum groupinstall "iSCSI Storage Client" "High Availability"
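
    The remaining packages from the prerequisite list (gfs2-utils, ctdb, samba and the winbind packages) can be pulled in the same way on both the nodes, for example (ctdb and samba also get installed again later in this article, which does no harm):
    [root@node1 ~]# yum -y install gfs2-utils ctdb samba samba-client samba-winbind samba-winbind-clients

    [root@node2 ~]# yum -y install gfs2-utils ctdb samba samba-client samba-winbind samba-winbind-clients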

    Add iSCSI targets using the iSCSI initiator

    [root@node1 ~]# iscsiadm -m discovery -t sendtargets -p 192.168.1.8
    Starting iscsid:                                           [  OK  ]
    192.168.1.8:3260,1 iqn.2006-01.com.openfiler:samba

    [root@node1 ~]# /etc/init.d/iscsi start
    Starting iscsi:                                            [  OK  ]

    [root@node2 ~]# iscsiadm -m discovery -t sendtargets -p 192.168.1.8
    Starting iscsid:                                           [  OK  ]
    192.168.1.8:3260,1 iqn.2006-01.com.openfiler:samba

    [root@node2 ~]# /etc/init.d/iscsi start
    Starting iscsi:                                            [  OK  ]

    Configure Logical Volume

    NOTE: Perform the below steps on any one of the nodes, as the changes will be reflected on the other nodes of the cluster.

    As per my configuration in Openfiler, I have two LUNs, where /dev/sdb is for the quorum disk and /dev/sdc will be used for the samba share and ctdb
    # fdisk -l
    Disk /dev/sdb: 1275 MB, 1275068416 bytes
    40 heads, 61 sectors/track, 1020 cylinders
    Units = cylinders of 2440 * 512 = 1249280 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x00000000

    Disk /dev/sdc: 11.3 GB, 11307843584 bytes
    64 heads, 32 sectors/track, 10784 cylinders
    Units = cylinders of 2048 * 512 = 1048576 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x00000000

    Let us create a partition from /dev/sdc
    [root@node2 ~]# fdisk /dev/sdc
    Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
    Building a new DOS disklabel with disk identifier 0x36e0095b.
    Changes will remain in memory only, until you decide to write them.
    After that, of course, the previous content won't be recoverable.
    
    Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)
    
    WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
             switch off the mode (command 'c') and change display units to
             sectors (command 'u').
    
    Command (m for help): n
    Command action
       e   extended
       p   primary partition (1-4)
    p
    Partition number (1-4): 1
    First cylinder (1-10784, default 1): 1
    Last cylinder, +cylinders or +size{K,M,G} (1-10784, default 10784):[Press Enter]
    Using default value 10784
    
    Command (m for help): p
    
    Disk /dev/sdc: 11.3 GB, 11307843584 bytes
    64 heads, 32 sectors/track, 10784 cylinders
    Units = cylinders of 2048 * 512 = 1048576 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x36e0095b
    
       Device Boot      Start         End      Blocks   Id  System
    /dev/sdc1               1       10784    11042800   83  Linux
    
    Command (m for help): t
    Selected partition 1
    Hex code (type L to list codes): 8e
    Changed system type of partition 1 to 8e (Linux LVM)
    
    Command (m for help): w
    The partition table has been altered!
    
    Calling ioctl() to re-read partition table.
    Syncing disks.
    [root@node2 ~]# partprobe /dev/sdc
    Create 2 logical volumes out of /dev/sdc1
    [root@node2 ~]# pvcreate /dev/sdc1
      Physical volume "/dev/sdc1" successfully created

    [root@node2 ~]# vgcreate samba_vg /dev/sdc1
      Volume group "samba_vg" successfully created

    [root@node2 ~]# lvcreate -L 1G -n ctdb samba_vg
      Logical volume "ctdb" created

    [root@node2 ~]# lvcreate -L 9G -n samba  samba_vg
      Logical volume "samba" created

    Configure GFS2 filesystem

    To understand the syntax of the command used here, refer to the article on configuring a GFS2 partition in Red Hat Cluster, linked under Related Articles below.

    [root@node2 mnt]# mkfs.gfs2 -p lock_dlm -t smbcluster:samba -j 2 /dev/samba_vg/samba
    This will destroy any data on /dev/samba_vg/samba.
    It appears to contain: symbolic link to `../dm-3'

    Are you sure you want to proceed? [y/n] y

    Device:                    /dev/samba_vg/samba
    Blocksize:                 4096
    Device Size                9.00 GB (2359296 blocks)
    Filesystem Size:           9.00 GB (2359294 blocks)
    Journals:                  2
    Resource Groups:           36
    Locking Protocol:          "lock_dlm"
    Lock Table:                "smbcluster:samba"
    UUID:                      062df74f-218a-bd22-34f8-ea58e7e3316a

    [root@node2 mnt]# mkfs.gfs2 -p lock_dlm -t smbcluster:ctdb -j 2 /dev/samba_vg/ctdb
    This will destroy any data on /dev/samba_vg/ctdb.
    It appears to contain: symbolic link to `../dm-2'

    Are you sure you want to proceed? [y/n] y

    Device:                    /dev/samba_vg/ctdb
    Blocksize:                 4096
    Device Size                1.00 GB (262144 blocks)
    Filesystem Size:           1.00 GB (262142 blocks)
    Journals:                  2
    Resource Groups:           4
    Locking Protocol:          "lock_dlm"
    Lock Table:                "smbcluster:ctdb"
    UUID:                      8d243aeb-983b-9126-6678-47df6a2b93fe

    Configure Quorum disk

    NOTE: Perform this command on any one of the nodes and the change will be reflected on the other nodes. Also remember the label name used, as you will have to use the same label while configuring the qdisk for the cluster
    [root@node2 mnt]# mkqdisk -c /dev/sdb -l quorum
    mkqdisk v3.0.12.1

    Writing new quorum disk label 'quorum' to /dev/sdb.
    WARNING: About to destroy all data on /dev/sdb; proceed [N/y] ? y
    Initializing status block for node 1...
    Initializing status block for node 2...
    Initializing status block for node 3...
    Initializing status block for node 4...
    Initializing status block for node 5...
    Initializing status block for node 6...
    Initializing status block for node 7...
    Initializing status block for node 8...
    Initializing status block for node 9...
    Initializing status block for node 10...
    Initializing status block for node 11...
    Initializing status block for node 12...
    Initializing status block for node 13...
    Initializing status block for node 14...
    Initializing status block for node 15...
    Initializing status block for node 16...

    Assign password to ricci

    [root@node1 ~]# passwd ricci
    Changing password for user ricci.
    New password:
    Retype new password:
    passwd: all authentication tokens updated successfully.

    [root@node2 ~]# passwd ricci
    Changing password for user ricci.
    New password:
    Retype new password:
    passwd: all authentication tokens updated successfully.

    Update hosts file

    Update your hosts file (/etc/hosts) on all the nodes, including the management node
    #node1
    192.168.1.5     node1.example   node1

    #node2
    192.168.1.6     node2.example   node2

    #node3
    192.168.1.7     node3.mgmt      node3

    #of
    192.168.1.8     of.storage      of

    Restart the ricci service on both the nodes so that the new password takes effect
    [root@node2 ~]# /etc/init.d/ricci start
    Starting oddjobd:                                          [  OK  ]
    generating SSL certificates...  done
    Generating NSS database...  done
    Starting ricci:                                            [  OK  ]

    From here you will again have to follow the below article for the next set of steps
    Configure Red Hat Cluster using VMware, Quorum Disk, GFS2, Openfiler

    • Configuring a management node with conga
    • Accessing luci console on browser
    • Creating a Cluster using luci console
    • Adding nodes to the cluster
    • Configuring a qdisk
    • Starting the cluster

    Once all the above steps are successfully completed, verify your cluster state
    [root@node1 ~]# clustat
    Cluster Status for mycluster @ Wed Oct  8 11:58:42 2014
    Member Status: Quorate

     Member Name                             ID   Status
     ------ ----                             ---- ------
     node1                                       1 Online, Local
     node2                                       2 Online
     /dev/block/8:48                             0 Online, Quorum Disk

    Install and configure samba and ctdb

    # yum -y install ctdb samba
    Create the respective directories for mounting the samba and ctdb file systems on both the nodes
    [root@node2 ~]# mkdir /mnt/ctdb
    [root@node2 ~]# mkdir /mnt/samba

    [root@node1 ~]# mkdir /mnt/samba
    [root@node1 ~]# mkdir /mnt/ctdb

    Add the below lines in /etc/fstab on all the nodes
    /dev/samba_vg/samba     /mnt/samba              gfs2    defaults        0 0
    /dev/samba_vg/ctdb      /mnt/ctdb               gfs2    defaults        0 0

    Start the gfs2 service on both the nodes to mount the respective file systems as specified in the fstab file
    [root@node2 ~]# service gfs2 start
    Mounting GFS2 filesystem (/mnt/samba):                     [  OK  ]
    Mounting GFS2 filesystem (/mnt/ctdb):                      [  OK  ]

    [root@node1 ~]# service gfs2 start
    Mounting GFS2 filesystem (/mnt/samba):                     [  OK  ]
    Mounting GFS2 filesystem (/mnt/ctdb):                      [  OK  ]
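
    To quickly confirm that both GFS2 file systems are mounted on each node, you can check the mount table and disk usage:
    # mount | grep gfs2
    # df -h /mnt/samba /mnt/ctdb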

    Add the below entries to smb.conf on all the nodes
    IMPORTANT NOTE: Make sure smb.conf is the same on all the nodes
    # vi /etc/samba/smb.conf
    [global]
            guest ok = yes
            clustering = yes
            netbios name = example

    [profiles]
            path = /profiles
            share modes = yes
            guest only = yes
            browseable = yes
            writable = yes
            guest ok = yes
            create mode = 0777
            directory mode = 0777
            ea support = yes

    Create the share path as mentioned in smb.conf for all the nodes
    # mkdir /profiles
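
    Since the share allows guest access with a create mode of 0777, you will most likely also want the directory itself to be world-writable (adjust the permissions to your own requirements):
    # chmod 777 /profiles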

    Configure CTDB

    Pre-requisites:
    Mount the cluster filesystem on all the Linux boxes that will form your cluster.

    The CTDB configuration file is located at /etc/sysconfig/ctdb. The mandatory fields that must be configured for CTDB operation are as follows:
    CTDB_NODES
    CTDB_PUBLIC_ADDRESSES
    CTDB_RECOVERY_LOCK
    CTDB_MANAGES_SAMBA (must be enabled)
    CTDB_MANAGES_WINBIND (must be enabled if running on a member server)

    Uncomment and set the above mentioned parameters inside /etc/sysconfig/ctdb as shown below, on both the nodes
    # vi /etc/sysconfig/ctdb
    CTDB_RECOVERY_LOCK="/mnt/ctdb/.ctdb.lock"
    CTDB_NODES=/etc/ctdb/nodes
    CTDB_PUBLIC_ADDRESSES=/etc/ctdb/public_addresses
    CTDB_MANAGES_SAMBA=yes
    CTDB_MANAGES_WINBIND=yes

    CTDB_NODES
    This file needs to be created and should contain a list of the private IP addresses that the CTDB daemons will use in your cluster. One IP address for each node in the cluster.

    This should be a private non-routable subnet which is only used for internal cluster traffic. This file must be the same on all nodes in the cluster.

    Make sure that these IP addresses are brought up automatically when the cluster node boots and that each node can ping every other node. Create the file with one IP address per line, as in the following example (on all the nodes):
    # vi /etc/ctdb/nodes
    192.168.1.5
    192.168.1.6

    CTDB_PUBLIC_ADDRESSES
    Each node in a CTDB cluster contains a list of public addresses which that particular node can host.
    While running, the CTDB cluster will assign each public address that exists in the entire cluster to one node, which will then host that public address.

    These are the addresses that the SMBD daemons and other services will bind to and which clients will use to connect to the cluster.

    The contents of the /etc/ctdb/public_addresses file on each node are as follows:
    NOTE: On node1 the ethernet interface used is eth2, and on node2 it is eth0; you can check yours using the ifconfig command
    [root@node1 ~] vi /etc/ctdb/public_addresses
    192.168.1.4/24 eth2
    192.168.1.7/24 eth2

    [root@node2 ~] vi /etc/ctdb/public_addresses
    192.168.1.4/24 eth0
    192.168.1.7/24 eth0

    CTDB_RECOVERY_LOCK
    This parameter specifies the lock file that the CTDB daemons use to arbitrate which node is acting as a recovery master.
    This file MUST be held on shared storage so that all CTDB daemons in the cluster will access/lock the same file.

    You must specify this parameter.
    There is no default for this parameter.

    IMPORTANT NOTE: Make sure the samba and winbind services are not running, as once we start ctdb it will be responsible for running all the required samba services
    # chkconfig smb off
    # chkconfig nmb off
    # chkconfig winbind off

    [root@node1 ~]# /etc/init.d/ctdb restart
    Shutting down ctdbd service:   Warning: ctdbd not running !
    Starting ctdbd service:                                    [  OK  ]
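
    Optionally, enable ctdb at boot on both the nodes so that it comes up along with the cluster (here I start it manually):
    # chkconfig ctdb on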

    Testing ctdb

    Once your cluster is up and running, you may wish to know how to test that it is functioning correctly. The following tests may help with that.

    The ctdb package comes with a utility called ctdb that can be used to view the behaviour of the ctdb cluster.
    If you run it with no options it will provide some terse usage information. The most commonly used commands are:
    •  ctdb status
    •  ctdb ip
    •  ctdb ping
    ctdb status
    The status command provides basic information about the cluster and the status of the nodes. When you run it you will get output like:
    [root@node1 ~]# ctdb status
    Number of nodes:2
    pnn:0 192.168.1.5      OK (THIS NODE)
    pnn:1 192.168.1.6      OK
    Generation:1976870623
    Size:2
    hash:0 lmaster:0
    hash:1 lmaster:1
    Recovery mode:NORMAL (0)
    Recovery master:0

    Same on the other node
    [root@node2 mnt]# /etc/init.d/ctdb restart
    Shutting down ctdbd service:   Warning: ctdbd not running !
    Starting ctdbd service:                                    [  OK  ]

    [root@node2 mnt]# ctdb status
    Number of nodes:2
    pnn:0 192.168.1.5      OK
    pnn:1 192.168.1.6      OK (THIS NODE)
    Generation:1976870623
    Size:2
    hash:0 lmaster:0
    hash:1 lmaster:1
    Recovery mode:NORMAL (0)
    Recovery master:0

    The important parts are the per-node state and the recovery mode. The OK against each node tells us that all the nodes are in a healthy state.
    It also tells us that the recovery mode is NORMAL, which means that the cluster has finished a recovery and is running in a normal, fully operational state.

    The recovery state will briefly change to "RECOVERY" when there has been a node failure or something is wrong with the cluster.

    If the cluster remains in RECOVERY state for very long (many seconds) there might be something wrong with the configuration. See /var/log/log.ctdb.
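
    To watch what ctdb is doing during a recovery, you can simply follow that log file:
    # tail -f /var/log/log.ctdb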

    ctdb ip
    This command prints the current status of the public IP addresses and which physical node is currently serving each IP.
    [root@node1 ~]# ctdb ip
    Public IPs on node 0
    192.168.1.4 node[1] active[] available[eth0] configured[eth0]
    192.168.1.7 node[0] active[eth0] available[eth0] configured[eth0]

    [root@node2 mnt]# ctdb ip
    Public IPs on node 1
    192.168.1.4 node[1] active[eth2] available[eth2] configured[eth2]
    192.168.1.7 node[0] active[] available[eth2] configured[eth2]

    ctdb ping
    This command tries to "ping" each of the CTDB daemons in the cluster.
    [root@node1 ~]# ctdb ping -n all
    response from 0 time=0.000114 sec  (8 clients)
    response from 1 time=0.002473 sec  (8 clients)

    [root@node2 mnt]# ctdb ping -n all
    response from 0 time=0.000976 sec  (7 clients)
    response from 1 time=0.000108 sec  (9 clients)

    Verify your share

    [root@node2 mnt]# smbclient -L localhost -U%
    Domain=[WORKGROUP] OS=[Unix] Server=[Samba 3.6.9-169.el6_5]

            Sharename       Type      Comment
            ---------       ----      -------
            profiles        Disk
            IPC$            IPC       IPC Service (Samba 3.6.9-169.el6_5)
    Domain=[WORKGROUP] OS=[Unix] Server=[Samba 3.6.9-169.el6_5]

            Server               Comment
            ---------            -------

            Workgroup            Master
            ---------            -------

    Creating user for accessing samba share

    # useradd deepak
    # smbpasswd -a deepak
    New SMB password:
    Retype new SMB password:
    Added user deepak.

    Re-try with user deepak
    # smbclient -L localhost -U deepak
    Enter deepak's password:
    [deepak's password]
    Domain=[WORKGROUP] OS=[Unix] Server=[Samba 3.6.9-169.el6_5]

            Sharename       Type      Comment
            ---------       ----      -------
            profiles        Disk
            IPC$            IPC       IPC Service (Samba 3.6.9-169.el6_5)
    Domain=[WORKGROUP] OS=[Unix] Server=[Samba 3.6.9-169.el6_5]

            Server               Comment
            ---------            -------

            Workgroup            Master
            ---------            -------

    The same can be verified from the windows machine.
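
    From a Linux client the share can also be reached through the CTDB public (virtual) IP, for example using the deepak user created above (the /mnt/test mount point is just an example, create it first):
    # smbclient //192.168.1.4/profiles -U deepak
    # mount -t cifs //192.168.1.4/profiles /mnt/test -o username=deepak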

    Let me know your successes and failures.

    Related Articles:
    Overview of services used in Red Hat Cluster
    Configure Red Hat Cluster using VMware, Quorum Disk, GFS2, Openfiler
    How to configure GFS2 partition in Red Hat Cluster
    How to delete an iscsi-target from openfiler and Linux
    Configuring iSCSI storage using openfiler
    How to install openfiler



    Comments:

    1. Hi,

      I was following your tutorial and I don't get where the 192.168.1.4 IP comes from.

      On both of my machines I have one eth0, 192.168.1.5 and 192.168.1.6. 192.168.1.7 is conga server.

      What's the 192.168.1.4? Could you explain better?

      Thanks

      Replies
      1. That is a virtual IP which the clients will use to access the nodes; since you are configuring a clustered share, it is very much required. The share can be accessed by the clients through 192.168.1.4 as long as at least one of the nodes is alive, but if both the nodes go down then you won't be able to access the share.

      2. Hello Deepak, thank you for your quick reply. I have one NIC on both nodes, which is eth0. In /etc/ctdb/public_addresses, should I use:

        192.168.1.4/24 eth0
        192.168.1.7/24 eth0

        Even if the eth0 NIC has the 192.168.1.5 address assigned to it?

        Thanks once again.

      3. The public_addresses file has much more to it than what I have explained, but yes, for internal usage you can use 192.168.1.5 unless it is being used by some other device. For a better understanding of this file you can visit the official Samba wiki page.

    2. You have created the /profiles location on each of the nodes for the samba share, but the common locations for both the servers are /mnt/samba and /mnt/ctdb.

      How does data placed in /profiles replicate to these iSCSI blocks?
