
    Friday, September 26, 2014

    How to resize a software RAID partition in Linux

    In my last article I showed you the steps to configure software RAID 1 in Linux. In this article I will show you how to grow and shrink a RAID array by adding and removing member partitions.

    While configuring RAID it is always advised to add a spare partition to your RAID device, so that in case of a hard disk failure the spare can take its place.

    Let's add a new virtual hard disk to our machine.

    To keep this article strictly on the roadmap, I will assume that you have already created a new partition and changed its type from the default to "Linux RAID". If not, you can follow my last article, where I explained the procedure in detail.
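    For reference, the partition-type step can be done non-interactively as well. A minimal sketch, assuming the new disk is /dev/sdd as in this article's setup; the sfdisk script format is from modern util-linux, so verify it on your system before running anything (the command is only printed here, since it needs root and a real disk):

```shell
# Hypothetical sketch, not executed here: create one whole-disk partition
# of type "fd" (Linux RAID autodetect) with sfdisk.
DISK=/dev/sdd            # assumed device name from this article's setup
SPEC='type=fd'           # sfdisk script line: one partition, whole disk, type 0xFD
echo "would run: echo '${SPEC}' | sfdisk ${DISK}"
```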

    So we will now just add the new partition to our RAID device:
    # mdadm --manage /dev/md0 --add /dev/sdd1
    mdadm: added /dev/sdd1
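    You can also spot the new spare straight away in /proc/mdstat, where spare members carry an "(S)" flag. A small parsing sketch; the mdstat text below is an illustrative sample modeled on this article's array, not live output:

```shell
# Sample /proc/mdstat content for this article's array with sdd1 as a spare.
mdstat='md0 : active raid1 sdd1[3](S) sdb1[2] sdc1[1]
      5233024 blocks super 1.2 [2/2] [UU]'
# Pull out any member flagged as a spare with "(S)".
spare=$(echo "$mdstat" | grep -o '[a-z0-9]*\[[0-9]*\](S)')
echo "$spare"
```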


    Once added, let us look at the details of the RAID device:
    # mdadm --detail /dev/md0
    /dev/md0:
            Version : 1.2
      Creation Time : Fri Sep 19 23:02:52 2014
         Raid Level : raid1
         Array Size : 5233024 (4.99 GiB 5.36 GB)
      Used Dev Size : 5233024 (4.99 GiB 5.36 GB)
       Raid Devices : 2
      Total Devices : 3
        Persistence : Superblock is persistent

        Update Time : Sat Sep 20 02:06:39 2014
              State : clean
     Active Devices : 2
    Working Devices : 3
     Failed Devices : 0
      Spare Devices : 1

               Name : test2.example:0  (local to host test2.example)
               UUID : 5a463788:9bf2659a:09d1c73a:9adcbbbd
             Events : 76

        Number   Major   Minor   RaidDevice State
           2       8       17        0      active sync   /dev/sdb1
           1       8       33        1      active sync   /dev/sdc1

           3       8       49        -      spare   /dev/sdd1

    So, as the output shows, there are a total of 3 devices, of which 2 are active RAID members while one is kept as a spare.

    Our work is not yet done, as this article is all about growing the RAID array, so let's do that by increasing the raid-devices count to 3:
    # mdadm --grow --raid-devices=3 /dev/md0
    raid_disks for /dev/md0 set to 3

    Once done, run the below command to check the sync status:
    # cat /proc/mdstat
    Personalities : [raid1]
    md0 : active raid1 sdd1[3] sdb1[2] sdc1[1]
          5233024 blocks super 1.2 [3/2] [UU_]
          [======>..............]  recovery = 34.3% (1800064/5233024) finish=0.2min speed=225008K/sec

    unused devices: <none>
    As you can see, a recovery process has started and the 3rd device now appears in the array. Strictly speaking this is not a recovery from a failure; mdadm is mirroring the existing data onto the newly added member.
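    If you want to track that progress from a script rather than eyeballing /proc/mdstat, the percentage can be pulled out with sed. A minimal sketch, using the recovery line shown above as sample input:

```shell
# Sample recovery line copied from the /proc/mdstat output above.
line='      [======>..............]  recovery = 34.3% (1800064/5233024) finish=0.2min speed=225008K/sec'
# Extract just the percentage figure.
pct=$(echo "$line" | sed -n 's/.*recovery = \([0-9.]*\)%.*/\1/p')
echo "$pct"
```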

    You can see the same in detail using the below command:
    # mdadm --detail /dev/md0
    /dev/md0:
            Version : 1.2
      Creation Time : Fri Sep 19 23:02:52 2014
         Raid Level : raid1
         Array Size : 5233024 (4.99 GiB 5.36 GB)
      Used Dev Size : 5233024 (4.99 GiB 5.36 GB)
       Raid Devices : 3
      Total Devices : 3
        Persistence : Superblock is persistent

        Update Time : Sat Sep 20 02:09:22 2014
              State : clean, degraded, recovering
     Active Devices : 2
    Working Devices : 3
     Failed Devices : 0
      Spare Devices : 1

     Rebuild Status : 53% complete

               Name : test2.example:0  (local to host test2.example)
               UUID : 5a463788:9bf2659a:09d1c73a:9adcbbbd
             Events : 88

        Number   Major   Minor   RaidDevice State
           2       8       17        0      active sync   /dev/sdb1
           1       8       33        1      active sync   /dev/sdc1
           3       8       49        2      spare rebuilding   /dev/sdd1

    Once the recovery process is completed, you should see 3 active devices in the RAID array:
    # mdadm --detail /dev/md0
    /dev/md0:
            Version : 1.2
      Creation Time : Fri Sep 19 23:02:52 2014
         Raid Level : raid1
         Array Size : 5233024 (4.99 GiB 5.36 GB)
      Used Dev Size : 5233024 (4.99 GiB 5.36 GB)
       Raid Devices : 3
      Total Devices : 3
        Persistence : Superblock is persistent

        Update Time : Sat Sep 20 02:09:35 2014
              State : clean
     Active Devices : 3
    Working Devices : 3
     Failed Devices : 0
      Spare Devices : 0

               Name : test2.example:0  (local to host test2.example)
               UUID : 5a463788:9bf2659a:09d1c73a:9adcbbbd
             Events : 99

        Number   Major   Minor   RaidDevice State
           2       8       17        0      active sync   /dev/sdb1
           1       8       33        1      active sync   /dev/sdc1
           3       8       49        2      active sync   /dev/sdd1
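    Rather than re-running --detail by hand, `mdadm --wait /dev/md0` blocks until the resync finishes (it needs root, so it is only mentioned here). For scripting, the device counts are also easy to lift out of the --detail text; a sketch using sample lines modeled on the output above:

```shell
# Sample lines modeled on the mdadm --detail output above.
detail='   Raid Devices : 3
 Active Devices : 3
Working Devices : 3'
# Extract the number of active devices.
active=$(echo "$detail" | sed -n 's/.*Active Devices : \([0-9]*\).*/\1/p')
echo "$active"
```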

    What would happen if one of the hard disks crashes?

    Let us try it out and see for ourselves.

    Now I will manually mark /dev/sdb1 as failed:
    # mdadm --fail /dev/md0 /dev/sdb1
    mdadm: set /dev/sdb1 faulty in /dev/md0

    Check the detailed output once again
    # mdadm --detail /dev/md0
    /dev/md0:
            Version : 1.2
      Creation Time : Fri Sep 19 23:02:52 2014
         Raid Level : raid1
         Array Size : 5233024 (4.99 GiB 5.36 GB)
      Used Dev Size : 5233024 (4.99 GiB 5.36 GB)
       Raid Devices : 3
      Total Devices : 3
        Persistence : Superblock is persistent

        Update Time : Sat Sep 20 02:13:31 2014
              State : clean, degraded
     Active Devices : 2
    Working Devices : 2
     Failed Devices : 1
      Spare Devices : 0

               Name : test2.example:0  (local to host test2.example)
               UUID : 5a463788:9bf2659a:09d1c73a:9adcbbbd
             Events : 101

        Number   Major   Minor   RaidDevice State
           0       0        0        0      removed
           1       8       33        1      active sync   /dev/sdc1
           3       8       49        2      active sync   /dev/sdd1

           2       8       17        -      faulty   /dev/sdb1

    Now since we have one faulty device, we can just remove it, as it is of no use:
    # mdadm --remove /dev/md0 /dev/sdb1
    mdadm: hot removed /dev/sdb1 from /dev/md0
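    As a side note, mdadm also accepts several operations against one array in a single invocation, so the fail and remove steps can be combined. The command is only printed here, since it needs root and a live array; device names follow this article's setup:

```shell
# One-shot fail-and-remove of a member, shown as a string rather than run.
cmd='mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1'
echo "$cmd"
```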

    Resize the RAID array by updating the raid-devices count to the number of available active partitions, i.e. 2:
    # mdadm --grow --raid-devices=2 /dev/md0
    raid_disks for /dev/md0 set to 2

    Verify the raid device output
    # mdadm --detail /dev/md0
    /dev/md0:
            Version : 1.2
      Creation Time : Fri Sep 19 23:02:52 2014
         Raid Level : raid1
         Array Size : 5233024 (4.99 GiB 5.36 GB)
      Used Dev Size : 5233024 (4.99 GiB 5.36 GB)
       Raid Devices : 2
      Total Devices : 2
        Persistence : Superblock is persistent

        Update Time : Sat Sep 20 02:14:39 2014
              State : clean
     Active Devices : 2
    Working Devices : 2
     Failed Devices : 0
      Spare Devices : 0

               Name : test2.example:0  (local to host test2.example)
               UUID : 5a463788:9bf2659a:09d1c73a:9adcbbbd
             Events : 109

        Number   Major   Minor   RaidDevice State
           1       8       33        0      active sync   /dev/sdc1
           3       8       49        1      active sync   /dev/sdd1

    So we are back to 2 active RAID devices without losing any data.
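    One hedged follow-up worth doing after any reshape: refresh the array definition in mdadm's config file so the array assembles the same way at boot. The config path is an assumption that varies by distro (/etc/mdadm.conf on Red Hat-style systems, /etc/mdadm/mdadm.conf on Debian), and the command is only printed here since it needs root:

```shell
# Hypothetical sketch: persist the current array layout to the mdadm config.
CONF=/etc/mdadm.conf     # assumed Red Hat-style path; adjust for your distro
echo "would run: mdadm --detail --scan >> $CONF"
```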

    I hope the article was helpful.

