How to reduce LVM size in Linux step by step (online without reboot)

Do you have a node where one logical volume has a lot of unused free space while another partition you actively use is running out of space?

In such cases you do not need to add an extra disk to extend your volume group and logical volume. Instead, you can take some free space from the oversized logical volume and use it to extend the other partition, all in the same session without a reboot.

IMPORTANT NOTE: This article shows the steps to reduce a logical volume online, but it is recommended to perform them in single-user mode (runlevel 1), where your partitions will not be in use by any process. In any other runlevel there is a high risk that the partition will be busy, so you will not be able to perform these LVM operations.

Reducing a logical volume can result in losing the data stored on the partition, so perform these steps at your own risk.

In this article I will show you the steps to shrink a logical volume and use the freed space to extend another partition.

These steps were validated on Red Hat Enterprise Linux 7.

Below is my sample setup: I have almost 90 GB free in the /opt/sdf/backup partition while my /tmp partition is quite small, so let's take 10 GB from /opt/sdf/backup and add it to /tmp.
# df -h
Filesystem                    Size  Used Avail Use% Mounted on
/dev/mapper/system-root       2.0G  1.4G  453M  76% /
devtmpfs                      3.8G     0  3.8G   0% /dev
tmpfs                         3.8G     0  3.8G   0% /dev/shm
tmpfs                         3.8G  8.9M  3.8G   1% /run
tmpfs                         3.8G     0  3.8G   0% /sys/fs/cgroup
/dev/sda1                     120M   92M   20M  83% /boot
/dev/mapper/system-tmp        2.0G  6.8M  1.8G   1% /tmp
/dev/mapper/system-opt        2.0G  220M  1.6G  12% /opt
/dev/mapper/system-sdf        5.8G   66M  5.5G   2% /opt/sdf
tmpfs                         1.0M  4.0K 1020K   1% /opt/sdf/queues
/dev/mapper/system-var        2.0G   63M  1.8G   4% /var
/dev/mapper/system-sdfbackup   97G   63M   92G   1% /opt/sdf/backup

Make sure the partition you are planning to shrink is not being used by any process. This can be verified using the command below:
# lsof /opt/sdf/backup
bash    18978 root  cwd    DIR  253,2     4096    2 /opt/sdf/backup
If you get any output like the above, the partition is in use by some process and you cannot continue with the LVM shrinking. Make sure all processes using this partition are closed before starting.

You must get blank output from this command, as below:
# lsof /opt/sdf/backup
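The two outcomes above can be folded into a small pre-flight guard. This is only a sketch: `ensure_idle` is a hypothetical helper name, and in real use you would pass it `lsof /opt/sdf/backup`. Here `true`, which prints nothing, stands in for the idle case so the sketch runs anywhere.

```shell
# Hypothetical guard: run the given busy-check command and proceed only if it
# prints nothing (i.e. the mount point is idle).
ensure_idle() {
    if [ -n "$("$@" 2>/dev/null)" ]; then
        echo "mount point is busy, aborting" >&2
        return 1
    fi
    echo "mount point is idle, safe to umount"
}

# `true` produces no output, standing in for `lsof /opt/sdf/backup` on an
# idle mount; on a busy mount lsof prints lines and the guard aborts.
ensure_idle true
```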
Next, unmount the partition:
# umount /opt/sdf/backup
Run a filesystem check to make sure your disk is healthy and will survive the LVM operations; we obviously do not want the disk to get corrupted mid-way, leaving us with no options and risking our valuable data.
# e2fsck -f /dev/mapper/system-sdfbackup
e2fsck 1.42.9 (28-Dec-2013)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
/dev/mapper/system-sdfbackup: 60/6414336 files (0.0% non-contiguous), 451402/25649152 blocks
If you get any warning or error in the above step, it is strongly recommended to investigate your disk's health status and not perform the LVM reduction.

Next, shrink the filesystem to the size you want the logical volume to be. For example, I want my partition to become 80G after the reduction, so I am freeing up almost 18 GB from the existing volume:
# resize2fs /dev/mapper/system-sdfbackup 80G
resize2fs 1.42.9 (28-Dec-2013)
Resizing the filesystem on /dev/mapper/system-sdfbackup to 20971520 (4k) blocks.
The filesystem on /dev/mapper/system-sdfbackup is now 20971520 blocks long.
This completed successfully.
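As a quick sanity check on those numbers (my arithmetic, not resize2fs output): 20971520 blocks of 4 KiB each is exactly 80 GiB.

```shell
# 20971520 blocks * 4096 bytes per block, expressed in GiB.
blocks=20971520
block_size=4096
echo "$((blocks * block_size / 1024 / 1024 / 1024)) GiB"   # prints: 80 GiB
```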
Next, perform lvreduce with the same size used above:
# lvreduce -L 80G /dev/mapper/system-sdfbackup
WARNING: Reducing active logical volume to 80.00 GiB.
  THIS MAY DESTROY YOUR DATA (filesystem etc.)

Do you really want to reduce system/sdfbackup? [y/n]: y
  Size of logical volume system/sdfbackup changed from 97.84 GiB (3131 extents) to 80.00 GiB (2560 extents).
  Logical volume system/sdfbackup successfully resized.
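The extent counts in that output also check out. From the figures shown, the volume group's physical extent size works out to 32 MiB (an assumption inferred from 97.84 GiB / 3131 extents; check yours with `vgdisplay`), and 2560 extents of 32 MiB is exactly 80 GiB:

```shell
extent_mib=32   # assumed PE size, inferred from the lvreduce output above
extents=2560
echo "$((extents * extent_mib / 1024)) GiB"   # prints: 80 GiB
```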
We are done here and ready to mount our partition again:
# mount /dev/mapper/system-sdfbackup /opt/sdf/backup
Validate the new size
# df -h /dev/mapper/system-sdfbackup
Filesystem                    Size  Used Avail Use% Mounted on
/dev/mapper/system-sdfbackup   79G   59M   75G   1% /opt/sdf/backup
Time to extend our /tmp partition with the available space.
NOTE: Keep some buffer of free space relative to the size you reduced, as there might not be enough free extents available to allocate the full amount.

Here I will extend my /tmp partition by 10 GB:
# lvextend -L +10G /dev/mapper/system-tmp
  Size of logical volume system/tmp changed from 2.00 GiB (64 extents) to 12.00 GiB (384 extents).
  Logical volume system/tmp successfully resized.
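The same extent arithmetic explains the jump from 64 to 384 extents: at the assumed 32 MiB extent size, 10 GiB corresponds to 320 extents.

```shell
extent_mib=32                       # assumed PE size, as inferred earlier
added=$((10 * 1024 / extent_mib))   # extents needed for 10 GiB
echo "$((64 + added)) extents"      # prints: 384 extents
```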

Now resize the filesystem to pick up the change we made above:
# resize2fs /dev/mapper/system-tmp
resize2fs 1.42.9 (28-Dec-2013)
Filesystem at /dev/mapper/system-tmp is mounted on /tmp; on-line resizing required
old_desc_blocks = 1, new_desc_blocks = 2
The filesystem on /dev/mapper/system-tmp is now 3145728 blocks long.
Check the exit status of our last command; 0 means it succeeded:
# echo $?
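A zero exit status means the previous command succeeded; anything non-zero signals a failure. A quick illustration:

```shell
true;  echo $?   # succeeds, prints 0
false; echo $?   # fails, prints 1
```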
The magic is done, and we can see that our /tmp partition size is now 12 GB:
# df -h /dev/mapper/system-tmp
Filesystem              Size  Used Avail Use% Mounted on
/dev/mapper/system-tmp   12G   11M   12G   1% /tmp
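The whole procedure can be summarized in one sketch. To keep it safe to read and run anywhere, `run` below is a hypothetical dry-run helper that only prints each command; replace its body with `"$@"` to actually execute the sequence, and adjust the device and mount names to your setup.

```shell
# Dry run of the full shrink-and-extend sequence from this article.
run() { echo "+ $*"; }   # swap the body for: "$@"  to execute for real

run umount /opt/sdf/backup
run e2fsck -f /dev/mapper/system-sdfbackup
run resize2fs /dev/mapper/system-sdfbackup 80G
run lvreduce -L 80G /dev/mapper/system-sdfbackup
run mount /dev/mapper/system-sdfbackup /opt/sdf/backup
run lvextend -L +10G /dev/mapper/system-tmp
run resize2fs /dev/mapper/system-tmp
```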

I hope the article was useful.

Related Articles:
Understanding LVM snapshots (create, merge, remove, extend)
How to rename Logical Volume and Volume Group in Linux
How to extend/resize Logical Volume and Volume Group in Linux
How to remove logical and physical volume from Volume Group in Linux