Azure: Reduce a Linux VM data disk size
Have you ever needed to decrease the size of a data disk in Azure? I have, for a variety of reasons, including cost optimization. To my surprise, reducing the size of a disk in Azure is harder than expanding it.
When you think about it, this makes sense: shrinking a disk carries a risk of data loss, and no cloud provider wants to take on that responsibility for its users. But sometimes it has to be done, and then it falls to us professionals (sysadmins, SREs, DevOps engineers…). So I decided to write down my approach to solving the problem.
In this article, I’m going to show you how to shrink the size of a data disk attached to an Azure Linux virtual machine.
Here’s the plan:
- Identify the disk to reduce
- Attach a new disk (with desired size)
- Copy data from the old disk to the new one
- Unmount both disks
- Replace the “old” drive with the new one
- Delete the “old” disk
Let’s get started!
Note: The method shown below uses a CentOS server but should work for other Linux distributions as well.
Identify the disk
Let’s connect to the Linux VM terminal and list the disks attached to it using the lsblk command:
[root@azure-vm ~]# sudo lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 4T 0 disk
└─sda1 8:1 0 4T 0 part /datadrive
sdb 8:16 0 64G 0 disk
├─sdb1 8:17 0 512M 0 part /boot/efi
├─sdb2 8:18 0 1G 0 part /boot
└─sdb3 8:19 0 62.5G 0 part /
sdc 8:32 0 64G 0 disk
└─sdc1 8:33 0 64G 0 part /mnt
As you can see in the example above, I have a 4TB data disk (sda). I want to decrease its size to 1TB.
Attach a new disk
I will attach a new disk of 1TB size to the VM from the Azure portal. Once the new disk is attached, I will list again all the disks:
[root@azure-vm ~]# sudo lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 4T 0 disk
└─sda1 8:1 0 4T 0 part /datadrive
sdb 8:16 0 64G 0 disk
├─sdb1 8:17 0 512M 0 part /boot/efi
├─sdb2 8:18 0 1G 0 part /boot
└─sdb3 8:19 0 62.5G 0 part /
sdc 8:32 0 64G 0 disk
└─sdc1 8:33 0 64G 0 part /mnt
sdd 8:48 0 1T 0 disk
From the output of lsblk, you can see that sdd is the newly added disk.
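I attached the disk through the portal, but the same step can be scripted with the Azure CLI. Here is a minimal sketch, assuming hypothetical resource names (my-rg, azure-vm, datadisk-1tb) that you would replace with your own:

```shell
# Hypothetical names -- replace with your own resource group, VM and disk name.
RESOURCE_GROUP="my-rg"
VM_NAME="azure-vm"
DISK_NAME="datadisk-1tb"
DISK_SIZE_GB=1024

# --new creates the managed disk and attaches it to the VM in one call.
if command -v az >/dev/null 2>&1; then
  az vm disk attach \
    --resource-group "$RESOURCE_GROUP" \
    --vm-name "$VM_NAME" \
    --name "$DISK_NAME" \
    --new \
    --size-gb "$DISK_SIZE_GB"
else
  echo "az CLI not installed; would attach $DISK_NAME (${DISK_SIZE_GB} GB) to $VM_NAME"
fi
```

Scripting the attach is handy when you have to repeat this operation on several VMs.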
I will partition the new disk (replace sdd with the name of your disk in the commands below):
[root@azure-vm ~]# sudo parted /dev/sdd --script mklabel gpt mkpart xfspart xfs 0% 100%
[root@azure-vm ~]# sudo mkfs.xfs /dev/sdd1
[root@azure-vm ~]# sudo partprobe /dev/sdd1
Then mount the new disk partition on the /datadrive2 mount point:
[root@azure-vm ~]# sudo mkdir /datadrive2
[root@azure-vm ~]# sudo mount /dev/sdd1 /datadrive2
Let’s verify again:
[root@azure-vm ~]# sudo lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 4T 0 disk
└─sda1 8:1 0 4T 0 part /datadrive
sdb 8:16 0 64G 0 disk
├─sdb1 8:17 0 512M 0 part /boot/efi
├─sdb2 8:18 0 1G 0 part /boot
└─sdb3 8:19 0 62.5G 0 part /
sdc 8:32 0 64G 0 disk
└─sdc1 8:33 0 64G 0 part /mnt
sdd 8:48 0 1T 0 disk
└─sdd1 8:49 0 1024G 0 part /datadrive2
Copy data from the old disk to the new one
We now need to copy all data from the old disk (sda) to the new one (sdd). To prevent any risk of data corruption, we first have to stop and disable all services that might write to the old disk. In my case, a database (PostgreSQL) and a web server (Apache) are running, so:
[root@azure-vm ~]# sudo systemctl stop httpd
[root@azure-vm ~]# sudo systemctl disable httpd
[root@azure-vm ~]# sudo systemctl stop postgresql-12.service
[root@azure-vm ~]# sudo systemctl disable postgresql-12.service
Then I will copy the files and subdirectories from /datadrive to /datadrive2 (the respective mount points of sda1 and sdd1) with rsync:
Note: In the command below, the trailing slash (/) appended to the source directory (/datadrive/) is important: it tells rsync to copy the content of the source into the destination instead of re-creating the top-level directory inside it.
[root@azure-vm ~]# sudo nohup rsync -az /datadrive/ /datadrive2 &
This operation might take some time based on your data size and your disk’s I/O rate.
Note: nohup lets a process, command, or shell script keep running in the background even after you close the terminal session. In our example, we also added ‘&’ at the end, which sends the process to the background.
I can see the process running in the background with the sudo ps -aux | grep rsync command:
[root@azure-vm ~]# sudo ps -aux | grep rsync
root 2480460 99.4 0.0 248444 4456 pts/0 R 12:13 0:41 rsync -az /datadrive/ /datadrive2
root 2480461 0.0 0.0 246796 2944 pts/0 S 12:13 0:00 rsync -az /datadrive/ /datadrive2
root 2480462 11.9 0.0 248068 1796 pts/0 S 12:13 0:05 rsync -az /datadrive/ /datadrive2
root 2480580 0.0 0.0 221936 1084 pts/0 R+ 12:14 0:00 grep --color=auto rsync
When the copy is done, the same command gives an output similar to the one below:
[root@azure-vm ~]# sudo ps -aux | grep rsync
root 2483357 0.0 0.0 221936 1200 pts/2 S+ 12:35 0:00 grep --color=auto rsync
You can see that there is no longer an rsync process running. It took me about an hour to copy 164GB from the old disk to the new one. To check data consistency between the disks (this step is optional), we can use the du command to see whether both mount points report the same size:
[root@azure-vm ~]# sudo du /datadrive2 -sh
164G /datadrive2
[root@azure-vm ~]# sudo du /datadrive -sh
164G /datadrive
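du only compares sizes. For a stronger (still optional) consistency check, rsync itself can do a checksum-based dry run: if it prints no differences, the trees match. A sketch on temporary directories (in practice you would point it at /datadrive/ and /datadrive2):

```shell
# Two identical trees stand in for the real mount points in this sketch.
SRC=$(mktemp -d)
DST=$(mktemp -d)
echo "data" > "$SRC/a.txt"
cp "$SRC/a.txt" "$DST/a.txt"

# -r recurse, -n dry run, -c compare file checksums, -i itemize differences.
DIFFS=$(rsync -rnci "$SRC/" "$DST")
if [ -z "$DIFFS" ]; then
  echo "trees match"
else
  echo "differences found:"
  echo "$DIFFS"
fi
```

Checksumming reads every file on both sides, so expect this to take a while on a large data set.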
Unmount both disks
Let’s unmount the drives now.
Note: the command is umount, not “unmount”; there is no unmount command on Linux.
- New drive:
[root@azure-vm ~]# sudo umount /dev/sdd1
- Old drive:
[root@azure-vm ~]# sudo umount /dev/sda1
umount: /datadrive: target is busy.
While unmounting the old drive, I got the target is busy error above. That means some processes are still using the drive, and I need to identify them. I will use the lsof utility for that. The package isn’t installed by default on my Linux distribution, so I have to install it first (you can skip this part if you didn’t get the target is busy error):
[root@azure-vm ~]# sudo dnf install lsof -y
Now let’s list the open files:
[root@azure-vm ~]# sudo lsof /dev/sda1
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
php-fpm 1113 apache cwd DIR 8,1 149 1073742089 /datadrive/aboki/public
php-fpm 1114 apache cwd DIR 8,1 149 1073742089 /datadrive/aboki/public
php-fpm 1115 apache cwd DIR 8,1 149 1073742089 /datadrive/aboki/public
php-fpm 56100 apache cwd DIR 8,1 149 1073742089 /datadrive/aboki/public
sudo 214997 root cwd DIR 8,1 4096 2149581218 /datadrive/aboki/framework/utils/system
sudo 214997 root 1w REG 8,1 75352 1074523444 /datadrive/aboki/storage/logs/installations/inst_151.log (deleted)
sudo 214997 root 2w REG 8,1 75352 1074523444 /datadrive/aboki/storage/logs/installations/inst_151.log (deleted)
java 214998 root cwd DIR 8,1 4096 2149581218 /datadrive/aboki/framework/utils/system
java 214998 root DEL REG 8,1 1074081817 /datadrive/aboki/app/api/lib/selenium-java.jar
java 214998 root DEL REG 8,1 1074081852 /datadrive/aboki/app/api/lib/selenium-firefox-driver.jar
java 214998 root DEL REG 8,1 1074254104 /datadrive/aboki/app/api/lib/selenium-chrome-driver.jar
java 214998 root DEL REG 8,1 1074081819 /datadrive/aboki/app/api/lib/selenium-api.jar
java 214998 root DEL REG 8,1 1074188415 /datadrive/aboki/app/api/lib/postgresql.jar
We can see that some Java, PHP, and other processes are still using files on the drive. Let’s stop them all:
[root@azure-vm ~]# sudo killall -9 java
[root@azure-vm ~]# sudo killall -9 php-fpm
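killall -9 is blunt: it kills every java and php-fpm process on the machine. A more targeted option, assuming lsof is installed, is to kill only the PIDs that actually hold files open on the busy mount point. A sketch (the kill itself is left commented out):

```shell
TARGET="/datadrive"   # the mount point reported as "target is busy"

if command -v lsof >/dev/null 2>&1; then
  # -t prints bare PIDs, one per line, suitable for piping to kill.
  PIDS=$(lsof -t "$TARGET" 2>/dev/null)
  echo "processes holding $TARGET open: ${PIDS:-none}"
  # echo "$PIDS" | xargs -r sudo kill   # uncomment to actually stop them
else
  echo "lsof not installed"
fi
```

This avoids taking down unrelated services that happen to run the same binaries.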
I can retry to unmount:
[root@azure-vm ~]# sudo umount /dev/sda1
Now it works!
Replace the old drive with the new one
Take a look at your fstab file. fstab (/etc/fstab), the file systems table, is a system configuration file on Linux. It lists the available disks and disk partitions and indicates how they are to be mounted into the overall file system. You can see my /etc/fstab below:
[root@azure-vm ~]# sudo cat /etc/fstab
#
# /etc/fstab
# Created by anaconda on Tue May 17 00:53:17 2022
#
# Accessible filesystems, by reference, are maintained under '/dev/disk/'.
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info.
#
# After editing this file, run 'systemctl daemon-reload' to update systemd
# units generated from this file.
#
UUID=411630eb-c6d0-4e09-9edf-d70f047202ea / xfs defaults 0 0
UUID=9b1a637d-99b7-4fac-8dd7-071bdf047087 /boot xfs defaults 0 0
UUID=1F91-F73A /boot/efi vfat defaults,uid=0,gid=0,umask=077,shortname=winnt 0 2
UUID=7789c449-18f8-4b76-a3c7-7d7a8bd299eb /datadrive xfs defaults,nofail,discard 1 2
/dev/disk/cloud/azure_resource-part1 /mnt auto defaults,nofail,x-systemd.requires=cloud-init.service,comment=cloudconfig 0 2
In the example above, the line UUID=7789c449-18f8-4b76-a3c7-7d7a8bd299eb /datadrive xfs defaults,nofail,discard 1 2 designates the old drive. We need to comment out that line in the /etc/fstab file. You can use your favorite text editor; mine is nano, so:
[root@azure-vm ~]# sudo nano /etc/fstab
Edit and save the file. It should look like this:
#
# /etc/fstab
# Created by anaconda on Tue May 17 00:53:17 2022
#
# Accessible filesystems, by reference, are maintained under '/dev/disk/'.
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info.
#
# After editing this file, run 'systemctl daemon-reload' to update systemd
# units generated from this file.
#
UUID=411630eb-c6d0-4e09-9edf-d70f047202ea / xfs defaults 0 0
UUID=9b1a637d-99b7-4fac-8dd7-071bdf047087 /boot xfs defaults 0 0
UUID=1F91-F73A /boot/efi vfat defaults,uid=0,gid=0,umask=077,shortname=winnt 0 2
#UUID=7789c449-18f8-4b76-a3c7-7d7a8bd299eb /datadrive xfs defaults,nofail,discard 1 2
/dev/disk/cloud/azure_resource-part1 /mnt auto defaults,nofail,x-systemd.requires=cloud-init.service,comment=cloudconfig 0 2
Reload systemd to apply the changes:
[root@azure-vm ~]# sudo systemctl daemon-reload
Mount the new disk to the mount point of the old disk. For me, it’s /datadrive:
[root@azure-vm ~]# sudo mount /dev/sdd1 /datadrive
We see that sda1 (the old disk partition) was initially mounted on /datadrive, but now it’s sdd1 (the new disk partition):
[root@azure-vm ~]# sudo lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 4T 0 disk
└─sda1 8:1 0 4T 0 part
sdb 8:16 0 64G 0 disk
├─sdb1 8:17 0 512M 0 part /boot/efi
├─sdb2 8:18 0 1G 0 part /boot
└─sdb3 8:19 0 62.5G 0 part /
sdc 8:32 0 64G 0 disk
└─sdc1 8:33 0 64G 0 part /mnt
sdd 8:48 0 1T 0 disk
└─sdd1 8:49 0 1024G 0 part /datadrive
We need to make that change persistent across reboots. Let’s find the Universally Unique Identifier (UUID) of the new disk partition. UUIDs are generated by the make-filesystem utilities (mkfs.*) when you create a filesystem. The blkid command shows the UUIDs of devices and partitions:
[root@azure-vm ~]# sudo blkid
/dev/sdc1: UUID="3a547a6d-7db4-4e3a-a541-e1696dc01a80" BLOCK_SIZE="4096" TYPE="ext4" PARTUUID="219144cf-01"
/dev/sda1: UUID="7789c449-18f8-4b76-a3c7-7d7a8bd299eb" BLOCK_SIZE="4096" TYPE="xfs" PARTLABEL="xfspart" PARTUUID="dc249cb1-84ee-4e3b-91ae-11cf0fe71344"
/dev/sdb1: UUID="1F91-F73A" BLOCK_SIZE="512" TYPE="vfat" PARTLABEL="EFI System Partition" PARTUUID="2e27f8c2-ebe7-4e56-ba65-41a7a8905092"
/dev/sdb2: UUID="9b1a637d-99b7-4fac-8dd7-071bdf047087" BLOCK_SIZE="512" TYPE="xfs" PARTUUID="c5526e62-0b2f-4a24-93a8-c3b2f3f80b48"
/dev/sdb3: UUID="411630eb-c6d0-4e09-9edf-d70f047202ea" BLOCK_SIZE="512" TYPE="xfs" PARTUUID="0a23997e-e126-4bfe-8ffe-cf4f4a963297"
/dev/sdd1: UUID="813a54b4-dfd7-4581-b19d-3c3c679378dd" BLOCK_SIZE="4096" TYPE="xfs" PARTLABEL="xfspart" PARTUUID="2adbfa02-0b20-45ec-b6ac-2eafaa6a096b"
From the blkid output, we can see that the UUID we’re looking for is 813a54b4-dfd7-4581-b19d-3c3c679378dd. In the fstab file, we uncomment the line we commented out previously and replace the UUID on that line with this new value. The final version of the fstab file is as follows:
#
# /etc/fstab
# Created by anaconda on Tue May 17 00:53:17 2022
#
# Accessible filesystems, by reference, are maintained under '/dev/disk/'.
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info.
#
# After editing this file, run 'systemctl daemon-reload' to update systemd
# units generated from this file.
#
UUID=411630eb-c6d0-4e09-9edf-d70f047202ea / xfs defaults 0 0
UUID=9b1a637d-99b7-4fac-8dd7-071bdf047087 /boot xfs defaults 0 0
UUID=1F91-F73A /boot/efi vfat defaults,uid=0,gid=0,umask=077,shortname=winnt 0 2
UUID=813a54b4-dfd7-4581-b19d-3c3c679378dd /datadrive xfs defaults,nofail,discard 1 2
/dev/disk/cloud/azure_resource-part1 /mnt auto defaults,nofail,x-systemd.requires=cloud-init.service,comment=cloudconfig 0 2
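The comment/uncomment-and-replace edit can also be scripted instead of done in nano. Here is a sketch that works on a temporary copy of the relevant fstab line, using the UUIDs from the example above; on a real system you would point sed at /etc/fstab, and you could fetch the new UUID with sudo blkid -s UUID -o value /dev/sdd1:

```shell
OLD_UUID="7789c449-18f8-4b76-a3c7-7d7a8bd299eb"   # old disk, commented out earlier
NEW_UUID="813a54b4-dfd7-4581-b19d-3c3c679378dd"   # new disk, taken from blkid

# Work on a temporary copy of the commented-out fstab line for this sketch.
FSTAB_COPY=$(mktemp)
printf '#UUID=%s /datadrive xfs defaults,nofail,discard 1 2\n' "$OLD_UUID" > "$FSTAB_COPY"

# Uncomment the line and swap in the new UUID in a single pass.
sed -i "s|^#UUID=$OLD_UUID|UUID=$NEW_UUID|" "$FSTAB_COPY"
cat "$FSTAB_COPY"
```

Scripting the edit reduces the risk of a typo in the UUID, which would leave the mount broken on the next reboot.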
I will reload systemd to apply the changes:
[root@azure-vm ~]# sudo systemctl daemon-reload
Then I can re-enable and start all the services that were disabled previously:
[root@azure-vm ~]# sudo systemctl enable httpd
[root@azure-vm ~]# sudo systemctl start httpd
[root@azure-vm ~]# sudo systemctl enable postgresql-12.service
[root@azure-vm ~]# sudo systemctl start postgresql-12.service
Everything should run smoothly on the system. I can now delete the old disk from the Azure portal and go sip a drink. I think I deserve it!
Thank you for reading this article all the way to the end! I hope you found the information and insights shared here valuable and interesting. Get in touch with me on LinkedIn.
I appreciate your support and look forward to sharing more content with you in the future. Until next time!