While preparing for an upcoming talk at a community event I dusted off some recycled Terraform code. In a nutshell, it creates a virtual machine running Oracle Linux 9 in Oracle Cloud Infrastructure (OCI); I fully intend to run Oracle Database 19.28.0 on that VM later.
Rather than sticking with the default boot volume size I asked for a 250 GB boot volume. This created an interesting situation: looking at the boot volume, I could see a lot of unused space:
$ lsblk
NAME               MAJ:MIN RM  SIZE RO TYPE MOUNTPOINTS
sda                  8:0    0  250G  0 disk
├─sda1               8:1    0  100M  0 part /boot/efi
├─sda2               8:2    0    2G  0 part /boot
└─sda3               8:3    0 44.5G  0 part
  ├─ocivolume-root 252:0    0 29.5G  0 lvm  /
  └─ocivolume-oled 252:1    0   15G  0 lvm  /var/oled
So out of the 250 GB total, only about 47 GB are partitioned. This is directly reflected in the size of the root file system, too:
$ df -h /
Filesystem                  Size  Used Avail Use% Mounted on
/dev/mapper/ocivolume-root   30G   12G   19G  38% /
So what if I want to use the remaining space? I need to:
- Create a new partition (probably /dev/sda4)
- Add this partition to the ocivolume volume group
- Extend the ocivolume-root logical volume, resizing the file system at the same time
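For reference, done by hand these three steps translate roughly into the commands below. This is a sketch only: the partition boundary is illustrative, the commands modify the disk, and they are only meant to show what the playbook later automates.

```shell
# inspect the current layout first; note where partition 3 ends
sudo parted --script /dev/sda unit MiB print free

# 1) create a fourth partition from the free space (replace 47718 with the
#    actual end of sda3 reported above; the value here is made up)
sudo parted --script /dev/sda mkpart primary 47718MiB 100%
sudo parted --script /dev/sda set 4 lvm on

# 2) turn the new partition into a physical volume and add it to the VG
sudo pvcreate /dev/sda4
sudo vgextend ocivolume /dev/sda4

# 3) grow the root LV and its XFS file system in one step
sudo lvextend --resizefs -l +100%FREE ocivolume/root
```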
I’m going to repeat the process of creating the VM a few times, so let’s use Ansible to automate the toil. If you haven’t come across Ansible yet, suffice it to say that it’s a very capable, agentless configuration management tool. It reads so-called playbooks, written in YAML, to execute tasks against hosts.
Warning: Resizing the root volume group on Linux is a high-risk operation that can lead to system instability, data loss, and/or a non-bootable system if it goes wrong. The following Ansible playbook is suitable for the initial setup/configuration of a freshly created OCI VM only. Don’t use it once the VM is in use. In other words, should something go horribly wrong you should be in a position to throw the VM away and recreate it without any impact.
With that said, let’s look at the Ansible playbook first:
- name: Read device information
  community.general.parted:
    device: /dev/sda
    unit: MiB
  register: sda_data

- name: Extend the root volume group
  when: sda_data.partitions | length < 4
  block:
    - name: Determine the necessary offset
      ansible.builtin.set_fact:
        start_loc: "{{ (sda_data.partitions[2].end + 1) | int }}"

    - name: Add another physical partition for LVM
      community.general.parted:
        device: /dev/sda
        number: 4
        part_start: "{{ start_loc }}MiB"
        part_type: primary
        resize: true
        state: present
        label: gpt
        unit: MiB
        flags: [lvm]

    - name: Extend the volume group
      community.general.lvg:
        vg: ocivolume
        pvs:
          - /dev/sda3
          - /dev/sda4
        state: present

    - name: Grow the root LV
      community.general.lvol:
        vg: ocivolume
        lv: root
        size: 100%FREE
        resizefs: true
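One thing to note: the listing above is just the task list. To run it, the tasks need to be wrapped in a play; a minimal sketch (the inventory group and file name are my own inventions):

```yaml
# grow-root.yml: wrapper play for the tasks shown above
- name: Claim unused boot volume space on freshly created OCI VMs
  hosts: oci_vms        # illustrative inventory group
  become: true          # partitioning and LVM operations require root
  tasks:
    # ... the tasks from the listing above go here ...
```

The playbook can then be run with something like `ansible-playbook -i inventory grow-root.yml`. The parted, lvg and lvol modules are part of the community.general collection; install it with `ansible-galaxy collection install community.general` if it isn’t present.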
What happens here?
The first task uses the parted module to read the current partition table. It registers the result in a variable, sda_data, which contains the list of partitions, among other things.
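To give you an idea of what the registered variable looks like, here is an abridged, illustrative sketch of sda_data (the values are made up to match the lsblk output above; consult the parted module’s return value documentation for the authoritative structure):

```yaml
sda_data:
  disk:
    dev: /dev/sda
    table: gpt
    size: 256000.0        # MiB; illustrative value
  partitions:             # one entry per existing partition
    - num: 1
      begin: 1.0
      end: 101.0
      size: 100.0
      flags: [boot, esp]
    - num: 2
      begin: 101.0
      end: 2149.0
      size: 2048.0
      flags: []
    - num: 3
      begin: 2149.0
      end: 47718.0
      size: 45569.0
      flags: [lvm]
```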
Following the discovery operation a block of tasks is defined. These sub-tasks are only executed if there are fewer than 4 partitions on the disk; that is what the when condition attached to the block checks.
If a new partition has to be created, the offset (think: starting point) must be calculated. The fourth partition starts right after the end of the third one, which is what the set_fact task works out.
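The arithmetic itself is trivial; translated to shell, with a made-up end value for partition 3, it looks like this:

```shell
# hypothetical end of /dev/sda3 in MiB, as the parted fact would report it
part3_end=47718

# the new partition starts 1 MiB after the end of the third one,
# mirroring the set_fact expression in the playbook
start_loc=$(( part3_end + 1 ))

echo "part_start: ${start_loc}MiB"
```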
The rest of the playbook is rather straightforward. The parted module is used once more to create a new primary partition spanning all the remaining space. The partition’s lvm flag is set; this allows the new partition to be used as a physical volume and added to the volume group.
That’s exactly what happens next: the lvg task extends the ocivolume volume group with the new physical volume. And finally, the root logical volume is grown, too, along with the XFS file system on top of it. Note that size: 100%FREE grows the LV to the equivalent of the volume group’s free space; if you want to claim every last extent, +100%FREE is the LVM idiom for that.
After running the playbook a lot more space is available:
$ lsblk
NAME               MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
sda                  8:0    0   250G  0 disk
├─sda1               8:1    0   100M  0 part /boot/efi
├─sda2               8:2    0     2G  0 part /boot
├─sda3               8:3    0  44.5G  0 part
│ ├─ocivolume-root 252:0    0 203.4G  0 lvm  /
│ └─ocivolume-oled 252:1    0    15G  0 lvm  /var/oled
└─sda4               8:4    0 203.4G  0 part
  └─ocivolume-root 252:0    0 203.4G  0 lvm  /
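Besides lsblk, the usual LVM reporting commands offer a read-only way to double-check the result on the VM:

```shell
sudo pvs            # should now list both /dev/sda3 and /dev/sda4
sudo vgs ocivolume  # total and remaining free space in the volume group
sudo lvs ocivolume  # the new size of the root LV
df -h /             # the grown XFS file system
```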
Nice!