### 1. `kubectl exec` into the `cstor-pool-mgmt` container and install `parted`

Get the pool pod name using `kubectl get pods -n openebs` and exec into the container. After execing into the `cstor-pool-mgmt` container, install the `parted` tool using `apt-get install parted`.

```sh
$ kubectl exec -it cstor-pool-1fth-7fbbdfc747-sh25t -n openebs -c cstor-pool-mgmt bash
```

### 2. Run `zpool list` to get the pool name

```sh
$ zpool list
NAME                                         SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
cstor-5be1d388-60d3-11e9-8e67-42010aa00fcf  9.94G   220K  9.94G         -     0%     0%  1.00x  ONLINE  -
```

### 3. Set `autoexpand=on` on the zpool (it defaults to off)

```sh
$ zpool set autoexpand=on cstor-5be1d388-60d3-11e9-8e67-42010aa00fcf
```

### 4. Resize the disk used by the pool

Resize the underlying disk at the infrastructure level. If this has already been done, that's fine. (A hedged example for a GCE persistent disk is included at the end of this guide.)

### 5. Get the expanded device name that is in use by the pool using `fdisk -l`, and use `parted /dev/<device> print` to list the partition layout on the device. When prompted, type `Fix` to use the newly available space.

```sh
$ parted /dev/sdb print
Warning: Not all of the space available to /dev/sdb appears to be used, you can
fix the GPT to use all of the space (an extra 20971520 blocks) or continue with
the current setting?
Fix/Ignore? Fix
Model: Google PersistentDisk (scsi)
Disk /dev/sdb: 21.5GB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt
Disk Flags:

Number  Start   End     Size    File system  Name                  Flags
 1      1049kB  10.7GB  10.7GB  zfs          zfs-d97901ec3aa0fb69
 9      10.7GB  10.7GB  8389kB
```

### 6. Remove the buffer partition (partition 9)

```sh
$ parted /dev/sdb rm 9
```

### 7. Expand the partition holding the zpool

```sh
$ parted /dev/sdb resizepart 1 100%
sh: 1: udevadm: not found
sh: 1: udevadm: not found
Information: You may need to update /etc/fstab.
```

### 8. Check the partition size again using `parted /dev/<device> print`

```sh
$ parted /dev/sdb print
Model: Google PersistentDisk (scsi)
Disk /dev/sdb: 21.5GB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt
Disk Flags:

Number  Start   End     Size    File system  Name                  Flags
 1      1049kB  21.5GB  21.5GB  zfs          zfs-d97901ec3aa0fb69
```

### 9. The size has changed from 10GB to 20GB. Now tell the zpool to bring the expanded physical device online using the following command

Note: Replace the disk name below with the disk name obtained from the `zpool status` command.

```sh
$ zpool online -e cstor-5be1d388-60d3-11e9-8e67-42010aa00fcf /dev/disk/by-id/scsi-0Google_PersistentDisk_pdisk2
```

### 10. Restart the NDM pod scheduled on the same node as the pool so the updated size is reflected in the `Disk` custom resource

After the restart, make sure the NDM pod comes back to the `Running` state.
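A minimal sketch for step 10, assuming NDM is deployed as a DaemonSet in the `openebs` namespace so that deleting the pod triggers an automatic recreation; the pod name below is a placeholder, and the exact pod name on your cluster will differ:

```sh
# Find the NDM pod running on the same node as the pool pod
$ kubectl get pods -n openebs -o wide | grep ndm

# Delete that pod; the DaemonSet recreates it automatically
$ kubectl delete pod openebs-ndm-xxxxx -n openebs

# Verify the replacement pod reaches the Running state
$ kubectl get pods -n openebs -o wide | grep ndm
```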
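For reference, step 4 (resizing the disk used by the pool) is done outside the cluster. If the pool disk is a GCE persistent disk, a minimal sketch, assuming a disk named `pdisk2` in zone `us-central1-a` (both are placeholders; use the disk and zone backing your pool):

```sh
# Grow the persistent disk to the new desired size
$ gcloud compute disks resize pdisk2 --size 20GB --zone us-central1-a
```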