
Repartitioning Proxmox root/data LVM and ZFS pool recovery


I set up my first Proxmox virtual environment about a month or two ago, and due to inexperience and miscalculation I ended up with a partition table I'm not really happy with. It has also come to my attention that I probably need a ZFS intent log (ZIL/SLOG) disk/partition, and unfortunately that partition will need to come out of my SSD.

Currently the Proxmox node is installed on a 500G SSD that is set up as LVM, plus five 4T HDDs in a raidz2 configuration.

Now, the main SSD (/dev/nvme0n1) has 3 partitions:

  • one (1007K-nvme0n1p1) for what I assume to be BIOS/UEFI
  • one (1G-nvme0n1p2) mounted on /boot/efi
  • and the rest (464.8G-nvme0n1p3) is an LVM

What I am trying to do is repartition the main SSD that contains Proxmox so that I have 5 partitions instead of 3. I want to keep the 3 original partitions but shrink the 3rd one, then create 2 additional partitions: one to be used as the ZFS intent log device, and another to be left either as unallocated space for future expansion (so I don't have to go through this process again) or as a standard data partition.
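For reference, once that partition exists, my understanding is that attaching it to the pool as a log device is a single zpool command, roughly like this (the pool name "tank" and the partition number 4 are placeholders, since I haven't shown my actual pool name here):

# attach the new SSD partition as a dedicated ZFS intent log (SLOG) device
# "tank" is a placeholder for my real pool name, nvme0n1p4 for the new partition
zpool add tank log /dev/nvme0n1p4
# verify the log vdev shows up
zpool status tank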
The issue is that this is the main drive the OS runs on, so the work would need to be done from a live environment. That's fine in itself, but there are a few things I'm worried about, so let me explain what I want to do, how I plan to do it, and what my hesitations are.

Let's start off by getting an understanding of my current setup.

'lsblk' output:

NAME                         MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
sda                            8:0    0   3.6T  0 disk
├─sda1                         8:1    0   3.6T  0 part
└─sda9                         8:9    0     8M  0 part
sdb                            8:16   0   3.6T  0 disk
├─sdb1                         8:17   0   3.6T  0 part
└─sdb9                         8:25   0     8M  0 part
sdc                            8:32   0   3.6T  0 disk
├─sdc1                         8:33   0   3.6T  0 part
└─sdc9                         8:41   0     8M  0 part
sdd                            8:48   0   3.6T  0 disk
├─sdd1                         8:49   0   3.6T  0 part
└─sdd9                         8:57   0     8M  0 part
sde                            8:64   0   3.6T  0 disk
├─sde1                         8:65   0   3.6T  0 part
└─sde9                         8:73   0     8M  0 part
zd16                         230:16   0    32G  0 disk
zd32                         230:32   0    32G  0 disk
nvme0n1                      259:0    0 465.8G  0 disk
├─nvme0n1p1                  259:1    0  1007K  0 part
├─nvme0n1p2                  259:2    0     1G  0 part /boot/efi
└─nvme0n1p3                  259:3    0 464.8G  0 part
  ├─pve-swap                 252:0    0     8G  0 lvm  [SWAP]
  ├─pve-root                 252:1    0    96G  0 lvm  /
  ├─pve-data_tmeta           252:2    0   3.4G  0 lvm
  │ └─pve-data-tpool         252:4    0 337.9G  0 lvm
  │   ├─pve-data             252:5    0 337.9G  1 lvm
  │   └─pve-vm--101--disk--0 252:6    0    32G  0 lvm
  └─pve-data_tdata           252:3    0 337.9G  0 lvm
    └─pve-data-tpool         252:4    0 337.9G  0 lvm
      ├─pve-data             252:5    0 337.9G  1 lvm
      └─pve-vm--101--disk--0 252:6    0    32G  0 lvm

'pvs' output

  PV             VG  Fmt  Attr PSize    PFree
  /dev/nvme0n1p3 pve lvm2 a--  <464.76g 16.00g

'vgs' output

  VG  #PV #LV #SN Attr   VSize    VFree
  pve   1   4   0 wz--n- <464.76g 16.00g

'lvs' output

  LV            VG  Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  data          pve twi-aotz-- 337.86g             1.43   0.54
  root          pve -wi-ao----  96.00g
  swap          pve -wi-ao----   8.00g
  vm-101-disk-0 pve Vwi-aotz--  32.00g data        15.12

Again, the goal is to repartition nvme0n1 into 5 partitions. Because the disk is currently fully used by the LVM physical volume, I first need to shrink the filesystems and logical volumes so that everything fits in a smaller 3rd partition, leaving space for the 2 new partitions.

I was going to do this by booting into a live environment that supports LVM and then resizing the logical volumes with the following (I'm shrinking the root volume too while I'm at it, since I don't need all that space there):

# resize root volume to 30G
# use -r to resize the filesystem within as well
# notify the command this is a thin pool using --type and make it loud so I know what's going on
lvresize -L30G /dev/mapper/pve-root -r -vvvv --type thin-pool
# do the same to the data volume, except this time to 230G
lvresize -L230G /dev/mapper/pve-data -r -vvvv --type thin-pool
# now reduce the physical volume size to 272G (8 swap + 30 root + 230 data + 3.4 meta)
pvresize --setphysicalvolumesize 272G /dev/nvme0n1p3
# now repartition the disk with fdisk
fdisk /dev/nvme0n1
# delete partition, partition number: d, 3
# create new partition, new partition number (same as deleted): n, 3
# accept default first sector, last sector: enter, +300G
# n = don't remove the LVM signature
# set partition type, partition number, type=Linux LVM: t, 3, 30
# new partition, +50G, type Solaris/ZFS: n, 4, enter, +50G, t, bf
# new partition, the rest, type Linux: n, 5, enter, enter
# write the changes
# full keystroke sequence:
# d - 3 - n - 3 - enter - +300G - n - t - 3 - 30 - n - 4 - enter - +50G - t - bf - n - 5 - enter - enter - t - 83 - w
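After rewriting the partition table, I was also thinking of letting the PV grow back to fill the new, larger 3rd partition and double-checking the layout before rebooting. A rough sketch of what I have in mind (still from the live environment; I'm assuming a plain pvresize with no size argument expands the PV to the current partition size):

# grow the PV back to fill the resized ~300G partition
pvresize /dev/nvme0n1p3
# sanity-check that the PV, VG and LVs still look right before rebooting
pvs
vgs
lvs -a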

Then I'd boot back into Proxmox. However, Proxmox doesn't know about these changes, so will it just freak out and maybe not even boot? Does this seem like the correct course of action, or is there anything I'm missing here, maybe something I could do differently/better?
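If the volumes don't come up cleanly after the reboot, my fallback plan would be to rescan and reactivate them from a rescue shell with the standard LVM tooling, something like this (just a sketch of what I'd try, not something I've tested):

# re-read the partition table and rescan for LVM physical volumes
partprobe /dev/nvme0n1
pvscan
# reactivate all logical volumes in the pve volume group
vgchange -ay pve
lvs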

Now let's assume I break everything and have to reinstall Proxmox on a freshly formatted SSD. Fine, such is life, but then I have to worry about all the data on my ZFS pool: will it stay intact? If so, is there anything I need to do to recreate/reimport it, or is everything (including the metadata) stored on the disks themselves, so I can just plug them in, have the pool recognized, and get all my data back as it was? And the VMs/data that lived on the ZFS pool: will they be recognized by Proxmox and added back as VMs, usable just like that, or will I need to do some sort of backup and re-import, or even worse, some kind of hack job to recreate them and then point their disks back to a location in the ZFS pool?
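My current (possibly wrong) understanding of the recovery path after a reinstall is roughly the following. The pool name "tank" and storage ID "tank-vms" are placeholders, and as far as I understand the VM config files live in /etc/pve rather than on the pool, so they would have to be recreated or restored separately:

# see which pools ZFS can find on the attached disks, then import the pool
zpool import
zpool import tank
# register the pool as a Proxmox storage again (placeholder names)
pvesm add zfspool tank-vms --pool tank
# after recreating the VM definitions, rescan so the existing disks
# show up as unused volumes on the matching VM IDs
qm rescan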

