Shrinking physical volumes in LVM on a Linux Guest in ESXi 5.0

  • The problem:

Linux guest (openSUSE 12.1) with multiple virtual disks attached.

    Three disks back a single logical volume; two of them are exactly 2TB.

      None of the disks are independent, and due to the backup software we use, they cannot be.

      When the two 2TB virtual disks are "dependent", the snapshot fails, stating that the file is too large for the datastore. When I put those two disks in independent mode, snapshots work fine (the third disk is 1.8TB).

      I have therefore concluded that shrinking the two virtual disks by even 100GB should solve the problem; however, I am having trouble conceptualizing how to make those disks smaller without breaking the LVM setup entirely.

      The actual LV has 1.3TB free, so there is plenty of space to shrink into.

      What I need to accomplish:

      Deallocate 100GB from each of the two 2TB virtual disks within the Linux guest.

      Shrink the two virtual disks by 100GB within vSphere (not as complicated).

      Are there any vSphere/LVM gurus who can give me a clue?

      Edit:

      Fixing formatting:

      Something like this (with <lv> standing in for the logical volume's name)?

              e2fsck -f /dev/VGroup1/<lv>
              resize2fs /dev/VGroup1/<lv> 5922108040K    (that is a 200GB shrink, in KB)
              lvreduce -L -209715200K /dev/VGroup1/<lv>    (note the minus sign: reduce BY 200GB)
              pvresize --setphysicalvolumesize 2042625023K /dev/sdb1    (and sdc1)

      Correct?

        Another thought occurred to me: to play it safe, maybe I should shrink the filesystem and LV by 25GB more than I plan to shrink the disks, to ensure that the physical volumes don't end up smaller than the filesystem.
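For what it's worth, the figures above can be sanity-checked with plain shell arithmetic (binary units, so 1GB = 1048576KB; nothing here is read from a live system):

```shell
# Sanity-check the KB figures quoted above (binary units).
echo $((200 * 1024 * 1024))                  # 200GB in KB -> 209715200
echo $((2 * 1024 * 1024 * 1024))             # 2TB in KB   -> 2147483648
echo $((2147483648 - 100 * 1024 * 1024))     # 2TB - 100GB -> 2042626048
```

The 2042625023K target quoted above is slightly smaller than 2TB minus 100GB, so the resized PV will still fit within a disk shrunk by exactly 100GB.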

      Answers

      • Since pvresize has no option to compact allocated extents toward the start of the device, do it yourself. From the pvmove man page:

          If the source and destination are on the same disk, the anywhere allocation policy would be needed, like this:

                  pvmove --alloc anywhere /dev/sdb1:1000-1999 /dev/sdb1:0-999
          

          You can check where the extents are with "lvdisplay --maps" and do a little math; you don't have to move your whole LVs to another disk.
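As a sketch of that math, assuming the common 4MiB extent size (verify yours with `vgdisplay`), this works out how many extents a 100GB shrink has to vacate from the tail of the PV; the pvmove itself is left commented out because it moves live data:

```shell
# Assumed: 4MiB physical extents (check with `vgdisplay | grep "PE Size"`).
extent_mib=4
shrink_gib=100
extents_to_free=$((shrink_gib * 1024 / extent_mib))
echo "$extents_to_free"    # extents that must be clear at the end of the PV -> 25600

# A 2TB PV has 524288 such extents, so the tail range is 498688-524287.
# Illustrative only -- read the real allocation map with `lvdisplay --maps`:
#   pvmove --alloc anywhere /dev/sdb1:498688-524287 /dev/sdb1:0-25599
```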

        • I removed the comment and added it to an edit on the original question, better formatting. – Stew Apr 15 '12 at 13:03

        • This did not go well; however, I am going to attempt it again when I can plan for a bit of downtime on that volume. Thanks for the answer, this is the route I am going to try, and worst case I can delete everything and recreate it from backup. Thanks! – Stew Apr 17 '12 at 7:02

        • Oh, it's multi-extent? Hmm, not a big fan of doing that, but yeah, I can see how that changes things. Still, at least you'll be able to follow that tool chain I mentioned. – Chopper3 Apr 15 '12 at 12:40


          • There is 4TB free on the datastore, so I don't think that is the issue. – Stew Apr 15 '12 at 12:38


          • If it's on the same disk, this old post documents exactly that, though you might want to read it fully before you try:

          • A thought: perhaps the simplest way to achieve this would be to add an additional 1.8TB disk, run pvmove on one of the 2TB disks, and, when all the data has been moved off that disk to the new one, remove it from the VG and the virtual machine. Rinse, repeat for the second disk. As a matter of fact I am going to try that today. – Stew Apr 15 '12 at 11:52


          • This isn't really a VMware issue; the problem with the 2TB VMDKs is that there's no space left on the datastore to commit a snapshot, and as you say, dropping the size of the VMDKs will allow that to work.

            Now obviously you can use the usual chain of umount, e2fsck, resize2fs, lvreduce and pvresize, then reduce the VMDK size within the vSphere Client. But here's another thought: if you have enough temporary space, you could just convert them to thin disks. There can be a write penalty for this, but it would mean you don't have to touch your guest filesystem at all.
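That chain, written out as a dry-run sketch (the LV path is a hypothetical name and the sizes are carried over from the question; `run` only echoes, so nothing below actually executes):

```shell
# Dry-run sketch of the in-guest shrink chain. Names are hypothetical.
run() { echo "+ $*"; }      # swap the body for `"$@"` to really execute

LV=/dev/VGroup1/data                      # hypothetical LV path
run umount "$LV"
run e2fsck -f "$LV"                       # fsck is mandatory before resize2fs
run resize2fs "$LV" 5922108040K           # shrink the filesystem first
run lvreduce -L -209715200K "$LV"         # then reduce the LV by 200GB
run pvresize --setphysicalvolumesize 2042625023K /dev/sdb1
run pvresize --setphysicalvolumesize 2042625023K /dev/sdc1
# finally, shrink the VMDKs from the vSphere Client
```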


          • Welcome to Server Fault! Generally we like answers on the site to be able to stand on their own - Links are great, but if that link ever breaks the answer should have enough information to still be helpful. Please consider editing your answer to include more detail. See the FAQ for more info. – slm Jul 25 '13 at 2:16







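For completeness, the evacuate-and-replace route Stew describes in the comments (add a smaller replacement disk, pvmove everything off one 2TB PV, then drop that PV) can be sketched like this. Device and group names are hypothetical, and `run` only echoes the commands:

```shell
# Evacuate-and-replace sketch (hypothetical names: VGroup1, old PV /dev/sdb1,
# new 1.8TB PV /dev/sdd1). `run` only echoes; use `"$@"` to really execute.
run() { echo "+ $*"; }

run pvcreate /dev/sdd1               # prepare the replacement disk
run vgextend VGroup1 /dev/sdd1       # add it to the volume group
run pvmove /dev/sdb1                 # migrate every extent off the old PV
run vgreduce VGroup1 /dev/sdb1       # detach the now-empty PV from the VG
run pvremove /dev/sdb1               # wipe the LVM label; disk can be removed
```

Repeat for the second 2TB disk once the first migration has finished.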