
    ProxMox Storage Configuration Question (idk how lol)

    IT Discussion
    • GUIn00bG
      GUIn00b
      last edited by

      I have a fresh install of Proxmox. I have 4x Seagate NAS (SATA) spindles wiped and ready to.... yeah that's where I'm at. I thought I was going to mdadm a RAID-10. Nope! AAaaaand Proxmox seems to have "ZOMG USE ZFS" all over their documentation, and I'm not interested in that if there are other options. I want to leverage LVM/LVM-thin, but I'm not really sure where to start here. Here's lsblk:

      root@pve:~# lsblk
      NAME               MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
      sda                  8:0    0   3.6T  0 disk 
      └─sda1               8:1    0   3.6T  0 part 
      sdb                  8:16   0   3.6T  0 disk 
      └─sdb1               8:17   0   3.6T  0 part 
      sdc                  8:32   0   3.6T  0 disk 
      └─sdc1               8:33   0   3.6T  0 part 
      sdd                  8:48   0   3.6T  0 disk 
      └─sdd1               8:49   0   3.6T  0 part 
      nvme0n1            259:0    0 238.5G  0 disk 
      ├─nvme0n1p1        259:1    0  1007K  0 part 
      ├─nvme0n1p2        259:2    0     1G  0 part /boot/efi
      └─nvme0n1p3        259:3    0 237.5G  0 part 
        ├─pve-swap       253:0    0     8G  0 lvm  [SWAP]
        ├─pve-root       253:1    0  69.4G  0 lvm  /
        ├─pve-data_tmeta 253:2    0   1.4G  0 lvm  
        │ └─pve-data     253:4    0 141.2G  0 lvm  
        └─pve-data_tdata 253:3    0 141.2G  0 lvm  
          └─pve-data     253:4    0 141.2G  0 lvm  
      root@pve:~# 
      

      Any advice is appreciated. Thanks all! 🙂

      GUIn00bG scottalanmillerS JaredBuschJ 3 Replies Last reply Reply Quote 0
      • GUIn00bG
        GUIn00b @GUIn00b
        last edited by

        Quick update: I did this https://help.nodespace.com/knowledgebase.php?article=307 to all 4 drives, then created a 7.0 TB lvm-thin. Is that the right way to go about this? Here's console output:

        root@pve:~# wipefs -a /dev/sda /dev/sdb /dev/sdc /dev/sdd
        /dev/sda: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54
        /dev/sda: 8 bytes were erased at offset 0x3a3817d5e00 (gpt): 45 46 49 20 50 41 52 54
        /dev/sda: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa
        /dev/sdb: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54
        /dev/sdb: 8 bytes were erased at offset 0x3a3817d5e00 (gpt): 45 46 49 20 50 41 52 54
        /dev/sdb: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa
        /dev/sdc: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54
        /dev/sdc: 8 bytes were erased at offset 0x3a3817d5e00 (gpt): 45 46 49 20 50 41 52 54
        /dev/sdc: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa
        /dev/sdd: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54
        /dev/sdd: 8 bytes were erased at offset 0x3a3817d5e00 (gpt): 45 46 49 20 50 41 52 54
        /dev/sdd: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa
        /dev/sdd: calling ioctl to re-read partition table: Success
        /dev/sda: calling ioctl to re-read partition table: Success
        /dev/sdb: calling ioctl to re-read partition table: Success
        /dev/sdc: calling ioctl to re-read partition table: Success
        root@pve:~# pvcreate /dev/sda /dev/sdb /dev/sdc /dev/sdd
          Physical volume "/dev/sda" successfully created.
          Physical volume "/dev/sdb" successfully created.
          Physical volume "/dev/sdc" successfully created.
          Physical volume "/dev/sdd" successfully created.
        root@pve:~# vgcreate hdd-thin /dev/sda /dev/sdb /dev/sdc /dev/sdd
          Volume group "hdd-thin" successfully created
        root@pve:~# lvcreate -L 7T --thinpool hdd-thin hdd-thin
          Thin pool volume with chunk size 4.00 MiB can address at most <1016.02 TiB of data.
          WARNING: Pool zeroing and 4.00 MiB large chunk size slows down thin provisioning.
          WARNING: Consider disabling zeroing (-Zn) or using smaller chunk size (<512.00 KiB).
          Logical volume "hdd-thin" created.
        root@pve:~# 
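        If those chunk-size and zeroing warnings are a concern, here is a minimal sketch of recreating the pool along the lines the warnings themselves suggest (the 256k chunk size is an illustrative pick, not a value from this thread):

        # tear down the pool created above (lvremove will ask for confirmation),
        # then recreate it with a smaller chunk size and zeroing disabled,
        # per the lvcreate warnings
        lvremove hdd-thin/hdd-thin
        lvcreate -L 7T --thinpool hdd-thin --chunksize 256k --zero n hdd-thin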
        
        travisdh1T 1 Reply Last reply Reply Quote 0
        • travisdh1T
          travisdh1 @GUIn00b
          last edited by travisdh1

          @GUIn00b said in ProxMox Storage Configuration Question (idk how lol):

          Quick update: I did this https://help.nodespace.com/knowledgebase.php?article=307 to all 4 drives, then created a 7.0 TB lvm-thin. Is that the right way to go about this?
          That looks correct to me. Convincing ProxMox to use it as an LVM-thin pool might be a bit tricky, though, as the GUI wants a blank block device....
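
          For what it's worth, an existing pool can also be handed to ProxMox from the shell rather than the GUI; a minimal sketch, assuming a storage ID of hdd-thin and the VG/pool names from the post above:

          # register the existing thin pool as ProxMox storage
          pvesm add lvmthin hdd-thin --vgname hdd-thin --thinpool hdd-thin --content rootdir,images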

          ProxMox's aversion to the best RAID system available is mind-blowing to me. They've completely bought into the cult of ZFS and the Windows world of "software RAID is bad".

          scottalanmillerS 1 Reply Last reply Reply Quote 1
          • scottalanmillerS
            scottalanmiller @GUIn00b
            last edited by

            @GUIn00b said in ProxMox Storage Configuration Question (idk how lol):

            I thought I was going to mdadm a RAID-10. Nope! AAaaaand Proxmox seems to have "ZOMG USE ZFS" all over their documentation, and I'm not interested in that if there are other options

            The only way I'd use it is either manually creating a RAID array with MD and not telling ProxMox (lol, but that's kludgy), or doing things the ProxMox way and using a hardware RAID controller.
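
            A minimal sketch of that first route, assuming the four bare drives from the original post, an array name of /dev/md0, and made-up VG/pool names:

            # hand-build the MD RAID-10, then put LVM on top of it
            mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sda /dev/sdb /dev/sdc /dev/sdd
            pvcreate /dev/md0
            vgcreate hdd-vg /dev/md0
            lvcreate -L 7T --thinpool data hdd-vg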

            ProxMox isn't stable with ZFS (ZFS is not stable on Linux, and we know exactly why), yet ProxMox ignores this and leaves the user at risk. But almost no one deploys Proxmox that way, so it is rarely an issue.

            But like VMware, officially there is no software RAID option that is production capable. Sucks because MD would do a great job here.

            1 Reply Last reply Reply Quote 2
            • scottalanmillerS
              scottalanmiller @travisdh1
              last edited by

              @travisdh1 said in ProxMox Storage Configuration Question (idk how lol):

              They've completely bought into the cult of ZFS and the Windows world of "software RAID is bad".

              It's weird because they push software RAID, just not the good software RAID that's baked in. Only software RAID from a system that is neither native nor stable on Linux. Ugh.

              1 Reply Last reply Reply Quote 1
              • JaredBuschJ
                JaredBusch @GUIn00b
                last edited by

                @GUIn00b said in ProxMox Storage Configuration Question (idk how lol):

                Any advice is appreciated.

                1. what is the point of this host?
                2. how important is your data?
                3. why use software raid?
                4. who will support it if you are not there?
                5. do they actually understand anything?

                I use hardware raid 100% in client systems. Why? because bespoke is bad. Software raid is bespoke. No matter how old and well documented it is, it is something that no one in the majority deals with. MDADM or ZFS or whatever, it matters not.

                I don't use it (hardware raid) because it is better, I use it because it is more easily understood and supported by the majority.

                scottalanmillerS GUIn00bG 4 Replies Last reply Reply Quote 0
                • scottalanmillerS
                  scottalanmiller @JaredBusch
                  last edited by

                  @JaredBusch said in ProxMox Storage Configuration Question (idk how lol):

                  Software raid is bespoke.

                  That's true in the case of ProxMox with ZFS, because ZFS is neither native nor stable on that platform, nor is there any team that makes it work. It's a product from one world shoehorned in to allow it to run (possibly in license violation) in another, and there is no official support, testing, or anything from any vendor or team.

                  But MD is way, way the opposite. MD is as enterprise and anti-bespoke as it gets: it's part of the base OS and it is the most reliable RAID system out there.

                  Technically, using a third party external hardware replacement for the OS' own tooling is far more bespoke than using MD internally.

                  1 Reply Last reply Reply Quote 0
                  • scottalanmillerS
                    scottalanmiller @JaredBusch
                    last edited by

                    @JaredBusch said in ProxMox Storage Configuration Question (idk how lol):

                    I don't use it (hardware raid) because it is better, I use it because it is more easily understood and supported by the majority.

                    When non-IT people need to interact, it's better. That's why we use it. We don't want customers thinking that they can do stuff without IT and causing damage. It means we can send a middle schooler in to change drives without needing to coordinate on the timing. It means some random drive delivery guy can do the drive swap without asking.

                    It costs a lot more. It lowers performance. But the blind swap value for customers without someone technical making sure random people aren't touching servers when they aren't supposed to is a big deal.

                    However, all those non-technical people are still hitting the power button, pulling cables, spilling coffee... so I don't know if it has ever protected us, lol.

                    JaredBuschJ 1 Reply Last reply Reply Quote 0
                    • scottalanmillerS
                      scottalanmiller @JaredBusch
                      last edited by

                      @JaredBusch said in ProxMox Storage Configuration Question (idk how lol):

                      I use it because it is more easily understood and supported by the majority.

                      In general, I actually think this is a negative. Making systems that are "easy for people who don't know what they are doing to pretend that they do" is one of the biggest causes of problems I find in customer systems.

                      "Oh, I thought I could just change how these things work and..." now they have no backups, no they are offline, now their data is corrupt, etc. etc.

                      The appearance of being accessible without knowledge encourages the Jurassic Park Effect.

                      JaredBuschJ 1 Reply Last reply Reply Quote 0
                      • JaredBuschJ
                        JaredBusch @scottalanmiller
                        last edited by

                        @scottalanmiller said in ProxMox Storage Configuration Question (idk how lol):

                        It means some random drive delivery guy can do the drive swap without asking.

                        It means the "dell" or "hp" tech that is just a random contractor can swap the drive under the warranty support wihtout needing us to deal with anything more than checking that the autorebuild started in the controller.

                        scottalanmillerS 1 Reply Last reply Reply Quote 0
                        • JaredBuschJ
                          JaredBusch @scottalanmiller
                          last edited by

                          @scottalanmiller said in ProxMox Storage Configuration Question (idk how lol):

                          In general, I actually think this is a negative. Making systems that are "easy for people who don't know what they are doing to pretend that they do" is one of the biggest causes of problems I find in customer systems.

                          No one knows how to use MD or ZFS. Instead they go to Google-sensei.

                          scottalanmillerS 1 Reply Last reply Reply Quote 0
                          • scottalanmillerS
                            scottalanmiller @JaredBusch
                            last edited by

                            @JaredBusch said in ProxMox Storage Configuration Question (idk how lol):

                            @scottalanmiller said in ProxMox Storage Configuration Question (idk how lol):

                            In general, I actually think this is a negative. Making systems that are "easy for people who don't know what they are doing to pretend that they do" is one of the biggest causes of problems I find in customer systems.

                            No one knows how to use MD or ZFS. Instead they go to Google-sensei.

                            End users, sure. Sales people, sure. But if you have an IT team, you are good to go. Having to pay hundreds of dollars for lower-reliability, lower-performance solutions so that shops without IT can pretend to keep themselves safe is a penalty for people who don't want skilled labor. Overall, just hiring an IT team or having a qualified IT department makes far more sense: you get more protection and it often actually costs less. There's no shortage of IT people.

                            I'm a huge believer in doing a good job and, if the customer then screws it up, not caring; that's on them. But intentionally doing a bad job just because I assume the customer is an idiot makes it my fault.

                            1 Reply Last reply Reply Quote 0
                            • scottalanmillerS
                              scottalanmiller @JaredBusch
                              last edited by

                              @JaredBusch said in ProxMox Storage Configuration Question (idk how lol):

                              @scottalanmiller said in ProxMox Storage Configuration Question (idk how lol):

                              It means some random drive delivery guy can do the drive swap without asking.

                              It means the "dell" or "hp" tech that is just a random contractor can swap the drive under the warranty support wihtout needing us to deal with anything more than checking that the autorebuild started in the controller.

                              Right. Not something I recommend doing, ever. Because that's the same people who pull the wrong drive, or the right drive from the wrong server. It's a great idea, and that's why we normally do it for really small shops. But it carries a lot of dangers of its own because it encourages people to make big hardware changes without asking.

                              Seen a LOT of data loss caused by making this seem like a good idea.

                              1 Reply Last reply Reply Quote 0
                              • GUIn00bG
                                GUIn00b @JaredBusch
                                last edited by

                                @JaredBusch said in ProxMox Storage Configuration Question (idk how lol):

                                @GUIn00b said in ProxMox Storage Configuration Question (idk how lol):

                                Any advice is appreciated.

                                1. what is the point of this host?
                                2. how important is your data?
                                3. why use software raid?
                                4. who will support it if you are not there?
                                5. do they actually understand anything?


                                1. It's my home hypervisor for hosting my home things but it's also serving as a learning tool as I'm a veteran of M$ systems management/consulting and wish to build a comparable mastery skillset in all things Linux.

                                2. It's important enough that I don't want to have to rebuild this over and over. I like the idea of LVM offering snapshotting (similar to M$ Shadow Copy; a snapshot sketch follows after these answers), so I have a little fail-safe there. Using RAID also gives the storage some fortification. I do plan to implement backups once I have some things on here built that are worth backing up. Not only will this be a hypervisor I can spin up various things on for tinkering/learning, but it will host my personal media and documents as well as provide various services to my house.

                                3. Because technology has reached a point where hardware RAID is largely no longer the necessity at these lower levels that it used to be. It's also going to be a working model of advanced storage configuration for me to build my home infrastructure upon, which again counts toward building that Linux experience.

                                4. I presume this is from a "you're no longer around" premise. Being my personal home system, if I'm "no longer around" (hit by a bus? lol), I'm not really too concerned about it. 😉

                                5. No, I don't. 😉
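
                                A minimal sketch of the classic LVM snapshot mentioned in point 2, using made-up VG/LV names (vg0/data, data-snap) and a made-up size; the VG needs free extents to hold the changed blocks:

                                # create a point-in-time snapshot of an LV
                                lvcreate --snapshot --size 20G --name data-snap vg0/data
                                # watch how full the snapshot is getting
                                lvs -o name,origin,data_percent vg0
                                # drop it when it is no longer needed
                                lvremove vg0/data-snap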

                                1 Reply Last reply Reply Quote 0
                                • GUIn00bG
                                  GUIn00b
                                  last edited by GUIn00b

                                  My apologies and thanks to all for the feedback. Life decided to drive a Mack truck with no brakes through my world this past week or so, so I've been MIA.

                                  I went ahead and blew away that earlier lvm-thin setup and used LVM2 (I think LVM2?) to create the RAID. Here's the console outputses:

                                  root@pve:/# lvcreate --type raid10 -l 100%FREE --stripesize 2048k --name LVOBR10 OBR10
                                    Logical volume "LVOBR10" created.
                                  root@pve:/# lvs
                                    LV      VG    Attr       LSize    Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
                                    LVOBR10 OBR10 rwi-a-r---   <7.28t                                    0.00            
                                    data    pve   twi-a-tz-- <141.23g             0.00   1.13                            
                                    root    pve   -wi-ao----  <69.37g                                                    
                                    swap    pve   -wi-ao----    8.00g                                                    
                                  root@pve:/# 
                                  
                                  
                                  root@pve:~# lsblk
                                  NAME                     MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
                                  sda                        8:0    0   3.6T  0 disk 
                                  ├─OBR10-LVOBR10_rmeta_0  253:5    0     4M  0 lvm  
                                  │ └─OBR10-LVOBR10        253:13   0   7.3T  0 lvm  
                                  └─OBR10-LVOBR10_rimage_0 253:6    0   3.6T  0 lvm  
                                    └─OBR10-LVOBR10        253:13   0   7.3T  0 lvm  
                                  sdb                        8:16   0   3.6T  0 disk 
                                  ├─OBR10-LVOBR10_rmeta_1  253:7    0     4M  0 lvm  
                                  │ └─OBR10-LVOBR10        253:13   0   7.3T  0 lvm  
                                  └─OBR10-LVOBR10_rimage_1 253:8    0   3.6T  0 lvm  
                                    └─OBR10-LVOBR10        253:13   0   7.3T  0 lvm  
                                  sdc                        8:32   0   3.6T  0 disk 
                                  ├─OBR10-LVOBR10_rmeta_2  253:9    0     4M  0 lvm  
                                  │ └─OBR10-LVOBR10        253:13   0   7.3T  0 lvm  
                                  └─OBR10-LVOBR10_rimage_2 253:10   0   3.6T  0 lvm  
                                    └─OBR10-LVOBR10        253:13   0   7.3T  0 lvm  
                                  sdd                        8:48   0   3.6T  0 disk 
                                  ├─OBR10-LVOBR10_rmeta_3  253:11   0     4M  0 lvm  
                                  │ └─OBR10-LVOBR10        253:13   0   7.3T  0 lvm  
                                  └─OBR10-LVOBR10_rimage_3 253:12   0   3.6T  0 lvm  
                                    └─OBR10-LVOBR10        253:13   0   7.3T  0 lvm  
                                  nvme0n1                  259:0    0 238.5G  0 disk 
                                  ├─nvme0n1p1              259:1    0  1007K  0 part 
                                  ├─nvme0n1p2              259:2    0     1G  0 part /boot/efi
                                  └─nvme0n1p3              259:3    0 237.5G  0 part 
                                    ├─pve-swap             253:0    0     8G  0 lvm  [SWAP]
                                    ├─pve-root             253:1    0  69.4G  0 lvm  /
                                    ├─pve-data_tmeta       253:2    0   1.4G  0 lvm  
                                    │ └─pve-data           253:4    0 141.2G  0 lvm  
                                    └─pve-data_tdata       253:3    0 141.2G  0 lvm  
                                      └─pve-data           253:4    0 141.2G  0 lvm  
                                  root@pve:~# df -h
                                  Filesystem            Size  Used Avail Use% Mounted on
                                  udev                   32G     0   32G   0% /dev
                                  tmpfs                 6.3G  1.9M  6.3G   1% /run
                                  /dev/mapper/pve-root   68G  2.9G   62G   5% /
                                  tmpfs                  32G   46M   32G   1% /dev/shm
                                  tmpfs                 5.0M     0  5.0M   0% /run/lock
                                  /dev/nvme0n1p2       1022M  344K 1022M   1% /boot/efi
                                  /dev/fuse             128M   16K  128M   1% /etc/pve
                                  tmpfs                 6.3G     0  6.3G   0% /run/user/0
                                  root@pve:~# 
                                  
                                  

                                  It's not LVM-thin and I don't know if it should be or not. From a "learn things one step at a time" perspective, this seems straightforward and I grasp it conceptually. I'll probably do the LVM-thin thing down the road. My only pending curiosity on this config is whether I selected a good stripe size. The default is 64k, but I read somewhere that for this it should be 2048, so that's what I went with; I have no idea lol.
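
                                  One way to sanity-check what the RAID10 LV actually ended up with, using standard lvs report fields, plus a hedged sketch of the later LVM-thin step:

                                  # show stripe count, stripe size, and backing devices for the new LV
                                  lvs -a -o lv_name,segtype,stripes,stripe_size,devices OBR10
                                  # later, if going thin: the existing LV can be converted into a thin pool
                                  # (only sensible while the LV is still empty)
                                  # lvconvert --type thin-pool OBR10/LVOBR10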

                                  Next is to setup a Fedora server VM. 🙂

                                  1 Reply Last reply Reply Quote 0
                                  • GUIn00bG
                                    GUIn00b
                                    last edited by GUIn00b

                                    I'm missing something again. When I tried to create a VM and use that as the location to store the VM's virtual disk, it generated an error saying there was no free space. I'm going to put on some easy listening circus music while I Google and read some more for now! 😉

                                    Screenshot from 2023-10-04 02-59-00.png
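
                                    A quick way to see what LVM actually has left to hand out, which is usually what that error is complaining about:

                                    # free space remaining in the volume group
                                    vgs OBR10
                                    # and what ProxMox itself thinks each storage has available
                                    pvesm status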

                                    1 Reply Last reply Reply Quote 0
                                    • GUIn00bG
                                      GUIn00b
                                      last edited by

                                      I think I fixed it lol but please if anyone sees otherwise, let me know. It's a new adventure for me! 😉

                                      root@pve:~# lvcreate --type raid10 --size 7t --stripesize 2048k --name LVOBR10 OBR10
                                        Logical volume "LVOBR10" created.
                                      root@pve:~# lvs
                                        LV            VG    Attr       LSize    Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
                                        LVOBR10       OBR10 rwi-a-r---    7.00t                                    0.00            
                                        data          pve   twi-aotz-- <141.23g             1.60   1.18                            
                                        root          pve   -wi-ao----  <69.37g                                                    
                                        swap          pve   -wi-ao----    8.00g                                                    
                                        vm-100-disk-0 pve   Vwi-aotz--   80.00g data        2.82                                   
                                      root@pve:~# vgs
                                        VG    #PV #LV #SN Attr   VSize   VFree  
                                        OBR10   4   1   0 wz--n-  14.55t 568.06g
                                        pve     1   4   0 wz--n- 237.47g  16.00g
                                      root@pve:~# pvs
                                        PV             VG    Fmt  Attr PSize   PFree   
                                        /dev/nvme0n1p3 pve   lvm2 a--  237.47g   16.00g
                                        /dev/sda       OBR10 lvm2 a--   <3.64t <142.02g
                                        /dev/sdb       OBR10 lvm2 a--   <3.64t <142.02g
                                        /dev/sdc       OBR10 lvm2 a--   <3.64t <142.02g
                                        /dev/sdd       OBR10 lvm2 a--   <3.64t <142.02g
                                      root@pve:~# lsblk
                                      NAME                         MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
                                      sda                            8:0    0   3.6T  0 disk 
                                      ├─OBR10-LVOBR10_rmeta_0      253:5    0     4M  0 lvm  
                                      │ └─OBR10-LVOBR10            253:13   0     7T  0 lvm  
                                      └─OBR10-LVOBR10_rimage_0     253:6    0   3.5T  0 lvm  
                                        └─OBR10-LVOBR10            253:13   0     7T  0 lvm  
                                      sdb                            8:16   0   3.6T  0 disk 
                                      ├─OBR10-LVOBR10_rmeta_1      253:7    0     4M  0 lvm  
                                      │ └─OBR10-LVOBR10            253:13   0     7T  0 lvm  
                                      └─OBR10-LVOBR10_rimage_1     253:8    0   3.5T  0 lvm  
                                        └─OBR10-LVOBR10            253:13   0     7T  0 lvm  
                                      sdc                            8:32   0   3.6T  0 disk 
                                      ├─OBR10-LVOBR10_rmeta_2      253:9    0     4M  0 lvm  
                                      │ └─OBR10-LVOBR10            253:13   0     7T  0 lvm  
                                      └─OBR10-LVOBR10_rimage_2     253:10   0   3.5T  0 lvm  
                                        └─OBR10-LVOBR10            253:13   0     7T  0 lvm  
                                      sdd                            8:48   0   3.6T  0 disk 
                                      ├─OBR10-LVOBR10_rmeta_3      253:11   0     4M  0 lvm  
                                      │ └─OBR10-LVOBR10            253:13   0     7T  0 lvm  
                                      └─OBR10-LVOBR10_rimage_3     253:12   0   3.5T  0 lvm  
                                        └─OBR10-LVOBR10            253:13   0     7T  0 lvm  
                                      nvme0n1                      259:0    0 238.5G  0 disk 
                                      ├─nvme0n1p1                  259:1    0  1007K  0 part 
                                      ├─nvme0n1p2                  259:2    0     1G  0 part /boot/efi
                                      └─nvme0n1p3                  259:3    0 237.5G  0 part 
                                        ├─pve-swap                 253:0    0     8G  0 lvm  [SWAP]
                                        ├─pve-root                 253:1    0  69.4G  0 lvm  /
                                        ├─pve-data_tmeta           253:2    0   1.4G  0 lvm  
                                        │ └─pve-data-tpool         253:4    0 141.2G  0 lvm  
                                        │   ├─pve-data             253:14   0 141.2G  1 lvm  
                                        │   └─pve-vm--100--disk--0 253:15   0    80G  0 lvm  
                                        └─pve-data_tdata           253:3    0 141.2G  0 lvm  
                                          └─pve-data-tpool         253:4    0 141.2G  0 lvm  
                                            ├─pve-data             253:14   0 141.2G  1 lvm  
                                            └─pve-vm--100--disk--0 253:15   0    80G  0 lvm  
                                      root@pve:~# df -h
                                      Filesystem            Size  Used Avail Use% Mounted on
                                      udev                   32G     0   32G   0% /dev
                                      tmpfs                 6.3G  1.9M  6.3G   1% /run
                                      /dev/mapper/pve-root   68G  3.5G   61G   6% /
                                      tmpfs                  32G   55M   32G   1% /dev/shm
                                      tmpfs                 5.0M     0  5.0M   0% /run/lock
                                      /dev/nvme0n1p2       1022M  344K 1022M   1% /boot/efi
                                      /dev/fuse             128M   16K  128M   1% /etc/pve
                                      tmpfs                 6.3G     0  6.3G   0% /run/user/0
                                      root@pve:~# 
                                      
                                      

                                      Screenshot from 2023-10-04 09-43-38.png
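
                                      For reference, roughly the CLI form of registering the volume group with ProxMox (the storage ID hdd-r10 and the content types are assumptions; the GUI shown in the screenshot may have been used instead):

                                      # expose the OBR10 volume group to ProxMox as LVM storage
                                      pvesm add lvm hdd-r10 --vgname OBR10 --content rootdir,images
                                      pvesm status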

                                      scottalanmillerS 1 Reply Last reply Reply Quote 0
                                      • scottalanmillerS
                                        scottalanmiller @GUIn00b
                                        last edited by

                                        @GUIn00b Seems right. I'm old school and learned MD before the interface was added to LVM. It's all the same stuff, just new command line options. But it sure looks right to me.
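
                                        A small illustration of that point, purely for comparison (the /dev/md0 name is an assumption): the old MD spelling next to the LVM-integrated spelling used earlier in this thread:

                                        # MD-era way to get a 4-disk RAID-10
                                        mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sda /dev/sdb /dev/sdc /dev/sdd
                                        # LVM-integrated way of expressing the same idea
                                        lvcreate --type raid10 -l 100%FREE --name LVOBR10 OBR10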

                                        1 Reply Last reply Reply Quote 2