    Solved Attach drive to VM in Xenserver (not as Storage Repository)

    IT Discussion
    xenserver xcp-ng xen
    • 1337 (last edited by 1337)

      I have a 4TB drive with files on it. How do I "attach/assign" it in xenserver so one of the VMs can access the drive as /dev/sdb (or whatever it shows up as)?

      I don't want the drive as a storage repository of some kind, just as a block device to one VM.

      • black3dynamite

        I believe in Xen Orchestra, for that specific VM, there should be an option in the PCI Devices to attach a PCI device like a USB drive.
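
        For reference, doing that kind of PCI passthrough from the host CLI looks roughly like the sketch below on XenServer 7.x. The PCI address and VM UUID are just placeholders, and note that this hands the whole controller to the VM rather than a single disk.

        # find the PCI address of the disk controller (dom0 console)
        lspci | grep -i -E 'sata|sas|raid'

        # assign the device to the VM (placeholder address and UUID)
        xe vm-param-set uuid=<vm-uuid> other-config:pci=0/0000:03:00.0

        # depending on the version, the controller may also need to be hidden from dom0
        # (pciback.hide on the dom0 boot line), and the VM has to be restarted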

        • 1337 @black3dynamite

          @black3dynamite said in Attach drive to VM in Xenserver (not as Storage Repository):

          I believe in Xen Orchestra, for that specific VM, there should be an option in the PCI Devices to attach a PCI device like a USB drive.

          Thanks, but it's not a USB drive, just a regular SAS/SATA drive in a drive bay.

          • black3dynamite @1337

            @pete-s said in Attach drive to VM in Xenserver (not as Storage Repository):

            @black3dynamite said in Attach drive to VM in Xenserver (not as Storage Repository):

            I believe in Xen Orchestra, for that specific VM, there should be an option in the PCI Devices to attach a PCI device like a USB drive.

            Thanks, but it's not a USB drive, just a regular SAS/SATA drive in a drive bay.

            My bad. This should help you. Note that the article was written for XenServer 5.x:
            http://techblog.conglomer.net/sata-direct-local-disk-access-on-xenserver/

            • dbeato @1337

              @pete-s said in Attach drive to VM in Xenserver (not as Storage Repository):

              @black3dynamite said in Attach drive to VM in Xenserver (not as Storage Repository):

              I believe in Xen Orchestra, for that specific VM, there should be an option in the PCI Devices to attach a PCI device like a USB drive.

              Thanks, but it's not a USB drive, just a regular SAS/SATA drive in a drive bay.

              You can add it as an additional drive on XenServer:
              https://support.citrix.com/article/CTX130897

              But do you need a VM to have access to the data that's already on that drive?
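
              For reference, creating an SR on the new disk from the host CLI boils down to something like this (host UUID, device path and label are placeholders; note that this formats the disk as an LVM SR):

              xe sr-create host-uuid=<host-uuid> content-type=user \
                  name-label="Local storage 2" shared=false \
                  type=lvm device-config:device=/dev/sdb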

              • 1337 @black3dynamite (last edited by 1337)

                Thanks guys.

                Unfortunately the link @dbeato provided describes how to add a new disk to XenServer when you want it to be a Storage Repository, i.e. a place to store virtual disks. So if you already have a disk with data on it, XenServer will wipe it clean and put LVM or EXT3 with VDI files on it.

                When it's passed through as a block device to a VM, it keeps whatever filesystem the VM formats it with.

                The problem with the info in the link @black3dynamite provided is that it's for XenServer 5.x, so it doesn't work as-is with XenServer 7.x (I'm running 7.4).

                What I ended up doing was adding a RAID 1 array instead of just a single disk. The principle is the same though; it's just a different name for the block device.
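
                For reference, an md RAID 1 array like this is typically created on the host with mdadm, along these lines (device names are placeholders):

                mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc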

                The array /dev/md0 is passed through to the VM as a block device.

                I did it by adding a rule near the end of /etc/udev/rules.d/65-md-incremental.rules:

                KERNEL=="md*", SUBSYSTEM=="block", ACTION=="change", SYMLINK+="xapi/block/%k", \
                        RUN+="/bin/sh -c '/opt/xensource/libexec/local-device-change %k 2>&1 >/dev/null&'"
                

                This rule will expose all md arrays as Removable Storage in XenServer, so you can attach them to whichever VM you want.

                Note that the * in KERNEL=="md*" is a wildcard, so this will match /dev/md0, md1, md2, etc. Just replace md* with whatever block device you want to pass through.
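
                For example, a hypothetical (untested) variant for a single disk that shows up as /dev/sdb would look something like this:

                KERNEL=="sdb", SUBSYSTEM=="block", ACTION=="add|change", SYMLINK+="xapi/block/%k", \
                        RUN+="/bin/sh -c '/opt/xensource/libexec/local-device-change %k 2>&1 >/dev/null&'"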

                The array is 2TB, so at first I didn't know if this works with bigger arrays. After trying some larger drives I can verify that it works fine with arrays larger than 2TB.
                Also, the disks were empty, so I wasn't sure if XenServer would wipe the disk when you set this up the first time. After some experimenting it looks like XenServer will not touch the drive.
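
                A quick way to sanity-check from inside the VM (the device name is only an example, it depends on how the disk gets attached):

                # inside the guest, after attaching the disk from Removable Storage
                lsblk                # the passed-through array shows up as e.g. /dev/xvdb
                blkid /dev/xvdb      # any existing filesystem signature should still be visible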

                I'll add the complete file for reference.

                KERNEL=="td[a-z]*", GOTO="md_end"
                # This file causes block devices with Linux RAID (mdadm) signatures to
                # automatically cause mdadm to be run.
                # See udev(8) for syntax
                
                # Don't process any events if anaconda is running as anaconda brings up
                # raid devices manually
                ENV{ANACONDA}=="?*", GOTO="md_end"
                
                # Also don't process disks that are slated to be a multipath device
                ENV{DM_MULTIPATH_DEVICE_PATH}=="?*", GOTO="md_end"
                
                # We process add events on block devices (since they are ready as soon as
                # they are added to the system), but we must process change events as well
                # on any dm devices (like LUKS partitions or LVM logical volumes) and on
                # md devices because both of these first get added, then get brought live
                # and trigger a change event.  The reason we don't process change events
                # on bare hard disks is because if you stop all arrays on a disk, then
                # run fdisk on the disk to change the partitions, when fdisk exits it
                # triggers a change event, and we want to wait until all the fdisks on
                # all member disks are done before we do anything.  Unfortunately, we have
                # no way of knowing that, so we just have to let those arrays be brought
                # up manually after fdisk has been run on all of the disks.
                
                # First, process all add events (md and dm devices will not really do
                # anything here, just regular disks, and this also won't get any imsm
                # array members either)
                SUBSYSTEM=="block", ACTION=="add", ENV{ID_FS_TYPE}=="linux_raid_member", \
                        RUN+="/sbin/mdadm -I $env{DEVNAME}"
                
                # Next, check to make sure the BIOS raid stuff wasn't turned off via cmdline
                IMPORT{cmdline}="noiswmd"
                IMPORT{cmdline}="nodmraid"
                ENV{noiswmd}=="?*", GOTO="md_imsm_inc_end"
                ENV{nodmraid}=="?*", GOTO="md_imsm_inc_end"
                SUBSYSTEM=="block", ACTION=="add", ENV{ID_FS_TYPE}=="isw_raid_member", \
                        RUN+="/sbin/mdadm -I $env{DEVNAME}"
                LABEL="md_imsm_inc_end"
                
                SUBSYSTEM=="block", ACTION=="remove", ENV{ID_PATH}=="?*", \
                        RUN+="/sbin/mdadm -If $name --path $env{ID_PATH}"
                SUBSYSTEM=="block", ACTION=="remove", ENV{ID_PATH}!="?*", \
                        RUN+="/sbin/mdadm -If $name"
                
                # Next make sure that this isn't a dm device we should skip for some reason
                ENV{DM_UDEV_RULES_VSN}!="?*", GOTO="dm_change_end"
                ENV{DM_UDEV_DISABLE_OTHER_RULES_FLAG}=="1", GOTO="dm_change_end"
                ENV{DM_SUSPENDED}=="1", GOTO="dm_change_end"
                KERNEL=="dm-*", SUBSYSTEM=="block", ENV{ID_FS_TYPE}=="linux_raid_member", \
                        ACTION=="change", RUN+="/sbin/mdadm -I $env{DEVNAME}"
                LABEL="dm_change_end"
                
                # Finally catch any nested md raid arrays.  If we brought up an md raid
                # array that's part of another md raid array, it won't be ready to be used
                # until the change event that occurs when it becomes live
                KERNEL=="md*", SUBSYSTEM=="block", ENV{ID_FS_TYPE}=="linux_raid_member", \
                        ACTION=="change", RUN+="/sbin/mdadm -I $env{DEVNAME}"
                
                # Added line
                # Pass-through of all /dev/md* arrays. 
                # Will end up as Removable Storage that can be assigned to a VM.
                KERNEL=="md*", SUBSYSTEM=="block", ACTION=="change", SYMLINK+="xapi/block/%k", \
                        RUN+="/bin/sh -c '/opt/xensource/libexec/local-device-change %k 2>&1 >/dev/null&'"
                
                LABEL="md_end"
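
                To apply the rule without rebooting and attach the resulting device, something along these lines should work (a sketch; the UUIDs, device number and SR name are placeholders and may differ on your system):

                # reload the udev rules and re-trigger block device events on the host
                udevadm control --reload-rules
                udevadm trigger --subsystem-match=block

                # the array should now show up as a VDI under the "Removable storage" SR;
                # attach it to a VM from XenCenter/Xen Orchestra, or with xe:
                xe vdi-list sr-name-label="Removable storage"
                xe vbd-create vm-uuid=<vm-uuid> vdi-uuid=<vdi-uuid> device=1 mode=RW type=Disk
                xe vbd-plug uuid=<vbd-uuid>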
                