
    Xenserver Space Woes

    IT Discussion
    65 Posts 9 Posters 14.3k Views
    • Danp

      Can you post the results of xe vdi-list showing both VDIs? I'm wondering if one VDI is acting as a base copy for the other.

    • jrc @Danp

        @Danp said in Xenserver Space Woes:

        Can you post the results of xe vdi-list showing both VDIs? I'm wondering if one VDI is acting as a base copy for the other.

        Bad one:

        uuid ( RO): 5535a3db-da4f-4211-afa8-077241f63221
        name-label ( RW): Staff Home
        name-description ( RW): VDI for staff home folders
        sr-uuid ( RO): 4558cecd-d90d-3259-7ea5-09478d0e386c
        virtual-size ( RO): 2193654546432
        sharable ( RO): false
        read-only ( RO): true

        Good one:

        uuid ( RO): 6255caa0-e7d4-4d27-a257-b33aaf3a7507
        name-label ( RW): Staff Home
        name-description ( RW): VDI for staff home folders
        sr-uuid ( RO): 4558cecd-d90d-3259-7ea5-09478d0e386c
        virtual-size ( RO): 2193654546432
        sharable ( RO): false
        read-only ( RO): false

        EDIT: Maybe I am barking up the wrong SR-VDI here, since I ran the vhd-util command and got:

        vhd=VHD-f832866c-1bb4-48d5-81e7-4dd468b2618b capacity=2,193,654,546,432 size=2,197,689,466,880 hidden=1 parent=none
        vhd=VHD-5535a3db-da4f-4211-afa8-077241f63221 capacity=2,193,654,546,432 size=14,424,211,456 hidden=1 parent=VHD-f832866c-1bb4-48d5-81e7-4dd468b2618b
        vhd=VHD-6255caa0-e7d4-4d27-a257-b33aaf3a7507 capacity=2,193,654,546,432 size=2,197,945,319,424 hidden=0 parent=VHD-5535a3db-da4f-4211-afa8-077241f63221

        That seems to imply that the "good" VDI is a child of the "bad" VDI, which is itself a copy of the base VDI. That would seem to be "normal", but still, where is that extra 2TB going? And why can't I free it up?

        • momurda

          This issue is fascinating.
          Here is an article from Citrix; the answer is probably in here, though at this time it is a bit over my head.
          http://support.citrix.com/article/CTX201296
          It discusses coalescing, the reasons it can fail, and steps to troubleshoot and fix coalescing issues.
          There seem to be 8 possible causes for this failing to happen automatically.
          According to the article, /var/log/SMlog probably has more info about the problem.
          Also, are you able to move the SR (which will automatically get rid of snapshot chains), or export the VM, delete it, then import it?
          I also think that any of these solutions require you to have sufficient free space on the SR.
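Acting on that SMlog suggestion can be scripted. A minimal sketch; the log path is the one named in the thread, but the grep pattern is my assumption about what is worth surfacing, not something from the article:

```shell
# Sketch: surface coalesce-related entries in SMlog on a XenServer host.
# Pattern is a guess (coalesce activity plus any exceptions logged near it).
smlog_coalesce() {
  local log=${1:-/var/log/SMlog}
  grep -inE 'coalesce|exception' "$log" | tail -n 40
}
```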

          • jrc @momurda

            @momurda said in Xenserver Space Woes:

            /var/log/SMlog probably has more info about the problem according to this.

            Browsing through /var/log/SMlog does not really show anything obvious. I can see where it is doing something with the three VDIs previously mentioned, but it looks like that was a success. Yet I continue to use 2TB more than is virtually assigned.

            I am going to dig through that support doc you linked and see if I can work anything out.

            • jrc

              I think I may have worked it out. It would appear that the online coalesce for the VM in question keeps timing out on the specific VDI in question (the 6255... one). The article goes on to say this might be due to heavy load on the storage at the time it tries. I do not think that is the case here, but the suggested solution is to shut the VM down and do an offline coalesce with the command:

              xe host-call-plugin host-uuid=<UUID of the pool master Host> plugin=coalesce-leaf fn=leaf-coalesce args:vm_uuid=<uuid of the VM you want to coalesce>

              I am going to try this tonight and see what happens.

              A side question: How does one work out: 1. If your storage is too slow? and 2. What is the IOP speed your storage is capable of?
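The two UUID lookups for that command can be scripted. A sketch, assuming `xe` is on the PATH and the VM's name-label is unique; the plugin call itself is the one quoted above:

```shell
# Sketch: offline leaf-coalesce helper. The VM should be shut down first.
offline_coalesce() {
  local vm_name=$1
  local master vm_uuid
  master=$(xe pool-list params=master --minimal)        # pool master host UUID
  vm_uuid=$(xe vm-list name-label="$vm_name" --minimal) # assumes unique name
  xe host-call-plugin host-uuid="$master" \
     plugin=coalesce-leaf fn=leaf-coalesce "args:vm_uuid=$vm_uuid"
}
```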

              • momurda

                In XenCenter, if your XenServer is up to date with all hotfixes, you can use the Performance tab in XC on the XS host to measure disk performance (read/write/total IOPS, queue length for each SR or VDI) and you should get accurate results. If you don't have the hotfixes installed, you probably will not get accurate results.

                In general, longer queue lengths mean the disk can't keep up with what it is being asked to do.
                You can also query performance from the CLI using iostat.
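Queue length, IOPS, and latency tie together via Little's law (outstanding I/Os ≈ IOPS × average service time). A sketch with assumed figures, not measurements from this thread:

```shell
# Little's law: average queue depth = arrival rate (IOPS) x service time (s).
# 900 IOPS at 5 ms average service time -> 4.5 I/Os outstanding on average.
iops=900
latency_ms=5
queue=$(awk -v i="$iops" -v l="$latency_ms" 'BEGIN { print i * l / 1000 }')
echo "$queue"    # 4.5
```

So a queue that hovers between 0 and 1 at near-zero latency means the array is nowhere near saturated.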

                • jrc @momurda

                  @momurda said in Xenserver Space Woes:

                  In XenCenter, if your Xenserver is up to date with all hotfixes, you can use the performance tab in XC on the XS host to measure disk performance (read/write/total iops, queue length for each SR or vd) and you should get accurate results.

                  Cool, I created a graph and added Disk IO Wait and Disk Queue size, but there appears to be no data (the hosts are completely up to date as of this weekend). I do note that on the standard Disk Performance graph there is not too much activity, over the last few days it's topped out at around 0.33MBps.

                  I guess I'll check in on it over the next few days and see what it looks like, but I don't think I'm having disk performance issues.

                  • BRRABill @momurda

                    @momurda said

                    In XenCenter, if your Xenserver is up to date with all hotfixes,

                    Is it the hotfixes, or the XS Tools? I know the tools have to be installed to run some of the stuff. (Like memory.)

                    • jrc @BRRABill

                      @BRRABill said in Xenserver Space Woes:

                      Is it the hotfixes, or the XS Tools?

                      Good point. The tools are not up to date. So I'll need to update them tonight, though I am looking at historical data from before I applied SP1 and the other updates.

                      • momurda

                        You can also throw some I/O at a disk by copying a large file or lots of small files to a VM (do it twice at the same time if you want to see if you max out) to test your IOPS. Or reboot a few VMs at the same time. My storage array hits 1500 or so before it starts to peak, IIRC from some tests I did back in the winter. Though I do wonder if some of that isn't bound by us using a Gb network rather than 10Gb.
                        [Image: iSCSI IOPS for my XS001 XenServer host]
                        This shows the last ten minutes of IOPS for all SRs attached to my XS001 host. The purple iscsi3 is an SR; I booted a VM that lives there that nobody ever uses.

                        • jrc

                          So my IOPs seem to be jumping between 0 and 900k fairly quickly. But the Queue size seems to stay between 0 and 1, with the latency very low (near zero) as well. Network traffic is well under 1MBps. This is from the performance meters on the Xen master host.

                          • scottalanmiller @jrc

                            @jrc said in Xenserver Space Woes:

                            So my IOPs seem to be jumping between 0 and 900k fairly quickly. But the Queue size seems to stay between 0 and 1, with the latency very low (near zero) as well. Network traffic is well under 1MBps. This is from the performance meters on the Xen master host.

                            Basically what that is telling me is that you have plenty of IOPS in reserve and you are never demanding more from it than it can provide. Those numbers are basically showing your storage as "idle" and ready for whatever you want to throw at it.

                            • jrc

                              @scottalanmiller said in Xenserver Space Woes:

                              Basically what that is telling me is that you have plenty of IOPS in reserve and you are never demanding more from it than it can provide. Those numbers are basically showing your storage as "idle" and ready for whatever you want to throw at it.

                              Ok, so my gut on that was right. Then I need to work out why the leaf quiescence thingy is timing out, since it appears to not be a disk IO thing.

                              • jrc

                                I fixed it! Shut down the VM, then ran an offline coalesce and that did it:

                                xe host-call-plugin host-uuid=<Host UUID> plugin=coalesce-leaf fn=leaf-coalesce args:vm_uuid=<VM UUID>

                                It did take about 45 minutes, but once it was done the space was freed. XenCenter is now happily reporting the used space as 4127GB and the virtually assigned as 4115GB. It's not perfect, but I'll take it!
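That used-vs-assigned gap can be watched per SR from the CLI, so a stuck coalesce shows up early. A sketch, assuming `xe` is on the PATH; the two SR parameters are the standard `physical-utilisation` and `virtual-allocation` fields:

```shell
# Sketch: report how far an SR's physical utilisation exceeds its virtual
# allocation. A large positive gap suggests a coalesce is pending or stuck.
sr_overhead() {
  local sr_uuid=$1
  local used alloc
  used=$(xe sr-param-get uuid="$sr_uuid" param-name=physical-utilisation)
  alloc=$(xe sr-param-get uuid="$sr_uuid" param-name=virtual-allocation)
  echo $(( used - alloc ))    # bytes of overhead
}
```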

                                • scottalanmiller

                                  Awesome, glad that fixed things.
