    XenServer Export Performance Seems Poor

    IT Discussion
    xenserver xenserver 6.5 gzip
    • Dashrender @scottalanmiller

      @scottalanmiller said:

      @Dashrender said:

      @scottalanmiller said:

      @Dashrender said:

      Oh well, sure - I can definitely make it roll back on reboots... though, now that I think about it, I can't do that either, because while I want the data itself to be read only, I need access logs. Those logs are part of the system itself. Those logs live in the SQL DB, so I could just back up the SQL DB, or find a way to export them and only back up that part... then worry less about the rest.

      So it is NOT read only, hence the problem.

      Continuing the discussion has brought this to the surface... I wasn't intentionally withholding it earlier... so yeah, that part at least is not read only.

      Can the logs just go elsewhere? ELK for example?

      If I pay a developer to learn how it works - sure it could.

      • scottalanmiller

        Where are the logs going now?

        • DustinB3403

          If I recall correctly, @Dashrender said the files on this server aren't the critical point for it, i.e. they are used when they are created and then put away into the storage on the VM.

          If that's the case, why not limit the size of the VM, allowing for a faster recovery of the VM, and then piecemeal restore the data as it's needed from something like a Synology NAS?

          My point is, the VM as described is only 700GB because it was allowed to grow to this size, but it could be a meager 150GB.

          • Dashrender @scottalanmiller

            @scottalanmiller said:

            Where are the logs going now?

            Into the SQL DB on the server. Same place where the EHR data lives.

            • Dashrender @DustinB3403

              @DustinB3403 said:

              If I recall correctly, @Dashrender said the files on this server aren't the critical point for it, i.e. they are used when they are created and then put away into the storage on the VM.

              If that's the case, why not limit the size of the VM, allowing for a faster recovery of the VM, and then piecemeal restore the data as it's needed from something like a Synology NAS?

              My point is, the VM as described is only 700GB because it was allowed to grow to this size, but it could be a meager 150GB.

              This is not correct - I guess there was a misunderstanding somewhere.

              • scottalanmiller @Dashrender

                @Dashrender said:

                @scottalanmiller said:

                Where are the logs going now?

                Into the SQL DB on the server. Same place where the EHR data lives.

                A developer could very quickly make a little component that takes those logs and outputs to a text file. I mean, realistically, you could do this with a one line script - just one SQL query going out to file. ELK will grab the file and boom, all done.
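A minimal sketch of that "one SQL query going out to file" idea, using Python's built-in sqlite3 purely as a stand-in for the real SQL Server connection (in practice you'd connect via something like pyodbc; the access_log table and its columns are assumptions for illustration). Each row becomes one JSON line in a file that Filebeat could ship into ELK:

```python
import json
import sqlite3

# Stand-in for the EHR's real database; the table and column names
# here are hypothetical, not the actual EHR schema.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE access_log (ts TEXT, username TEXT, action TEXT)")
conn.executemany(
    "INSERT INTO access_log VALUES (?, ?, ?)",
    [
        ("2016-05-01T09:00:00", "jsmith", "login"),
        ("2016-05-01T09:01:12", "jsmith", "search:patient 1234"),
    ],
)

# One SQL query, written out as newline-delimited JSON, a format the
# ELK stack ingests easily via Filebeat.
with open("access_log.ndjson", "w") as out:
    for ts, username, action in conn.execute(
        "SELECT ts, username, action FROM access_log ORDER BY ts"
    ):
        out.write(json.dumps({"ts": ts, "user": username, "action": action}) + "\n")
```

Scheduled from cron or Task Scheduler (and filtered to rows newer than the last run), something like this keeps the logs flowing out of the VM without touching the static EHR data.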

                • Dashrender

                  This server has a 60 GB SQL DB, 500+ GB of TIFs (scanned-in paper documents), and another 100+ GB of application and other files associated with the old EHR.

                  At this point in time, the only thing changing on this system should be the access logs - who's logging in, who they are searching for, etc. The data in the DB and the TIF files, etc should all remain static.

                  The system (other than the log growth) should not be growing. It has around 50 GB of free space currently. This should be a lifetime of space since the main data isn't growing anymore.

                  • DustinB3403 @Dashrender

                    So @Dashrender do you need the static data on the VM including everything that makes up the 700GB to function?

                    Or can all of the extra stuff get pushed off to something else?

                    If the goal is to ensure the VM boots, and the database is accessible, then you should reduce the size of the VM as much as possible.

                    Anything that is static and that can get moved out of it, I would imagine should be, so you could recover from a faulty OS update that much more quickly.

                    • Dashrender @scottalanmiller

                      @scottalanmiller said:

                      @Dashrender said:

                      @scottalanmiller said:

                      Where are the logs going now?

                      Into the SQL DB on the server. Same place where the EHR data lives.

                      A developer could very quickly make a little component that takes those logs and outputs to a text file. I mean, realistically, you could do this with a one line script - just one SQL query going out to file. ELK will grab the file and boom, all done.

                      I'm guessing that you're assuming that all of the logs are in a single table - and assuming that's true, then I agree with you.

                      • Dashrender @DustinB3403

                        @DustinB3403 said:

                        So @Dashrender do you need the static data on the VM including everything that makes up the 700GB to function?

                        Yes - if anything on there is removed (or not mapped into it), the whole thing doesn't function as it should.

                        • Dashrender

                          I should also add - 30 hours of downtime on this system would not be a huge deal.

                          • Dashrender @Dashrender

                            @Dashrender said:

                            I should also add - 30 hours of downtime on this system would not be a huge deal.

                            If we have to go to a paper chart (yes, we still have tens of thousands of them in storage), it would take at least 24 hours to get it... this "old" system is now in that ballpark.

                            • DustinB3403 @Dashrender

                              @Dashrender said:

                              I should also add - 30 hours of downtime on this system would not be a huge deal.

                              But again, that is assuming the import and your backup are in good working condition. If either fails, it could be down for multiple days.

                              • Dashrender @DustinB3403

                                @DustinB3403 said:

                                @Dashrender said:

                                I should also add - 30 hours of downtime on this system would not be a huge deal.

                                But again, that is assuming the import and your backup are in good working condition. If either fails, it could be down for multiple days.

                                and it would be down for multiple days if the data VM dies and doesn't restore correctly either.

                                • DustinB3403

                                  But with the data split out you could have multiple known good copies; with the VM you have only your individual backups.

                                  Those all need to be tested on a regular basis to confirm they function, which would take ~30 hours at best per import test.

                                  • Dashrender @DustinB3403

                                    @DustinB3403 said:

                                    But with the data split out you could have multiple known good copies; with the VM you have only your individual backups.

                                    Those all need to be tested on a regular basis to confirm they function, which would take ~30 hours at best per import test.

                                    Multiple known good copies? Huh? Why would I have multiple copies of that non-changing data?

                                    • DustinB3403

                                      The very same reason you keep multiple copies of anything critical..... so you have another to recover from.

                                      Even if all 700GB are in this VM, you don't keep just 1 backup of it.

                                      • Dashrender @DustinB3403

                                        @DustinB3403 said:

                                        The very same reason you keep multiple copies of anything critical..... so you have another to recover from.

                                        Even if all 700GB are in this VM, you don't keep just 1 backup of it.

                                        You have a point here.

                                        • Dashrender

                                          Dustin, you still haven't told me what makes my application VM more vulnerable than a data Samba share or a NAS, though, to warrant splitting it.

                                          • DustinB3403

                                            So my point with reducing the size of your VM is multi-pronged.

                                            • It'll reduce backup time (unless you're doing deltas, in which case only the roll-over will take a while)
                                            • It'll speed up import time (less to transfer into XS)
                                            • It'll be less to keep stored as a backup.

                                            If you put the data onto a separate medium (and chime in, folks, if you think I'm wrong here), you'd simply update the pathing in the database to access the primary remote store.

                                            This remote store would get backed up to (let's just say) a 4-bay Synology, which then gets pushed off to (again, let's just say) Backblaze B2.

                                            You'd have multiple copies of the data the VM needs, off host, which can then be restored from separate mediums should something go belly up.
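The multiple-copies idea above can be sketched as a simple copy-and-verify step. This uses local directories as hypothetical stand-ins for the Synology and B2 tiers (the names are assumptions, not real mount points), with a checksum so each copy is a known good one rather than an assumed good one:

```python
import hashlib
import shutil
from pathlib import Path

def sha256(path: Path) -> str:
    """Checksum a file so each backup copy can be verified, not just assumed good."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def replicate(source: Path, tiers: list[Path]) -> None:
    """Copy `source` to each backup tier and verify the checksum matches."""
    expected = sha256(source)
    for tier in tiers:
        tier.mkdir(parents=True, exist_ok=True)
        dest = tier / source.name
        shutil.copy2(source, dest)
        if sha256(dest) != expected:
            raise RuntimeError(f"copy to {tier} failed verification")

# Stand-in directories: in practice the first tier would be the Synology
# share and the second a staging area pushed off to Backblaze B2.
data = Path("ehr_data.tif")
data.write_bytes(b"static scanned document data")
replicate(data, [Path("synology_backup"), Path("b2_staging")])
```

Because the data is static, a verified copy stays good indefinitely, which is exactly what makes keeping several of them cheap compared with re-testing whole-VM imports.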
