    HyperV Server - Raid Best Practices

    IT Discussion

    • PhlipElder

      We have one of our boxes (R2208GZ4GC) that's being used as a platform for a client recovery.

      It's set up with:
      2x 240GB Intel SSD DC S4500 RAID 1 for host OS
      2x 1.9TB Intel SSD D3-S4610 RAID 1 for VMs

      It's working great for the seven or eight VMs currently stood up on it. RAID is provided by an Intel RMS25CB080 (IIRC) series module.
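
For a quick sense of what that layout yields, here is a back-of-envelope sketch in Python. It only does the RAID 1 capacity arithmetic; the drive sizes and the seven-to-eight VM count come from the post above, while the even split of the VM store across guests is purely an assumption for illustration.

```python
# Rough capacity math for the host described above (illustrative only).

def raid1_usable_gb(drive_count: int, drive_gb: float) -> float:
    """RAID 1 keeps mirrored copies, so usable space is half the raw total."""
    return drive_count * drive_gb / 2

host_os_gb = raid1_usable_gb(2, 240)    # 2x 240GB Intel SSD DC S4500
vm_store_gb = raid1_usable_gb(2, 1900)  # 2x 1.9TB Intel SSD D3-S4610

for vm_count in (7, 8):
    # Assumes an even split across guests, which is not claimed in the post.
    print(f"{vm_count} VMs -> ~{vm_store_gb / vm_count:.0f} GB of flash per VM "
          f"(host OS mirror: {host_os_gb:.0f} GB usable)")
```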

      • DustinB3403 @PhlipElder

        @PhlipElder said in HyperV Server - Raid Best Practices:

        We have one of our boxes (R2208GZ4GC) that's being used as a platform for a client recovery.

        It's set up with:
        2x 240GB Intel SSD DC S4500 RAID 1 for host OS
        2x 1.9TB Intel SSD D3-S4610 RAID 1 for VMs

        It's working great for the seven or eight VMs currently stood up on it. RAID is provided by an Intel RMS25CB080 (IIRC) series module.

        What you have is wasted SSD performance and cost and storage capacity.

        • PhlipElder @DustinB3403

          @DustinB3403 said in HyperV Server - Raid Best Practices:

          What you have is wasted SSD performance and cost and storage capacity.

          The performance pays for itself when they are in full swing with very little to no noticeable latency. And, updates run a lot faster.

          Cost wise, it's not that much of a step.

          • Dashrender @scottalanmiller

            @scottalanmiller said in HyperV Server - Raid Best Practices:

            @JaredBusch said in HyperV Server - Raid Best Practices:

            @Obsolesce said in HyperV Server - Raid Best Practices:

            @Joel said in HyperV Server - Raid Best Practices:

            Hi guys.
            I'm torn between two setup scenarios for a new server:

            Option 1:
            2x 240GB SSD SATA 6Gb/s (for OS)
            4x 2TB 12Gb/s (for Data)
            I was planning on using RAID 1 for the OS and then RAID 5/6 for the Data storage

            Option 2:
            6x 2TB drives in OBR10 for everything, and then creating two partitions (one for the OS and one for data).

            Are there any better options? What would you do?

            The environment will be running Windows. The server (bare metal) will run Hyper-V Server and the data drive will host 3x VMs (1x SQL, 1x DC and 1x file server).

            Thoughts welcomed and appreciated.

            I'd go with the Option 1 setup with the following changes:

            RAID 10 the 4x 2TB drives, and partition that for C: (OS) and D: (Data).

            RAID 1 the SSDs (E:), and store the database virtual disks there, and the rest on D:.

            Using SSD for most SMB database needs is overkill. They run perfectly fine on RAID 10 spinners.

            Having 6x drives is probably overkill, too.

            In reality, maybe just two SSDs in RAID 1 is all that is needed.

            Well - other than the sheer volume of storage.
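
To make the capacity side of that comparison concrete, here is a small Python sketch of the usable space under each proposed layout. It uses the standard RAID arithmetic only; real arrays lose a little more to metadata and GB/GiB rounding, and performance is a separate question entirely.

```python
# Usable-capacity comparison of the two layouts proposed above (illustrative only).

def raid1(drives: int, size_tb: float) -> float:
    return drives * size_tb / 2          # mirrored pair: half of raw

def raid5(drives: int, size_tb: float) -> float:
    return (drives - 1) * size_tb        # one drive's worth of parity

def raid6(drives: int, size_tb: float) -> float:
    return (drives - 2) * size_tb        # two drives' worth of parity

def raid10(drives: int, size_tb: float) -> float:
    return drives * size_tb / 2          # striped mirrors: half of raw

# Option 1: 2x 240GB SSD RAID 1 (OS) + 4x 2TB in RAID 5 or RAID 6 (data)
opt1_os = raid1(2, 0.240)
opt1_data_r5 = raid5(4, 2.0)
opt1_data_r6 = raid6(4, 2.0)

# Option 2: 6x 2TB in OBR10, partitioned into OS and data volumes
opt2 = raid10(6, 2.0)

print(f"Option 1: {opt1_os:.2f} TB OS mirror + {opt1_data_r5:.0f} TB (RAID 5) "
      f"or {opt1_data_r6:.0f} TB (RAID 6) for data")
print(f"Option 2: {opt2:.0f} TB shared by the OS and data partitions")
```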

            • Dashrender @PhlipElder

              @PhlipElder said in HyperV Server - Raid Best Practices:

              The performance pays for itself when they are in full swing with very little to no noticeable latency. And, updates run a lot faster.

              Cost wise, it's not that much of a step.

              What? The performance of the hypervisor is meaningless - assuming that's what's on the SSDs and not the VMs. So yeah, the SSDs would be a waste.

              As for updates - how often are you updating the hypervisor that you care that much about how fast updates run on the hypervisor?

              • PhlipElder @Dashrender

                @Dashrender said in HyperV Server - Raid Best Practices:

                What? The performance of the hypervisor is meaningless - assuming that's what's on the SSDs and not the VMs. So yeah, the SSDs would be a waste.

                As for updates - how often are you updating the hypervisor that you care that much about how fast updates run on the hypervisor?

                I was thinking more for the guests than the host.

                A spindled RAID 1 for the host OS would be a real pain for updating the host. The CUs get yuge and time consuming.

                EDIT: A pair of Intel SSD DC series SATA drives are not expensive at 240GB or smaller. In the overall scheme of things, the biggest costs on the host are the CPUs and memory, then the storage subsystem, depending on setup.

                • DustinB3403 @PhlipElder

                  @PhlipElder it's still added cost for little to no gain.

                  Try and justify this poor decision all you want. But it was and is still a poor decision.

                  • PhlipElder @DustinB3403

                    @DustinB3403 said in HyperV Server - Raid Best Practices:

                    @PhlipElder it's still added cost for little to no gain.

                    Try and justify this poor decision all you want. But it was and is still a poor decision.

                    To each their own.

                    • Dashrender @PhlipElder

                      @PhlipElder said in HyperV Server - Raid Best Practices:

                      I was thinking more for the guests than the host.

                      A spindled RAID 1 for the host OS would be a real pain for updating the host. The CUs get yuge and time consuming.

                      EDIT: A pair of Intel SSD DC series SATA drives are not expensive at 240GB or smaller. In the overall scheme of things, the biggest costs on the host are the CPUs and memory, then the storage subsystem, depending on setup.

                      OK fine - the CUs for the hypervisor get big - and? You're likely only patching them monthly, if even. How much time are you saving by using SSD there compared to HDD? 1 min? Maybe 5? That's hardly enough time saved to justify the cost of SSDs, even if they are only $99/ea. Why have them at all? Why not just install the hypervisor on the RAID 10 your VMs are on and save yourself $200?

                      Yeah I get it - it's a $3,000+ server, $200 is nothing... but it's still about 8%+... so it's not nothing...
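
Putting rough numbers on that cost-versus-time argument, here is an illustrative Python back-of-envelope. Every input below (minutes saved per patch cycle, patch frequency, hourly rate) is an assumption picked for the sake of the example, not a figure anyone in this thread has measured.

```python
# Illustrative only: value of faster host patching vs. the cost of a boot-SSD pair.

ssd_pair_cost = 200            # roughly 2x $99 boot SSDs (assumption)
minutes_saved_per_patch = 5    # generous guess for CU install + reboot on SSD vs HDD
patch_cycles_per_year = 12     # monthly patching (assumption)
admin_rate_per_hour = 100      # loaded cost of whoever babysits the reboot (assumption)

hours_saved_per_year = minutes_saved_per_patch * patch_cycles_per_year / 60
value_per_year = hours_saved_per_year * admin_rate_per_hour

print(f"~{hours_saved_per_year:.1f} admin hours saved per year, "
      f"worth about ${value_per_year:.0f}")
print(f"Payback on the ${ssd_pair_cost} SSD pair: "
      f"~{ssd_pair_cost / value_per_year:.1f} years")
```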

                      • PhlipElder @Dashrender

                        @Dashrender said in HyperV Server - Raid Best Practices:

                        OK fine - the CUs for the hypervisor get big - and? You're likely only patching them monthly, if even. How much time are you saving by using SSD there compared to HDD? 1 min? Maybe 5? That's hardly enough time saved to justify the cost of SSDs, even if they are only $99/ea. Why have them at all? Why not just install the hypervisor on the RAID 10 your VMs are on and save yourself $200?

                        Yeah I get it - it's a $3,000+ server, $200 is nothing... but it's still about 8%+... so it's not nothing...

                        Y'all realize WD/HGST no longer makes 2.5" SAS spindles? Seagate won't be too much further down the road. They've reached the end of the road.

                        As far as the dollar figure goes, where there's value there's value. That too is all in the eye of the beholder. Our clients see it as we're delivering flash in our standalone and clustered systems.

                        We shall need to agree to disagree.

                        TTFN

                        • Dashrender @PhlipElder

                          @PhlipElder said in HyperV Server - Raid Best Practices:

                          Y'all realize WD/HGST no longer makes 2.5" SAS spindles? Seagate won't be too much further down the road. They've reached the end of the road.

                          As far as the dollar figure goes, where there's value there's value. That too is all in the eye of the beholder. Our clients see it as we're delivering flash in our standalone and clustered systems.

                          We shall need to agree to disagree.

                          TTFN

                          Wait - if a fast hypervisor matters for updating, wouldn't it be even more important to have the workloads themselves be fast too? How are you not justifying putting all the data on SSD?

                          • PhlipElder @Dashrender

                            @Dashrender said in HyperV Server - Raid Best Practices:

                            Wait - if a fast hypervisor matters for updating, wouldn't it be even more important to have the workloads themselves be fast too? How are you not justifying putting all the data on SSD?

                            See my earlier recommendation. Our starting go-to for all servers has been 8x 10K SAS in RAID 6 with two logical disks with the aforementioned performance specifications.

                            SSD in standalone hosts has been an option cost-wise for a few years now, depending on data volume.

                            Ultimately, it's up to the customer/client. Here's $$$ for the spindle solution and here's $$$$ for the SSD solution, and here are the benefits of one over the other. They make the choice.
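
For readers weighing that go-to spindle build against flash, here is a hedged sketch of the usual rule-of-thumb math. The per-drive IOPS figures, the 1.2TB 10K SAS drive size, the 70/30 read/write mix, and the RAID write penalties are generic planning numbers, not benchmarks of the arrays discussed in this thread.

```python
# Rule-of-thumb comparison: 8x 10K SAS in RAID 6 vs. 2x SATA SSD in RAID 1.
# All per-drive figures below are generic planning assumptions.

def frontend_iops(drives: int, iops_per_drive: int,
                  read_ratio: float, write_penalty: int) -> float:
    """Classic RAID sizing rule: raw IOPS / (read% + write% * write penalty)."""
    raw = drives * iops_per_drive
    return raw / (read_ratio + (1 - read_ratio) * write_penalty)

# 8x 10K SAS (~140 IOPS each, 1.2TB), RAID 6 (write penalty 6), 70/30 mix
sas_r6_iops = frontend_iops(8, 140, 0.7, 6)
sas_r6_capacity_tb = (8 - 2) * 1.2

# 2x SATA SSD (~30,000 steady-state IOPS each), RAID 1 (write penalty 2)
ssd_r1_iops = frontend_iops(2, 30_000, 0.7, 2)

print(f"8x 10K SAS RAID 6:  ~{sas_r6_iops:,.0f} IOPS, ~{sas_r6_capacity_tb:.1f} TB usable")
print(f"2x SATA SSD RAID 1: ~{ssd_r1_iops:,.0f} IOPS, capacity of a single drive")
```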

                            • DustinB3403 @PhlipElder

                              @PhlipElder said in HyperV Server - Raid Best Practices:

                              See my earlier recommendation. Our starting go-to for all servers has been 8x 10K SAS in RAID 6 with two logical disks with the aforementioned performance specifications.

                              SSD in standalone hosts has been an option cost-wise for a few years now, depending on data volume.

                              Ultimately, it's up to the customer/client. Here's $$$ for the spindle solution and here's $$$$ for the SSD solution, and here are the benefits of one over the other. They make the choice.

                              But it sounds as if you're stating that a faster-booting hypervisor is some miracle baby Jesus tech, when it's just sunk cost.

                              • PhlipElder @DustinB3403

                                @DustinB3403 said in HyperV Server - Raid Best Practices:

                                But it sounds as if you're stating that a faster-booting hypervisor is some miracle baby Jesus tech, when it's just sunk cost.

                                Nope. The benefits of going solid-state are twofold for us and the customer for sure. But, that's not the reason to deploy solid-state.

                                • Dashrender @PhlipElder

                                  @PhlipElder said in HyperV Server - Raid Best Practices:

                                  Nope. The benefits of going solid-state are twofold for us and the customer for sure. But, that's not the reason to deploy solid-state.

                                  I agree with Dustin - you make it seem like putting the hypervisor on SSD is something that matters - that it's a choice that could be good - and that's so rarely true that I wouldn't even consider it.

                                  Now - an all-SSD or all-HDD build - that's a totally different conversation - definitely choose what is right for the customer (or what they choose is right for themselves)... but that is HUGELY different from the hypervisor being on SSD and the VMs being on HDD - that just seems like a complete waste of money.

                                  • PhlipElder @Dashrender

                                    @Dashrender said in HyperV Server - Raid Best Practices:

                                    I agree with Dustin - you make it seem like putting the hypervisor on SSD is something that matters - that it's a choice that could be good - and that's so rarely true that I wouldn't even consider it.

                                    Now - an all-SSD or all-HDD build - that's a totally different conversation - definitely choose what is right for the customer (or what they choose is right for themselves)... but that is HUGELY different from the hypervisor being on SSD and the VMs being on HDD - that just seems like a complete waste of money.

                                    Point of clarification: We deploy all 10K SAS RAID 6 or we deploy all-flash.

                                    Please point out where it was said that we deploy SSD for host OS and HDD/Rust for VMs?

                                    • DustinB3403 @Joel

                                      @Joel said in HyperV Server - Raid Best Practices:

                                      Option 1:
                                      2x 240GB SSD SATA 6Gb/s (for OS)
                                      4x 2TB 12Gb/s (for Data)
                                      I was planning on using RAID 1 for the OS and then RAID 5/6 for the Data storage

                                      Right there. Post #1

                                      • Dashrender @DustinB3403

                                        @DustinB3403 said in HyperV Server - Raid Best Practices:

                                        @Joel said in HyperV Server - Raid Best Practices:

                                        Option 1:
                                        2x 240GB SSD SATA 6Gb/s (for OS)
                                        4x 2TB 12Gb/s (for Data)
                                        I was planning on using RAID 1 for the OS and then RAID 5/6 for the Data storage

                                        Right there. Post #1

                                        That's Joel, not @PhlipElder.

                                        • DustinB3403 @Dashrender

                                          @Dashrender said in HyperV Server - Raid Best Practices:

                                          That's Joel, not @PhlipElder.

                                          Doh, mobile, my bad.

                                          • PhlipElder @Dashrender

                                            @Dashrender said in HyperV Server - Raid Best Practices:

                                            That's Joel, not @PhlipElder.

                                            Doh! 😄 SMH!
