    HyperV Server - Raid Best Practices

    IT Discussion
    • PhlipElder @Dashrender

      @Dashrender said in HyperV Server - Raid Best Practices:

      @PhlipElder said in HyperV Server - Raid Best Practices:

      @Dashrender said in HyperV Server - Raid Best Practices:

      @PhlipElder said in HyperV Server - Raid Best Practices:

      @DustinB3403 said in HyperV Server - Raid Best Practices:

      @PhlipElder said in HyperV Server - Raid Best Practices:

      We have one of our boxes (R2208GZ4GC) that's being used as a platform for a client recovery.

      It's set up with:
      2x 240GB Intel SSD DC S4500 RAID 1 for host OS
      2x 1.9TB Intel SSD D3-S4610 RAID 1 for VMs

      It's working great for the seven or eight VMs currently stood up on it. RAID is provided by Intel RMS25CB080 (IIRC) series module.

      What you have is wasted SSD performance, cost, and storage capacity.

      The performance pays for itself when they are in full swing with very little to no noticeable latency. And, updates run a lot faster.

      Cost wise, it's not that much of a step.

      What? The performance of the hypervisor is meaningless - assuming that's what's on the SSDs and not the VMs. So yeah, the SSDs would be a waste.

      As for updates - how often are you updating the hypervisor that you care that much about how fast updates run on the hypervisor?

      I was thinking more for the guests than the host.

      A spindled RAID 1 for the host OS would be a real pain for updating the host. The CUs get yuge and time consuming.

      EDIT: A pair of Intel SSD DC series SATA drives are not expensive at 240GB or smaller. In the overall scheme of things the biggest cost on the host are the CPUs and memory then the storage subsystem depending on setup.

      OK fine - the CUs for the hypervisor get big - and? you're likely only patching them monthly, if even. How much time are you saving by using SSD there compared to HDD? 1 min? maybe 5? That's a lot of time to justify the cost of SSDs, even if they are only $99/ea. Why have them at all? Why not just install the hypervisor on the RAID 10 your VMs are on, save yourself $200.

      Yeah I get it - it's a $3,000+ server, $200 is nothing... but it's still about 8%+... so it's not nothing...

      Y'all realize WD/HGST no longer makes 2.5" SAS spindles? Seagate won't be too much further down the road. They've reached the end of the road.

      As far as the dollar figure goes, where there's value there's value. That too is all in the eye of the beholder. Our clients see it as we're delivering flash in our standalone and clustered systems.

      We shall need to agree to disagree.

      TTFN
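The cost back-and-forth above is easy to sanity-check. A minimal sketch (the $99/drive and $3,000 server figures are the posters' own round numbers, not vendor pricing; note the pair of host SSDs works out to roughly 6-7% of a $3,000 server, a bit under the "8%+" quoted):

```python
# Back-of-envelope check on the host-OS mirror discussed above.
# Figures are the illustrative numbers from the posts, not a price list.

def raid1_usable_gb(drive_gb: int, drives: int = 2) -> int:
    """RAID 1 mirrors every block, so usable space is one drive's worth."""
    assert drives == 2, "classic RAID 1 is a two-drive mirror"
    return drive_gb

def cost_share(component_cost: float, total_cost: float) -> float:
    """Fraction of the server budget consumed by one component."""
    return component_cost / total_cost

if __name__ == "__main__":
    usable = raid1_usable_gb(240)       # 2x 240 GB mirror -> 240 GB usable
    share = cost_share(2 * 99, 3000)    # pair of $99 SSDs vs a $3,000 server
    print(f"{usable} GB usable, {share:.1%} of server cost")  # 240 GB usable, 6.6% of server cost
```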

    • Dashrender @PhlipElder

      @PhlipElder said in HyperV Server - Raid Best Practices:

      A spindled RAID 1 for the host OS would be a real pain for updating the host. The CUs get yuge and time consuming.

      Wait - if having the hypervisor be fast for updating matters, wouldn't it be even more important to have the workloads themselves be faster too? How are you not justifying putting all the data on SSD?
    • PhlipElder @Dashrender

      @Dashrender said in HyperV Server - Raid Best Practices:

      Wait - if having the hypervisor be fast for updating matters, wouldn't it be even more important to have the workloads themselves be faster too? How are you not justifying putting all the data on SSD?

      See my earlier recommendation. Our starting go-to for all servers has been 8x 10K SAS in RAID 6 with two logical disks carrying the aforementioned performance specifications.

      SSD in standalone hosts has been an option, cost-wise, for a few years now depending on data volume.

      Ultimately, it's up to the customer/client. Here's $$$ for the spindle solution and here's $$$$ for the SSD solution, and here are the benefits of one over the other. They make the choice.
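For scale, the 8x 10K SAS RAID 6 baseline above can be sketched with the usual rules of thumb (roughly 140 IOPS per 10K spindle and a RAID 6 write penalty of 6 are common planning assumptions, not measurements from this thread; the 1200 GB drive size below is hypothetical):

```python
# Rough sizing sketch for an "8x 10K SAS in RAID 6" array.
# Per-spindle IOPS and the write penalty are rules of thumb, not measurements.

RAID6_PARITY_DRIVES = 2
RAID6_WRITE_PENALTY = 6  # each logical write costs ~6 backend I/Os

def raid6_usable(drive_capacity_gb: int, drives: int) -> int:
    """RAID 6 loses two drives' worth of capacity to parity."""
    return (drives - RAID6_PARITY_DRIVES) * drive_capacity_gb

def raid6_write_iops(per_drive_iops: int, drives: int) -> float:
    """Front-end write IOPS after the RAID 6 read-modify-write penalty."""
    return drives * per_drive_iops / RAID6_WRITE_PENALTY

if __name__ == "__main__":
    print(raid6_usable(1200, 8))            # 8x 1200 GB -> 7200 GB usable
    print(round(raid6_write_iops(140, 8)))  # ~187 front-end write IOPS
```

The write-penalty figure is why an all-flash mirror can feel dramatically faster for VM workloads even when the spindle array's raw capacity is larger.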
    • DustinB3403 @PhlipElder

      @PhlipElder said in HyperV Server - Raid Best Practices:

      Ultimately, it's up to the customer/client. Here's $$$ for the spindle solution and here's $$$$ for the SSD solution, and here are the benefits of one over the other. They make the choice.

      But it sounds as if you're stating that a faster-booting hypervisor is some miracle baby jesus tech, when it's just sunk cost.
    • PhlipElder @DustinB3403

      @DustinB3403 said in HyperV Server - Raid Best Practices:

      But it sounds as if you're stating that a faster-booting hypervisor is some miracle baby jesus tech, when it's just sunk cost.

      Nope. The benefits of going solid-state are twofold, for us and for the customer. But that's not the reason to deploy solid-state.
    • Dashrender @PhlipElder

      @PhlipElder said in HyperV Server - Raid Best Practices:

      Nope. The benefits of going solid-state are twofold, for us and for the customer. But that's not the reason to deploy solid-state.

      I agree with Dustin - you make it seem like putting the hypervisor on SSD is a choice that matters, and that's so rarely true that I wouldn't even consider it.

      Now, all-SSD versus all-HDD - that's a totally different conversation - definitely choose what is right for the customer (or what they choose is right for themselves)... but that is HUGELY different from the hypervisor being on SSD and the VMs being on HDD - that just seems like a complete waste of money.
    • PhlipElder @Dashrender

      @Dashrender said in HyperV Server - Raid Best Practices:

      ...that is HUGELY different from the hypervisor being on SSD and the VMs being on HDD - that just seems like a complete waste of money.

      Point of clarification: We deploy all 10K SAS RAID 6 or we deploy all-flash.

      Please point out where it was said that we deploy SSD for the host OS and HDD/rust for the VMs?
    • DustinB3403 @Joel

      @Joel said in HyperV Server - Raid Best Practices:

      Option 1:
      2x 240GB SSD SATA 6Gb/s (for OS)
      4x 2TB 12Gb/s (for Data)
      I was planning on using RAID 1 for the OS and then RAID 5/6 for the Data storage

      Right there. Post #1
    • Dashrender @DustinB3403

      @DustinB3403 said in HyperV Server - Raid Best Practices:

      Right there. Post #1

      That's Joel, not @PhlipElder
    • DustinB3403 @Dashrender

      @Dashrender said in HyperV Server - Raid Best Practices:

      That's Joel, not @PhlipElder

      Doh, mobile, my bad
    • PhlipElder @Dashrender

      @Dashrender said in HyperV Server - Raid Best Practices:

      That's Joel, not @PhlipElder

      Doh! 😄 SMH!
    • DustinB3403 @PhlipElder

      @PhlipElder said in HyperV Server - Raid Best Practices:

      We have one of our boxes (R2208GZ4GC) that's being used as a platform for a client recovery.

      It's set up with:
      2x 240GB Intel SSD DC S4500 RAID 1 for host OS
      2x 1.9TB Intel SSD D3-S4610 RAID 1 for VMs

      Here it is.
                            • DashrenderD
                              Dashrender @PhlipElder
                              last edited by Dashrender

                              Dustin found the correct quote.

Funny how both @PhlipElder and @joel had very similar posts. 😛

                              • DashrenderD
                                Dashrender @DustinB3403
                                last edited by

                                @DustinB3403 said in HyperV Server - Raid Best Practices:

                                @PhlipElder said in HyperV Server - Raid Best Practices:

                                We have one of our boxes (R2208GZ4GC) that's being used as a platform for a client recovery.

                                It's set up with:
                                2x 240GB Intel SSD DC S4500 RAID 1 for host OS
                                2x 1.9TB Intel SSD D3-S4610 RAID 1 for VMs

                                Here it is.

                                @PhlipElder if this isn't what you mean - We're happy to correct our understanding.

                                • PhlipElderP
                                  PhlipElder @DustinB3403
                                  last edited by PhlipElder

                                  @DustinB3403 said in HyperV Server - Raid Best Practices:

                                  @PhlipElder said in HyperV Server - Raid Best Practices:

                                  We have one of our boxes (R2208GZ4GC) that's being used as a platform for a client recovery.

                                  It's set up with:
                                  2x 240GB Intel SSD DC S4500 RAID 1 for host OS
                                  2x 1.9TB Intel SSD D3-S4610 RAID 1 for VMs

                                  Here it is.

                                  Yeah, special case. Note in the quote "We have one of our boxes (R2208GZ4GC)..."

The 1.9TB SSDs are ours and offer just enough space to work with for their setup, thus the 240GB pair for the host OS. We have another pair of 800GB Intel SSDs set aside, as we may actually need more space than anticipated.

Since this is a recovery situation, we can't afford any extra time waiting on spindles. The server gets delivered this weekend, and the cluster there will be rebuilt. It's a 2-node asymmetric setup (Intel R1208JP4OC with a DataON DNS-1640 JBOD and 24x HGST 10K SAS spindles).

                                  We get our box back after the project is complete.

                                  • JoelJ
                                    Joel
                                    last edited by

This got a little heated 😱
So, to clarify and get back to the OP: is the consensus that, out of the options I have, Option 2 is the best way to go?

6x 2TB 12Gb/s drives in OBR10 for everything, then creating two partitions (one for the hypervisor OS, and one for data, to store all my virtual machines and data).

                                    My VMs would be in D:\Hyper-V\VM's
                                    My Virtual Hard Disks (daily data) would be in D:\Hyper-V\Data

                                    • J
                                      Jimmy9008 @Joel
                                      last edited by

                                      @Joel said in HyperV Server - Raid Best Practices:

This got a little heated 😱
So, to clarify and get back to the OP: is the consensus that, out of the options I have, Option 2 is the best way to go?

                                      Correct.

                                      • pmonchoP
                                        pmoncho @Joel
                                        last edited by

                                        @Joel said in HyperV Server - Raid Best Practices:

This got a little heated 😱
So, to clarify and get back to the OP: is the consensus that, out of the options I have, Option 2 is the best way to go?

6x 2TB 12Gb/s drives in OBR10 for everything, then creating two partitions (one for the hypervisor OS, and one for data, to store all my virtual machines and data).

                                        My VMs would be in D:\Hyper-V\VM's
                                        My Virtual Hard Disks (daily data) would be in D:\Hyper-V\Data

Don't forget to do the cost comparison of SAS in OBR10 vs. SSD in RAID 5. You may be surprised to find that SSD in RAID 5 is cheaper (stick with 6Gb/s SATA SSDs rather than 12Gb/s SAS), depending on your server manufacturer.
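As a quick sanity check on that comparison, here's a rough sketch of the usable-capacity and cost-per-usable-TB math. All prices here are made-up placeholders, not real quotes; plug in your vendor's numbers.

```python
# Hypothetical cost comparison: usable capacity and cost per usable TB.
# Drive prices below are illustrative placeholders only.

def usable_tb(drive_tb, count, raid):
    """Usable capacity in TB for a few common RAID levels."""
    if raid == "RAID1":   # mirrored pair(s): half the raw capacity
        return drive_tb * count / 2
    if raid == "RAID5":   # one drive's worth of capacity lost to parity
        return drive_tb * (count - 1)
    if raid == "RAID10":  # striped mirrors: half the raw capacity
        return drive_tb * count / 2
    raise ValueError(raid)

configs = [
    # (label, drive TB, count, RAID level, assumed price per drive, $)
    ("6x 2TB 12Gb/s SAS in OBR10",  2.00, 6, "RAID10", 350),
    ("4x 1.92TB SATA SSD in RAID5", 1.92, 4, "RAID5",  400),
]

for label, tb, n, raid, price in configs:
    cap = usable_tb(tb, n, raid)
    total = n * price
    print(f"{label}: {cap:.2f} TB usable, ${total} total, "
          f"${total / cap:.0f} per usable TB")
```

The point being: RAID 5 only sacrifices one drive's worth of capacity, so fewer, larger SSDs can land close to (or under) the spindle OBR10 on cost per usable TB.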

                                        • 1
                                          1337
                                          last edited by 1337

Failure rates on hard drives are 2-3 times higher than on enterprise SSDs. The only reason to use hard drives today is price per GB, and then you should use as few drives as possible.

I'd put all the VMs on one RAID 1 of SSDs and keep the file server files on another RAID 1 of hard drives, preferably 3.5" if you need lots of storage.

3 Windows Server VMs, 1 host, and SQL database files shouldn't take much space; 2x 240GB, or perhaps 2x 480GB, will suffice for that. Then 2x 4TB, or however big you want to go, for the file server storage. 12TB enterprise drives are readily available today at around $400 each.

Option 3:
2x 240GB SSD (for everything except below; 2x 480GB if needed)
2x 4TB HDD (for file server storage)
Both as RAID 1 arrays. SATA will suffice for everything, but some drives are priced the same in SATA and SAS, so use whichever.
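To gauge whether the 2x 240GB mirror is enough, here's a back-of-the-envelope check. Every figure is an assumption for illustration (typical Windows Server footprints), not a measurement of this environment:

```python
# Rough sizing check for Option 3. All numbers are assumed, not measured.
host_os_gb = 40            # Hyper-V host OS footprint (assumed)
vm_os_gb = [60, 60, 60]    # three Windows Server guest OS disks (assumed)
sql_data_gb = 50           # SQL data + log files (assumed)

needed = host_os_gb + sum(vm_os_gb) + sql_data_gb
ssd_usable_gb = 240        # 2x 240GB in RAID 1 = ~240GB usable

print(f"Needed: {needed} GB vs usable: {ssd_usable_gb} GB")
print("2x 240GB fits" if needed <= ssd_usable_gb else "Step up to 2x 480GB")
```

With these assumed footprints you land around 270GB, which is why the 2x 480GB fallback is worth pricing from the start.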

                                          • PhlipElderP
                                            PhlipElder @Joel
                                            last edited by PhlipElder

                                            @Joel said in HyperV Server - Raid Best Practices:

Hi guys.
I'm torn between two setup scenarios for a new server:

Option 1:
2x 240GB SATA 6Gb/s SSD (for OS)
4x 2TB 12Gb/s (for data)
I was planning on using RAID 1 for the OS and then RAID 5/6 for the data storage.

Option 2:
6x 2TB drives in OBR10 for everything, then creating two partitions (one for the OS and one for data).

Are there any better options? What would you do?

The environment will be running Windows. The server (bare metal) will run Hyper-V Server, and the data drive will house 3x VMs (1x SQL, 1x DC, and 1x file server).

Thoughts welcomed and appreciated.

I suggest using PerfMon to baseline IOPS, throughput, disk latency, and disk queue lengths on the current host to get a feel for pressure on the disk subsystem. That would make the decision-making process a bit simpler, as the future setup could be scoped to fit today's performance needs and scaled a bit for tomorrow's over the solution's lifetime.

EDIT: PerfMon on the host also exposes guest counters that can further help scope which VMs demand what.
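If it helps, PerfMon (or `typeperf`) can export those counters to CSV, and summarizing the export is a few lines of Python. The sample rows, host name, and values below are made up for illustration; the PhysicalDisk counter names are the standard Windows ones:

```python
# Summarize a PerfMon/typeperf CSV export of disk counters.
# The inline sample mimics the PDH-CSV shape; values are invented.
import csv
import io
import statistics

sample = io.StringIO(
    '"(PDH-CSV 4.0)",'
    '"\\\\HOST\\PhysicalDisk(_Total)\\Disk Transfers/sec",'
    '"\\\\HOST\\PhysicalDisk(_Total)\\Avg. Disk sec/Transfer"\n'
    '"01/01/2024 09:00:00.000","850","0.004"\n'
    '"01/01/2024 09:00:15.000","1200","0.009"\n'
    '"01/01/2024 09:00:30.000","400","0.002"\n'
)

rows = list(csv.reader(sample))
header, data = rows[0], rows[1:]

iops = [float(r[1]) for r in data]              # Disk Transfers/sec
latency_ms = [float(r[2]) * 1000 for r in data]  # sec/Transfer -> ms

print(f"IOPS avg/peak: {statistics.mean(iops):.0f}/{max(iops):.0f}")
print(f"Latency avg/peak: {statistics.mean(latency_ms):.1f}/"
      f"{max(latency_ms):.1f} ms")
```

The peak numbers (plus queue length, captured the same way) are what you'd size the new array against, with some headroom for growth.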
