    Ideas for how to use new, free gear from HPE?

    IT Discussion
scottalanmiller @PSX_Defector

      @PSX_Defector said in Ideas for how to use new, free gear from HPE?:

      Because I'm the perfect candidate for blade systems. High density computing in a small footprint.

At the few giant environments I've been in, blades couldn't achieve higher density than we could get with traditional servers. They were sold on density, but the resulting density was decent, not quite as good. There is a reason why Google and Facebook go for higher density in other ways as well.

PSX_Defector @scottalanmiller

        @scottalanmiller said in Ideas for how to use new, free gear from HPE?:

        @PSX_Defector said in Ideas for how to use new, free gear from HPE?:

        Because I'm the perfect candidate for blade systems. High density computing in a small footprint.

At the few giant environments I've been in, blades couldn't achieve higher density than we could get with traditional servers. They were sold on density, but the resulting density was decent, not quite as good. There is a reason why Google and Facebook go for higher density in other ways as well.

        In a 42U standard cabinet, you can have:

64 two-socket x86 blades with Dell/HP for 128 processors, with 2U to spare. Plus the 2U can be used for networking gear.
42 two-socket x86 1U servers for 84 processors, with no spare space for networking gear.
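
A quick sanity check on that arithmetic (a minimal sketch; the 10U enclosure holding 16 half-height blades is an assumption based on c7000-class gear, not a figure stated above):

```python
# Socket density for a 42U rack. The 10U enclosure with 16 half-height
# two-socket blades is an assumed (c7000-class) figure; the rest follows
# the numbers in this post.

rack_u = 42

# Blade route: four 10U enclosures, 16 two-socket blades each.
chassis_u, blades_per_chassis = 10, 16
chassis_count = rack_u // chassis_u            # 4 enclosures
blades = chassis_count * blades_per_chassis    # 64 blades
blade_sockets = blades * 2                     # 128 processors
spare_u = rack_u - chassis_count * chassis_u   # 2U left for networking

# Traditional route: fill the whole rack with two-socket 1U servers.
servers = rack_u                               # 42 servers
server_sockets = servers * 2                   # 84 processors, 0U spare

print(blades, blade_sockets, spare_u)          # 64 128 2
print(servers, server_sockets)                 # 42 84
```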

Right now, there are no real quad-socket x86 1U servers. There were a few in the past, but they were expensive as shit, and they have been overtaken by higher core-per-socket density processors for a while now.

This is just the x86 world. The ASIC-style devices that Google and Facebook use are not general purpose. Yeah, I can get more "server" density by using ARM for one-and-done kinds of workloads, but that's not general purpose. I would be surprised if anyone in the SMB does anything like that. Specialty workloads can cram more and more into a single U of space, but when your application is SQL Server 2016 with a SharePoint frontend, you don't need fancy shit.

        Most folks will never see that level of complexity.

scottalanmiller @PSX_Defector

@PSX_Defector True, but the SMB doesn't worry about processor density in a rack, either. Per-rack processor density is only an enterprise-space concern, and even there it's not that common. It's really more of a hosting-provider concern.

PSX_Defector @scottalanmiller

            @scottalanmiller said in Ideas for how to use new, free gear from HPE?:

@PSX_Defector True, but the SMB doesn't worry about processor density in a rack, either. Per-rack processor density is only an enterprise-space concern, and even there it's not that common. It's really more of a hosting-provider concern.

In all the Fortune 500 companies I've been around, yeah, density is pretty important. A certain agro-cult in Illinois had some serious density because it saved them cash on the environment. Big red V used blades all over. So did the Death Star, which practically pioneered density by changing out switching gear for decades.

MattSpeller

              Sell it all for $30k and buy some gear that'll really fit and work in your environment

scottalanmiller @MattSpeller

                @MattSpeller said in Ideas for how to use new, free gear from HPE?:

                Sell it all for $30k and buy some gear that'll really fit and work in your environment

                That was covered, he's not allowed to sell it.

scottalanmiller @PSX_Defector

                  @PSX_Defector said in Ideas for how to use new, free gear from HPE?:

                  @scottalanmiller said in Ideas for how to use new, free gear from HPE?:

@PSX_Defector True, but the SMB doesn't worry about processor density in a rack, either. Per-rack processor density is only an enterprise-space concern, and even there it's not that common. It's really more of a hosting-provider concern.

In all the Fortune 500 companies I've been around, yeah, density is pretty important. A certain agro-cult in Illinois had some serious density because it saved them cash on the environment. Big red V used blades all over. So did the Death Star, which practically pioneered density by changing out switching gear for decades.

One of the big things that is often overlooked with blades is the extra gear needed to make them work. They move the storage elsewhere, so for the SMB you actually get better density for the entire workload without blades. Only tons and tons of blades connected to a few SANs get those high densities.
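
As a rough illustration at SMB scale (every unit count below is a hypothetical figure for the sake of the comparison, not a vendor spec):

```python
# Whole-workload rack space, blades vs. standalone servers, at SMB scale.
# All unit sizes are illustrative assumptions.

# Blade route: one 10U enclosure (mostly empty at this scale) plus the
# SAN and SAN switching that the blades depend on.
blade_route_u = 10 + 6 + 2    # enclosure + small SAN + switches = 18U

# Traditional route: three 2U hypervisors with local storage.
server_route_u = 3 * 2        # 6U

print(f"Blades + SAN: {blade_route_u}U, standalone servers: {server_route_u}U")
```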

MattSpeller @scottalanmiller

                    @scottalanmiller said in Ideas for how to use new, free gear from HPE?:

                    @MattSpeller said in Ideas for how to use new, free gear from HPE?:

                    Sell it all for $30k and buy some gear that'll really fit and work in your environment

                    That was covered, he's not allowed to sell it.

                    Well that sucks

                    I'd need to spend a whole pile of cash just to get half that stuff into my server room, let alone the electrical!

scottalanmiller

One major problem is that by the time you get big enough for blades to have sensible density, you normally have separate systems, networking, and storage teams. But blades mix those together.

MattSpeller

So, congratulations, here's $60k of gear; let us know when you've spent $30k to upgrade your electrical and UPS, and we'll come film you.

StrongBad @MattSpeller

                          @MattSpeller said in Ideas for how to use new, free gear from HPE?:

So, congratulations, here's $60k of gear; let us know when you've spent $30k to upgrade your electrical and UPS, and we'll come film you.

                          And then buy a bunch of support gear (from us) so that it works.

Dashrender @StrongBad

                            @StrongBad said in Ideas for how to use new, free gear from HPE?:

                            @MattSpeller said in Ideas for how to use new, free gear from HPE?:

So, congratulations, here's $60k of gear; let us know when you've spent $30k to upgrade your electrical and UPS, and we'll come film you.

                            And then buy a bunch of support gear (from us) so that it works.

                            LOL Scott said that earlier too 😛

PSX_Defector @scottalanmiller

                              @scottalanmiller said in Ideas for how to use new, free gear from HPE?:

One major problem is that by the time you get big enough for blades to have sensible density, you normally have separate systems, networking, and storage teams. But blades mix those together.

So? One guy to rule them all is not a viable solution once you move beyond the SMB.

And blades do not necessarily mix them together. We use exclusively software-defined networking, from our switching to our firewalls. It would be the same thing without blades.

The only thing a blade brings is one place to view it all physically. The switch gear you plug in is managed the same old ways. Otherwise, it's pretty much the same equipment.

We separate out our teams to play to our strengths. I'm the Microsoft expert; we have a Linux expert, a storage expert, networking experts, and so on. We can all do some of each other's jobs, but we focus on our strengths to keep things going.

scottalanmiller @PSX_Defector

                                @PSX_Defector said in Ideas for how to use new, free gear from HPE?:

                                @scottalanmiller said in Ideas for how to use new, free gear from HPE?:

One major problem is that by the time you get big enough for blades to have sensible density, you normally have separate systems, networking, and storage teams. But blades mix those together.

So? One guy to rule them all is not a viable solution once you move beyond the SMB.

Yes, and that was one of the reasons that blades got ruled out. They were designed to be "one guy to rule them all." Everything was commingled in one interface, and security separation between teams could not exist.

scottalanmiller @PSX_Defector

                                  @PSX_Defector said in Ideas for how to use new, free gear from HPE?:

And blades do not necessarily mix them together. We use exclusively software-defined networking, from our switching to our firewalls. It would be the same thing without blades.

That is a port problem: you need so many ports to handle that setup that it kills one of the major selling points of the blades.

PSX_Defector @scottalanmiller

                                    @scottalanmiller said in Ideas for how to use new, free gear from HPE?:

                                    @PSX_Defector said in Ideas for how to use new, free gear from HPE?:

                                    @scottalanmiller said in Ideas for how to use new, free gear from HPE?:

One major problem is that by the time you get big enough for blades to have sensible density, you normally have separate systems, networking, and storage teams. But blades mix those together.

So? One guy to rule them all is not a viable solution once you move beyond the SMB.

Yes, and that was one of the reasons that blades got ruled out. They were designed to be "one guy to rule them all." Everything was commingled in one interface, and security separation between teams could not exist.

                                    Geez, lazy folk.

Considering you can narrow down everything in a blade chassis, it sounds more like folks didn't understand the management rather than the blades not offering what was needed.

Which is really the biggest reason behind anything. Folks don't understand, they go "bad!!!!" and that's the end of it.

scottalanmiller @PSX_Defector

                                      @PSX_Defector said in Ideas for how to use new, free gear from HPE?:

                                      @scottalanmiller said in Ideas for how to use new, free gear from HPE?:

                                      @PSX_Defector said in Ideas for how to use new, free gear from HPE?:

                                      @scottalanmiller said in Ideas for how to use new, free gear from HPE?:

One major problem is that by the time you get big enough for blades to have sensible density, you normally have separate systems, networking, and storage teams. But blades mix those together.

So? One guy to rule them all is not a viable solution once you move beyond the SMB.

Yes, and that was one of the reasons that blades got ruled out. They were designed to be "one guy to rule them all." Everything was commingled in one interface, and security separation between teams could not exist.

                                      Geez, lazy folk.

                                      Considering you can narrow down everything in a blade chassis...

How do you do that without commingling responsibilities? HPE was supporting it directly and said that it was the only option. So if you solved that problem, you got beyond what HPE believed their blades could do.

PSX_Defector @scottalanmiller

                                        @scottalanmiller said in Ideas for how to use new, free gear from HPE?:

                                        @PSX_Defector said in Ideas for how to use new, free gear from HPE?:

And blades do not necessarily mix them together. We use exclusively software-defined networking, from our switching to our firewalls. It would be the same thing without blades.

That is a port problem: you need so many ports to handle that setup that it kills one of the major selling points of the blades.

I only need four uplinks per chassis for the network and another four for Fibre Channel. If I had 42 1U units, I would need a 48-port switch just to handle the networking, let alone the Fibre Channel.
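
The uplink math behind that claim (a sketch using the numbers in this post; the four-enclosure rack layout is an assumption carried over from the earlier density figures):

```python
# Uplinks for a full rack: blade chassis vs. 1U servers.
# Four enclosures per rack is an assumption, not stated in the post.

chassis_count = 4
uplinks_per_chassis = 4 + 4               # 4 Ethernet + 4 Fibre Channel
blade_uplinks = chassis_count * uplinks_per_chassis   # 32 cables total

servers = 42
server_ports = servers                    # at least one network port each
# ...plus Fibre Channel HBA ports per server if they also need SAN access.

print(f"Blade rack uplinks: {blade_uplinks}")    # 32
print(f"1U server rack ports: {server_ports}+")  # 42+
```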

scottalanmiller @PSX_Defector

                                          @PSX_Defector said in Ideas for how to use new, free gear from HPE?:

Which is really the biggest reason behind anything. Folks don't understand, they go "bad!!!!" and that's the end of it.

Well, if the vendor is the one that doesn't understand it, bad it is.

scottalanmiller @PSX_Defector

                                            @PSX_Defector said in Ideas for how to use new, free gear from HPE?:

                                            @scottalanmiller said in Ideas for how to use new, free gear from HPE?:

                                            @PSX_Defector said in Ideas for how to use new, free gear from HPE?:

And blades do not necessarily mix them together. We use exclusively software-defined networking, from our switching to our firewalls. It would be the same thing without blades.

That is a port problem: you need so many ports to handle that setup that it kills one of the major selling points of the blades.

I only need four uplinks per chassis for the network and another four for Fibre Channel. If I had 42 1U units, I would need a 48-port switch just to handle the networking, let alone the Fibre Channel.

No, you need two to four PER BLADE. Anything else means switching inside the blade chassis, and then the chassis owner has network control, which violates the separation of duties we were discussing above. That's the issue that HPE could not get us past: they could not come up with a way to maintain the separation between groups without replicating the entire former physical networking world.
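
The cabling that requirement implies (a sketch; the 16-blades-per-enclosure figure is the same assumption as before):

```python
# If team separation rules out switching inside the chassis, cabling
# scales per blade (pass-through modules), not per chassis.

blades = 4 * 16                        # four enclosures, 64 blades
low, high = blades * 2, blades * 4     # two to four ports per blade

print(f"Ports with no in-chassis switching: {low} to {high}")  # 128 to 256
```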
