Ideas for how to use new, free gear from HPE?
-
@scottalanmiller said in Ideas for how to use new, free gear from HPE?:
@PSX_Defector said in Ideas for how to use new, free gear from HPE?:
Speak for yourself. It would be very welcome in my environment.
Because..... you are willing to use blades and have already invested in them, so for better or worse this fits your environment, even if blades may not be ideal for you. And you already own the storage.
Because I'm the perfect candidate for blade systems. High density computing in a small footprint.
If I went with even 1U units, I wouldn't have near the amount of processing power that the blade system would provide. A chassis is 10U. With 1U units in that space, all I could hope for is 10 to 20 sockets. With a blade chassis, I get 16 blades, 16 to 32 processors fully stacked. Not to mention the single management interface for networking, storage, and so forth. With a fully racked and stacked cabinet, I get 64 blades. With 1U units, I get 42 at best. If I go IBM, I can get even more with a mix and match of i, z, and x86 all in one chassis. For HP, I can get x86 and Itanium blades. Cisco UCS only fits 56 blades total in a 42U cabinet, but with some integrated networking. All with a single storage fabric and super easy deployment.
Blades are inappropriate for lots of folks, especially the ones who have just one system right now. For service providers, like us, we need heavy density because cabinet space costs money. Power, cooling, and such are just side benefits.
-
@PSX_Defector said in Ideas for how to use new, free gear from HPE?:
Because I'm the perfect candidate for blade systems. High density computing in a small footprint.
At the few giant environments that I've been in, blades couldn't reach higher density than we could get with traditional servers. They were sold on density, and the resulting density was decent, but not quite as good. There is a reason why Google and Facebook go for higher density in other ways as well.
-
@scottalanmiller said in Ideas for how to use new, free gear from HPE?:
@PSX_Defector said in Ideas for how to use new, free gear from HPE?:
Because I'm the perfect candidate for blade systems. High density computing in a small footprint.
At the few giant environments that I've been in, blades couldn't reach higher density than we could get with traditional servers. They were sold on density, and the resulting density was decent, but not quite as good. There is a reason why Google and Facebook go for higher density in other ways as well.
In a 42U standard cabinet, you can have:
64 two-socket x86 blades with 2U to spare with Dell/HP, for 128 processors. Plus the 2U can be used for networking gear.
42 two-socket x86 1U servers for 84 processors, with no spare space for networking gear. Right now, there are no real quad-socket x86 1U servers. There were a few in the past, but expensive as shit. And they have been overtaken by the higher density core-per-socket processors for a while now.
This is just the x86 world. Google and Facebook use ASIC-style devices, which are not general purpose. Yeah, I can get more density of "servers" by using ARM for one-and-done kinds of workloads, but that's not general purpose. I would be surprised if anyone in SMB does anything like that. Specialty workloads can get more and more into a single U of space, but when your application is SQL Server 2016 with a SharePoint frontend, you don't need fancy shit.
Most folks will never see that level of complexity.
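The back-of-the-envelope rack math in this exchange can be sketched out. This is a rough illustration using the thread's own figures (10U chassis, 16 two-socket blades per chassis, two-socket 1U servers, 42U cabinet), not vendor specifications:

```python
# Rough rack-density comparison from the figures quoted in this thread.
# Assumptions (from the posts above): a blade chassis is 10U and holds
# 16 two-socket x86 blades; a 1U pizza-box server holds 2 sockets;
# a standard cabinet is 42U.
RACK_U = 42
CHASSIS_U = 10
BLADES_PER_CHASSIS = 16
SOCKETS_PER_BLADE = 2
SOCKETS_PER_1U = 2

chassis_per_rack = RACK_U // CHASSIS_U           # 4 chassis fit
blades = chassis_per_rack * BLADES_PER_CHASSIS   # 64 blades
blade_sockets = blades * SOCKETS_PER_BLADE       # 128 sockets
spare_u = RACK_U - chassis_per_rack * CHASSIS_U  # 2U left for networking

pizza_boxes = RACK_U                             # 42 servers, no spare U
pizza_sockets = pizza_boxes * SOCKETS_PER_1U     # 84 sockets

print(f"Blades: {blades} servers, {blade_sockets} sockets, {spare_u}U spare")
print(f"1U:     {pizza_boxes} servers, {pizza_sockets} sockets, 0U spare")
```

Run as-is, this gives 128 sockets for blades versus 84 for 1U servers, which is where the roughly 50% density edge claimed above comes from.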
-
@PSX_Defector True, but the SMB doesn't worry about processor density in a rack, either. Per rack processor density is only enterprise space and even there, not that common. It's more hosting provider space.
-
@scottalanmiller said in Ideas for how to use new, free gear from HPE?:
@PSX_Defector True, but the SMB doesn't worry about processor density in a rack, either. Per rack processor density is only enterprise space and even there, not that common. It's more hosting provider space.
In all the Fortune 500 companies I've been around, yeah, density is pretty important. A certain agro-cult in Illinois had some serious density because it saved them cash on the environment. Big red V used blades all over. So did the Death Star, which practically pioneered density by changing switching gear for decades.
-
Sell it all for $30k and buy some gear that'll really fit and work in your environment
-
@MattSpeller said in Ideas for how to use new, free gear from HPE?:
Sell it all for $30k and buy some gear that'll really fit and work in your environment
That was covered, he's not allowed to sell it.
-
@PSX_Defector said in Ideas for how to use new, free gear from HPE?:
@scottalanmiller said in Ideas for how to use new, free gear from HPE?:
@PSX_Defector True, but the SMB doesn't worry about processor density in a rack, either. Per rack processor density is only enterprise space and even there, not that common. It's more hosting provider space.
In all the Fortune 500 companies I've been around, yeah, density is pretty important. A certain agro-cult in Illinois had some serious density because it saved them cash on the environment. Big red V used blades all over. So did the Death Star, which practically pioneered density by changing switching gear for decades.
One of the big things that is often overlooked with blades is the extra gear needed to make them work. They move the storage elsewhere, so the SMB actually gets better density for the entire workload without blades. Only tons and tons of blades connected to a few SANs get those high densities.
-
@scottalanmiller said in Ideas for how to use new, free gear from HPE?:
@MattSpeller said in Ideas for how to use new, free gear from HPE?:
Sell it all for $30k and buy some gear that'll really fit and work in your environment
That was covered, he's not allowed to sell it.
Well that sucks
I'd need to spend a whole pile of cash just to get half that stuff into my server room, let alone the electrical!
-
One major problem is that when you get big enough for blades to have sensible density, you normally have separate systems, networking, and storage teams. But blades mix those together.
-
So, congratulations, here's $60k of gear. Let us know when you've spent $30k to upgrade your electrical and UPS, and we'll come film you.
-
@MattSpeller said in Ideas for how to use new, free gear from HPE?:
So, congratulations, here's $60k of gear. Let us know when you've spent $30k to upgrade your electrical and UPS, and we'll come film you.
And then buy a bunch of support gear (from us) so that it works.
-
@StrongBad said in Ideas for how to use new, free gear from HPE?:
@MattSpeller said in Ideas for how to use new, free gear from HPE?:
So, congratulations, here's $60k of gear. Let us know when you've spent $30k to upgrade your electrical and UPS, and we'll come film you.
And then buy a bunch of support gear (from us) so that it works.
LOL Scott said that earlier too
-
@scottalanmiller said in Ideas for how to use new, free gear from HPE?:
One major problem is that when you get big enough for blades to have sensible density, you normally have separate systems, networking, and storage teams. But blades mix those together.
So? One guy to rule them all is not a viable solution once you move from SMB.
And blades do not necessarily mix them together. We use exclusively software defined networking, from our switching to our firewalls. Would be the same thing without blades.
The only thing a blade brings is one place to view it all physically. The switch gear you plug in is managed the same old way. Otherwise, it's pretty much the same equipment.
We separate out our teams to play to our strengths. I'm the Microsoft expert, we have a Linux expert, a storage expert, networking experts, and so on. We can all cover other jobs, but we focus on our strengths to keep things going.
-
@PSX_Defector said in Ideas for how to use new, free gear from HPE?:
@scottalanmiller said in Ideas for how to use new, free gear from HPE?:
One major problem is that when you get big enough for blades to have sensible density, you normally have separate systems, networking, and storage teams. But blades mix those together.
So? One guy to rule them all is not a viable solution once you move from SMB.
Yes, and that was one of the reasons that blades got ruled out. They were designed to be "one guy to rule them all." Everything was commingled in one interface and the security separation between teams could not exist.
-
@PSX_Defector said in Ideas for how to use new, free gear from HPE?:
And blades do not necessarily mix them together. We use exclusively software defined networking, from our switching to our firewalls. Would be the same thing without blades.
That creates a port problem: you need so many ports to handle it that it kills one of the major selling points of blades.
-
@scottalanmiller said in Ideas for how to use new, free gear from HPE?:
@PSX_Defector said in Ideas for how to use new, free gear from HPE?:
@scottalanmiller said in Ideas for how to use new, free gear from HPE?:
One major problem is that when you get big enough for blades to have sensible density, you normally have separate systems, networking, and storage teams. But blades mix those together.
So? One guy to rule them all is not a viable solution once you move from SMB.
Yes, and that was one of the reasons that blades got ruled out. They were designed to be "one guy to rule them all." Everything was commingled in one interface and the security separation between teams could not exist.
Geez, lazy folk.
Considering you can narrow down everything in a blade chassis, it sounds more like folks didn't understand the management than like the blades couldn't do what was needed.
Which is really the biggest reason behind anything. Folks don't understand, they go "bad!!!!" and that's the end of it.
-
@PSX_Defector said in Ideas for how to use new, free gear from HPE?:
@scottalanmiller said in Ideas for how to use new, free gear from HPE?:
@PSX_Defector said in Ideas for how to use new, free gear from HPE?:
@scottalanmiller said in Ideas for how to use new, free gear from HPE?:
One major problem is that when you get big enough for blades to have sensible density, you normally have separate systems, networking, and storage teams. But blades mix those together.
So? One guy to rule them all is not a viable solution once you move from SMB.
Yes, and that was one of the reasons that blades got ruled out. They were designed to be "one guy to rule them all." Everything was commingled in one interface and the security separation between teams could not exist.
Geez, lazy folk.
Considering you can narrow down everything in a blade chassis...
How do you do that without commingling responsibilities? HPE was supporting it directly and said that it was the only option. So if you solved that problem, you are beyond what HPE believed their blades could do.
-
@scottalanmiller said in Ideas for how to use new, free gear from HPE?:
@PSX_Defector said in Ideas for how to use new, free gear from HPE?:
And blades do not necessarily mix them together. We use exclusively software defined networking, from our switching to our firewalls. Would be the same thing without blades.
That creates a port problem: you need so many ports to handle it that it kills one of the major selling points of blades.
I only need four uplinks per chassis for the network, and another four for fibre channel. If I had 42 1U units, I would need a 48-port switch just to handle the networking, let alone the fibre channel.
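The port math behind that claim can be sketched as follows. The per-device port counts here are assumptions taken from the post (real 1U servers often use more than one NIC, which only makes the gap wider):

```python
# Uplink/port-count comparison, using the figures from the post above.
# Assumptions: 4 blade chassis fill a 42U rack, each chassis needing
# 4 Ethernet + 4 fibre channel uplinks; each of 42 1U servers needs
# at least 1 Ethernet port and 1 FC port (a deliberately low floor).
chassis_per_rack = 4
blade_eth_uplinks = chassis_per_rack * 4   # 16 Ethernet uplinks per rack
blade_fc_uplinks = chassis_per_rack * 4    # 16 FC uplinks per rack

servers_1u = 42
min_eth_ports = servers_1u * 1             # 42 ports -> a 48-port switch
min_fc_ports = servers_1u * 1              # plus a separate FC fabric

print(f"Blades: {blade_eth_uplinks} Ethernet + {blade_fc_uplinks} FC uplinks")
print(f"1U:     {min_eth_ports} Ethernet + {min_fc_ports} FC ports minimum")
```

Even at one port per server, the 1U rack already fills a 48-port switch, while the blade rack terminates in 16 uplinks per fabric.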
-
@PSX_Defector said in Ideas for how to use new, free gear from HPE?:
Which is really the biggest reason behind anything. Folks don't understand, they go "bad!!!!" and that's the end of it.
Well if the vendor is the one that doesn't understand it, bad it is.