Ideas for how to use new, free gear from HPE?
-
@scottalanmiller said in Ideas for how to use new, free gear from HPE?:
Blades are tough because they have no storage of their own that is of any use, so you have to come up with storage to back them. If you have that already, they are just less-than-ideal additional servers. But if you don't have that already, or if what you have isn't adequate for the performance, capacity, or reliability that you'd need from a server, then these are totally useless. The storage investment alone is easily more than the cost of buying a sensible new infrastructure yourself from the beginning.
Given that blades are essentially just marketing ploys themselves, getting one as a "gift" isn't too useful. Blades are generally free to any business that is willing to "test" them because they are designed to hook you: you are forced to invest so much that an emotional sunk cost fallacy takes hold, and the business buys more and more of the worst stuff because it feels like it has to, having "so much invested already."
This just keeps getting better and better!
-
Imagine if I gifted you a printer that you don't need and are not allowed to sell, ever. You have no use for it; you never print. But you might someday, and then you'd have to buy my ink. So I make money on the surplus printer that I "gave" to you (but retain control over). Not really a gift. Really, I just increased your risk, forced you to store my printer for me, and reduced my own warehouse and tax burdens.
That's what you have here. It's a burden, I think, given your overall scenario.
-
Blades have been beaten to death in SW. Any vendor giving them away would hopefully have done their community homework and know that the community is aware of their total lack of applicability and value. Even the Fortune 100 struggle to find any value in blades. I've worked in shops of over 100,000 servers and THEY stated that at their scale they couldn't make blades make sense! I've seen multiple Wall St. firms make the same decision. Costly, risky, complicated, fragile.... no benefits, all negatives. Sadly, it might be a team that doesn't read the community.
-
Of course, if the SAN storage had been included, this would be a totally different discussion. A 3PAR could have been part of the deal, but it isn't. So to make these production worthy, you'd need a very expensive SAN. And since these are not as good as normal servers, and the SAN architecture isn't any good at this size, no matter what you spend you'll never be as well off as if you just took the same money and bought two servers.... which you don't even need.
-
@scottalanmiller said in Ideas for how to use new, free gear from HPE?:
Blades have been beaten to death in SW. Any vendor like this giving them away knows full well that the community is aware of their total lack of applicability and value. This isn't a mistake. Even the Fortune 100 struggle to find any value in blades. I've worked in shops of over 100,000 servers and THEY stated that at their scale they couldn't make blades make sense! I've seen multiple Wall St. firms make the same decision. Costly, risky, complicated, fragile.... no benefits, all negatives.
I hear ya there. The hospital I worked for several years ago actually got rid of a bunch of their blade servers and replaced them with a huge redundant VMware setup.
-
I remember reading something upon entering that contest that you had to be prepared to be filmed by HPE's media team on premises at your company if you won. Hopefully management is fine with that? And could you then (if you wanted to) still get rid of the equipment after being a part of the promotional HPE video?
-
@NetworkNerd said in Ideas for how to use new, free gear from HPE?:
I remember reading something upon entering that contest that you had to be prepared to be filmed by HPE's media team on premises at your company if you won. Hopefully management is fine with that? And could you then (if you wanted to) still get rid of the equipment after being a part of the promotional HPE video?
I don't think management would have any issues at all with being filmed. As far as getting rid of the equipment after being part of the video, based on the contest terms it sounds like there's no way of selling it (at least not in the first 3 years of owning it).
-
@scottalanmiller said in Ideas for how to use new, free gear from HPE?:
It's obviously not a set of equipment meant to "improve" anyone's environment.
Speak for yourself. It would be very welcome in my environment. With an extra chassis, 1TB of RAM per blade, and stacking that baby out with 16 blades, I could probably host another 500 or 600 VMs.
Hell, there's an option for this. If it's gonna be UAT, resell space on it. Stack it up, lease it out.
-
@NetworkNerd said in Ideas for how to use new, free gear from HPE?:
I remember reading something upon entering that contest that you had to be prepared to be filmed by HPE's media team on premises at your company if you won. Hopefully management is fine with that? And could you then (if you wanted to) still get rid of the equipment after being a part of the promotional HPE video?
It's pretty hard to force you to keep equipment.
-
@Shuey said in Ideas for how to use new, free gear from HPE?:
@NetworkNerd said in Ideas for how to use new, free gear from HPE?:
I remember reading something upon entering that contest that you had to be prepared to be filmed by HPE's media team on premises at your company if you won. Hopefully management is fine with that? And could you then (if you wanted to) still get rid of the equipment after being a part of the promotional HPE video?
I don't think management would have any issues at all with being filmed. As far as getting rid of the equipment after being part of the video, based on the contest terms it sounds like there's no way of selling it (at least not in the first 3 years of owning it).
No, you'd just have to return it.
-
@PSX_Defector said in Ideas for how to use new, free gear from HPE?:
@scottalanmiller said in Ideas for how to use new, free gear from HPE?:
It's obviously not a set of equipment meant to "improve" anyone's environment.
Speak for yourself. It would be very welcome in my environment. With an extra chassis, 1TB of RAM per blade, and stacking that baby out with 16 blades, I could probably host another 500 or 600 VMs.
Hell, there's an option for this. If it's gonna be UAT, resell space on it. Stack it up, lease it out.
That's a lot of risk... using a very expensive and fragile platform that needs a big investment to be useful. Lease it out and you need to invest a ton to build the environment necessary to support it. That's the issue here: no matter how you use it, you have to invest a lot of money, and do so on equipment that is sub-par and overpriced. Even if you were going to lease it out, you could get better gear at lower prices (or lower TCO, at least) by not using this gear.
So the price here might be zero. But the cost seems too high to make it worth using.
-
@PSX_Defector said in Ideas for how to use new, free gear from HPE?:
Speak for yourself. It would be very welcome in my environment.
Because..... you are willing to use blades, you have already invested in them (so for better or worse this fits your environment, even if blades may not be ideal for you), and you already own the storage.
-
@scottalanmiller said in Ideas for how to use new, free gear from HPE?:
@PSX_Defector said in Ideas for how to use new, free gear from HPE?:
Speak for yourself. It would be very welcome in my environment.
Because..... you are willing to use blades, you have already invested in them (so for better or worse this fits your environment, even if blades may not be ideal for you), and you already own the storage.
Because I'm the perfect candidate for blade systems. High density computing in a small footprint.
If I went with even 1U units, I wouldn't have anywhere near the amount of processing power that the blade system would provide. A chassis is 10U; with 1U units, all I could hope for in that space is 10 to 20 sockets, while a blade chassis gets me 16 blades and 16 to 32 processors fully stacked (rough math in the sketch below). Not to mention the single management interface for networking, storage, and so forth. In a fully racked and stacked cabinet, I get 64 blades; with 1U units, I get 42 at best. If I go IBM, I can get even more with a mix and match of i, z, and x86 all in one chassis. With HP, I can get x86 and Itanium blades. Cisco UCS only gets 56 blades in a 42U cabinet total, but with some integrated networking. All with a single storage fabric and super easy deployment.
Blades are inappropriate for lots of folks, especially the ones who have just one system right now. But service providers like us need heavy density because cabinet space costs money. Power, cooling, and such are just side benefits.
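To put rough numbers on that, here's a back-of-the-envelope sketch. The chassis size, blade count, and sockets-per-blade figures are the ones quoted in this thread, not verified vendor specs:

```python
# Socket density in 10U of rack space: blade chassis vs. 1U servers.
# All figures are assumptions taken from the discussion above.
CHASSIS_U = 10            # one blade chassis
BLADES_PER_CHASSIS = 16   # half-height blades, fully stacked
SOCKETS_PER_BLADE = 2

# Blade option: sockets in one 10U chassis
blade_sockets = BLADES_PER_CHASSIS * SOCKETS_PER_BLADE  # 32

# Rack-server option: the same 10U filled with two-socket 1U servers
rack_sockets = CHASSIS_U * 2                            # 20

print(f"10U of blades:     {blade_sockets} sockets")
print(f"10U of 1U servers: {rack_sockets} sockets")
```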
-
@PSX_Defector said in Ideas for how to use new, free gear from HPE?:
Because I'm the perfect candidate for blade systems. High density computing in a small footprint.
At the few giant environments that I've been in, blades couldn't get higher density than we could get with traditional servers. They sold them on density, but the resulting density, while decent, was never quite as good. There is a reason why Google and Facebook go for higher density in other ways as well.
-
@scottalanmiller said in Ideas for how to use new, free gear from HPE?:
@PSX_Defector said in Ideas for how to use new, free gear from HPE?:
Because I'm the perfect candidate for blade systems. High density computing in a small footprint.
At the few giant environments that I've been in, blades couldn't get higher density than we could get with traditional servers. They sold them on density, but the resulting density, while decent, was never quite as good. There is a reason why Google and Facebook go for higher density in other ways as well.
In a 42U standard cabinet, you can have (quick math check in the sketch below):
64 two-socket x86 blades with Dell/HP for 128 processors, with 2U to spare. Plus the 2U can be used for networking gear.
42 two-socket x86 1U servers for 84 processors, with no spare space for networking gear.
Right now, there are no real quad-socket x86 1U servers. There were a few in the past, but they were expensive as shit, and they have been overtaken by processors packing higher core density per socket for a while now.
This is just the x86 world. The ASIC-style devices that Google and Facebook use are not general purpose. Yeah, I can get more density of "servers" by using ARM for one-and-done kinds of workloads, but that's not general purpose. I would be surprised if anyone in SMB does anything like that. Specialty workloads can pack more and more and more into a single U of space, but when your application is SQL Server 2016 with a SharePoint frontend, you don't need fancy shit.
Most folks will never see that level of complexity.
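For anyone who wants to check those cabinet numbers, here's a quick sketch using the same per-chassis assumptions as above (10U chassis, 16 two-socket blades; figures from this thread, not measured configurations):

```python
# Full-cabinet socket density: blades vs. 1U servers in a standard 42U rack.
CABINET_U = 42

# Blade option: 10U chassis holding 16 two-socket blades each
chassis_count = CABINET_U // 10            # 4 chassis
blade_count = chassis_count * 16           # 64 blades
blade_sockets = blade_count * 2            # 128 sockets
spare_u = CABINET_U - chassis_count * 10   # 2U left for networking gear

# 1U option: one two-socket server per U, nothing left for networking
server_count = CABINET_U                   # 42 servers
server_sockets = server_count * 2          # 84 sockets

print(f"Blades:     {blade_count} nodes, {blade_sockets} sockets, {spare_u}U spare")
print(f"1U servers: {server_count} nodes, {server_sockets} sockets, 0U spare")
```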
-
@PSX_Defector True, but the SMB doesn't worry about processor density in a rack, either. Per-rack processor density is only an enterprise-space concern, and even there it's not that common. It's more the hosting provider space.
-
@scottalanmiller said in Ideas for how to use new, free gear from HPE?:
@PSX_Defector True, but the SMB doesn't worry about processor density in a rack, either. Per-rack processor density is only an enterprise-space concern, and even there it's not that common. It's more the hosting provider space.
In all the Fortune 500 companies I've been around, yeah, density is pretty important. A certain agro-cult in Illinois had some serious density because it saved them cash on environmentals. Big red V used blades all over. So did the Death Star, who practically pioneered density by changing out switching gear for decades.
-
Sell it all for $30k and buy some gear that'll really fit and work in your environment.
-
@MattSpeller said in Ideas for how to use new, free gear from HPE?:
Sell it all for $30k and buy some gear that'll really fit and work in your environment.
That was covered; he's not allowed to sell it.
-
@PSX_Defector said in Ideas for how to use new, free gear from HPE?:
@scottalanmiller said in Ideas for how to use new, free gear from HPE?:
@PSX_Defector True, but the SMB doesn't worry about processor density in a rack, either. Per-rack processor density is only an enterprise-space concern, and even there it's not that common. It's more the hosting provider space.
In all the Fortune 500 companies I've been around, yeah, density is pretty important. A certain agro-cult in Illinois had some serious density because it saved them cash on environmentals. Big red V used blades all over. So did the Death Star, who practically pioneered density by changing out switching gear for decades.
One of the big things that is often overlooked with blades is the extra gear needed to make them work. They move the storage elsewhere, so for the SMB you actually get better density for the entire workload without blades. Only tons and tons of blades connected to a few SANs achieve those high densities.