Ideas for how to use new, free gear from HPE?
-
@scottalanmiller said in Ideas for how to use new, free gear from HPE?:
@MattSpeller said in Ideas for how to use new, free gear from HPE?:
Sell it all for $30k and buy some gear that'll really fit and work in your environment
That was covered, he's not allowed to sell it.
Well that sucks
I'd need to spend a whole pile of cash just to get half that stuff into my server room, let alone the electrical!
-
One major problem is that when you get big enough for blades to have sensible density, you normally have separate system, networking and storage teams. But blades mix those together.
-
So, congratulations, here's $60k of gear; let us know when you've spent $30k to upgrade your electrical and UPS, and we'll come film you
-
@MattSpeller said in Ideas for how to use new, free gear from HPE?:
So, congratulations, here's $60k of gear; let us know when you've spent $30k to upgrade your electrical and UPS, and we'll come film you
And then buy a bunch of support gear (from us) so that it works.
-
@StrongBad said in Ideas for how to use new, free gear from HPE?:
@MattSpeller said in Ideas for how to use new, free gear from HPE?:
So, congratulations, here's $60k of gear; let us know when you've spent $30k to upgrade your electrical and UPS, and we'll come film you
And then buy a bunch of support gear (from us) so that it works.
LOL Scott said that earlier too
-
@scottalanmiller said in Ideas for how to use new, free gear from HPE?:
One major problem is that when you get big enough for blades to have sensible density, you normally have separate system, networking and storage teams. But blades mix those together.
So? One guy to rule them all is not a viable solution once you move from SMB.
And blades do not necessarily mix them together. We use exclusively software defined networking, from our switching to our firewalls. Would be the same thing without blades.
The only thing a blade brings is one place to view it all physically. The switch gear you plug in is managed the same old ways. Otherwise, it's pretty much the same equipment.
We separate out our teams to play to our strengths. I'm the Microsoft expert, we have a Linux expert, storage expert, networking experts and so on. We all can do some other job, but we focus on our strengths to keep things going.
-
@PSX_Defector said in Ideas for how to use new, free gear from HPE?:
@scottalanmiller said in Ideas for how to use new, free gear from HPE?:
One major problem is that when you get big enough for blades to have sensible density, you normally have separate system, networking and storage teams. But blades mix those together.
So? One guy to rule them all is not a viable solution once you move from SMB.
Yes, and that was one of the reasons that blades got ruled out. They were designed to be "one guy to rule them all." Everything was commingled in one interface and the security between teams could not exist.
-
@PSX_Defector said in Ideas for how to use new, free gear from HPE?:
And blades do not necessarily mix them together. We use exclusively software defined networking, from our switching to our firewalls. Would be the same thing without blades.
That is a port problem; you need so many ports to handle that, and it kills one of the major selling points of the blades.
-
@scottalanmiller said in Ideas for how to use new, free gear from HPE?:
@PSX_Defector said in Ideas for how to use new, free gear from HPE?:
@scottalanmiller said in Ideas for how to use new, free gear from HPE?:
One major problem is that when you get big enough for blades to have sensible density, you normally have separate system, networking and storage teams. But blades mix those together.
So? One guy to rule them all is not a viable solution once you move from SMB.
Yes, and that was one of the reasons that blades got ruled out. They were designed to be "one guy to rule them all." Everything was commingled in one interface and the security between teams could not exist.
Geez, lazy folk.
Considering you can narrow down access to everything in a blade chassis, it sounds more like folks didn't understand the management than that the blades didn't offer what was needed.
Which is really the biggest reason behind anything. Folks don't understand, they go "bad!!!!", and that's the end of it.
-
@PSX_Defector said in Ideas for how to use new, free gear from HPE?:
@scottalanmiller said in Ideas for how to use new, free gear from HPE?:
@PSX_Defector said in Ideas for how to use new, free gear from HPE?:
@scottalanmiller said in Ideas for how to use new, free gear from HPE?:
One major problem is that when you get big enough for blades to have sensible density, you normally have separate system, networking and storage teams. But blades mix those together.
So? One guy to rule them all is not a viable solution once you move from SMB.
Yes, and that was one of the reasons that blades got ruled out. They were designed to be "one guy to rule them all." Everything was commingled in one interface and the security between teams could not exist.
Geez, lazy folk.
Considering you can narrow down access to everything in a blade chassis...
How do you do that without commingling responsibilities? HPE was supporting it directly and said that it was the only option. So if you solved that problem, you are beyond what HPE believed their blades could do.
-
@scottalanmiller said in Ideas for how to use new, free gear from HPE?:
@PSX_Defector said in Ideas for how to use new, free gear from HPE?:
And blades do not necessarily mix them together. We use exclusively software defined networking, from our switching to our firewalls. Would be the same thing without blades.
That is a port problem; you need so many ports to handle that, and it kills one of the major selling points of the blades.
I only need four uplinks per chassis for the network, and another four for Fibre Channel. If I had 42 1U units, I would need a 48-port switch just to handle the networking, let alone the Fibre Channel.
-
@PSX_Defector said in Ideas for how to use new, free gear from HPE?:
Which is really the biggest reason behind anything. Folks don't understand, they go "bad!!!!", and that's the end of it.
Well if the vendor is the one that doesn't understand it, bad it is.
-
@PSX_Defector said in Ideas for how to use new, free gear from HPE?:
@scottalanmiller said in Ideas for how to use new, free gear from HPE?:
@PSX_Defector said in Ideas for how to use new, free gear from HPE?:
And blades do not necessarily mix them together. We use exclusively software defined networking, from our switching to our firewalls. Would be the same thing without blades.
That is a port problem; you need so many ports to handle that, and it kills one of the major selling points of the blades.
I only need four uplinks per chassis for the network, and another four for Fibre Channel. If I had 42 1U units, I would need a 48-port switch just to handle the networking, let alone the Fibre Channel.
No, you need two to four PER BLADE. If you do anything else, you have switching inside the blade chassis, and now the chassis owner has network control and you violate the separation of duties that we were discussing above. That's the issue that HPE could not get us past. They could not come up with a way to maintain the separation between groups without replicating the entire former physical networking world.
-
@PSX_Defector said in Ideas for how to use new, free gear from HPE?:
If I had 42 1U units, I would need a 48-port switch just to handle the networking, let alone the Fibre Channel.
With 42 blades, according to HPE, to keep separation of duties we needed at least 84 ports.
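To put the two approaches side by side in numbers, here is a quick, illustrative sketch using the figures from this thread (plain arithmetic; nothing here is HPE-specific):

```python
# Worked version of the port math argued above, using the figures
# from this thread.

BLADES = 42  # the 42-blade / 42 x 1U comparison used above

# Switched chassis (the four-uplink setup): a fixed number of uplinks
# per chassis, regardless of how many blades are inside.
network_uplinks = 4
fc_uplinks = 4
switched_total = network_uplinks + fc_uplinks  # 8 external ports per chassis

# Pass-through (what HPE required to keep separation of duties):
# every blade needs its own external ports, just like a rack server.
min_per_blade, max_per_blade = 2, 4
passthrough_min = BLADES * min_per_blade  # 84 -- the "at least 84 ports" figure
passthrough_max = BLADES * max_per_blade  # 168

print(f"Switched chassis, external ports: {switched_total}")
print(f"Pass-through, 2 ports per blade:  {passthrough_min}")
print(f"Pass-through, 4 ports per blade:  {passthrough_max}")
```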
-
@scottalanmiller said in Ideas for how to use new, free gear from HPE?:
How do you do that without commingling responsibilities?
Remember what I said: the interface is still old school for the actual function.
I can manage the hardware as the datacenter admin, I don't need console access to the blades nor the network/storage gear.
I can manage the hypervisor from the ESX level without ever seeing the hardware.
I can manage the switching without ever seeing ESX.
-
@PSX_Defector said in Ideas for how to use new, free gear from HPE?:
@scottalanmiller said in Ideas for how to use new, free gear from HPE?:
How do you do that without commingling responsibilities?
Remember what I said: the interface is still old school for the actual function.
I can manage the hardware as the datacenter admin, I don't need console access to the blades nor the network/storage gear.
I can manage the hypervisor from the ESX level without ever seeing the hardware.
I can manage the switching without ever seeing ESX.
Yup, but the chassis admin gets access to everything, hence the problem. You get commingling. I get the separation that you mention without blades, as well.
-
@PSX_Defector said in Ideas for how to use new, free gear from HPE?:
I can manage the hardware as the datacenter admin, I don't need console access to the blades nor the network/storage gear.
But can you grant console access to the system admins and switching access to the network admins, and give them control of the physical components of their domains without giving access to other stuff? At least on the HPE blades of the era that we had them, HPE said that you could not, and made us move from switched to pass-through networking to get around it.
-
@scottalanmiller said in Ideas for how to use new, free gear from HPE?:
@PSX_Defector said in Ideas for how to use new, free gear from HPE?:
I can manage the hardware as the datacenter admin, I don't need console access to the blades nor the network/storage gear.
But can you grant console access to the system admins and switching access to the network admins, and give them control of the physical components of their domains without giving access to other stuff? At least on the HPE blades of the era that we had them, HPE said that you could not, and made us move from switched to pass-through networking to get around it.
Of course.
Big Red V, and by extension Digix, had been doing that since G3-era blades. Most of it was API driven, but even if you dug down into the interface, each blade is limited to your permissions.
We are also talking about hosting; our guys could do anything while the customer couldn't tell what they were on. If you are talking about SOX stuff, it's just easier to split the environment. If you are talking about individual production teams, that's done via role delegation.
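Roughly, the delegation model looks like this. A minimal, hypothetical sketch; the role and resource names here are made up for illustration and are not any real HPE or VMware API:

```python
# Hypothetical sketch of per-layer role delegation, as described above.
# None of these names correspond to a real HPE or VMware API.

ROLE_SCOPES = {
    "datacenter_admin": {"chassis_hardware"},             # enclosure power, fans, firmware
    "system_admin":     {"blade_console", "hypervisor"},  # OS / ESX level only
    "network_admin":    {"switching"},                    # interconnect config only
    "storage_admin":    {"fibre_channel"},                # FC fabric only
}

def can_access(role: str, resource: str) -> bool:
    """A role may touch a resource only if it is in that role's scope."""
    return resource in ROLE_SCOPES.get(role, set())

# The separation being debated: the network admin can touch switching
# but not blade consoles, and the system admin the reverse.
assert can_access("network_admin", "switching")
assert not can_access("network_admin", "blade_console")
assert can_access("system_admin", "blade_console")
assert not can_access("system_admin", "switching")
```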
-
HPE didn't have that capability when we abandoned them. But for latency reasons, we couldn't have the shared network pipelines either. If you can bundle the networking together, it's not so bad.
-
Can you ask HPE if you can remove the kit list and just pick out your own that totals $60k? If it's not useful to you, then what is the point?
I didn't take part in the contest as it's US only; quite often these things are. But I just assumed, when I quickly read the post on SW, that you 'had a budget of $60k' to buy what you want from them... Sucks to be given a list which is useless to you.
After they remove the markup, I doubt this comes close to $60k anyway. It's probably just an order of kit they had which got cancelled, so they gave it away...
Pfft. If they don't let you sell it, say you are going with Dell as the free sh*t is useless to you.