Goodbye hardware monitoring on HPE Gen10 and newer equipment running ESXi
-
it does suck, but shit changes. That's only natural.
-
@dustinb3403 said in Goodbye hardware monitoring on HPE Gen10 and newer equipment running ESXi:
@voip_n00b said in Goodbye hardware monitoring on HPE Gen10 and newer equipment running ESXi:
@dustinb3403 Why?
Because you're required to set up and maintain an additional environment for it.
1) customers won't want to spend more money for something they've previously had included (by HPE) and 2) ESXi needs this functioning to report on the underlying health status.
Explain.... "Well, this happened because you are overpaying for something that you don't need and now it costs even more because they know you will pay because you are already paying just for the sake of paying. Instead of paying more, you could pay less."
And they will instantly say "oh heck no way do we want to SAVE money, spend spend spend"
-
@scottalanmiller said in Goodbye hardware monitoring on HPE Gen10 and newer equipment running ESXi:
@dustinb3403 said in Goodbye hardware monitoring on HPE Gen10 and newer equipment running ESXi:
@voip_n00b said in Goodbye hardware monitoring on HPE Gen10 and newer equipment running ESXi:
@dustinb3403 Why?
Because you're required to set up and maintain an additional environment for it.
1) customers won't want to spend more money for something they've previously had included (by HPE) and 2) ESXi needs this functioning to report on the underlying health status.
Explain.... "Well, this happened because you are overpaying for something that you don't need and now it costs even more because they know you will pay because you are already paying just for the sake of paying. Instead of paying more, you could pay less."
And they will instantly say "oh heck no way do we want to SAVE money, spend spend spend"
What?
HPE is removing a hardware monitoring provider for VMware (and presumably everything else). The assumption is that anyone who has hardware must be able to monitor it, ideally through their hypervisor.
Sure, you can shift to monitoring through the hardware interface, such as iLO or OneView, but these approaches add yet another administrative panel that must be used, managed, and maintained.
-
Unfortunately, tech that goes obsolete always causes problems, but it's more technically sound to monitor through the OOB management interface.
It is, after all, independent of the OS running on the hardware, independent of the server's NICs, independent of most hardware failures, and can be used for a lot more than just monitoring.
And in any modern installation, the OOB management should have been set up and in use already.
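To illustrate the point: the Redfish REST API that iLO exposes (iLO 4 with recent firmware, and iLO 5 onward) is one way to read server health out-of-band, with no agent in the guest OS. A minimal sketch follows; the hostname, account, and credentials are placeholders, and the relaxed TLS handling is only because iLO ships with a self-signed certificate:

```python
# Sketch: poll a server's rolled-up health out-of-band via the Redfish API.
# Assumes the BMC (e.g. HPE iLO) exposes the standard /redfish/v1/Systems/1
# resource. Host and credentials below are placeholders, not real values.
import base64
import json
import ssl
import urllib.request


def parse_health(system: dict) -> str:
    """Extract the rolled-up health ("OK", "Warning", "Critical") from a
    Redfish ComputerSystem payload; "Unknown" if the field is absent."""
    return system.get("Status", {}).get("Health", "Unknown")


def fetch_system(host: str, user: str, password: str) -> dict:
    """GET /redfish/v1/Systems/1 from the BMC over HTTPS with basic auth."""
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE  # lab-only: iLO's default cert is self-signed
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    req = urllib.request.Request(
        f"https://{host}/redfish/v1/Systems/1",
        headers={"Authorization": f"Basic {token}"},
    )
    with urllib.request.urlopen(req, context=ctx) as resp:
        return json.load(resp)


# Usage (placeholder host and creds):
#   system = fetch_system("ilo.example.com", "monitor", "secret")
#   print(parse_health(system))
```

Because this talks to the management controller rather than the OS, it keeps reporting even when ESXi (or whatever is installed) is down.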
-
@pete-s said in Goodbye hardware monitoring on HPE Gen10 and newer equipment running ESXi:
Unfortunately, tech that goes obsolete always causes problems, but it's more technically sound to monitor through the OOB management interface.
It is, after all, independent of the OS running on the hardware, independent of the server's NICs, independent of most hardware failures, and can be used for a lot more than just monitoring.
And in any modern installation, the OOB management should have been set up and in use already.
Absolutely, I agree with that, except the only OOBM that existed before OneView was iLO and SMTP emailing, which is hardly reliable.
And while I do agree that moving to an OOBM like OneView makes sense, it doesn't make sense for an ITSP to have to use, as it's set up per customer and would be running on the same hardware it's monitoring in most cases.
Edits are corrected typos
-
@dustinb3403 said in Goodbye hardware monitoring on HPE Gen10 and newer equipment running ESXi:
@pete-s said in Goodbye hardware monitoring on HPE Gen10 and newer equipment running ESXi:
Unfortunately, tech that goes obsolete always causes problems, but it's more technically sound to monitor through the OOB management interface.
It is, after all, independent of the OS running on the hardware, independent of the server's NICs, independent of most hardware failures, and can be used for a lot more than just monitoring.
And in any modern installation, the OOB management should have been set up and in use already.
Absolutely, I agree with that, except the only OOBM that existed before OneView was iLO and SMTP emailing, which is hardly reliable.
And while I do agree that moving to an OOBM like OneView makes sense, it doesn't make sense for an ITSP to have to use, as it's set up per customer and would be running on the same hardware it's monitoring in most cases.
Edits are corrected typos
Why can't you have one OneView hosted centrally and have it communicate with iLO over VPN or whatever?
That's how a centrally managed vCenter is set up, isn't it? It's also how you would have to manage a server remotely using iLO.
-
@dustinb3403 said in Goodbye hardware monitoring on HPE Gen10 and newer equipment running ESXi:
@pete-s said in Goodbye hardware monitoring on HPE Gen10 and newer equipment running ESXi:
Unfortunately, tech that goes obsolete always causes problems, but it's more technically sound to monitor through the OOB management interface.
It is, after all, independent of the OS running on the hardware, independent of the server's NICs, independent of most hardware failures, and can be used for a lot more than just monitoring.
And in any modern installation, the OOB management should have been set up and in use already.
Absolutely, I agree with that, except the only OOBM that existed before OneView was iLO and SMTP emailing, which is hardly reliable.
And while I do agree that moving to an OOBM like OneView makes sense, it doesn't make sense for an ITSP to have to use, as it's set up per customer and would be running on the same hardware it's monitoring in most cases.
Edits are corrected typos
How are you doing those things today? If you're using a centralized server to manage all of your clients, why can't you manage iLO the same way?
I agree, in this day and age - that's super risky, i.e. you get compromised and all of your customers are now compromised.
Though even if you have 100 passwords, one for each client, that info has to be stored somewhere, and perhaps it would be compromised as well - and your clients are still compromised...
-
@dashrender said in Goodbye hardware monitoring on HPE Gen10 and newer equipment running ESXi:
I agree, in this day and age - that's super risky, i.e. you get compromised and all of your customers are now compromised.
Though even if you have 100 passwords, one for each client, that info has to be stored somewhere, and perhaps it would be compromised as well - and your clients are still compromised...
Risk has to be managed but it's not more risky having 100 customers with one server each on-prem than having 100 servers in one location.
-
@pete-s said in Goodbye hardware monitoring on HPE Gen10 and newer equipment running ESXi:
@dashrender said in Goodbye hardware monitoring on HPE Gen10 and newer equipment running ESXi:
I agree, in this day and age - that's super risky, i.e. you get compromised and all of your customers are now compromised.
Though even if you have 100 passwords, one for each client, that info has to be stored somewhere, and perhaps it would be compromised as well - and your clients are still compromised...
Risk has to be managed but it's not more risky having 100 customers with one server each on-prem than having 100 servers in one location.
Oh, I completely disagree. Now if you tell me all the creds for those 100 on-prem servers are in one place, then I tend to agree with you, but if they aren't, then they are a tiny bit, if not a lot, more secure.
In this situation - it really comes down to them being managed by an MSP/ITSP; that's the weak link.... If the MSP/ITSP is breached and the hackers get all the creds, be it one cred or 100 creds, then the customers are fooked either way.
-
@dashrender said in Goodbye hardware monitoring on HPE Gen10 and newer equipment running ESXi:
@pete-s said in Goodbye hardware monitoring on HPE Gen10 and newer equipment running ESXi:
@dashrender said in Goodbye hardware monitoring on HPE Gen10 and newer equipment running ESXi:
I agree, in this day and age - that's super risky, i.e. you get compromised and all of your customers are now compromised.
Though even if you have 100 passwords, one for each client, that info has to be stored somewhere, and perhaps it would be compromised as well - and your clients are still compromised...
Risk has to be managed but it's not more risky having 100 customers with one server each on-prem than having 100 servers in one location.
Oh, I completely disagree. Now if you tell me all the creds for those 100 on-prem servers are in one place, then I tend to agree with you, but if they aren't, then they are a tiny bit, if not a lot, more secure.
In this situation - it really comes down to them being managed by an MSP/ITSP; that's the weak link.... If the MSP/ITSP is breached and the hackers get all the creds, be it one cred or 100 creds, then the customers are fooked either way.
I think I was a bit unclear.
What I mean is VPN is just an extension of the LAN. So 100 physically spread but centrally managed servers have the same risk as 100 servers in the same location managed locally.
If the managing thingy is compromised, then every server is potentially compromised as well.
If, on the other hand, you have 100 servers physically spread and managed locally rather than centrally, well then the risk is a lot smaller. But you don't get any of the benefits of central management or economies of scale either.
As you said, it's the central management from the MSP/ITSP that's the weak link.
-
@pete-s said in Goodbye hardware monitoring on HPE Gen10 and newer equipment running ESXi:
@dashrender said in Goodbye hardware monitoring on HPE Gen10 and newer equipment running ESXi:
@pete-s said in Goodbye hardware monitoring on HPE Gen10 and newer equipment running ESXi:
@dashrender said in Goodbye hardware monitoring on HPE Gen10 and newer equipment running ESXi:
I agree, in this day and age - that's super risky, i.e. you get compromised and all of your customers are now compromised.
Though even if you have 100 passwords, one for each client, that info has to be stored somewhere, and perhaps it would be compromised as well - and your clients are still compromised...
Risk has to be managed but it's not more risky having 100 customers with one server each on-prem than having 100 servers in one location.
Oh, I completely disagree. Now if you tell me all the creds for those 100 on-prem servers are in one place, then I tend to agree with you, but if they aren't, then they are a tiny bit, if not a lot, more secure.
In this situation - it really comes down to them being managed by an MSP/ITSP; that's the weak link.... If the MSP/ITSP is breached and the hackers get all the creds, be it one cred or 100 creds, then the customers are fooked either way.
I think I was a bit unclear.
What I mean is VPN is just an extension of the LAN. So 100 physically spread but centrally managed servers have the same risk as 100 servers in the same location managed locally.
If the managing thingy is compromised, then every server is potentially compromised as well.
If, on the other hand, you have 100 servers physically spread and managed locally rather than centrally, well then the risk is a lot smaller. But you don't get any of the benefits of central management or economies of scale either.
As you said, it's the central management from the MSP/ITSP that's the weak link.
aww, yeah, in that case, yep, we agree.
I think this will do nothing but make MSPs and ITSPs even more expensive; as you said, we need to lose the economies of scale for protection reasons.
-
@dashrender However, centrally managed doesn't mean site-to-site VPN. I don't get MSPs that have site-to-site VPNs to their customers. It is not feasible to maintain, it is high risk, and it is very old school.
-
@dbeato said in Goodbye hardware monitoring on HPE Gen10 and newer equipment running ESXi:
@dashrender However, centrally managed doesn't mean site-to-site VPN. I don't get MSPs that have site-to-site VPNs to their customers. It is not feasible to maintain, it is high risk, and it is very old school.
of course it doesn't.
Using a tool like ScreenConnect - having all customer machines in a single account - means if SC's hacked, then every client is hacked...