Is the Time for VMware in the SMB Over?
-
@coliver said:
@mlnews said:
@coliver said:
VMware could easily go the free with paid support option... there are no companies that are doing support for free at this point in time... although they would lose out on less than half their revenue stream.
That's true, but seems unlikely. The revenue drop is probably more than they could withstand. Citrix is already doing this model, as is Microsoft. VMware lacks the additional revenue to make this work, I think.
You think a crucial part of their revenue comes from new installs? I would assume that is minuscule compared to their ongoing support/licensing. They are also spreading into attached markets with their VDI manager/infrastructure, Horizon.
Their VDI stuff isn't used much. XenDesktop is a way better VDI solution than VMware's. Also, when has anyone needed to use support? Seems pretty rare. It's about like calling Microsoft support. Never need it. I've heard stories online of some people needing it but don't know anyone who has.
-
@coliver said:
They are also spreading into attached markets with their VDI manager/infrastructure, Horizon.
How valuable is that likely to remain if businesses are forced to choose between "do VDI with another vendor", "do VDI with VMware and everything else with someone else", or "have a uniform environment"?
I think that VDI and Horizon will do little for them, long term, because the value of that rapidly erodes in the light of everything else. And SMBs do very little VDI, and by the time that they do, VMware will already not exist in their market.
-
@thecreativeone91 said:
Their VDI stuff isn't used much. XenDesktop is a way better VDI solution than VMware's. Also, when has anyone needed to use support? Seems pretty rare. It's about like calling Microsoft support. Never need it. I've heard stories online of some people needing it but don't know anyone who has.
That pretty much sums it up for me. There is better VDI available from the free players, which allows you to have a lower cost, lower risk, uniform virtualization environment on top of that alternative VDI.
And on support, I totally agree. If you have an MSP partner, it is they who would use support, not the customer, and they have a heavy interest in being competent rather than spending money on support whenever possible. Internal IT running one of these products should not need support, and the community support is very good if needed. These are super simple products. The only places I see spending money on support are huge enterprises with deep pockets, and they only do so because of a combination of playing politics (having someone else to blame is better than doing the right thing for the business) or hiding the incompetence of the department (spending a fortune on "support" to hide the fact that the vendor is doing the work instead of the IT guys.)
-
@thecreativeone91 said:
@coliver said:
@mlnews said:
@coliver said:
VMware could easily go the free with paid support option... there are no companies that are doing support for free at this point in time... although they would lose out on less than half their revenue stream.
That's true, but seems unlikely. The revenue drop is probably more than they could withstand. Citrix is already doing this model, as is Microsoft. VMware lacks the additional revenue to make this work, I think.
You think a crucial part of their revenue comes from new installs? I would assume that is minuscule compared to their ongoing support/licensing. They are also spreading into attached markets with their VDI manager/infrastructure, Horizon.
Their VDI stuff isn't used much. XenDesktop is a way better VDI solution than VMware's. Also, when has anyone needed to use support? Seems pretty rare. It's about like calling Microsoft support. Never need it. I've heard stories online of some people needing it but don't know anyone who has.
I haven't had the opportunity to play with VDI much... although I applied for a job that works with Horizon. I can understand where XenDesktop comes into play, though; it is very mature software from what I have seen.
Good point on the support. I guess I am looking at a hypervisor as a fragile piece of software when all my experience points to the exact opposite.
-
@coliver said:
Good point on the support. I guess I am looking at a hypervisor as a fragile piece of software when all my experience points to the exact opposite.
In theory a hypervisor is tiny, does very little, and is insanely stable. If it is anything else, it should be avoided. All four big boys are great in this respect. This is partially why the Linux Foundation and Microsoft make their hypervisors free... they do very little and need very little care and feeding. It's a place where if you don't make it free, someone else will (and has.) That there are already two enterprise, open source, free alternatives (Xen and KVM) shows this. And in the Type 2 space, VirtualBox is free, leaving effectively no room for alternatives there either.
Operating Systems eventually migrate to open source and free over time. Hypervisors do the same but much, much faster.
-
Is the time for on-premise servers in the SMB nearly over? In which case, choice of hypervisor becomes a moot point, right?
So for me: in the short to medium term I've invested a lot of time and effort into VMware, so I won't be switching to anything else; and in the medium to long term I'll be running VMs in the cloud, so I don't really care about hypervisor technology any more.
I know cost isn't a factor, as $600 is trivial. I've invested far more than that in my time and effort.
-
@Carnival-Boy said:
Is the time for on-premise servers in the SMB nearly over? In which case, choice of hypervisor becomes a moot point, right?
As much as I love moving away from on-premises servers, I don't believe that the era is nearly over. Here is how I feel, without putting a ton of thought into this question:
- While most workloads in the SMB should not be on-premises, about 10% should be and will remain that way for a long time, slowly shrinking but never completely going away. Workloads that will long remain are heavily cache- and security-based (proxy servers, DNS, AD, scanning, filtering, machine controllers, etc.)
- While most workloads should not be on-premises, many will remain because SMB folk have slow upgrade cycles and tend to wear tin foil hats (both IT and managers outside of IT) and do not apply logic and business acumen to these decisions in many cases. So on-premises workloads will exist where they should not for a very long time.
- Even when moving off-premises to colo, the need for virtualization choices remains exactly as it does on-premises. Colo represents a large percentage of the off-premises server workloads in the SMB and will continue to do so, slowly gaining share as on-premises declines but slowly losing share as the move to VPS and cloud IaaS happens.
- Even in the cloud the choice matters to some degree. Using Xen, while an "under the hood" component, allows the best clouds like Amazon to outperform their competition and keep pricing low while providing unique features like Xen PV. Using the wrong virtualization platform, while not a problem in and of itself, was a key early indicator that something was wrong with CloudatCost, for example. Because they were using VMware, we knew that their costs were higher than they should be and their features fewer, which in turn indicated a lack of necessary support skills internally. It is only an indicator, but one that played out very realistically.
-
@Carnival-Boy said:
I know cost isn't a factor, as $600 is trivial. I've invested far more than that in my time and effort.
We get it down to $520 by shopping around. And there is huge time and effort in migrating. But you have to include the time and cost of license management, ongoing licensing, and loss of features too. For us, VMware costs many times the license price.
-
@scottalanmiller said:
As much as I love moving away from on-premises servers, I don't believe that the era is nearly over. Here is how I feel, without putting a ton of thought into this question:
- While most workloads in the SMB should not be on-premises, about 10% should be and will remain that way for a long time, slowly shrinking but never completely going away. Workloads that will long remain are heavily cache- and security-based (proxy servers, DNS, AD, scanning, filtering, machine controllers, etc.)
- While most workloads should not be on-premises, many will remain because SMB folk have slow upgrade cycles and tend to wear tin foil hats (both IT and managers outside of IT) and do not apply logic and business acumen to these decisions in many cases. So on-premises workloads will exist where they should not for a very long time.
Aren't colos often much more expensive for SMB than onsite? Granted, the colos often provide much better services, but the SMBs have been getting away without those features in their locally hosted solutions for 2+ decades now, so why the sudden need to change?
Additionally, the network performance of onsite at 100 or 1,000 Mb is considerably better than running everything from the cloud (colo). I want to accept the desire to move to the cloud/colo but I can't see how it doesn't drastically increase costs, and possibly drastically affect performance (bandwidth).
-
You can get unmetered lines to many colos. For example, our fiber lines at the county allowed us to have some things unmetered, like if we got a colo that was using them directly or Hurricane Electric.
-
@Dashrender said:
Aren't colos often much more expensive for SMB than onsite?
We found even ten years ago that the cost of power and cooling alone paid for a top enterprise colo with 24x7 service, big-time networking, and redundant everything. We've found that there is no way to cost-justify an on-premises server under normal conditions because colos are so much cheaper.
There are special cases where this does not hold, like when the on-premises site cannot get reasonable network connections. But in general, colo is the cost-savings option on top of all of the normal benefits.
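As a rough sketch of that power-and-cooling comparison, here is the back-of-envelope math; every figure below (server wattage, PUE, electricity rate, per-server colo fee) is an assumption for illustration only, not a number from this thread:

```python
# Rough sketch comparing on-premises power/cooling cost to a colo fee.
# All figures below are illustrative assumptions.

server_watts = 350          # assumed average draw of one 1U server
pue = 2.0                   # assumed power usage effectiveness of a small office server room
                            # (every watt of IT load needs roughly another watt of cooling)
electric_rate = 0.12        # assumed cost per kWh, USD

hours_per_month = 24 * 30
kwh_per_month = server_watts / 1000 * hours_per_month * pue
power_cooling_cost = kwh_per_month * electric_rate

colo_fee = 50               # assumed per-1U-server colo rate, USD/month

print(f"On-prem power + cooling: ~${power_cooling_cost:.0f}/month per server")  # ~$60
print(f"Colo fee:                ~${colo_fee:.0f}/month per server")
```

Under those assumptions, power and cooling alone already land in the same range as a per-server colo fee, which is the point being made above.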
-
@Dashrender said:
Granted, the colos often provide much better services, but the SMBs have been getting away without those features in their locally hosted solutions for 2+ decades now, so why the sudden need to change?
Couple reasons:
- Cost savings
- Getting away with something doesn't mean that it should continue. Getting away with paying too much, or not having adequate insurance, or not having what is ideal is one thing, but every business should always attempt to do what is best for it. Just because you don't have to be perfect to succeed doesn't mean that you won't succeed better by doing what is better.
-
@Dashrender said:
I want to accept the desire to move to the cloud/colo but I can't see how it doesn't drastically increase costs, and possibly drastically affect performance (bandwidth).
Not sure where you are seeing the cost coming from. How much do you think putting a server into a colo costs?
-
Also remember that the cost of a server in a colo is lower. Servers last longer. Their parts just don't wear out as fast when the temperature is stable and there is little dust. You can replicate this onsite, but it is hard to get the quality HVAC, electrical, and vibration reduction of an enterprise colo. This is why colos often cut downtime by more than half. If you expect an outage from a normal server on-premises every six years, a colo might stretch that same hardware's average to more like ten to twelve years.
There is a reason why we see six nines of reliability from non-redundant servers. A quality setup really saves a lot of money!
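A minimal sketch of what that change in outage interval means for expected downtime; the six-year and ten-to-twelve-year intervals come from the post above, while the eight-hour repair time is purely an assumption for illustration:

```python
# Minimal sketch: longer mean time between outages -> less expected downtime per year.
# Outage intervals are from the post above; the repair time is an assumed figure.

def expected_downtime_hours_per_year(years_between_outages, repair_hours=8):
    outages_per_year = 1 / years_between_outages
    return outages_per_year * repair_hours

on_prem = expected_downtime_hours_per_year(6)    # ~1.3 expected downtime hours/year
colo = expected_downtime_hours_per_year(11)      # ~0.7 expected downtime hours/year

print(f"On-premises: ~{on_prem:.1f} expected downtime hours/year")
print(f"Colo:        ~{colo:.1f} expected downtime hours/year")
```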
-
@thecreativeone91 said:
You can get unmetered lines to many colos. For example, our fiber lines at the county allowed us to have some things unmetered, like if we got a colo that was using them directly or Hurricane Electric.
Yes, when you are small (say 2-3 servers), using the open Internet is normally cheap and adequate. Most colos that we work with give us between 100Mb/s and 1,000Mb/s per server included.
When you get bigger, getting a dedicated, super-high-speed pipe directly to the colo is generally easy and, while not cheap, is easily offset by other savings and can give you LAN-like performance even to a remote facility.
-
Colos do more than just these things too. Moving to a colo also means that you are changing your network design and certain things start to happen automagically by the nature of the change in planning....
- You tend to start securing your environment for more general use cases rather than specific ones, improving flexibility.
- You tend to be prepared for site failover and work from home options.
- You are well prepared for multiple sites.
- You tend to focus on sprawl reduction and often trim expenses through better planning.
- You tend to get network upgrades and other features over time, included in the service.
- You get the HVAC, power conditioning, generators, monitoring, 24x7 support staff and other features that you "should" have on-premises but either pay far too much for, do without or attempt but likely cannot do as well.
These are mostly things that "tend" to happen. And you can do much of this without a colo, but it would be far more costly and very tempting at some point to skimp on them. You have to look not only at the ongoing costs of on-premises cooling, power, and monitoring but also at the cost of the extra outages that you are protected against, or can be, if you choose to be.
-
@scottalanmiller said:
@Dashrender said:
I want to accept the desire to move to the cloud/colo but I can't see how it doesn't drastically increase costs, and possibly drastically affect performance (bandwidth).
Not sure where you are seeing the cost coming from. How much do you think putting a server into a colo costs?
The last time I saw a quote it was like $100/month per U or more. I currently have 15U; that would be $1,500/month, $18K a year. It seems difficult to get over the price tag.
Also, what about the additional bandwidth needed to talk to that datacenter? If I put my AD boxes there, I figure I'll need at least 10/10 bandwidth to ensure a reasonable experience (and that might be low). That assumes I pull all of that over the same ISP line and have no downtime (yeah, right, on standard business connections). Unless the price is significantly lower, more like $2-5/month/U, I'm not sure I see the savings.
-
@scottalanmiller said:
...but also at the cost of the extra outages that you are protected against, or can be, if you choose to be.
I'm less worried about the outages at the DC than I am at my own site.
For example, to ensure as little downtime as possible we have a 10/10 Mb fiber connection (dual rings, though only one route into the building). In 8 years we've only had 13 minutes of unscheduled downtime, and those minutes came in two blips last spring. The bad thing is that we pay $860/month for this.
On the other hand, our other three locations in town all have HFC. All of those locations have had multiple outages over the past 8 years, one of them lasting more than a day. On average, I'd say an HFC outage lasts approximately 2 hours.
We have new options available in town now, so I'm considering dumping our fiber connection, moving to two different ISP connections into my main location, and setting up alternative routes at the firewall to cover outages. Moving to dual 50/10 connections will cost me around $500/month versus the $860 I pay now, and will increase my download speed by more than 5 times (10 if I can use both pipes at the same time).
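Quick arithmetic on those two options, using only the dollar and bandwidth figures from this post:

```python
# Comparing the current fiber line to the proposed dual-ISP setup,
# using the figures stated in the post above.

current_fiber = {"cost_per_month": 860, "down_mbps": 10, "up_mbps": 10}
dual_isp = {"cost_per_month": 500, "down_mbps": 50 * 2, "up_mbps": 10 * 2}

annual_savings = (current_fiber["cost_per_month"] - dual_isp["cost_per_month"]) * 12
download_multiple = dual_isp["down_mbps"] / current_fiber["down_mbps"]

print(f"Annual savings:    ${annual_savings}")          # $4320/year
print(f"Download increase: {download_multiple:.0f}x")   # 10x if both pipes are usable at once, 5x otherwise
```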
-
@Dashrender said:
The last time I saw a quote it was like $100/month per U or more. I currently have 15U; that would be $1,500/month, $18K a year. It seems difficult to get over the price tag.
Well, anyone can find inflated pricing, but that is super misleading. When buying by the U you are paying "per server". That's not $100/U; that is $100 per 1U server, I suspect.
We get $50 per 1U server and around $80 for a 2U server. See how it is not "per U"?
Once you go beyond "per server" pricing, you get things like 10U, quarter-rack, half-rack, and full-rack pricing, which is far, far cheaper "per U" than paying for each server being racked.
So a half rack might be $450/mo.
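To put numbers side by side, here is a small sketch contrasting the three pricing models using the figures in this exchange; the five-server mix is purely hypothetical, since the actual contents of the 15U haven't been described:

```python
# Sketch of the three colo pricing models being contrasted, using figures
# from this exchange. The server mix is hypothetical.

naive_per_u = 100 * 15                      # the $100/U quote applied to all 15U

# Hypothetical mix: three 1U servers at $50/month and two 2U servers at $80/month
per_server = 3 * 50 + 2 * 80

half_rack = 450                             # flat half-rack rate mentioned above

print(f"Naive 'per U' quote: ${naive_per_u}/month")   # $1500
print(f"Per-server pricing:  ${per_server}/month")    # $310
print(f"Half rack:           ${half_rack}/month")     # $450
```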
And you missed my point about improving your planning. When you run on-premises, you deal with "U sprawl" because the space is free and everything else is not. In a datacenter, the space is what is not free. So with a change in planning, I bet you could reduce that to 4U-6U if you were planning with space as a concern.
What do you have in that 15U?
-
@Dashrender said:
If I put my AD boxes there, I figure I'll need at least 10/10 bandwidth to ensure a reasonable experience (and that might be low).
Is that for a few thousand users? AD uses effectively nothing. A T1 can support hundreds of users, no problem. A T1 is only 1.544Mb/s. And you don't need symmetric bandwidth for AD either.
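A back-of-envelope sketch of that claim; the T1 rate is the real figure, while the per-logon traffic and the logon window are assumptions for illustration only and will vary by environment:

```python
# Rough check of AD traffic versus a T1. The per-logon traffic and logon window
# are assumptions for illustration; real numbers vary by environment.

t1_mbps = 1.544
t1_bytes_per_sec = t1_mbps * 1_000_000 / 8           # ~193 KB/s

logon_traffic_bytes = 300 * 1024                      # assume ~300 KB of AD traffic per user logon
logon_window_sec = 30 * 60                            # assume logons spread over a 30-minute window

bytes_per_sec_per_user = logon_traffic_bytes / logon_window_sec
users_supported = t1_bytes_per_sec / bytes_per_sec_per_user

print(f"T1 throughput:        ~{t1_bytes_per_sec/1000:.0f} KB/s")
print(f"Per-user AD load:     ~{bytes_per_sec_per_user:.0f} B/s averaged over the logon window")
print(f"Users a T1 can carry: ~{users_supported:.0f}")   # comes out over a thousand under these assumptions
```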