Coming Out of the Closet, SMB Enters the Hosted World
-
All businesses want their infrastructure to be reliable and cost effective; it is the nature of business. Companies spend tens or hundreds of thousands of dollars on high availability hardware and software. However, we know that money alone does not buy reliability. In the words of John Nicholson: "High availability is something that you do, not something that you buy." And this could not be more true.
Audiophiles have long known that half of the sound quality of a stereo system comes from the amplifier, source, speakers, cabling and other aspects of the stereo itself, and that the other half comes from the physical room that you put it into and proper setup of the system within that room. Fully half of the quality of the system comes from using it properly, not from the system itself. The same is true of computing systems.
Many factors, including stable air temperatures, proper air flow, physical security, proper cable management, quality racks and power distribution units, high quality and high capacity uninterruptible power supplies, quality generators, redundancy for all aspects of power, cooling and Internet access, around the clock staff, air filtration, humidity control, vibration dampening, sensor monitoring and more, make the difference between quality environments and terrible ones. In the best environments, even a moderate desktop will often run without interruption for a decade if left undisturbed! A great environment in which to place servers can be far more of a factor for reliability than the build of the server itself.
SMBs often believe that servers and other datacenter equipment will fail every few years, if not more often. But companies using high quality datacenters see very different numbers, with double or triple that amount of time expected between failures. Even without addressing high availability in the hardware and software, a good datacenter can effectively move a traditional enterprise server with obvious internal redundancies, such as RAID, hot swap components and dual power supplies, to reliability numbers that mimic the targets of entry level high availability. The environment is just as important as, and probably more important than, the server hardware itself.
It is too often assumed that you can go to the store and simply buy a convenient box that will wave away all of the complexities of environmental management and act as a panacea for IT reliability needs. This, quite simply, cannot be the case. High quality server hardware and highly reliable software can, to some small degree, combat poor environmental factors but, at best, only work to offset them. This is generally costly and ineffective.
Of course, businesses can attempt to create enterprise class hosting environments on their own premises, but this is extremely costly and requires not just a large, often staggering, up front investment, which might be ten times or more the cost of the systems it is designed to protect, but also maintenance and staffing, indefinitely. Large cost initially, larger costs ongoing.
It goes, we hope, without saying that many factors play into the decision of whether computational and storage systems will be kept on premises or off premises and that there can be no singular solution. But off premises systems, given the increasingly common availability and affordability of high quality, high speed WAN links, the move from LAN-based to LAN-less system designs and the desire to continuously improve uptime and security, should now be the default assumption for system deployments for the vast majority of organizations.
The more that organizations seek high availability, the more that they must consider how the inability to provide an adequately protected and stable environment impacts their capacity to deliver systems of this nature to their businesses. This has driven companies to consider either hosted cloud computing or colocation to fit their needs. The two are very different, but both offload the environmental needs from the end organization.
Because of the great cost involved in making on premises systems reliable enough to justify high availability spending, it is very often dramatically less costly to use enterprise class colocation for the same equipment, moving a large up front cost (capex) into more predictable opex payments that leverage both the time value of money and the uncertainty factors that are so critical to IT and business in general.
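As a rough illustration of how that comparison works, and with every figure below being purely hypothetical rather than drawn from any real deployment, the up front build out can be weighed against the present value of the ongoing colocation payments:

```python
# A rough, purely hypothetical sketch of the capex vs. opex comparison:
# a one-time on-premises build-out paid today versus a stream of monthly
# colocation payments discounted back to present value.

def present_value(monthly_payment, annual_rate, months):
    """Present value of a stream of equal end-of-month payments."""
    r = annual_rate / 12.0  # monthly discount rate
    return sum(monthly_payment / (1 + r) ** m for m in range(1, months + 1))

capex_build_out = 120_000   # hypothetical on-premises environment build-out, paid up front
colo_per_month = 1_500      # hypothetical monthly colocation fee
cost_of_capital = 0.06      # assumed annual discount rate
horizon_months = 60         # five-year comparison window

pv_colo = present_value(colo_per_month, cost_of_capital, horizon_months)
print(f"Up-front build-out today:     ${capex_build_out:,.0f}")
print(f"PV of 5 years of colocation:  ${pv_colo:,.0f}")
```

Whether the payment stream comes out above or below the build out depends entirely on the inputs, which is exactly why this comparison has to be worked through for each organization rather than assumed.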
It is time for the SMB market to join its enterprise brethren in leaving on premises systems behind and moving to the world of large scale, high efficiency and highly reliable systems hosting.
-
*fixed
-
@Dashrender said in Coming Out of the Closet, SMB Enters the Hosted World:
All businesses what their infrastructure to be reliable and cost effective
maybe they want their infrastructure
-
SMBs often believe that servers and other datacenter equipment will fail every few years, or more. But companies using high quality datacenters see very different numbers with failures more likely to be expected at double or triple those numbers between failures. Even without addressing high availability in the hardware and software, a good datacenter can effectively move a traditional enterprise server with obvious internal redundancies such as RAID, hot swap components and dual power supplies, into numbers that mimic the target numbers of entry level high availability. The environment is just as important, probably more important, than the server hardware itself.
It's funny, you're right that they think this, which is so weird if you just sit down and think about it. Normal SMB equipment lasts for 5+ years in their totally messed up closet, so how could a datacenter ever be anything but better than what they have in that uncontrolled closet? At worst it would be the same.
So a few questions that this brings to mind - does it matter? So we can get 8/10 years out of the hardware now instead of 5. In the past, we often upgraded servers more because of performance or warranty than because of a failed server.
I think the answer to my own question is becoming yes, it matters, because the power in a server is so much greater today that we are reaching a point where we won't be needing more computational power in 5 years.
-
@Dashrender said in Coming Out of the Closet, SMB Enters the Hosted World:
So a few question that this brings to mind - does it matter? So we can get 8/10 years out of the hardware now instead of 5. In the past, we often upgraded servers more because of performance or warranty than because of a failed server.
I don't think that this has often been true, today or for the past many years. Ten year old servers today might be long in the tooth, but only barely. Ten years from now, today's servers should be perfectly fine. Ten years ago was 2007, solidly into the 64-bit, multi-core, virtualize everything world. Most SMBs today could run comfortably on ten year old gear. They would be looking for a replacement soon, but it has already been a full ten years.
Ten years today is pushing it a bit; that would be HP ProLiant G4 era gear. But eight year old G5 gear would be perfectly acceptable today, and an average SMB would be perfectly content with it as long as it was running reliably. It is pretty clear that two more years, taking it to ten, would make it a rather old server, but literally two more years from a G5 would be totally reasonable.
-
@Dashrender said in Coming Out of the Closet, SMB Enters the Hosted World:
I think the answer to my own question now is becoming yes it matters because the power in a server is so much greater today, we are reaching a point where we aren't needing more computational power in 5 years.
Exactly. Every generation of computers lasts just a little longer usefully than the one before it. So while maybe we say that today a nine year old server is about the limit you'd want to consider, in five years we'll be saying the same about a ten year old server and in another five years we will be saying it about an eleven year old server. And we are talking low cost commodity servers here. Enterprise servers have traditionally had much longer lifespans as it is, with mainframes easily looking at fifteen years or more, even a decade ago.
-
@scottalanmiller said in Coming Out of the Closet, SMB Enters the Hosted World:
@Dashrender said in Coming Out of the Closet, SMB Enters the Hosted World:
So a few question that this brings to mind - does it matter? So we can get 8/10 years out of the hardware now instead of 5. In the past, we often upgraded servers more because of performance or warranty than because of a failed server.
I don't think that this is often true today or for the past many years. Ten year old servers today might be long in the tooth, but only barely. Ten years from today the servers from today should be perfectly fine. Ten years ago was 2007, solidly into the 64bit, multi-core, virtualize everything world. Most SMBs today could run comfortably on ten year old gear. They would be looking for a replacement soon, but it has already been a full ten years.
Ten years today is pushing it a bit. That would be HP Proliant G4 era gear. But eight years old today with G5 gear would be perfectly acceptable and an average SMB would be perfectly content with that as long as it was running reliably. It would be pretty clear that two more years, taking it to ten years, would make it a rather old server but literally two more years from a G5 would be totally reasonable.
I have two DL380 G5's I'm about to retire that reach 10 years this fall. They were installed by me in Sept 2007.
-
@scottalanmiller said in Coming Out of the Closet, SMB Enters the Hosted World:
@Dashrender said in Coming Out of the Closet, SMB Enters the Hosted World:
I think the answer to my own question now is becoming yes it matters because the power in a server is so much greater today, we are reaching a point where we aren't needing more computational power in 5 years.
Exactly. Every generation of computers lasts just a little longer usefully than the one before it. So while maybe we say that today a nine year old server is about the limit that you'd want to consider, in five years we'll be saying the same for a ten year old server and in another five years we will be saying it about an eleven year old server. And we are talking low cost commodity servers here. Enterprise servers have traditionally had much longer lifespans as it is with mainframes looking at fifteen years or more easily, even a decade ago.
Sure, I'll agree that those mainframes are designed to last longer - but didn't advances in computer science often warrant upgrading them? Or was the cost just too great to upgrade in less than 12-15 years?
-
@Dashrender said in Coming Out of the Closet, SMB Enters the Hosted World:
@scottalanmiller said in Coming Out of the Closet, SMB Enters the Hosted World:
@Dashrender said in Coming Out of the Closet, SMB Enters the Hosted World:
So a few question that this brings to mind - does it matter? So we can get 8/10 years out of the hardware now instead of 5. In the past, we often upgraded servers more because of performance or warranty than because of a failed server.
I don't think that this is often true today or for the past many years. Ten year old servers today might be long in the tooth, but only barely. Ten years from today the servers from today should be perfectly fine. Ten years ago was 2007, solidly into the 64bit, multi-core, virtualize everything world. Most SMBs today could run comfortably on ten year old gear. They would be looking for a replacement soon, but it has already been a full ten years.
Ten years today is pushing it a bit. That would be HP Proliant G4 era gear. But eight years old today with G5 gear would be perfectly acceptable and an average SMB would be perfectly content with that as long as it was running reliably. It would be pretty clear that two more years, taking it to ten years, would make it a rather old server but literally two more years from a G5 would be totally reasonable.
I have two DL380 G5's I'm about to retire that reach 10 years this fall. They were installed by me in Sept 2007.
Oh, well there you go. Those are pretty decent machines still. I can see why you would retire them, but it is not like they would be useless at this point. Plenty of companies would be interested in using those today and they are great for lab boxes.
-
@Dashrender said in Coming Out of the Closet, SMB Enters the Hosted World:
@scottalanmiller said in Coming Out of the Closet, SMB Enters the Hosted World:
@Dashrender said in Coming Out of the Closet, SMB Enters the Hosted World:
I think the answer to my own question now is becoming yes it matters because the power in a server is so much greater today, we are reaching a point where we aren't needing more computational power in 5 years.
Exactly. Every generation of computers lasts just a little longer usefully than the one before it. So while maybe we say that today a nine year old server is about the limit that you'd want to consider, in five years we'll be saying the same for a ten year old server and in another five years we will be saying it about an eleven year old server. And we are talking low cost commodity servers here. Enterprise servers have traditionally had much longer lifespans as it is with mainframes looking at fifteen years or more easily, even a decade ago.
Sure, I'll agree that those mainframes are designed to last longer - but didn't advances in computer science often warrant upgrading them? or was the cost just to great to upgrade in less than 12-15 years?
Often, no. Mainframes were so much faster than commodity machines that they would remain useful for a very long time. Reliability and IO were their main value propositions, and replacing them would be very expensive while often not providing a compelling advancement over what was already there.
-
@scottalanmiller said in Coming Out of the Closet, SMB Enters the Hosted World:
@Dashrender said in Coming Out of the Closet, SMB Enters the Hosted World:
@scottalanmiller said in Coming Out of the Closet, SMB Enters the Hosted World:
@Dashrender said in Coming Out of the Closet, SMB Enters the Hosted World:
So a few question that this brings to mind - does it matter? So we can get 8/10 years out of the hardware now instead of 5. In the past, we often upgraded servers more because of performance or warranty than because of a failed server.
I don't think that this is often true today or for the past many years. Ten year old servers today might be long in the tooth, but only barely. Ten years from today the servers from today should be perfectly fine. Ten years ago was 2007, solidly into the 64bit, multi-core, virtualize everything world. Most SMBs today could run comfortably on ten year old gear. They would be looking for a replacement soon, but it has already been a full ten years.
Ten years today is pushing it a bit. That would be HP Proliant G4 era gear. But eight years old today with G5 gear would be perfectly acceptable and an average SMB would be perfectly content with that as long as it was running reliably. It would be pretty clear that two more years, taking it to ten years, would make it a rather old server but literally two more years from a G5 would be totally reasonable.
I have two DL380 G5's I'm about to retire that reach 10 years this fall. They were installed by me in Sept 2007.
Oh, well there you go. Those are pretty decent machines still. I can see why you would retire them, but it is not like they would be useless at this point. Plenty of companies would be interested in using those today and they are great for lab boxes.
Yeah - I'm thinking about doing just that. I think I have enough 300 GB drives to fill one box, but I wonder if I should even bother. If I could get away with several consumer 480 GB SSDs the thing would probably sing. One of the DL380 G5s has 32 GB RAM, so it can handle a few workloads in a lab.
-
32GB can be a lot of workloads. Even if you only have 300GB SAS drives, that's not bad. For a lab that's great.
-
@Dashrender True they'd only be good for a couple instances of Windows, but you can cram tons of Linux things on 32GB RAM and 300GB-600GB of storage.
-
@scottalanmiller sure does like to play with words.
-
@travisdh1 said in Coming Out of the Closet, SMB Enters the Hosted World:
@Dashrender True they'd only be good for a couple instances of Windows, but you can cram tons of Linux things on 32GB RAM and 300GB-600GB of storage.
Oh - I could stick a pile of Windows on here too if I wanted them only for testing.
One of the first things I'm going to do is run an IOPS test on it and see how it compares with the generic numbers for these drives: eight 300 GB 6 Gb/s SAS drives.
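To have something to compare the benchmark against, here is the back-of-the-envelope baseline I'm working from - the per-spindle figure is just an assumed ballpark for 10K SAS, not a measured number:

```python
# Rough baseline for the raw array using an assumed (not measured) per-spindle figure.
# 10K RPM SAS spindles are commonly ballparked at roughly 140-175 random IOPS each.

drive_count = 8
iops_per_drive = 150  # assumed random IOPS for a single 10K SAS spindle

raw_random_read_iops = drive_count * iops_per_drive
print(f"Theoretical random read ceiling across all spindles: ~{raw_random_read_iops} IOPS")
```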
-
The bigger issue is simply the amount of storage. Eight 300 GB drives in RAID 10 is only 1.2 TB of storage. Hardly seems worth it for the power they will consume. If I swapped to three 500 GB SSDs, the power usage would probably be half what it is now, and WAY faster.
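Quick math behind those capacity numbers, for anyone following along - the SSD line assumes RAID 5 across the three, which is just one possible layout:

```python
# Usable capacity sketch for the layouts being compared (assumptions noted inline).

def raid10_usable_gb(drive_count, drive_gb):
    # RAID 10 mirrors every drive, so only half the raw capacity is usable.
    return drive_count * drive_gb / 2

def raid5_usable_gb(drive_count, drive_gb):
    # RAID 5 gives up one drive's worth of capacity to parity.
    return (drive_count - 1) * drive_gb

print(f"8 x 300 GB in RAID 10: {raid10_usable_gb(8, 300):.0f} GB usable")              # the 1.2 TB above
print(f"3 x 500 GB SSDs in RAID 5 (assumed layout): {raid5_usable_gb(3, 500):.0f} GB usable")
```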
-
@Dashrender said in Coming Out of the Closet, SMB Enters the Hosted World:
The bigger issue is simply the amount of storage. eight 300 GB drives in RAID 10 is only 1.2 TB of storage. Hardely seems worth it for the power they will consume. If I swapped to three 500 GB drives, the power usage would probably be 1/2 what is now, and WAY faster.
Why not RAID 6? What VMs would you run on it?
-
@StrongBad said in Coming Out of the Closet, SMB Enters the Hosted World:
@Dashrender said in Coming Out of the Closet, SMB Enters the Hosted World:
The bigger issue is simply the amount of storage. eight 300 GB drives in RAID 10 is only 1.2 TB of storage. Hardely seems worth it for the power they will consume. If I swapped to three 500 GB drives, the power usage would probably be 1/2 what is now, and WAY faster.
Why not RAID 6? What VMs would you run on it?
No idea if this system offers that or not. Plus RAID 6 - talk about killing throughput. This isn't just going to be a backup server; I don't want to die of old age waiting, even if it is just a lab box.
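The write penalty is the part I'm worried about. A rough sketch, reusing the same assumed ~150 IOPS per spindle figure from above:

```python
# Effective write IOPS for the 8-drive array under each RAID level, using the
# standard write penalties (RAID 10 = 2 IOs per write, RAID 6 = 6) and an
# assumed ~150 random IOPS per 10K SAS spindle.

drive_count = 8
iops_per_drive = 150  # assumption, not a measured figure

def effective_write_iops(drives, per_drive_iops, write_penalty):
    return drives * per_drive_iops / write_penalty

print(f"RAID 10 write ceiling: ~{effective_write_iops(drive_count, iops_per_drive, 2):.0f} IOPS")
print(f"RAID 6 write ceiling:  ~{effective_write_iops(drive_count, iops_per_drive, 6):.0f} IOPS")
```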
-
@Dashrender said in Coming Out of the Closet, SMB Enters the Hosted World:
@StrongBad said in Coming Out of the Closet, SMB Enters the Hosted World:
@Dashrender said in Coming Out of the Closet, SMB Enters the Hosted World:
The bigger issue is simply the amount of storage. eight 300 GB drives in RAID 10 is only 1.2 TB of storage. Hardely seems worth it for the power they will consume. If I swapped to three 500 GB drives, the power usage would probably be 1/2 what is now, and WAY faster.
Why not RAID 6? What VMs would you run on it?
No idea if this system offers that or not. Plus RAID 6 - talk about killing throughput. This isn't just going to be a backup server, I don't want to die of old age waiting, even if it is just a lab box.
What do you plan to do where a lab will have a lot of writes?
-
I don't know.
I was wrong on RAM... only have 12 GB.