Thanks to @craig-theriac for making this how-to. It was published in another thread a while back, but we felt it deserved its own thread so it could be found and discussed. Thanks!
Best posts made by scale
-
File Recovery from VM Backups on Scale HC3
-
Scale Awarded New Storage Patent
http://www.storagenewsletter.com/rubriques/systems-raid-nas-san/scale-computing-assigned-patent/
Scale Computing, Inc., Indianapolis, IN, has been assigned a patent (9,348,526) developed by White, Philip Andrew, and Hsieh, Hank T., San Francisco, CA, for a “placement engine for a block device.”
The abstract of the patent published by the U.S. Patent and Trademark Office states: “A system, method, and computer program product are provided for implementing a reliable placement engine for a block device. The method includes the steps of tracking one or more parameters associated with a plurality of real storage devices (RSDs), generating a plurality of RSD objects in a memory associated with a first node, generating a virtual storage device (VSD) object in the memory, and selecting one or more RSD objects in the plurality of RSD objects based on the one or more parameters. Each RSD object corresponds to a particular RSD in the plurality of RSDs. The method also includes the step of, for each RSD object in the one or more RSD objects, allocating a block of memory in the RSD associated with the RSD object to store data corresponding to a first block of memory associated with the VSD object.”
The patent application was filed on March 28, 2014 (14/229,748).
Not the most exciting thing for IT professionals, but we are pretty excited about the work done and that it has been recognized with a patent.
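For anyone curious what a placement engine like the one in the abstract might look like in code, here is a toy sketch of the idea. Everything in it (the class names, the replica count, the free-space heuristic) is invented for illustration and is not Scale's actual implementation:

```python
# Toy sketch of the "placement engine" idea from the patent abstract.
# All names and the selection heuristic here are invented for illustration.

class RSDObject:
    """In-memory stand-in for a real storage device (RSD)."""
    def __init__(self, name, total_blocks):
        self.name = name
        self.free_blocks = total_blocks  # tracked parameter: free capacity
        self.allocations = {}            # vsd_block -> rsd_block

    def allocate(self, vsd_block):
        rsd_block = len(self.allocations)  # next block index (toy model)
        self.allocations[vsd_block] = rsd_block
        self.free_blocks -= 1
        return rsd_block

class PlacementEngine:
    def __init__(self, rsds, replicas=2):
        self.rsds = rsds          # the plurality of RSD objects on this node
        self.replicas = replicas  # how many RSDs back each VSD block

    def place(self, vsd_block):
        # Select RSDs based on the tracked parameter (most free space
        # first), then allocate a backing block on each chosen device.
        chosen = sorted(self.rsds, key=lambda r: r.free_blocks, reverse=True)
        chosen = chosen[: self.replicas]
        return [(r.name, r.allocate(vsd_block)) for r in chosen]

engine = PlacementEngine(
    [RSDObject("rsd0", 100), RSDObject("rsd1", 80), RSDObject("rsd2", 90)]
)
placement = engine.place(vsd_block=0)  # mirrors VSD block 0 on two devices
```

Placing a VSD block here lands it on the two devices with the most free blocks, which is the simplest possible reading of the abstract's "selecting one or more RSD objects based on the one or more parameters" step.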
-
New! – Premium Installation Service
2017 is here. We want to help you start your new year and your new HC3 system with our new ScaleCare Premium Installation service. You’ve probably already heard about how easy HC3 is to install and manage, and you might be asking why you would even need this service. The truth is that you want your install to go seamlessly and to have full working knowledge of your HC3 system right out of the gate, and that is what this service is all about.
First, this premium installation service assists you with every aspect of installation starting with planning, prerequisites, virtual and physical networking configuration, and priority scheduling. You get help even before you unbox your HC3 system to prepare for a worry-free install. The priority scheduling helps you plan your install around your own schedule, which we know can be both busy and complex.
Secondly, ScaleCare Premium Installation includes remote installation with a ScaleCare Technical Support Engineer. This remote install includes a UI overview and setup assistance and if applicable, a walkthrough of HC3 Move software for workload migrations to HC3 of any physical or virtual servers. Remote installation means a ScaleCare engineer is with you every step of the way as you install and configure your HC3 system.
Finally, ScaleCare Premium Installation includes deep-dive training on everything HC3 with a dedicated ScaleCare Technical Support Engineer. This training, which normally takes around 4 hours to complete, will make you an HC3 expert on everything from virtualization, networking, and backup/DR to our patented SCRIBE storage system. You’ll basically have a PhD in HC3 by the time you are done with the install.
Here is the list of everything included:
- Requirements and Planning Pre-Installation Call
- Virtual and Physical Networking Planning and Deployment Assistance
- Priority Scheduling for Installations
- Remote Installation with a ScaleCare Technical Support Engineer
- UI Overview and Setup Assistance
- Walkthrough of HC3 Move software for migrations to HC3 of a Windows physical or virtual server
- Training with a dedicated ScaleCare Technical Support Engineer
  - HC3 and SCRIBE Overview
  - HC3 Configuration Deep Dive
  - Virtualization Best Practices
  - Networking Best Practices
  - Backup/DR Best Practices
Yes, it is still just as easy to use and simple to deploy as ever, but giving yourself a head start in mastering this technology seems like a no-brainer.
-
Scale Makes Play for Nutanix Entry Level Market from El Reg
The Register has a new article about our recently announced entry-level cluster: Scale Makes Play for Nutanix Entry Level Market
"The HC1100 has 64GB of DRAM per node instead of the HC1000's 32GB; per-node CPU core count increases from four to six (1.7GHz Broadwell E5-2620 v4) cores, and the SATA disks change to four 1TB SAS 7,200rpm drives.
The HC1150 nodes also have 64GB of DRAM, eight 2.1GHz Broadwell cores, three 1TB SAS disks, and a single 480GB SSD. Both HC1100 and HC1150 have two 1GbitE network ports, and their enclosures are 1U high."
-
Four Lessons from the AWS Outage Last Week
The Amazon Web Services (AWS) Simple Storage Service (S3) experienced an outage on Tuesday last week and was down for several hours. S3 is object storage for around 150,000 websites and other services according to SimilarTech. For IT professionals, here are four takeaways from this outage.
#1 – It Happens
No infrastructure is immune to outages. No matter how big the provider, outages happen and downtime occurs. Whether you are hosting infrastructure yourself or relying on a third party, outages will happen eventually. Putting your eggs in someone else’s basket does not necessarily buy you any more peace of mind. In this case, S3 was brought down by a simple typo from a single individual. That is all it takes to cause so much disruption. The premiums you pay to be hosted on an infrastructure like AWS will never prevent the inevitable failures, no matter how massive the platform becomes.
#2 – The Bigger They Are, the Harder They Fall
When a service is as massive as AWS, problems affect millions of users, including customers trying to do business with companies that use S3. Yes, outages do happen, but do they have to take down so much of the internet with them when they do? Like the DDoS attack I blogged about last fall, companies leave themselves open to these massive outages when they rely heavily on public cloud services. How much more confidence in your business would your customers have if they heard about a massive outage on the news but knew that your systems were unaffected?
#3 – It’s No Use Being an Armchair Quarterback
When an outage occurs with your third-party provider, you call, you monitor, and you wait. You hear about what is happening and all you can do is shake your fist in the air, knowing that you probably could have done better to either prevent the issue or resolve it more quickly if you were in control. But you aren’t in any position to do anything because you are reliant on the hosting provider. You have no option but to simply accept the outage and try to make up for the loss to your business. You gave up your ability to fix the problem when you gave that responsibility to someone else.
Just two weeks ago, I blogged about private cloud and why some organizations feel they can’t rely on hosted solutions because of any number of failures they would have no control over. If you need control of your solution to mitigate risk, you can’t also give that control to a third party.
#4 – Have a Plan
Cloud services are a part of IT these days and most companies are already doing some form of hybrid cloud with some services hosted locally and some hosted in the cloud. Cloud-based applications like Salesforce, Office365, and Google Docs have millions of users. It is inevitable that some of your services will be cloud-based, but they don’t all have to be. There are plenty of solutions like hyperconverged infrastructure to host many services locally with the simplicity of cloud infrastructure. When outages at cloud providers occur, make sure you have sufficient infrastructure in place locally so that you can do more than just be an armchair quarterback.
Summary
Public cloud services may be part of your playbook but they don’t have to be your endgame. Take control of your data center and have the ability to navigate your business through outages without being at the mercy of third party providers. Have a plan, have an infrastructure, and be ready for the next time the internet breaks.
-
My Experience at MES16
I recently had the pleasure of working my first trade show with Scale Computing at the Midsize Enterprise Summit (MES) in Austin, TX. I’ve worked and attended many trade shows in the past but I was unsure of what to expect because A) new company and coworkers and B) MES has a boardroom format I hadn’t seen before. Let me give you a preview of my summary: It was amazing.
As a premier sponsor of the event, we had the opportunity to present our solution to all 13 of the boardrooms at the show. MES is a show that Scale Computing has attended regularly for years because of our midmarket focus. As we went from boardroom to boardroom, wheeling our live, running HC3 cluster, we encountered a great mix of attendees, from ardent fans and friends to familiar and new faces.
If you’ve been following along with Scale Computing, you know we’ve had a big year in terms of product releases and partnerships. We’ve rolled out, among other things, hybrid storage with flash tiering, DRaaS, VDI with Workspot, and we were able to announce a new partnership at MES with Information Builders for Scale Analytics. We were fortunate enough to have Ty Wang from Workspot with us to promote our joint VDI solution.
As Jeff Ready, our founder and CEO, presented in each boardroom, it was clear that the managers of midmarket IT were understanding our message. There was a definite sense that, like us, our peers working in IT administration were seeing the need for simple infrastructure that delivers solutions like virtualization, VDI, DR, and analytics as an alternative to traditional virtualization and vendors like VMware.
In the evenings, when the attendees were able to visit our booth, it was encouraging to hear from so many IT directors and managers that they’re fed up with exactly the problems our HC3 solution solves, and that the prices we displayed in our booth exceeded their expectations. It is really a testament to our entire team that our product and message resonated so strongly.
I will also note that there was another vendor, whom I will not name, at the show who offers what they call a hyperconverged infrastructure solution. That vendor really brought their “A” game with a much higher level of sponsorship than Scale Computing. This being my first show, I expected us to be overshadowed by their efforts. I couldn’t have been more wrong. When the attendee voting was tallied at the awards ceremony, we walked away with three awards including Best of Show.
It was only one amazing trade show in the grand scheme of things, but it has really cemented in my mind that Scale Computing is changing IT for the midmarket with simplicity, scalability, and availability at the forefront of thought.
-
IT Refresh and My Microwave Oven
This past weekend I replaced my over-the-range microwave oven. While the process of replacing it was pretty unremarkable, the process that led me to replace it, and the result, were what made it interesting. It got me thinking about the process by which IT groups ultimately choose to refresh infrastructure and solutions.
Let me explain what happened with my old microwave oven.
Event #1 – About 3 years ago or so, the front handle of the microwave broke off. I’m not sure how it happened (my sister and two of my nieces were living with me at the time), but it broke off pretty completely. No big deal. It was not hard to grab the door from underneath to open it and push it closed. It was a minor inconvenience. I wasn’t interested in replacing it.
Event #2 – Around 6 months to a year after the handle broke, the sensor or mechanism on the door that determined whether the door was closed started failing intermittently. When you closed the door, the microwave might or might not start. You might have to open and shut the door multiple times before it started. Annoying. Did the broken door handle and the way we were now opening the door contribute to this fault? Unknown. It was annoying but the microwave still worked. Another level of inconvenience but I was willing to live with it.
Event #3 – Add 6 more months and the carousel failed. It started failing on and off but finally failed completely. Again, the microwave still “worked” in that it emitted microwaves and heated food but now the food needed to be rotated every 15 seconds or so to prevent hotspots. Of course, the fact that I had to open and close the door to rotate the food only made the problem of the failing door sensor more acute. It was becoming pretty inconvenient to use. But it still worked.
That should have been the last straw, right? Nope. Of course, I thought about replacing it. It was somewhere on my to-do list, but by then I had been slowly acclimating myself to the inconvenience and finding workarounds. Workarounds included things like using the conventional oven more and eating out more often. More leftovers were left to spoil in the fridge. I was modifying my behavior to adjust to the inadequacies of the microwave.
Event #4 – My sister and nieces had moved out a year ago or so, and now my girlfriend had moved in. She didn’t demand I replace the microwave or anything. There was no nagging. There was no pressure. But I wanted to replace it because I wanted her to have a reliable microwave oven. So, I finally replaced it.
http://blog.scalecomputing.com/wp-content/uploads/2017/06/IMG_1823-768x576.jpg
My old microwave, “Old Unreliable,” pictured above, was a Frigidaire microwave. I am not knocking Frigidaire in any way. It served me well for many years before this journey to replacement. I have many other Frigidaire appliances I’m still using today.
Why did I wait so long? It was not terribly expensive to replace nor difficult. With “Old Unreliable”, I was costing myself time and money by letting good leftovers go to waste and being predisposed to eating at restaurants because I was inconvenienced by the microwave. I haven’t tried to calculate it but I am sure I racked up restaurant bills over the course of avoiding the old microwave that exceeded the cost of the new microwave, by a lot. All those tasty leftovers gone to waste…
I believe this overall scenario happens pretty regularly in IT. Admins and users have to deal with solutions that are inconvenient to use, prone to failure, and that incur secondary costs in excess management and maintenance.
IT Admins are expected to be able to engineer some workarounds when needed, but the more workarounds needed, the more expertise and knowledge needed, which can become costly. Consider also that constantly working around clunky implementations does not usually lead to efficient productivity or innovation. As with my microwave journey, there is a point where it starts costing more to keep the existing solution rather than investing in a new solution. Those costs are sometimes subtle and grow over time, and like a frog in a pot of water, we don’t always notice when things are heating up.
How much could be gained in productivity, cost saving, and user satisfaction by investing in a new solution? “If it ain’t broke, don’t fix it,” can only take you so far, and does not foster innovation and growth. Rather than becoming comfortable with an inadequate solution and workarounds, consider what improvements could be made with newer technology.
-
RE: Vendor Thank you!
Very happy that we were able to attend, support and participate! Thanks so much for putting this together for everyone.
-
In a Box
“In a box” has become a marketing phrase used to imply a lot of different elements combined together in a neat, single package. The phrase has been used in many other ways before, often implying a restrictive situation or feeling “boxed in.” Then there is the infamous skit from Saturday Night Live with a more literal meaning. In IT, though, "in a box" has a bit of a history.
https://www.scalecomputing.com/uploads/general-images/box.jpg
Brief IT History Overview
Information technology can be extremely complex, even looking at the basic hardware components on which it runs: the infrastructure. Integrating the most basic compute elements such as CPU and RAM with storage, networking, operating systems, hypervisors, and security elements can often require multiple technical experts and days, if not weeks, of work. The costs can be very high, which is great news for the system integrators and service providers often employed to assist with these projects. But does it have to be this way?
No. And it wasn’t always this way. Mainframes used to rule the IT roost before the rise of the standalone server. The mainframe was the massive, powerful machine that could handle all the computing needs for an organization. It wasn’t perfect, of course, and it ultimately gave way to the greater flexibility of the server. The limits of the server then led to storage area networks, clustering, and more advanced concepts like grid computing.
The server model eventually required virtualization to alleviate the overburdening costs of maintaining physical hardware for each server. Virtualization helped, but it was only the beginning. The complexity of combining all of the various components was still a huge cost sink for organizations. The cloud looked promising but was still very expensive. The only real answer seemed to be the “in a box” movement, otherwise known as converged infrastructure.
The Variations of “Converged”
The concept of converged or “in a box” infrastructure has been around for some time without really catching on, mainly because it failed to deliver on promised value. Here are some of the variations:
The Pre-Configured Rack
Often it is all of the usual components (mainly servers and storage) plugged into a rack and pre-integrated with updated drivers, operating systems, and hypervisors. It’s really no different than what you might put together yourself. Someone has simply put it together for you and priced it as a package. It saves you the trouble of having to choose the individual components separately and wonder whether they are compatible.
Cloud in a Box
This is built on the pre-configured rack concept but goes beyond the hypervisor in pre-configuring cloud services in addition to the virtualization layer. This is designed to allow organizations to easily implement a private cloud on-prem. Like the pre-configured rack, this is more or less what you would build yourself from various vendor components, just pre-configured for you.
Converged Infrastructure
This is a broadly used term but most often refers to some of the datacenter components being combined into a single appliance. This could be a combination of simple server and software-defined storage (SDS) or perhaps networking as well. What separates these from the pre-configured racks is that they are generally sold as a single vendor appliance rather than a collection of different vendor components. That being said, “converged” solutions are generally designed to be a hardware platform for a virtualization hypervisor from a different vendor.
Hyperconverged Infrastructure
Like converged infrastructure, hyperconverged combines various components into a single appliance but also adds the hypervisor. The hypervisor is not a separate vendor component as in converged infrastructure, but is a native component to the single-vendor solution. Hyperconverged provides the most complete single-vendor appliance delivering out-of-the-box virtualization with single-vendor support. Both the converged and hyperconverged appliance-based solutions usually have the added benefit of being easier to scale out as needed.
HC3 Hyperconverged Infrastructure
The HC3 solution from Scale Computing is a true hyperconverged infrastructure that is often referred to (sometimes by us) as a “datacenter in a box”. I've even talked about it often as a private cloud solution (cloud in a box), satisfying most if not all the requirements of private cloud/hybrid cloud for most organizations. It combines servers, storage, virtualization, and disaster recovery into a single appliance that can be clustered for high availability and easily scaled out. It is the easiest infrastructure solution to deploy and manage, which is why it is consistently rated and awarded as the best solution for the midmarket (where ease-of-use is so highly valued).
Even with as much of the datacenter as we have fit into the HC3 architecture, we haven’t combined every possible component. We still rely on additional components such as physical network switches and power supply systems that are probably best left separate. It is these additional components that complete the datacenter, and they are why we have partnered with other technology vendors like APC by Schneider Electric.
Schneider Electric is offering pre-validated and pre-configured datacenter solutions combining Scale Computing HC3 with APC Smart-UPS. This solution provides both the award-winning ease-of-use of HC3 combined with the award-winning reliability of APC power. You can read more about the partnership between Scale Computing and Schneider Electric in our press release and more about the reference architectures on the Schneider Electric website.
While a complete “datacenter in a box” solution may or may not become a reality in the future, we believe hyperconverged infrastructure like HC3 is the right next step toward the future of IT. We’ll continue to partner with excellent vendors such as Schneider Electric to keep providing the best datacenter solutions on the market.
-
Announcing the Scale Computing Store!
That's right, you heard it correctly. Step right up, ladies and gentlemen, the Scale Computing Swag and Apparel Store is now open for business. Stop in and find yourself something you need or get a gift for that special someone.
-
The Customer is Always Right
In the age of information, customer satisfaction is not something limited to word of mouth. Customer experiences can go viral in both triumphant and terrifying ways. Providing customers with an outstanding experience is even more important today, when so many products can be purchased online with little or no human interaction. Customer satisfaction is not just important, it is vital.
At Scale Computing, we place customer satisfaction as our highest priority, from product design all the way through to product support. It’s really just about solving the problems and issues that have made IT a burden on organizations over the last couple of decades. We make IT easier, and our customers will attest to it.
Here are just a few things our customers have had to say in 2018.
https://www.scalecomputing.com/uploads/general-images/463-0BE-F66.png
https://www.scalecomputing.com/uploads/general-images/F52-15B-E0A.png
https://www.scalecomputing.com/uploads/general-images/3C3-327-827.png
If you are interested in our Scale Computing solutions for your organization and are interested in speaking with other customers like you, let us know and we'll be happy to get you in touch.
-
VDI Calculator Becomes Open Source
Andre Leibovici was known for his VDI cost calculator and now, no longer having time to maintain it himself, has released the tool on GitHub under the Apache 2.0 license so that it is open and free for the community to use and maintain. We thought this would be a tool that MangoLassi readers would be interested in and find useful when evaluating, sizing, or planning VDI deployments.
-
Scale with Increased Capacity
2016 has been a remarkable year for Scale Computing and one of our biggest achievements was the release of the HC1150 appliance. The HC1150 significantly boosted the power and capacity of our HC1000 series and featured hybrid flash storage at a very affordable price. As a result, the HC1150 is our most popular HC3 model but, of course, we couldn’t stop there.
First, we have begun offering 8TB drives in our HC1000 series appliances, nearly doubling the maximum storage capacity (and fully doubling it on the HC1100). Data sets are ever increasing in size, and this boost means you can grow capacity even faster and more affordably, one node at a time. The unique ability of HC3 to mix and match nodes of varying capacity (and across hardware generations!) means your storage can grow as needed each time you expand your cluster.
Secondly, we have introduced a new HC1150D appliance for pre-sale which doubles the CPU capacity with a second physical processor. CPU can often be a performance bottleneck in scaling out the number of VMs supported. With this increase in CPU capacity, the HC1150D scales out an HC3 cluster to support more compute power across a greater number of VMs. The HC1150D also doubles available RAM configuration up to 512GB per appliance.
Below is a preview of the new configuration ranges and starting pricing for the HC1000 series, including the HC1150D.
Original Post: http://blog.scalecomputing.com/scale-with-increased-capacity/
-
RE: Random Thread - Anything Goes
http://poorlydrawnlines.com/wp-content/uploads/2013/08/apocalypse_253.png
Saw this on FB and giggled today and thought that we would share.
-
HC3 VM File Level Recovery with Video
Many of you have asked us recently about individual file recovery with HC3, and we’ve put together some great resources on how it works. We realize file recovery is an important part of IT operations. It is often referred to as operational recovery rather than disaster recovery, because the loss of a single file is not necessarily a disaster. It is nonetheless an important function we are able to highlight with HC3.
First off, we have a great video demo by our Pontiff of Product Management, Craig Theriac. @craig-theriac
Additionally, we have a comprehensive guide for performing file level recovery on HC3 from our expert ScaleCare support team. This document, titled “Windows Recovery ISO”, explains every detail of the process from beginning to end. To summarize briefly, the process involves using a recovery ISO to recover files from a VM clone taken from a known good snapshot. As you can see in the video above, the process can be done very quickly, in just a matter of minutes.
http://blog.scalecomputing.com/wp-content/uploads/2017/01/Screenshot-2017-01-10-12.14.53-233x300.png
Full disclosure: We know you’d prefer to have a more integrated process that is built into HC3, and we will certainly be working to improve this functionality with that in mind. Still, I think our team has done a great job providing these new resources and I think you’ll find them very helpful in using HC3 to its fullest capacity. Happy Scaling!
-
RE: Toilets of the World
There are many new toilet and bidet combinations today. It's like... hyperconverged.
-
How Important is DR Planning?
Disaster Recovery (DR) is a crucial part of IT architecture, but it is often misunderstood, clumsily deployed, and then neglected. It is often unclear whether the implemented DR tools and plan will actually meet SLAs when needed. Unfortunately, it often isn’t until a disaster has occurred that an organization realizes its DR strategy has failed. Even when organizations are able to successfully muddle through a disaster event, they often discover they never planned for failback to their primary datacenter environment.
http://blog.scalecomputing.com/wp-content/uploads/2017/01/plan-ahead-300x175.jpg
Proper planning can ensure success and eliminate uncertainty, beginning before implementation and then enabling continued testing and validation of the DR strategy, all the way through disaster events. Planning DR involves much more than just identifying workloads to protect and defining backup schedules. A good DR strategy includes tasks such as capacity planning, identifying workload dependencies, defining workload protection methodology and prioritization, defining recovery runbooks, planning user connectivity, defining testing methodologies and testing schedules, and defining a failback plan.
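To make one of those planning tasks concrete, here is a hypothetical sketch of checking a DR test's measured recovery times against per-workload RTO targets. The workload names, priorities, and numbers are all invented for illustration:

```python
# Hypothetical DR-plan sanity check: compare each workload's measured
# recovery time (from a DR test) against its SLA recovery time objective
# (RTO). Workload names, priority tiers, and numbers are invented.

workloads = [
    {"name": "erp-db",   "priority": 1, "rto_min": 30,  "measured_min": 22},
    {"name": "file-srv", "priority": 2, "rto_min": 120, "measured_min": 95},
    {"name": "intranet", "priority": 3, "rto_min": 240, "measured_min": 300},
]

def sla_misses(workloads):
    """Return workloads whose tested recovery time exceeded the RTO,
    most critical first -- these need attention in the runbook."""
    misses = [w for w in workloads if w["measured_min"] > w["rto_min"]]
    return sorted(misses, key=lambda w: w["priority"])

misses = sla_misses(workloads)
```

A runbook review after a DR test would then focus first on whatever `sla_misses` returns, most critical workloads first.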
At Scale Computing, we take DR seriously and build in DR capabilities such as backup, replication, failover, and failback to our HC3 hyperconverged infrastructure. In addition to providing the tools you need in our solution, we also offer our DR Planning Service to help you be completely successful in planning, implementing, and maintaining your DR strategy.
Our DR Planning Service, performed by our expert ScaleCare support engineers, provides a complete disaster recovery runbook as an end-to-end DR plan for your business needs. Whether you have already decided to implement DR at your own DR site or to utilize our ScaleCare Remote Recovery Service in our hosted datacenter, our engineers can help you with all aspects of the DR strategy.
The service also includes the following components:
- Setup and configuration of clusters for replication
- Completion of Disaster Recovery Run-Book (disaster recovery plan)
- Best-practice review
- Failover and failback demonstration
- Assistance in facilitating a DR test
-
The Four Things That You Lose with Scale Computing HC3
Choosing to convert to hyperconvergence is a big decision and it is important to carefully consider the implications. For a small or midsize datacenter, these considerations are even more critical. Here are 4 important things that you lose when switching to Scale Computing HC3 hyperconvergence.
1. Management Consoles
When you implement an HC3 cluster, you no longer have multiple consoles to manage separate server, storage, and virtualization solutions. You are reduced to a single console from which to manage the infrastructure and perform all virtualization tasks, and only one view to see all cluster nodes, VMs, and storage and compute resources. Only one console! Can you even imagine not having to manage storage subsystems in a separate console to make the whole thing work? (Note: You may also begin losing vendor-specific knowledge of storage subsystems, as all storage is managed as a single pool alongside the hypervisor.)
2. Nights and Weekends in the Datacenter
Those many nights and weekends you’ve become accustomed to working, spent performing firmware, software, or even hardware updates to your infrastructure, will be lost. You don’t have to take workloads offline with HC3 to perform infrastructure updates so you will just do these during regular hours. No more endless cups of coffee along with the whir of cooling fans to keep you awake on those late nights in the server rooms. Your relationship with the nightly cleaning staff at the office will undoubtedly suffer unless you can find application layer projects to replace the nights and weekends you used to spend on infrastructure.
3. Hypervisor Licensing
You’ll no doubt feel this loss even during the evaluation and purchasing of a new HC3 cluster. There just isn’t any hypervisor licensing to be found, because the entire hypervisor stack is included without any third-party licensing required. There are no license keys, no licensing details, and no licensing prices or options. The hypervisor is just there. Some other hyperconvergence vendors still sell hypervisor licensing, but it just won’t be found at Scale Computing.
4. Support Engineers
You’ve spent many hours developing close relationships with a circle of support engineers from your various server, storage, and hypervisor vendors over months and years but those relationships simply can’t continue. No, you will only be contacting Scale Computing for all of your server, storage, virtualization, and even DR needs. You’ll no doubt miss the many calls and hours of finger pointing spent with your former vendor support engineers to troubleshoot even the simplest issues.
The original article is on our blog, but I copied all of the content here for you guys!
-
Groundhog Day
Today is Groundhog Day, a holiday celebrated in the United States and Canada where the length of the remaining winter season is predicted by a rodent. According to folklore, if it is cloudy when a groundhog emerges from its burrow on this day, then the spring season will arrive early, some time before the vernal equinox; if it is sunny, the groundhog will supposedly see its shadow and retreat back into its den, and winter weather will persist for six more weeks. (Wikipedia)
Today the groundhog, Punxsutawney Phil, saw his shadow. Thanks, Phil.
Groundhog Day is also the name of a well-loved film starring Bill Murray, whose character, Phil, is trapped in a temporal loop, repeating the same day over and over. I won’t give the rest away for anyone who has not seen the movie, but it got me thinking. What kind of day would you rather have to live over and over as an IT professional? I’m guessing it does not include the following:
- Manually performing firmware and software updates to your storage system, server hardware, hypervisor, HA/DR solution, or management tools.
- Finding out that one vendor’s update broke a different vendor’s solution.
- Having to deal with multiple vendor support departments to troubleshoot an issue none of them will claim responsibility for.
- Dealing with downtime caused by a hardware failure.
- Having to recover a server workload from tape or cloud backup.
- Having to deal with VMware licensing renewals.
- Thanklessly working all night to fix an issue, only to receive complaints about more downtime.
These are all days none of us want to live through even once, right? But of course, many IT professionals do find themselves reliving these days over and over again because they are still using the same old traditional IT infrastructure architecture that combines a number of different solutions into a fragile and complex mess.
At Scale Computing we are trying to break some of these old cycles with simplicity, scalability, and affordability. We believe, and our customers believe, that infrastructure should be less of a management and maintenance burden in IT. I encourage you to see for yourself how our HC3 virtualization platform has transformed IT with both video and written case studies here.
We may be in for six more weeks of winter but we don’t need to keep repeating some of the same awful days we’ve lived before as IT professionals. Happy Groundhog Day!
-
4 Hidden Infrastructure Costs for the SMB
Infrastructure complexity is not unique to enterprise datacenters. Just because a business or organization is small does not mean it is exempt from the feature needs of big enterprise datacenters. Small and mid-size organizations require fault tolerance, high availability, mobility, and flexibility as much as anyone. Unfortunately, the complexity of traditional datacenter and virtualization architecture hits the SMB the hardest. Here are 4 of the hidden costs that can cripple the SMB IT budget.
1 – Training and Expertise
Setting up a standard virtualization infrastructure can be complex; it requires virtualization, networking, and storage expertise. In larger enterprises, that expertise is often spread across dozens of admins through new hires, formal training, or consulting. In the SMB datacenter, however, with only a handful of admins (or even just one) and limited budgets, expertise can be harder to come by. Self-led training and research can take costly hours out of every week, and admins may only have time to achieve the minimum level of expertise needed to maintain an infrastructure, without the ability to optimize it. Lack of expertise affects infrastructure performance and stability, preventing the best return on the infrastructure investment.
2 – Support Run-Around
A standard virtualization infrastructure has components from a number of different vendors, including the storage vendor, server vendor, and hypervisor vendor, to name just the basics. Problems arising in the infrastructure are not always easy to diagnose, and with multiple vendors and vendor support centers in the mix, this can lead to a lot of finger-pointing. Admins can spend hours, if not days, calling various support engineers from different vendors to pinpoint the issue. Long troubleshooting times can mean long outages and lost productivity, all because of vendor support run-around.
3 – Admin Burn-Out
The complexity of standard virtualization environments, with multiple vendor solutions and multiple layers of hardware and software, means longer nights and weekends performing maintenance tasks such as firmware updates, refreshing hardware, adding capacity, and dealing with outages caused by non-optimized architecture. Not to mention, admins of these complex architectures cannot detach long enough to enjoy personal time off because of the risk of an outage. Administrators who spend long nights and weekends dealing with infrastructure issues are not as productive in daily tasks and have less energy and focus for initiatives that improve process and performance.
4 – Brain Drain
Small IT shops are particularly susceptible to brain drain. The knowledge of all of the complex hardware configurations and application requirements is concentrated in a very small group, in some cases a single administrator. While those individuals are around, there is no problem, but when one leaves for whatever reason, there is a huge gap in knowledge that may never be filled. There can be huge costs involved in rebuilding that knowledge or redesigning systems to match the expertise of the remaining or replacement staff.
Although complexity has hidden costs for all datacenters, small, medium, and enterprise alike, the complexity designed for the enterprise and inherited down into the SMB makes those costs more acute. When choosing an infrastructure solution for a small or mid-size datacenter, it is important to weigh these hidden costs against the cost of investing in solutions whose automation and management reduce the need for specialized expertise, support run-around, and after-hours administration. Modern hyperconverged infrastructures like HC3 from Scale Computing offer simplicity, availability, and scalability to eliminate these hidden infrastructure costs.
Original Article: http://blog.scalecomputing.com/4-hidden-infrastructure-costs-for-the-smb/