Best posts made by scale
-
RE: Scale Computing General News
Scale Computing Leads The Edge Charge With First Hyper-Converged System Built For Edge Computing on CRN.
Scale Computing is doubling down on its investments around the edge with the launch of the industry's first hyper-converged system that is purpose-built for edge computing environments.
Jeff Ready, CEO of Scale Computing, said that the company's new offering, HC3 Edge, gives customers the benefits of the company's HC3 on-premises appliance-based hyper-converged infrastructure on small form factors that meet edge requirements.
"We are seeing strong demand from our customers for HCI [Hyper-Converged] at the edge, especially among our large retail, manufacturing, and healthcare verticals where a combination of factors, including IoT adoption, are increasing on-prem demands," said Ready, CEO of Scale Computing. "Customers need flexibility both now and in the future. What we are providing at Scale is a single platform spanning on-premises, cloud, and edge computing that will serve our customers well in the decade to come."
-
Veeam - Full VM Restore into HC3 - Windows and Linux Recovery
Virtually any backup product that provides "Bare Metal Recovery" can be used to back up and restore HC3 VMs, either for ongoing protection or to provide P2V or V2V migration. (Veeam, Acronis, Unitrends, Barracuda, and StorageCraft are just some of the products that have been tested, and while this post focuses on full VM restores, all of them also make file-level restores from backup easy.)
In the case of Veeam, this could also mean restoring hypervisor-level backups of VMs made while running on VMware or Hyper-V systems, or OS-level backups of systems running on physical, cloud, or other hypervisors such as HC3 ... all would work the same way.
The key is allowing the Veeam Recovery Media (ISO) to access the HC3 virtual hard drives which is easily accomplished by loading the Scale HC3 performance (virtio) drivers during the recovery process as shown here. Further, if you are restoring a VM from a backup made on another platform, Veeam provides the option to permanently "inject" the virtio drivers into the restored image on HC3 after the recovery.
To give the Veeam recovery ISO access to the virtio disks, you need to select and inject the virtio drivers. The recovery environment will then see the virtual disks provided by the HC3 VM you want to restore into and allow the backup data to be restored into that VM.
https://us.v-cdn.net/6029942/uploads/editor/ws/b25ng61vm354.png
then it's just a normal full system recovery ...
https://us.v-cdn.net/6029942/uploads/editor/9t/3ikpw09ion4a.png
at the end of the process, the VM will reboot into the just restored OS image
If the original backup was of a VM running on HC3 where the virtio drivers are already present - it is not necessary to "inject" new drivers during the recovery as they are already installed in the restored image.
I've also done some testing with the Veeam Backup Agent for Linux and the Veeam Linux Based System Recovery ISO as well and will post some examples of that.
-
‘Hyperconverged Solution of the Year’ – That’s Us!
http://blog.scalecomputing.com/wp-content/uploads/2017/12/Screenshot-2017-12-07-13.37.51-768x386.png
We built our HC3 system with simplicity in mind, integrating storage, servers and virtualisation into one easy-to-use solution – a true data centre in a box. Last month, on November 23rd, the HC3 system was awarded “hyperconverged solution of the year” at the 2017 SVC Awards.
The annual SVC Awards took place in London to celebrate and reward this year’s leading companies in storage, cloud, and digitalisation. The awards recognised end user, channel, technology and innovation success. With thousands of public votes cast and the IT industry’s leading executives in attendance, the ceremony highlighted the achievements and excellence of organisations leading the way in the technology sector.
Combining servers, storage, virtualisation, and backup/DR, HC3 is an all-in-one, hyperconverged infrastructure solution for IT. The HC3 appliance-based design is simple, scalable, and highly available with a low price and low cost of ownership.
Scale Computing is leading the market with innovation in hyperconverged infrastructure, edge computing, hybrid cloud, and highly efficient storage architecture featuring NVMe. HC3 is not only a solution designed to lower costs and reduce infrastructure footprint – it is a whole new way of thinking about IT infrastructure incorporating cloud, on-prem, and IoT into a complete infrastructure solution.
We have always been committed to delivering a leading infrastructure solution, designed to meet customer needs. We are honoured to have been awarded hyperconverged solution of the year, which recognises and reinforces our continued efforts in the industry.
Scale Computing is constantly looking to work with new partners internationally. Please click here to find out more information.
-
How do I backup my VMs on Scale HC3?
A: There are several options available to HC3 users, including the native HC3 backup capabilities.
HC3 features a full set of native capabilities that allow users to back up, replicate, fail over, restore, and recover virtual machines. Snapshot-based, incremental backups can be performed between HC3 systems without any additional software or licensing. Many HC3 users implement a second HC3 cluster or a single node to serve as a backup location or failover site. The backup location can be a second HC3 system that is onsite or remote. The backup location can be used just to store backups, or to fail VMs over if the primary HC3 system fails. HC3 VM backups can be restored to the primary HC3 system by sending only the data that is different. Backup scheduling and retention can be configured granularly for each VM to meet SLAs.
Scale Computing also offers the ScaleCare Remote Recovery Service as a cloud-based backup for HC3 systems, supporting all of the native HC3 features. For users who lack a secondary backup site, the Remote Recovery Service acts as a backup site for any VMs that need protection. VMs can be recovered instantly on the remote recovery platform and run in production until they can be restored back to the primary site. The Remote Recovery Service also includes a runbook to assist in DR planning and execution, from implementation to recovery, and ScaleCare engineers assist with planning, implementation, DR testing, and recovery.
HC3 VMs can also be backed up using virtually any third-party backup software that supports your guest operating system and applications. If you are migrating an existing physical machine to a VM, you likely don’t need to change your backup at all. Backup solutions that use in-guest agents, including Veeam, can back up VMs over the network to a backup server or another location, depending on the solution. (Other popular options we see, and in some cases have tested, include Unitrends, Acronis, StorageCraft, and Barracuda.)
Some HC3 users choose to use the native HC3 export features to export VM snapshots or backups to third-party backup servers or storage. This extra backup method can be useful for long-term storage of VM backups, and these exported backups can be imported into any other HC3 system for recovery. (Note: while exports currently can't be scheduled in the UI, they can be performed on live machines at any time, and the ScaleCare support team may be able to set up a simple scheduling process for these "under the hood" ... contact support to discuss.)
You can read more about HC3 backup and disaster recovery in our whitepaper,
Disaster Recovery Strategies with Scale Computing.
-
Create a WinPE ISO with VirtIO Drivers included for Recovery or Restore Processes
In situations where it is necessary to boot a VM to a rescue environment, and a Windows recovery environment is preferred, Microsoft has made it extremely easy to create a CD image that can be uploaded to an HC3 cluster and used as a boot drive for a VM.
These steps were performed on a Windows 10 host; Microsoft's own documentation is much more comprehensive and is the better resource if you run into issues or discrepancies.
It is assumed that these steps will be run in a Windows VM on an HC3 cluster, with the Scale Tools CD mounted and accessible to that VM (the e:\ driver paths below refer to that CD).
First, download and install the Windows Assessment Deployment Kit as listed on Microsoft's WinPE walkthrough.
According to that walkthrough, the Deployment Tools and Preinstallation Environment components are required for installation.
Once complete, start the "Deployment and Imaging Tools Environment" application that was installed with the ADK, using elevated privileges (Start -> type 'deployment', right-click and select "Run as administrator"), and use the following commands:
- copype amd64 C:\WinPE_amd64
- dism /mount-image /imagefile:"c:\winpe_amd64\media\sources\boot.wim" /index:1 /mountdir:"c:\winpe_amd64\mount"
- dism /add-driver /image:"c:\winpe_amd64\mount" /driver:"e:\drivers\net\w10\netkvm.inf"
- dism /add-driver /image:"c:\winpe_amd64\mount" /driver:"e:\drivers\serial\w10\vioser.inf"
- dism /add-driver /image:"c:\winpe_amd64\mount" /driver:"e:\drivers\stor\w10\viostor.inf"
- dism /unmount-image /mountdir:"c:\winpe_amd64\mount" /commit
- MakeWinPEMedia /ISO C:\WinPE_amd64 C:\WinPE_amd64\VirtIO-WinPE_amd64.iso
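Optionally, before the unmount/commit command above, the injected drivers can be verified while the image is still mounted. A quick check, using the same mount path as above:
dism /image:"c:\winpe_amd64\mount" /get-drivers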
The machine architecture, filenames and paths above are all dependent upon the environment and configuration choices.
Lastly, upload the created ISO (C:\WinPE_amd64\VirtIO-WinPE_amd64.iso) to the HC3 cluster, insert the ISO into a VM's empty CDROM, and start the VM.
With the VM booted to the PE ISO, an SMB share can be mounted and files copied as needed, or other recovery operations completed. For example, to mount an SMB share from a remote host:
net use * \\fileserver\share * /user:USERNAME@DOMAIN
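With the share mapped (net use * assigns the next available drive letter), files can be copied with xcopy, which is included in WinPE. A minimal sketch, assuming the VM's system volume shows up as C: and the share was mapped to Z: (drive letters and paths are hypothetical):
xcopy C:\Data Z:\recovered\Data /E /H /C /Y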
Further customization and capabilities can be applied and configured prior to the MakeWinPEMedia step above, as needed, but that extends beyond the initial needs of this post.
Information for the above was gathered from the following Microsoft pages:
WinPE: Create a Boot CD, DVD, ISO, or VHD
WinPE: Add drivers
-
RE: How Can I Convert My Existing Workloads to Run on Scale HC3
qemu-img is actually what the HyperCore OS uses internally when it is doing both import and export of VMs to/from HC3. As a result, if you use the "foreign VM import" process referenced above, you can simply rename a VMDK, for example, to the .qcow2 file extension that HC3 expects and then import it; qemu-img will detect that the disk contained in the file is really a VMDK and do the conversion automatically for you, saving a step!
One other benefit of letting HC3 do the conversion is that it will convert to the right qcow2 format for that HC3 version automatically. If you are doing the pre-conversion with qemu-img on Windows (or Linux, for that matter), you may want to run qemu-img info on an empty qcow2 exported from HC3 to see what flags it has and try to match them. Depending on the version of HC3 and the version of qemu-img for Windows you are using, I have seen cases (mostly with older versions of HC3) where you need to specify the compat version to match, something like this:
qemu-img convert -p -O qcow2 -o preallocation=metadata,compat=0.10 source-image.qcow2 output-image.qcow2
(this was an older version of HC3)
On a very new version of HC3 as of this post, it looks like compat: 1.1. I got tired of messing with that and the extra step, so now I always start by renaming the virtual disk files to a .qcow2 extension and letting HC3 figure it out first, which generally works. (VHDX may be the exception ... and of course you sometimes have to get into the "right" VMDK format first, as there are many different VMDK formats.)
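For reference, here is how to check that compat flag on a qcow2 that HC3 itself exported (the filename is hypothetical); the value appears under the "Format specific information" section of the output:
qemu-img info hc3-exported-disk.qcow2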
Another tip / FAQ: if you ever have an .OVA file (generally a virtual appliance), it is just a tar archive that you can expand; inside there will be a virtual disk file, usually .vmdk but sometimes .img, that you can convert/import into HC3 using the processes above.
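A minimal sketch of that OVA tip, using hypothetical filenames (tar is available on Linux and on current Windows 10/11 builds):
tar -xvf appliance.ova
qemu-img convert -p -O qcow2 appliance-disk1.vmdk appliance-disk1.qcow2
The first command unpacks the .ovf descriptor and the virtual disk(s); the second converts the extracted VMDK to qcow2, or you can skip the convert step and let HC3 do it as described above.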
Of course ALL of this is just getting HC3 to see the virtual disk. The OS on that virtual disk has to have the right drivers active to be able to boot on HC3, which means either that it has virtio drivers pre-installed and set to boot (if "performance" drivers are selected when creating the VM) or IDE drivers (if "compatible" drivers are selected ... and, for Windows, mergeide.reg was run before migration). Linux is generally just automatic, but Windows will result in a 0x7B BSOD if a driver for the boot disk isn't active on the imported virtual disk.
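For the Windows "compatible"/IDE path, the essence of what mergeide.reg does is mark the standard IDE/ATAPI drivers as boot-start on the source system before migration so Windows can find its boot disk. A rough illustration only, not a replacement for the actual mergeide.reg (which also adds CriticalDeviceDatabase entries):
reg add "HKLM\SYSTEM\CurrentControlSet\Services\atapi" /v Start /t REG_DWORD /d 0 /f
reg add "HKLM\SYSTEM\CurrentControlSet\Services\intelide" /v Start /t REG_DWORD /d 0 /f
reg add "HKLM\SYSTEM\CurrentControlSet\Services\pciide" /v Start /t REG_DWORD /d 0 /f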
-
4 IT Pitfalls to Avoid in 2018
Technology can be a great investment if you invest wisely. As technology changes, it is always a good idea to check if the ideas you had a year ago are still valid in the coming year. Here are a few ideas to think about in 2018 so you can avoid pitfalls like Pitfall Harry does in the classic Activision game Pitfall, pictured below.
https://www.scalecomputing.com/uploads/general-images/A2600_Pitfall.png
SAN Technology
Don’t buy a SAN. I repeat. Do not buy a SAN. Whether you’ve bought a SAN in the past or not, it is now a dying technology. SANs have been a staple of datacenter infrastructure for the last couple of decades, but technology is moving on from the SAN. A big part of the reason is the rise of flash storage, with storage speeds overtaking the speeds that controller-based SAN architectures can provide.
NVMe is the new generation of flash storage and is designed to allow storage to interact directly with the CPU, bypassing controllers and storage protocols. We are entering into computing resource territory where storage is no longer the slowest resource in the compute stack and architectures will need to clear the compute path of controllers and protocols for optimal speeds.
Whether the SAN is physical or virtual, it still has controllers and protocols weighing it down. Even many new virtual or software-defined storage architectures still follow the SAN model and have virtualized the controller as a virtual storage appliance (VSA) which is a VM acting as a storage controller. You may not be ready for NVMe right now, or in 2018, but don’t let a 2018 investment in dying SAN technology keep you from moving to NVMe when you may need it in 2-3 years.
Instead, look for controller-less storage architectures like SCRIBE from Scale Computing. In testing, Scale Computing was able to achieve storage latency as low as 20 microseconds (not milliseconds) with NVMe storage. Controller-based SAN technologies could never come close to these speeds.
Going All-In on Cloud
One of the recurring themes I heard in 2017 was, “Everyone should have a cloud strategy.” Still true in 2018, but from what I saw in 2017, many interpreted this as abandoning on-prem and migrating entirely to cloud computing. There are clearly many cloud providers who could be pushing this notion of an all-in cloud strategy but the reality is that those that have already been executing cloud strategies are largely landing on some kind of hybrid cloud architecture.
The cloud is a beautiful resource and most organizations are probably already using it in one way or another, whether it is Salesforce, Office 365, web scaling, a few VMs in AWS or Azure for dev and test, or IoT. The benefit vs. cost varies not only by service but also by how these services are used, which differs from business to business. It can be easy to jump into a cloud-based service without fully understanding the cost or performance characteristics, and in many cases it may not be easy to escape once you’ve committed.
If you are considering cloud, it is important to evaluate the solution thoroughly for each aspect of your IT needs. Understand not only the cost but the performance capabilities vs. on-prem solutions. There are many systems, like manufacturing, that don’t easily tolerate some of the latency and even outages that can come with cloud computing. On-prem solutions for these systems that we refer to as edge computing may be a requirement.
It is very likely that a combination of on-prem solutions (like hyperconverged infrastructure) and cloud-based solutions may be the best overall strategy for your IT department. Cloud is just one more tool in the IT toolbox to provide the services your business needs.
Over-Centralizing the Datacenter
The pendulum always seems to swing back and forth between centralized datacenters and distributed datacenters. When cloud computing was becoming more mainstream, the pendulum seemed to swing toward the centralized approach. As I just discussed about cloud computing, the pendulum now seems to be swinging back away from centralization with the rise of edge computing and micro-datacenters. These on-prem solutions can provide greater availability and performance than cloud for a number of use cases.
The benefits of centralizing are attractive because it could lower operational costs by consolidating systems under one roof. However, there are far better remote management systems available these days that can also lower the operational costs of remote site infrastructure. In addition, simplified infrastructure solutions like hyperconverged infrastructure to create micro-datacenters are much easier to deploy and manage than traditional infrastructures.
As the pendulum continues to swing, we will likely see most organizations landing closer to the middle with a combination of solutions. The IT department of the near future will likely include IoT devices, micro-datacenters, cloud-based computing, and more traditional datacenter components all combined in an overall IT infrastructure strategy.
Premium (and Legacy) Hypervisors
As virtualization continues to evolve with technologies like cloud, containers, IoT, hyperconvergence, and beyond, the need for the hypervisor as a premium solution is diminishing. Hypervisor licensing became a big business with high licensing costs and those initial hypervisors did make virtualization mainstream by pulling together the traditional infrastructure components like servers and SANs. That traditional approach has now reached a plateau.
For cloud, hyperconverged infrastructure, and containers, hypervisors have become a commodity and big premium hypervisors with features you may never need are often not the best fit. Hypervisors that have been designed specifically to be lightweight and more efficient for technologies like hyperconverged infrastructure or cloud are part of a growing market trend. Traditional or legacy hypervisors that were designed to work with servers and SANs over a decade ago are not necessarily the best investment for the future.
Summary
Unlike Pitfall Harry, a misstep will most likely not get you eaten by an alligator but it may end up costing your organization in the long run. Only you know what is best for your organization but it is important to consider your strategies carefully before blowing your IT budget. The experts at Scale Computing will be happy to help you understand the benefits of hyperconverged infrastructure and datacenter modernization into 2018 and beyond. For more information contact us at [email protected].
-
Best of Show - Midmarket CIO Forum
This is not the first time I have blogged about winning awards at the Midmarket CIO Forum. Our midmarket customers and their peers seem to just naturally recognize the value of our infrastructure solutions. So maybe it isn't too surprising that when the Midmarket CIO Forum introduced a new Best of Show award this year, Scale Computing came out on top along with a win for Best Midmarket Strategy.
https://www.scalecomputing.com/uploads/general-images/Best_in_Show_CIO.jpg
I've been in the IT industry for 20 years now and have been involved in many award submissions over those years at all levels of the industry. I've also been in meetings and involved in projects designed to help win awards. If I have learned anything in those years, it is that setting out to win an award is a losing strategy.
The only time I've been involved in award-winning solutions is when the only objective has been to provide a great solution to customers. The concept of winning in IT should go no further than making IT easier to implement, easier to manage, and cost less. That is what we strive to do at Scale Computing. The fact that we are recognized by industry CIOs is just icing on the cake.
https://www.scalecomputing.com/uploads/general-images/Awards_CIO.jpg
-
RE: Scale Radically Changes Price Performance with Fully Automated Flash Tiering
Also, a more informal posting from our blog:
Turning HyperConvergence Up to 11
People seem to be asking me a lot lately about incorporating flash into their storage architecture. You probably already know that flash storage is still a lot more expensive than spinning disks. You are probably not going to need flash I/O performance for all of your workloads nor do you need to pay for all flash storage systems. That is where hybrid storage comes in.
Hybrid storage solutions featuring a combination of solid state drives and spinning disks are not new to the market, but because of the cost per GB of flash compared to spinning disk, the adoption and accessibility for most workloads is low. Small and midsize businesses, in particular, may not know if implementing a hybrid storage solution is right for them.
Hyperconverged infrastructure also provides the best of both worlds in terms of combining virtualization with storage and compute resources. How hyperconvergence is defined as an architecture is still up for debate and you will see various implementations with more traditional storage, and those with truly integrated storage. Either way, hyperconverged infrastructure has begun making flash storage more ubiquitous throughout the datacenter and HC3 hyperconverged clustering from Scale Computing is now making it even more accessible with our HEAT technology.
HEAT is HyperCore Enhanced Automated Tiering, the latest addition to the HyperCore hyperconvergence architecture. HEAT combines intelligent I/O mapping with the redundant, wide-striping storage pool in HyperCore to provide high levels of I/O performance, redundancy, and resiliency across both spinning and solid state disks. Individual virtual disks in the storage pool can be tuned for relative flash prioritization to optimize the data workloads on those disks. The intelligent HEAT I/O mapping makes the most efficient use of flash storage for the virtual disk following the guidelines of the flash prioritization configured by the administrator on a scale of 0-11. You read that right. Our flash prioritization goes to 11.
http://blog.scalecomputing.com/wp-content/uploads/2016/04/Screenshot-2016-04-19-13.07.06.png
HyperCore gives you high-performing storage on both spinning-disk-only and hybrid tiered storage because it is designed to let each virtual disk take advantage of the speed and capacity of the whole storage infrastructure. The more resources that are added to the clusters, the better the performance. HEAT takes that performance to the next level by giving you fine-tuning options for not only every workload, but every virtual disk in your cluster. Oh, and I should have mentioned it comes at a lower price than other hyperconverged solutions.
If you still don’t know whether you need to start taking advantage of flash storage for your workloads, Scale Computing can help with free capacity planning tools to see if your I/O needs require flash or whether spinning disks still suffice under advanced, software-defined storage pooling. That is one of the advantages of a hyperconvergence solution like HC3; the guys at Scale Computing have already validated the infrastructure and provide the expertise and guidance you need.
-
Scale Computing and Information Builders Bring BI to the Appliance Market
Scale and Information Builders have teamed up to make an appliance for Business Intelligence and Analytics, bringing this complicated and often difficult area of computing within the reach of small and, more often, medium businesses looking to harness this competitive advantage without investing heavily in the architectural design and implementation aspects of these systems. With Scale and Information Builders you get a custom-built, designed, and implemented appliance that allows you to get up and running with your BI and analytics needs as quickly as possible.
The product is called Scale Analytics and starts at less than $50,000!
-
RE: Scale Radically Changes Price Performance with Fully Automated Flash Tiering
@Breffni-Potter said in Scale Radically Changes Price Performance with Fully Automated Flash Tiering:
What's the TL:DR version?
When is it? Why should we be excited? What is it?
When is it: Today (SAM has one already, so it's not theory, it's really on the market.)
Why Should You Be Excited: Simple GUI (just a slider) that makes high capacity spinning disk and high performance SSD tiering as easy as setting your desired performance priority.
What is it? A hyperconverged, fully automated spinning disk / SSD tiering system!
-
Press Release: Scale Computing Radically Simplifies Disaster Recovery with Launch of DRaaS Offering
Our public press release here: https://www.scalecomputing.com/press_releases/scale-computing-radically-simplifies-disaster-recovery-with-launch-of-draas-offering/
Scale Computing Radically Simplifies Disaster Recovery with Launch of DRaaS Offering
Scale Computing, the market leader in hyperconverged storage, server and virtualization solutions for midsized companies, today launched its ScaleCare Remote Recovery Service, a Disaster Recovery as a Service (DRaaS) offering that provides offsite protection for businesses at a price that fits the size and budget of their datacenter needs.
Building on the resiliency and high availability of the HC3 Virtualization Platform, ScaleCare Remote Recovery Service is the final layer of protection from Scale Computing needed to ensure business continuity for organizations of all sizes. ScaleCare Remote Recovery Service is a cost-effective alternative to backup and offsite shipping of physical media or third-party vendor hosted backup options. Built into the HC3 management interface, users can quickly and easily set up protection for any number of virtual machines to Scale Computing’s SAEE-16 SOC 2 certified, PCI compliant, remote datacenter hosted by LightBound.
“The ScaleCare Remote Recovery Service has put my mind at ease when it comes to recovery,” said David Reynolds, IT manager at Lectrodryer LLC. “Setting up automatic monthly, weekly, daily and minute snapshots of my VMs is unbelievably easy. All these are pushed to the cloud automatically and removed on the date you set them to expire. Highly recommended.”
ScaleCare Remote Recovery Service provides all the services and support businesses need without having to manage and pay for a private remote disaster recovery site. Whether protecting only critical workloads or an entire HC3 environment, users pay for only the VM protection they need without any upfront capital expense.
Based on snapshot technology already built into the HC3 HyperCore architecture, ScaleCare Remote Recovery Service allows users to customize their replication schedules to maximize protection, retention and bandwidth efficiency. After the initial replica is made, only changed blocks are sent to the remote data center. Remote availability of failover VMs within minutes, failback to on-site local HC3 clusters and rollbacks to point-in-time snapshots as needed provide ultimate data protection and availability.
“Remote data protection is the best course of action organizations can take to alleviate the proverbial placing of all of their eggs in one basket, but hosting a site for disaster recovery purposes is often not something midrange companies are prepared to handle themselves,” said Jeff Ready, CEO and co-founder of Scale Computing. “With remote, continuous replication and failover features already baked into the HC3 Virtualization Platform, launching the ScaleCare Remote Recovery Service is an ideal way for us to ensure our customers have the remote disaster recovery they need without the costs and headaches of doing it themselves.”
Pricing for ScaleCare Remote Recovery Service starts at $100 per month per virtual machine. For more information or to sign up for services, interested parties may contact Scale Computing via the company’s website at http://www.scalecomputing.com or by calling 1-877-SCALE-59.
Thanks for putting up with the press release postings, guys!
-
Scale UK Case Study: Penlon
Location: Abingdon, Oxfordshire, UK
Industry: Medical Manufacturing
Key Challenges
- Existing solution was difficult and complex to manage
- Inability to easily migrate data
- The legacy solution was outdated and updates were proving difficult
- No ability to scale out
- Concerns over the reliability of business continuity
Scale Computing Solution
Penlon selected Scale Computing’s HC3 cluster to support over 40 virtual machines.
Business Benefits
- No licensing costs
- Improved data centre capabilities
- Dramatically reduced management time
- Added ability to scale out the IT infrastructure and plan IT budgets
- Reduction in RPO and RTO
- Complete business continuity for the IT environment
Penlon is a leading medical device manufacturing company, based in Abingdon. Established in 1943, the company has a long-standing reputation for quality and service within the medical industry for manufacturing and distributing products and systems for anesthesia, intubation, oxygen therapy and suction. Penlon operates internationally, with a presence in over 90 countries, spanning across Europe, North America, Middle East and Asia.
As a traditional medical manufacturing company, Penlon constantly reviews its product design, manufacturing processes and IT systems. As part of this, Penlon looked to review its IT systems in order to deliver on two main objectives: simplified management and business continuity.
Having previously moved to virtualisation in order to save time and to create a more streamlined and enhanced IT environment, Penlon wanted to simplify the management and complexity of its infrastructure whilst guaranteeing business continuity for its customers.
IT Challenges
Penlon had previously been relying on a traditional VMware environment, but over time it was proving too complex and difficult to manage. In particular, the complexity of the system meant that Penlon struggled to migrate data and install updates. Without regular updates the company was left vulnerable to downtime and costly outages, as there was no way to ensure they had the most up to date environment.
Tony Serratore, IT Manager at Penlon, commented, “The systems were vastly difficult to manage and when it came to updates we had to ensure everything was in sync. If the system seemed to be working we would not even think about installing upgrades as it was too complex and came with risks. But this wasn’t a long term solution.”
Not only was the existing VMware environment difficult to update, but the system was high maintenance. The IT team would have to allocate resources to ensure its smooth running, costing them both time and money. “The choice was to either stay with our current system running the risk of downtime or we could look for a new solution that was simple, cost effective, and easy to use,” explained Serratore.
Identifying the key requirements
Of paramount importance, not just to the IT team but to the company, was the need to guarantee continuity. Penlon also wanted the added ability to scale, adding capacity as and when needed. Serratore commented, “As an international company Penlon is constantly looking to expand its business. However, planning budgets for IT infrastructure ahead can be difficult. We wanted a solution which would work with us and support our growing business, offering the flexibility and agility we needed.”
Proving the concept
After evaluating the market and considering a number of other vendors such as Simplivity, Penlon opted for the Scale solution. The company was introduced to Scale Computing through reseller NAS UK and opted for a two week Proof of Concept (POC) of the Scale Computing hyperconverged HC3 cluster. Serratore explained, “After registering our interest, we received a POC product within two days. Not only were we impressed with the technology but we felt Scale Computing would value us as a customer if we made that investment.”
After running a successful POC, Penlon opted for the HC4000 and HC1000 cluster, which offered disaster recovery, high availability, cloning, replication and snapshots, providing complete business continuity.
Enjoying scalability, simplified management and business continuity
The HC3 clusters dramatically reduced management time, allowing the IT department to focus on other challenges rather than IT infrastructure. In addition, the technology offered scale-out architecture providing Penlon the room to expand. Serratore noted, “The Scale solution fits perfectly with our IT roadmap as we can add capacity as and when needed. We don’t need to over provision and can simply expand our environment when needed. With Scale we can align IT strategy with business growth.”
“After implementing Scale, we reduced our management time by hours. Previously we would have spent time managing our VMware environment but the Scale solution is so easy to use we have been able to dramatically reduce management time,” continued Serratore. “Our RPO and RTO dramatically reduced from three days to a matter of minutes. We can now use this time to focus on other IT priorities making a real difference to the business.”
“With Scale we have effectively been able to build a data centre in a server room, without cloud based services. The technology provides servers, storage and virtualisation in one solution with complete transparency,” concluded Serratore.
Original Location: https://www.scalecomputing.com/case_studies/penlon/
-
Scale Computing – A Year in Review 2016
It’s that time of the year again. December is winding to a close and the new year is almost here. Let me first say that we here at Scale Computing hope 2016 was a positive year for you and we want to wish you a wonderful 2017. Now, though, I’d like to reflect back on 2016 and why it has been such an outstanding year for Scale Computing.
“And the award goes to…”
Scale Computing was recognized a number of times this year for technology innovation and great products and solutions, particularly in the midmarket. We won awards at both the Midsize Enterprise Summit and the Midmarket CIO Forum, including Best in Show and Best Midmarket Strategy. Most recently, Scale Computing was honored with an Editor’s Choice Award by Virtualization Review as one of the most-liked products of the year. You can read more about our many awards in 2016 in this press release.
http://blog.scalecomputing.com/wp-content/uploads/2016/09/DAY-3_Scenes_MES_west122-768x512.jpg
Scenes from the 2016 Midsize Enterprise Summit
News Flash!
2016 was the year Scale Computing finally introduced flash storage into our hyperconverged appliances. Flash storage has been around for awhile now but the big news was in how we integrated it into the virtualization infrastructure. We didn’t use any clunky VSA models with resource-hogging virtual appliances. We didn’t implement it as a cache to make up for inefficient storage architecture. We implemented flash storage as a full storage tier embedded directly into the hypervisor. We eliminated all the unnecessary storage protocols that slow down other flash implementations. In short, we did it the right way. Oh, and we delivered it with our own intelligent automated tiering engine called HEAT. You can read more about it here in case you missed it.
Newer, Stronger, Faster
When we introduced the new flash storage in the HC3, we introduced three new HC3 appliance models, the HC1150, HC2150, and HC4150, significantly increasing speed and capacity in the HC3. We also introduced the new HC1100 appliance to replace the older HC1000 model, nearly doubling resource capacity over the HC1000. Finally, we recently announced the preview of our new HC1150D that doubles the compute over the HC1150 and introduces higher capacity with support for 8TB drives. We know your resource and capacity needs grow over time and we’ll keep improving the HC3 to stay ahead of the game. Look for more exciting announcements along these lines in 2017.
Going Solo
In 2016, hyperconvergence with Scale Computing HC3 was opened up to all sorts of new possibilities including the new Single Node Appliance Configuration. Where before you needed at least three nodes in an HC3 cluster, now you can go with the SNAC-size HC3. (Yes, I am in marketing and I make up this corny stuff). The Single Node allows extremely cost effective configurations that include distributed enterprise (small remote/branch offices), backup/disaster recovery, or just the small “s” businesses in the SMB. Read more about the possibilities here.
http://blog.scalecomputing.com/wp-content/uploads/2016/12/hc1100_node_720.png
Cloud-based DR? Check
2016 was also the year Scale Computing rolled out a cloud-based disaster recovery as a service (DRaaS) offering called ScaleCare Remote Recovery Service. This is an exclusive DR solution for HC3 customers that want to protect their HC3 workloads in a secure, hosted facility for backup and disaster recovery. With monthly, per-VM billing, this service is perfect for organizations that can’t or don’t want to host DR in their own facilities and who value the added services like assisted recovery and DR testing. Read more about this DRaaS solution here.
Better Together
2016 has been an amazing year for technology partnerships at Scale Computing. You may have seen some of the various announcements we’ve made the past year. These include Workspot, with whom we’ve partnered for an amazingly simple VDI solution; Information Builders, with whom we partnered for a business intelligence and analytics appliance; Brocade, whose Strategic Collaboration Program we most recently joined to expand the reach of hyperconvergence and HC3; and more. We even achieved Citrix Ready certification this year. Keep an eye out for more announcements to come as we identify more great solutions to offer you.
The Doctor is In
It wouldn’t be much of a new year celebration without a little tooting of my own horn, so I thought I’d mention that 2016 was the year I personally joined Scale Computing, along with many other new faces. Scale Computing has been growing this year. I haven’t properly introduced myself in a blog yet so here it goes. My name is David Paquette, Product Marketing Manager at Scale Computing, and they call me Doctor P around here (or Dr. P for short). It has been a fantastic year for me having joined such a great organization in Scale Computing and I am looking forward to an amazing 2017. Keep checking our blog for my latest posts.
http://blog.scalecomputing.com/wp-content/uploads/2016/12/IMG_1563-e1483040070399-284x300.jpg
Just me, Dr. P
Sincerely, from all of us at Scale Computing, thank you so much for all of the support over the past year. We look forward to another big year of announcements and releases to come. Of course, these were just some of the 2016 highlights, so feel free to look back through the various blog posts and press releases for all of the 2016 news.
Happy New Year!
-
Don’t Double Down on Infrastructure – Scale Out as Needed
There has long been a philosophy in IT infrastructure that whenever you add capacity, you add plenty of room to grow into. This idea is based on traditional architecture that was complex, consisting of many disparate systems held together by the rigorous management of the administrators. The process of scaling out capacity has been a treacherous one that takes weeks or months of stress-filled nights and weekends. These projects are so undesirable that administrators and anyone else involved would rather spend more than they would like, buying more capacity than they need, to put off scaling out again for as long as possible.
There are a number of reasons why IT departments may need to scale out. Hopefully it is because of growth of the business which usually coincides with increased budgets. It could be that business needs have shifted to require more IT services, demanding more data, more computing, and thus more capacity. It could be that the current infrastructure was under-provisioned in the first place and creating more problems than solutions. Whatever the case, sooner or later, everyone needs to scale out.
The traditional planning process for scaling out involves first looking at where capacity is bottlenecking. It could be storage, CPU, RAM, networking, or any level of caching or bussing in between. More than likely it is not just one of these but several, which causes many organizations to simply hit the reset button and replace everything, if they can afford it, that is. Then they implement the new infrastructure only to go through the same process again a few years down the line. Very costly. Very inefficient.
Without replacing the whole infrastructure, administrators must look at the various pieces of their infrastructure that might need to be refreshed or upgraded. This process can seem like navigating a minefield of unforeseen consequences. Maybe you want to swap out the disks in the SAN for faster, larger disks. Can the storage controllers handle the increased speed and capacity? What about the network? Can it handle the increased I/O from faster and deeper storage? Can the CPUs handle it? Good administrators can identify at least some of these dependencies during planning, but it can often take a team of experts to fully understand the complexities, and sometimes only through testing and some trial and error.
Exhausted yet? Fortunately, this process of scaling out has been dramatically simplified with hyperconverged infrastructure. With a clustered, appliance-based architecture, capacity can be added very quickly. For example, with HC3 from Scale Computing, a new appliance can be added to a cluster within minutes, with resources then immediately available, adding RAM, CPU, and storage capacity to the infrastructure.
HC3 even lets you mix and match different appliances in the cluster so that you can add just the capacity you need. Adding the new appliance to the cluster (where it is then called a “node”, of course) is as simple as racking and cabling it and then assigning it with network settings and pointing it at the cluster. The capacity is automatically absorbed into the cluster and the storage added seamlessly to the overall storage pool.
This all means that with hyperconverged infrastructure, you do not need to buy capacity for the future right now. You can get just what you need now (with a little cushion of course), and scale out simply and quickly when you need to in the future. The traditional complexity of infrastructure architecture is now the real bottleneck of capacity scale out. Hyperconverged Infrastructure is the solution.
Original Location: http://blog.scalecomputing.com/dont-double-down-on-infrastructure-scale-out-as-needed/
-
RE: Scale Computing CEO On Attacking VMware's Virtualization Licensing Model
CRNTV has a segment with @jeffready as well:
http://www.crn.com/crntv/index1.htm?searchVideoContent=5303250057001
Sorry that we can't embed it to watch here.
-
Turning Hyperconvergence up to 11
People seem to be asking me a lot lately about incorporating flash into their storage architecture. You probably already know that flash storage is still a lot more expensive than spinning disks. You are probably not going to need flash I/O performance for all of your workloads nor do you need to pay for all flash storage systems. That is where hybrid storage comes in.
Hybrid storage solutions featuring a combination of solid state drives and spinning disks are not new to the market, but because of the cost per GB of flash compared to spinning disk, the adoption and accessibility for most workloads is low. Small and midsize businesses, in particular, may not know if implementing a hybrid storage solution is right for them.
Hyperconverged infrastructure also provides the best of both worlds in terms of combining virtualization with storage and compute resources. How hyperconvergence is defined as an architecture is still up for debate and you will see various implementations with more traditional storage, and those with truly integrated storage. Either way, hyperconverged infrastructure has begun making flash storage more ubiquitous throughout the datacenter and HC3 hyperconverged clustering from Scale Computing is now making it even more accessible with our HEAT technology.
HEAT is HyperCore Enhanced Automated Tiering, the latest addition to the HyperCore hyperconvergence architecture. HEAT combines intelligent I/O mapping with the redundant, wide-striping storage pool in HyperCore to provide high levels of I/O performance, redundancy, and resiliency across both spinning and solid state disks. Individual virtual disks in the storage pool can be tuned for relative flash prioritization to optimize the data workloads on those disks. The intelligent HEAT I/O mapping makes the most efficient use of flash storage for the virtual disk following the guidelines of the flash prioritization configured by the administrator on a scale of 0-11. You read that right. Our flash prioritization goes to 11.
HyperCore gives you high-performing storage on both spinning-disk-only and hybrid tiered storage because it is designed to let each virtual disk take advantage of the speed and capacity of the whole storage infrastructure. The more resources that are added to the clusters, the better the performance. HEAT takes that performance to the next level by giving you fine-tuning options for not only every workload, but every virtual disk in your cluster. Oh, and I should have mentioned it comes at a lower price than other hyperconverged solutions.
Watch this short video demo of HC3 HEAT:
If you still don’t know whether you need to start taking advantage of flash storage for your workloads, Scale Computing can help with free capacity planning tools to see if your I/O needs require flash or whether spinning disks still suffice under advanced, software-defined storage pooling. That is one of the advantages of a hyperconvergence solution like HC3; the guys at Scale Computing have already validated the infrastructure and provide the expertise and guidance you need.