    • Profile
    • Following 0
    • Followers 4
    • Topics 189
    • Posts 309
    • Groups 0

    Posts

    • Love Your Scale? Drop Us a Review!

      Gartner (I know, they don't get much love here) has a new online review system and we'd love it if some of our Mango Fans (we can't safely abbreviate that) posted reviews of their experiences with the Scale HC3!

      Of course, we'd love to see reviews on MangoLassi as well: there is a reviews section, you know.

      posted in Scale Legion scale scale hc3 gartner review
    • 7 Reasons Why I Work at Scale Computing

      I came from a background in software that has spanned software testing, systems engineering, product marketing, and product management, and this year my career journey brought me to Scale Computing as the Product Marketing Manager. During the few months I have been with Scale, I’ve been amazed by the hard work and innovation embodied in this organization. Here are some of the reasons I joined Scale and why I love working here.

      1 – Our Founding Mission

      Our founders are former IT administrators who understand the challenges faced by IT departments with limited budgets and staff. They wanted to reinvent IT infrastructure to solve those challenges and get IT focused on applications. That’s why they helped coin the term “hyperconverged infrastructure”.

      2 – Focus on the Administrator

      Our product family, HC3, was designed from the start to address the needs of datacenters managed by as few as one administrator, combining the features and efficiency of enterprise solutions at any budget. HC3 scales from small shops to the enterprise because its roots are planted in the needs of the individual administrator focused on keeping applications available.

      3 – Second to None Support

      I firmly believe that good support is the cornerstone of successful IT solutions. Our world-class support includes not only hardware replacement but also 24/7/365 phone support from qualified experts. We don't offer any other level of support because we believe every customer, no matter their size or budget, deserves the same level of support.

      4 – 1500+ Customers, 5500+ Installs

      Founded in 2008, we brought HC3 to market in 2012 and have since sold to customers in nearly every industry, including manufacturing, education, government, healthcare, finance, hotel/restaurant, and more. Customer success is our driving force. Our solution is driving that success.

      5 – Innovative Technology

      We designed the HC3 solution from the ground up. Starting with the strengths of open source KVM virtualization, we developed our own operating system, HyperCore, which includes our own block-access, direct-attached storage system with SSD tiering for maximum storage efficiency. We believe that if something is worth doing, it is worth doing the right way.

      6 – Simplicity, Scalability, and Availability

      These core ideas keep us focused on reducing costs and management when it comes to deployment, software and firmware updates, capacity scaling, and minimizing planned and unplanned downtime. I believe in our goal to minimize the cost and management footprint of infrastructure to free up resources for application management and service delivery in IT.

      7 – Disaster Recovery, VDI, and Distributed Enterprise

      HC3 is more than just a simple infrastructure solution. It is an infrastructure platform that supports multiple use cases including disaster recovery sites, virtual desktop infrastructure, and remote office and branch office infrastructure. I love that the flexibility of HC3 allows it to be used in nearly every type of industry.

      Scale Computing is more than just an employer; it is a new passion for me. I hope you keep following my blog posts to learn more about the awesome things we are doing here at Scale and I hope we can help you bring your datacenter into the new hyperconvergence era.

      Originally posted on the Scale Blog: http://blog.scalecomputing.com/7-reasons-why-i-work-at-scale-computing/

      posted in Self Promotion scale scale blog
    • The Role of Hypervisors in Modern Virtualization, Webinar March 16th

      The modern virtualization landscape includes newer technologies like cloud, hyperconvergence, and containers. As the hypervisor becomes more of a commodity underlying these technologies, the market dominance of VMware has eroded and KVM, Xen, and Hyper-V have been on the rise.

      Join us with special guests Trevor Pott (system administrator and Scale customer) and Jason Collier (Scale Computing co-founder) as we discuss how the landscape of virtualization is evolving, and why many are choosing solutions based on hypervisors like KVM over the long-time market leader, VMware.

      Date and time: Thursday, March 16, 2017, 2:00 pm Eastern Daylight Time

      Duration: 1 hour

      Register Here

      posted in Scale Legion trevor potts jason collier scale scale hc3 webinar hyperconverged hyperconvergence hypervisors virtualization kvm vmware xen containerization
    • Backup Is No Joke

      Today is World Backup Day and a reminder to everyone of how important it is to back up your data. Why today? What better day than the day before April Fools' Day to remember to be prepared for anything. You don't want to be the fool who didn't have a solid backup plan.

      But what is a backup? Backing up business critical data is more complex than many people realize, which may be why backup and disaster recovery plans fall apart in the hour of need. Let's start with the basic definition: a backup is a second copy of your data that you keep in case your primary data is lost or corrupted. Pretty simple. Unfortunately, that basic concept is not nearly enough to implement an effective backup strategy. You need some additional considerations.

      1. Location – Where is your backup data stored? Is it on the same physical machine as your primary data? Is it in the same building? The closer your backup is to the primary data, the more chance your backup will suffer the same fate as your primary data. The best option is to have your backup offsite, physically removed from localized events that might cause data loss.
      2. Recovery Point Objective – If you needed to recover from your backup, how much recent data would you lose? Was your last backup taken an hour ago, a day ago, or a week ago? How much potential revenue could be lost along with the data you can’t recover? Taking backups as frequently as possible is the best way to prevent data loss.
      3. Recovery Time Objective – How long will it take to recover your data? If you are taking backups every hour but it takes you several hours or longer to recover from a backup, was the hourly backup effective? Recovery time is as important as recovery point. Have a plan for rapid recovery.
      4. System Backup – For a long time, backups only captured user and application data. Recovery was painful because the OS and applications needed to be rebuilt before restoring the data. These days, entire servers are usually what is backed up, increasing recovery speed.
      5. Multiple Points in Time – Early on, many learned the hard way that keeping one backup is not enough. Multiple backups from different points in time were required for a number of reasons. Sometimes backups failed, sometimes data needed to be recovered from further back in time, and for some businesses, backups need to be kept for years for compliance. The more backups, the more points in time that data can be recovered from.
      6. Backup Storage – One of the greatest challenges to backup over the decades has been storage. Keeping multiple copies of your data quickly starts consuming multiples of storage space. It just isn't economical to require 10x or more of the storage of your primary data for backup. Incremental backups, compression, and deduplication have helped, but backups still take lots of space. Calculating the storage requirements for your backup needs is essential.
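      To make that last point concrete, here is a back-of-the-envelope estimate of backup storage needs (a sketch only; the change rate, retention counts, and compression ratio are illustrative assumptions, not Scale figures):

```python
# Back-of-the-envelope backup storage estimate.
# The change rate, retention counts, and compression ratio below are
# illustrative assumptions, not vendor figures.

def backup_storage_tb(primary_tb, fulls_kept, incrementals_kept,
                      change_rate=0.10, compression=0.5):
    """Estimate total backup storage (TB) for fulls plus incrementals."""
    fulls = fulls_kept * primary_tb
    incrementals = incrementals_kept * primary_tb * change_rate
    return (fulls + incrementals) * compression

# 10 TB primary, 4 weekly fulls and 6 daily incrementals retained:
print(f"{backup_storage_tb(10, fulls_kept=4, incrementals_kept=6):.1f} TB")
# 23.0 TB -- still more than 2x the primary data, even after compression
```

      Even with generous compression assumptions, retention multiplies quickly, which is exactly why calculating this up front matters.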

      Are snapshots backups? Sort of, but not really. Snapshots do provide recovery capabilities within a local system, but generally go down with the ship in any kind of real disaster. That being said, many backup solutions are designed around snapshots and use snapshots to create a real backup by copying the snapshot to an offsite location. These replicated snapshots are indeed backups that can be used for recovery just like any other form of backup.

      Over the decades, there have been a variety of hardware, software, and service-based solutions to tackle backup and recovery. Within the last decade, there has been an increasing movement to include backup and recovery capabilities within operating systems, virtualization solutions, and storage solutions. This movement of turning backup into a feature rather than a secondary solution has only been gaining momentum.

      With the hyperconvergence movement, where virtualization, servers, storage, and management are brought together into a single appliance-based solution, backup and disaster recovery are being included as well. Vendors like Scale Computing are providing all of the backup and disaster recovery capabilities you need. Scale Computing even offers their own cloud-based DRaaS as an option.

      With April Fools' Day upon us, let's remember that backup is no joke. Businesses rely on data and it is our job as IT professionals to protect against the loss of that data with backup. Take some time to review your backup plans and find out if you need to be doing more to prevent the next data loss event lurking around the corner.

      posted in Scale Legion scale scale hc3 backup disaster recovery scale blog hyperconvergence hyperconverged
    • Scale Computing Looks to Grow UK and Ireland Presence with CMS Distribution Agreement

      ChannelBiz UK has an article on our push with CMS to increase our presence in the UK and Irish markets. For anyone in the region interested in Scale products, hopefully this is welcome news. More presence, more opportunities! Let us know how we can help you over in the Isles!

      posted in Scale Legion scale scale hc3 uk ireland
    • Liverpool School of Tropical Medicine Chooses Scale HC3

      Exciting article on Scale Computing this week in the ComputerWeekly online magazine: a case study on how the Liverpool School of Tropical Medicine in the UK replaced its HP server infrastructure with Scale HC3 hyperconverged infrastructure for significant cost savings. The LSTM estimated an £80,000 savings (that's $103K for the Yanks) over simply updating its existing HP servers and SAN inverted pyramid design.

      Not only were there significant cost savings, but the old infrastructure was based on shared-backplane blade servers and a single MSA SAN. A very high risk situation, as many here would point out. With Scale's HCI solution, LSTM no longer has to shoulder that risk, that cost, or the management overhead of dealing with LUNs and all that comes with an IPOD solution. Lower cost and high availability: a major win-win.

      posted in Scale Legion scale scale hc3 hyperconvergence hyperconverged case study computerweekly
    • CRN Top 100 Executives 2017 Recognition - Scale

      It’s been ten years since Scale Computing first opened its doors, with a single objective in mind—to help an underserved segment of the IT world.


      Scale Computing has done just that. We've helped small businesses, universities, and organizations achieve their IT infrastructure goals at a fraction of the price of our competitors. When we began, we were focused on serving the small to mid-sized business segment.

      Since that time, we've continued to expand our offerings to large institutions, government entities, and the global enterprise, including some of the world's largest HCI deployments. In the last quarter, we closed five Fortune 500 deals and we expect to close many more in the coming year.


      Since 2007, there has been a distinct shift from traditional legacy external storage to hyperconverged infrastructure. HCI is now one of the fastest growing segments in the IT infrastructure market.

      Scale Computing was a pioneer long before it was fashionable, even coining the term “hyperconvergence.” We worked hard to develop our patented HC3 systems to be high performing, affordable, and accessible.

      Our intelligent, driven employees and Midwestern work ethic have brought us to where we are today. We are never satisfied with the status quo and will always push for innovation in technology.

      Scale continues to drive innovation in the hyperconverged industry, most recently announcing the HC5150D, boasting 3x the capacity of Scale’s award-winning HC1150 system.

      In the coming months, we have some exciting news and we couldn’t be more thrilled for what’s ahead.

      I’m proud to have been named one of CRN’s Top 25 Innovators of 2017, but it’s really a testament to my amazing team at Scale Computing. They continue to surpass my expectations in pushing the technology and support envelope, so this honor actually belongs to the entire company.

      By @JeffReady

      posted in Scale Legion scale scale blog crn channel
    • You Have Questions, We Have Answers

      Hyperconverged infrastructure is a fairly new but rapidly growing technology. It’s natural to want to research new technologies like hyperconvergence, and unless you are an insider, you likely have questions. Here at Scale Computing, we are experts in hyperconverged infrastructure. We were not only an early innovator in hyperconvergence but also the first to combine a purpose-built hypervisor with a hypervisor-embedded storage system designed specifically for hyperconverged infrastructure.


      Between our Product Team, our R&D Team, our Systems Engineers, our ScaleCare Support Team, and really everyone else, we have the expertise you need to learn whether hyperconvergence is right for your IT department. Our award-winning HC3 hyperconverged infrastructure solution is not only one of the most innovative, but definitely the easiest to use. It is so easy to use, in fact, that not everyone can believe it. We are here to assure you that it is real and we’ll answer all your questions to prove it.

      We’ve taken the time to compile some of our most frequently asked questions into our helpful new FAQ for HC3. Hopefully this document helps answer some of your initial questions. As you can imagine, we get a variety of questions ranging from general hyperconvergence concepts to specific questions about third party solutions. If you don’t find an answer in the FAQ, we’ll be happy to answer your questions by email, phone, or in person at one of the many events we host or attend.

      If you are an existing customer, new customer, or just extra curious, we also have put together our helpful new FAQ for ScaleCare Support. This document will answer nearly all your questions about our expert support, how to use it, and what’s included. Our ScaleCare Support Engineers are experts at everything HC3 and much more. We are happy to help answer any questions about your new or existing HC3 system.

      posted in Scale Legion scale scale hc3 scale blog hyperconvergence hyperconverged scalecare faq
    • Scale River Boat Tour at SpiceWorld Austin 2017

      Scale Computing will once again be hosting its annual boat tour in Austin during SpiceWorld.

      Sign up here:

      https://www.eventbrite.com/e/scale-computing-boatbat-tour-2017-spiceworld-tickets-38648213848

      This year's event will be on Wednesday, October 11 (the last day of SpiceWorld). We generally meet immediately after or during the final happy hour to be sure that everyone can get over to the boat. We WILL NOT be providing transportation. The event is within walking distance or a short Uber ride away (I think Uber is back in Austin?).

      As in past years we will be providing It's All Good BBQ, beer, and soft drinks.

      Registration will be limited to 100 people when it opens. We are planning to have registration open early next week. We will be sure to post an update on the vendor page when registration officially opens. Tell your friends to follow the Scale page so they can secure a spot!

      We look forward to seeing you all in a couple of weeks.

      posted in Scale Legion spiceworld spiceworld 2017 scale scale boat tour
    • How Can I Convert My Existing Workloads to Run on Scale HC3?

      A: There are several options for converting existing workloads to run on HC3. For Windows and Linux VMs, Scale has partnered with Carbonite and their Double-Take product to offer HC3 Move, which can be used to migrate physical (P2V) and virtual (V2V) workloads onto HC3. It requires near-zero downtime and gives the user ultimate control over deciding when to cut over from the source machine onto the HC3 platform.

      In addition to HC3 Move, any backup solution that supports full system bare metal recovery can be used to transfer workloads onto HC3. In some cases virtual machine formats like VMDK can be directly imported to HC3 from other hypervisors (Veeam Example).
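      As an illustration of the offline-import route, a common first step is converting the source disk with the open source qemu-img tool before importing it into HC3. This is a sketch only: the file names are hypothetical, and the format HC3 expects should be checked against the portal documentation.

```python
# Build (but don't run) a qemu-img command that converts an offline VMDK
# to a raw disk image. File names here are hypothetical examples.
import shlex

def convert_cmd(src, dst, src_fmt="vmdk", dst_fmt="raw"):
    """Return the argv list for an offline disk-format conversion."""
    return ["qemu-img", "convert", "-f", src_fmt, "-O", dst_fmt, src, dst]

cmd = convert_cmd("web01.vmdk", "web01.img")
print(shlex.join(cmd))  # qemu-img convert -f vmdk -O raw web01.vmdk web01.img
```

      The -f and -O flags name the input and output formats; qemu-img also handles VHD and qcow2, which covers the foreign-import formats mentioned below.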

      For those users who would like assistance in the migration, Scale also offers services that can do everything from showing end users how to use the HC3 Move tool while performing a single migration (Quickstart service) to performing the entire migration in a full services engagement.

      For more information on other Migration processes and troubleshooting, on the Scale customer or partner portal search for the following documents:

      x2v concepts - P2V/V2V Concepts for the Scale HC3 Cluster

      • General discussion of migration issues and troubleshooting
        on the partner portal or the customer portal

      On that page you will also find information on some additional tools, including:

      clonezilla - P2V and V2V Migrations with Clonezilla and HC3

      • Free open source tool that can migrate existing physical and virtual machines into HC3 VMs
        and

      P2V/V2V with Acronis and the Scale HC3 Cluster

      Other documents available include:

      foreign vm import - Import a Foreign VM or Appliance into HC3 Using the HC3 Web Interface

      • A free way to directly import many offline virtual disk formats into HC3 - vmdk, vhd, ova
        on partner portal
        on customer portal

      More information on Migration to HC3.

      posted in Scale Legion scale scale hc3 v2v migration
    • RE: What Are You Doing Right Now

      @RojoLoco said in What Are You Doing Right Now:

      @Dashrender said in What Are You Doing Right Now:

      @RojoLoco said in What Are You Doing Right Now:

      @scottalanmiller said in What Are You Doing Right Now:

      @RojoLoco said in What Are You Doing Right Now:

      @scottalanmiller said in What Are You Doing Right Now:

      @aidan_walsh said in What Are You Doing Right Now:

      @scottalanmiller said in What Are You Doing Right Now:

      @aidan_walsh Nice, this is who we have in Rochester....

      https://blackbuttondistilling.com/

      I'm not usually one for flavoured spirits but that Apple Pie Moonshine sounds great.

      Oh no, it's disgusting. Moonshine is like a new bizarre fake term in the US for "utterly shit spirits". No one would put the label moonshine on a product if it was palatable. That's for the hipster kids to think that they are drinking something illegal when, in fact, it's just overpriced garbage.

      So true. The only real moonshine comes from a bootlegger up in the hills, in an unlabeled mason jar. I am very fortunate to be semi-connected to some rural moonshine aficionados, and what they bring over on holidays is the truth.

      Same here, I drink actual moonshine and it is awesome. What is labeled that in stores is just a gimmick. Nothing like moonshine in taste OR in actuality.

      Exactly. Real 'shine has a super clean flavor that is inimitable. And if you want to test the authenticity, pour a tiny bit in the jar lid and light it... should be a pure blue flame, anything else is bs.

      I expect to see some at ML Con2!

      You forget the first rule of moonshine club...

      Probably a common problem in that club - many things become forgotten.

      posted in Water Closet
    • RE: The Four Things That You Lose with Scale Computing HC3

      We recommend that you have four switches: two in a high availability pair for the backplane and two in a high availability pair for the normal network traffic. Of course, the goal here is to achieve a totally highly available system, not just for the Scale HC3 cluster but for the network itself. Having your Scale cluster up and running won't do you any good if the network it is attached to is down. But the system will run with fewer switches.

      posted in Self Promotion
    • VDI with Workspot and HC3

      One of the questions we often get for our HC3 platform is, “Can it be used for virtual desktop infrastructure (VDI)?” Yes, of course, it can. In addition to solutions we support like RDS or Citrix, we are very excited about our partnership with Workspot and their VDI 2.0 solution. But first, I want to explain a bit about why we think VDI on HC3 makes so much sense.

      VDI greatly benefits from simplicity in infrastructure. The idea behind VDI is to reduce both infrastructure management and cost by moving workloads from front-end devices to the back-end infrastructure. This makes it much easier to control resource utilization and manage images. HC3 provides that simple infrastructure: going from box to running VMs takes less than an hour. Also, the entire firmware and software stack, including the hypervisor, can be updated, or scaled out with additional capacity, without downtime. Your desktops will never be as highly available as on HC3. Simple, scalable, and available are the ideas HC3 is built on.

      So why Workspot on HC3? Workspot brought together some of the original creators of VDI to reinvent it as a next generation solution. The CTO of Workspot was one of the founding engineers who coded the VMware View VDI product! What makes it innovative, though? By leveraging cloud management infrastructure, Workspot simplifies VDI management for the IT generalist while supporting BYOD for the modern workplace. Workspot on HC3 can be deployed in under an hour, making it possible to deploy a full VDI solution in less than a day.


      We did validation testing with Workspot on HC3 and were able to run 175 desktop VMs on a 3-node HC1150 cluster using LoginVSI as a benchmark for performance. We also validated a 3-node HC4150 cluster with 360 desktops, with similar results. You can see a more detailed description of the reference architecture here. By adding more nodes, and even additional clusters, the capacity can be expanded almost infinitely, but more importantly, just as much as you need, when you need it. We think these results speak for themselves in positioning this solution as a perfect fit for the midmarket, where HC3 already shines.
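      As a quick sanity check on those densities (a rough per-node average only; real sizing depends on the LoginVSI workload profile, and scaling linearly to other cluster sizes is an assumption):

```python
# Per-node desktop density from the validated 3-node configurations.
# model: (desktop VMs, nodes); figures from the LoginVSI validation above.
validated = {"HC1150": (175, 3), "HC4150": (360, 3)}

for model, (desktops, nodes) in validated.items():
    print(f"{model}: ~{desktops / nodes:.0f} desktops per node")
# HC1150: ~58 desktops per node
# HC4150: ~120 desktops per node
```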

      Maybe you’ve been considering VDI but have been hesitant because of the added complexity of having to create even more traditional virtualization infrastructure in your datacenter. It doesn’t have to be that way. Workspot and Scale Computing are both in the business of reducing complexity and cost to make these solutions more accessible and more affordable. Just take a look and you’ll see why we continue to do things differently than everyone else.

      Click here for the press release.

      posted in Scale Legion
    • Scale Webinar: VDI Made Easy with Hyperconvergence

      Traditional desktop management, including traditional VDI solutions, is complex, with too many moving parts and a high cost of ownership. Modern VDI technology combined with hyperconverged infrastructure simplifies VDI and makes it accessible within the budgets of small and mid-size datacenters.

      Scale Computing and Workspot will show you how hyperconvergence can simplify VDI and make it budget friendly in a one-time-only webinar on Thursday, June 9 at 11:00 AM (EDT). In this webinar you will learn about:

      • How simple it is to deploy a hyperconverged infrastructure solution with VDI
      • How a simple VDI solution can support any type of user and enable BYOD securely
      • How using VDI on hyperconverged infrastructure further simplifies management and maintenance
      • How the user experience of hyperconverged VDI compares to traditional solutions
      • How you can get hyperconverged VDI at half the cost of traditional desktop solutions

      Register

      posted in Self Promotion scale scale hc3 workspot vdi virtualization webinar
    • RE: Introducing the Single Node Scale HC3 Appliance

      @dafyre said in Introducing the Single Node Scale HC3 Appliance:

      Will end-users be able to grow slowly? IE: Grow from one node to 2... and then eventually buy a third?

      or would it be a jump straight from 1 node to 3 nodes?

      Unfortunately, at this time there is no means of growing from one node to two. The jump is from one node to three, and one node at a time from then on. This is because of the need for a witness to avoid split brain in the cluster. One node avoids this by not having high availability; three or more nodes handle it by always having a witness. At two nodes, there are complications that do not exist otherwise. So at this time, there is no two node option.

      Except, of course, if you were in a situation where two nodes would be useful through replication. That works with two nodes.
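      The witness requirement comes down to simple majority quorum. Here is an illustrative sketch (not HyperCore's actual clustering code) of why a two-node cluster can't safely break a tie:

```python
# Majority quorum: a partition may keep running only if it can see
# more than half of the cluster. Illustrative sketch, not HyperCore code.

def has_quorum(nodes_visible, cluster_size):
    return nodes_visible > cluster_size / 2

# 3-node cluster split 2/1: the 2-node side keeps quorum, the 1-node
# side stops, so there is never a split brain.
print(has_quorum(2, 3), has_quorum(1, 3))  # True False

# 2-node cluster split 1/1: neither side has a majority, so neither
# can safely continue -- the complication that rules out two nodes.
print(has_quorum(1, 2))  # False
```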

      posted in Scale Legion
    • Scale HC3 New and Improved Real-Time Per VM Statistics

      This one is actually from last month, but it never got posted and is still pretty relevant, so I'm sharing it with you.

      When we designed HC3 clusters, we made them fault-tolerant and highly available so that you did not need to sit around all day staring at the HC3 web interface in case something went wrong. We designed HC3 so you could rest easy knowing your workloads were on a reliable infrastructure that didn’t need a babysitter. But still, when you need to manage your VM workloads on HC3, you need fast reliable data to make management decisions. That’s why we have implemented some new statistics along with our new storage features.

      If you haven’t already heard the news (click here), we have integrated SSD flash storage into our already hyper-efficient software-defined storage layer. We knew this would make you even more curious about your per-VM IOPS, so we added that statistic both as a cluster-wide statistic and as a per-VM statistic, refreshed continuously in real time.

      Up until now, you have been used to at-a-glance monitoring of CPU utilization, RAM utilization, and storage utilization for the cluster; now you will see the cluster-wide IOPS statistic right alongside what you were already seeing. For individual VMs, you are now going to see real-time statistics for both storage utilization and IOPS, right on the main web interface view.


      Why are we doing this now? The new flash storage integration and automated tiering architecture allows you to tune the priority of flash utilization on the individual virtual disks in your VMs. Monitoring the IOPS for each VM will help guide you as you tune the virtual disks for maximum performance. You’ll not only see the benefits of the flash storage more clearly in the web interface but you will see the benefits of tuning specific workloads to make the best use of the flash storage in your cluster.

      Take advantage of these new statistics when you update your HyperCore software and you’ll see the benefit of monitoring your storage utilization with more granularity. Talk to your ScaleCare support engineers to learn how to get this latest update.

      Original post: http://blog.scalecomputing.com/new-and-improved-real-time-per-vm-statistics/

      posted in Self Promotion scale storage iops hyperconvergence scale hc3 hypercore
    • IT Infrastructure: Deploy. Integrate. Repeat.

      Original Post: http://blog.scalecomputing.com/it-infrastructure-deploy-integrate-repeat/

      Have you ever wondered if you are stuck in an IT infrastructure loop, continuously deploying the same types of components and integrating them into an overall infrastructure architecture? Servers for CPU and RAM, storage appliances, hypervisor software, and disaster recovery software/appliances are just some of the different components that you’ve put together from different vendors to create your IT infrastructure.

      This model of infrastructure design, combining components from different vendors, has been around for at least a couple of decades. Virtualization has reduced the hardware footprint, but it added one more component, the hypervisor, to the overall mix. As component technologies like compute and storage have evolved alongside the rise of virtualization, they have been modified to function together but have not necessarily been optimized for efficiency.

      Take storage, for example. SANs were an obvious fit for virtualization early on. However, the layers of storage protocols and virtual storage appliances used to combine the SAN with virtualization were not efficient. If not for SSD storage, the performance of these systems would be unacceptable at best. But IT continues to implement these architectures because it has been done this way for so long, regardless of the inherent inefficiencies. Luckily, the next generation of infrastructure has arrived in the form of hyperconvergence to break this routine.

      Hyperconverged infrastructure (HCI) combines compute, storage, virtualization, and even disaster recovery into a single appliance that can be clustered for high availability. No more purchasing all of the components separately from different vendors, no more making sure all of the components are compatible, and no more dealing with support and maintenance from multiple vendors on different schedules.

      Not all HCI systems are equal, though, as some still rely on separate components. Some use third-party hypervisors that require separate licensing. Some still adhere to SAN architectures that require virtual storage appliances (VSAs) or other inefficient storage designs with excessive resource overhead, relying on SSD caching to overcome those inefficiencies.

      Not only does HCI reduce vendor management and complexity, but when done correctly, it embeds storage in the hypervisor and offers it as a direct attached, block access storage system to VM workloads. This significantly improves storage I/O performance for virtualization. The architecture provides excellent performance on spinning disk, so when SSD is added as a second storage tier, storage performance improves even further. Also, because the storage is included in the appliance, it eliminates managing a separate SAN appliance.

      HCI goes even further in simplifying IT infrastructure to allow management of the whole system from a single interface. Because the architecture is managed as a single unit and prevalidated, there is no effort spent making sure the various components work together. When the system is truly hyperconverged, including the hypervisor, there is greater control in automation so that software and firmware updates can be done without disruption to running VMs. And for scaling out, new appliances can be added to a cluster without disruption as well.

      The result of these simplifications and improvements is an infrastructure that can be deployed quickly, scaled easily, and managed with minimal effort. It embodies many of the benefits of the cloud, where the infrastructure is virtually transparent. Instead of managing hardware and infrastructure, administrators can focus their time on applications and processes.

      Infrastructure should no longer require certified storage experts, virtualization experts, or any kind of hardware experts. Administrators should no longer need entire weekends or month-long projects to deploy and integrate infrastructure, or spend sleepless nights dealing with failures. Hyperconvergence breaks the cycle of infrastructure assembled from a patchwork of vendors and components. Instead, it makes infrastructure a simple, available, and trusted commodity.

      posted in Scale Legion scale systems architecture scale hc3 scale blog
      scaleS
      scale
    • The VSA is the Ugly Result of Legacy Vendor Lock-Out

      VMware and Hyper-V with the traditional Servers+Switches+SAN architecture – widely adopted by the enterprise and the large mid-market – works, and works relatively well. But it is complex (many moving parts, usually from different vendors), necessitates multiple layers of management (server, switch, SAN, hypervisor), and depends on storage protocols to function at all. Historically, this has meant either staffing people from several different IT disciplines to virtualize and manage a VMware/Hyper-V environment effectively, or smaller companies taking a pass on virtualization altogether because the hard and soft costs put HA virtualization out of reach.

      0_1467048816273_legacy-300x171.jpg

      With the advent of hyperconvergence in the modern datacenter, HCI vendors had a limited set of options for the shared storage part of the equation. Lacking access to the VMkernel and NTOS kernels, they could either virtualize the entire SAN and run an instance of it as a VM on each node in the HCI architecture (horribly inefficient), or move to hypervisors that aren’t from VMware or Microsoft. Most took the first choice, even though it carries a very high cost in resource efficiency and I/O path complexity and nearly doubles the hardware required to run the architecture. They did this for the sole reason that it was the only way to keep building on the legacy vendors despite their lock-out and lack of access. They also found this approach (known as the VSA, or Virtual SAN Appliance) easier than tackling the truly difficult job of building an entire architecture from the ground up, clean-sheet style.

      The VSA approach – virtualizing the SAN and its controllers – is also known as pulling the SAN into the servers. It moves the SAN up into the host servers through the use of a virtual machine on each box. This did in fact simplify implementation and management by eliminating the separate physical SAN (but not its resource requirements, storage protocols, or overhead – in actuality, it duplicates that overhead on every node, turning one SAN into three or four or more). However, it did nothing to simplify the data path. Quite the opposite: it complicated the path to disk by turning the I/O path from:

      application -> RAM -> disk

      into:

      application -> RAM -> hypervisor -> RAM -> SAN controller VM -> RAM -> hypervisor -> RAM -> write-cache SSD -> erasure code (SW R5/6) -> disk -> network to next node -> RAM -> hypervisor -> RAM -> SAN controller VM -> RAM -> hypervisor -> RAM -> write-cache SSD -> erasure code (SW R5/6) -> disk

      This approach consumes so many resources that one could run an entire SMB-to-midmarket datacenter on just the CPU and RAM allocated to these VSAs.
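To make the contrast concrete, one can simply count the handoffs in the two write paths quoted above. A trivial sketch (Python used only for the arithmetic; the path stages are taken verbatim from the text):

```python
# Count the serialization points (handoffs) in each write path.
# A handoff is one transition between stages: len(path) - 1.

direct_path = ["application", "RAM", "disk"]

vsa_path = [
    "application", "RAM", "hypervisor", "RAM", "SAN controller VM", "RAM",
    "hypervisor", "RAM", "write-cache SSD", "erasure code (SW R5/6)", "disk",
    "network to next node", "RAM", "hypervisor", "RAM", "SAN controller VM",
    "RAM", "hypervisor", "RAM", "write-cache SSD", "erasure code (SW R5/6)",
    "disk",
]

print(len(direct_path) - 1)  # 2 handoffs
print(len(vsa_path) - 1)     # 21 handoffs before the write is fully redundant
```

Every one of those handoffs costs CPU cycles, memory copies, or both, on every single write.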

      0_1467048905297_VSA-300x156.jpg

      This “stack dependent” approach did, in fact, speed time-to-market for the HCI vendors that implemented it. But between the extra hardware requirements, the extra burden on the I/O path, and the use of SSD/flash primarily as a cache to compensate for that now-tortured I/O path, it still produced solutions priced, and complicated, beyond the reach of the modern SMB.

      HCI done the right way – HES

      The right way to build an HCI architecture is to take the exact opposite path from the VSA-based vendors. The design goal – eliminating the dedicated storage servers, the storage protocol overhead, the resources consumed, and the associated gear – is met by moving the hypervisor directly into the OS of a clustered platform that runs storage directly adjacent to the kernel (known as HES, or in-kernel). This leverages direct I/O, dramatically simplifying the architecture while regaining the efficiency originally promised by virtualization.

      0_1467048927135_scribe-300x186.jpg

      This approach turns the I/O path back into:

      application -> RAM -> disk -> backplane -> disk

      This complete stack-owner approach, in addition to regaining the efficiency promised by HCI, allows features and functionality that historically had to be provided by third parties in the legacy and VSA approaches to be built directly into the platform. That enables true single-vendor solutions and radically simplifies the SMB/SME datacenter at every level – lower cost of acquisition, lower TCO – making HCI affordable and approachable for the SMB and mid-market. It eliminates the extra hardware requirements, the overhead of the SAN, and the overhead of storage protocols and re-serialization of I/O. It returns efficiency to the datacenter.

      When the I/O paths are compared side by side, the differences in overhead and efficiency become obvious, and the penalties caused by legacy vendor lock-in really stand out: VSA-based approaches (in a basic 3-node implementation) use as much as 24 vCores and up to 300GB of RAM (depending on the vendor) just to power the VSAs and boot themselves, versus HES using a fraction of a core per node and 6GB of RAM total. Efficiency matters.
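A back-of-the-envelope comparison using the worst-case figures above makes the gap explicit. The per-node HES core count below is an assumption (the text says only “a fraction of a core”, so a quarter core is used for illustration):

```python
# Overhead comparison for a basic 3-node cluster, using the article's
# worst-case VSA figures. The 0.25 core/node for HES is an assumed value.

nodes = 3

vsa_vcores_total = 24          # up to 24 vCores across the cluster
vsa_ram_gb_total = 300         # up to 300 GB RAM across the cluster

hes_vcores_per_node = 0.25     # "a fraction of a core per node" (assumption)
hes_ram_gb_total = 6           # 6 GB RAM total

hes_vcores_total = hes_vcores_per_node * nodes

print(vsa_vcores_total / hes_vcores_total)  # 32.0 -> 32x the vCPU overhead
print(vsa_ram_gb_total / hes_ram_gb_total)  # 50.0 -> 50x the RAM overhead
```

Even with a generous estimate for the VSA side, that is CPU and RAM an SMB could otherwise spend on actual workloads.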

      0_1467048953985_diff-300x123.jpg

      Original post: http://blog.scalecomputing.com/the-vsa-is-the-ugly-result-of-legacy-vendor-lock-out/

      posted in Self Promotion scale scale hc3 scale blog hyperconvergence
      scaleS
      scale
    • RE: Support Tips/Tricks and maybe a Treat or two!

      @Kelly said in Support Tips/Tricks and maybe a Treat or two!:

      -ordering my first Scale cluster tomorrow

      A perfect holiday gift idea for all of your techie loved ones! Or for yourself. Congrats!

      posted in Scale Legion
      scaleS
      scale
    • HEAT Up I/O with a Flash Retrofit

      If your HC3 workloads need better performance and faster I/O, you can soon take advantage of flash storage without having to replace your existing cluster nodes. Scale Computing is rolling out a service to help you retrofit your existing HC2000/2100 or HC4000/4100 nodes with flash solid state drives (SSDs) and update your HyperCore version to start using hybrid flash storage without any downtime. You get the full benefits of HyperCore Enhanced Automated Tiering (HEAT) in HyperCore v7 when you retrofit with flash drives.

      You can read more about HEAT technology in my blog post Turning Hyperconvergence to 11.

      Now, before you start ordering your new SSD drives for flash storage retrofit, let’s talk about the new storage architecture designed to include flash. You may already be wondering how much flash storage you need and how it can be divided among the workloads that need it, or even how it will affect your future plans to scale out with more HC3 nodes.

      The HC3 storage system uses wide striping across all nodes in the cluster to provide maximum performance and availability in the form of redundancy across nodes. With all spinning disks, any disk was a candidate for redundant writes from other nodes. With the addition of flash, redundancy is intelligently segregated between flash and spinning disk storage to maximize flash performance.

      A write to a spinning disk will be redundantly written to a spinning disk on another node, and a write to an SSD will be redundantly written to an SSD on another node. Therefore, just as you need at least three nodes of storage and compute resources in an HC3 cluster, you need a minimum of three nodes with SSD drives to take advantage of flash storage.
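That tier-matched redundancy rule can be pictured as a placement function: a replica goes to a peer node that offers the same storage tier, which is also why at least three SSD nodes are needed (an SSD write must always have a flash peer available, even with one node down). The sketch below is illustrative only; the node layout and the least-used policy are assumptions, not HyperCore's actual placement logic.

```python
# Illustrative tier-aware replica placement: mirror HDD writes to an HDD
# tier on a peer node, and SSD writes to an SSD tier on a peer node.

def place_replica(write_node, write_tier, cluster):
    """Pick a peer node with the same storage tier as the original write."""
    candidates = [n for n in cluster
                  if n is not write_node and write_tier in n["tiers"]]
    if not candidates:
        raise RuntimeError(f"no peer node with a {write_tier} tier")
    # Simplest possible policy: least-used peer in that tier (assumed).
    return min(candidates, key=lambda n: n["tiers"][write_tier])

cluster = [
    {"name": "n1", "tiers": {"hdd": 40, "ssd": 10}},  # % used per tier
    {"name": "n2", "tiers": {"hdd": 55, "ssd": 5}},
    {"name": "n3", "tiers": {"hdd": 30, "ssd": 20}},
]

print(place_replica(cluster[0], "ssd", cluster)["name"])  # n2
print(place_replica(cluster[0], "hdd", cluster)["name"])  # n3
```

If only two nodes carried SSD, an SSD write on one of them would have exactly one possible flash peer, and none at all during a node failure, which is why the three-node flash minimum mirrors the three-node cluster minimum.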

      Consider also, when retrofitting, that you will be replacing an existing spinning disk with the new SSD. The new SSD may be of a different capacity than the disk it replaces, which might affect your overall storage pool capacity. You may already be in a position to add overall capacity, in which case larger SSD drives are the right fit, or adding an additional flash storage node along with the retrofit may be the right choice. You can reach the three-node minimum of SSD nodes through any combination of retrofitting and adding new SSD-tiered nodes to the cluster.

      Retrofitting existing clusters is provided as a service that includes our Scale Computing experts helping you assess your storage needs and determine the best plan for incorporating flash into your existing HC3 cluster. Whether you have a small, medium, or large cluster implementation, we will assist with both planning and implementation to avoid any downtime or disruption.

      However you decide to retrofit and implement flash storage in your HC3 cluster, you will immediately begin seeing the benefits as new data is written to high performing flash and high I/O blocks from spinning disk are intelligently moved to flash storage for better performance. Furthermore, you have full control of how SSD is used on a per virtual disk basis. You’ll be able to adjust the level of SSD usage on a sliding scale to take advantage of both flash and spinning disk storage where you need each most. It’s the flash storage solution you’ve been waiting for.
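The sliding-scale control described above can be pictured as a weighted split of the flash tier across virtual disks. The 0-11 scale and the simple proportional policy below are assumptions for illustration (HEAT actually moves hot blocks dynamically, per the description above, rather than carving static shares):

```python
# Illustrative model of per-virtual-disk flash priority: each virtual disk
# gets a slider value, and the flash tier is shared in proportion to them.
# The 0-11 scale and proportional split are assumed, not HyperCore's algorithm.

def flash_shares(vdisk_priorities, flash_capacity_gb):
    """Split flash capacity across virtual disks by priority weight."""
    total = sum(vdisk_priorities.values())
    if total == 0:
        return {name: 0.0 for name in vdisk_priorities}
    return {name: flash_capacity_gb * p / total
            for name, p in vdisk_priorities.items()}

# Three virtual disks: a database dialed up to 11, a file share at a
# middling setting, and an archive disk with flash turned off entirely.
shares = flash_shares({"db": 11, "files": 4, "archive": 0}, 1500)
print(shares)  # {'db': 1100.0, 'files': 400.0, 'archive': 0.0}
```

The point of the model is the trade-off it exposes: dialing one virtual disk up necessarily dilutes the flash available to the others, which is exactly the control the per-virtual-disk slider gives the administrator.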

      posted in Self Promotion storage ssd heat scale scale hc3 hyperconvergence
      scaleS
      scale