    • 3-Node Minimum? Not So Fast

      http://blog.scalecomputing.com/3-node-minimum-not-so-fast/

      For a long time, when you purchased HC3, you were told there was a 3-node minimum. This minimum of three nodes is what is required to create a resilient, highly available cluster. The HC3 architecture, based on this 3-node cluster design, prevents data loss even in the event of a whole node failure. Despite these compelling reasons to require 3 nodes, Scale Computing last week announced a new single node appliance configuration. Why now?

      Recent product updates have enhanced the replication and disaster recovery capabilities of HC3 to make a single node appliance a compelling solution in several scenarios. One such scenario is the distributed enterprise. Organizations with multiple remote or branch offices may not have infrastructure requirements that warrant a 3-node cluster. Instead, they can benefit from a single node appliance as a right-sized solution for their infrastructure.


      In a remote or branch office, a single node can run a number of workloads and easily be managed remotely from a central office. In spite of the lack of clustered, local high availability, single nodes can easily be replicated for DR back to an HC3 cluster at the central office, giving them a high level of protection. Deploying single nodes in this way offers an infrastructure solution for distributed enterprise that is both simple and affordable.

      Another compelling scenario where the single node makes perfect sense is as a DR target for an HC3 cluster. Built-in replication can be configured quickly, and without extra software, to a single HC3 node located locally or remotely. While you will likely want the local high availability and data protection a 3-node cluster provides for primary production, a single node may suffice for a DR strategy where you only need to fail over your most critical VMs to continue operations temporarily. This use of a single node appliance is both cost effective and provides a high level of protection for your business.


      Finally, although a single node has no clustered high availability, for very small environments the single node appliance can be deployed with a second appliance as a DR target to provide an acceptable level of data protection and availability for many small businesses. The same ease of deployment, ease of management, and DR capabilities that make a full-blown HC3 cluster easy to love apply to the single node appliance as well.

      Find out more about the single node appliance configuration (or as I like to call it, the SNAC-size HC3) in our press release and solution brief.

      posted in Scale Legion scale scale hc3 hyperconvergence virtualization
    • Explain Hyperconvergence Like I Am Five

      It’s supposedly the wave of the future, but you’re not sure if hyperconvergence is a new type of server architecture or the name of one of the Decepticons from Transformers.

      If you’ve always wanted to know what the heck hyperconvergence is all about, but want things explained from the beginning and without all the marketing buzzwords, then clear your calendar for January 21st at 11 a.m. CT!

      Join Scale Computing as they go back (WAY back) and explain hyperconvergence as if you were still five.

      Find out how:

      • Hyperconvergence is like Optimus Prime because it handles compute, storage, networking and virtualization by itself
      • Hyperconvergence simplifies data center operations, and lets users do more with less
      • Hyperconvergence helps organizations avoid fingerpointing among OS, hypervisor, and storage vendors

      Make sure you register for this exciting Spiceworks Webinar before it rolls out; you won't want to miss it!

      What questions do you have about hyperconvergence?

      posted in Self Promotion scale scale hc3 hyperconvergence virtualization storage rain spiceworks webinar
    • What do DDOS attacks mean for Cloud users?

      Last Friday, a DDOS attack disrupted major parts of the internet in both North America and Europe. The attack seems to have largely targeted DNS provider Dyn, disrupting access to major service providers such as Level 3, Zendesk, Okta, Github, Paypal, and more, according to sources like Gizmodo. This kind of botnet-driven DDOS attack is a harbinger of future attacks that can be carried out through an increasingly connected world of poorly secured Internet of Things (IoT) devices.


      This disruption highlights a particular vulnerability for businesses that have chosen to rely on cloud-based services like IaaS, SaaS, or PaaS. The ability to connect to these services is critical to business operations; even though the service may be running, if users cannot connect, it is considered downtime. What is particularly scary about these attacks, especially for small and midmarket organizations, is that they become victims of circumstance from attacks directed at larger targets.

      As the IoT becomes more of a reality, with more and more devices of questionable security joining the internet, the potential for these attacks, and their severity, can only increase. I recently wrote about how to compare cloud computing and on-prem hyperconverged infrastructure (HCI) solutions, and one of the decision points was reliance on the internet. So it is not only a matter of ensuring a stable internet provider, but also of the stability of the internet in general, given the possibility of attacks targeting any number of different services.

      Organizations running services on-prem were not affected by this attack because it did not affect any internal network environments. Choosing to run infrastructure and services internally definitely mitigates the risk of outage from external forces like collateral damage from attacks on service providers. Many organizations that choose cloud services do so for simplicity and convenience because traditional IT infrastructure, even with virtualization, is complex and can be difficult to implement, particularly for small and midsize organizations. It has only been recently that hyperconverged infrastructure has made on-prem infrastructure as simple to use as the cloud.

      It is still uncertain how organizations will ultimately balance their IT infrastructure between on-prem and cloud in what is loosely called hybrid cloud. Likely the balance will simply continue to evolve with emerging technology. At the moment, however, organizations can choose easy-to-use hyperconverged infrastructure for increased security and stability, or go with cloud providers for completely hands-off management and third-party reliance.

      As I mentioned in my cloud vs. HCI article, there are valid reasons to go with either, and the solution may well be a combination of the two. Organizations should be aware that on-prem IT infrastructure no longer needs to be the complicated mess of server vendors, storage vendors, hypervisor vendors, and DR solution vendors. Hyperconverged infrastructure is a viable option for organizations of any size to keep services on-prem, stable, and secure against collateral DDOS damage.

      posted in Scale Legion scale scale blog ddos security hyperconvergence
    • Cloud Computing vs. Hyperconvergence

      As IT departments look to move beyond traditional virtualization into cloud and hyperconverged infrastructure (HCI) platforms, they have a lot to consider. There are many types of organizations with different IT needs, and it is important to determine whether those needs align more with cloud or with HCI. Before I dig into the differences, let me go over the similarities.

      Both cloud and HCI tend to offer a similar user experience highlighted by ease of use and simplicity. One of the key features of both is simplifying the creation of VMs by automatically managing the pools of resources. With cloud, the infrastructure is all but transparent as the actual physical host where the VM is running is far removed from the user. With live migration capabilities and auto provisioning of resources, HCI can provide nearly the same experience.

      As for storage, software-defined storage pooling has made storage management practically as transparent in HCI as it is in cloud. In many ways, HCI is nearly a private cloud. Without the complexity of traditional underlying virtualization architecture, HCI makes infrastructure management turnkey and lets administrators focus on the workloads and applications, just like the cloud, while keeping everything on prem and not managed by a third party.

      Still, there are definite differences between cloud and HCI so let’s get to those. I like to approach these with a series of questions to help guide between cloud and on prem HCI.

      Is your business seasonal?

      If your business is seasonal, the pay-as-you-go Opex pricing model of cloud might make more sense, as might the bursting ability of cloud. If you need lots of computing power but only during short periods of the year, cloud might be best. If your business follows a more typical schedule of steady business throughout the year with some seasonal bumps, then an on prem Capex investment in HCI might be the best option.

      Do you already have IT staff?

      If you already have IT staff managing an existing infrastructure that you are looking to replace, an HCI solution will be both easy to implement and will allow your existing staff to change focus from infrastructure management to implementing better applications, services, and processes. If you are currently unstaffed for IT, cloud might be the way to go since you can get a number of cloud based application services for users with very little IT administration needed. You may need some resources to help make a variety of these services work together for your business, but it will likely be less than with an on prem solution.

      Do you need to meet regulatory compliance on data?

      If so, you are going to need to look into the implications of your data and services hosted and managed off site by a third party. You will be reliant on the cloud provider to provide the necessary security levels that meet compliance. With HCI, you have complete control and can implement any level of security because the solution is on prem.

      Do you favor Capex or Opex?

      Pretty simple here. Cloud is Opex. HCI can be Capex and is usually available as Opex as well through leasing options. The cloud Opex is going to be less predictable because many of the costs are based on dynamic usage, whereas the Opex with HCI should be completely predictable with a monthly leasing fee. Consider further that the Opex for HCI is usually in the form of lease-to-own, so it drops off dramatically once the lease period ends, as opposed to cloud Opex, which is perpetual.
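      To make the predictability point concrete, here is a minimal Python sketch comparing cumulative spend under a perpetual cloud Opex versus a lease-to-own HCI Opex. All prices are hypothetical placeholders, not actual cloud or Scale pricing.

```python
# Hypothetical monthly costs -- illustrative placeholders, not real pricing.
CLOUD_MONTHLY = 1200       # perpetual cloud Opex, billed forever
HCI_LEASE_MONTHLY = 1500   # lease-to-own HCI payment
LEASE_TERM_MONTHS = 36     # HCI Opex drops off once the lease ends

def cumulative_cost(monthly, months, term=None):
    """Total spend after `months`; payments stop after `term` if given."""
    if term is None:
        return monthly * months          # perpetual: pay every month
    return monthly * min(months, term)   # lease-to-own: pay only during the term

for months in (12, 36, 60):
    cloud = cumulative_cost(CLOUD_MONTHLY, months)
    hci = cumulative_cost(HCI_LEASE_MONTHLY, months, term=LEASE_TERM_MONTHS)
    print(f"{months:>2} months: cloud ${cloud:,} vs HCI lease ${hci:,}")
```

      With these placeholder numbers, the lease costs more per month but stops at month 36, while the cloud bill continues indefinitely; the actual crossover point depends entirely on the real quotes you receive.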

      Can you rely on your internet connection?

      Cloud is 100% dependent on internet connectivity so if your internet connection is down, all of your cloud computing is unavailable. The internet connection becomes a single point of failure for cloud. With HCI, internet connection will not affect local access to applications and services.

      Do you trust third party services?

      If something goes wrong with cloud, you are dependent on the cloud provider to correct the issue. What if your small or medium sized cloud provider suddenly goes out of business? Whatever happens, you are helpless, waiting, like an airline passenger waiting on the tarmac for a last minute repair. With HCI, the solution is under your control and you can take action to get systems back online.

      Let me condense these into a little cheat sheet for you.


      One last consideration that I don’t like to put into the question category is the ability to escape the cloud if it doesn’t work out. Why don’t I like to make it a question? Maybe I just haven’t found the right way to ask it without making cloud sound like some kind of death trap for your data, and I’m not trying to throw cloud under the bus here. Cloud is a good solution where it fits. That being said, it is still a valid consideration.

      Most cloud providers have great onboarding services to get your data to the cloud more efficiently but they don’t have any equivalent to move you off. It is not in their best interest. Dragging all of your data back out of the cloud over your internet connection is not a project anyone would look forward to. If all of your critical data resides in the cloud, it might take a while to get it back on prem. With HCI it is already on prem so you can do whatever you like with it at local network speeds.

      I hope that helps those who have been considering a choice between cloud and HCI for their IT infrastructure. Until next time.

      posted in Scale Legion cloud hyperconvergence scale blog
    • Introducing the Single Node Scale HC3 Appliance

      Since the news was quietly leaked here the other day, we wanted to take a moment to tell you about the new, single node Scale HC3 appliance and officially answer questions as they arise. Scale Computing now offers our Scale HC3 platform in a single node configuration. This allows customers to deploy Scale HC3 in situations where the capacity or high availability of a three-node (or larger) cluster is not warranted, such as ROBO (remote office / branch office) locations and SMB or even SOHO (small office / home office) customers that cannot justify the cost of high availability but do want Scale's flexibility, support, and ease of use.

      The single node configuration comes with the same easy-to-use, all-inclusive management interface, complete support, and advanced storage layer that you expect from Scale, just in a smaller package without high availability. We hope this is of interest to the many small businesses and company divisions that would benefit from all that Scale offers but have been unable to justify the previous three-node minimum for a cluster.

      Customers starting with a single node configuration will also be able to transparently upgrade to three or more nodes when the time comes to grow their environments.

      Single node configurations can replicate with other single node configurations (making a two node configuration possible in some ways) as well as with high availability cluster configurations, making them very well suited to remote offices.

      Pricing: An official price list is not available yet, but the single node configuration is priced the same as the per-node price of the existing cluster configurations. The single node configuration is only an update to our software to allow for single node operation, not new hardware, so the single nodes are the same as individual nodes in a cluster. The single node configuration, therefore, starts at just 33% of our normal starting price for a three-node cluster.

      posted in Scale Legion scale scale hc3
    • Technology Becomes Obsolete. Saving Does Not.

      The list of technological innovations in IT that have already passed into obsolescence is long. You might recall some not-so-ancient technologies like the floppy disk, dot matrix printers, ZIP drives, the FAT file system, and cream-colored computer enclosures. Undoubtedly these are still being used somewhere by someone, but I hope not in your data center. No, the rest of us have moved on. Technologies always fade and get replaced by newer, better technologies. Saving money, on the other hand, never goes out of style.


      You see, when IT pros like you buy IT assets, you have to assume that the technology you are buying is going to be replaced in some number of years. Not replaced because it no longer operates. It gets replaced because it is no longer being manufactured or supported and has been replaced by newer, better, faster gear. This is IT. We accept this.

      The real question here is, are you spending too much money on the gear you are buying now when it is going to be replaced in a few years anyway? For decades, the answer has mostly been yes, and there are two reasons why: over-provisioning and complexity.

      Over-Provisioning

      When you are buying an IT solution, you know you are going to keep that solution for a minimum of 3-5 years before it gets replaced. Therefore you must attempt to forecast your needs 3-5 years out. This is practically impossible, but you try. Rather than risk under-provisioning, you over-provision to avoid having to upgrade or scale out. The process of acquiring new gear is difficult. There is budget approval, research, more guesstimating of future needs, implementation, and the risk of unforeseen disasters.

      But why is scaling out so difficult? Traditional IT architectures involve multiple vendors providing different components like servers, storage, hypervisors, disaster recovery, and more. There are many moving parts that might break when a new component is added into the mix. Software licensing may need to be upgraded to a higher, more expensive tier with infrastructure growth. You don’t want to have to worry about running out of CPU, RAM, storage, or any other compute resource because you don’t want to have to deal with upgrading or scaling out what you already have. It is too complex.

      Complexity

      Ok, I just explained how IT infrastructure can be complex with so many vendors and components. It can be downright fragile when it comes to introducing change. Complexity bites you when it comes to operational expenses as well. It requires more expertise, more training, and tasks become more time consuming. And what about feature complexity? Are you spending too much on features that you don’t need? I know I am guilty of this in a lot of ways.

      I own an iPhone. It has all kinds of features I don’t use. For example, I don’t use Bluetooth. I just don’t use external devices with my phone very often. But the feature is there and I paid for it. There are a bunch of apps and features on my phone I will likely never use, but all of them contributed to the price I paid for the phone, whether I use them or not.

      I also own quite a few tools at home that I may have only used once. Was it worth it to buy them and then hardly ever use them? There is the old saying, “It is better to have it and not need it than to need it and not have it.” There is some truth to that and maybe that is why I still own those tools. But unlike IT technologies, these tools may well be useful 10, 20, even 30 years from now.

      How much do you figure you could be overspending on features and functionality you may never use in some of the IT solutions you buy? Just because a solution is loaded with features and functionality does not necessarily mean it is the best solution for you. It probably just means it costs more. Maybe it also comes with a brand name that costs more. Are you really getting the right solution?

      There is a Better Way

      So you over-provision. You likely spend a lot to have resources and functionality that you may or may not ever use. Of course you need some overhead for normal operations, but you never really know how much you will need. Or you accidentally under-provision and end up spending too much upgrading and scaling out. Stop! There are better options.

      If you haven’t noticed lately, traditional Capex expenditures on IT infrastructure are under scrutiny and Opex is becoming more favorable. Pay-as-you-go models like cloud computing are gaining traction as a way to prevent over-provisioning expense. Still, cloud can be extremely costly, especially if costs are not managed well. When you have nearly unlimited resources in an elastic cloud, it can be easy to over-provision resources you don’t need, and end up paying for them when no one is paying attention.

      Hyperconverged Infrastructure (HCI) is another option. Designed to be both simple to operate and to scale out, HCI lets you use just the resources you need and gives you the ability to scale out quickly and easily when needed. HCI combines servers, storage, virtualization, and even disaster recovery into a single appliance. Those appliances can then be clustered to pool resources, provide high availability, and become easy to scale out.

      HC3, from Scale Computing, is unique among HCI solutions in allowing HCI appliances to be mixed and matched within the same cluster. This means you have great flexibility in adding just the resources you need, whether that is more compute power like CPU and RAM, or more storage. It also helps future-proof your infrastructure by letting you add newer, bigger, faster appliances to a cluster while retiring or repurposing older appliances. The result is an IT infrastructure that can be easily and seamlessly scaled without having to rip and replace for future needs.

      The bottom line is that you can save a lot of money by avoiding complexity and over-provisioning. Why waste valuable revenue on a total cost of ownership (TCO) that is too high? At Scale Computing, we can help you analyze your TCO and figure out if there is a better way for you to be operating your IT infrastructure to lower costs. Let us know if you are ready to start saving. www.scalecomputing.com

      posted in Scale Legion scale scale hc3 scale blog
    • MS SQL Server Best Practice Guide on Scale HC3

      Below is a snippet of our Microsoft SQL Best Practices Reference Sheet available via our Customer and Partner support portals. This is designed to help any users on HC3 better understand common best practices with SQL maintenance and setup, as well as how to best utilize your HC3 system for your SQL servers.

      Pre-Installation and Migration Tasks

      • Spend time before transitioning to the HC3 system to understand your needs throughout the year. Month-end, quarter-end, and year-end activities could be more resource intensive than daily requirements. Plan to utilize the HC3 system's HEAT capabilities for high-utilization “seasons.”
      • Run the SQL Server Best Practices Analyzer on existing databases to look for possible improvements prior to migrating to the HC3 system.
      • Spend time testing SQL Server configurations prior to deploying to live operations on the HC3 system.
      • Don’t oversize your installation and deprive other VMs of necessary resources.
      • Make sure all applicable guest OS patches are applied before migration.

      Windows Guest Configuration

      • Make sure Receive Side Scaling (RSS) is enabled. It is configured to be enabled by default.
      • Format data and log file drives as NTFS with a 64 KB allocation unit size. To verify that your drive has been formatted properly, run fsutil fsinfo ntfsinfo from the command line.
      • Set power management to High Performance in the guest OS.
      • Use a 64-bit version of the Windows guest OS.
      • Do not configure data or log file drives as Dynamic drives in disk management.
      • Add your SQL service account to the “Perform Volume Maintenance Tasks” right in Windows Security Policy to use Instant File Initialization (IFI).
      • Reduce the size of your page files to the minimum possible. The OS should be configured with a sufficient amount of physical memory.
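      As a quick check of the 64 KB allocation unit recommendation, the output of fsutil fsinfo ntfsinfo can be inspected for the “Bytes Per Cluster” value. A minimal Python sketch, where the sample output line is a hypothetical stand-in for what the real command prints on your drive:

```python
def bytes_per_cluster(ntfsinfo_output):
    """Extract the Bytes Per Cluster value from `fsutil fsinfo ntfsinfo` output."""
    for line in ntfsinfo_output.splitlines():
        if "Bytes Per Cluster" in line:
            # Take the first token after the colon, e.g. "65536"
            return int(line.split(":")[1].split()[0])
    raise ValueError("Bytes Per Cluster not found in output")

# Hypothetical sample line from a correctly formatted data drive
sample = "Bytes Per Cluster :               65536"
assert bytes_per_cluster(sample) == 64 * 1024  # 64 KB allocation unit
```

      A value of 65536 confirms the 64 KB allocation unit; the common NTFS default of 4096 means the drive should be reformatted before laying down data or log files.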

      SQL Installation Guidelines

      • Keep the OS, data files, log files, and backups on separate drives so that you can assign a different HEAT flash priority to data and log file drives if necessary.
      • Don’t set databases to grow by a percentage. Use set increments.
      • Be sure to right-size your database.
      • Use the 64-bit version of SQL Server.
      • Spread high-IOPS databases across multiple VMs as opposed to multiple instances on the same SQL server.
      • Make sure all applicable SQL Server patches are applied.
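      A little arithmetic shows why fixed increments beat percentage growth: percentage growth compounds, so each autogrow event is larger (and slower) than the last, while fixed increments stay uniform and predictable. A small Python sketch with hypothetical database sizes:

```python
def growth_events(start_mb, target_mb, increment_mb=None, percent=None):
    """Count autogrow events needed to reach target_mb under each policy."""
    size, events = start_mb, 0
    while size < target_mb:
        # Fixed policy adds a constant amount; percentage policy compounds.
        size += increment_mb if increment_mb else size * percent / 100
        events += 1
    return events

# Hypothetical: a 1 GB database growing to 100 GB
print("fixed 256 MB increments:", growth_events(1024, 102400, increment_mb=256))
print("10% percentage growth:  ", growth_events(1024, 102400, percent=10))
# Percentage growth takes fewer events, but the final grows are gigabytes
# each -- large, slow, unpredictable pauses. Fixed increments stay uniform.
```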

      Much more information from this guide is available using the following links (or by logging in and searching "SQL"):

      Customer portal
      Partner portal

      posted in Scale Legion scale scale hc3 ms sql server database
    • Scale HC3 VirtIO Performance Drivers

      HC3 uses the KVM hypervisor, which can provide para-virtualized devices to the guest OS to decrease latency and improve performance for virtual devices. Virtio is the standard used by KVM. We recommend selecting performance drivers, which create Virtio block devices, for any supported OS. Emulated block devices are also supported for legacy operating systems.

      Virtio driver support has been built into the Linux kernel since 2.6.25. Any Linux distro using a 2.6.25 or later kernel will natively support Virtio network and storage block devices presented by HC3. On older kernels, the Virtio modules can potentially be backported. Any modern Linux distro should be on a kernel version late enough to natively support Virtio.
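      A quick sanity check for native Virtio support is simply comparing the guest's kernel release against 2.6.25. A minimal Python sketch (the sample release strings are illustrative):

```python
def supports_virtio(kernel_release):
    """True if a Linux kernel release string is 2.6.25 or later."""
    numeric = kernel_release.split("-")[0]          # "4.18.0-372.el8" -> "4.18.0"
    parts = tuple(int(p) for p in numeric.split(".")[:3])
    return parts >= (2, 6, 25)

# On a live guest you could pass platform.release() instead.
print(supports_virtio("2.6.24"))          # False: Virtio modules may need backporting
print(supports_virtio("4.18.0-372.el8"))  # True: native Virtio support
```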

      Virtio drivers for Windows OSs are available for guest and server platforms starting with Windows XP and Windows Server 2003. Any Windows OS newer than those will have Virtio driver support as well. Any OS older than XP or Server 2003 will have to use the emulated, non-performance block device type and will experience decreased performance compared to more modern OSs.

      At Scale Computing, we periodically update the Virtio performance drivers provided with HC3 via firmware updates. We recommend only using the included Virtio ISO or one provided by Scale Support. Untested Virtio drivers could cause an inability to live migrate VMs, or other issues. New Virtio drivers will not be automatically added to guest VMs; you will need to mount the ISO to the VM and manually install the updated drivers via Device Manager. You can also utilize Group Policy to roll out updates of Virtio drivers when they are available.

      posted in Scale Legion scale scale hc3 virtio kvm virtualization hyperconvergence hyperconverged
    • Job Posting: Operations Coordinator at Scale Computing

      Link to application: https://boards.greenhouse.io/scalecomputing/jobs/257375#.V6H_UTsrLIU

      Ensure the following functions are executed with efficiency and accuracy. Document and keep up to date all processes pertaining to responsibilities. Create in-house process training or documents that are accessible to those outside of the Operations department, such as support and sales. Establish and report metrics on responsibilities to maintain minimum levels of operation and find efficiencies.

      Important skills & qualifications: 4-year degree or relevant experience. Proficiency in MS Office Suite products. Experience with a CRM tool recommended. Ability to work in a fast-paced, interrupt-driven workplace while maintaining organization, attention to detail, and flexibility.

      Responsibilities listed below can change at any time and other projects may be assigned as time and ability warrant. Responsibilities will be assigned on a gradual basis. Moderate lifting may be required. Position reports to the Director of Operations.

      Sales

      • Manage trade-in returns
      • Manage service contract renewals
      • Assign 3rd party licensing and work with Ops Specialist on renewals

      Support/Services

      • Place Support replacement requests within SLA
      • Follow up and process all return cases within 30 days of part shipment
      • Contact customers, and work with finance to invoice if needed, for outstanding returns
      • Follow up and process all Quality cases with contract manufacturer
      • Manage RMA returns that go back directly to contract manufacturer
      • Weekly/monthly/quarterly replacement shipping metrics
      • Reimage nodes as needed

      Shipping/Receiving

      • Ship sales/replacement requests
      • Assist marketing/accounting with other shipments if needed
      • Check in returns, manage daily DAM reporting
      • Resolve minor shipping issues with UPS
      • Schedule roadshow & migration cluster shipping

      Inventory

      • Maintain replacement part inventory and place purchase orders with contract manufacturer
      • Manage spare part dashboard on SalesForce.com
      • Review contract manufacturer portal for replacement order status
      • Track Scale Internal assets
      • Monthly Inventory Audit

      Asset Management – Asset Lifecycle maintenance

      • Maintain asset integrity when parts are shipped to/returned from Customers (i.e. update accounts, move entitlements, etc.)
      • Work with Sales Support Renewal Manager to ensure all Customer entitlements are accurate
      • Manage all internal inventory in Scale’s on-site warehouse

      Accounting/Finance

      • Upload DAM reporting to Confluence
      • Update weekly contract manufacturer stocking and shipment reports
      • Manage monthly RMA reconciliation
      • Review and approve UPS/third party logistics invoices

      Other

      • Record and manage contract manufacturer MQT (manufacturer quality tracker)
      • Place orders with 3rd party vendors as assigned
      • Weekly & monthly metrics pertaining to above responsibilities
      posted in Job Postings job job posting scale
    • Scale Radically Changes Price Performance with Fully Automated Flash Tiering

      Sorry for the Press Release copy, but we wanted to get a uniform announcement out about our new automated flash tiering technology and HEAT.

      Scale Computing Radically Changes Price-Performance in the Datacenter with Fully Automated Flash Tiering

      Scale Computing, the leader in hyperconverged technology across the mid-market, today announced the integration of flash-enabled automated storage tiering into its award-winning HC3 platform.

      This update to Scale’s converged HC3 system adds hybrid storage including SSD and spinning disk with HyperCore Enhanced Automated Tiering (HEAT). Scale’s HEAT technology uses a combination of built-in intelligence, data access patterns, and workload priority to automatically optimize data across disparate storage tiers within the cluster.

      “Hyperconvergence is nothing if not about simplicity and cost. But it is also about performance, especially in the SMB to mid-size enterprises where most, if not all workloads will simultaneously run on a single cluster of nodes,” said Arun Taneja, Founder and Consulting Analyst of the Taneja Group. “Introducing flash into a hard disk based system is easy; the question is how do you do it so that it maintains low cost and simplicity while boosting performance. This is what Scale has done in these new models. The only decision the IT admin and the business user need to make is to determine the importance of the application and its priority. After that flash is invisible to them. The only thing visible is better application performance. This is how it should be.”

      Scale Computing’s HC3 platform brings storage, servers, virtualization, and high availability together in a single, comprehensive system. With no virtualization software to license and no external storage to buy, HC3 solutions lower out-of-pocket costs and radically simplify the infrastructure needed to keep applications optimized and running.

      This update to the HC3 HyperCore storage architecture combines Scale’s HEAT technology with SSD-hybrid nodes that add a new tier of flash storage to new or existing HC3 clusters. HEAT technology combines intelligent automation with simple, granular tuning parameters to further define flash storage utilization on a per virtual disk basis for optimal performance.

      Through an easy-to-use slide bar, users can optionally tune flash priority allocation to more effectively utilize SSD storage where needed from no flash at all for a virtual disk, to virtually all flash by “turning it to 11.” Every workload is different and even a small amount of flash prioritization tuning, combined with the automated, intelligent I/O mapping, can have a big impact on the overall performance of flash storage in the HC3 cluster.
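      Scale does not publish the HEAT algorithm itself, but the behavior described above, weighting observed access frequency by a per-virtual-disk flash priority from 0 to 11, can be sketched in a few lines. This is a toy illustration only, not Scale's implementation; all names and the scoring formula are hypothetical:

      ```python
      # Toy sketch of priority-weighted flash tiering (illustrative only,
      # not Scale's actual HEAT code). Each virtual disk has a flash
      # priority from 0 (no flash at all) to 11 ("turning it to 11").
      # Blocks are scored by access frequency weighted by that priority,
      # and the top scorers land on the SSD tier.

      def place_blocks(blocks, ssd_capacity):
          """blocks: list of dicts with 'id', 'accesses', 'priority' (0-11).
          Returns the set of block ids assigned to the SSD tier."""
          scored = sorted(
              blocks,
              key=lambda b: b["accesses"] * (b["priority"] / 11.0),
              reverse=True,
          )
          ssd = set()
          for b in scored:
              if len(ssd) >= ssd_capacity:
                  break
              if b["priority"] > 0:  # priority 0 means "no flash at all"
                  ssd.add(b["id"])
          return ssd

      blocks = [
          {"id": "db-1",  "accesses": 900, "priority": 11},  # critical database
          {"id": "web-1", "accesses": 500, "priority": 4},
          {"id": "bak-1", "accesses": 800, "priority": 0},   # backups: never flash
      ]
      print(place_blocks(blocks, ssd_capacity=2))
      ```

      Note how the backup disk stays on spinning disk despite heavy access, because its priority slider is at zero; the prioritized, frequently accessed blocks win the SSD capacity.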

      Unlike other storage systems that use flash storage only for disk caching, Scale’s HC3 virtualization platform adds flash capacity and performance to the total storage pool. Customers will immediately and automatically take advantage of the flash I/O benefits without any special knowledge about flash storage.

      “Like any organization, we have applications that need maximum performance, applications where performance isn’t a priority, and still others where higher performance would be helpful but not mission critical,” said Mike O’Neil, Director of IT at Hydradyne. “But unlike some organizations, we weren’t in a position to dedicate the resources needed to support these differing workloads. With Scale, we will have an architecture in place that immediately and automatically allows VMs to take advantage of flash storage without us even thinking about storage or virtualization configuration.”

      Scale’s HyperCore architecture dramatically simplifies VM storage management without VSAs (Virtual Storage Appliances), SAN protocols and file system overhead. VMs have direct access to virtual disks, allowing all storage operations to occur as efficiently as possible. HyperCore applies logic to stripe data across multiple physical storage devices in the cluster to aggregate capacity and performance. The HyperCore backplane network lets any node and any VM access any disk and is performance optimized to scale as nodes are added.

      “With this release, we radically change the economics and maximize the value of flash storage for all customer segments, from the SMB to the enterprise,” said Jeff Ready, CEO of Scale Computing. “Many vendors use a flash write-cache as a way to mask otherwise sluggish performance. Instead, we have built an architecture that intelligently adjusts to the changing workloads in the datacenter, to maximize the performance value of flash storage in every environment.”

      Scale is deploying its new HEAT technology across the HC3 product line and is introducing a flash storage tier as part of its HC2150 and HC4150 appliances. Available in 4- or 8-drive units, Scale’s latest offerings include either one 400GB or 800GB SSD with three NL-SAS HDDs in 1-6TB capacities and up to 256GB of memory, or two 400GB or 800GB SSDs with six NL-SAS HDDs in 1-2TB capacities and up to 512GB of memory. Network connectivity for either system is achieved through two 10GbE SFP+ ports per node.

      The new products can be used to form new clusters, or they can be added to existing HC3 clusters. Existing workloads on those clusters will automatically utilize the new storage tier when the new nodes are added.

      For additional information or to purchase, interested parties can contact Scale Computing representatives at https://www.scalecomputing.com/scale-computing-pricing-and-quotes

      posted in Self Promotion scale scale hc3 scale heat scale hc3 hc2150 hyperconvergence hypercore
    • What is Real Hyperconverged Infrastructure?

      You’ve probably heard a multitude of things around hyperconvergence or hyperconverged infrastructure as these are becoming hot new industry buzzwords, but what do these terms really mean? Are vendors that say they have hyperconverged infrastructure really living up to the promises of true hyperconvergence or is it just marketing hype?

      The terms “hyperconvergence” and “hyperconverged infrastructure” originated in a meeting between Jeff Ready and Jason Collier of Scale Computing and Arun Taneja of the Taneja Group. According to these three, the term was coined to mean the inclusion of a virtualization hypervisor into a complete infrastructure solution that included storage, compute, and virtualization. Some have thought that hyperconverged is synonymous with terms like ultraconverged or superconverged but that was not the intention.

      If we hold this intended definition of hyperconvergence from its creators as the standard, what does it mean to be a real hyperconverged solution? Many solutions that call themselves hyperconverged rely on third party hypervisors such as VMware or Hyper-V for virtualization. The hypervisor software in that case is developed and licensed from a completely different vendor. That doesn’t seem to fit the definition of hyperconvergence at all.

      Many vendors that use the hyperconverged label are merely peddling traditional virtualization architecture designed around traditional servers and SAN storage. This 3-2-1 architecture, which built a platform for virtualization from a minimum of 3 servers, 2 switches, and 1 SAN appliance, has been repackaged by some as a single-vendor solution without any real convergence at all, and definitely no hypervisor of their own. It is important to differentiate these traditional architectures from the next-generation architectures that the term hyperconvergence was intended for.

      Before hyperconvergence there was already a concept of converged infrastructure that combined server compute with storage as a single hardware platform onto which virtualization could be added. If a solution is not providing the hypervisor directly but relying on a third party hypervisor, it seems to fall back into the converged category, but not hyperconverged.

      One of the key benefits of hyperconvergence is not having to rely on third-party virtualization solutions, and being independent of the costs, complexity, and management of these third parties. The idea was no more hypervisor software licensing to pay for and one less vendor to deal with for support and maintenance. Hyperconvergence should be a complete virtualization and infrastructure solution from a single vendor. For some vendors, using a third-party hypervisor may have been a necessity to get to market on limited funding, but these solutions are not fulfilling the promise of true hyperconvergence for their customers.

      Hypervisors have been around long enough that they are now a commodity. The idea of having to license and pay for a hypervisor as a separate entity should give IT solution purchasers pause as they look to implement new solutions. Cloud providers are not requiring customers to license hypervisors, so why would so-called hyperconvergence vendors do this? We are hearing more and more from IT managers about their displeasure over the “Virtualization Tax” they’ve been paying for too long. The hype cycle for virtualization is over, and users are ready to stop opening their checkbooks for an operating environment that should be included at no extra charge.

      posted in Scale Legion scale hyperconvergence scale blog virtualization
    • Scale Computing Expands the Reach of Hyperconvergence with New Single Node Appliance

      Scale Computing, the market leader in hyperconverged storage, server and virtualization solutions for midsized companies, today introduced a single-node configuration of its HC3 virtualization platform, designed to deliver affordable, flexible hyperconverged infrastructure for distributed enterprise and Disaster Recovery (DR) use cases.

      IT departments are often challenged to deploy disaster recovery and remote office infrastructure on a tight budget while maintaining enterprise capabilities. While hyperconvergence provides a wide range of benefits, it is generally only available in multi-node clustered configurations, often over-specifying the needs of DR sites or Remote/Branch Offices (ROBOs). Deploying this single-node configuration alongside Scale’s HC3 clusters enables IT departments to meet both goals.

      “We are seeing increasing technology demands from the SMB and midmarket sectors; in fact, 78% of SMBs say that technology is more important to them today than in previous years,” said Anurag Agrawal, CEO and Analyst at Techaisle. “The SMB/midmarket often do not have the means in the way of budget, knowledge or IT staff to implement much-needed newer technologies. Techaisle’s survey shows that 72% of SMBs want IT vendors to simplify technology and 61% are ignoring some technologies due to complexity even though they may be useful for business success. Scale Computing continues to be a champion in this space, allowing small to midsize companies the immediate ability to leverage the benefits of hyperconvergence, flash, analytics, VDI and DRaaS within a single platform by removing the complexity and cost normally associated with adopting these technologies.”

      Scale Computing’s HC3 platform brings storage, servers, virtualization and management together in a single, comprehensive system. With no virtualization software to license and no external storage to buy, HC3 products lower out-of-pocket costs and radically simplify the infrastructure needed to keep applications running.

      For disaster recovery and remote availability, it is common to target a smaller infrastructure footprint to conserve budget. With the single-node appliance configuration, there is no need to maintain an entire duplicate of production infrastructure when a smaller infrastructure is enough to run critical workloads until primary sites are restored. This not only reduces the disaster recovery infrastructure cost but further simplifies disaster recovery planning by focusing on key critical workloads rather than non-essentials.

      In the distributed enterprise, remote sites often only need a handful of workloads and even a minimum cluster with multiple nodes can be excessive. Because these sites often have little or no IT expertise present, the simplicity of hyperconvergence is an even greater benefit. HC3’s remote management capabilities make it even easier for the IT staff, usually located at a central office, to avoid costly travel when supporting remote sites.

      “At Scale Computing, we’ve always offered HC3 in clusters of three or more appliances, but we have also recognized that there are some instances we could address where even a three-node cluster is overkill,” said Jeff Ready, CEO and co-founder of Scale Computing. “We decided to offer a single-node appliance configuration that can be deployed alongside HC3 clusters to enable distributed enterprise and disaster recovery use cases, providing more flexibility and cost savings than traditional cluster configurations.”

      posted in Scale Legion scale scale hc3 virtualization hyperconvergence press release
    • Jeff Ready on Executive People

      A Dutch IT interview with Scale CEO @JeffReady as he talks about who Scale Computing is and what the Scale HC3 appliance can do.

      Youtube Video

      posted in Scale Legion interview scale scale hc3 jeff ready youtube
    • Why HC3 IT Infrastructure Might Not Be For You

      Scale Computing makes HC3 hyperconverged infrastructure appliances and clusters for IT organizations around the world with a focus on simplicity, scalability, and availability. But the HC3 IT infrastructure solution might not be for you, for a few reasons.

      Screenshot-2017-01-18-14.40.37.png

      You want to be indispensable for your proprietary knowledge.
      You want to be the only person who truly understands your IT Infrastructure. Having designed your infrastructure personally and managing it with your own home-grown scripts, only you have the knowledge and expertise to keep it running. Without you, your IT department is doomed to fail.

      HC3 is probably not for you. HC3 was designed to be so simple to use that it can be managed by even a novice IT administrator. HC3 would not allow you to control the infrastructure with proprietary design and secret knowledge that only you could possess. Of course, if you did go with HC3, you’d be a pioneer of new technology who would be an ideal asset for any forward thinking IT department.

      You are defined by your aging certifications.
      You worked hard and paid good money to get certifications in storage systems, virtualization hypervisors, server hardware, and even disaster recovery systems that are still around. You continue to use these same old technologies because you are certified in them, and that gives you leverage for higher salary. Newer technologies hold less interest because they wouldn’t allow you to take advantage of your existing certifications.

      HC3 is probably not for you. HC3 is based on new infrastructure architecture that doesn’t require any expensive certifications. Any IT administrator can use HC3 because it was designed to remove reliance on legacy technologies that were too complex and required excessive expertise. HC3 won’t allow you to leverage your certifications in these legacy technologies. Of course, with all of the management time you’d save using HC3, you’d be able to learn new technologies and expand your skills beyond infrastructure.

      You like going to VMworld every year.
      You’ve been using VMware and going to VMworld since 2006 and it is a highlight of your year. You always enjoy reuniting with VMworld regulars and getting out of the office. It isn’t as useful as it was earlier on but you still attend a few sessions along with all of the awesome parties. Life just wouldn’t be the same without attending VMworld.

      HC3 is probably not for you. HC3 uses a built-in hypervisor, alleviating the need for VMware software and VMware software licensing. Without VMware, you probably won’t be able to justify your trip to VMworld as a business expense. Of course, with all the money you will likely save going with HC3, your budget might be open to going to even more conferences to help you develop new skills and services to help your business grow even faster.

      You prefer working late nights and weekends.
      The office, or better yet the data center, is a safe place for you. Whether you don't have the best home life or you prefer to avoid awkward social events, you find working late nights and weekends doing system updates and maintenance a welcome prospect. We get it. Real life can be hard. Solitude, along with the humming of fans and spinning disks, offers an escape from the real world.

      HC3 is probably not for you. HC3 is built to eliminate the need to take systems offline for updates and maintenance tasks, so these can be done at any time, including during normal business hours. HC3 doesn’t leave many infrastructure tasks that need to be done late at night or on weekends. Of course, if you did go with HC3, you’d probably have more time and energy to sort out your personal life and make your home and your social life more to your liking.

      Original post: http://blog.scalecomputing.com/why-hc3-it-infrastructure-might-not-be-for-you/

      posted in Scale Legion scale scale hc3 hyperconvergence hyperconverged scale blog
    • The Price Is Right

      The Price Is Right is one of the longest-running game shows on television and one of the most beloved. I grew up watching it hosted by Bob Barker, and it is still going today, hosted by Drew Carey. The show features a variety of challenges for players, but most of them involve guessing the retail price of various products, ranging from groceries all the way up to vehicles and vacation packages. The concept of guessing at prices reminded me of shopping for IT solutions.

      TPIR-768x432.jpg

      I’m sure most of you know what I am talking about. You start researching various hardware and software solutions but you quickly find that the price is not readily available. You have to contact the vendor for pricing. Why? Often they can’t even give you a ballpark estimate. Why? The answer is simple, but awful. They want to charge you the highest price possible and the only way to do that is withhold pricing until they have sufficiently worked you over with a double whammy of sales and marketing.

      IT is a cost center. We all accept this. Organizations don’t want to spend any more on IT than is necessary, but it is necessary, at least to a point. These vendors want to artificially build up that need for more and more before they hit you with a price because they want you to spend more.

      Personally, I hate this practice of withholding pricing. I want to have an idea of what a solution costs up front when I am researching. I don’t need a sales guy smooth talking me to soften the blow of the price. I’m an adult. I know how money works. This practice is all too common in IT solution sales. That’s why I love Scale Computing. We are different.

      Screenshot-2017-06-01-12.06.40-768x480.png

      Did you see what I did there? Pricing for our HC3 systems. Not all the pricing; we have a lot of configuration options, and it would be a feat of engineering to try to show everything. This is base pricing to give you a starting point, and it includes 1 year of maintenance and support. Why are we different? Well, we just think our pricing is fair to begin with, and we don't want you to have to guess. Those are per-node prices, and we gave you a couple of examples to get you started. We just want you to get a great solution at a great price.

      Can you afford it? We will work with you to get you exact pricing on the configuration you need and nothing more. We can do an assessment of what you need and show you some of the costs of integration, management, maintenance, and support that come with or without our HC3 solution. If the numbers don’t add up, that’s fine. We won’t sell you a solution that you can’t afford, don’t want, or won’t work for you. We think you will want it and probably can afford it. In fact, you might find out that you can’t afford NOT to have it.

      By the way, that pricing is available in our HC3 Sales Brochure right on our website. For more information on some of the tertiary costs of IT ownership, check out this white paper, “How HC3 Lowers the Total Cost of Infrastructure”.

      posted in Scale Legion scale scale hc3 hyperconvergence hyperconverged
    • What Shirts Taught Me About Scale Computing

      When I started at Scale Computing everything seemed pretty normal. There was a startup vibe. There were Nerf guns everywhere. There was an open office concept going on. It wasn’t until a few days went by that I noticed something that seemed odd. My new coworkers were wearing Scale Computing branded shirts to the office nearly every single day.

      Were there some special events going on? Special visitors to the office? No, it turned out that these guys and gals just preferred to wear these shirts. No one talked about it, and it was not suggested or required. It was just part of the culture.

      IMG_2095-768x512.jpg

      Ok, at my former employer, where I worked for 17 years, I had built up quite a wardrobe of company branded shirts. Because of that, I did end up wearing them fairly often. It wasn’t the same as what was happening at Scale. People were making a point to wear their shirts to represent Scale Computing. From this gesture, I really got the feeling that my coworkers believed in the company in a way I had not experienced before.

      Of course, it wasn’t just the shirts. It was more. It was the shared idea that we could create the best solution in IT infrastructure. It was the positive attitudes. It was the high-fives. It was the encouragement between coworkers that lifted everyone up. It was the cheers every time we closed a deal. It was everything. But for me, it started with the shirts. And I wasn’t the only one who noticed.

      Recently when having lunch with a partner who was visiting our office, the partner asked why everyone was wearing Scale branded shirts in the office. What was the occasion? It gave me a great sense of pride to explain that it was just a normal day at Scale Computing. It is more common to see a Scale employee in a Scale shirt than not. It’s just who we are.

      I know these are just shirts. Every company has shirts. I just know they mean more at Scale Computing. Wearing my Scale shirt is a source of pride in a company I believe in, where I love working, and where I feel at home.

      Originally posted on the Scale Blog.

      posted in Scale Legion scale scale hc3 scale blog
    • Optimizing Windows for Scale HC3

      From personal experience - Windows has a lot of default settings that can be optimized to make it run better in a VM, stay smaller over time, reduce I/O load, reduce snapshot and replication size, etc.

      One easy example is reducing the size of the Windows OS page file... especially if you are taking frequent snapshots or replicating (there is also a flag available that support can set that will exclude a virtual disk from being snapshotted / replicated at all). Adding a little more RAM to the VM is much better than having the VM swap to a pagefile.

      But there are many other optimizations such as turning off 8.3 file name creation on NTFS, disabling background defragmentation tasks, etc.
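      As a concrete example, the page file and NTFS tweaks above can be applied from an elevated command prompt. The commands below are standard Windows utilities, but scheduled task paths and sensible page file sizes vary by Windows version, so verify each change in a test VM before rolling it out:

      ```bat
      :: Disable 8.3 short-name creation on NTFS (affects newly created files)
      fsutil behavior set disable8dot3 1

      :: Disable the built-in scheduled defrag task (task path varies by version)
      schtasks /Change /TN "\Microsoft\Windows\Defrag\ScheduledDefrag" /Disable

      :: Pin the page file to a small fixed size instead of system-managed
      :: (1024 MB here is just an example; size to your workload)
      wmic computersystem where name="%COMPUTERNAME%" set AutomaticManagedPagefile=False
      wmic pagefileset where name="C:\\pagefile.sys" set InitialSize=1024,MaximumSize=1024
      ```

      A reboot is needed before the page file change takes effect, and `fsutil behavior query disable8dot3` will confirm the 8.3 setting.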

      One thing customers sometimes want to try is pre-allocating (full formatting) NTFS volumes ... we don't see any benefit to doing that and strongly recommend against it for many reasons (creating extra work for HC3 deduplication is one ... it's just going to find all those blocks with the "same" nothing written to them later anyway).

      Another don't - don't defragment your NTFS file systems (and be very careful and purposeful about application level defrag / reindexing / compacting, as well) ... a lot of things that made sense with spinning disks and local, non mirrored RAID systems don't help at all with a system like HC3 designed to aggregate the I/O of storage devices spanning multiple nodes.

      Most of the tricks that are suggested for other virtualization platforms provide the same benefits when applied to VMs running on HC3. I have even used VMware's free OS Optimization Tool on HC3 VMs and seen it reduce snapshot and replication size substantially. But it's just a list of Windows settings / registry changes that you could apply other ways (and it obviously isn't VMware dependent).

      https://labs.vmware.com/flings/vmware-os-optimization-tool#summary

      Some of the optimizations likely only matter for virtual desktop VMs, but others are more general.

      As always YMMV, backup, snapshot and test for yourself!

      posted in Scale Legion scale scale hc3 windows virtualization kvm
    • Experts Roundtable: What the Heck is Hyperconvergence

      Join us (Scale) tomorrow (February 25th) at 1PM CST for a Spiceworks hosted webinar "Experts Roundtable" where we look at what hyperconvergence is and answer technical questions about the space. This is a multi-vendor technical roundtable, not a Scale-specific presentation. We hope that many of our Mango constituents can join us there!

      Register Here for What the Heck is Hyperconvergence

      Full description:

      If you've heard the term 'Hyperconvergence' enough to make a drinking game out of it, but still aren't exactly sure what it is or who it's for, you're going to want to check this event out.

      By converging compute, storage, networking, AND virtualization resources into a single box, you can shrink your time to deployment and reduce your overhead and time spent on maintenance.

      In this Experts Round Table, we're putting together a team of hyperconvergence experts from Hewlett Packard Enterprise, Nutanix, Scale Computing, Stratoscale, and VMware who are setting aside their own personal sales pitches to help get you the answers you need and help spread awareness and education about the hyperconvergence revolution!

      We will discuss:

      • Is it REALLY cost-effective for most businesses? ​
      • For which customers and scenarios does hyperconvergence make the most sense?
      • How does hyperconvergence impact your existing server room, and do you need a complete overhaul to get started?
      • Any questions you want to bring to the table!
      • Live Event Bonus: One lucky attendee of the meetup will win a Jawbone Jambox!
      posted in Self Promotion scale hyperconvergence webinar spiceworks
    • The TCO of HC3

      We know some of you HC3 users have direct experience with how HC3 has impacted IT costs in your organization. We'd love to hear some examples of how HC3 has made a difference in how you spend money, time, and resources in your own IT shops. What can you share?

      posted in Scale Legion
    • Scale Computing Brings First Fully Featured Sub-$25,000 Flash Solution to SMB Market

      Scale Computing, the market leader in hyperconverged storage, server and virtualization solutions for midsized companies, today announced an SSD-enabled entry to its HC1000 line of hyperconverged infrastructure (HCI) solutions for less than $25,000, designed to meet the critical needs in the SMB market for simplicity, scalability and affordability.

      The HC1150 combines virtualization with servers and high-performance flash storage to provide a complete, highly available datacenter infrastructure solution at the lowest price possible. Offering the full line of features found in the HC2000 and HC4000 family clusters, the entry-level HC1150 provides the most efficient use of system resources – particularly RAM – to manage storage and compute resources, allowing more resources for use in running additional virtual machines. The sub-$25,000 price point also includes a year of industry-leading premium support at no additional cost.

      “The SMB and midmarket communities are those that Scale Computing has long championed as worthy of enterprise-class features and functionality. The challenge is that those communities also require that the solution be affordable and be easy to use as well as manage,” said George Crump, President and Founder of Storage Switzerland. “Scale Computing has raised the performance bar to its offerings with the addition of SSDs to its entry-level hyperconverged appliances but it did so without stripping out functionality. Scale Computing is ready to be a leader in this space by enhancing their product family while keeping costs within reach.”

      Scale Computing’s HC3 platform brings storage, servers, virtualization, and high availability together in a single, comprehensive system. With no virtualization software to license and no external storage to buy, HC3 solutions lower out-of-pocket costs and radically simplify the infrastructure needed to keep applications optimized and running. The integration of flash-enabled automated storage tiering into
      Scale’s converged HC3 system adds hybrid storage including SSD and spinning disk with HyperCore Enhanced Automated Tiering (HEAT). Scale’s HEAT technology uses a combination of built-in intelligence, data access patterns, and workload priority to automatically optimize data across disparate storage tiers within the cluster.

      The HC1150 was not the only new addition to the HC1000 family. The new HC1100, which replaces the previous HC1000 model, provides a big increase in compute and performance. Improvements include an increase in RAM per node from 32GB to 64GB; an increase in base CPU per node from 4 cores to 6 cores; and a change from SATA to 7200 RPM, higher-capacity NL-SAS drives. The HC1100 also marks the first use of Intel Broadwell CPUs in the HC1000 family. All of the improvements in the HC1100 come with no increase in cost over the HC1000 model. Additionally, the HC1150 scales with all other members of the HC3 family for the ultimate in flexibility and to accommodate future growth.

      “While some vendors are beginning to look to the SMB marketplace as a way to supplement languishing enterprise sales, we have long been entrenched with the small businesses, school districts and municipalities to provide them with user-friendly technology and reasonable IT infrastructure costs to ensure that they can accomplish as much as larger organizations,” said Jeff Ready, CEO and co-founder of Scale Computing. “We have helped more than 1,500 customers with fully featured hyperconverged solutions that are as easy as plugging in a piece of machinery and managing a single server. Our latest HC1150 further fulfills that promise by combining virtualization with high-performance flash to provide the most complete, highly available HCI solution at the industry-best price.”

      Scale Computing’s HC1150, as with its entire line of hyperconverged solutions, is currently available through the company’s channel with end user pricing starting at $24,500. For additional information or to purchase, interested parties can contact Scale Computing representatives at https://www.scalecomputing.com/scale-computing-pricing-and-quotes.

      Sorry for the press release, but wanted to get this info out there. This is the system that @Aconboy was talking about in a thread last week.

      https://www.scalecomputing.com/press_releases/scale-computing-brings-first-fully-featured-sub-25000-flash-solution-to-smbsme-market/

      posted in Self Promotion scale hyperconvergence scale hc3 scale hc3 hc1150 ssd flash storage