@BRRABill said in IOT Security Challenge:
I already have my submission ready.
It's a pretty box that you can throw all that insecure crap in, and then set on your curb.
Now, what will I do with my $25K?
That's exactly enough for a small HC3 cluster!
Just a reminder that this is going on tomorrow. Would love to see some ML folks join us! We will, of course, be talking about what Scale offers specifically, but also some general information about planning for DR. Hope to see you all there!
It’s that time of the year again. December is winding to a close and the new year is almost here. Let me first say that we here at Scale Computing hope 2016 was a positive year for you and we want to wish you a wonderful 2017. Now, though, I’d like to reflect back on 2016 and why it has been such an outstanding year for Scale Computing.
“And the award goes to…”
Scale Computing was recognized a number of times this year for technology innovation and great products and solutions, particularly in the midmarket. We won awards at both the Midsize Enterprise Summit and the Midmarket CIO Forum, including Best in Show and Best Midmarket Strategy. Most recently, Scale Computing was honored with an Editor’s Choice Award by Virtualization Review as one of the most-liked products of the year. You can read more about our many awards in 2016 in this press release.

Scenes from the 2016 Midsize Enterprise Summit
News Flash!
2016 was the year Scale Computing finally introduced flash storage into our hyperconverged appliances. Flash storage has been around for a while now, but the big news was in how we integrated it into the virtualization infrastructure. We didn't use any clunky VSA models with resource-hogging virtual appliances. We didn't implement it as a cache to make up for inefficient storage architecture. We implemented flash storage as a full storage tier embedded directly into the hypervisor. We eliminated all the unnecessary storage protocols that slow down other flash implementations. In short, we did it the right way. Oh, and we delivered it with our own intelligent automated tiering engine called HEAT. You can read more about it here in case you missed it.
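For anyone wondering what an automated tiering engine does conceptually, here is a minimal sketch of heat-based tier placement. To be clear, this is not Scale's actual HEAT code; every name, constant, and threshold below is hypothetical, purely to illustrate the general idea of scoring block access frequency and keeping the hottest blocks on flash.

```python
# Conceptual illustration of heat-based storage tiering.
# This is NOT Scale's HEAT engine; all names and parameters are hypothetical.

DECAY = 0.5        # how quickly old accesses stop counting (hypothetical)
FLASH_SLOTS = 2    # how many blocks fit on the flash tier (tiny for demo)

heat = {}  # block_id -> running access-frequency score

def record_access(block_id):
    """Bump the accessed block's heat score."""
    heat[block_id] = heat.get(block_id, 0.0) + 1.0

def decay_scores():
    """Periodically age all scores so stale blocks cool off."""
    for block_id in heat:
        heat[block_id] *= DECAY

def place_tiers():
    """Keep the hottest blocks on flash, the rest on spinning disk."""
    ranked = sorted(heat, key=heat.get, reverse=True)
    return {b: ("ssd" if i < FLASH_SLOTS else "hdd")
            for i, b in enumerate(ranked)}

# Demo: block "a" is accessed often, so it lands on the flash tier.
for _ in range(5):
    record_access("a")
record_access("b")
record_access("c")
decay_scores()
print(place_tiers())  # e.g. {'a': 'ssd', 'b': 'ssd', 'c': 'hdd'}
```

The decay step is what makes a scheme like this adaptive: blocks that were hot last month but idle today gradually cool off and migrate back to the capacity tier.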
Newer, Stronger, Faster
When we introduced the new flash storage in HC3, we introduced three new HC3 appliance models, the HC1150, HC2150, and HC4150, significantly increasing speed and capacity in the HC3. We also introduced the new HC1100 appliance to replace the older HC1000 model, nearly doubling the resource capacity of the HC1000. Finally, we recently announced the preview of our new HC1150D, which doubles the compute over the HC1150 and introduces higher capacity with support for 8TB drives. We know your resource and capacity needs grow over time, and we'll keep improving the HC3 to stay ahead of the game. Look for more exciting announcements along these lines in 2017.
Going Solo
In 2016, hyperconvergence with Scale Computing HC3 was opened up to all sorts of new possibilities, including the new Single Node Appliance Configuration. Where before you needed at least three nodes in an HC3 cluster, now you can go with the SNAC-size HC3. (Yes, I am in marketing and I make up this corny stuff.) The single node allows extremely cost-effective configurations for the distributed enterprise (small remote/branch offices), backup/disaster recovery, or just the small "s" businesses in the SMB. Read more about the possibilities here.

Cloud-based DR? Check
2016 was also the year Scale Computing rolled out a cloud-based disaster recovery as a service (DRaaS) offering called ScaleCare Remote Recovery Service. This is an exclusive DR solution for HC3 customers who want to protect their HC3 workloads in a secure, hosted facility for backup and disaster recovery. With monthly billing, this service is perfect for organizations that can't or don't want to host DR in their own facilities and who value added services like assisted recovery and DR testing. Read more about this DRaaS solution here.
Better Together
2016 has been an amazing year for technology partnerships at Scale Computing. You may have seen some of the various announcements we've made over the past year. These include Workspot, with whom we've partnered for an amazingly simple VDI solution; Information Builders, with whom we partnered for a business intelligence and analytics appliance; Brocade, whose Strategic Collaboration Program we most recently joined to expand the reach of hyperconvergence and HC3; and more. We even achieved Citrix Ready certification this year. Keep an eye out for more announcements to come as we identify more great solutions to offer you.
The Doctor is In
It wouldn't be much of a new year celebration without a little tooting of my own horn, so I thought I'd mention that 2016 was the year I personally joined Scale Computing, along with many other new faces. Scale Computing has been growing this year. I haven't properly introduced myself in a blog yet, so here goes. My name is David Paquette, Product Marketing Manager at Scale Computing, and they call me Doctor P around here (or Dr. P for short). It has been a fantastic year for me, having joined such a great organization, and I am looking forward to an amazing 2017. Keep checking our blog for my latest posts.

Just me, Dr. P
Sincerely, from all of us at Scale Computing, thank you so much for all of the support over the past year. We look forward to another big year of announcements and releases to come. Of course, these were just some of the 2016 highlights, so feel free to look back through the various blog posts and press releases for all of the 2016 news.
Happy New Year!
Several years ago (in the waning days of the last decade and the early days of this one), we here at Scale decided to revolutionize how datacenters for the SMB and midmarket should function. In the spirit of "perfection is attained not when there is nothing left to add, but when there is nothing left to remove," we set out to take a clean-sheet-of-paper approach to how highly available virtualization SHOULD work. We started by asking a simple question: if you were to design a virtual infrastructure from the ground up, would it look even remotely like the servers-plus-switches-plus-SAN-plus-hypervisor-plus-management beast known as the inverted pyramid of doom? The answer, of course, was no, it would not. In that legacy approach, each piece exists as an answer/band-aid/patch to the problems inherent in the previous iteration of virtualization, resulting in a Rube-Goldbergian machine of cost and complexity that took inefficiency to an entirely new level.
There had to be a better way. What if we were to eliminate the SAN entirely but maintain the flexibility it provided in the first place (enabling high availability)? What if we were to eliminate the management servers entirely by making the servers (or nodes) talk directly to each other? What if we were to base the entire concept around a self-aware, self-healing, self-load-balancing cluster of commodity x64 server nodes? What if we were to take the resource and efficiency gains made in this approach and put them directly into running workloads instead of overhead, significantly improving density while dramatically lowering cost? We sharpened our pencils and got to work. The end result was our HC3 platform.
Now, at this same time, a few other companies were working on things that were superficially similar but designed to tackle an entirely different problem. These other companies set out to be a "better delivery mechanism for VMware in the large enterprise environment." They did this by taking the SAN component of the legacy solution and virtualizing an instance of SAN (storage protocols, CPU and RAM resource consumption, and all) as a virtual machine running on each and every server in their environment. The name used for this across the industry was "Server SAN."
Server SAN, while an improvement in some ways over the legacy approach to virtualization, was hardly what we here at Scale had created. What we had done was eliminate all of those pieces of overhead. We had actually converged the entire environment by collapsing those old legacy stacks (not virtualizing them and replicating them over and over). "Server SAN" just didn't describe what we do. In an effort to create a proper name for what we had created, we took some of our early HC3 clusters to Arun Taneja and the Taneja Group back in 2011 and walked them through our technology. After many hours in that meeting with their team and ours, the old networking term "hyperconverged" was resurrected specifically to describe Scale's HC3 platform: the actual convergence of all of the stacks (storage, compute, virtualization, orchestration, self-healing, management, et al.) and the elimination of everything that didn't need to be there in the legacy approach to virtualization, rather than the semi-converged approach that the Server SAN vendors had taken.
Like everything else in this business, the term caught fire, and its actual meaning became obscured as a multiplicity of other vendors co-opted it and stretched it to fit their products. I am fairly sure I saw a "hyperconverged" coffee maker the other week. But now you know where the term actually came from and what it really means, from the people who coined its modern use in the first place.
Original post: http://blog.scalecomputing.com/the-origin-of-modern-hyperconvergence/
Original Post: http://vmblog.com/archive/2016/12/20/scale-computing-2017-predictions-a-strengthening-of-continuing-trend.aspx
At Scale Computing, we specialize in hyperconverged infrastructure. These predictions, therefore, are heavily influenced by our view of the virtualization, cloud, and hardware infrastructure markets, especially as they relate to our target customers (small to midsize companies). We see a strengthening of continuing trends that are reshaping how customers view virtualization and cloud and how the market is reacting to those trends.
Here are the predictions:
Rising Adoption Rates for Hyperconverged Infrastructure
Hyperconverged infrastructure will become increasingly popular as an alternative to traditional virtualization architecture composed of separate vendors for storage, servers, and hypervisor. IT shops will increasingly move to shed the complexity of managing infrastructure components in silos and adopt simpler hyperconverged infrastructure solutions as a way to streamline IT operations. There will likely be a much sharper rise in adoption of hyperconverged infrastructure in the SMB market, where the simplicity (requiring less management) can have a bigger budget impact.
Increased Commoditization of the Hypervisor
Virtualization will continue moving further down the path of commoditization, with movement toward licensing-free virtualization. As cloud and hyperconverged platforms continue including the hypervisor as a feature of an infrastructure solution rather than as a premium software product, the willingness to pay for a hypervisor directly will decrease. Rather than fight for the traditional on-premises 3-2-1 deployment model, traditional hypervisor vendors will look to create alliances with public cloud providers to maintain their stronghold on the hypervisor market. Beyond 2017, this approach will eventually lose out to licensing-free virtualization options as the management software and ecosystems around those hypervisors catch up.
Increased Commoditization of Disaster Recovery and Backup
The trend of including disaster recovery and backup capabilities as features of infrastructure and software solutions will continue to rise. Customers will further lose their appetite for third-party solutions when they can achieve adequate protection and meet SLAs through built-in DR/backup and DRaaS solutions. Customers will increasingly expect DR/backup to be available as a feature, especially when looking at cloud and hyperconverged solutions.
Increased Hybridization of Cloud
IT shops are already moving toward hybrid cloud models by adopting cloud-based applications like Office 365, Salesforce, and cloud-based DRaaS solutions in addition to their on-prem infrastructure and applications. Whether or not these shops adopt a "true" private cloud/hybrid cloud approach, they will be a part of a trend that will solidify the hybrid cloud/on-prem architecture as the norm rather than going all-in one way or another.
Fast Adoption of NVMe for SSD Storage
As more storage is implemented as all flash or hybrid flash, NVMe adoption will increase rapidly in 2017 in storage solutions. Flash storage is becoming a key factor in increasing storage performance and SSDs are becoming as commoditized as spinning disks. Big data is putting pressure on storage and compute platforms to deliver faster and faster performance. NVMe looks very promising for providing better SSD performance and will be a big part of computing performance enhancement in 2017. Adoption in the SMB will lag other segments due to price sensitivity and a general lack of high-performance needs, but adoption in other areas of the market will pave the way for the SMB to take advantage of the increased performance in future years.
About the Author
As a founder, Jason is responsible for the evangelism marketing of the company. Previously, Jason was VP of Technical Operations at Corvigo where he oversaw sales engineering, technical support, internal IT and datacenter operations. Prior to Corvigo, Jason was VP of Information Technology and Infrastructure at Radiate. There he architected and oversaw the deployment of the entire Radiate ad-network infrastructure, scaling it from under one million transactions per month when he started to more than 300 million at its peak.

INDIANAPOLIS, IN--(Marketwired - December 22, 2016) - Scale Computing, the market leader in hyperconverged storage, server and virtualization solutions, today announced that it was honored with an Editor's Choice Award by Virtualization Review for being one of the products the publication has liked best over the past year.
Virtualization Review author Trevor Pott chose Scale Computing's HC3 platform for the award based on its success in delivering the promise of hyperconvergence by bringing compute and storage together without conflict. "Scale clusters just work, are relatively inexpensive, and deal with power outages and other unfortunate scenarios quite well," he writes.
The Editor's Choice Award from Virtualization Review is the latest accolade the company has received during 2016 for its innovative product line, visionary leadership and focus on the success of those in the midmarket. Among the highlights are:
Three XCellence Awards at the Midsize Enterprise Summit (MES) West 2016 Conference. The three MES West XCellence Awards reflect Scale's success at delivering the best midmarket products, services, programs and presentations that address the unique challenges and opportunities facing the midmarket.

Scale Computing's award-winning HC3 platform brings storage, servers, virtualization and management together in a single, comprehensive system. With no virtualization software to license and no external storage to buy, HC3 products lower out-of-pocket costs and radically simplify the infrastructure needed to keep applications running. HC3 products make deploying and managing a highly available and scalable infrastructure as easy as managing a single server.
"Since the company's inception, we have been fortunate to be recognized by leading trade publications, users and professional groups with dozens of awards honoring our commitment to making virtualization easy and delivering the technology's benefits to an often overlooked marketplace," said Jeff Ready, CEO and co-founder of Scale Computing. "When all is said and done, these awards are a reflection of the continued and selfless dedication of our entire team here at Scale. There is not one person on our staff that has not made his or her mark on improving the company, which in turn allows us to produce superior results for our customers. I am thankful for the recognition we've received throughout 2016 and look forward to an even more successful 2017."
@cakeis_not_alie recently posted this one...
https://virtualizationreview.com/articles/2016/12/01/key-factors.aspx
What does everyone think of his findings?
5th of January, 2017

Complexity, cost, and time are three factors that often leave your Disaster Recovery (DR) strategy incomplete, insufficient, or non-existent. Scale Computing's DR Planning Service is available to customers who are either currently using, or looking to deploy, a remote HC3 cluster or Single Node System.
Join us on Thursday, 5 January, to learn about the built-in disaster recovery features and the services we provide including:
Sounds like Steam was the big winner this Christmas. Again.
Glad to hear that you are starting to be able to recover some of your data. Definitely let us know if we can help in any way!
@Kelly said in Support Tips/Tricks and maybe a Treat or two!:
-ordering my first Scale cluster tomorrow
A perfect holiday gift idea for all of your techie loved ones! Or for yourself. Congrats!
@matthewaroth35 said in Setup 3 node cluster:
Well i have scale 3 node cluster already .. I want to build another one like scale out of my old hp g7 servers
Always great to get to "meet" our customers! Hope that it is working well for you. Sounds promising that you want more Scale functionality!
2016 has been a remarkable year for Scale Computing, and one of our biggest achievements was the release of the HC1150 appliance. The HC1150 significantly boosted the power and capacity of our HC1000 series and featured hybrid flash storage at a very affordable price. As a result, the HC1150 is our most popular HC3 model, but of course we couldn't stop there.
First, we have begun offering 8TB drives in our HC1000 series appliances, nearly doubling the maximum storage capacity (and fully doubling it on the HC1100). Data sets are ever increasing in size, and this boost means you can grow capacity even faster and more affordably, one node at a time. The unique ability of HC3 to mix and match nodes of varying capacity (and across hardware generations!) means your storage can grow as needed each time you expand your cluster.
Second, we have introduced a new HC1150D appliance for pre-sale, which doubles the CPU capacity with a second physical processor. CPU can often be a performance bottleneck in scaling out the number of VMs supported. With this increase in CPU capacity, the HC1150D scales out an HC3 cluster to support more compute power across a greater number of VMs. The HC1150D also doubles the available RAM configuration, up to 512GB per appliance.
Below is a preview of the new configuration ranges and starting pricing for the HC1000 series, including the HC1150D.

Original Post: http://blog.scalecomputing.com/scale-with-increased-capacity/
Thanks to @craig-theriac for making this how to. It was published in another thread a while back, but we felt it needed its own thread to be found and discussed. Thanks!
Awesome guys, have fun on the webinar!
@scottalanmiller said in Replacing the Dead IPOD, SAN Bit the Dust:
I should mention that our Scale has built in backups, too. Just image based and nowhere as advanced as Veeam does, but free and inclusive.
Thanks for the mention, SAM. Also worth mentioning for Veeam fans: Veeam agent-based backups are available and work just fine on Scale HC3 solutions. So you can stick with Veeam, even if the method changes, if you move to Scale HC3.
Can't thank everyone enough for all of the love around here.
http://blog.scalecomputing.com/3-node-minimum-not-so-fast/
For a long time, when you purchased HC3, you were told there was a 3-node minimum. This minimum of 3 nodes is what is required to create a resilient, highly available cluster. HC3 architecture, based on this 3-node cluster design, prevents data loss even in the event of a whole node failure. Despite these compelling reasons to require 3 nodes, Scale Computing last week announced a new single node appliance configuration. Why now?
Recent product updates have enhanced the replication and disaster recovery capabilities of HC3 to make a single node appliance a compelling solution in several scenarios. One such scenario is the distributed enterprise. Organizations with multiple remote or branch offices may not have the infrastructure requirements to warrant a 3 node cluster. Instead, they can benefit from a single node appliance as a right-sized solution for the infrastructure.

In a remote or branch office, a single node can run a number of workloads and easily be managed remotely from a central office. In spite of the lack of clustered, local high availability, single nodes can easily be replicated for DR back to an HC3 cluster at the central office, giving them a high level of protection. Deploying single nodes in this way offers an infrastructure solution for distributed enterprise that is both simple and affordable.
Another compelling scenario where the single node makes perfect sense is as a DR target for an HC3 cluster. Built-in replication can be configured quickly and without extra software to a single HC3 node located locally or remotely. While you will likely want the local high availability and data protection a 3-node cluster provides for primary production, a single node may suffice for a DR strategy where you only need to fail over your most critical VMs to continue operations temporarily. This use of a single node appliance is both cost effective and provides a high level of protection for your business.
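For those curious how snapshot-based replication to a DR target works in general terms, here is a minimal conceptual sketch. This is not Scale HC3's actual built-in replication (which, as noted above, needs no extra software); every name below is hypothetical, illustrating only the general pattern of shipping changed blocks to a replica each cycle.

```python
# Conceptual sketch of snapshot-delta replication to a DR target.
# NOT Scale HC3's actual replication; all names and details are hypothetical.

source_disk = {}          # block_id -> data on the production node
replica_disk = {}         # block_id -> data on the single-node DR target
last_replicated = {}      # block_id -> data as of the last replication cycle

def write(block_id, data):
    """A VM write lands on the production node's disk."""
    source_disk[block_id] = data

def replicate():
    """Ship only blocks changed since the last cycle to the DR target."""
    delta = {b: d for b, d in source_disk.items()
             if last_replicated.get(b) != d}
    replica_disk.update(delta)        # apply the delta remotely
    last_replicated.update(delta)     # remember what was shipped
    return len(delta)

def failover():
    """In a disaster, critical VMs boot from the replica's copy."""
    return dict(replica_disk)

# Demo: two cycles; the second ships only the changed block.
write("b1", "os image"); write("b2", "app data")
print(replicate())   # 2 -> initial full send
write("b2", "app data v2")
print(replicate())   # 1 -> only the changed block crosses the wire
print(failover())    # replica holds the latest replicated state
```

The point of a delta-based scheme like this is that after the initial sync, only changed data crosses the wire, which is why a modest single node at a remote site can keep up with a busy production cluster.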

Finally, although a single node has no clustered high availability, for very small environments the single node appliance can be deployed with a second appliance as a DR target, giving many small businesses an acceptable level of data loss and availability. The ease of deployment, ease of management, and DR capabilities of a full-blown HC3 cluster are the same reasons to love the single node appliance for HC3.
Find out more about the single node appliance configuration (or as I like to call it, the SNAC-size HC3) in our press release and solution brief.