
    Posts made by scale

    • Back to School – Infrastructure 101 – Part 3

      This is my third and final post in this series. I’ve covered SAN and server virtualization and now I’d like to share my thoughts on the challenges of SMB IT shops vs enterprise IT.

      To start, I should probably give some context on the size of an SMB IT shop. Since we are talking about infrastructure, I am really referring to IT departments with fewer than a handful of administrators assigned to infrastructure, with the most common shop allocating only one or two people to it. Because the makeup of businesses varies so much in the number of IT users, reliance on external services, and so on, the lines do get a little blurred. It is not a perfect science, but here's hoping my points will be clear enough.

      SMB

      Small and medium businesses, sometimes referred to as small and midmarket, face some unique challenges compared to larger enterprise customers. One of those challenges is being a jack of all trades, master of none. There are some very talented and dedicated administrators out there who can master many aspects of IT over time, but often the day-to-day tasks of keeping the IT ship afloat make it impossible for administrators to gain expertise in any particular area. There just isn't the budget or the training time to have enough expertise on staff. Without a large team covering many types of expertise, administrators must rely on technology solutions that help them do more with less.

      Complexity is the enemy of the small IT department during all phases of the solution lifecycle including implementation, management, and maintenance. Complex solutions that combine a number of different vendors and products can be more easily managed in the enterprise but become a burden on smaller IT shops that must stretch their limited knowledge and headcount. Projects then turn into long nights and weekends and administrators are still expected to manage normal business hour tasks. Some administrators use scripting to automate much of their IT management and end up with a highly customized environment that becomes hard to migrate away from when business needs evolve.

      Then there is the issue of brain drain. Smaller IT shops cannot easily absorb the loss of key administrators who may be the only ones intimately familiar with how all of the systems interconnect and operate. When those administrators leave, sometimes suddenly and for whatever reason, they leave a huge gap in knowledge that cannot easily be filled. This is much less of a problem in the enterprise, where an individual administrator is one of a team and has many others who can fill in that gap. The loss of a key administrator in the SMB can be devastating to IT operations going forward.

      To combat brain drain, SMB IT shops benefit from using fewer vendors and products to simplify the IT environment, which requires less specialized training and lets a new administrator come up to speed quickly on the technology in use. High levels of automation built into the vendor solution for common IT tasks, along with simple, unified management tools, help smooth the transition from one administrator to the next.

      For SMBs, budgets can vary wildly, from shoestring on up, and the idea of doing more with less weighs much more heavily on SMB administrators. SMBs are not as resilient to the unexpected costs associated with IT disasters and other kinds of downtime. Support is one of the first lines of insurance for SMBs, and dealing with multiple vendors and the support run-around can be paralyzing at those critical moments, especially for SMBs that could not budget for higher levels of support. Resilient, reliable infrastructure with responsive, premium support can make a huge difference in protecting SMBs from the kinds of failure and disaster that threaten business success.

      Ok, enough about the SMB, time to discuss the big guys.

      Enterprise

      Both SMB and enterprise organizations have processes, although the level of reliance on process is much higher in the enterprise. An SMB organization can typically adapt process easily and quickly to match technology, whereas an enterprise organization can be much more fixed in process, and technology must be changed to match the process. The enterprise therefore employs a large number of administrators, developers, consultants, and other experts to create complex systems to support its business processes.

      The enterprise can withstand more complexity because they are able to have more experts on staff who can focus management efforts on single silos of infrastructure such as storage, servers, virtualization, security, etc. With multiple administrators assigned to each silo, there is guaranteed management coverage to deal with any unexpected problems. Effectively, the IT department (or departments) in the enterprise have a high combined level of expertise and manpower, or have the budget to bring in outside consultants and service providers to fill these gaps as a standard practice.

      Unlike with SMB, simplicity is not necessarily a benefit to the enterprise, since it needs the flexibility to adapt to business process. Infrastructure can therefore be a patchwork of systems serving different needs, from high performance computing and data warehousing to data distribution and disaster recovery. Solutions for these enterprise operations must be extensible and adaptable to user processes to meet the compliance and business needs of these organizations.

      Enterprise organizations are usually big enough that they can tolerate different types of failures better than SMB, although as we have seen in recent news, even companies like Delta Airlines are not immune to near catastrophic failures. Still, disk failures or server failures that could bring an SMB to a standstill might barely cause a ripple in a large enterprise given the size of their operations.

      Summary

      The SMB benefits from infrastructure simplicity because it helps eliminate a number of challenges and unplanned costs. For the enterprise, the focus is more on flexibility, adaptability, and extensibility where business processes reign supreme. IT challenges can be more acute in the SMB simply because the budgets and resources are more limited in both headcount and expertise. Complex infrastructure designed for the enterprise is not always going to translate into effective or viable solutions for SMB. Solution providers need to be aware that the SMB may need more than just a scaled down version of an enterprise solution.

      posted in Self Promotion scale infrastructure
    • RE: Got a Very Touching Message on SW Tonight

      Great job, guys.

      posted in IT Careers
    • RE: Count Down!

      @Minion-Queen awesome!

      posted in MangoCon
    • RE: Replacing Evernote?

      Sounds like OneNote is the best option then. It has an iOS app and integrates well with SharePoint, OneDrive, and other stuff.

      posted in IT Discussion
    • RE: RDS load balancing and user profiles?

      If you were using a high availability platform, like a Scale HC3 (just as an example off the top of my head), you could easily get by with just a single RDS instance and a single file server instance, which could potentially reduce the licensing needs for Windows (this depends on how you want to use it). If RDS were on a node that failed, it would automatically be migrated to a working node, and the same goes for the file server. So you get automatic, instant recovery from failure without needing a complex load balancing scenario or external high availability tools.

      RDS and file servers are ideal roles for platform high availability like Scale HC3 provides.
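      As a rough illustration of what the platform does for you (a minimal conceptual sketch in Python with made-up names, not the HC3 API or anything it actually runs), high availability here boils down to a supervision loop: when the node hosting a VM stops responding, the VM is restarted on a surviving node.

      ```python
      # Conceptual sketch only -- hypothetical names, not the HC3 interface.
      # A HA platform watches node health and restarts guest VMs
      # (e.g. an RDS server or a file server) on a surviving node.

      class Cluster:
          def __init__(self, nodes):
              self.nodes = nodes            # {"node1": True, ...}; True = healthy
              self.placement = {}           # VM name -> node name

          def start(self, vm, node):
              self.placement[vm] = node
              print(f"{vm} running on {node}")

          def healthy_nodes(self):
              return [n for n, ok in self.nodes.items() if ok]

          def supervise(self):
              # One pass of the health check; a real platform loops continuously.
              for vm, node in list(self.placement.items()):
                  if not self.nodes[node]:
                      target = self.healthy_nodes()[0]   # naive target selection
                      print(f"{node} failed; restarting {vm} on {target}")
                      self.start(vm, target)

      cluster = Cluster({"node1": True, "node2": True, "node3": True})
      cluster.start("rds-server", "node1")
      cluster.start("file-server", "node2")
      cluster.nodes["node1"] = False        # simulate a node failure
      cluster.supervise()                   # rds-server comes back on a healthy node
      ```

      The point is that none of this has to be built or scripted by the administrator; the platform does the health checking, placement, and restart on its own.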

      posted in IT Discussion
    • RE: Potty Plotter

      Love this idea, how funny. But brilliant, too.

      posted in Water Closet
    • RE: Stanford Study Shows Walking Improves Creativity

      Makes for an interesting problem when you combine this with open office spaces. Or even with cubicles.

      posted in News
    • RE: Happy 25th Linux

      And happy birthday to Linux, as well.

      posted in News
    • RE: Back to School – Infrastructure 101 – Part 2: Virtualization

      Thanks

      posted in Self Promotion
    • RE: World Wide Web Turns 25 Today

      How time flies!

      posted in IT Discussion
    • Back to School – Infrastructure 101 – Part 2: Virtualization

      I covered SAN technology in my last Infrastructure 101 post, so for today I’m going to cover server virtualization and maybe delve into containers and cloud.

      Server virtualization as we know it now is based on hypervisor technology. A hypervisor is an operating system that allows sharing of physical computing resources such as networking, CPU, RAM, and storage among multiple virtual machines (sometimes called virtual servers). Virtual machines replaced traditional physical servers that each had their own physical chassis with storage, RAM, networking, and CPU. To understand the importance of hypervisors, let’s look at a bit of history.

      Early on, computing was primarily done on mainframes, which were monolithic machines designed to provide all of the computing necessary for an organization. They were designed to share resources among various parallel processes to accommodate multiple users. As computing needs grew, organizations began to move away from the monolithic architecture of the mainframe to hosting multiple physical servers that were less expensive and that would run one or more applications for multiple users. Physical servers could range in size and capacity from very large, rivaling mainframes, down to very small, resembling personal computers.

      While mainframes never disappeared completely, the flexibility in cost and capacity of physical servers made them an infrastructure of choice across all industries. Unfortunately, as computing needs continued to grow, organizations began needing more and more servers, and more administrators to manage them. The size of server rooms, along with their power and cooling needs, was honestly becoming ridiculous.

      There were a number of technologies that emerged resembling what we now call server virtualization, allowing the compute and storage resources of a single physical box to be divided among different virtualized servers, but those never became mainstream. Virtualization didn't really take off until hypervisor technology for the x86 platform came around, which happened at the same time as other platforms were declining in the server market.

      Initially, virtualization was not adopted for production servers but instead was used extensively for testing and development because it lacked some of the performance and stability needed for production servers. The widespread use for test and dev eventually led to improvements that made administrators confident with its use on production servers. The combination of performance improvements along with clustering to provide high availability for virtual machines opened the door for widespread adoption on production servers.

      The transition to virtualization was dramatic, reducing server rooms that once housed dozens and dozens of server racks to only a handful of server racks for the host servers and storage on which all of the same workloads ran. It is now difficult to find an IT shop that is still using physical servers as their primary infrastructure.

      While there were many hypervisors battling to become the de facto solution, a number of hypervisors were adopted, including Xen and KVM (both open source), Hyper-V, and VMware ESX/ESXi, which took the lion's share of the market. Those hypervisors or their derivatives continue to battle for market share today, after more than a decade. Cloud platforms have risen, built over each of these hypervisors, adding to the question of whether a de facto hypervisor will ever emerge. But maybe it no longer matters.

      Virtualization has now become a commodity technology. It may not seem so to VMware customers who are still weighing various licensing options, but server virtualization is pretty well baked and the innovations have shifted to hyperconvergence, cloud, and container technologies. The differences between hypervisors are few enough that the buying decisions are often based more on price and support than technology at this point.

      This commoditization of server virtualization does not necessarily indicate any kind of decline in virtualization anytime soon, but rather a shift in thinking from traditional virtualization architectures. While cloud is driving innovation in multi-tenancy and self-service, hyperconvergence is fueling innovation in how hardware and storage can be designed and used more efficiently by virtual machines (as per my previous post about storage technologies).

      IT departments are beginning to wonder if the baggage of training and management infrastructures for server virtualization is still a requirement or if, as a commodity, server virtualization should no longer be so complex. Is being a virtualization expert still a badge of honor, or is it now a default expectation for IT administrators? And with hyperconvergence and cloud technologies simplifying virtual machine management, what level of expertise is really still required?

      I think the main takeaway from the commoditization of server virtualization is that as you move to hyperconvergence and cloud platforms, you shouldn't need to know what the underlying hypervisor is, nor should you care, and you definitely shouldn't have to worry about licensing it separately. They say you don't understand something unless you can explain it to a 5-year-old. It is time for server virtualization to be easy enough that a 5-year-old can provision virtual machines instead of requiring a full time, certified virtualization expert. Or maybe even a 4-year-old.

      Original post: http://blog.scalecomputing.com/back-to-school-infrastructure-101-part-2/

      posted in Self Promotion scale infrastructure scale blog virtualization
    • RE: Simplivity - anyone use them?

      @JaredBusch fair enough, I found it through Google, but I would have a better idea of what to search for to bring it up. I'll make someone aware that this should be under the green button as well.

      posted in IT Discussion
    • RE: Simplivity - anyone use them?

      @JaredBusch said in Simplivity - anyone use them?:

      I complained to @scale for this exact same thing when they sponsored a SpiceCorp meetup in St Louis a year or so ago. The rep making the presentation immediately gave me a rough number though. Now they still don't have it on the website, but at least they were open about the rough MSRP.

      We do have this pricing on the website; is this what you would be looking for, or did you have something more detailed in mind?

      https://www.scalecomputing.com/wp-content/uploads/2014/10/hc3-sales-brochure.pdf

      posted in IT Discussion
    • HEAT Up I/O with a Flash Retrofit

      If your HC3 workloads need better performance and faster I/O, you can soon take advantage of flash storage without having to replace your existing cluster nodes. Scale Computing is rolling out a service to help you retrofit your existing HC2000/2100 or HC4000/4100 nodes with flash solid state drives (SSDs) and update your HyperCore version to start using hybrid flash storage without any downtime. You can get the full benefits of HyperCore Enhanced Automated Tiering (HEAT) in HyperCore v7 when you retrofit with flash drives.

      You can read more about HEAT technology in my blog post Turning Hyperconvergence up to 11.

      Now, before you start ordering your new SSD drives for flash storage retrofit, let’s talk about the new storage architecture designed to include flash. You may already be wondering how much flash storage you need and how it can be divided among the workloads that need it, or even how it will affect your future plans to scale out with more HC3 nodes.

      The HC3 storage system uses wide striping across all nodes in the cluster to provide maximum performance and availability in the form of redundancy across nodes. With all spinning disks, any disk was a candidate for redundant writes from other nodes. With the addition of flash, redundancy is intelligently segregated between flash and spinning disk storage to maximize flash performance.

      A write to a spinning disk will be redundantly written to a spinning disk on another node, and a write to an SSD will be redundantly written to an SSD on another node. Therefore, just as you need at least three nodes of storage and compute resources in an HC3 cluster, you need a minimum of three nodes with SSD drives to take advantage of flash storage.
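      To picture that placement rule (a toy Python sketch with hypothetical names, not HyperCore's actual code), the mirror copy of a write always lands on the same storage tier on a different node, which is why the flash tier needs at least three SSD-equipped nodes just as the cluster itself needs at least three nodes:

      ```python
      # Toy illustration -- invented names, not the HyperCore implementation.

      def pick_mirror_node(primary_node, tier, nodes):
          """Return a node, other than the primary, that offers the requested tier."""
          candidates = [n for n, tiers in nodes.items()
                        if n != primary_node and tier in tiers]
          if not candidates:
              raise RuntimeError(f"no second node with a {tier} tier available")
          return candidates[0]

      nodes = {
          "node1": {"ssd", "hdd"},
          "node2": {"ssd", "hdd"},
          "node3": {"ssd", "hdd"},
      }

      print(pick_mirror_node("node1", "ssd", nodes))   # node2: SSD write mirrored to SSD
      print(pick_mirror_node("node2", "hdd", nodes))   # node1: HDD write mirrored to HDD
      ```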

      Consider also, with retrofitting, that you will be replacing an existing spinning disk drive with the new SSD. The new SSD may be a different capacity than the disk it is replacing, which might affect your overall storage pool capacity. You may already be in a position to add overall capacity, where larger SSD drives are the right fit, or adding an additional flash storage node along with the retrofit may be the right choice. You can get to the three node minimum of SSD nodes by any combination of retrofitting or adding new SSD tiered nodes to the cluster.

      Retrofitting existing clusters is being provided as a service which will include our Scale Computing experts helping you assess your storage needs to determine the best plan for you to incorporate flash into your existing HC3 cluster. Whether you have a small, medium, or large cluster implementation, we will assist you in both planning and implementation to avoid any downtime or disruption.

      However you decide to retrofit and implement flash storage in your HC3 cluster, you will immediately begin seeing the benefits as new data is written to high performing flash and high I/O blocks from spinning disk are intelligently moved to flash storage for better performance. Furthermore, you have full control of how SSD is used on a per virtual disk basis. You’ll be able to adjust the level of SSD usage on a sliding scale to take advantage of both flash and spinning disk storage where you need each most. It’s the flash storage solution you’ve been waiting for.

      posted in Self Promotion storage ssd heat scale scale hc3 hyperconvergence
    • RE: Turning Hyperconvergence up to 11

      @DustinB3403 said in Turning Hyperconvergence up to 11:

      That is awesome!

      Thank you. This post was actually a bit late. I was posting another article and went to link to this one and realized that I had not posted it yet! So a little behind, but still great info.

      posted in Self Promotion
    • Turning Hyperconvergence up to 11

      People seem to be asking me a lot lately about incorporating flash into their storage architecture. You probably already know that flash storage is still a lot more expensive than spinning disks. You are probably not going to need flash I/O performance for all of your workloads, nor do you need to pay for all-flash storage systems. That is where hybrid storage comes in.

      Hybrid storage solutions featuring a combination of solid state drives and spinning disks are not new to the market, but because of the cost per GB of flash compared to spinning disk, the adoption and accessibility for most workloads is low. Small and midsize businesses, in particular, may not know if implementing a hybrid storage solution is right for them.

      Hyperconverged infrastructure also provides the best of both worlds in terms of combining virtualization with storage and compute resources. How hyperconvergence is defined as an architecture is still up for debate, and you will see various implementations, some with more traditional storage and some with truly integrated storage. Either way, hyperconverged infrastructure has begun making flash storage more ubiquitous throughout the datacenter, and HC3 hyperconverged clustering from Scale Computing is now making it even more accessible with our HEAT technology.

      HEAT is HyperCore Enhanced Automated Tiering, the latest addition to the HyperCore hyperconvergence architecture. HEAT combines intelligent I/O mapping with the redundant, wide-striping storage pool in HyperCore to provide high levels of I/O performance, redundancy, and resiliency across both spinning and solid state disks. Individual virtual disks in the storage pool can be tuned for relative flash prioritization to optimize the data workloads on those disks. The intelligent HEAT I/O mapping makes the most efficient use of flash storage for the virtual disk following the guidelines of the flash prioritization configured by the administrator on a scale of 0-11. You read that right. Our flash prioritization goes to 11.

      (Screenshot: HC3 HEAT flash prioritization settings)
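      To give a loose sense of how a per-disk setting can steer data toward flash (purely illustrative Python with an invented scoring rule and made-up names, not the actual HEAT I/O mapping), imagine ranking blocks by how hot they are, weighted by their virtual disk's 0-11 priority, and letting the top of the list fill the available SSD capacity:

      ```python
      # Illustrative only -- an invented scoring rule, not the real HEAT algorithm.
      # 0 means keep a virtual disk's blocks off flash; 11 strongly prefers flash.

      def choose_flash_blocks(blocks, ssd_slots):
          """blocks: dicts with 'id', 'access_count', and 'disk_priority' (0-11)."""
          scored = sorted(blocks,
                          key=lambda b: b["access_count"] * b["disk_priority"],
                          reverse=True)
          return [b["id"] for b in scored[:ssd_slots] if b["disk_priority"] > 0]

      blocks = [
          {"id": "db-block-1",  "access_count": 900, "disk_priority": 11},
          {"id": "db-block-2",  "access_count": 400, "disk_priority": 11},
          {"id": "log-block-1", "access_count": 800, "disk_priority": 2},
          {"id": "backup-blk",  "access_count": 50,  "disk_priority": 0},
      ]

      print(choose_flash_blocks(blocks, ssd_slots=2))
      # ['db-block-1', 'db-block-2'] -- hot blocks on high-priority disks win the flash
      ```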

      HyperCore gives you high performing storage on both spinning disk only and hybrid tiered storage because it is designed to let each virtual disk take advantage of the speed and capacity of the whole storage infrastructure. The more resources that are added to the cluster, the better the performance. HEAT takes that performance to the next level by giving you fine tuning options for not only every workload, but every virtual disk in your cluster. Oh, and I should have mentioned it comes at a lower price than other hyperconverged solutions.

      Watch this short video demo of HC3 HEAT:

      Youtube Video

      If you still don’t know whether you need to start taking advantage of flash storage for your workloads, Scale Computing can help with free capacity planning tools to see if your I/O needs require flash or whether spinning disks still suffice under advanced, software-defined storage pooling. That is one of the advantages of a hyperconvergence solution like HC3; the guys at Scale Computing have already validated the infrastructure and provide the expertise and guidance you need.

      posted in Self Promotion scale scale hc3 hyperconvergence ssd storage
    • Back to School – Infrastructure 101

      As a back to school theme, I thought I’d share my thoughts on infrastructure over a series of posts. Today’s topic is SAN.

      Storage Area Networking (SAN) is a technology that solved a real problem that existed a couple decades ago. SANs have been a foundational piece of IT infrastructure architecture for a long time and have helped drive major innovations in storage. But how relevant are SANs today in the age of software-defined datacenters? Let’s talk about how we have arrived at modern storage architecture.

      First, disk arrays were created to house more storage than could fit into a single server chassis. Storage needs were outpacing the capacity of individual disks and the limited disk slots available in servers. But adding more disk to a single server led to another issue: available storage capacity was trapped within each server. If Server A needed more storage and Server B had a surplus, the only way to redistribute was to physically remove a disk from Server B and add it to Server A. This was not always so easy because it might mean breaking up a RAID configuration, or there simply might not be controller capacity for the disk on Server A. It usually meant ending up with a lot of over-provisioned storage, ballooning the budget.

      SANs solved this problem by making a pool of storage accessible to servers across a network. It was revolutionary because it allowed LUNs to be created and assigned more or less at will to servers across the network. The network was fibre channel in the beginning because ethernet LAN speeds were not quite up to snuff for disk I/O. It was expensive and you needed fibre channel cards in each server you needed connected to the SAN, but it still changed the way storage was planned in datacenters.
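      A toy sketch makes the shift concrete (hypothetical Python, not any vendor's management tooling): with a shared pool, capacity is carved into LUNs and reassigned between servers without anyone touching a physical disk.

      ```python
      # Toy illustration of the shared-pool idea behind a SAN; not real SAN tooling.

      class StoragePool:
          def __init__(self, capacity_gb):
              self.free_gb = capacity_gb
              self.luns = {}                        # LUN name -> (size_gb, server)

          def create_lun(self, name, size_gb, server):
              if size_gb > self.free_gb:
                  raise RuntimeError("pool exhausted")
              self.free_gb -= size_gb
              self.luns[name] = (size_gb, server)

          def reassign(self, name, new_server):
              size_gb, _ = self.luns[name]
              self.luns[name] = (size_gb, new_server)   # no disks physically move

      pool = StoragePool(capacity_gb=10_000)
      pool.create_lun("lun-a", 500, server="Server A")
      pool.create_lun("lun-b", 2_000, server="Server B")
      pool.reassign("lun-b", "Server A")            # Server B's surplus goes to Server A
      print(pool.free_gb, pool.luns)
      ```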

      Alongside SAN, you had Network Attached Storage (NAS) which had even more flexibility than SAN but lacked the full storage protocol capabilities of SAN or Direct Attached Storage. Still, NAS rose as a file sharing solution alongside SAN because it was less expensive and used ethernet.

      The next major innovation was iSCSI, which originally debuted before its time. The iSCSI protocol allowed SANs to be used over standard ethernet connections. Unfortunately, ethernet networks took a little longer to become fast enough for iSCSI to take off, but eventually it started to replace fibre channel networks for SAN as 1Gb and 10Gb networks became accessible. With iSCSI, SANs became even more accessible to all IT shops.

      The next hurdle for SAN technology was self-inflicted. The problem was that now an administrator might be managing two or more SANs on top of NAS and server-side Direct Attached Storage (DAS), and these different components did not necessarily play well together. There were so many SAN and NAS vendors that used proprietary protocols and management tools that it was once again a burden on IT. Then along came virtualization.

      The next innovation was virtual SAN technology. There were two virtualization paths that affected SANs. One path was trying to solve the storage management problem I had just mentioned, and the other path was trying to virtualize the SAN within hypervisors for server virtualization. These paths eventually crossed as virtualization became the standard.

      Virtual SAN technology initially grew from outside SAN, not within, because SAN was big business and virtual SAN technology threatened traditional SAN. When approaching server virtualization, though, virtualizing storage was a do or die imperative for SAN vendors. Outside of the SAN vendors, software solutions saw the possibility of using the iSCSI protocol to place a layer of virtualization over SAN, NAS, and DAS and create a single, virtual pool of storage. This was a huge step forward in the accessibility of storage, but it came at a cost: the virtual SAN technology had to be purchased on top of the existing SAN infrastructure, and efficiency suffered because it effectively added one or, in some cases, multiple additional layers of I/O management and protocols to what already existed.

      When SANs (and NAS) were integrated into server virtualization, it was primarily done with Virtual Storage Appliances (VSAs), virtual servers running the virtual SAN software on top of the underlying SAN architecture. With at least one of these VSAs per virtual host, the virtual SAN architecture was consuming a lot of compute resources in the virtual infrastructure.

      So virtual SANs were a mess. If it hadn’t been for faster CPUs with more cores, cheaper RAM, and flash storage, virtual SANs would have been a non-starter based on I/O efficiency. Virtual SANs seemed to be the way things were going but what about that inefficiency? We are now seeing some interesting advances in software-defined storage that provide the same types of storage pooling as virtual SANs but without all of the layers of protocol and I/O management that make it so inefficient.

      With DAS, servers have direct access to the hardware layer of the storage, providing the most efficient I/O path outside of raw storage access. The direct attached methodology can be and is being used in storage pooling by some storage technologies like HC3 from Scale Computing. All of the baggage that virtual SANs brought from traditional SAN architecture, and the multiple layers of protocol and management they added, doesn't need to exist in a software-defined storage architecture that doesn't rely on old SAN technology.

      SAN was once a brilliant solution to a real problem and had a good run of innovation and enabling the early stages of server virtualization. However, SAN is not the storage technology of the future and with the rise of hyperconvergence and cloud technologies, SAN is probably seeing its sunset on the horizon.

      Original Post: http://blog.scalecomputing.com/back-to-school-infrastructure-101/

      posted in Self Promotion scale scale blog san storage hyperconvergence
    • Job Posting: Operations Coordinator at Scale Computing

      Link to application: https://boards.greenhouse.io/scalecomputing/jobs/257375#.V6H_UTsrLIU

      Ensure the following functions are executed with efficiency and accuracy. Document and keep up-to-date all processes pertaining to responsibilities. Create in-house process training or documents that are accessible to those outside of the Operations department, such as support and sales. Establish and report metrics on responsibilities to maintain minimum levels of operation and find efficiencies.

      Important skills & qualifications: 4-year degree or relevant experience. Proficiency in MS Office Suite products. Experience with a CRM tool recommended. Ability to work in a fast-paced, interrupt-driven workplace while maintaining organization, attention to detail, and flexibility.

      Responsibilities listed below can change at any time and other projects may be assigned as time and ability warrant. Responsibilities will be assigned on a gradual basis. Moderate lifting may be required. Position reports to the Director of Operations.

      Sales

      • Manage trade-in returns
      • Manage service contract renewals
      • Assign 3rd party licensing and work with Ops Specialist on renewals

      Support/Services

      • Place Support replacement requests within SLA
      • Follow up and process all return cases within 30 days of part shipment
      • Contact customers, and work with finance to invoice if needed, for outstanding returns
      • Follow up and process all Quality cases with contract manufacturer
      • Manage RMA returns that go back directly to contract manufacturer
      • Weekly/monthly/quarterly replacement shipping metrics
      • Reimage nodes as needed

      Shipping/Receiving

      • Ship sales/replacement requests
      • Assist marketing/accounting with other shipments if needed
      • Check in returns, manage daily DAM reporting
      • Resolve minor shipping issues with UPS
      • Schedule roadshow & migration cluster shipping

      Inventory

      • Maintain replacement part inventory and place purchase orders with contract manufacturer
      • Manage spare part dashboard on SalesForce.com
      • Review contract manufacturer portal for replacement order status
      • Track Scale Internal assets
      • Monthly Inventory Audit

      Asset Management – Asset Lifecycle maintenance

      • Maintain asset integrity when parts are shipped to/returned from Customers (i.e. update accounts, move entitlements, etc.)
      • Work with Sales Support Renewal Manager to ensure all Customer entitlements are accurate
      • Manage all internal inventory in Scale’s on-site warehouse

      Accounting/Finance

      • Upload DAM reporting to Confluence
      • Update weekly contract manufacturer stocking and shipment reports
      • Manage monthly RMA reconciliation
      • Review and approve UPS/third party logistics invoices

      Other

      • Record and manage contract manufacturer MQT (manufacturer quality tracker)
      • Place orders with 3rd party vendors as assigned
      • Weekly & monthly metrics pertaining to above responsibilities
      posted in Job Postings job job posting scale
    • Don’t Double Down on Infrastructure – Scale Out as Needed

      There has long been a philosophy in IT infrastructure that whenever you add capacity, you add plenty of room to grow into. This idea is based on traditional architecture that was complex, consisting of many disparate systems held together by the rigorous management of the administrators. The process of scaling out capacity has been a treacherous one that takes weeks or months of stress-filled nights and weekends. These projects are so undesirable that administrators, and anyone else involved, would rather spend more than they would like, buying more capacity than they need, to put off scaling out again for as long as possible.

      There are a number of reasons why IT departments may need to scale out. Hopefully it is because of growth of the business, which usually coincides with increased budgets. It could be that business needs have shifted to require more IT services, demanding more data, more computing, and thus more capacity. It could be that the current infrastructure was under-provisioned in the first place and is creating more problems than solutions. Whatever the case, sooner or later, everyone needs to scale out.

      The traditional planning process for scaling out involves first looking at where the capacity is bottlenecking. It could be storage, CPU, RAM, networking, or any level of caching or bussing in between. More than likely it is not just one of these but several, which causes many organizations to simply hit the reset button and replace everything, if they can afford it, that is. Then they implement the new infrastructure, only to go through the same process again a few years down the line. Very costly. Very inefficient.

      Without replacing the whole infrastructure, administrators must look to the various pieces of their infrastructure that might need to be refreshed or upgraded. This process can seem like navigating a minefield of unforeseen consequences. Maybe you want to swap out disks in the SAN for faster, larger disks. Can the storage controllers handle the increased speed and capacity? What about the network? Can it handle the increased I/O from faster and deeper storage? Can the CPUs handle it? Good administrators can identify at least some of these dependencies during planning, but it can often take a team of experts to fully understand the complexities, and sometimes only through testing and some trial and error.

      Exhausted yet? Fortunately, this process of scaling out has been dramatically simplified with hyperconverged infrastructure. With a clustered, appliance-based architecture, capacity can be added very quickly. For example, with HC3 from Scale Computing, a new appliance can be added to a cluster within minutes, with resources then immediately available, adding RAM, CPU, and storage capacity to the infrastructure.

      HC3 even lets you mix and match different appliances in the cluster so that you can add just the capacity you need. Adding the new appliance to the cluster (where it is then called a “node”, of course) is as simple as racking and cabling it, assigning it network settings, and pointing it at the cluster. The capacity is automatically absorbed into the cluster and the storage added seamlessly to the overall storage pool.
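      Conceptually (a hypothetical Python sketch with invented names, not the actual HC3 interface), scaling out is just contributing another node's resources to the shared pools:

      ```python
      # Conceptual sketch with invented names -- not the HC3 management interface.
      # Adding a node simply grows the cluster's pooled CPU, RAM, and storage.

      class Node:
          def __init__(self, name, cores, ram_gb, storage_tb):
              self.name, self.cores, self.ram_gb, self.storage_tb = (
                  name, cores, ram_gb, storage_tb)

      class HyperconvergedCluster:
          def __init__(self):
              self.nodes = []

          def add_node(self, node):
              self.nodes.append(node)               # capacity is usable immediately

          def totals(self):
              return {
                  "cores": sum(n.cores for n in self.nodes),
                  "ram_gb": sum(n.ram_gb for n in self.nodes),
                  "storage_tb": sum(n.storage_tb for n in self.nodes),
              }

      cluster = HyperconvergedCluster()
      for name in ("node1", "node2", "node3"):
          cluster.add_node(Node(name, cores=16, ram_gb=128, storage_tb=8))
      print(cluster.totals())                       # the starting three-node cluster

      cluster.add_node(Node("node4", cores=24, ram_gb=256, storage_tb=16))  # scale out
      print(cluster.totals())                       # the pools grow; nothing is rebuilt
      ```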

      This all means that with hyperconverged infrastructure, you do not need to buy capacity for the future right now. You can get just what you need now (with a little cushion of course), and scale out simply and quickly when you need to in the future. The traditional complexity of infrastructure architecture is now the real bottleneck of capacity scale out. Hyperconverged Infrastructure is the solution.

      Original Location: http://blog.scalecomputing.com/dont-double-down-on-infrastructure-scale-out-as-needed/

      posted in Self Promotion scale scale hc3 hyperconvergence scale out
    • RE: Scale UK Case Study: Penlon

      There has been some discussion about Scale presence in the UK market, so I really wanted to share this case study with you guys. Thanks, as always!

      posted in Self Promotion