    StorageNinja
    • Profile
    • Following 1
    • Followers 10
    • Topics 3
    • Posts 988
    • Groups 1

    Posts

    • RE: The Inverted Pyramid of Doom Challenge

      @scottalanmiller said in The Inverted Pyramid of Doom Challenge:

      @hutchingsp sorry that it took me so long to address the post. When you first posted, it seemed reasonable and we did not have any environment of our own that exactly addressed the scale and needs that you have. But for the past seven months we've been running on a Scale cluster, first a 2100 and now a 2100/2150 hybrid, and that addresses every reason that I feel you were avoiding RLS, and addresses them really well.

      The other issue with Scale (and this is no offense to them), or anyone in the roll-your-own-hypervisor game right now, is that there are a number of vertical applications that the business cannot avoid which REQUIRE specific hypervisors or storage to be certified. To get this certification you have to spend months working with their support staff to validate capabilities (performance, availability, predictability being one that can strike a lot of hybrid or HCI players out) as well as commit to cross-engineering support with them. EPIC EMR (and the underlying Caché database), applications for industrial control systems from Honeywell, and all kinds of others.

      This is something that takes time, it takes customers asking for it, it takes money, and it takes market clout. I remember when even basic Sage-based applications refused to support virtualization at all (let alone Hyper-V). It takes time for market adoption, and even in HCI there are still some barriers (SAP is still dragging their feet on HANA certifications for HCI). At the end of the day customer choice is great, and if you can be a trailblazer and risk support to help push vendors to be more open-minded, that's great, but not everyone can do this.

      There are other advantages to his design over an HCI design. If he has incredibly data-heavy growth in his environment, he doesn't have to add hosts. As licensing for Microsoft application stacks (Datacenter, SQL, etc.) is tied to CPU cores here in the near future, adding hosts just to add storage can become rather expensive if you don't account for it properly. Now, you could mount external storage to the cluster to put the growing VMs on, but I'm not sure if Scale supports that. He also, within the HUS, can either grow existing pools, add new pools (maybe a dedicated cold Tier 3), or pin LUNs to a given tier (maybe put a database always in flash). There's a lot of control here over storage costs and performance (if you have the patience to manage it). Sadly, no VVols support is coming to the old HUSes.
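      The per-core licensing math above can be sketched quickly. The prices, core counts, and the 8-core-per-host minimum below are illustrative assumptions, not real list prices:

      ```python
      # Rough sketch of why adding hosts purely for storage inflates per-core
      # licensing costs. Prices are hypothetical placeholders, not real list prices.

      CORE_PRICE = {"windows_datacenter": 800, "sql_enterprise": 7000}  # $ per core, assumed

      def licensing_cost(hosts: int, cores_per_host: int, products: list[str]) -> int:
          """Total per-core licensing for a cluster where every host must be licensed."""
          # Many per-core schemes require a minimum number of licensed cores per host.
          billable_cores = hosts * max(cores_per_host, 8)
          return billable_cores * sum(CORE_PRICE[p] for p in products)

      before = licensing_cost(3, 16, ["windows_datacenter", "sql_enterprise"])
      after = licensing_cost(4, 16, ["windows_datacenter", "sql_enterprise"])  # host added for capacity
      print(after - before)  # incremental licensing cost of the extra host: 124800
      ```

      Under these assumed numbers, one host added only for capacity drags along six figures of software licensing, which is the hidden cost the post is pointing at.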

      posted in IT Discussion
      StorageNinja
    • RE: Never Give More than Two Weeks Notice

      @scottalanmiller said in Never Give More than Two Weeks Notice:

      I don't believe this. Maybe 50%. I'm used as a reference for a lot of people, and almost never get calls. People ask for references way more than they call them. And even if they call them, they have to also then turn someone down based on the responses. If the response is "we had to fire them for legal reasons", sure. But if it is "they didn't give ENOUGH notice on a contract we won't show you", what buffoon is going to not hire you for that? No one with a functional company, that's for sure.
      And that's still assuming that you can't get a single good reference. No one needs twenty of them, no one checks every job. It is SO easy to get good references, there is no real fear in getting stuck with a bad one.

      I was a manager for 8 employees, and with churn had another 4-5 that would list me as a reference. I got calls on maybe 2 people ever (Magnus and BizDPS). I prefer to leave a LinkedIn reference (a public one) when someone asks, so they can point to that as an initial starting point. The biggest references that matter are internal ones at the company you are going to (like that one time I gave a reference at 3 AM for John White, lol). HR and managers trust people who know the company's expectations and culture.

      posted in IT Careers
      StorageNinja
    • RE: The Inverted Pyramid of Doom Challenge

      @scottalanmiller said in The Inverted Pyramid of Doom Challenge:

      @John-Nicholson said in The Inverted Pyramid of Doom Challenge:

      @scottalanmiller said in The Inverted Pyramid of Doom Challenge:

      We have this now and we use the same capacity with replicated local disks as you would with a SAN with RAID 10. Are you using RAID 6 or something else to get more capacity from the SAN than you can with RLS? We aren't wasting any capacity having the extra redundancy and reliability of the local disks with RAIN.

      With HDT pools you could have Tier 0 be RAID 5 SSD, RAID 10 for 10Ks in the middle tier, and RAID 6 for NL-SAS, with sub-LUN block tiering across all of that. With replicated local you generally can't do this (or you can't dynamically expand individual tiers). Now, as 10K drives make less sense (hell, magnetic drives make less sense), the cost benefits of a fancy tiering system might make less sense. Then again, I see HDS doing tiering now between their custom FMDs, regular SSDs, and NLs in G series deployments, so there's still value in having a big-ass array that can do HSM.

      We have only two tiers, but they can be dynamically expanded. Any given node can be any mix of all slow tier, all fast tier or a blend. There is a standard just because it's pre-balanced for typical use, but nothing ties it to that.

      The other advantage of having tiers with different RAID levels, etc., is that he can use RAID 6 NL-SAS for ice-cold data, and RAID 5/10 for higher tiers for better performance. Only a few HCI solutions today do true always-on erasure codes in a way that isn't murderous to performance during rebuilds (GridStore, VSAN, ?).

      1. Cost. Mirroring has a 2x/3x overhead for FTT=1/2, while erasure codes can get that much, much lower (i.e., half as many drives for FTT=2, or potentially less depending on stripe width). As we move to all-flash in HCI (it's coming), the IO/latency overhead for erasure codes and dedupe/compression becomes negligible. This is a competitive gap between several different solutions right now in that space.

      2. When you're adding nodes purely for capacity, this carries other non-visible costs: power draw (a shelf on that HUS draws a lot less than a server), and scale-out systems consume more ports (while this benefits throughput, and network ports are a LOT cheaper, it is more structured cabling, more ports to monitor for switch monitoring licensing, etc.).

      At small scale none of this matters that much (OPEX benefits in labor and support footprint trump these other costs). At mid/large scale this stuff adds up...

      posted in IT Discussion
      StorageNinja
    • RE: Why Right to Fire (and Hire) May Be in the Employee's Favour

      @scottalanmiller said in Why Right to Fire (and Hire) May Be in the Employee's Favour:

      If you don't have R2F, all feelings of or pretense of employee protection are gone. When you don't have R2F, all employee protections go to the state. Employees aren't the "children" of the employer, they are the enemies of it. Removing R2F makes an adversarial relationship between employee and employer.

      What I find interesting is that states where it's difficult to fire have problems with unemployment for younger employees, so they create systems where you can fire below xxx age, or offer incentives like a lower minimum wage below yyy age.

      posted in IT Careers
      StorageNinja
    • RE: The Inverted Pyramid of Doom Challenge

      @scottalanmiller 1.8TB drives, if they are 10K, are 2.5''; 6TB are 3.5''. If they can stuff a 3.5'' drive in a 2.5'' bay, I'd be impressed.

      The reality of 10K drives is that the roadmap is dead. I don't expect to see anything over 1.8TB, and in reality, because those are 512e/4Kn block drives, anyone with legacy OSes ends up stuck with 1.2TB drives more often than not if they don't want weird performance issues.

      (Fun fact: enterprise flash drives are ALL 512e over 4Kn back ends, but it doesn't matter because they have their own write buffers that absorb and re-order the writes to prevent any amplification.)
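      The "weird performance issues" on 512e spinning disks come from read-modify-write: a 512-byte logical write that does not cover a whole 4KiB physical sector forces the drive to read the sector, patch it, and rewrite it. A minimal sketch of the sector math (this models alignment only, not a real drive):

      ```python
      # Why 512e emulation hurts on spinning disks: writes not aligned to the
      # 4KiB physical sector force a read-modify-write cycle. Sizes in bytes.

      PHYS = 4096  # physical sector (4Kn)
      LOG = 512    # logical sector presented to legacy OSes (512e)

      def physical_sectors_touched(offset: int, length: int) -> int:
          """How many 4KiB physical sectors a logical write at `offset` spans."""
          first = offset // PHYS
          last = (offset + length - 1) // PHYS
          return last - first + 1

      def is_rmw(offset: int, length: int) -> bool:
          """Read-modify-write is needed unless the write covers whole physical sectors."""
          return offset % PHYS != 0 or length % PHYS != 0

      print(is_rmw(0, PHYS))        # False: aligned full-sector write
      print(is_rmw(LOG, LOG))       # True: partial sector -> read 4KiB, patch, rewrite
      print(physical_sectors_touched(7 * LOG, 2 * LOG))  # 2: straddles a sector boundary
      ```

      Flash with a write buffer can coalesce those partial writes before they hit the media, which is why the post says it doesn't matter there.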

      Storage nodes are not commonly used, but largely because the vendors effectively charge you the same amount for them (at least the pricing on Nutanix storage-only nodes wasn't that much of a discount). Outside of licensing situations, no one would ever buy them up front (they would have right-sized the cluster design from the start). In reality, they are something you kind of get forced into buying (you can't add anyone else's storage to the cluster).

      I get the OPEX benefits of CI and HCI appliances, but the fact that you've completely frozen any flexibility on servers and storage comes at a cost, and that's the institution's lack of control over expansion costs.

      posted in IT Discussion
      StorageNinja
    • RE: The Inverted Pyramid of Doom Challenge

      @scottalanmiller said in The Inverted Pyramid of Doom Challenge:

      @John-Nicholson said in The Inverted Pyramid of Doom Challenge:

      The other issue with Scale (and this is no offense to them), or anyone in the roll-your-own-hypervisor game right now, is that there are a number of vertical applications that the business cannot avoid which REQUIRE specific hypervisors or storage to be certified. To get this certification you have to spend months working with their support staff to validate capabilities (performance, availability, predictability being one that can strike a lot of hybrid or HCI players out) as well as commit to cross-engineering support with them.

      This is always tough and is certainly a challenge to any product. It would be interesting to see a survey of just how often this becomes an issue and how it is addressed in different scenarios. From my perspective, and few companies can do this, it's a good way to vet potential products. Any software vendor that needs to know what is "under the hood" isn't ready for production at all. They might need to specify IOPS or resiliency or whatever, sure. But caring about the RAID level used, whether it is RAID or RAIN, or what hypervisor is underneath the OS that they are given - those are immediate show stoppers; any vendor with those kinds of artificial excuses not to provide support is shown the door. Management should never even know that the company exists, as they are not viable options and not prepared to support their products. Whether it is because they are incompetent, looking for kickbacks, or just making any excuse to not provide support does not matter; it's something that a business should not be relying on for production.

      This right here makes no sense to me. You are OK with recommending infrastructure that can ONLY be procured from a single vendor for all expansions and has zero cost control over support renewal spikes, hardware purchasing, and software purchasing (a proprietary hypervisor only sold with hardware), but you can't buy a piece of software that can run on THOUSANDS of different hardware configurations and more than one bare-metal platform?

      In medicine, for EMRs, Caché effectively controls the database market for anyone with multiple branches, 300+ beds, and one of every service. (Yes, there is Allscripts, which runs on MS SQL, and no, it doesn't scale and is only used for clinics and the smallest hospitals, as Philip will tell you.) If you tell the chief of medicine you will only offer him tools that will not scale to his needs, you will (and should) get fired. There are people who try to break out from the stronghold they have (MD Anderson, who has a system that is nothing more than a collection of PDFs), but it's awful, and doctors actively choose not to work in these hospitals because the IT systems are so painful to use (you can't actually look up what medicines someone is on; you have to click through random PDFs attached to them or find a nurse). The retraining costs that would be required to fulfill an IT mandate of "can run on any OS or hypervisor" to migrate this platform (or many major EMRs) are staggering. IT doesn't have this much power even in the enterprise. Sometimes the business driver for a platform outweighs the loss of stack control or conformity of infrastructure (I'm sure the HFT IT guys had this drilled into their heads a long time ago). This is partly the reason many people still quietly have an HP-UX or AS/400 box in the corner, still churning their ERP.

      I agree that you should strongly avoid lock-in where possible, but when 99% of the community is running it on X and you're the 1%, that makes support calls a lot more difficult, and not just because they are not familiar with your toolset. A lot of these products have cross-engineering escalation directly into the platforms they have certified. We have lock-in on databases for most application stacks (and live with it, no matter how many damn yachts we buy Larry). The key things are:

      1. Know the costs going in. Don't act surprised when you buy software for $1 million and discover you need $500K worth of complementary products and hardware to deploy it.

      2. Know what parts you can swap if they fail to deliver (hardware, support, OS, database, hypervisor), and be comfortable with reduced choice, or no choice, for any of them. Different people may need different levels of support for each.

      3. Also know what your options are for hosted or OPEX non-hardware offerings of the platform (i.e., can I replicate to a multi-tenant DR platform to make DR a lower OPEX?).

      posted in IT Discussion
      StorageNinja
    • RE: The Inverted Pyramid of Doom Challenge

      @scottalanmiller said in The Inverted Pyramid of Doom Challenge:

      @John-Nicholson said in The Inverted Pyramid of Doom Challenge:

      I agree that you should strongly avoid lock-in where possible, but when 99% of the community is running it on X and you're the 1%, that makes support calls a lot more difficult, and not just because they are not familiar with your toolset.

      If your application isn't working, why are you looking at my hardware? I've never once, ever, seen a company that needed to call their EMR vendor to get their storage working, or their hypervisor. What scenario do you picture this happening in? What's the use case where your application vendor is being asked to support your infrastructure stack? And, where does it end?

      Because performance and availability problems come from the bottom up, not the top down. SQL has storage as a dependency; storage doesn't have SQL as a dependency, and everything rolls downhill...

      If I'm running EPIC and want to understand why a KPI was missed, and whether there was a correlation to something in the infrastructure stack, I can overlay the syslog of the entire stack, as well as the SNMP/SMI-S, API, and hypervisor performance stats (application stats) and the EUC environment stats (Hyperspace, either on Citrix or View), and see EXACTLY what caused that query to run slow. There are tools for this. These tools, though, are not simple to build, and if they don't have full API access to the storage or hypervisor, with full documentation and this stuff built (including log clarification), it's an expensive opportunity to migrate this to a new stack, and something they want to be restrictive on.

      SAP HANA is an incredible pain in the ass to tune and set up, and depending on the underlying disk engine it may have different block sizing or other best practices. This is one of the times where things like NUMA affinity can actually make or break an experience. Getting their support to understand a platform well enough to help customers tune this, their PSO to assist in deployments, and their support to identify known problems with the partner ecosystem means they are incredibly restrictive (hence they are still defining HCI requirements).

      It costs the software vendors money in support to deal with non-known platforms (even if they don't suck). The difference between two vendors arguing and pointing fingers, and two vendors collaborating at the engineering level to understand and solve problems together, is massive in customer experience (and the cost required).

      The amount of effort that goes into things like this, and into reference architectures, is honestly staggering and humbling (these people are a lot smarter than me). I used to assume it was just vendors being cranky, but seeing the effort required of a multi-billion-dollar vendor changed my mind.

      At the end of the day, EPIC, SAP, and other ERP platforms will take the blame (not Scale, or NetApp, or EMC) if the platform delivers an awful experience (doctors or CFOs will just remember that stuff was slow and broke a lot), so being fussy about what you choose to support is STRONGLY in their business interests, and it is balanced against offering enough choice that they do not inflate their costs too high. It's a balancing act.

      posted in IT Discussion
      StorageNinja
    • RE: The Inverted Pyramid of Doom Challenge

      @scottalanmiller said in The Inverted Pyramid of Doom Challenge:

      The retraining costs that would be required to fulfill an IT mandate of "can run on any OS or hypervisor" to migrate this platform (or many major EMRs) are staggering.

      Well, no. The cost is literally zero. In fact, it takes cost and effort not to support it. OS and hypervisors are totally different here. Writing for an OS takes work, because that's your application deployment target. That's where you need to target the OS in question. The hypervisor is none of the application writer's business. That's below the OS, on the other side of your interface. There is zero effort, literally zero, for the application team.
      So what I see isn't a lack of effort, it's throwing in additional effort to try to place blame elsewhere for problems that aren't there. I've run application development teams; if your team is this clueless, I'm terrified to tell the business that I let you even in the door, let alone deployed your products.

      Applications can certainly care what the hypervisor/storage is and integrate with it.

      Writable clones - if you're leveraging the underlying platform for writable clones for test/dev/QA workflows (you see this a lot with Oracle DB applications and in oil and gas, where the application might even directly call NetApp APIs and manage the array for this stuff).

      Backups - some hypervisors have changed block tracking, so a backup takes minutes; others don't, meaning a full can take hours. BTW, I hear Hyper-V is getting this in 2016 (Veeam had to write their own for their platform).

      VDI - always leverages so many hypervisor-based integrations that the experience can be wildly different. Citrix and View both need 3D video cards to do certain things, and being locked into a platform that has limited support for that can be a problem.

      Security - guest introspection support to hit compliance needs, micro-segmentation requirements (EPIC has drop-in templates for NSX, possibly HCI at some point). If you want actual micro-segmentation and inspection on containers, there isn't anything on the market that competes with Photon yet. At some point there may be ACI templates, but that will require network hardware lock-in (Nexus 9K), and that's even crazier (applications defining what switch you can buy!).

      Monitoring - app owners want full-stack monitoring, and this is a gap in a lot of products. Here's an example.
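      The backups point above hinges on changed-block tracking (CBT). A toy model of why it turns an hours-long full read into a minutes-long incremental (block counts and sizes are illustrative, not any hypervisor's actual API):

      ```python
      # Sketch of changed-block tracking: the hypervisor records which blocks
      # were written since the last backup, so the backup tool copies only those
      # instead of reading the entire virtual disk.

      def full_backup(disk: dict[int, bytes]) -> dict[int, bytes]:
          """Without CBT: read every block of the virtual disk."""
          return dict(disk)

      def cbt_backup(disk: dict[int, bytes], changed: set[int]) -> dict[int, bytes]:
          """With CBT: read only blocks the hypervisor flagged as changed."""
          return {blk: disk[blk] for blk in changed}

      disk = {i: b"x" for i in range(1_000_000)}  # a 1M-block virtual disk
      changed = {42, 1337, 99999}                 # blocks written since last backup

      print(len(full_backup(disk)))          # 1000000 blocks read
      print(len(cbt_backup(disk, changed)))  # 3 blocks read
      ```

      The I/O difference scales with the change rate rather than the disk size, which is why a platform without CBT feels so different to an application owner with an RPO to meet.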

      Applications caring about hypervisor and hardware will become more pronounced when Apache Pass and Hurley hit the market and applications start being developed to access byte-addressable persistent memory and FPGA co-processors. I don't expect all 4 to have equal support for this on day one, and the persistent memory stuff is going to be game-changing (and also a return to the old days, as technology proves itself to be cyclical once again!).

      posted in IT Discussion
      StorageNinja
    • RE: The Inverted Pyramid of Doom Challenge

      @scottalanmiller said in The Inverted Pyramid of Doom Challenge:

      @John-Nicholson said in The Inverted Pyramid of Doom Challenge:

      SAP HANA is an incredible pain in the ass to tune and set up, and depending on the underlying disk engine it may have different block sizing or other best practices. This is one of the times where things like NUMA affinity can actually make or break an experience. Getting their support to understand a platform well enough to help customers tune this, their PSO to assist in deployments, and their support to identify known problems with the partner ecosystem means they are incredibly restrictive (hence they are still defining HCI requirements).

      Right, so at this point you are talking about outsourcing your IT department as this isn't application work, this is infrastructure work. So this is turning into a completely different discussion. Now you are hiring an external IT consulting group that doesn't know the platform(s) that you might be running. That's a totally different discussion.

      But what we are talking about here is needing an application vendor to do underlying IT work for them. It's a different animal. It does happen, and there is nothing wrong with outsourcing IT work; obviously, I'm a huge proponent of that. But there is no need to get it from the application vendor. That some do might make sense in some cases, but that application teams demand that they also be your IT department is a problem, unless you are committed to them delivering their platform as an appliance, in which case it should be treated that way. Nothing wrong with that per se, and a lot of places do just that.

      This is largely SAP's support model today. They have validated appliances or tightly controlled RAs. Honestly, making an in-memory database work is such a niche skill set that I don't blame them for demanding to take that on, for a few reasons:

      1. It's weird work most in-house IT will not know.
      2. They can charge a ton of money.
      3. No one complains, because if you're using HANA for what it's intended, the $2 million you spend on it is a joke vs. the benefit it brings to your org.

      posted in IT Discussion
      StorageNinja
    • RE: The Inverted Pyramid of Doom Challenge

      @scottalanmiller said in The Inverted Pyramid of Doom Challenge:

      @John-Nicholson said in The Inverted Pyramid of Doom Challenge:

      Backups - some hypervisors have changed block tracking, so a backup takes minutes; others don't, meaning a full can take hours. BTW, I hear Hyper-V is getting this in 2016 (Veeam had to write their own for their platform).

      Sure... but what does the application layer care? Either the application takes care of its own backups and doesn't care what the hypervisor does, or it relies on IT to handle backups and it isn't any of their concern either.

      Again, this is an application vendor or programmer trying to get involved in IT decisions, processes and designs. Do you let the company that makes your sofa determine how big your fireplace has to be because "they want to ensure that you are cozy?"

      Application owners have RPOs/RTOs, and they often expect the infrastructure people to take care of that. (When I have a 5TB OLTP database, in-guest options generally fail to deliver somehow.)

      If I buy a couch or desk that's massive for a tiny apartment, I could see the sales guy asking how big my doorway is to make sure they can deliver it. Otherwise I'll be saying "GALLERY FURNITURE SUCKS, THEY SELL COUCHES THAT DON'T WORK." This is what users, application owners, and infrastructure people do today. Vendors MUST protect their name. I'm not saying these whiners make any sense, but people do this.

      posted in IT Discussion
      StorageNinja
    • RE: The Inverted Pyramid of Doom Challenge

      @scottalanmiller said in The Inverted Pyramid of Doom Challenge:

      @John-Nicholson said in The Inverted Pyramid of Doom Challenge:

      I think we've reached stasis here. I've provided examples where the platform matters.

      Okay, I'll buy that. The platform matters when internal IT has failed and you outsourced to an external IT department who has an interest in selling you something that you don't need, to make extra money on probably the sale and definitely the consulting. Yes, I agree, but I don't agree that that doesn't match my original point. It's not in the interest of the customer, but there is a reason why they feel that they have to do it, based on other decisions made in the same way.

      Do you feel, however, that since this discussion is based on scale for the context of the original question, that there is ever a realistic time that this happens at three or fewer compute nodes? We are talking about three nodes for an entire business here. What business, anywhere, is that small and deploying systems where vendors interact with them in this manner? I'm not saying that theoretically it isn't possible, but this thread is asking for an example where this has ever happened.

      Outside of pure theory, and even there I feel that it is hard to theorize, who has products that need these kinds of things while being so small as to not have benefits of the IPOD due to scale?

      Sure. If the customer has Oracle, or SQL 2016, or Windows Datacenter, or other licensing per socket, simply shaving down from 3 nodes to 2 could be significant.

      I've also seen companies that had a site with very low compute requirements (a port facility) where they needed to scale deep (400TB for video archive). Replicated local or RAIN is crazily more expensive for this (they spent $87K on a HUS with a RAID 60-style DP pool for it, if memory serves; good luck buying 800 or 1,600TB of disks for that price)...

      I know you like trying to find absolute rules for the SMB (which, to be honest, they kind of need, because if there's anything I learned from consulting in that space, or watching random SpiceWorks comments, it is that everyone at a subconscious level is drawn towards awful ideas), but we are running increasingly into a world of workloads and needs that have no relation to what that company or site's industry or employee count is. Simple exclusionary rules make even less sense.

      It's like decisions on RAID for storage systems. Increasingly, the historic rules (deploy RAID 10 and size for capacity) are becoming awful advice, and with most modern storage systems it's not even something you can choose anymore, as the decision is abstracted at a RAIN level (or, in the case of most modern storage appliances, it's a fixed erasure code set based on a stripe width of their NVRAM's ability to destage a write). The real savior of the SMB here is platforms, appliances, and systems that remove the ability to go off and do something stupid, rather than "hard and fast rules" that increasingly don't matter (or are wrong).

      posted in IT Discussion
      StorageNinja
    • RE: The Inverted Pyramid of Doom Challenge

      @Veet said in The Inverted Pyramid of Doom Challenge:

      @DustinB3403

      @DustinB3403 said in The Inverted Pyramid of Doom Challenge:

      @Veet said in The Inverted Pyramid of Doom Challenge:

      @scottalanmiller said in The Inverted Pyramid of Doom Challenge:

      This is my quote from the original challenge: "We all (I hope by now) know that SANs have their place and a super obvious one that explains why enterprises use them almost universally and know why that usage has no applicability to normal SMBs - scale."

      I agree with why lots of shops might deploy systems like you are describing, even if I generally don't agree with that decision, but I'm pretty confident that the use cases that you are describing @John-Nicholson are tied, nearly universally, to a scale that would already prompt a SAN-based infrastructure (or similar.)

      Have you seen these in small environments where the scale did not exist to warrant a SAN otherwise?

      Just a couple of months ago, I was contacted by a prospective client who was looking to get his website designed. So I went over to his office one day for a general face-to-face, and we got talking, and he quite proudly mentioned recently acquiring a Synology DS2015 box... which was all pretty alright, until he mentioned why. It turned out that their vendor recommended that they migrate their one Windows 2012 server to a VM, and that if they WANTED RELIABILITY, SCALABILITY & PERFORMANCE, they would HAVE TO move from local storage to a NAS. BTW, their current total data size is a little less than 1TB, and they have around 40 users. Now, for the cherry on the cake: the vendor took out the 2x2TB HDDs from the server and reused them in the new NAS box. Apart from that, they installed another 2TB HDD in the NAS box for "backups" (can you believe it? I could not), and then installed a 128GB HDD in the server to install Hyper-V 2012. This, the vendor said, would "further increase performance, and they did not have to buy new HDDs, which would save money." The VMs and data were on the NAS box...

      Upon pointing out and explaining the rather obvious flaws in this design, the client was left rather gobsmacked. Anyway, I designed their website, and will be taking over the support & maintenance of their IT once the annual contract with the existing vendor runs its course. I recommended that they reattach the HDDs to the server, run everything locally, and return or try to sell off the DS2015 box and get a smaller one just for backups (Veeam). I hear that the existing vendor recently agreed to take back the DS2015 and compensate them by installing a lower-end 4-bay box and extending their service contract (I'm not sure if my client is going to agree to this).

      Shocking, no ?

      This is the same practice many SMBs experience every day. The IT vendor clearly doesn't have an expert in house, just someone who gets paid to sell hardware, with enough experience to set up some basic hardware.

      I'm not shocked, and I'm glad you were able to point out the issues. I didn't see what server they have that was scaled back to just a compute node, though...

      I don't think it's about lack of knowledge or experience. I feel it's just about the unscrupulous business practice of up-selling something.

      Stupidity, and there are multiple people to blame.

      1. Small businesses are not blameless; they should seek good advice and consulting. When they hire people whom they pay less per hour than Geek Squad, this is what you get. By refusing to pay real consulting rates, this is what they end up getting...

      2. Small consulting shops that center around Hyper-V these days seem to be in love with building clusters on prosumer-grade QNAP/Synology, etc. As there is no deal registration, it's not actually something they can "mark up" much. It does add a lot of labor, but you have to look at these shops' training commitment (or often lack thereof). They tend to be fed with cert-mill-grade MCSEs who learned how to make a Hyper-V cluster, and Microsoft's storage curriculum emphasizes this without ever discussing quality of storage. (Meanwhile, a VCP 5.5 or newer will cover scale-out local storage, as there are quite a few VSAN questions on that test.)

      3. There is a growing trend where the self-taught SMB IT guy's knowledge is drifting farther and farther from the enterprise. The tools and best practices are making the "Cargo Cult of the Enterprise" even more dangerous.

      As far as a shop with only a single server instance, I'm starting to ask: why even bother? Why not host the application, get as many of your apps delivered by SaaS as possible, and leverage MDM/MAM/SSO tools to move away from the need for GPO or a local domain for management?

      Does doing this cost a little more? Sure. It does, however, give you a much more transparent cost for IT (you're not assuming risk, because the SLAs are fairly well known and far more absolute from a SaaS provider these days than from a server in a closet).

      I think our real bogeyman in the "S" in SMB is not the guy with the Synology but anyone advocating physically local servers at all. Servers, with some exceptions for SMEs or niche industries, increasingly belong in datacenters.

      posted in IT Discussion
      S
      StorageNinja
    • RE: Hyper-V replication licensing

      @scottalanmiller

      @scottalanmiller said in Hyper-V replication licensing:

      Nothing wrong with tape on its own. But I would explain to them that this is a mismatch of needs. They clearly dont' see themselves as a viable business, but as a hobby (no virtualization.) If they don't virtualize, they can't reasonably say that they think this is a real business, they are SO far below the home line it isn't even discussable. No grey area at all, this is a hobby and a joke to their owners. Make that absolutely clear.

      Scott, you're insulting hobbyists. Most of the OS instances in my house are virtual. My home datacenter looks down on their business practices.

      posted in IT Discussion
      S
      StorageNinja
    • RE: Hyper-V replication licensing

      @scottalanmiller

      @scottalanmiller said in Hyper-V replication licensing:

      @DustinB3403 said in Hyper-V replication licensing:

      If the original backup host fails in 90 days, the client is then on the hook to Microsoft. It's far cheaper to purchase a second standard license then to worry about it.

      Not in 99.999% of cases. Remember you are talking about a double failure, not a single failure. So let's run the numbers assuming a single license is $700.

      For 90 Day Failover Window Licensing Cost: $700
      For Sub 90 Day Double Failover Licensing Cost: $1400

      The drug you are looking for is failing over to reduce maintenance window times for host/hypervisor patching every patch Tuesday (assuming Hyper-V).
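      To put numbers on "not in 99.999% of cases": compare buying the second license up front against the expected cost of only buying it if a double failure actually lands inside the 90-day window. A minimal sketch, using the $700 figure from the thread; the double-failure probability is purely an illustrative assumption, not a measured figure:

```python
# Cost of covering a sub-90-day double failover, two ways.
# LICENSE comes from the thread; P_DOUBLE_FAILURE_90D is an assumed,
# illustrative chance of a second host dying inside the failover window.
LICENSE = 700.0
P_DOUBLE_FAILURE_90D = 0.001

buy_up_front = 2 * LICENSE                          # $1400, fully covered either way
risk_it = LICENSE + P_DOUBLE_FAILURE_90D * LICENSE  # expected spend if you wait

print(f"Up front:             ${buy_up_front:.2f}")
print(f"Expected if you wait: ${risk_it:.2f}")
```

      Unless the odds of a double failure inside 90 days are enormous, the expected cost of waiting stays near a single license, which is the point being made above.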

      posted in IT Discussion
      S
      StorageNinja
    • RE: Simplivity - anyone use them?

      @scottalanmiller said in Simplivity - anyone use them?:

      I've done enterprise branch office, it's very different than an SMB, in most cases. ROBO and SMB have a lot of overlap, but a lot of differences, too.

      Enterprise ROBO is different in a few cases...

      1. "Can I manage and monitor availability and performance of 300 sites from one dashboard?" isn't something I've had an SMB ask.

      2. SMBs might get down to 6-20 VMs at a small office with a dozen people. ROBO can be 1-2 VMs in the back of a gas station, Dairy Queen, etc.

      3. "Can I build an HA cluster all-in (software/hardware/licensing/networking gear/UPS/labor to deploy) for UNDER $10K?" is something I hear from both, but in ROBO it's something that can actually be delivered on, because of how spartan the hardware requirements are and because a primary datacenter exists to provide quorum services and absorb the shared management overhead.

      4. SMBs typically need backup software at the edge and a traditional backup workflow with a vault-to-cloud/offsite system. ROBO can often get by with basic replication offsite, or in many cases DR/BC is handled at the application layer (although some have historically done this at the array/storage layer).

      5. Some ROBO edge systems are effectively "disposable," but they still want HA for maintenance-window reasons (vMotion), or so they can have something fail and not need a 4-hour parts contract (which an SMB typically will want, as the overhead isn't murderous like it is with 400 sites).

      posted in IT Discussion
      S
      StorageNinja
    • RE: Simplivity - anyone use them?

      @scottalanmiller

      @scottalanmiller said in Simplivity - anyone use them?:

      But that's not a viable customer anyway. Don't hurt good, real possible customers in order to protect someone too stupid to operate in IT anyway. That's not sound logic. You are protecting the wrong people... punishing the qualified buyers to assist the unqualified ones.

      Also known as the majority of the people purchasing IT equipment?

      @scottalanmiller said in Simplivity - anyone use them?:

      That means that that customer can't do things like order food in restaurants, buy a car, buy a house, etc. This is a level of incompetence that is so bad, that there is no way that they could be an operational company.

      Yeah, about that. From my consulting days, some days I was confused how people remembered to put their pants on their legs and not their heads...

      Withholding pricing until it's qualified is part of a game in the enterprise that fixes the following situations...

      Sometimes it's malicious...

      1. A Cisco fanboy network admin decides he wants to buy a new 5K switch he doesn't need... He is required to get 3 bids, so he calls Brocade and asks for the biggest, baddest VDX config so they will be too expensive.

      Sometimes it's just someone in a hurry...

      2. An HCI dude in a mid-market company has decided he wants some HCI! He mistakenly assumes that SimpliVity does RAID 10 on top of RAID 10 like he used to do with LeftHand, and vastly oversizes the solution. At the same time, he quotes vendor xxx assuming they have dedupe (even though it's large-block and doesn't really work, or only scans the first couple GB of disk before it gives up). He quotes vendor Z and, because he only got pricing, misses the fact that they don't have a (magical FPGA card) like SimpliVity, and their VSA will require 8 cores be assigned (and will hard-reserve a good chunk of them!).

      There is HUGE differentiation in a lot of solutions.

      The SMB's quote-first, learn-later attitude works when you're buying commodity, low-friction products (printer ink, generic rack-mount servers, etc.). When you're buying stuff like HCI, fancy networking gear, or storage arrays, I'd argue that getting quotes too quickly can (and I have seen it) lead to poor outcomes if someone doesn't stop in the quoting process and educate the customer. (When you end up running your entire business on a VNXe with 6 SATA disks because it was the only thing you thought you could afford.) HCI appliance pricing is even trickier, as with meet-in-the-channel type solutions you are at the mercy of the underlying OEM for prices (and Dell's SSD prices have been changing almost daily in some cases due to changes in their supply chain!). Getting a price today carries the risk that another solution may appear cheaper if you get your quotes too far apart.

      Disclaimer: for what it's worth, my employer posts most of their list prices on the internet (or doesn't do a very good job of suppressing them). Thankfully we don't sell hardware (so that shifting-price risk isn't part of our exposure), and prices stay pretty constant, with a fairly well-known multiplier for support and updates until the end of time. We do sell primarily through channel, though, and require you to call someone to get a quote on anything but the smallest packages (like Essentials Plus, and even then you'll still pay less from a partner). The reason is to see if they can qualify you and maybe get you something else you might not know you need, or make sure you're purchasing the right SKU. It could be something basic, like an academic institution not realizing we have special pricing, someone deploying a VDI pilot not knowing we have per-user licensing that will cost them 20% of normal socket pricing, service providers missing per-GB pricing, or ROBO shops missing per-VM pricing.

      I remember many years back someone called out Steve Ballmer on Microsoft licensing being confusing with all the options. He calmly responded that he recognized that having lots of options to buy something was a problem, but he contended that simplifying it would leave someone angry.

      Complicated purchasing, pricing, and packaging is fundamental to any class of product that doesn't have a laser-like focus on a single vertical.

      posted in IT Discussion
      S
      StorageNinja
    • RE: Domain Controller Down (VM)

      @wirestyle22 said in Domain Controller Down (VM):

      @Dashrender unsure of the IP of that host. It has nothing running on it typically, but I can just assume that.

      Hit what you think it might be on 443; you should be greeted with the ESXi landing page.
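      If you're not sure which address the host is on, a quick sweep of likely candidates on 443 narrows it down. A minimal sketch in Python; the candidate addresses and timeout are illustrative, substitute your own subnet:

```python
import socket

def port_open(host: str, port: int = 443, timeout: float = 0.5) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # covers refused connections and timeouts
        return False

# Illustrative candidate addresses -- replace with your own guesses.
candidates = [f"192.168.1.{n}" for n in (10, 11, 12)]
for host in candidates:
    if port_open(host):
        print(f"https://{host} answers on 443 -- check for the ESXi landing page")
```

      Anything that answers on 443 is worth opening in a browser; the ESXi welcome page identifies itself immediately.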

      posted in IT Discussion
      S
      StorageNinja
    • RE: Domain Controller Down (VM)

      @scottalanmiller said in Domain Controller Down (VM):

      @DustinB3403 said in Domain Controller Down (VM):

      @stacksofplates said in Domain Controller Down (VM):

      @scottalanmiller said in Domain Controller Down (VM):

      @stacksofplates said in Domain Controller Down (VM):

      @scottalanmiller said in Domain Controller Down (VM):

      @stacksofplates said in Domain Controller Down (VM):

      At least if the other end knew what he needed he could get some help. But now he might cancel his subscription and go somewhere else (which I believe is what they are trying to avoid). I can't imagine the amount of "IT Pros" that contact them looking for support for issues like that.

      Same vein, how many avoid them because they don't provide ANY reasonable support options? I'm never asking anyone to support everything, but everyone needs to support something serious.

      Right, and they do. VMware.

      Oh okay, well that's fine then. Not the BEST option, but acceptable. And by BEST I don't mean that VMware is or isn't the best, I mean ONLY supporting that one is not as good as supported a few options.

      Ya, this whole thing started because Dustin said @wirestyle22 should drop them since they don't support anything else. That's ridiculous.

      I specifically said I'd look for alternative software if an appliance vendor said they only supported a single hypervisor.

      Big difference.

      Although the client SHOULD consider the high cost of VMware for such a small system. They are looking at a $40K SAN to support that one application now based on that one app. And that's a lot of VMware costs. We don't know how much that one app costs, but holy cow is that a huge budget for a tiny company just as support costs for a single app. SMBs don't normally have total budgets that big, let alone that much to spend as ancillary costs to a single app!

      You'd "hope" that this was a $200K+ application to make that make sense.

      In healthcare there's a strong chance that the cost of the application, the migration, and the accompanying support agreements make a $40K storage array "cheap." Combine that with the fact that he likely has 4-5 applications in this situation (at a minimum), and a small HDS or a VxRail appliance (~$65K starting) could be a rounding error.

      posted in IT Discussion
      S
      StorageNinja
    • RE: Domain Controller Down (VM)

      @Dashrender said in Domain Controller Down (VM):

      @scottalanmiller said in Domain Controller Down (VM):

      @Dashrender said in Domain Controller Down (VM):

      @scottalanmiller said in Domain Controller Down (VM):

      @Dashrender said in Domain Controller Down (VM):

      @stacksofplates said in Domain Controller Down (VM):

      @scottalanmiller said in Domain Controller Down (VM):

      @stacksofplates said in Domain Controller Down (VM):

      @scottalanmiller said in Domain Controller Down (VM):

      @stacksofplates said in Domain Controller Down (VM):

      If you're running on something using PV drivers that they don't understand...

      Then your critical app vendor is below the home line. THAT'S how scary this should be to companies.

      When your "business critical support" lacks the knowledge and skills of your first year help desk people, you need to be worried about their ability to support. Sure, when nothing goes wrong, everything is fine. But if anything goes wrong, you are suggesting these people don't have even the most rudimentary knowledge of systems today. That's worrisome. And it's why so many systems simply have no support options - relying on software and hardware that is out of support meaning that while the app might call itself supported, they depend on non-production systems making the whole thing out of support by extension.

      So when running with a preallocated qcow2 image, which caching mode do you use for your disk? Writethrough, writeback, directsync, none?

      What about IO mode? native, threads, default?

      No one can support every hypervisor at that level.
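      For context, the knobs in that question are real QEMU drive options. An illustrative invocation (paths, memory size, and disk size are made up) showing where the cache and AIO choices land:

```shell
# Create a fully preallocated qcow2 image (preallocation=metadata|falloc|full).
qemu-img create -f qcow2 -o preallocation=full vm-disk.qcow2 40G

# Attach it with an explicit cache mode and AIO backend. Note the gotcha:
# aio=native requires a direct-I/O cache mode (cache=none or directsync).
qemu-system-x86_64 \
  -m 4096 \
  -drive file=vm-disk.qcow2,format=qcow2,cache=none,aio=native
```

      Each combination behaves differently under load and failure, which is exactly why an app vendor can't realistically certify every permutation.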

      Also, none of those things need to be supported by the app vendor. They just need to support the app and stop looking for meaningless excuses to block support. I understand some vendors want to support all the way down the stack, but if they don't know how to do that with virtualization, they don't know how to do it. The skills to support the stack would give them the skills to do it virtually even better (fewer variables.) So that logic doesn't hold up.

      You still haven't provided a single healthcare vendor that does any of what you say is appropriate.

      I know Greenway didn't have a virtualization plan 3 years ago when we were looking at them. It's why I had to build a ridiculous $100K two server failover system. Today the performance needed could be done for $25k.
      The sad thing is that the vendor could not provide any IOPs requirements, etc. They only had this generic hardware requirement.
      SQL Dual Proc Xeon 4 cores each two drive boot, 4 drive RAID 10 SQL, 4 drive log
      RDS single proc xeon 4 core 2 drive boot, 2 drive data
      IIS application dual proc xeon 4 cores each, 2 drive boot, 6 drive RAID 10 data
      etc
      etc

      Because... no support 🙂

      eh? yeah Greenway didn't bother to do the right thing for their customers and have support for hypervisors! Shit, how can they really support their customers on bare metal if they don't know the IOPs requirements, etc? Just keep stabbing hardware until they "get lucky"?

      That's my guess. Lacking support of VMs isn't exactly the big issue... it's WHY they lack that support that is the big issue.

      LOL - Short of someone like Epic, from what I can tell, they are mostly software developers, who don't care about the hardware/VM it's running on. They don't approach the software holistically.

      In healthcare you'll find a LOT that take this stance, for liability reasons (they want something they can provide support for, or to reduce the chance of an SLA miss from something their GSS isn't familiar with). Most healthcare systems are going hosted for this reason. I had a nice chat with the Cerner guys at VMworld, and they mentioned that they offer SLAs all the way down to how quickly a patient note pulls up (7 seconds worst case, I think). In many cases they actually take over on-site support end to end (and act as an MSP in addition to an EMR vendor). Realistically for EMRs, given their horizontal integration of features, the next logical step is vertical integration of the hardware and end-user computing support.

      posted in IT Discussion
      S
      StorageNinja
    • RE: Domain Controller Down (VM)

      @Dashrender said in Domain Controller Down (VM):

      I didn't know what kind of medical facility @wirestyle22 was in..

      If HA is fully thought out and is felt is needed (don't forget about the power situation, and cooling, etc, etc, etc, - remember HA isn't a product, it's a process) then they should fully realize it. I'm guessing by the fact that the switches were 100 Mb that it really wasn't fully thought out, instead someone in the place of authority thought it sounded good and they tossed what they have in today in.

      Medical facilities with beds have generators and fuel. HVAC for something this small can be covered for redundancy with a spot cooler (I have one in my own house for my lab, so if I can afford it, you have to be a tiny outfit not to be able to afford it). I agree it's a process, and the biggest piece is having an MSP to back you up, with 24/7 dispatched resources to help you with the persistence layer. Not having redundancy at the people level is the biggest issue to address. While I normally advocate some kind of offsite, ready-to-fire-off DR, in the case of a facility like this it's not actually as important (beyond BC reasons), because if the whole facility blows up, the need for the system goes with it. Still, there are a bazillion Veeam/vCAN partners who can cover this piece for cheap, so why not.

      posted in IT Discussion
      S
      StorageNinja
    • 1 / 1