    • Profile
    • Following 6
    • Followers 8
    • Topics 5
    • Posts 294
    • Groups 1

    Posts

    • RE: What is a Linux Distro

      @thwr said in What is a Linux Distro:

      Good write-up.

      About OpenSuSE: It's very popular, that's true, but it's not a base distro. OpenSuSE was derived from SuSE, which was derived from Slackware, which in fact is one of the three large base distros.

      There's a wonderful diagram over at Wikimedia:
      https://commons.wikimedia.org/wiki/File:Linux_Distribution_Timeline.svg

      Good one, but the info is a bit outdated IMHO.

      posted in IT Discussion
      KOOLER
    • RE: Windows Tape Library Emulator

      @thwr said in Windows Tape Library Emulator:

      I'm looking for a tape library and drive emulator for Windows, preferably one that emulates a SAS / SCSI interface. There used to be a tool called VMCE (Virtual Media Changer Emulator), but it doesn't seem to be available anymore.

      I'm not looking for any VTL solution, but for something that just fakes a robotic autoloader like the Overland Neo series (which sells under various names, Tandberg for example). Windows is a hard requirement in this case.

      I'm planning to write a little how-to where a tape lib is involved and I don't want to use my real one at work 😉

      Like others pointed out (thanks for the ref, @Danp!!), we can help. VTL isn't free, but I can manage an NFR license for you. If you have something to offer we can even give away an SDK so you can write your own / modify an existing VTL plug-in for StarWind to do something. Do you remember C/C++? 😉

      P.S. SyncSort did exactly this, as they OEM our VTL and iSCSI engine for tapes.

      posted in IT Discussion
    • RE: New IT manager making changes... should I be concern?

      @stess said in New IT manager making changes... should I be concern?:

      @scottalanmiller said in New IT manager making changes... should I be concern?:

      Your big challenge here, if you decide to pursue a counter to these recommendations, will be in properly assessing business need (if you feel that his designs are not in the best interest of the company then you should, in theory, be able to not just put that into words but be able to put it into numbers) and then communicating that effectively to the powers that be. This is where the average IT person fails hard - IT tends to attract people who struggle to be able to quantify, qualify and communicate IT in business terms. Maybe you are not one of these people, but if you work in IT the chances are extremely high that this is an area where you feel a particular challenge.

      Thanks for the insight. I'll gather more information before making any decisions. These changes are estimated to take 6-8 months at least. I've got time.
      I will look at the link you posted and make a better-informed decision. I am 110% against a SAN and know there are alternatives that could deliver results at a fraction of the cost. *cough StarWind Virtual SAN *cough

      I'll see if I can have a quick talk with the management to give my input about all these changes. Obviously I am not going in empty hands.

      Yeah, we can do that! One thing to note: there are many ways to skin a cat (and hang a dog), so you may go the SAN or the SAN-less route, and either route will split into a myriad of options! In your place I'd allow picking from at least 3 possible ones. IMHO 🙂

      posted in IT Discussion
    • RE: Hyper-V - 3 VM migrations to new host

      @JaredBusch said in Hyper-V - 3 VM migrations to new host:

      @Mike-Davis said in Hyper-V - 3 VM migrations to new host:

      Were both of the Hosts domain joined?

      Yes, everything was domain joined. Honestly, they should always be domain joined.
      You still need to allow constrained delegation, though.
      https://blogs.technet.microsoft.com/matthts/2012/06/10/configuring-kerberos-constrained-delegation-for-hyper-v-management/

      The old host is Server 2008 R2 and the new host is running Server 2012 R2 if that makes a difference.

      This makes a HUGE difference. Hyper-V was completely redone for Server 2012+

      In this instance, I would export and import.

      Not true.

      Even WS2016 Hyper-V has something from pre-2008 (and especially the Azure fork-out, for example the erasure coding implementation, load balancing, etc.)

      posted in IT Discussion
    • RE: Hyper-V - 3 VM migrations to new host

      @LAH3385 said in Hyper-V - 3 VM migrations to new host:

      Assuming both servers have similar storage space, and it's only 2 nodes we are talking about, take a look at StarWind's Virtual SAN. They can provide a free license for 2 nodes (you have to ask for it). Anatoly Vilchinsky is a real champ. He hooked me up with the license. You should be able to find a tutorial on how to set it up.

      This will require downtime, as it requires restarts during installation and setup. It's a set-and-forget kind of deal. A crossover cable between the servers is highly recommended.

      Thanks for the ref and sorry for the delayed response!!

      Sure, we can do that! OP can ping me if he needs any assistance with private key generation.

      https://www.starwindsoftware.com/starwind-virtual-san-free

      😉

      posted in IT Discussion
    • RE: NAS or SAM-SD?

      If you don't pay for electricity out of your own pocket, just get an R5xx from xByte and load FreeBSD on it (or Linux?) with ZFS.

      Don't do Syno or Netgear if you plan more than 4 spindles.

      posted in IT Discussion
    • RE: The VSA is the Ugly Result of Legacy Vendor Lock-Out

      @travisdh1 said in The VSA is the Ugly Result of Legacy Vendor Lock-Out:

      @Breffni-Potter said in The VSA is the Ugly Result of Legacy Vendor Lock-Out:

      Is this really the case? I'm sceptical that a VMWare or HyperV or even a XenServer based system would have that huge a difference in performance requirements compared with a Scale system.

      "24 vCores and up to 300GB RAM (depending on the vendor) just to power the VSA’s and boot themselves vs HES using a fraction of a core per node and 6GB RAM total. Efficiency matters."

      Is this genuine or is it a flippant example? If it's genuine...shut up and take my money.

      From Starwind's LSFS FAQ
      "How much RAM do I need for LSFS device to function properly?
      4.6 MB of RAM per 1 GB of LSFS device with disabled deduplication,
      7.6 MB of RAM per 1 GB of LSFS device with enabled deduplication.
      "

      So, yeah, could easily eat up that much ram. ~7.6GB RAM per TB of storage.

      I didn't spot the CPU recommendation, but I know it's beefy.

      You don't always use LSFS with StarWind.

      And if you use LSFS, you don't always enable dedupe.

      And we're offloading hash tables to NVMe flash now, so the upcoming update will have ZERO overhead for dedupe.

      Supported combinations are:

      1. flash for capacity and RAM for hash tables => FAAAAAAAAAST !!

      2. spinning disk for capacity and NVMe flash for hash tables => somewhat slower, but because of the spinning disk of course
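      The FAQ ratios quoted above are easy to sanity-check with simple arithmetic. A minimal sketch, assuming only the 4.6 / 7.6 MB-per-GB figures from the FAQ quote (the `lsfs_ram_gb` helper name is mine, not a StarWind tool):

      ```python
      def lsfs_ram_gb(device_gb: float, dedupe: bool = False) -> float:
          """RAM (in GB) an LSFS device of `device_gb` GB needs,
          per the FAQ: 4.6 MB/GB without dedupe, 7.6 MB/GB with it."""
          mb_per_gb = 7.6 if dedupe else 4.6
          return device_gb * mb_per_gb / 1024  # convert MB to GB

      # A 1 TB (1024 GB) device with dedupe needs ~7.6 GB of RAM,
      # matching the "~7.6 GB RAM per TB" figure in the post above.
      print(round(lsfs_ram_gb(1024, dedupe=True), 1))   # 7.6
      print(round(lsfs_ram_gb(1024, dedupe=False), 1))  # 4.6
      ```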

      posted in Self Promotion
    • RE: The VSA is the Ugly Result of Legacy Vendor Lock-Out

      @travisdh1 said in The VSA is the Ugly Result of Legacy Vendor Lock-Out:

      Alright, I have to ask. Is Starwind able to get access to the hardware level drive access like this in Hyper-V? @KOOLER (sorry, forgetting the others around here with Starwind.)

      On Hyper-V we run a mix of kernel-mode drivers and user-land services, and we get direct access to the hardware.

      On VMware we use the hypervisor and will eventually "talk" to a VMDK with a data container.

      posted in Self Promotion
    • RE: The VSA is the Ugly Result of Legacy Vendor Lock-Out

      @scottalanmiller said in The VSA is the Ugly Result of Legacy Vendor Lock-Out:

      No dedupe or compression on the Scale storage. But you can always do that at a higher later with the OS or whatever if you need it. That works in most cases.

      Right! Windows Server has a recent dedupe, so a VM with WS2012R2 will do the trick.

      posted in Self Promotion
    • RE: Tape drive alternative besides online backup for offsite backup

      @thwr thanks for the reference! 😉

      Mike, yes, we can do that just fine! It's still recommended to do Disk-to-Disk-to-Tape, however, because direct iSCSI access to tape can be slow and backups may not fit into the backup window. If that's the case, VTL is your best friend 😉

      https://www.starwindsoftware.com/starwind-virtual-tape-library

      Ping me if you have any questions so I can help.

      posted in IT Discussion
    • RE: Come Hear SAM Speak at SpiceCorps Auburn NY Tonight

      Hope you had a great time! 😉

      posted in Self Promotion
    • RE: Hp storage D2d4324 nfs slow xenserver

      @scottalanmiller said in Hp storage D2d4324 nfs slow xenserver:

      For XenServer, which has high availability shared storage built in out of the box, you literally need nothing. HA-Lizard is a script that sets things up for you. You can use Starwind for larger scale. HP VSA will work as well. You can use Gluster or CEPH, too.

      +++

      Also, it's a very common misconception that in a hyperconverged scenario ALL the nodes should participate in providing the storage. It's not so! There may be storage-only nodes, compute-only ones, and hybrids, in very different configurations. Pretty much all the HCI vendors can do that, especially if they support common uplinks (NFS, SMB3, etc.).

      posted in IT Discussion
    • RE: Best Whiskey (Hypervisor) versus the most sold Whiskey (Hypervisor)

      @aaron said in Best Whiskey (Hypervisor) versus the most sold Whiskey (Hypervisor):

      Huh, I thought Jack was the best selling whiskey.

      Jack isn't a whisky technically, and it's not a "whiskey" for sure ;))

      posted in IT Discussion
    • RE: StarWind vs Storage Spaces Direct

      [ I'm really sorry it took so long !! ]

      StarWind Virtual SAN Vs. Microsoft Storage Spaces Direct Vs. VMware Virtual SAN

      OK, here we go! First of all, I think I owe you a set of disclaimers:

      A) I not only work for StarWind but also still own a noticeable part of it, and while I'm trying to be as honest and unbiased as anybody could be… please still take everything I say with a good grain of sea salt (hey, healthy skepticism is always welcome).

      B) I've personally known for many years the people who developed and prominently evangelize the referenced competitive products. I remain on very good terms with them and I think I can call many of them "friend" (if a "friend" can live 10,000 miles away from your home and you see him maybe a few times a year at best), which means I won't too actively criticize what they and their companies do in public, even if I really have a technical reason to do so.

      C) I'm under strict NDAs with the named companies, and with some other ones as well, which means I have to bite my tongue and stop myself from leaking quite a few really interesting things I know.

      So A), B) and C) summarized probably make me not the best information source on the subject, but… let's start and see what we can win!

      I'll begin with the "maturity" issue, where I'll try to play "devil's advocate" for both Microsoft and VMware. It's quite common to hear statements (usually from Microsoft and VMware "competitors" who managed to download and compile some ZFS fork, craft a reasonably good-looking HTML5 GUI, and now call themselves an uber-exciting hyperconverged or storage startup LOL) about Microsoft not understanding storage, about VMware never having been a storage company itself so there are no traces of storage in either company's DNA, about both companies having V1.0 versions of their products, and so on.

      I'd say neither Microsoft nor VMware is a small company, and they take the Software-Defined Storage challenge seriously for sure: the teams are extraordinarily talented, partners are well-engaged, and huge money is bet on success, so everything others spent years on could be done by Microsoft and VMware in a very short term (2-3 years I think). These guys are maybe indeed a bit late to the SDS party (well, big guys were never good at true innovation; disruption strategy belongs to lean startups, and that's a law stronger than the law of gravity IMHO), but they catch up very fast, and while their products may have some "holes" in the feature lineup (who doesn't have them?), everything they put into an RTM-labeled version works well for sure! Finally, if some particular niche isn't served well by them, maybe it's because VMware and Microsoft don't really see that niche as a valuable source of income worth spending time on serving such a customer group? In a nutshell: everything I'll compare below is assumed to be of similar build quality, no FUD for sure!

      Now comes one very important assumption. While the topic covers StarWind Virtual SAN, Microsoft Storage Spaces Direct (why didn't Microsoft call it "Virtual SAN" as well? It would save us all A LOT of time and kill so much confusion!), and VMware Virtual SAN, I'll expand the software-only offerings to so-called "ready nodes", which are branded servers with a pre-installed hypervisor (Microsoft Hyper-V or VMware vSphere) and the matching Software-Defined Storage solution from the listed (SDS in this context) vendors.

      Software-Defined Storage eventually evolved into hyperconvergence, and hyperconvergence is now mostly thought of as "ready nodes" rather than software alone, and here's why: SDS and hyperconvergence reduced implementation costs (CapEx) and maintenance costs (OpEx), first by not buying any "big name" SAN or NAS (Software-Defined Storage did that), and later by not buying even a DIY (Do-It-Yourself) SAN or NAS at all (hyperconvergence now). "Ready nodes" take hyperconvergence and the associated CapEx and OpEx savings to another level: they save even more money upfront when the hyperconverged vendor shares some of his major hardware discount with the end user to make the hyperconverged infrastructure more affordable, and later when the hyperconverged vendor covers all support for the cluster on his own, eliminating the need for the "middle man" or MSP a smaller SMB shop had to hire before to support its vSphere or Hyper-V installation. Software-Defined Storage → Hyperconvergence → HC "ready nodes", this is how the whole virtualization evolution looks for a typical SMB. StarWind has "ready nodes" called HCA (Hyper-Converged Appliance), VMware has them as well (VSAN "ready nodes" or VxRack from the parent Dell company), and Microsoft delivers similar solutions through its network of partners.

      I've decided to separate the customers by size and by the associated most typical scenarios, instead of focusing on features and some products' limitations, because I believe it's hard to decide whether, say, lack of deduplication is a deal breaker or not for somebody. My separation is still not perfect, no line drawn in the sand (and if there is one, it's definitely blurred), but most starting points are there.

      1. Very small SMBs and / or ROBOs. We're talking about 2-3 hypervisor nodes (2 CPU sockets per host usually, as that's the most popular and cost-effective compute platform now), comparably few VMs (20-30 or so), so reasonably low "VMs-per-host" density, a strictly hyperconverged setup, and either Hyper-V or vSphere used, but never both at the same time. Dramatic growth isn't expected in the near future. The shop is very short on human resources to manage and support the whole thing.

      VMware does exist in a two-node version called "ROBO Edition", but it requires a third witness data-less host somewhere (private or public cloud), and it's also licensed "per-VM-pack" rather than "per-CPU-socket". Microsoft doesn't support a two-node Storage Spaces Direct setup currently, but even if they did, their data distribution policy doesn't allow losing more than a single hard disk even in a three-node two-way replicated S2D cluster. Moreover, Storage Spaces Direct requires a Datacenter license, making the resulting solution extremely expensive and total overkill for smaller deployments. StarWind doesn't need any witness entities for a pure two-node setup (StarWind can utilize a heartbeat network), doesn't need network switches (StarWind doesn't use broadcast and multicast messages like VMware Virtual SAN currently does), and doesn't require any specially licensed Windows host (even free Hyper-V Server is absolutely OK for us, Windows Server Standard is PERFECT). The somewhat bigger overhead of the VM-based VMware solution can be safely ignored (StarWind runs as part of the hypervisor on Hyper-V but requires a "controller" VM for vSphere) because IOPS requirements are low within this scenario. To put a final point on it, StarWind software alone and the hyperconverged appliances come with 24/7 support, so a shortage or complete lack of on-site human resources is mitigated.

      All the points mentioned above make StarWind and StarWind-based hyperconverged appliances a very natural choice here: we'll provide either our own Virtual SAN software alone to "fuel" virtual shared storage on existing commodity servers the customer already has, or we'll ship complete hyperconverged appliances with our StarWind Virtual SAN used as the "data mover" layer. Our "ready nodes" are still more affordable than DIY (Do-It-Yourself) kits (StarWind has a hardware discount it splits with the customer; the customer would literally have to spend millions of dollars to get a comparable discount rate), and our "ready nodes" still come with 24/7 support, while DIY is basically a "self-supported" solution.

      2. Bigger SMBs (4 and more hosts in a basic hypervisor cluster) and up to entry-level Enterprises (10 hypervisor hosts or so), everything hyperconverged, comparably high "VMs-per-host" density (10+ VMs). Still either the Microsoft or the VMware hypervisor employed (not both at the same time). Growth is expected, is moderate, and is more or less linear for compute and storage at the same time. IT management and administration staff are present on-site.

      For these particular customers Microsoft Storage Spaces Direct and VMware Virtual SAN are the best fit! Windows Server Datacenter licensing makes sense because of the number of VMs alone, so Storage Spaces Direct is there automagically, and the VMware Virtual SAN cost overhead is split between the many VMs running on the same host, so it's reasonable. For these guys StarWind isn't offering any paid software for primary storage (within this cluster, I mean), but we'll be happy to sell StarWind-branded "ready nodes" (just to drive server hardware costs down a little bit) where either Microsoft S2D or VMware VSAN will be used as the "data mover". We'll still use our own Virtual SAN to plug some little holes in the Microsoft and VMware products, to add even more performance, increase storage efficiency, and add some more flexibility, as StarWind isn't forced to support one storage protocol while not supporting another, for example. For Microsoft we'll add a RAM-based write-back cache (Microsoft's own CSV RAM cache is read-only and limited in size), 4KB in-line deduplication (Microsoft Storage Spaces Direct requires ReFS, and ReFS has no dedupe), a log-structured file system, and a set of protocols Microsoft isn't offering out-of-box (HA iSCSI including RDMA iSER and vVols extensions, failover NFS, etc). VMware likewise has no RAM-based write-back cache (flash only), no dedupe for spinning disk (meaning VMware's dedupe is for primary storage, and the backup scenario isn't served), and no block & file protocols (iSCSI, NFS, and SMB3) the customer can deploy immediately out-of-box (VMware VSAN is a "private party", so only VMs have access to the VSAN-managed distributed storage pool). Last but not least, we'll still wrap everything the customer gets in our own 24/7 support, making us, rather than the customer, own the whole support and maintenance thing.

      Making a long story short: StarWind has the same unchanged offering, "a hyperconverged appliance still cheaper than a do-it-yourself kit, but all covered by our premium 24/7 support your DIY doesn't have". Except here, for the "data mover" we'll use Microsoft's and VMware's own SDS solutions, keeping our own software as a complementary free SKU to "enhance" them and to help differentiate us from other vendors shipping the same Dell or HP servers and the same S2D- or VSAN-based HCI SKUs.

      3. Very big Enterprises (20+ hypervisor hosts), compute and render farms, cloud and hosting providers. Either hyperconverged or "compute and storage segregated" scenarios. VM density varies from host to host, and some hosts may use Windows Server BYOL (Bring-Your-Own-License) for some or all of their VMs. The Microsoft and VMware hypervisors can be used at the same time (a so-called "multi-tenant" environment). Growth is unpredictable, and compute can be increased separately from storage and vice versa. Staffing is generally not an issue, but the bigger guys always keep doing some sort of restructuring to drive OpEx down, so… nobody knows about tomorrow's situation for sure!

      Both Microsoft Storage Spaces Direct and VMware Virtual SAN are really a bad choice here. The first reason is strictly financial: for a hyperconverged environment it's simply too expensive to pay $5,000+ in licensing fees per every single host if there are too many of them. 20+ hosts bring an associated $100,000+ price tag with them, and that's the MSRP of an exceptionally well-performing all-flash SAN covered by a super-strict SLA (Service Level Agreement) and delivering performance and a feature set Microsoft and VMware can only dream about. A tiny remark: Microsoft licensing has the benefit of unlimited licensed Windows Server VMs included, but if the customer already has the VMs licensed (Windows Server licenses purchased already, or BYOL performed), this argument is gone and Storage Spaces Direct and VMware Virtual SAN play on equal terms. The second reason is hybrid financial / architectural: if the compute and storage tiers need to be sized separately from each other, the whole hyperconverged concept fades, and the more classic "compute and storage segregated" model should be deployed instead. Utilizing expensive Windows Server Datacenter licenses to build a Scale-Out File Server storage-only tier is pretty much pointless, just because the server hardware dedicated to serving storage alone, plus the Windows Server Datacenter licenses, will outweigh the price of an all-flash SAN while still not being able to catch up with the all-flash SAN's IOPS, features, and included SLA. VMware Virtual SAN simply doesn't support a non-hyperconverged architecture, as it has to run on every single hypervisor host where running VMs consume the VSAN-managed virtual shared storage, meaning "compute only" data-less VSAN-licensed nodes are supported while "storage only" VSAN-unlicensed nodes aren't supported at all. The third reason is again architectural: it's about multi-tenant environments where both vSphere and Hyper-V are deployed in various proportions.
      VMware Virtual SAN doesn't provide any way to export the managed storage, so anybody (including a Hyper-V cluster, of course) outside of the VSAN cluster is out of the game immediately, while Microsoft can expose only SMB3 reasonably well, and VMware doesn't "understand" that protocol, asking instead for the more commonly adopted iSCSI and NFS, with which Microsoft isn't really good. This means Microsoft and VMware simply "talk different languages", and instead of having a single pool of storage shared and consumed by either a Microsoft or a VMware running cluster, the customer needs to maintain at least two separate storage pools, one for Microsoft and one for VMware. This brings the just recently buried unified central all-flash SAN idea back again, only because even if CapEx would be OK for two separate "VMware only" and "Microsoft only" solutions, OpEx would go through the roof for sure, and resource utilization would suck badly: "islands of storage" are always bad compared to a "single unified pool", which is good.
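      The financial argument above is simple multiplication; a quick sketch (the `sds_license_cost` helper is mine, and the flat $5,000 figure is the post's ballpark, not an official price list):

      ```python
      def sds_license_cost(hosts: int, per_host_usd: int = 5000) -> int:
          """Total SDS licensing cost when every host in the cluster
          must carry its own license, at a ballpark per-host price."""
          return hosts * per_host_usd

      # 20 hosts at $5,000 each crosses the $100,000 mark cited above.
      print(sds_license_cost(20))  # 100000
      ```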

      StarWind here can offer our Virtual SAN software to run either on a non-symmetric hyperconverged cluster (all nodes provide compute power, but not all of them provide storage at the same time; say you have a 40-node VMware vSphere cluster where only 8 nodes actually provide shared storage to the others, a sort of hyperconverged and non-hyperconverged mix) and provide it with virtual shared storage, or on a "compute and storage segregated" cluster, "powering" the storage-only tier. Unlike Microsoft and VMware, we don't expect our software to be licensed on every single node of the cluster (or of its storage-only "sibling" part, Microsoft SOFS, if the "compute and storage segregated" model is utilized); we license consumed capacity, and how many nodes of a hyperconverged cluster actually provide exposed storage, or how many instances of the StarWind service are running and where… we don't actually care about that! The customer now has an excellent ability to pay for exactly the storage resources consumed, and he's the one who decides which servers do compute, which servers do storage, and which servers do both at the same time. Flexibility! Alternatively, we can issue the customer hyperconverged or storage-only "ready nodes" in any combination, because we not only ship HCI but also have "storage only" SA (Storage Appliance) "building blocks". We can provide a hyperconverged N-node cluster for just a fraction of the typical VMware or Microsoft N-node cluster licensing costs, and we can provide a fully packed all-flash SAN equivalent to feed shared storage to a hypervisor cluster as well. Still everything way cheaper than the prospect could assemble himself, and still everything covered by our 24/7 support, just because after we start running the storage part of the cluster, we immediately start to own the whole big thing.
      As you can see, StarWind is a very natural choice here, both in terms of hardware and as the "data mover" virtual shared storage layer as well, if the customer is lucky (unlucky?) enough to have servers purchased already.

      4. Strictly "compute and storage segregated". This scenario is mostly described in detail as part of section 3 above. I just wanted to highlight it separately because I've talked about it in an "Enterprise" context, while it's absolutely possible that anybody who needs to grow compute and storage separately isn't a good fit for hyperconvergence at all, and he'll end up with "compute and storage segregated" instead of this "hype" trend. Making a long story short: Microsoft and VMware are either unreasonably expensive here, or don't support this implementation scenario at all, or don't talk protocols the other peer can understand (which brings the need for a "middle man" who can easily kill performance, raise unnecessary support questions, etc). StarWind has a very flexible licensing policy, supports "compute and storage segregated" in full, and exposes all possible protocols. Add here the "hyperconverged or storage alone, less expensive than DIY" and "24/7 support included instead of self-supported" messages, and you'll get a "full house".

      5. Databases, either non-virtualized or virtualized. If server virtualization is maybe 80% of the market by numbers, leaving 20% to databases, the money split actually flips everything the other way up: 80% of the money belongs to DBs, leaving 20% to so-called generic server virtualization. Technically, every single configuration with a very low VM density and a very high performance requirement per VM at the same time falls into this category. High performance is what makes this case (comparably few nodes and few VMs) actually very different from the very small SMBs and ROBOs we discussed in section 1, while initially these scenarios sound somewhat similar.

      VMware and Microsoft aren't any good fit here. The reasons are either technical (VMware VSAN can't be used with a non-virtualized SQL Server cluster, as that doesn't use vSphere; Microsoft can't provide virtual shared storage to Oracle RAC, as that can't talk SMB3; VMware Virtual SAN and Storage Spaces Direct scale well with many small consumers, but they don't really shine when one or a few consumers need all the IOPS from a single big unified namespace, etc) or financial (licensing Windows Server Datacenter on every host of the cluster just for a few VMs, or for a storage-only tier, is a waste of money, a reason we already touched on before).

      StarWind is a perfect choice here, both in terms of software (the so-called "data mover" creating the virtual shared storage pool) installed on top of a server set the customer already has, and as complete "ready nodes" for an HCI or storage-only infrastructure. StarWind supports non-virtualized Windows Server environments, properly supports all possible storage protocols, and can provide high-performance shared storage (with a decent amount of RAM, even matching in-memory TPC) to a single non-virtualized or a few virtualized consumers (VMs?), thanks to StarWind's aggressive RAM write-back cache, storage optionally pinned to RAM completely, in-line 4KB dedupe, log-structuring, and the data locality concepts used. In the case of SQL Server (virtualized or not), instead of deploying a very expensive SQL Server Enterprise to build AlwaysOn Availability Groups and utilizing "in-memory" DBs, the customer can use the much cheaper SQL Server Standard with AlwaysOn Failover Clustered Instances put on top of the "in-memory" storage StarWind provides. As a result, the customer gets a much better $/TPC ratio with very similar or even better uptime metrics. The same goes for Oracle RAC and Oracle's need for the expensive Enterprise edition plus special "in-memory" licenses: StarWind replaces all of these requirements with the old licensing scheme re-used, plus the in-memory storage we provide. Same about SAP R/4 vs SAP HANA. In the case of hardware purchased from StarWind, we'll also bring in our discount to make the new hardware more affordable to the customer, and we'll keep everything wrapped in our 24/7 support, whichever route the customer chooses: software-only, or HCA / storage one.

      Conclusion: StarWind Virtual SAN is a complementary rather than a competitive solution to Microsoft Storage Spaces Direct and VMware Virtual SAN. In the case when we see our shared customer doesn't need Microsoft S2D or VMware VSAN, we'll use our own software and that's it; if we see the customer is going to benefit from a combined solution, we'll provide him with a stack where Microsoft S2D (or VMware VSAN) will co-exist with StarWind Virtual SAN on the same hardware. Technically, what we do here at StarWind is "plug the holes" in the Microsoft and VMware product and positioning strategy in terms of the features they miss, and we take the whole hyperconvergence thing to another level by making good-quality hardware even more affordable and 24/7 support a routine, "checkbox" feature. Yes, we'll split our hardware discount with you to make our "ready nodes" even cheaper than anything you could build yourself on the DIY concept, and, yes, we'll "babysit" your clusters for you so you don't need to do anything yourself!

      posted in IT Discussion
    • RE: Xen Orchestra on Ubuntu 15.10 - Complete installation instructions

      @hobbit666 said in Xen Orchestra on Ubuntu 15.10 - Complete installation instructions:

      @scottalanmiller said in Xen Orchestra on Ubuntu 15.10 - Complete installation instructions:

      The difference is that DRBD does not do scale out, you need Starwind for that. But for two node mirrored replication, it's unbeatable. It's the same technology we use for storage clustering on all Linux systems.

      Yeah I think we are never going to go over 2 hosts for the Citrix farm. So that will do. 🙂 Thx again Scott.

      Think about separating your storage and hypervisor nodes. Management can be simplified quite a lot (storage put into "leave and forget" mode).

      posted in IT Discussion
    • RE: Xen Orchestra on Ubuntu 15.10 - Complete installation instructions

      @scottalanmiller said in Xen Orchestra on Ubuntu 15.10 - Complete installation instructions:

      DRBD is used as the base for HA in many NAS storage systems.

      I think "many" should be "majority" BTW.

      posted in IT Discussion
    • RE: StarWind vs Storage Spaces Direct

      @John-Nicholson said in StarWind vs Storage Spaces Direct:

      @scottalanmiller said in StarWind vs Storage Spaces Direct:

      My take on it is that after 20 years of Windows Software RAID being totally insane to implement in production, we need to wait at least one or two server release cycles before we have enough time for Storage Spaces Direct to have collected enough reliability data to even be a remote consideration. Microsoft's track record here speaks for itself. The entire hardware RAID industry exists almost solely to tackle this one software issue with Windows. Storage Spaces was just an attempt to rename it to hopefully get out of touch Windows Admins to think that there was some hot, new feature worth putting their data on and a lot got burned.

      There's some nasty @#$@ in there. Mainly write order fidelity isn't working yet with ReFS...

      ReFS with integrity streams enabled is a 100% log-structured file system, pretty much like StarWind LSFS or NetApp WAFL or Nimble CASL (except StarWind and Nimble and NetApp are much more effective because of 4MB+ pages touching all spindles in a parity RAID, while MSFT is still below 64KB most of the time). With integrity streams disabled, it's just NTFS without a scrub process (and with no dedupe). We did a cool review here, take a look:

      https://slog.starwindsoftware.com/refs-virtualization-workloads-test-part-1/

      Good luck!

      posted in IT Discussion
    • RE: StarWind vs Storage Spaces Direct

      Guys, I appreciate your time and trust. I'm currently in London at SpiceWorld, so I'm a bit head over heels. Give me a day or two and I'll write up a detailed story here. LOTS of things to mention.

      posted in IT Discussion
    • RE: Need a Good Bottle of Scotch

      @coliver said in Need a Good Bottle of Scotch:

      @thanksajdotcom said in Need a Good Bottle of Scotch:

      Oh so many options! I'm looking for some good top shelf stuff...not like millionaire level spending, but something that's around $100/bottle or even a bit more.

      I've tried the 16, 18, and 22 Glenlivet. The 22 was surprisingly smooth. The 16 was harsh.

      16 isn't a "normal" year, so it's some "special edition" and it's VERY different from the COTS 15 yo.

      posted in IT Discussion
    • RE: Need a Good Bottle of Scotch

      @scottalanmiller said in Need a Good Bottle of Scotch:

      Laphroaig is excellent

      Smoky as hell 😉

      posted in IT Discussion