    StorageNinja
    • Profile
    • Following 1
    • Followers 10
    • Topics 3
    • Posts 988
    • Groups 1

    Posts

    • RE: Scale HC3 cluster for sale

      @davedemlow said in Scale HC3 cluster for sale:

      SM863

      Samsung Midrange. Not bad actually. Those are closer to 90 cents a GB if memory serves (If Dell Firmware, could maybe get a buck 20 per GB).

      posted in IT Business
      StorageNinja
    • RE: VMWorld 2017

      @Grey said in VMWorld 2017:

      @scottalanmiller said in VMWorld 2017:

      @Tracy_Burton said in VMWorld 2017:

      @Grey said in VMWorld 2017:

      Unless SW, as a company, changes policies, they won't see me.

      I understand but I enjoy going to Austin and I have a good friend that I can see so SW is as important to me as the trip itself.

      Are you going to MC this year?

      Hopefully he makes it out to MC so he can hang with the team that got them their new SANs in place 😉

      CANNOT. No time off. Boss would want me to take this as PTO and that bank is empty. I took time in May and now I have to start saving up for my August '18 trip.

      Quit, find new job with more PTO?

      posted in IT Discussion
      StorageNinja
    • RE: Sage 50 Quantum in Hyper-V VM

      @EddieJennings said in Sage 50 Quantum in Hyper-V VM:

      Officially tested in QA
      Support personnel trained on its use

      My perspective (disclaimer: I work for a software company and deal with engineering and PM on support statements almost daily):

      Once you commit to supporting something, you're on the hook to isolate all faults and jointly work with the other parties involved until it's fixed.

      As one example, we certified a RAID controller for use with our platform. That RAID controller had a bug. Was it our code's problem? No. Did customers call us and blame us and expect us to drive a solution? Sure.

      We used our engineers and our joint support agreement with said hardware vendor to work for weeks around the clock (engineering time isn't cheap) and drive a solution. If you add up all the GSS hours, engineering hours, and project management hours spent dealing with lifecycle for a single RAID controller vendor, I'm sure it cost us millions in 2016.

      Sage is saying that they don't run Hyper-V in QA for this app. (Running Hyper-V would mean not just running Hyper-V on a box and calling it good, but regression testing against Service Packs, Hyper-V guest tools updates, and typically N-1 releases of the major product, so they would need 2012 R2 tested as well as 2016. For all these variations they would need to dedicate half a dozen servers at a minimum. They would need to extend the automation tools doing their QE deployments to work with Hyper-V, and I've seen dozens of cloud management and orchestration products that don't support Hyper-V. If they don't have a cloud management product in place, they might have to write raw API and PowerCLI calls to do the orchestration for the testing on Hyper-V and KVM and Xen and other platforms. This could easily require hiring 1-2 more FTEs just to manage and maintain all of it.)
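
      To put a rough number on how that QA matrix grows, here is a minimal sketch; the version lists are purely illustrative assumptions, not Sage's actual matrix:

      ```python
      # Rough sketch of how a support-statement QA matrix multiplies.
      # The version lists below are illustrative assumptions, not Sage's real matrix.
      from itertools import product

      hyperv_hosts = ["2012 R2", "2016"]        # N-1 plus current host OS
      service_packs = ["RTM", "latest update"]  # patch levels to regress against
      guest_tools = ["current", "previous"]     # Hyper-V guest tools builds
      app_releases = ["current", "N-1"]         # releases of the app itself

      matrix = list(product(hyperv_hosts, service_packs, guest_tools, app_releases))
      print(f"{len(matrix)} combinations for Hyper-V alone")  # 16 here

      # Multiply again for every additional platform (KVM, Xen, ...) you promise to support.
      platforms = 3
      print(f"~{len(matrix) * platforms} runs per regression cycle across {platforms} platforms")
      ```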

      Sage has a lot of products like this with relatively small market share. Given the choice between spending millions on testing and providing a support statement for Hyper-V/Xen/KVM, and shifting engineering resources to QE and escalation for these platforms, I suspect most of their customers would be annoyed if their roadmap was killed just to maintain steady state for these other platforms.

      I guess what I'm getting at here is that issuing a support statement is INCREDIBLY far from free.

      I worked with Sage in the past to deploy their stuff on VMware. Once I told them what I was doing and what type of storage, etc., I was deploying, they generally said "yeah, that sounds fine."

      Here is an example of their support statement on Virtualization.

      https://support.na.sage.com/selfservice/viewContent.do?externalId=54620&sliceId=1

      posted in IT Discussion
      StorageNinja
    • RE: Scale HC3 cluster for sale

      @John-Nicholson said in Scale HC3 cluster for sale:

      Hc1150 3.38tb

      So the cluster is likely best sold for parts.

      3TB NL-SAS drives - Worth about $100 each

      480GB SSD (I'll assume since they were being cheap this is a PM863, which has awful latency consistency for writes). Worth about $240.
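
      As a back-of-the-napkin check on the parts-out value (the node count and per-node drive counts below are assumptions for illustration, not the actual HC1150 configuration):

      ```python
      # Rough parts-out value using the per-drive figures above.
      # Node count and drives per node are assumptions for illustration only.
      nl_sas_price = 100   # ~$ per 3TB NL-SAS drive
      ssd_price = 240      # ~$ per 480GB SSD

      nodes = 3
      nl_sas_per_node = 3
      ssd_per_node = 1

      total = nodes * (nl_sas_per_node * nl_sas_price + ssd_per_node * ssd_price)
      print(f"~${total} for the drives across the cluster")
      ```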

      posted in IT Business
      StorageNinja
    • RE: Scale HC3 cluster for sale

      @mroth911 said in Scale HC3 cluster for sale:

      Hc1150 3.38tb raw storage/1.74tbu 64gb ram

      Support ended need to renew support contract

      I have more resources in my NUC cluster on my desk. I think I'll pass to avoid the thermal/power overhead this thing would draw.

      posted in IT Business
      StorageNinja
    • RE: VMWorld 2017

      @Tracy_Burton said in VMWorld 2017:

      @scottalanmiller I'm a little on the fence. Its so damn expensive (and long). I would almost rather save the company some cash and go to SpiceWorld instead. Does anyone even go to SpiceWorld anymore?

      It's long for you? I have to fly in on Friday. I have Sat morning VTSP classes to teach, and Partner and TAM day stuff on Sunday as well as VM Underground (the conference within a conference), so by the time I crawl out of Vegas a week later my liver hurts, I'm the mayor of 2 bars, and the dealers at the $5 table at the Ellis Island know me by name...

      If you want a slightly shorter, smaller VMworld go to the EU one (Barcelona).

      posted in IT Discussion
      StorageNinja
    • RE: VMWorld 2017

      @scottalanmiller said in VMWorld 2017:

      Actually it seems to still be growing, but the core group has shrunk a lot. It's less and less people that know each other and more and more random one timers that aren't active in the community or don't even know that it is there. Although after the London issue this year, I wonder if Austin will have growth. Lots of people still wondering if it will go forward

      With Dell World out of Austin, they might pull in some of the SMB or gov people who are regional (so travel costs are low) and who can't go to Vegas for policy reasons.

      posted in IT Discussion
      StorageNinja
    • RE: VMWorld 2017

      @Tracy_Burton

      I'm going. Flying in Friday evening and flying out the following Friday mid-day. I should have 3-4 presentations but will be around the bars, parties, casinos, etc.

      posted in IT Discussion
      StorageNinja
    • RE: 100Gbe NICs Hitting the Market

      @stacksofplates They will end up costing the same as 10/40 before too long. Same number of lanes used.

      Note: RDMA and RoCE require your protocol to support it, and your platform to support it end to end.
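
      The lane math behind "costing the same as 10/40 before too long", assuming the usual 4-lane configurations (the per-lane rates are the standard lane speeds, not vendor-specific figures):

      ```python
      # 40GbE and 100GbE typically use 4 electrical lanes; only the per-lane rate changes.
      lanes = {"10GbE": (1, 10), "40GbE": (4, 10), "100GbE": (4, 25)}  # (lane count, Gb/s per lane)

      for name, (count, rate) in lanes.items():
          print(f"{name}: {count} lane(s) x {rate} Gb/s = {count * rate} Gb/s")
      ```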

      posted in News
      StorageNinja
    • RE: Was It the Last IT Guys Fault

      @IRJ said in Was It the Last IT Guys Fault:

      @Carnival-Boy said in Was It the Last IT Guys Fault:

      Probably the main thing that puts me off moving jobs is that it means moving in to someone else's shit, which you then have to spend months, or even years, sorting out.

      That is definitely true, but generally when moving jobs, the pay increase is significant. I haven't changed jobs for less than $10k and sometimes closer to $20k.

      There are jobs that are "net new roles" (maybe a DBA for a new project), so you get to avoid some technical debt. Personally I didn't mind cleaning up crazy messes as long as I had the budget to do something about it (the joy of working for an MSP/consulting company is you can tell people what it costs to fix, and if they balk you just go find someone else with money).

      I"ve changed jobs for as little as 4K (but ended up being 8K after 90 day bump).
      and I've changed jobs for 80K.

      The thing I've seen with changes is that they would advance my career and give me skills I needed to move up and on. I never took a pay raise for a job that would hold me back.

      posted in IT Discussion
      StorageNinja
    • RE: vmware load balancing

      @BBigford

      Couple things...

      1. I've load balanced Horizon View with a Netscaler (It works). Even the free version works.

      2. NetScaler isn't really "native" per se; it's something that Citrix acquired and likes to bundle-sell. What was native was running CSGs with RR DNS. (Note you can use RR DNS for View Security and Connection Servers as well; I've never been the biggest fan, but it has always worked; see the sketch after this list.)

      3. VMware has NSX, which can do basic L4 LBs for View and is integrated from a management and hypervisor standpoint for some of its services. Note most people going down the NSX path will do it for advanced stuff like micro-segmentation and network introspection offload (I've seen it used for Citrix for these reasons as well).

      4. F5 has a nice "license per user" option for Horizon View, and they can completely replace the edge security server function if you go down this path.

      5. KEMPs are stupid cheap to use for LB services when you just need basic stuff. I think most of the NetScaler's functionality (layer 7 stuff, host-specific balancing) is mitigated by Horizon including DRS and moving VMs on the back end to balance load (which mostly happens AFTER the initial connection anyway). The reason Citrix historically needed this stuff is that you proxied connections to bare metal and couldn't reshuffle heavy users after connection.
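
      The RR DNS approach from point 2 is literally just publishing multiple A records for one name. A minimal sketch using dnspython; the hostname and the addresses shown are made up for illustration:

      ```python
      # Minimal illustration of round-robin DNS for View Connection Servers.
      # "view.example.com" and the resulting addresses are hypothetical.
      import itertools
      import dns.resolver

      answers = dns.resolver.resolve("view.example.com", "A")
      servers = [rr.address for rr in answers]
      print(servers)  # e.g. ['10.0.0.11', '10.0.0.12'] - clients rotate through these

      # Dumb client-side rotation; no health checking, which is the usual RR DNS caveat.
      rotation = itertools.cycle(servers)
      print(next(rotation), next(rotation))
      ```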

      I've always felt GSLB's are "dumb" and would rather just pay a 3rd party DNS provider to manage failover for me. Very few people need their functionality.

      Example NSX with View Config.
      https://elgwhoppo.com/2015/09/25/load-balancing-horizon-view-with-nsx/

      The key to understanding View is understanding what all talks to what 🙂

      http://www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/techpaper/vmware-horizon-7-end-user-computing-network-ports-diagram.pdf

      posted in IT Discussion
      StorageNinja
    • RE: Virtual Machines vs Containers

      @scottalanmiller Containers, with some exceptions (ESXi Instant Clone and Photon fast boot), are orders of magnitude faster to create than VMs.

      Containers running in a shared OS instance also can lower CPU overhead on the scheduler. Until recently Containers also sucked pretty bad at high IO activities (still not amazing, but a bit better).

      posted in IT Discussion
      StorageNinja
    • RE: Virtual Machines vs Containers

      @stacksofplates said in Virtual Machines vs Containers:

      I know KVM can do dynamic resource allocation. You have to set a max number beforehand, but you can change RAM and CPU on the fly as long as its the same or under your max.
      Not sure about other hypervisors.

      ESXi supports this (Hot Add is the term used). It requires the guest OS to support it.
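
      For reference, a minimal pyVmomi sketch of hot-adding vCPU and RAM to a running VM. It assumes CPU/memory hot add was enabled on the VM before power-on and that the guest OS supports it; the vCenter hostname, credentials, and VM name are placeholders:

      ```python
      # Hot-add vCPU/RAM to a running VM via pyVmomi (sketch; error handling omitted).
      # vcenter.example.com, the credentials, and "app-vm-01" are placeholders.
      import ssl
      from pyVim.connect import SmartConnect, Disconnect
      from pyVmomi import vim

      ctx = ssl._create_unverified_context()
      si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                        pwd="secret", sslContext=ctx)

      # Find the VM by name with a simple container view walk.
      content = si.RetrieveContent()
      view = content.viewManager.CreateContainerView(content.rootFolder,
                                                     [vim.VirtualMachine], True)
      vm = next(v for v in view.view if v.name == "app-vm-01")

      # Hot add only works if cpuHotAddEnabled/memoryHotAddEnabled were set before
      # power-on and the guest OS supports it.
      spec = vim.vm.ConfigSpec(numCPUs=4, memoryMB=16384)
      vm.ReconfigVM_Task(spec=spec)

      Disconnect(si)
      ```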

      posted in IT Discussion
      StorageNinja
    • RE: Virtual Machines vs Containers

      @rustcohle

      Why not use both....

      Have a Docker/Kubernetes endpoint that "forks" a fresh VM in a few ms when a container command is run, and that allows you to do full resource management and network micro-segmentation, while your developers get the "speed and ease of deployment" of a container?

      Most people don't need 20K containers; they just have developers who want to use existing container framework tools for deployment.
      Virtual machine admins don't want to see a single VM using 5000 IPs from DHCP with no visibility into what resources it's consuming, or to lose the ability to secure 3-tier apps and the like.

      Talking to IDC and others, the majority of containers are sitting inside VMs and will continue to. Bare-metal container farms are only for the most extreme use cases.
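
      From the developer's side this is transparent: they point their existing Docker tooling at the endpoint and each "container" comes up backed by its own VM. A minimal sketch with docker-py; the endpoint URL is hypothetical, and whether TLS material is required depends on how the endpoint is configured:

      ```python
      # Developers keep their existing Docker workflow; the endpoint URL is hypothetical.
      import docker

      # Point the standard Docker client at the container endpoint instead of a local daemon.
      client = docker.DockerClient(base_url="tcp://container-endpoint.example.com:2376")

      # To the developer this is just "docker run"; behind the scenes the platform can
      # back it with a dedicated VM, its own IP, and normal VM-level resource controls.
      container = client.containers.run("nginx:alpine", detach=True, name="demo-web")
      print(container.status)
      ```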

      http://www.vmware.com/products/vsphere/integrated-containers.html

      posted in IT Discussion
      StorageNinja
    • RE: Xen and KVM - Who is using what and why?

      @FATeknollogee said in Xen and KVM - Who is using what and why?:

      Saying one moved to Scale because it's KVM is like saying I moved to Nutanix because it's KVM!

      Let me re-phrase the question: Was the move to Scale done because it was technically superior to XS or was it financially motivated?

      The challenge for both Scale and Nutanix is that KVM REALLY isn't the reason, as they don't expose the native KVM APIs, so products that would layer on top of KVM (CloudForms, vRealize Automation, etc.) are effectively broken unless re-written for their own proprietary APIs. (In Scale's case, given their target customers and market, this doesn't really matter; in Nutanix's case it leads customers with heavy interop needs with other things, like a CMP, to end up running vSphere for the most part.)

      In this regard, the reasons someone might choose KVM (open platform) or Xen (public, robust APIs) are actually the LAST reasons you would choose Scale/Nutanix, as they are more difficult to interact with from 3rd-party tools than an AS/400 🙂

      posted in IT Discussion
      StorageNinja
    • RE: Xen and KVM - Who is using what and why?

      @Dashrender This is really the case with most cloud platform overlays also.

      If I'm "using" Pivitol Cloud Foundry then the back end (Azure/AWS/SoftLayer) matters less for the day to day.

      I've seen customers use CMP's that could provision to In house, AWS, Softlayer and it actually showed the cost models for each on a given deployment (so they could cross compare).

      On one hand the hypervisor (for day to day) matters less if it's abstracted but it still does matter. In other ways it matters more (if the platform associated with it, offers network virualization, features that lower cost or speed actions like forked VM). The support model matters a hell of al to more than people give it credit (Why people like HCI appliances, the all in support model).

      posted in IT Discussion
      StorageNinja
    • RE: Vendor Mistake - VMware Infrastructure Decisions

      @NetworkNerd said:

      @scottalanmiller said in Vendor Mistake - VMware Infrastructure Decisions:

      @John-Nicholson said in Vendor Mistake - VMware Infrastructure Decisions:

      @scottalanmiller If you're doing a 3-node vSAN for a low-cost deployment you should go single socket and get more cores per proc. Leaves you room to scale later and cuts the vSAN cost in half.

      They are likely stuck here with whatever was already bought. But good info for a greenfield deployment. Or if they manage to return these for three R730 for example.

      I'm not entirely certain we'll be stuck with what we bought. My boss and I were on a conference call with folks from Dell yesterday afternoon. They were talking about different options in SAN devices that would meet our requirements (whether it was Compellent, EMC, etc.), but the biggest issue was that these options were so expensive. Again, not one of them mentioned the potential for a VSAN deployment, so we brought it up (using either VMware VSAN or Starwind). The Dell team has to go back and redesign a quote for gear that would better support a VSAN deployment. In their words, they would likely have to return the servers and the PowerVault we have right now (not sure about the other gear - PowerConnect switches, TrippLite devices, APC PDUs, AppAssure appliance, and ip KVM switch).

      I'll be curious to see what comes back when they re-quote.

      Honestly, it may just be a matter of the inside team not being familiar with it yet (they just re-assigned who has to know which products, and people are flying all over the place training people). Worst case, call the VMware inside SDS desk (they are in Austin, right across the parking lot from Spiceworks HQ). Those guys have been piecing together vSAN quotes and have heads dedicated to working with your Dell team and making sure stuff is good.
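
      On the single-socket suggestion quoted above, the "cuts the vSAN cost in half" part is just per-socket licensing math; the list price used below is a placeholder, not a real quote:

      ```python
      # vSAN (like vSphere) is licensed per CPU socket; the price here is a placeholder.
      list_price_per_socket = 2500   # hypothetical $ per socket

      nodes = 3
      for sockets_per_node in (2, 1):
          licenses = nodes * sockets_per_node
          print(f"{sockets_per_node} socket(s)/node: {licenses} licenses = ${licenses * list_price_per_socket}")
      ```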

      Now off to pack for ANZ for 2 weeks to do some of the aforementioned training....

      posted in IT Discussion
      StorageNinja
    • RE: Vendor Mistake - VMware Infrastructure Decisions

      @scottalanmiller said in Vendor Mistake - VMware Infrastructure Decisions:

      @NetworkNerd said in Vendor Mistake - VMware Infrastructure Decisions:

      @scottalanmiller said in Vendor Mistake - VMware Infrastructure Decisions:

      @NetworkNerd said in Vendor Mistake - VMware Infrastructure Decisions:

      @scottalanmiller said in Vendor Mistake - VMware Infrastructure Decisions:

      @NetworkNerd said in Vendor Mistake - VMware Infrastructure Decisions:

      I'm also assuming you are turning RAID off on each host so Starwind can provide RAIN for you (thus creating the storage pool).

      No, you leave RAID on on the hosts and Starwind provides Network RAID. There is no RAIN here.

      So you'd leave RAID on and then make a small local VMFS datastore for the Starwind VM to run on so that Starwind can use the rest of the unformatted storage on the host for its network RAID?

      You just follow the Starwind install guide. But yes, that is what is going on.

      After reading each of these, I finally understand how it works:
      http://www.vladan.fr/starwind-virtual-san-product-review/
      http://www.vladan.fr/starwind-virtual-san-deployment-methods-in-vmware-vsphere-environment/
      https://www.starwindsoftware.com/technical_papers/HA-Storage-for-a-vSphere.pdf

      So, in a nutshell, you do use RAID on the host as you normally would and even provision VMware datastores as you normally would. It's the VMDKs you present to the Starwind VM that get used as your virtual iSCSI target. And you can add in the cache size of your choice from the SSD datastores on your ESXi host.

      So if I'm patching servers like I should, I'd have to patch the VMs running Starwind as well. Oh man would I hate to install a patch from MS that bombs my storage. I guess theoretically that isn't too different from installing some firmware on a physical SAN that has certain bugs in it. If one Starwind VM gets rebooted, you still have your replication partner presenting storage to the hosts and are ok.

      Right. And Hyper-V alone has very tiny, solid patches. Nothing like patching the OS.

      Hyper-V with a console is just as big as Windows Server from a patching perspective, and even Core installs see patches with regular (often monthly) frequency. The install requirements for the ~150MB VMkernel are tiny vs. the 10GB+ for Hyper-V Core installs. ESXi regularly goes ~6 months without needing a patch. Most of the patch surface is in upper-stack things.

      posted in IT Discussion
      StorageNinja
    • RE: Vendor Mistake - VMware Infrastructure Decisions

      @KOOLER said in Vendor Mistake - VMware Infrastructure Decisions:

      @NetworkNerd said in Vendor Mistake - VMware Infrastructure Decisions:

      Before I started here a couple of months ago, my boss purchased a couple of Dell R630s and a PowerVault MD3820i (20 drive bays) to be our new infrastructure at HQ. We have dual 10Gb PowerConnect switches and two UPS devices, each connected to a different circuit. The plan is to rebuild the infrastructure on vSphere Standard (licenses already purchased) and have a similar setup in a datacenter somewhere (replicate the SANs, etc.). We're using AppAssure for backups (again, already purchased).

      The PowerVault has 16 SAS drives that are 1.8 TB 7200 RPM SED drives and 4 SAS drives that are 400 GB SSD for caching. Well, we made disk groups and virtual disks using the SEDs (letting the SAN manage the keys), but it turns out we cannot use the SSDs they sent us for caching. In fact, they don't have SED SSDs for this model SAN.

      At the time the sale was made, Dell ensured my boss everything would work as he requested (being able to use the SSDs for caching with the 7200 RPM SED drives). Now that we know this isn't going to be the case, we have some options.

      First, they recommended we trade in the PowerVault for a Compellent and Equalogic. The boss did not want that because he was saying you are forced to do RAID 6 on those devices and cannot go with RAID 10 in your disk groups. As another option, Dell recommended we put the SSDs in our two hosts and use Infinio so we can do caching with the drives we have. In this case we would make Dell pay for the Infinio licenses and possibly more RAM since they made the mistake.

      But I'm wondering if perhaps there is another option. Each server has 6 drive bays. So we have 20 drives total. Couldn't we have Dell take the SAN back, give us another R630, and pay for licenses of VMware vSAN for all 3 hosts? Each server has four 10 Gb NICs and two 1 Gb NICs. That might require we get additional NICs. But in this case, I'm not sure drive encryption is an option or if we can utilize the SEDs at all.

      I've not double-checked the vSAN HCL or anything for the gear in our servers as this is just me spit balling. Is there some other option we have not considered? We're looking to get the 14 TB or so of usable space that RAID 10 will provide, but the self-encrypting drives were deemed a necessity by the boss. And without some type of caching, we will not hit our IOPs requirements.

      Any advice is much appreciated.

      Keep R630s, refund PowerVault, refund AppAss. Get VMware VSAN and Veeam (accordingly).

      I've got a non-trivial number of R630s in my lab running vSAN. You'll want the HBA330 ideally (you can settle for the PERC H730 if you already have it), but otherwise the server works fine. The only limits vs. the R730/R730XD are fewer drive bays and no GPU support.
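
      As an aside, the "14 TB or so of usable space" in the quoted post checks out for RAID 10 across the 16 spinning SEDs (ignoring the TB vs. TiB distinction and formatting overhead):

      ```python
      # Usable capacity for RAID 10 across the 16 SED spinners described above.
      drives = 16
      drive_tb = 1.8

      raw = drives * drive_tb   # 28.8 TB raw
      usable = raw / 2          # RAID 10 mirrors everything, so ~half is usable
      print(f"{raw:.1f} TB raw -> ~{usable:.1f} TB usable")   # ~14.4 TB
      ```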

      posted in IT Discussion
      StorageNinja
    • RE: Vendor Mistake - VMware Infrastructure Decisions

      @scottalanmiller said in Vendor Mistake - VMware Infrastructure Decisions:

      @NetworkNerd said in Vendor Mistake - VMware Infrastructure Decisions:

      @scottalanmiller said in Vendor Mistake - VMware Infrastructure Decisions:

      @John-Nicholson said in Vendor Mistake - VMware Infrastructure Decisions:

      @scottalanmiller If you're doing a 3-node vSAN for a low-cost deployment you should go single socket and get more cores per proc. Leaves you room to scale later and cuts the vSAN cost in half.

      They are likely stuck here with whatever was already bought. But good info for a greenfield deployment. Or if they manage to return these for three R730 for example.

      I'm not entirely certain we'll be stuck with what we bought. My boss and I were on a conference call with folks from Dell yesterday afternoon. They were talking about different options in SAN devices that would meet our requirements (whether it was Compellent, EMC, etc.), but the biggest issue was that these options were so expensive. Again, not one of them mentioned the potential for a VSAN deployment, so we brought it up (using either VMware VSAN or Starwind). The Dell team has to go back and redesign a quote for gear that would better support a VSAN deployment. In their words, they would likely have to return the servers and the PowerVault we have right now (not sure about the other gear - PowerConnect switches, TrippLite devices, APC PDUs, AppAssure appliance, and ip KVM switch).

      I'll be curious to see what comes back when they re-quote.

      Why do they have to design a quote? You just tell them what you want, they give you a price. Other than "looking up the price", what are they doing?

      Verifying the HCL (for vSphere, and vSAN for the storage devices). If they are 13th-gen servers, though, they should be adaptable; it's just a matter of getting a supported HBA (hint: you want the HBA330) and supported drives. One other thing I'll comment on in general (not related to Dell or vSAN) is to avoid Intel NICs and go Broadcom. LSO/TSO seems not to be stable on large frames (this can be mitigated by disabling offload at the cost of a few percent of CPU if you need to). After years of hating Broadcom NICs this feels weird, and Intel SHOULD be fixing it at some point this quarter, but after 2 years of putting up with this on large frames I'm not that hopeful.

      posted in IT Discussion
      StorageNinja