Replacing the Dead IPOD, SAN Bit the Dust
-
@NerdyDad said in Replacing the Dead IPOD, SAN Bit the Dust:
I also have concerns about it of course, such as how is hyperconvergence better than the current hosts/storage setup if it's all in one box?
Because it is NOT all in one box. All in one box is what you have now. Hyperconvergence is the opposite.
Did you want the video from MangoCon? I diagram the difference there.
-
@scottalanmiller 1 TB drives.
-
@NerdyDad said in Replacing the Dead IPOD, SAN Bit the Dust:
How about backing up outside of the box, to say a local NAS box or a private cloud storage?
Same as anything else. Local, cloud, both. Whatever works for your needs. I like a local Synology, ReadyNAS or Exablox device (or SAM-SD, of course).
-
@NerdyDad said in Replacing the Dead IPOD, SAN Bit the Dust:
@scottalanmiller 1 TB drives.
Bigger than I would have guessed.
-
@scottalanmiller Not yet, but plan to.
-
I have 2 Dell R720XD each with 10x1TB NLSAS in OBR10 and 1 older Dell R710 with 4 10K SAS drives in OBR10 running ESXi 6 (all installed on redundant SD card or USB flash). They are backed up to a Synology DS1813+ with 8x4TB Seagate Constellation drives in OBR10 and then backups are uploaded throughout the day to Amazon S3 and Glacier (a rough sketch of that kind of upload step is at the end of this post).
Back in mid-2013, the cost for the R720xd servers was $7,229 apiece with 4-hour ProSupport. The Synology was $999 (diskless) and the disks came to $2,196, so the backup target totaled $3,195.
Two Dell R720XD servers and one loaded Synology NAS came to about $17,653 (USD), which is half of the lowest end of what you are looking at for a SAN.
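For anyone wanting to set up something similar, here is a minimal sketch of what a scheduled S3 push can look like. It is not my actual script; it assumes boto3 is installed with AWS credentials already configured, and the folder path and bucket name below are placeholders.

```python
# Minimal sketch only - assumes boto3 is installed and AWS credentials are
# already configured. The backup folder and bucket name are placeholders.
from pathlib import Path

import boto3

s3 = boto3.client("s3")

def push_backups(backup_dir: str, bucket: str) -> None:
    """Upload every file in the local backup folder to the S3 bucket."""
    for path in Path(backup_dir).iterdir():
        if path.is_file():
            s3.upload_file(str(path), bucket, path.name)
            print(f"Uploaded {path.name} to s3://{bucket}/{path.name}")

if __name__ == "__main__":
    push_backups("/backups/veeam", "example-backup-bucket")
```

Run something like that on a schedule (cron, Task Scheduler, whatever you already use), and lifecycle rules on the bucket can age the objects off to Glacier.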
-
Scale's entry level high availability cluster starts at $25K. That might actually be enough here, but I doubt it. But it gives you an idea of where things start.
-
Tagging @scale and @scale_alex, I'm betting they can get you a way better system than what you have now for the same price or less than what you're talking about for a replacement cost.
If not that, something like what wrx7m mentioned.
-
@wrx7m said in Replacing the Dead IPOD, SAN Bit the Dust:
I have 2 Dell R720XD each with 10x1TB NLSAS in OBR10 and 1 older Dell R710 with 4 10K SAS drives in OBR10 running ESXi 6 (all installed on redundant SD card or USB flash).
Doing anything like Starwind between them?
-
@wrx7m We have a couple of Synologys around our enterprise and are currently using Veeam to back up VMs from their respective local hosts. But I would also have the same concern about the Synology that I am having with this current SAN: it will eventually be the bottom part of the pyramid.
-
@NerdyDad said in Replacing the Dead IPOD, SAN Bit the Dust:
@wrx7m We have a couple of Synologys around our enterprise and are currently using Veeam to back up VMs from their respective local hosts. But I would also have the same concern about the Synology that I am having with this current SAN: it will eventually be the bottom part of the pyramid.
Ah, no, it won't do that because your backup is not part of your dependency chain - it's not part of the architecture. The backup system is an independent system with its own risk. It fails separately from your overall system. That's what makes it a backup.
-
If you want to take your backups to the "next level" of reliability, you can do this with nearly any NAS device (Synology, ReadyNAS, SAM-SD, etc.) and use a tool like rsync to replicate between two units to provide for failover (rough sketch at the end of this post). But this is generally considered overkill for a backup because, by definition, a backup is already a copy and not the original. So if you lose your backup system, you just repair or replace it, kick off a fresh backup, and you are back in business. No downtime.
Also, it is common to have your primary backup target, like a Synology, and then send to tape, a USB drive, or something else removable.
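To make that concrete, here is a rough, untested sketch of the kind of replication job I mean. The hostnames and paths are placeholders, and it assumes rsync over SSH is enabled on both NAS units with key-based authentication already in place.

```python
# Rough sketch only - hostnames and paths are placeholders, and it assumes
# rsync over SSH is enabled on both NAS units with key-based auth in place.
import subprocess

SOURCE = "/volume1/backups/"                        # share on the primary unit
DEST = "backup@nas2.example.lan:/volume1/backups/"  # secondary unit

def replicate() -> None:
    """Mirror the primary backup share to the secondary unit with rsync."""
    subprocess.run(
        ["rsync", "-az", "--delete", SOURCE, DEST],
        check=True,  # raise if rsync exits non-zero
    )

if __name__ == "__main__":
    replicate()
```

Put that on a schedule (or use whatever replication job your NAS UI offers) and the second unit stays a warm copy you can restore from if the first one dies.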
-
Remember, the thing that makes your SAN so much of a problem is that if the SAN fails, everything else fails. Or if the hosts fail, everything fails. They are dependent on each other.
But if your backup fails, nothing is impacted. Or if your system fails, the backup is still good.
-
@scottalanmiller said in Replacing the Dead IPOD, SAN Bit the Dust:
@wrx7m said in Replacing the Dead IPOD, SAN Bit the Dust:
I have 2 Dell R720XD each with 10x1TB NLSAS in OBR10 and 1 older Dell R710 with 4 10K SAS drives in OBR10 running ESXi 6 (all installed on redundant SD card or USB flash).
Doing anything like Starwind between them?
I am not at this time. I was doing the now-defunct, overly-complicated, under-supported vSphere Storage Appliance v5. It was great until it had issues with some of the services that were required to run and keep track of the heartbeat. Early this year, I basically tore the whole thing out and rebuilt my VI in stages. It is much simpler and more elegant, and since I added 10GbE, live vMotion only takes a couple of minutes for each VM (minus my big ass file server).
-
@scottalanmiller That is a good point. I'm almost to the point of questioning everything. Such as, what is the meaning of life? But that's another discussion for another day.
-
@NerdyDad said in Replacing the Dead IPOD, SAN Bit the Dust:
@wrx7m We have a couple of Synologys around our enterprise and are currently using Veeam to back up VMs from their respective local hosts. But I would also have the same concern about the Synology that I am having with this current SAN: it will eventually be the bottom part of the pyramid.
If it is only one of your backup targets, it really isn't.
-
@wrx7m said in Replacing the Dead IPOD, SAN Bit the Dust:
@scottalanmiller said in Replacing the Dead IPOD, SAN Bit the Dust:
@wrx7m said in Replacing the Dead IPOD, SAN Bit the Dust:
I have 2 Dell R720XD each with 10x1TB NLSAS in OBR10 and 1 older Dell R710 with 4 10K SAS drives in OBR10 running ESXi 6 (all installed on redundant SD card or USB flash).
Doing anything like Starwind between them?
I am not at this time. I was doing the now-defunct, overly-complicated, under-supported vSphere Storage Appliance v5. It was great until it had issues with some of the services that were required to run and keep track of the heartbeat. Early this year, I basically tore the whole thing out and rebuilt my VI in stages. It is much simpler and more elegant, and since I added 10GbE, live vMotion only takes a couple of minutes for each VM (minus my big ass file server).
It's a good way to go. Not sure how much of a pain it will be to retrofit, though.
-
@NerdyDad said in Replacing the Dead IPOD, SAN Bit the Dust:
@scottalanmiller That is a good point. I'm almost to the point of questioning everything. Such as, what is the meaning of life? But that's another discussion for another day.
42
-
@NerdyDad said in Replacing the Dead IPOD, SAN Bit the Dust:
@scottalanmiller That is a good point. I'm almost to the point of questioning everything.
From what I've seen, it looks like you are in the "most common use case" where your needs are low. And someone sold you the absolute opposite of what you needed - a high cost, high risk system. The IPOD does have a place in very niche scenarios, but by and large it is a tool for vendors to sell you way more than you need. SANs have insanely high profit margins and it is worth nearly anything for vendors to sell them to you, no matter whether you need one, can even use one, or are even hurt by one. So we see it pushed non-stop as if it is a good idea.
In a situation like yours, standalone computers are what you use when you don't need high availability, and hyperconvergence / RLS is what you use when you do. You are in the stock use case range. It's just the IPOD sales tactics that caused problems.
-
@scottalanmiller said in Replacing the Dead IPOD, SAN Bit the Dust:
@wrx7m said in Replacing the Dead IPOD, SAN Bit the Dust:
@scottalanmiller said in Replacing the Dead IPOD, SAN Bit the Dust:
@wrx7m said in Replacing the Dead IPOD, SAN Bit the Dust:
I have 2 Dell R720XD each with 10x1TB NLSAS in OBR10 and 1 older Dell R710 with 4 10K SAS drives in OBR10 running ESXi 6 (all installed on redundant SD card or USB flash).
Doing anything like Starwind between them?
I am not at this time. I was doing the now-defunct, overly-complicated, under-supported vSphere Storage Appliance v5. It was great until it had issues with some of the services that were required to run and keep track of the heartbeat. Early this year, I basically tore the whole thing out and rebuilt my VI in stages. It is much simpler and more elegant, and since I added 10GbE, live vMotion only takes a couple of minutes for each VM (minus my big ass file server).
It's a good way to go. Not sure how much of a pain it will be to retrofit, though.
Yeah, I did look into it, especially since 2 nodes was (is?) free. I was, and still am, wary after the VSA debacle.
Edit - Obviously, it is a totally different solution, but I needed to get off of VSA and it was just too much to handle at that time. Once I get a bearing on our goals for the next couple of years here, I will see how it fits into what is needed.