RAID Controllers - Stupidly Expensive for What They Are
-
@dafyre said:
Dying Power Supply... Dying RAM... Dying CPU... server dying of old age (these servers were ~8 years old or older). It finally blipped its last bleep a few months ago, lol.
Wouldn't replacing the servers with something stable, instead of putting in a nice SAN and multiple dying servers, have fixed that more cheaply?
-
@Dashrender said:
Wouldn't DFS do this as well? Did SMB 3.0 solve a problem that DFS did not? I'm asking in earnest.
Yes and no. DFS is meant to do something similar, but it does it in a very different way and often does not work reliably. DFS is a bit flaky and certainly not designed to be an HA solution.
SMB3 is much more enterprise-grade, built for high reliability.
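Roughly, the difference looks something like this in PowerShell (a sketch only; all of the server, namespace, and share names below are made up for illustration):

```powershell
# DFS: a namespace that redirects clients to standalone file servers.
# Keeping the targets in sync is DFS-R's job, and that is the flaky part.
New-DfsnFolder -Path "\\corp.example.com\Files\Users" -TargetPath "\\FS1\Users"
New-DfsnFolderTarget -Path "\\corp.example.com\Files\Users" -TargetPath "\\FS2\Users"

# SMB3: a continuously available share owned by a failover cluster role,
# so clients transparently ride through the loss of a node.
New-SmbShare -Name "Users" -Path "C:\ClusterStorage\Volume1\Users" `
    -ScopeName "FILECLUSTER" -ContinuouslyAvailable $true
```

The namespace only points clients somewhere; the clustered share actually owns the data path, which is why it needs storage behind it that every node can reach.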
-
@scottalanmiller said:
@Dashrender said:
Wouldn't DFS do this as well? Did SMB 3.0 solve a problem that DFS did not? I'm asking in earnest.
Yes and no. DFS is meant to do something similar, but it does it in a very different way and often does not work reliably. DFS is a bit flaky and certainly not designed to be an HA solution.
SMB3 is much more enterprise-grade, built for high reliability.
But that requires shared storage to provide the HA, right?
So considering the cost of the SAN and the risk of a single SAN install, is he really any better off?
-
@scottalanmiller Again... budget constraints... User Files were already hosted on the SAN and a single physical server. We hijacked the Physical Server's name for the Cluster Role (and retired that server), so we didn't have to change any folder redirection GPOs, etc...
That setup actually worked fine for about a year before that server died (it only acted up for about a week before it went kaput, lol). Now, AFAIK, the guys that still run that cluster have e-wasted the physical server that died. That just leaves one Windows Server 2012 physical server (which has been rock solid) and a Windows Server 2012 VM, with the two file server roles running one on each node.
I'd have to go look, but the cluster is set up so that even if the other physical server fails, the single remaining VM can run both file server roles (Node and Disk Majority + Windows File Share Witness, I think).
The net takeaway from that setup, for us, has been increased uptime and fewer headaches when servers start dropping like flies.
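If anyone wants to poke at their own setup, the quorum config is easy to check and change (the cluster and witness share names here are hypothetical):

```powershell
# Show the current quorum model for the cluster.
Get-ClusterQuorum -Cluster "FILECLUSTER"

# Two-node clusters usually want a file share witness as the tiebreaker,
# so either node can keep the roles running if the other one dies.
Set-ClusterQuorum -Cluster "FILECLUSTER" -NodeAndFileShareMajority "\\WITNESS\ClusterFSW"
```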
-
@dafyre said:
@scottalanmiller Again... budget constraints... User Files were already hosted on the SAN and a single physical server.
Ah, I see. Maybe the budgets wouldn't have been so constrained if you hadn't been buying two devices to do the work of one.
There is always an excuse as to why these things happen. But generally, if you work backwards, there is a foundational decision that was bad or weird and led to a cascade of problems.
-
@dafyre said:
The net takeaway from that setup, for us, has been increased uptime and fewer headaches when servers start dropping like flies.
The net takeaway should have been "design sensibly from day one and reserve overspending for later improvements." Lower cost, easier management, higher reliability.
-
@scottalanmiller said:
The net takeaway should have been "design sensibly from day one and reserve overspending for later improvements." Lower cost, easier management, higher reliability.
While I agree, the design was sensible to us from day one. 8-) And as I have stated before, even knowing what I know now, I would still have done it that way, because our experience, by and large, was pretty good. I didn't lose any sleep at night when things were working correctly.
They have now reached the Lower Cost (no need to buy another SAN, thanks to Scale), Easier Management (most everything is virtualized), and Higher Reliability phase... When that last physical machine dies? All they gotta do is spin up a new VM, make sure it is on a different host than the existing one, join it to the cluster, and be happy. (Arguably, they should have already spun up a new VM and made it part of the cluster.)
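For what it's worth, once the replacement VM is running and domain-joined, adding it really is only a couple of commands (cluster, node, and role names are hypothetical again):

```powershell
# Join the new VM to the existing cluster as another node.
Add-ClusterNode -Cluster "FILECLUSTER" -Name "NEWFS-VM2"

# Rebalance: move one of the two file server roles onto the new node.
Move-ClusterGroup -Cluster "FILECLUSTER" -Name "FileServer1" -Node "NEWFS-VM2"
```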
-
@dafyre said:
While I agree, the design was sensible to us from day one. 8-) And as I have stated before, even knowing what I know now, I would still have done it that way, because our experience, by and large, was pretty good. I didn't lose any sleep at night when things were working correctly.
Even though the cost was more than double that of a more reliable design? What makes the design sensible or "good" in hindsight? Doesn't hindsight suggest that a lot of money was lost and unnecessary risk was taken on? It might have been "reliable enough," but if you could spend half the money and be "even more reliable," why avoid that?
-
@dafyre said:
They have now reached the Lower Cost (no need to buy another SAN, thanks to Scale), Easier Management (most everything is virtualized), and Higher Reliability phase... When that last physical machine dies? All they gotta do is spin up a new VM, make sure it is on a different host than the existing one, join it to the cluster, and be happy. (Arguably, they should have already spun up a new VM and made it part of the cluster.)
Scale is pretty awesome. We have a cluster on its way, actually.
-
Sweet! They are sponsoring a SpiceClub meeting in Atlanta tomorrow. I'm actually going to go since I get off early.
-
They sponsored the last SpiceCorps that I was at as well (Rochester).
-
Cool. I actually spoke with (well, somebody relayed for me on the phone) one of the Scale guys who's going to be there tomorrow night, lol.
How big of a cluster did you guys get?
-
Just three nodes for now. It is heading to the lab. I'll be writing about it as soon as we have time to have it up and running.
-
@scottalanmiller That shouldn't take long, lol. We did it with a guy on the phone (super helpful, by the way) in like 30 minutes.
Each server we got came with screwdrivers in it, lol. I still have a couple of them running around the house, lol.
-
Like the afternoon equivalent of a mimosa? Nice.