Big Virtual Fileservers
-
I'm bringing this up as a discussion, and I want other people's input on every aspect. Is it wrong, is it right? Why or why not? What else can be done? How can it be done better?
I've always seen it as generally a good practice to have your fileserver VM host your files on as big of a VHDX as you need. I mean, they support up to 64TB...
I've been limiting my VHDX disks for fileservers to 5TB, mostly due to the capacity of the physical tapes we use and of the physical disks that were readily available at the time, and also dividing them by general function and/or share. Now, 8 - 10 TB tapes and disks are more affordable and available. The tape side matters because it's easier to have a single ~5TB share on its own tape, or to move it physically via a physical disk. Still, 5TB is a little arbitrary, but that's what I chose.
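For context, creating one of these 5TB disks is basically a one-liner with the Hyper-V cmdlets. This is just a sketch; the path and VM name are made up for illustration, not my real layout:

```powershell
# Create a 5 TB dynamically expanding VHDX and attach it to the fileserver VM.
# Path and VM name are placeholders.
New-VHD -Path 'D:\VHDs\FileserverA_Data01.vhdx' -SizeBytes 5TB -Dynamic
Add-VMHardDiskDrive -VMName 'FileserverA' -Path 'D:\VHDs\FileserverA_Data01.vhdx'
```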
This (single) fileserver VM has multiple 5TB VHDX disks attached to it (about 7).
(Note: I have other file servers as well, not just this one. I'm just singling out this one for this topic.) Not a problem, until maintenance needs to be done, such as the VHDX compacting that needed to be done tonight. I had to bring down the whole VM to do it.
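For anyone who hasn't done it, the offline compaction looks roughly like this (a sketch; the VM and disk names are placeholders):

```powershell
# Compacting a dynamic VHDX means taking the VM down (or detaching the disk) first.
Stop-VM -Name 'FileserverA'

# Mount read-only so Full mode can scan for and reclaim zeroed blocks.
Mount-VHD -Path 'D:\VHDs\FileserverA_Data01.vhdx' -ReadOnly
Optimize-VHD -Path 'D:\VHDs\FileserverA_Data01.vhdx' -Mode Full
Dismount-VHD -Path 'D:\VHDs\FileserverA_Data01.vhdx'

Start-VM -Name 'FileserverA'
```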
I've been thinking that it would be easier to work with multiple fileserver VMs with the data divvied up among them, rather than a single fileserver VM holding all the data, for a number of reasons:
- Maintenance: you then won't need to down so much of your data accessibility
- Replication: easier to divide it up
- Backups: more flexibility
- Changes: big file changes... such as enabling or disabling a type of auditing, or adding/removing a new permission that isn't done via AD group
- etc... as I'm thinking about this, there's a lot more, but this is enough to get the point across.
So what I'm thinking of doing is taking this single fileserver VM and creating several additional fileserver VMs to serve the data. I could simply remove a disk from FileserverA and attach it to a new one. Easy peasy.
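The move itself would amount to nothing more than this (a sketch; the controller values and names are placeholders and would really come from Get-VMHardDiskDrive):

```powershell
# Look up the data disk's path and controller location on the existing fileserver.
Get-VMHardDiskDrive -VMName 'FileserverA'

# Detach it from FileserverA (controller numbers here are just examples)...
Remove-VMHardDiskDrive -VMName 'FileserverA' -ControllerType SCSI -ControllerNumber 0 -ControllerLocation 2

# ...and attach the same VHDX to the new fileserver VM.
Add-VMHardDiskDrive -VMName 'FileserverB' -Path 'D:\VHDs\FileserverA_Data02.vhdx'
```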
What do you think? At worst, it may create another 100GB of VM operating system data, perhaps a tiny bit more CPU usage. With dynamic memory, it won't require that much more. IOPS wise, I don't see there being a difference. It's all on the same hardware RAID anyways. Running on a beefy R730xd.
-
@tim_g Interested to see what people have to say about this
-
There is really nothing bad about this from an IT point of view. I mean it creates complexity, which is bad, but it also makes management easier, which is good.
As long as you are already using good namespace design for users accessing the shares, I can see nothing wrong with this at all.
-
@jaredbusch said in Big Virtual Fileservers:
long as you are already using good namespace design for users accessing the shares, I can see nothing wrong with this at all.
I've been doing everything with DFS Namespaces to keep shares non-reliant on server names and IP addresses. So whatever I do on the back end, users won't even know or notice. It's strictly to make IT's life easier and to cause less downtime.
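That's also what makes the back-end shuffle invisible: repointing a namespace folder is just a couple of DFSN cmdlets. A rough sketch, with made-up namespace and server names:

```powershell
# Add the new server's share as a folder target on the existing namespace folder...
New-DfsnFolderTarget -Path '\\corp.example.com\Files\Engineering' -TargetPath '\\FileserverB\Engineering'

# ...then take the old server's target offline once the data has been cut over.
Set-DfsnFolderTarget -Path '\\corp.example.com\Files\Engineering' -TargetPath '\\FileserverA\Engineering' -State Offline
```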
-
@tim_g said in Big Virtual Fileservers:
@jaredbusch said in Big Virtual Fileservers:
long as you are already using good namespace design for users accessing the shares, I can see nothing wrong with this at all.
I've been doing everything with DFS Namespaces to keep shares non-reliant on server names and IP addresses. So whatever I do on the back end, users won't even know or notice. It's strictly to make IT's life easier and to cause less downtime.
Yeah, I cannot see any downside to this other than the added complexity of more servers. But from the sounds of it, the improved management will more than offset that added complexity.
-
I'm a fan of smaller vhdx attached to as few guests as possible (windows world here).
So licensing is the biggest item I wrestle with.
In terms of total storage attached to a single VM, I tend to stop around 30 TB total. Backups and everything else just become more complicated without decent reason.
As for XenServer (dead ATM), it has a 2TB-minus-4GB limit per vhdx, so to get to 30 TB you end up with 15 vhdx files.
That's a lot to manage in and of itself.
-
@dustinb3403 said in Big Virtual Fileservers:
So licensing is the biggest item I wrestle with.
The host in question has DC licensing with SA, so licensing and all that isn't an issue there. But that could easily be a huge deal-breaker otherwise.
-
@jaredbusch said in Big Virtual Fileservers:
@tim_g said in Big Virtual Fileservers:
@jaredbusch said in Big Virtual Fileservers:
long as you are already using good namespace design for users accessing the shares, I can see nothing wrong with this at all.
I've been doing everything with DFS Namespaces to keep shares non-reliant on server names and IP addresses. So whatever I do on the back end, users won't even know or notice. It's strictly to make IT's life easier and to cause less downtime.
Yeah, I cannot see any downside to this other than the added complexity of more servers. But from the sounds of it, the improved management will more than offset that added complexity.
Yeah, that's my line of thinking too... I don't really see any other option then, if you and others are on board with it. The positives will definitely outweigh the added complexity in my situation. I think once I have it set up, it won't be so bad. It's more of a set-it-and-forget-it thing once replication and backups are in place; then I just test restores and such occasionally, as with everything else.
-
Have you looked into compression, deduplication, and file minification before even considering splitting servers? Especially that last one; it's overlooked most of the time, but it can give you some impressive storage gains. I did a project once where, with minification alone, we went from 1.5TB down to a little over 200GB. Of course, it all depends on what kind of files you're dealing with, but if you have typical users, I wouldn't be surprised to see 80MB PowerPoint presentations. These can easily minify down to 3-4MB.
-
I haven't seen the impressive savings that @marcinozga has, but I've seen Server 2012's dedupe feature deliver about a 30% savings (from 1.5TB down to ~1TB).
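Enabling it and checking the savings is only a few cmdlets, if anyone wants to try it (a sketch; the drive letter is a placeholder and the dedup feature has to be installed first):

```powershell
# Install the Data Deduplication role service, then enable dedup on the data volume.
Install-WindowsFeature -Name FS-Data-Deduplication
Enable-DedupVolume -Volume 'E:'

# Kick off an optimization pass instead of waiting for the scheduled job.
Start-DedupJob -Volume 'E:' -Type Optimization

# Check how much space it actually saved.
Get-DedupStatus -Volume 'E:' | Select-Object Volume, FreeSpace, SavedSpace, OptimizedFilesCount
```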
-
@dafyre I've tried 2012's dedupe feature and had it slowly cause corruption little by little, at which point I had to restore the share (internal IT department share).
-
@dustinb3403 said in Big Virtual Fileservers:
@dafyre I've tried 2012's dedupe feature and had it slowly cause corruption little by little, at which point I had to restore the share (internal IT department share).
Eww, that's no fun. Never had that issue.
-
@dafyre It wasn't a huge deal, as our share was mostly static: MSIs and flat documentation.
So reverting the share to the previous backup wasn't an issue. We just disabled dedupe on the drive and the problem was gone.
-
@marcinozga said in Big Virtual Fileservers:
Have you looked into compression, deduplication, and file minification before even considering splitting servers? Especially that last one; it's overlooked most of the time, but it can give you some impressive storage gains. I did a project once where, with minification alone, we went from 1.5TB down to a little over 200GB. Of course, it all depends on what kind of files you're dealing with, but if you have typical users, I wouldn't be surprised to see 80MB PowerPoint presentations. These can easily minify down to 3-4MB.
I am already taking advantage of space-saving technologies where it makes sense.