Migrating to DFS from UNC file shares? Complications.
-
Exactly. I know it's risky, and I've hammered management about this, hence the migration to a new setup that's more resilient to any single point of failure. For now it's a waiting game.
OK, I'll keep calling it a SAN. I know NAS and SAN used to be very different in terminology. In my use case, though, since it's block-level storage, it's indeed a SAN.
Well, everything that needed answering has been taken care of here, even though it originally started off as a DFS question.
-
@ntoxicator said:
Where are they saving to now? Into My Documents (Documents)?
But the roaming profiles (AppData folder) come in handy, as users complain if they lose their bookmarks in Google Chrome... and their Windows Sticky Notes...
Yes -- users save to My Documents (Documents), the Pictures folder, the Desktop, etc. The folder redirection works very nicely. About 90% of users connect and launch remote applications through a terminal server wrapper (we used to use RDWeb, but I've since deployed 2X Gateway (Parallels 2X)). That way our client billing software does not need to be installed and maintained on over 100 workstations; I can install it on a terminal server and publish it over the network to all users.
This is fine, and I see no reason to change it at this time, other than admin simplification.
Just an unrelated question: when those users log into your TS, do they use the same profile on the TS as they do on their desktops?
-
@ntoxicator said:
OK, I'll keep calling it a SAN. I know NAS and SAN used to be very different in terminology. In my use case, though, since it's block-level storage, it's indeed a SAN.
They remain just as different as they always were; nothing has changed. iSCSI is always SAN and cannot be used in conjunction with NAS functionality.
-
Just saying: the data being saved/written is all going to a virtualized disk presented to the guest operating system (Windows Server).
But regardless, it's data. When the time comes to migrate to a new server setup, it will just take time to move the Windows Server VM.
This Windows Server VM (the domain controller) is a single VM holding ALL 1.5 TB of the storage, which sits on one of its virtualized disks, presented to XenServer through the storage repository as an iSCSI disk (pool).
Moving from the iSCSI disk pool NAS storage and migrating the data to a physical node using DRBD would take time, although maybe not as long as I'm assuming.
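For rough scale, the 1.5 TB figure above can be turned into a back-of-envelope copy time. This is only a sketch: the sustained-throughput numbers are typical values for the given links, not measurements from this network.

```python
# Back-of-envelope copy time for the ~1.5 TB mentioned above.
# The throughput figures are assumed sustained rates, not measured ones.

def transfer_hours(size_tb: float, throughput_mb_s: float) -> float:
    """Hours to copy size_tb terabytes at a sustained MB/s rate."""
    size_mb = size_tb * 1_000_000  # decimal units: 1 TB = 1,000,000 MB
    return size_mb / throughput_mb_s / 3600

for label, rate in [("1 GbE, ~110 MB/s", 110), ("10 GbE, ~700 MB/s", 700)]:
    print(f"{label}: {transfer_hours(1.5, rate):.1f} h")
# 1.5 TB works out to roughly 3.8 h at 110 MB/s and 0.6 h at 700 MB/s
```

So even over plain gigabit, a straight sequential copy of 1.5 TB is an afternoon, not days; a DRBD initial sync would be in the same ballpark, disk speed permitting.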
-
@ntoxicator said:
Exactly. I know it's risky, and I've hammered management about this, hence the migration to a new setup that's more resilient to any single point of failure. For now it's a waiting game.
That's not the issue. It's not that individual points of failure are a big deal. The big deal is that you have a totally unnecessary dependency chain that greatly (massively) magnifies your risk while introducing unnecessary cost, effort and performance bottlenecks.
-
@ntoxicator said:
Well, everything that needed answering has been taken care of here, even though it originally started off as a DFS question.
Through this conversation, I think we've determined that the best solution for you is a NAS appliance at your remote location, so the need for DFS is gone.
Although -- @scottalanmiller, can a NAS be used in a DFS mount?
-
@ntoxicator said:
Moving from the iSCSI disk pool NAS storage and migrating the data to a physical node using DRBD would take time, although maybe not as long as I'm assuming.
iSCSI is not NAS; it is SAN. Always, no exceptions. iSCSI and NAS can never go together.
Moving to local disks will take no longer than moving to anything else. Local disks are the fastest possible option, so they are equal to or better than any other option.
-
@Dashrender said:
Although -- @scottalanmiller, can a NAS be used in a DFS mount?
Some can and some cannot.
-
@scottalanmiller said:
@Dashrender said:
Although -- @scottalanmiller, can a NAS be used in a DFS mount?
Some can and some cannot.
Ugh!
-
NAS is just a file server. Some can't even do SMB!!
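As a quick aside, whether a given NAS even answers on the SMB port is easy to check from any machine. A minimal sketch, with a placeholder hostname; note that an open port only shows something is listening on TCP 445, not that the box actually speaks SMB or supports DFS referrals:

```python
import socket

def smb_port_open(host: str, timeout: float = 2.0) -> bool:
    """True if the host accepts a TCP connection on 445 (the SMB port)."""
    try:
        with socket.create_connection((host, 445), timeout=timeout):
            return True
    except OSError:
        return False

# Example (hostname is a placeholder, not from this thread):
# smb_port_open("nas.example.local")
```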
-
@scottalanmiller said:
@ntoxicator said:
Moving from the iSCSI disk pool NAS storage and migrating the data to a physical node using DRBD would take time, although maybe not as long as I'm assuming.
iSCSI is not NAS; it is SAN. Always, no exceptions. iSCSI and NAS can never go together.
Moving to local disks will take no longer than moving to anything else. Local disks are the fastest possible option, so they are equal to or better than any other option.
I might be wrong on this, but I think @ntoxicator just flubbed when calling it NAS here -- he's just not used to calling what he has a SAN yet.
-
@scottalanmiller said:
NAS is just a file server. Some can't even do SMB!!
LOL -- here is one of those times you were supposed to read into it that my question implied my chosen NAS does include SMB. I know that's asking too much.
So again: can all SMB file servers be part of MS's DFS, or is that still "it depends"?
-
@Dashrender said:
@scottalanmiller said:
NAS is just a file server. Some can't even do SMB!!
LOL -- here is one of those times you were supposed to read into it that my question implied my chosen NAS does include SMB. I know that's asking too much.
So again: can all SMB file servers be part of MS's DFS, or is that still "it depends"?
Still "it depends," AFAIK. I don't think that just any SMB implementation does DFS, although I think that most do.
-
@ntoxicator - What are you leaning towards for your remote location after this conversation?
-
Thank you -- and yes, you're correct. I was referring to it by its actual product name/description, as the product is a Synology 1U rackmount server/NAS. But, as @scottalanmiller pointed out, since I indeed have it configured 100% as block-level storage for iSCSI, it's therefore a SAN.
We actually have 2 Synology rack-mounts. The idea was to pool them together using Synology HA/sync and its heartbeat setup. However, this was never fully implemented because of the storage size of the original Synology unit: management complained about the time it would take to migrate data to the new unit, since I would have to format the original and set it up fresh before that could happen. And we would still have a single point of failure anyway (back at the XenServer host). I did, however, migrate the smaller virtual machines to the new Synology SAN and its block-level storage (faster disks). So it's just the domain controller VM and its data still sitting on the original Synology network storage device.
Having 2 SANs configured (synced storage) would just help if one of them failed: I could quickly swap the iSCSI pool and SR pointers within the XenServer control panel and get us back online. However, yes, it is known that if the single XenServer host failed, we are shit out of luck. Management knows this.
I'm thinking of just a NAS unit, probably a 2-disk unit in RAID-1. Again, I see a Synology product here. I can create SMB2 shares on it; however, I'm sure I will have to tie it into AD using the LDAP connector for it to work properly (because it's an SMB share).
Unless I can create an SMB share, present the network path \\location\share to the domain controller (net use), and then configure a separate GPO for this subset of users at the satellite office, which would make their folder redirection and roaming profiles save to that new network location? And let Windows Server handle the file permissions on that SMB share?
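To illustrate the layout that redirection idea would produce, here is a small sketch. The \\location\share path comes from the post above; the user and folder names are made up for illustration, and real Folder Redirection is configured through Group Policy rather than scripted like this.

```python
import ntpath  # Windows path semantics, usable from any OS

SHARE = r"\\location\share"  # placeholder path from the post

def redirect_target(user: str, folder: str) -> str:
    """UNC path one user's redirected folder would point at."""
    return ntpath.join(SHARE, user, folder)

print(redirect_target("jdoe", "Documents"))  # \\location\share\jdoe\Documents
```

Permissions-wise, this matches the "let Windows handle the file permissions" idea: NTFS/share ACLs apply at the per-user folder level under the share root.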
-
@ntoxicator said:
Thank you -- and yes, you're correct. I was referring to it by its actual product name/description, as the product is a Synology 1U rackmount server/NAS. But, as @scottalanmiller pointed out, since I indeed have it configured 100% as block-level storage for iSCSI, it's therefore a SAN.
Yeah, I hate their marketing in that way. The industry term for it is "unified storage," which is merged NAS/SAN. It's the use, not the product, that determines what it is.
-
@ntoxicator said:
We actually have 2 Synology rack-mounts. The idea was to pool them together using Synology HA/sync and its heartbeat setup.
This is one of the places where the NAS/SAN distinction isn't something you can fudge. I believe that the HA is for the NAS functionality only. But in either case, it doesn't apply to storage used for VMs, so it effectively doesn't exist for you at all.
-
@ntoxicator said:
Having 2 SANs configured (synced storage) would just help if one of them failed: I could quickly swap the iSCSI pool and SR pointers within the XenServer control panel and get us back online.
Yeah... that's insanely silly. Just drop the original Synology and you'll save money, go faster, and be safer. All wins. The only rational answer is to remove the Synology completely. Anything else is, I'd have to say, insane. Why would any money be spent to do something that isn't any good?
-
@scottalanmiller said:
@ntoxicator said:
We actually have 2 Synology rack-mounts. The idea was to pool them together using Synology HA/sync and its heartbeat setup.
This is one of the places where the NAS/SAN distinction isn't something you can fudge. I believe that the HA is for the NAS functionality only. But in either case, it doesn't apply to storage used for VMs, so it effectively doesn't exist for you at all.
Gotcha -- and I completely understand that now :). The HA would apply to the storage units themselves, not to the running VMs. Given the latency and the time involved, we would still have downtime while I re-associated the storage pool/SRs and virtual disks with the VMs on the XenServer node.
I see the bigger picture on that aspect now that it has all been laid out for me.
-
@ntoxicator said:
I'm thinking of just a NAS unit, probably a 2-disk unit in RAID-1. Again, I see a Synology product here. I can create SMB2 shares on it; however, I'm sure I will have to tie it into AD using the LDAP connector for it to work properly (because it's an SMB share).
SMB has no relationship to AD. AD is authentication; SMB is a network file protocol. AD will be needed because you are dealing with AD users, I assume, but it is not needed because of SMB in any way.