Are There Reasonable Multi-Master Over the WAN Storage Options?
-
My take on it is... you don't. It's not a reasonable thing to attempt to do. You make people change their processes.
-
@scottalanmiller said:
My take on it is... you don't. It's not a reasonable thing to attempt to do. You make people change their processes.
Where are the sites located?
-
@wirestyle22 said:
@scottalanmiller said:
My take on it is... you don't. It's not a reasonable thing to attempt to do. You make people change their processes.
Where are the sites located?
UK, France and Malaysia. I think.
-
That just seems like a bad idea. If the WAN fails, you are going to have issues with different file versions.
-
@scottalanmiller said:
@Dashrender said:
What kinds of files are we talking about there? I'm guessing not Office type documents as something like SharePoint would solve this problem.
No, bigger ones like engineering files.
With the size of those files, and only 50 Mbps, I doubt that's going to cut it for what they want to do. Back in the late 1990s and early 2000s, all the engineering software packages had integrated file management options. One of those, with a WAN between the sites, would mean waiting a bit when someone checks out a file that's not local, but you wouldn't have to sync everything over the slow WAN connection.
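Roughly, that check-out/check-in flow looks like the sketch below. The UNC paths, workspace folder, and lock-file convention are made up for illustration; this is not any particular PDM product's actual API:

```python
# Sketch of a check-out/check-in flow like those old integrated file management
# tools. Paths and the lock-file convention are hypothetical, for illustration only.
import shutil
from pathlib import Path

VAULT = Path("//uk-fs01/vault")     # hypothetical remote master vault, reached over the WAN
WORKSPACE = Path("C:/work/vault")   # hypothetical local working folder

def check_out(rel_path: str) -> Path:
    """Copy one file from the remote vault and mark it locked there."""
    src = VAULT / rel_path
    lock = Path(str(src) + ".lock")
    if lock.exists():
        raise RuntimeError(f"{rel_path} is already checked out elsewhere")
    lock.touch()                                   # crude lock held at the master vault
    dst = WORKSPACE / rel_path
    dst.parent.mkdir(parents=True, exist_ok=True)
    shutil.copy2(src, dst)                         # the only transfer that crosses the WAN
    return dst

def check_in(rel_path: str) -> None:
    """Push the edited local copy back to the vault and release the lock."""
    shutil.copy2(WORKSPACE / rel_path, VAULT / rel_path)
    Path(str(VAULT / rel_path) + ".lock").unlink(missing_ok=True)
```

The key point is that the only traffic crossing the WAN is the one file being checked out or checked in, not a full sync of the share.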
@scottalanmiller said:
@Dashrender said:
If the size is keeping things from going to SharePoint...
Centralized hosting has been tested and is not fast enough for their use case.
What makes them think self-hosting on the same connections is somehow faster? It's magic!
-
At least with self-hosting on the local connection, the local users will get LAN speeds as long as there is no locking.
-
Why not go with self-hosting and replicate off-site?
If somebody in France needs access to a file from Malaysia, then they should connect to the Malaysia file server via <insert your method here> to access the files.
Like @scottalanmiller said -- sometimes you have to change the processes.
-
@dafyre said:
Why not go with self-hosting and replicate off-site?
If somebody in France needs access to a file from Malaysia, then they should connect to the Malaysia file server via <insert your method here> to access the files.
Like @scottalanmiller said -- sometimes you have to change the processes.
That's what I am thinking. I want to look at using Exablox, one at each site. Each site would have its own share, of which it is the master, that then replicates to the other sites.
-
@dafyre said:
Why not go with self-hosting and replicate off-site?
If somebody in France needs access to a file from Malaysia, then they should connect to the Malaysia file server via <insert your method here> to access the files.
Like @scottalanmiller said -- sometimes you have to change the processes.
That is the proposed solution from @scottalanmiller. It is not the solution they want though.
-
I've only submitted that recommendation to @StefUk so far, though, so it may be that after a talk with the business they will understand and be ready to go that route.
-
@scottalanmiller said:
@dafyre said:
Why not go with self-hosting and replicate off-site?
If somebody in France needs access to a file from Malaysia, then they should connect to the Malaysia file server via <insert your method here> to access the files.
Like @scottalanmiller said -- sometimes you have to change the processes.
That's what I am thinking. I want to look at using Exablox, one at each site. Each site would have its own share, of which it is the master, that then replicates to the other sites.
Unless I missed it, we still haven't been told how large the files are. Is it better to deal with possible sync issues, or how about using RDS instead? If you really need to work on a file with reasonable performance, an RDS server in each location that users can share might be a better option.
-
RDS to a central location with VDI is being proposed as a long term solution, but not something that they are prepared to deal with in the short term.
-
How does using Exablox solve a file versioning problem? What is the solution for that specific problem, assuming you can't force a lock out to all nodes?
-
@scottalanmiller said:
RDS to a central location with VDI is being proposed as a long term solution, but not something that they are prepared to deal with in the short term.
Just before reading this, that is exactly where my mind leapt. Centralize the whole thing - RDS to a box near that storage pool. Problem solved.
-
@scottalanmiller said:
@Dashrender said:
How does using Exablox solve a file versioning problem?
Single site masters.
Please provide more details.
-
@Dashrender said:
@scottalanmiller said:
@Dashrender said:
How does using Exablox solve a file versioning problem?
Single site masters.
Please provide more details.
UK Exablox with a share of which it is the master. Replication of that data goes to France and KL.
France Exablox with a share of which it is the master. Replication of that data goes to UK and KL.
KL Exablox with a share of which it is the master. Replication of that data goes to UK and France.
Each site gets its own local data of which it is "in charge". It has the "write" share for that data. The replication is purely for reads.
Each site can work with its local data as normal. It's just a normal mapped drive for them. If a site needs data from another site, it grabs a read-only copy super fast from the replication, makes changes, and then saves those changes over the WAN to the location where the master for that share is. Cumbersome on the less common saves, but only one master for every file.
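To make that read/write split concrete, here is a tiny sketch with made-up site names, share names, and UNC paths. It is not an Exablox API, just an illustration of which path a client would use under a single-master-per-share layout:

```python
# Sketch of the single-master-per-share layout described above. Site names,
# share names, and UNC paths are hypothetical; this is not an Exablox API.
MASTERS = {
    "uk-projects":     "UK",
    "france-projects": "France",
    "kl-projects":     "KL",
}

def path_for(site: str, share: str, write: bool) -> str:
    """Return the UNC path a client at `site` should use for `share`."""
    master = MASTERS[share]
    if write or site == master:
        # every write goes to the share's single master, over the WAN if needed
        return f"//{master.lower()}-nas/{share}"
    # reads come from the fast, read-only local replica
    return f"//{site.lower()}-nas/replicas/{share}"

# a user in France reads KL data locally but saves it back to the KL master
print(path_for("France", "kl-projects", write=False))  # //france-nas/replicas/kl-projects
print(path_for("France", "kl-projects", write=True))   # //kl-nas/kl-projects
```

So a user in France opens KL data instantly from the local replica, but any save travels back over the WAN to the KL master, which keeps exactly one writable copy of every file.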
-
Awesome - thanks!
-
@scottalanmiller said:
@Dashrender said:
@scottalanmiller said:
@Dashrender said:
How does using Exablox solve a file versioning problem?
Single site masters.
Please provide more details.
UK Exablox with a share of which it is the master. Replication of that data goes to France and KL.
France Exablox with a share of which it is the master. Replication of that data goes to UK and KL.
KL Exablox with a share of which it is the master. Replication of that data goes to UK and France.
Each site gets its own local data of which it is "in charge". It has the "write" share for that data. The replication is purely for reads.
Each site can work with its local data as normal. It's just a normal mapped drive for them. If a site needs data from another site, it grabs a read-only copy super fast from the replication, makes changes, and then saves those changes over the WAN to the location where the master for that share is. Cumbersome on the less common saves, but only one master for every file.
Exactly, and if they restructure their shares correctly, those less common saves should really be uncommon.
-
Hi everyone,
thanks for chiming in ... and thanks @scottalanmiller for posting this on my behalf. RDS is currently out of the question due to the intense graphical resources that they need. We are looking at some long-term solutions, Nvidia Grid of some sort, but at the moment RDS will not cut it.
stef