XenServer hyperconverged
-
@olivier
Great news. This product keeps getting better!
File level restore, hyperconverged architecture... Per @scottalanmiller, imagine if you could also manage Hyper-V in that same console?
-
Will be keeping an eye on this and will be keen to have a play (when I get two spare servers, lol).
-
@olivier Will there be a beta, and when is the ETA?
-
Probably a beta one day, but it's really too soon to have an ETA. I'm only at the preliminary testing stage. It seems to work, but I still have to:
- find the right settings
- run various tests in a 2-host scenario
- reproduce the recipe once it looks OK after testing
Then the automation phase will be a bit tricky, in order to "package" it as a turnkey thing.
My biggest question right now is more about speed than resiliency (which seems OK).
But sure, as soon as I have a minimal viable product, I'll open a beta.
-
@olivier
Do you plan to support more than a 2-node setup?
-
That's very likely, but one step at a time
-
@olivier Totally understand, but you can't blame a guy for getting excited!
-
Haha, sure
I hope the tests will be conclusive. No guarantees, I'm exploring.
Imagine if only I had a bigger team
I'll keep you posted!
-
@olivier said
Imagine if only I had a bigger team
Well, at least you have some willing testers here at ML.
-
I'm very interested to learn more about how the storage will be approached.
-
@scottalanmiller I have multiple angles of attack; I'm currently benchmarking and establishing pros/cons for each approach.
-
@olivier
I think you should move this "hyperconverged" feature up on the release schedule.
-
I have file level restore on top right now
-
@olivier said in XenServer hyperconverged:
I have file level restore on top right now
I realize that.
File restore won't be unhappy about occupying the #2 spot, will it? jk
-
@FATeknollogee It doesn't work like that.
Playing with/exploring a technology is one thing; releasing a minimal viable product is another. Maybe my exploration will end with an "it's better to wait for SMAPIv3 in XenServer" verdict.
I've set some goals and I'll try to reach them, but I can't guarantee anything. As for file level restore, our lead dev is working on it, not me. So I try to keep some "tech time" for this (which is a bit hard considering I'm doing a lot of non-technical work).
-
Thanks for the detailed explanation.
Just curious, but what is "SMAPIv3 in XenServer"?
-
In short, a modular storage "API" for XenServer: http://xapi-project.github.io/xapi/futures/smapiv3/smapiv3.html
It will allow plugging any filesystem/share into XenServer via "simple" plugins.
For me, that's the neatest solution on the horizon, but it's not ready yet.
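To make the "simple plugins" idea concrete, here is a rough Python sketch of what a modular volume driver could look like conceptually. The class and method names are purely illustrative (they are NOT the actual SMAPIv3 interface, which is specified at the link above): the point is just that a driver only implements a handful of volume operations, and the hypervisor doesn't need to know what filesystem or share sits underneath.

```python
# Purely illustrative sketch of the "modular storage plugin" idea.
# These names are NOT the real SMAPIv3 interface; see the smapiv3 link above.
import os
from abc import ABC, abstractmethod


class VolumePlugin(ABC):
    """Minimal set of operations a storage driver would have to implement."""

    @abstractmethod
    def sr_attach(self, uri: str) -> str:
        """Connect/mount the storage repository and return a handle (here, a path)."""

    @abstractmethod
    def volume_create(self, sr: str, name: str, size_bytes: int) -> str:
        """Create a virtual disk on the SR and return its key."""

    @abstractmethod
    def volume_destroy(self, sr: str, key: str) -> None:
        """Delete a virtual disk."""


class FileBackedPlugin(VolumePlugin):
    """Toy driver: treat any already-mounted filesystem (NFS, Gluster, ...) as an SR."""

    def sr_attach(self, uri: str) -> str:
        # A real driver would mount the share here; we assume it's already mounted.
        return uri

    def volume_create(self, sr: str, name: str, size_bytes: int) -> str:
        path = os.path.join(sr, name + ".img")
        with open(path, "wb") as f:
            f.truncate(size_bytes)  # sparse file acting as the virtual disk
        return path

    def volume_destroy(self, sr: str, key: str) -> None:
        os.remove(key)
```

With that kind of contract, supporting a new filesystem or share would mostly mean writing another small class like FileBackedPlugin, instead of patching the storage layer itself.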
-
Thx for the explanation & link.
Keep up the great work, you have a fantastic product (I know I'm not the 1st one to tell you that)
-
Hey there,
If anyone with a Windows-based VM could run a quick benchmark: use CrystalDiskMark (latest, 5.2 I think) with the default parameters (5 passes / 1 GiB).
I've done tests on Windows Server 2016 (TP5, yeah, I know I'm late) and I'd like to compare how much I lose in a hyperconverged scenario.
Also, mentioning the SR type and the physical device underneath would be great. Thanks!
edit: no worries, I'm not trying to compare apples to apples, I just want a quick order of magnitude.
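(If you only have Linux VMs handy: a rough fio equivalent of the 4KiB Q32T1 random read test would be something like `fio --name=rand4k --filename=/tmp/fio-test.bin --rw=randread --bs=4k --iodepth=32 --ioengine=libaio --direct=1 --size=1G`, assuming fio and libaio are available in the guest. The numbers won't line up exactly with CrystalDiskMark, but the order of magnitude is comparable.)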
-
@olivier Here are my results (Windows Server 2008, LVM, RAID 10, 8x 15K spinning rust):
Sequential Read (Q= 32,T= 1) : 416.270 MB/s
Sequential Write (Q= 32,T= 1) : 412.617 MB/s
Random Read 4KiB (Q= 32,T= 1) : 14.298 MB/s [ 3490.7 IOPS]
Random Write 4KiB (Q= 32,T= 1) : 17.564 MB/s [ 4288.1 IOPS]
Sequential Read (T= 1) : 321.305 MB/s
Sequential Write (T= 1) : 273.068 MB/s
Random Read 4KiB (Q= 1,T= 1) : 1.218 MB/s [ 297.4 IOPS]
Random Write 4KiB (Q= 1,T= 1) : 12.264 MB/s [ 2994.1 IOPS]
Test : 1024 MiB [C: 75.9% (56.9/75.0 GiB)] (x5) [Interval=5 sec]