Revisiting ZFS and FreeNAS in 2019
-
@xrobau I'm an intern and nothing you have listed is a special feature. I honestly thought you were a salesman or vendor on this thread.
-
@xrobau said in Changes at Sangoma:
No. Most people have no idea how much time their NAS's CPU sits around waiting for data to be returned from the disk (hint: a lot). Modern CPUs are blisteringly fast. So fast that compressing 16kb of data and writing 2kb to disk is often 10x faster than just writing 16kb of data to disk in the first place.
Actually, they do. Again, this has been considered standard knowledge since ~2000. Now, most people who use NAS devices, and definitely most people using FreeNAS, aren't aware of standard IT knowledge, so we expect them, as non-technical people, to have no idea about such things. So from the viewpoint of people using NAS devices and FreeNAS, yes, most are clueless.
This is also why hardware RAID is dead - modern CPUs do this in their spare cycles, while they're waiting for other things to happen.
Except it isn't dead, that's literally crazy. And, more importantly, it's not at the RAID level where this happens. So this makes absolutely no sense. You are confused about how this works at the basic level. You need to step back and look at the big picture. Once again, this is how the ZFS marketing confuses non-IT people.
ZFS has three layers... RAID, FS, and LVM. In other systems, they are separate. Because of this, you are thinking that it is the RAID, rather than the FS or LVM, that is doing the compression. But it is not. So pointing out that hardware RAID doesn't do this only shows that you are missing the basics; it does not make hardware RAID look bad. Hardware RAID supports compression exactly the same way. In fact, you can trivially prove this, because ZFS on top of hardware RAID retains its compression capabilities exactly the same.
You can compress using any layer that you want. Commonly it is in the filesystem layer. And yes, when using hardware RAID with NTFS or ZFS or any number of things, the CPU does the compression in the background using spare cycles.
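The tradeoff described above, spending cheap CPU cycles at the filesystem layer to shrink what actually hits the disk, can be sketched in a few lines of Python. This is illustrative only: the sample data, zlib, and the fast compression level stand in for ZFS's actual lz4 path, which is not being modeled here.

```python
import zlib

# 16 KB of highly compressible data, standing in for a typical
# filesystem block of text or log content.
block = (b"timestamp=2019-01-01 level=INFO msg=ok\n" * 500)[:16384]

# A fast compression level, playing the role lz4 plays in ZFS.
compressed = zlib.compress(block, level=1)

print(len(block), len(compressed))
# The write to disk is now len(compressed) bytes instead of 16384;
# the CPU cost of compress() is tiny next to a disk round trip,
# which is why the CPU can do this "in its spare cycles".
```

Note that none of this happens at the RAID layer; the RAID layer (hardware or software) only ever sees the already-compressed bytes.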
-
@xrobau said in Changes at Sangoma:
ZFS does not use parity. That seems to be where a bunch of this confusion is coming from.
Traditional RAID uses parity. ZFS uses copies.
This is just false. All "complete" RAID systems offer stripes, mirrors (what you are confusingly calling copies), and parity. All of them.
Using terms like "copies" instead of the standard term "mirrors" makes it seem like maybe it's something unique. But it is not.
You are both confused that you think other RAID systems don't use mirrors (the very first RAID ever was a mirror), and that you think that ZFS is not parity (nearly all deployments of ZFS are specifically for its parity.) Everything you are basing your position on is as wrong as could possibly be.
-
@xrobau said in Changes at Sangoma:
ZRAIDx means there are X ADDITIONAL copies of the data. ZRAID2 has 3 copies of each chunk, spread across physical devices. The "X" number is how many HDDs the zraid can tolerate failing.
No, it does not. Not to be repetitive, but I'm addressing each non-factual statement as it was posted.
First, ZRAID isn't a thing. But we know that you mean RAIDZ (just as @travisdh1 means levels, not levers.) We assume you are just making a typo over and over again.
But RAIDZ is parity, always, no exceptions. The term RAIDZ is a reference to parity RAID within ZFS. If you want non-parity RAID, it can't have the word RAID in it at all, it's literally just called a mirror. Not a copy.
And the numbers mean nothing of the sort. The number in RAIDZ (with blank implying "1") refers to the order of parity, nothing to do with copies or mirrors. And there is only RAIDZ, RAIDZ2, and RAIDZ3. There is no 4+; no one has ever implemented fourth-order parity. And no one is expected to: RAIDZ3 has proven to be effectively worthless in the real world (the tech is great, it just has almost no viable use case), so making RAIDZ4 would be an exercise in futility.
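The arithmetic being corrected here is simple enough to state as a tiny helper. This is a sketch of the relationship between the RAIDZ number, parity blocks, and fault tolerance, not of ZFS internals (real RAIDZ uses variable stripe widths, so "data disks" is nominal):

```python
def raidz_layout(disks: int, parity: int) -> dict:
    """Describe a RAIDZ vdev. `parity` is the number in RAIDZ1/2/3:
    how many parity blocks per stripe, and therefore how many disk
    failures the vdev tolerates. It is NOT a count of copies."""
    if parity not in (1, 2, 3):
        raise ValueError("only RAIDZ1, RAIDZ2 and RAIDZ3 exist")
    if disks <= parity:
        raise ValueError("need more disks than parity blocks")
    return {
        "data_disks": disks - parity,
        "parity_disks": parity,
        "failures_tolerated": parity,
    }

print(raidz_layout(5, 2))
# {'data_disks': 3, 'parity_disks': 2, 'failures_tolerated': 2}
```

A five-disk RAIDZ2 thus holds roughly three disks' worth of data, exactly as RAID 6 would, which is the point: the number is parity order, not copy count.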
-
There, I am caught up. Sorry that that was so long, but it was so many of the most wildly wrong posts that I've seen in just this community. I can only imagine that we have a downstream troll (or salesman?) trying to play a trick and @xrobau got caught by them. But it is really important that no one stumble on this thread and think that any of that information is somehow in any way correct.
But good news: Google now ranks us the very top hit for ZRAID8. Literally, since this is the first place it has ever been mentioned.
-
@scottalanmiller You'll want to double check you're article on the cult of ZFS, that was a direct quote. I know it's just a typo.
-
@travisdh1 said in Changes at Sangoma:
@scottalanmiller You'll want to double check you're article on the cult of ZFS, that was a direct quote. I know it's just a typo.
Fixed
-
@xrobau said in Changes at Sangoma:
So if a ZRAID2 has 5 spindles, that means that a copy of block 1 of the zpool will be placed on spindle 1 at sector 10, 3 at sector 100 and spindle 5 at sector 500.
Sorry to go back to this, I was writing something else and just realized why you were saying this. So let's break this down because there are a few mistakes here leading you to some bizarre ideas.
- ZRAID2 doesn't exist, but we accept that you mean RAIDZ2, which is standard RAID 6.
- These are not copies, these are "pieces of the block". Each spindle gets one piece of it and you need at least three of the five spindles to put the data back together.
- There are three unique pieces of data, and two pieces of parity data. (Just explaining RAID 6 here.)
So far, everything stated here is just regular parity RAID. Nothing special or different about RAIDZ implementations of it. But there must be a reason that you mentioned this.
That's when I realized that you were talking about variable width stripes, which is the "big feature" of the RAIDZ parity implementations. This is what allows RAIDZ to close the famous "RAID 5 Write Hole". That's why you were thinking about "where" the data was put on the drives.
Yes, it does this. But since it involves neither random locations nor copies, but rather parity striping, it is in no way what you think it is. What you were writing was so disconnected from reality that I'm sure none of us had any idea what had led you to write it, so we just overlooked it.
When ZFS does mirroring (RAID 1) it does not have this "feature". It's also not needed, it's a problem of RAID 5, not RAID 1. Doing this with RAID 1 (mirroring, copying) would just waste resources and slow things down and wear the drives faster.
Also, this is the biggest feature that ZFS was promoted on (and as you can see, you repeated it without realizing it), and it is unique to using the parity RAID feature. So while you may think that ZFS means mirroring, at some point the sources you are using are assuming that it always means parity (which most people do assume, for sure). So some of what you are writing is based on the assumption that you will use parity RAID, so much so that parity information is being applied to mirroring accidentally.
Also, this "feature" is widely considered to be worthless on ZFS because it has not affected any enterprise RAID system for a very, very long time. Both because of things like batteries, NVRAM and similar, but also because RAID 5 (aka RAIDZ) is no longer a widely used option.
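To make the parity-versus-copies distinction concrete: single parity (RAID 5, RAIDZ1) is just a byte-wise XOR across the data pieces of a stripe, and any one lost piece is recoverable from the survivors. A toy sketch follows; RAIDZ2 and RAIDZ3 add a second and third parity computation on top of this idea, and real RAIDZ adds variable stripe width, neither of which is modeled here:

```python
from functools import reduce

def xor_blocks(blocks):
    """Byte-wise XOR of equal-length blocks: the core of single parity."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

data = [b"AAAA", b"BBBB", b"CCCC"]  # data pieces on three spindles
parity = xor_blocks(data)           # parity piece on a fourth spindle

# "Lose" the second spindle, then rebuild its contents from the
# surviving data pieces plus the parity piece:
rebuilt = xor_blocks([data[0], data[2], parity])
assert rebuilt == data[1]
```

Note that no piece on any spindle is a copy of another: each disk holds either a unique fragment of the data or derived parity, which is exactly why the "copies" description fails.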
-
@scottalanmiller said in Changes at Sangoma:
@xrobau dont' get us wrong, we totally know that you are a developer, not IT.
Funnily enough, you're wrong there - I'm a developer NOW, but I'm a Solaris Sysadmin originally. Then I got my Windows cert, and CCIE, and a bunch of other things before moving into DevOps. So, please - trust me when I say I know what I'm talking about.
@scottalanmiller said in Changes at Sangoma:
These are not copies, these are "pieces of the block". Each spindle gets one piece of it and you need at least three of the five spindles to put the data back together
Honestly, this is where you are 100% wrong, and you refuse to listen to me. I'm trying to explain how ZFS is different, and you can't just say 'You're wrong, and I know this because I know nothing about ZFS'.
ZFS is based on copies of the data. There is no parity. Stop using the word parity as it has NOTHING to do with ZFS. If you're using the word parity, in relation to ZFS, you are wrong.
I don't know how much more blunt I can be. ZFS does not use Parity. ZFS uses copies.
Right, now that I hopefully have made that clear, let me try again.
Parity, in RAID speak is 1+2+3+4=10 - If you lose one of the disks, you end up with this:
1+?+3+4=10
Simple maths lets you figure out that the missing value is 2. (10-4-3-1 = 2)
That's how parity works. Not rocket science.
ZFS works on copies. So, when you write 1, 2, 3 and 4 to a zpool, you get something like this:
Disk 1: x 1 x 3 x
Disk 2: 1 x 2 x 3
Disk 3: x 1 2 x 4
Disk 4: 4 x 2 x 3
Disk 5: x x x x 4
Copies. Of. The. Data.
That looks vaguely accurate, but even if I missed something, assume 3 copies of all data across 5 spindles.
Copies. Not parity. COPIES.
OK, so can we move on from this now? Old RAID == Parity. ZFS == Copies. Hopefully I've made this clear now.
Now, if you want to learn more about this, please feel free to go on any of the Solaris Administration courses I have, OR, feel free to read any of the plethora of documentation on ZFS. But telling me I'm wrong isn't going to get you anywhere, because I know what I'm talking about here. This is my field of expertise.
Now, if you can take a breath, admit that you've learned something new about ZFS, I can continue on with the OTHER differences, and some of your potential misconceptions
-
I'm not going to bother going through all the individual replies - please try to consolidate them into a single response, but almost all of them are suffering from the same misapprehension that ZFS uses parity data instead of copies. If I missed something (I skimmed through them), feel free to reply in a single post and I'll try to address any confusion.
-
@xrobau said in Changes at Sangoma:
Honestly, this is where you are 100% wrong, and you refuse to listen to me. I'm trying to explain how ZFS is different, and you can't just say 'You're wrong, and I know this because I know nothing about ZFS'.
I listened and understand what you are saying. What I'm explaining is that this is NOT how ZFS works, at all. I don't know where you are getting this, but it is simply not reality. Can you find some source, because EVERY source says you are incorrect.
-
@xrobau said in Changes at Sangoma:
ZFS is based on copies of the data. There is no parity. Stop using the word parity as it has NOTHING to do with ZFS. If you're using the word parity, in relation to ZFS, you are wrong.
This is simply made up. Period.
-
@xrobau said in Changes at Sangoma:
I'm not going to bother going through all the individual replies - please try to consolidate them into a single response,
That would be insanely obnoxious. The points are separate. Don't do "wall of text", that's a way to shut down discussion.
-
@xrobau said in Changes at Sangoma:
I don't know how much more blunt I can be. ZFS does not use Parity. ZFS uses copies.
You are missing the point that no one is misunderstanding you, we are all agreeing that what you are saying is absolutely and completely wrong. And every source from Oracle to Ubuntu to years of ZFS expertise to ZFS forums to wikipedia point this out. Even some of your own posts have info about this.
-
@xrobau said in Changes at Sangoma:
ZFS works on copies. So, when you write 1, 2, 3 and 4 to a zpool, you get something like this:
Disk 1: x 1 x 3 x
Disk 2: 1 x 2 x 3
Disk 3: x 1 2 x 4
Disk 4: 4 x 2 x 3
Disk 5: x x x x 4
Copies. Of. The. Data.
Yes, we understand what you are saying. It's just not true, no matter how many times you repeat it. The bottom line is that stating it "bluntly" doesn't change the fact that there is no source for this. We all know how ZFS works, and it does nothing like what you are saying. I don't know where you got these ideas, but they are false. They don't even make sense.
-
@scottalanmiller Sorry, dude, I'm going to give up. If you don't want to work with me here, then I'm just going to not bother.
I have no coin in this game. I'm just trying to help you out. I've been using ZFS for 15 years now, and I'm extremely confident in my knowledge. A lot of people try to simplify this and those simplifications are where you're getting confused.
Anyway, I'm out. Enjoy!
-
@xrobau said in Changes at Sangoma:
OK, so can we move on from this now? Old RAID == Parity. ZFS == Copies.
No, because you are missing EVERYTHING. That's NOT what old RAID means, nor is it what ZFS means. Period.
RAID 1, the first "old" RAID was mirroring (copies.)
You'll notice that I read everything you wrote and refuted it. You clearly read nothing anyone here or elsewhere has written about RAID or ZFS, or else you'd understand that what you are saying makes no sense and isn't a response to what we've been writing. You are acting like we aren't understanding what you are saying, when clearly we understood and showed references as to why it is incorrect.
-
OK, here's my last ditch gasp to try to get you to understand my point of view:
RAID1 doesn't use parity. RAID0 doesn't use parity. So, which RAID versions do use parity, being that this is what this entire discussion is about?
-
@xrobau said in Changes at Sangoma:
Now, if you want to learn more about this, please feel free to go on any of the Solaris Administration courses I have, OR, feel free to read any of the plethora of documentation on ZFS. But telling me I'm wrong isn't going to get you anywhere, because I know what I'm talking about here. This is my field of expertise.
Actually, this is mine. Telling you you are wrong isn't getting anywhere, but it doesn't change the fact that everything you think you know about RAID and ZFS is completely incorrect. I mean, honestly, this is the worst understanding of storage I've ever seen. And that comes from more than a decade of these kinds of discussions. I've never seen anything this dramatic: from not knowing the basic terms you are using, to completely not knowing the basic technologies.
Just try Googling some of your claims. I've already provided documentation as to why you are wrong, and I've asked you to do the same. Please do so. The person making crazy claims and refuting the ENTIRE industry, including the makers of the product, is definitely the one in the "needs to provide proof" seat.
-
@xrobau said in Changes at Sangoma:
So, which RAID versions do use parity, being that this is what this entire discussion is about?
RAID 2 (no existing implementation)
RAID 3 (deprecated)
RAID 4 (rare)
RAID 5 (aka RAIDZ)
RAID 6 (aka RAIDZ2)
RAID 7 (aka RAIDZ3)
This is RAID 101 here. Literally, the A+ requires this in the first half.
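The list above can be summarized as a small lookup of parity blocks per stripe for each standard level. The comments are a sketch of the conventional distinctions; "RAID 7" as a name for triple parity is informal, as the thread implies:

```python
# Parity blocks per stripe for the parity family of RAID levels,
# with the ZFS parity implementation of each noted where it exists.
PARITY_RAID = {
    "RAID 2": 1,  # bit-level striping with Hamming code; no shipping implementation
    "RAID 3": 1,  # byte-level striping, dedicated parity disk; deprecated
    "RAID 4": 1,  # block-level striping, dedicated parity disk; rare
    "RAID 5": 1,  # distributed single parity (RAIDZ / RAIDZ1 in ZFS)
    "RAID 6": 2,  # distributed double parity (RAIDZ2)
    "RAID 7": 3,  # triple parity (RAIDZ3); the name is informal
}

# The non-parity levels, for contrast: striping and mirroring.
NON_PARITY_RAID = {"RAID 0": 0, "RAID 1": 0}

print(PARITY_RAID["RAID 6"])
# 2
```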