Strange Smart Array p410i problem
-
Welcome to ML. I think that you'll like it around here.
-
@StrongBad said:
Welcome to ML. I think that you'll like it around here.
Sir, it's nice to have people from around the world. If the people from the US are sleeping, at least someone from Europe is still around.
-
@Joyfano said:
@StrongBad said:
Welcome to ML. I think that you'll like it around here.
Sir, it's nice to have people from around the world. If the people from the US are sleeping, at least someone from Europe is still around.
@Joyfano is all excited!
-
@scottalanmiller said:
@flomer said:
Perhaps I should try that? This incident got me thinking that perhaps I have to look closer at my ESX server also, since this machine is a DL385 with the same controller...
In this era I would definitely strongly consider virtualizing even a dedicated storage device. The stability and flexibility are almost always worth it.
But won't this lead to slower operation? There must be some overhead. And this should be the only VM on that server, then?
-
@scottalanmiller said:
@flomer said:
But, I have seen earlier on other DL380s that a drive will get an amber LED indicating that it is in a "pre-failure" state, probably because of SMART errors. That didn't happen here, and I have never seen a RAID before that was so slow and troubled by a bad disk... I don't have that much experience, though... It will be interesting to hear what HP or our dealer says about this. I also asked them if they think the RAID card is faulty; I would surely have been notified about this. The ease of having a controller guiding you is sort of not present anymore; might as well buy an LSI HBA and go for ZFS.
Any chance that you are using third party drives instead of HP drives? Non-HP drives will not fully report to the SmartArray.
I have not touched the RAID since setting it up almost three years ago, and all parts are HP parts. See below for a snapshot of the error message. The RAID is RAID 6, not 60, by the way...
I still haven't heard from the reseller why the RAID controller didn't kick the drive out.
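For anyone else debugging a Smart Array that keeps a failing drive in the array, the controller and drive state can be queried from the OS. A hedged sketch; this assumes HP's hpacucli and smartmontools are installed, and the controller slot, disk index, and device path are examples that will vary per machine:

```shell
# Show logical and physical drive status on all Smart Array controllers
# (hpacucli is HP's CLI of that era; newer systems use hpssacli/ssacli)
hpacucli ctrl all show config detail

# Read SMART data through the Smart Array's cciss passthrough;
# the disk index (0) and /dev/sg0 path are illustrative only
smartctl -a -d cciss,0 /dev/sg0
```

Pre-failure counters (reallocated sectors, pending sectors) in the smartctl output are often visible well before the controller's own "too many errors" threshold trips.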
-
@flomer said:
But won't this lead to slower operation? There must be some overhead. And this should be the only VM on that server, then?
Yes, but if you could measure it, that would be shocking. There is effectively no overhead on the disk IO, and all of your bottlenecks come from spinning disks and such. You still get all of your available threads, nearly all of your memory, etc. The CPU hit is nominal and the disk IO hit is nominal. The benefits are huge and the caveats are things you probably can't even measure.
But yes, the only VM on the server most likely. Virtualization won't cause performance issues. Consolidation might. But test it, you might get a lot of consolidation out of it too. All depends on your workload, hardware changes, etc.
-
@flomer said:
I still haven't heard from the reseller why the RAID controller didn't kick the drive out.
There is a really good chance, and I really mean VERY GOOD chance, that the threshold of "too many errors" was not hit until the reboot. Rebooting a system will cause changes in drive activity that could easily trigger the difference between "too few" and "too many" errors. I have seen this a lot.
-
@scottalanmiller said:
@flomer said:
I still haven't heard from the reseller why the RAID controller didn't kick the drive out.
There is a really good chance, and I really mean VERY GOOD chance, that the threshold of "too many errors" was not hit until the reboot. Rebooting a system will cause changes in drive activity that could easily trigger the difference between "too few" and "too many" errors. I have seen this a lot.
Well, I actually rebooted the machine thinking it might help, before I knew it was a failing drive. It seemed to help a little, but I guess I must just have imagined it getting better. Or at least the situation got just as bad again after a little while.
-
@scottalanmiller said:
@flomer said:
But won't this lead to slower operation? There must be some overhead. And this should be the only VM on that server, then?
Yes, but if you could measure it, that would be shocking. There is effectively no overhead on the disk IO, and all of your bottlenecks come from spinning disks and such. You still get all of your available threads, nearly all of your memory, etc. The CPU hit is nominal and the disk IO hit is nominal. The benefits are huge and the caveats are things you probably can't even measure.
But yes, the only VM on the server most likely. Virtualization won't cause performance issues. Consolidation might. But test it, you might get a lot of consolidation out of it too. All depends on your workload, hardware changes, etc.
But isn't virtualizing FreeNAS something that is generally advised against? And how do I present the entire RAID 6 array to FreeNAS? By using several 2 TB disks and LVM? I guess ZFS is out of the question anyway, since I don't have direct access to the individual drives.
-
@flomer said:
But isn't virtualizing FreeNAS something that is generally advised against?
Based on what? The rule is "virtualize everything." I know of no reason that FreeNAS should be physical.
-
@flomer said:
And how do I present the entire RAID 6 array to FreeNAS?
Not sure what you mean. As it is, the RAID array is presented as a disk. When you virtualize, you put the FreeNAS storage onto the disk presented by the hypervisor. There is nothing special to know here; just set it up in the default way. As long as you have hardware RAID 6, that is the only storage option that you have.
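On ESX(i), "the disk presented by the hypervisor" would just be virtual disks carved out of the datastore that sits on the RAID 6 array. A rough sketch; the datastore name, folder, and size are made-up examples, and the disk format is a judgment call:

```shell
# On the ESXi host: create a virtual disk for the FreeNAS VM on the
# datastore backed by the hardware RAID 6 array (paths are examples)
vmkfstools -c 2048G -d eagerzeroedthick \
  /vmfs/volumes/datastore1/freenas/freenas-data1.vmdk
```

Eager-zeroed thick provisioning avoids first-write zeroing overhead on the storage VM; thin provisioning would also work if capacity is the priority.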
-
@flomer said:
I guess ZFS is out of the question anyway, since I don't have direct access to the individual drives.
ZFS doesn't need direct access to anything. Neither hardware RAID nor virtualization blocks ZFS in any way.
-
@scottalanmiller said:
@flomer said:
But isn't virtualizing FreeNAS something that is generally advised against?
Based on what? The rule is "virtualize everything." I know of no reason that FreeNAS should be physical.
I have read on the FreeNAS forums that they advise people not to virtualize, but it might have been specifically about ZFS.
-
@flomer said:
@scottalanmiller said:
@flomer said:
But isn't virtualizing FreeNAS something that is generally advised against?
Based on what? The rule is "virtualize everything." I know of no reason that FreeNAS should be physical.
I have read on the FreeNAS forums that they advise people not to virtualize, but it might have been specifically about ZFS.
I'm a bit confused at the moment... I was under the assumption that ZFS needed raw access to drives. For SMART?
-
@flomer said:
By using several 2 TB disks and LVM?
No LVM in FreeNAS; that's a Linux thing. You would do this with ZFS. But yes, groups of 2 TB virtual disks, until that limitation is surpassed.
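Since the hardware RAID 6 already provides the redundancy, the ZFS pool inside the FreeNAS VM would just concatenate/stripe the presented virtual disks rather than use raidz. A sketch; the pool name and the FreeBSD device names da1–da3 are assumptions for illustration:

```shell
# Inside FreeNAS/FreeBSD: build a plain striped pool across the 2 TB
# virtual disks; redundancy is handled by the hardware RAID 6 underneath
zpool create tank da1 da2 da3

# Create a dataset to share out, then verify the pool
zfs create tank/data
zpool status tank
```

Adding raidz on top of hardware RAID would waste capacity and double the parity work, which is exactly the duplication being argued against here.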
-
@flomer said:
I have read on the FreeNAS forums that they advise people not to virtualize, but it might have been specifically about ZFS.
I would avoid those forums; I've been dealing with horrible information from them for years now. You should read this article, which explains why people are saying that. It isn't because they don't want you to be virtual; it's because they are "religious zealots" about using ZFS as software RAID. They don't mention that, though; instead they give bad general advice without context. They leave out the parts that matter (that they are willing to sacrifice stability and reliability to promote the use of ZFS as a software RAID system at any cost).
-
@flomer said:
I'm a bit confused at the moment... I was under the assumption that ZFS needed raw access to drives. For SMART?
ZFS needs that access if you want it to read the SMART data or to replace your RAID controller. Neither of those is something that you want. That's the careful marketing of the FreeNAS forums: the statement sounds reasonable, but the "if" doesn't apply to you. You have enterprise hardware RAID, and you want that reading the SMART data, not FreeNAS. You don't want FreeNAS having access to the drives; that would be bad.
-
ZFS is great when you want or need software RAID; it's probably the best software RAID option on the market. But it is not common for SMBs to want software RAID, and when they do, it is normally because they are cutting costs and want to save the hundreds of dollars that quality hardware RAID costs. You already have enterprise hardware RAID, and to get SMART data to FreeNAS you don't just need to bypass it, you need to replace it, which would cost even more money. That doesn't make sense.
You have an excellent enterprise storage server; trying to use ZFS to replace parts that you already have will just undermine you. The FreeNAS forums assume that you care about using ZFS more than you care about anything else (business goals, cost, common sense, etc.), so they write with that assumption in mind, and it doesn't often come out as useful advice for people looking to actually implement FreeNAS in a business.
-
OK, now I have a bit of reading to do... Thank you for all the information! I might return with a question or two after doing some reading.
-
No problem.
If you are comfortable with Linux, I would also suggest strongly considering dropping FreeNAS, and FreeBSD altogether. FreeBSD is not ideal as a storage platform. openSUSE would be my first choice if you are comfortable working on a Linux server.