Posts made by zachary715
-
RE: HP 2920 switch firmware issue
I've been running WB.16.10.0010 for a few months now without issues on a couple of 2920s.
-
RE: Is Ubiquiti phasing out the UAP-AC line?
My guess would be yes. Most of those were released in 2015-2016. We've since seen their 3rd-gen products released around 2018 with the HD products, and now the UAP6 devices are starting to trickle out. It doesn't really make sense to keep promoting the older stuff.
https://help.ui.com/hc/en-us/articles/360012192813#faq-device-gen
-
RE: Non-IT News Thread
@wirestyle22 said in Non-IT News Thread:
https://www.instagram.com/tv/CEDBcJuA2JB/?igshid=awlz0i67myqf
Watch this guy apologize for hitting a grand slam
Jomboy covered this well. So pissed Tatis actually apologized and that his teammates didn't back him.
-
RE: RAID5 SSD Performance Expectations
@Dashrender said in RAID5 SSD Performance Expectations:
@scottalanmiller said in RAID5 SSD Performance Expectations:
@zachary715 said in RAID5 SSD Performance Expectations:
Did some reading initially which led me to believe that Write Through was the better choice for performance as well as data loss issues.
Write Through is, in theory, better for reliability but isn't a real consideration in a well-maintained controller. But it kills performance by bypassing the cache.
We assume your controller has either non-volatile cache or battery backup.
PERC H730p Mini has 2GB NV cache.
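Just to illustrate why that cache matters so much, here's a toy sketch (the latencies are made-up numbers for illustration, not anything measured on the H730p or these drives) contrasting acknowledging writes out of cache versus waiting on the array:
```python
# Toy model of write-through vs write-back acknowledgement latency.
# The latency figures are assumptions for illustration only, not measured
# values from the H730p or these SSDs.

CACHE_ACK_US = 20      # assumed time to land a write in controller NV cache
DISK_ACK_US = 300      # assumed time for the array itself to commit a small write
NUM_WRITES = 100_000   # a synthetic burst of small writes

def total_wait_us(per_write_us: float, writes: int) -> float:
    """Total time spent waiting on acknowledgements for a serial stream of writes."""
    return per_write_us * writes

write_through = total_wait_us(DISK_ACK_US, NUM_WRITES)
write_back = total_wait_us(CACHE_ACK_US, NUM_WRITES)

print(f"Write Through: {write_through / 1e6:.0f} s waiting on acks")
print(f"Write Back:    {write_back / 1e6:.0f} s waiting on acks")
print(f"Speedup from caching: {write_through / write_back:.0f}x (toy numbers)")
```
The non-volatile cache (or a battery backup) is what makes it reasonable to let the controller acknowledge out of cache, since pending writes survive a power loss.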
-
RE: RAID5 SSD Performance Expectations
@scottalanmiller said in RAID5 SSD Performance Expectations:
@zachary715 said in RAID5 SSD Performance Expectations:
Did some reading initially which led me to believe that Write Through was the better choice for performance as well as data loss issues.
Write Through is, in theory, better for reliability but isn't a real consideration in a well-maintained controller. But it kills performance by bypassing the cache.
Part of the reason I created this thread was so that someone might see my current setup and let me know that. I wasn't aware of how much the cache impacted performance with SSDs. I know now.
-
RE: RAID5 SSD Performance Expectations
@Obsolesce said in RAID5 SSD Performance Expectations:
@zachary715 said in RAID5 SSD Performance Expectations:
I modified Server 2 with the SSDs' RAID cache policy from Write Through to Write Back, and No Read Ahead to Read Ahead
Why was it write-through to begin with? I've only done that in some very niche instances.
I've always configured Write Back in the past, but didn't know if using SSDs changed that. Did some reading initially which led me to believe that Write Through was the better choice for performance as well as data loss issues. Maybe I should have done a little more research prior to deciding.
-
RE: RAID5 SSD Performance Expectations
Quick update: I modified Server 2 with the SSDs' RAID cache policy from Write Through to Write Back, and No Read Ahead to Read Ahead. This appears to have made a drastic improvement, as 55GB Windows VM live vMotions to Server 2 are now being completed in about 1 1/2 minutes vs 4 minutes previously, and the network monitor is showing performance on par with what I was seeing on Server 3. Now on to getting all 3 servers in direct connect mode for vMotion and backups over 10Gb/s. Thanks.
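For a rough sanity check on those times (assuming the full 55GB actually moves over the wire, which vMotion doesn't strictly guarantee), the effective rates work out to roughly:
```python
# Back-of-the-envelope throughput from the vMotion times above.
# Assumes the full 55 GB is transferred; actual vMotion traffic can differ.

VM_SIZE_GB = 55
before_s = 4 * 60   # ~4 minutes with Write Through / No Read Ahead
after_s = 90        # ~1.5 minutes with Write Back / Read Ahead

def throughput_mb_s(size_gb: float, seconds: float) -> float:
    """Effective MB/s (decimal) for a transfer of size_gb completed in seconds."""
    return size_gb * 1000 / seconds

print(f"Before: {throughput_mb_s(VM_SIZE_GB, before_s):.0f} MB/s")  # ~230 MB/s
print(f"After:  {throughput_mb_s(VM_SIZE_GB, after_s):.0f} MB/s")   # ~610 MB/s
```
Which lines up with the ~250MBps I was seeing before and is much closer to what Server 3 was doing.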
-
RE: Non-IT News Thread
@Grey said in Non-IT News Thread:
@mlnews said in Non-IT News Thread:
A potentially deadly weather pattern is setting up across the central US
Extreme temperatures coupled with high humidity flowing from the Gulf of Mexico have set the stage for life-threatening heat in parts of the central and southern US.
Texas and Oklahoma are no strangers to excessive heat in the heart of summer and, a little over 10 days into the season, the region is bracing for stifling heat through the upcoming holiday weekend.
Temperatures are set to feel hotter in Dallas, Texas, than in Death Valley, California. Earlier in the week, parts of Texas registered the ultimate mark of oppressive warmth. Some cities including San Antonio, Lufkin and Victoria set records for hot low temperatures, with some failing to dip below 80 degrees even in the overnight hours.
goddamnit, 2020. So help me, if the elections go badly in Nov, I'm signing up for a Mars 1-way ticket.
There's no option other than going badly, so go ahead and get that pre-registration going lol.
-
RE: RAID5 SSD Performance Expectations
@scottalanmiller said in RAID5 SSD Performance Expectations:
Nothing, your random writes are super high, way higher than those disks could possibly do. 10K spinners might push 200 IOPS. So 8 of them, in theory, might do 1,600. But you got 70,000. So you know what you are measuring is the performance of the RAID card's RAM chips, not the drives at all.
Got ya. I may just have to evacuate this server for the time being and do some testing with different RAID levels and configs to see how they compare. I just would have expected a little more noticeable performance difference than what I'm seeing. I've seen it in VMs all along where I didn't think they were as zippy as they should be, but they were quick enough for what we were doing, so I didn't really dig. But now I'm curious.
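To put that in numbers, here's the rough math as I read it (200 IOPS per 10K spindle is a rule-of-thumb assumption, not a measurement of these drives):
```python
# Ballpark IOPS for the 8 x 10K RAID10 array vs what the benchmark reported.
# 200 IOPS per 10K spindle is a rule-of-thumb assumption, not a measurement.

SPINDLES = 8
IOPS_PER_10K_DRIVE = 200

theoretical_iops = SPINDLES * IOPS_PER_10K_DRIVE   # ~1,600 in the best case
measured_iops = 70_000                             # what CrystalDiskMark reported

print(f"Theoretical spindle IOPS: {theoretical_iops:,}")
print(f"Benchmark reported:       {measured_iops:,}")
print(f"Ratio: {measured_iops / theoretical_iops:.0f}x -> that's the cache, not the disks")
```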
-
RE: RAID5 SSD Performance Expectations
@scottalanmiller said in RAID5 SSD Performance Expectations:
@zachary715 said in RAID5 SSD Performance Expectations:
EDIT: I see CrystalDiskMark has the ability to measure the IOPS. Will run again to see how it looks.
Yup, that's common.
But be aware that you are measuring a lot of things... the drives, the RAID, the controller, the cache, etc.
Results are in...
Server 2 with SSD:
Server 3 with 10K disks:
Is anyone else surprised to see the Write IOPS on Server 3 as high as they are? More than double that of the SSDs.
-
RE: RAID5 SSD Performance Expectations
@scottalanmiller said in RAID5 SSD Performance Expectations:
@zachary715 said in RAID5 SSD Performance Expectations:
For my use case, I'm referring to MB/s as I'm looking at it from a backup and vMotion standpoint which is why I'm measuring it that way.
That's fine, just be aware that SSDs, while fine at MB/s, aren't all that impressive. It's IOPS, not MB/s, that they are good at.
What's a good way to measure IOPS capabilities on a server like this? I can find some online calculators and plug in my drive numbers, but how do I actually measure it on a system to see what it can push? I'd be curious to know what that number is, even just to see if it meets expectations or if it's low as well.
EDIT: I see CrystalDiskMark has the ability to measure the IOPS. Will run again to see how it looks.
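If a CrystalDiskMark build only shows MB/s, the 4KiB random results can be converted to IOPS by hand. A quick sketch, assuming CDM's MB/s means 10^6 bytes per second (which is how I read its output):
```python
# Convert a CrystalDiskMark 4KiB random result from MB/s to IOPS.
# Assumes CDM reports decimal MB/s (10^6 bytes per second).

BLOCK_SIZE_BYTES = 4096

def mbps_to_iops(mb_per_s: float, block_bytes: int = BLOCK_SIZE_BYTES) -> float:
    return mb_per_s * 1_000_000 / block_bytes

# Example with a made-up 4K random figure of 300 MB/s:
print(f"{mbps_to_iops(300):,.0f} IOPS")  # ~73,000 IOPS
```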
-
RE: RAID5 SSD Performance Expectations
@scottalanmiller said in RAID5 SSD Performance Expectations:
@zachary715 said in RAID5 SSD Performance Expectations:
I feel like server 2 performance of writing sequentially at around 250MBps is unexpectedly slow for an SSD config
You are assuming that that is the write speed, but it might be the read speed. It's also above 2Gb/s, so you are likely hitting network barriers.
I would assume read speeds should be even higher than the writes. If I'm doing vMotion between Servers 1 & 2 which are identical config, I'm getting same transfer rate of 250MB/s.
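For reference, here's the unit conversion I'm using to compare these rates against the links involved (decimal units, ignoring protocol overhead):
```python
# MB/s (decimal) to Gb/s, to compare transfer rates against link speeds.

def mb_s_to_gb_s(mb_per_s: float) -> float:
    """Convert megabytes/second to gigabits/second (decimal units)."""
    return mb_per_s * 8 / 1000

for rate_mb_s in (250, 750, 1250):
    print(f"{rate_mb_s} MB/s ~= {mb_s_to_gb_s(rate_mb_s):.0f} Gb/s")

# 250 MB/s  ~= 2 Gb/s  (would saturate a 2x1GbE bond, nowhere near 10GbE)
# 750 MB/s  ~= 6 Gb/s  (comfortably inside a single 10GbE link)
# 1250 MB/s ~= 10 Gb/s (theoretical 10GbE line rate, before overhead)
```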
-
RE: RAID5 SSD Performance Expectations
@Danp said in RAID5 SSD Performance Expectations:
Have you checked the System Profile setting in the bios? Setting this to
Performance
may make a difference.
I'll look into this. Thanks for the suggestion.
-
RE: RAID5 SSD Performance Expectations
@scottalanmiller said in RAID5 SSD Performance Expectations:
@zachary715 said in RAID5 SSD Performance Expectations:
I just assumed being MLC SSD they would still provide better performance.
Oh they do, by a LOT. Just remember that MB/s isn't the accepted measure of performance. IOPS are. Both matter, obviously. But SSDs shine at IOPS, which is what is of primary importance to 99% of workloads. MB/s is used by few workloads, primarily backups and video cameras.
So when it comes to MB/s, the tape drive remains king. For random access it is SSD. Spinners are the middle ground.
For my use case, I'm referring to MB/s as I'm looking at it from a backup and vMotion standpoint which is why I'm measuring it that way.
-
RE: RAID5 SSD Performance Expectations
@scottalanmiller said in RAID5 SSD Performance Expectations:
Is it possible that it was traveling over a bonded 2x GigE connection and hitting the network ceiling?
No, in my initial post I mentioned that this was over a 10Gb direct connect cable between the hosts. I only had vMotion enabled on these NICs and they were on their own subnet. I verified all traffic was flowing over this NIC via esxtop.
-
RE: RAID5 SSD Performance Expectations
@scottalanmiller said in RAID5 SSD Performance Expectations:
Which performance do you feel is unexpected?
I feel like server 2 performance of writing sequentially at around 250MBps is unexpectedly slow for an SSD config. I would have expected it to be higher, especially compared to the 10k disks. I understand it's RAID10 vs RAID5 and 8 disks vs 4, but I guess I just assumed being MLC SSD they would still provide better performance.
-
RE: RAID5 SSD Performance Expectations
@scottalanmiller said in RAID5 SSD Performance Expectations:
@zachary715 said in RAID5 SSD Performance Expectations:
When transferring from server 2 to server 3, it's transferring at around 750MBps, which is much more in line with my expectations.
Do you mean Mb/s or MB/s? Those are wildly different.
MBps. I tried to be careful about which I stated.
-
RAID5 SSD Performance Expectations
I have 3 servers, all Dell R640. Servers 1 and 2 were purchased together and are identical in spec. Both have 4 x 800GB SSD in RAID5 on SATA 6Gbps. Server 3 was purchased earlier this year when Xbyte was running a special, and it contains 8 x 2.4TB 10K disks in RAID10 on SAS 12Gbps. The RAID on Servers 1 and 2 is configured for Write Through, No Read Ahead, and I believe Caching Enabled. Server 3 is Write Back, Adaptive Read Ahead, and Caching Enabled. All 3 are running the H730p Mini RAID controller.
All servers have quad-port network cards with 2x1Gb and 2x10Gb ethernet available. Servers 1 and 2 are Broadcom and Server 3 is Intel. I don't currently have 10Gb switches, so I'm trying to utilize direct-connection between hosts for things like vMotion and Backups.
My initial testing of direct-connect vMotion between servers 1 and 2 was puzzling. I was exceeding 1Gb speeds, but I was only hitting about 250MBps transfer, which is lower than I was expecting. So then I configured direct connect between servers 2 and 3 and tried vMotion. When transferring from server 2 to server 3, it's transferring at around 750MBps, which is much more in line with my expectations. When transferring right back to server 2, it again tops out around 250MBps.
At that point I decided to take vMotion out of it and attempted using CrystalDiskMark inside a VM to see how it tested. This may not be the most effective way to measure this, but it was the first thing I thought of.
Server 2:
Server 3:
vMotion stats from 3 to 2, 2 to 3, and 3 to 2 again. I cut off the numbers, but it's what I put above in the intro.
So Server 2 is showing a seq write speed very much in line with what I'm getting in vMotion. My question is, is this to be expected? The drives are Toshiba THNSF8800CCSE, rated at 480MBps seq write. It seems unusual to me that my performance is as low as it is, and I wanted to see if this was in fact to be expected, or what else you would recommend I look at.
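As a rough check on expectations, here's the simple math I'm working from (a simplistic n-1 scaling for RAID5 sequential writes; the controller, parity calculation, and the cache policy will all pull the real number down):
```python
# Rough sequential-write ceiling for 4 x 800GB SSD in RAID5 vs what vMotion shows.
# The (n-1) scaling is a simplification; parity and cache policy reduce it further.

DRIVES = 4
RATED_SEQ_WRITE_MB_S = 480   # Toshiba THNSF8800CCSE spec quoted above

ideal_ceiling = (DRIVES - 1) * RATED_SEQ_WRITE_MB_S   # ~1,440 MB/s best case
observed = 250                                        # MB/s seen in vMotion / CrystalDiskMark

print(f"Idealized RAID5 sequential write ceiling: {ideal_ceiling} MB/s")
print(f"Observed:                                 {observed} MB/s")
print(f"Gap: {ideal_ceiling / observed:.1f}x below the idealized ceiling")
```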
-
RE: Miscellaneous Tech News
Great, in-depth story on Marcus Hutchins, author of the MalwareTech blog and the person primarily credited with stopping the WannaCry ransomware.
https://www.wired.com/story/confessions-marcus-hutchins-hacker-who-saved-the-internet/