CloudatCost IOWait Issues
-
@g.jacobse said:
I had to install EPEL and SAR - which I thought had been done on my C@C.
SAR isn't installed by default, but it shouldn't require EPEL - it ships in the sysstat package in the base repos. No cloud provider, we'd hope, would have EPEL enabled by default.
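On CentOS it's a one-package install. A minimal sketch, assuming CentOS 7 with systemd (on CentOS 6 it would be service/chkconfig instead):

# sar is part of sysstat, which lives in the base repos - no EPEL needed
yum install -y sysstat
# start the collector now and have it come up at boot
systemctl start sysstat
systemctl enable sysstat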
-
@coliver That's a big spike from a little file copying.
-
@scottalanmiller said:
@coliver That's a big spike from a little file copying.
Agreed... I didn't see any spike on mine at that point... although that should be minimal.
-
@scottalanmiller
No - I don't think that's the case... I thought I had it installed. It's possible I really did have it installed but have since reimaged the box, which is why it wouldn't run. I had to search back to find your statement about EPEL and SAR to get the correct syntax... then I was being impatient with the reporting.
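For anyone searching later, the sar reporting syntax I was hunting for is roughly this (assuming the default sysstat setup, which writes daily files to /var/log/sa/):

# CPU report for today, including the %iowait column
sar -u
# or sample live: every 5 seconds, 12 times
sar -u 5 12
# or read back a previous day's file, e.g. the 15th
sar -u -f /var/log/sa/sa15

On the impatience: the collector cron job only samples every ten minutes by default, so a fresh install reports nothing for a while.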
-
I saw one spike to 11.5% on mine around 11:30 AM EST... mine are generally between 2 and 3%.
I don't have any performance issues on this box at the moment... just doing a bit of piddling with it.
-
Mine was pretty bad last night. I constantly have issues anymore when trying to upload files via SFTP or SCP (usually PHP files). At this point I've pretty much given up on CloudatCost. Too bad I can't get a refund for all my Dev and BigDog instances.
-
Seems to be pretty snappy this morning. Anyone still seeing I/O issues?
-
Yes, still seeing issues. Here is our list, expanded by one new host added since the last one. Notice that the lowest IOWaits on the list are the Digital Ocean boxes (the one DO server that isn't super low is the log server, which is BUSY) and the YL server (York Lab). That new server sits on a Drobo B800i iSCSI SAN with RAID 6, so it should give you an idea of how low IOWait can be even on a very high latency, low performance, over-the-network SAN!
cc-lnx-jump      : 1.23
cc-lnx-ublab     : 3.58
cc-lnx-dev1      : 2.98
cc-lnx-rh7lab    : 1.44
cc-lnx-rh6lab    : 5.51
cc-lnx-mango-st  : 0.33
cc-lnx-dblab1    : 0.52
cc-lnx-dblab2    : 1.95
cc-lnx-dblab3    : 0.79
dny-lnx-log      : 0.19
dny-lnx-pbx1     : 0.02
yl-lnx-elastixmt : 0.01
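(If anyone wants to pull the same kind of list, the daily average %iowait can be grabbed from sar over SSH with a loop like this - the host names here are just placeholders, not the full list above:)

# loop over hosts and print each one's daily average %iowait from sar
for h in cc-lnx-jump cc-lnx-ublab dny-lnx-log; do
    printf '%s : %s\n' "$h" "$(ssh "$h" "LANG=C sar -u | awk '/^Average/ {print \$6}'")"
done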
-
Mine are showing a considerable improvement over yesterday. They were in the 20-40% range.
07:20:01 AM     CPU     %user     %nice   %system   %iowait    %steal     %idle
07:30:01 AM     all      2.14      0.00      0.21     18.19      0.00     79.46
07:40:01 AM     all      0.18      0.00      0.02      1.69      0.00     98.12
07:50:01 AM     all      1.96      0.00      0.19      5.94      0.00     91.91
08:00:01 AM     all      0.35      0.00      0.04      2.76      0.00     96.85
08:10:01 AM     all      0.03      0.00      0.01      1.04      0.00     98.92
08:20:02 AM     all      2.39      0.00      0.24     19.76      0.00     77.61
08:30:01 AM     all      0.43      0.00      0.04      1.25      0.00     98.28
08:40:01 AM     all      0.49      0.00      0.05      1.97      0.00     97.49
08:50:01 AM     all      2.56      0.00      0.24      8.81      0.00     88.39
09:00:01 AM     all      0.75      0.00      0.08      7.45      0.00     91.73
09:10:01 AM     all      0.30      0.00      0.03      5.41      0.00     94.25
09:20:01 AM     all      1.65      0.00      0.16      4.93      0.00     93.26
09:30:01 AM     all      1.37      0.00      0.14      3.82      0.00     94.68
09:40:01 AM     all      0.00      0.00      0.00      0.23      0.00     99.77
Average:        all      1.03      0.00      0.10      3.97      0.00     94.90
Please note - right now it's just sitting there. I have nothing running on it.
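Next time it spikes while the box is idle, I'll try pidstat (also part of sysstat) to see what's actually generating the I/O - something like:

# per-process disk read/write rates, sampled every 5 seconds
pidstat -d 5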
-
That's a crazy spike!!
-
I actually have something running now....
08:30:01 AM     CPU     %user     %nice   %system   %iowait    %steal     %idle
08:40:01 AM     all      0.49      0.00      0.05      1.97      0.00     97.49
01:00:01 PM     all      0.70      0.00      0.10      2.85      0.00     96.35
01:10:01 PM     all      6.85      0.00      1.50     31.20      0.00     60.45
01:20:04 PM     all     23.97      0.00      2.22     20.04      0.00     53.77
01:30:04 PM     all     14.11      0.00      3.41     82.45      0.00      0.03
Average:        all      1.61      0.00      0.24      6.44      0.00     91.72
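(To see the device side of a spike like that, iostat from the same sysstat package shows per-device latency and saturation - a rough sketch:)

# extended device stats every 5 seconds: await = average I/O latency in ms,
# %util = how saturated the device is
iostat -x 5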
-
OMG, 82%!!!
-
That's literally the highest IOWait I've ever seen.
-
Dang. I still think these IO issues caused some corruption on my CentOS box somehow.
-
@thecreativeone91 said:
Dang. I still think these IO issues caused some corruption on my CentOS box somehow.
Very possible.
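If you want to check, rpm can verify installed files against its database, and the filesystem can be checked read-only from a rescue boot. A sketch, assuming an ext4 root on /dev/vda1 (adjust the device to yours; for XFS it would be xfs_repair -n):

# verify installed package files against the RPM database
rpm -Va
# read-only filesystem check - run it unmounted, e.g. from rescue media
fsck -n /dev/vda1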