Ubuntu Systemd Bad Entry
-
MD RAID 10 is configured on this box for the storage space. So I can check that as well.
-
@DustinB3403 said in Ubuntu Systemd Bad Entry:
MD RAID 10 is configured on this box for the storage space. So I can check that as well.
Should be the very first thing to check.
-
lots of deleted/unused inodes, clearing them all.
-
@DustinB3403 said in Ubuntu Systemd Bad Entry:
lots of deleted/unused inodes, clearing them all.
That is common in scenarios where you have filesystem corruption.
-
Inode ____ ref count is _, should be _. Fix<y>?
Correcting these issues.
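For what it's worth, `fsck -y` answers yes to every one of those prompts so you don't have to confirm each inode fix by hand. A safe file-backed sketch (the image path is made up; on the real box you'd point it at the unmounted filesystem instead):

```shell
# Build a tiny throwaway ext4 image so nothing real is touched (paths are examples)
dd if=/dev/zero of=/tmp/fsck-demo.img bs=1M count=8 2>/dev/null
mkfs.ext4 -q -F /tmp/fsck-demo.img
# -f forces a check even if the fs is marked clean; -y auto-answers every Fix? prompt
fsck -fy /tmp/fsck-demo.img
```

fsck exits 0 when the filesystem is clean and 1 when it corrected errors, so either result after a `-y` run means you're good to remount.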
-
@thwr said in Ubuntu Systemd Bad Entry:
@DustinB3403 said in Ubuntu Systemd Bad Entry:
MD RAID 10 is configured on this box for the storage space. So I can check that as well.
Should be the very first thing to check.
Array status:
mdadm --detail /dev/mdx
https://raid.wiki.kernel.org/index.php/Detecting,_querying_and_testing#Querying_the_array_status
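Alongside `mdadm --detail`, `/proc/mdstat` gives a one-glance health view: a healthy RAID 10 shows every member up as `[UUUU]`, while an underscore marks a failed disk. A sketch over sample output (device names and sizes are illustrative; on a live box you'd just `cat /proc/mdstat`):

```shell
# Sample /proc/mdstat content shaped like a 4-disk RAID 10; grep pulls the
# member-health flags: all U means every member is up, _ marks a failed member.
printf 'md0 : active raid10 sdd1[3] sdc1[2] sdb1[1] sda1[0]\n      976510976 blocks 512K chunks 2 near-copies [4/4] [UUUU]\n' \
  | grep -o '\[[U_]*\]'
# → [UUUU]
```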
-
It looks like xvda is having issues according to the current screen.
Might have to replace that drive...
-
At the moment the system appears to just be progressing through blk_update_request I/O errors for individual sectors on xvda.
Should I abort this operation and find a replacement drive? Is it worth it to let this continue?
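Those blk_update_request lines are worth mining before deciding: failing sectors clustered in one small range may be a single bad spot, while errors scattered across the disk suggest the drive is on its way out. A sketch over sample log lines (on the real box you'd feed it `dmesg` output instead of the printf):

```shell
# Two illustrative kernel log lines shaped like the ones on screen; awk prints
# the failing sector numbers so you can see whether they cluster or spread out.
printf 'blk_update_request: I/O error, dev xvda, sector 104872\nblk_update_request: I/O error, dev xvda, sector 104880\n' \
  | awk '/I\/O error/ {print $NF}'
# → 104872
#   104880
```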
-
@DustinB3403 said in Ubuntu Systemd Bad Entry:
At the moment the system appears to just be progressing through blk_update_request I/O errors for individual sectors on xvda.
Should I abort this operation and find a replacement drive? Is it worth it to let this continue?
Hard to say. Real data on it? Would try to get a last backup first before doing filesystem operations.
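Agreed on backup first: a raw image copy with `dd conv=noerror,sync` keeps reading past bad sectors (padding failed blocks with zeros) instead of aborting, which matters on a disk that's already throwing I/O errors. A file-backed sketch (paths are made up; on the real box `if=` would be the failing device and `of=` a file on healthy storage):

```shell
# Demo source file stands in for the failing device (real use: if=/dev/xvda)
printf 'important data' > /tmp/src.bin
# noerror: keep going past read errors; sync: pad short/failed reads with zeros
dd if=/tmp/src.bin of=/tmp/src.img conv=noerror,sync bs=512 2>/dev/null
head -c 14 /tmp/src.img   # the readable bytes survive in the image
# → important data
```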
-
Yeah, all comes down to the value of recovery, really.
-
I don't mind tearing down the system; it's only running 1 VM, which I back my VMs up to. Those deltas get pushed off nightly to another disk.
-
Time to reboot
-
And the system is in recovery mode...
-
Manual fsck is no fun.
-
At least all of the instructions are there, and this is a learning experience.
-
-
All disks in the array appear to be fine according to MD... So this is clearly something with the VM.
-
So I was able to just restore this VM to a snapshot from the other day.
Should I perform another fsck on this virtual system?
-
Not if it does not prompt you to.
-
So how can I check to see if whatever caused this issue is still present? I mean if it just happens from time to time, fine.
But wouldn't it be good to know what caused it?
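One way to answer that: xvda is a virtual disk, so SMART has to be checked from the host against the physical disks behind it (e.g. `smartctl -a /dev/sda`, hypothetical device name, needs smartmontools). Nonzero or rising raw values for Reallocated_Sector_Ct or Current_Pending_Sector usually mean the hardware itself is going. A sketch that parses sample smartctl attribute rows (values are illustrative):

```shell
# Two sample SMART attribute rows; awk prints attribute name and raw value.
# A nonzero/growing Current_Pending_Sector count is a classic pre-failure sign.
printf '  5 Reallocated_Sector_Ct   0x0033   100   100   036    Pre-fail  Always       -       0\n197 Current_Pending_Sector  0x0012   100   100   000    Old_age   Always       -       8\n' \
  | awk '$2 ~ /Reallocated|Pending/ {print $2, $NF}'
# → Reallocated_Sector_Ct 0
#   Current_Pending_Sector 8
```

If SMART is clean on every member disk and MD shows the array healthy, that points back at filesystem-level corruption inside the VM rather than failing hardware.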