What Is Eating CentOS Disk Space
-
The process to track down the biggest space consumers is to start with df -h to determine which filesystem is the problem. Then start at the root of that filesystem and use du -smx * | sort -n to find the directories using the most space there. Then cd into the biggest directory and run du -smx * | sort -n again, looping like this until you find where space is being used that should not be.
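That drill-down loop can be sketched as a small script. This is my own illustration, not from the thread; the biggest helper name is made up, and the starting directory and depth are adjustable:

```shell
#!/bin/sh
# Sketch of the df/du drill-down described above.
biggest() {
    # Largest entry (in MB) directly under $1, same filesystem only (-x)
    du -smx "$1"/* 2>/dev/null | sort -n | tail -n 1
}

dir=${1:-/}
depth=0
while [ "$depth" -lt 3 ] && [ -d "$dir" ]; do
    line=$(biggest "$dir")
    [ -n "$line" ] || break
    echo "$line"
    dir=$(printf '%s\n' "$line" | cut -f2-)   # du separates size and path with a tab
    depth=$((depth + 1))
done
```

Each iteration prints the heaviest entry at the current level and descends into it, stopping when the heaviest entry is a plain file.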
-
df -h
Filesystem                        Size  Used Avail Use% Mounted on
/dev/mapper/vg_trvbackup-lv_root   50G   48G     0 100% /
tmpfs                             3.9G     0  3.9G   0% /dev/shm
/dev/sda1                         485M   53M  407M  12% /boot
/dev/mapper/vg_trvbackup-lv_home  402G  145G  236G  39% /home
/usr/tmpDSK                       1.6G   37M  1.5G   3% /tmp
/dev/sdb1                         1.5T  286G  1.2T  20% /backup/current
/dev/sdb2                         322G  211G   96G  69% /backup/archive
-
du -shx /*
Output keeps on counting...
36K /backup
6.4M /bin
43M /boot
772K /dev
29M /etc
and so on.
-
root@trvbackup [~]# du -smx * | sort -n
1 anaconda-ks.cfg
1 CHANGELOG
1 cpanel3-skel
1 installer.lock
1 install.log
1 install.log.syslog
1 install.sh
1 latest
1 LICENSE
1 php.ini.new
1 php.ini.orig
1 public_ftp
1 public_html
1 README
1 scripts
1 tmp
3 csf
-
Trying it now...
-
@ajin.c said:
du -shx /*
Output keeps on counting...
36K /backup
6.4M /bin
43M /boot
772K /dev
29M /etc
and so on.
It takes a while if the system is full. The "and so on" is the part that is important.
-
@ajin.c said:
root@trvbackup [~]# du -smx * | sort -n
1 anaconda-ks.cfg
1 CHANGELOG
1 cpanel3-skel
1 installer.lock
1 install.log
1 install.log.syslog
1 install.sh
1 latest
1 LICENSE
1 php.ini.new
1 php.ini.orig
1 public_ftp
1 public_html
1 README
1 scripts
1 tmp
3 csf
You switched into root's home directory "/root", which is not using any space, so this output won't help. You need to start at /. So do this:
cd /
du -smx * | sort -n
And provide the complete results.
-
Adding keywords for anyone searching later: CentOS RHEL Red Hat Enterprise Linux
-
Here is some sample output from a web server I happen to be logged into at the moment. I added the "2> /dev/null" and the "tail" portions to make it easier to read and use. Make sure you are root before doing this to make things easy.
[root@to-lnx-web /]# **whoami**
root
[root@to-lnx-web /]# **pwd**
/
[root@to-lnx-web /]# **du -smx * 2> /dev/null | sort -n | tail -n 5**
153 boot
403 tmp
554 lib
899 usr
6070 var
[root@to-lnx-web /]# **cd /var**
[root@to-lnx-web var]# **du -smx * 2> /dev/null | sort -n | tail -n 5**
70 tmp
73 spool
184 lib
1708 www
3957 log
[root@to-lnx-web var]# **cd log**
[root@to-lnx-web log]# **du -smx * 2> /dev/null | sort -n | tail -n 5**
316 httpd
413 maillog-20140223
627 maillog
1043 maillog-20140302
1267 maillog-20140309
-
From my output above, you can see that I started in / and found that var was the directory using the most space under it. So I moved into var and ran the command again. Under var, log was using the most space, so I moved into log and ran it once more.
The 2>/dev/null removes extraneous error output that you don't care about.
The sort -n | tail -n 5 portion shows you only the five largest files or directories from each run. You could adjust the "5" to "8" or "12" or whatever is most useful to you.
-
root@trvbackup [/]# du -smx * | sort -n
^C
root@trvbackup [/]#
Waited around half an hour, but no output... still waiting.
-
If the drive is full, this will likely take some time. Because it is sorting the output it will show nothing until it completes.
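One way to see progress while you wait (my own suggestion, not from the thread): drop the sort, so each top-level directory prints as soon as du finishes it, and eyeball the sizes afterwards:

```shell
#!/bin/sh
# Same per-directory totals, but unsorted, so lines appear as each
# directory is finished rather than all at once at the end.
scan() { du -smx "$1"/* 2>/dev/null; }
scan /
```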
-
Boss... still waiting for the output.
-
root@trvbackup [/]# du -smx * | sort -n
du: cannot access `proc/11877/task/11877/fd/4': No such file or directory
du: cannot access `proc/11877/task/11877/fdinfo/4': No such file or directory
du: cannot access `proc/11877/fd/4': No such file or directory
du: cannot access `proc/11877/fdinfo/4': No such file or directory
0 proc
0 scripts
0 sys
1 backup
1 dev
1 lost+found
1 media
1 mnt
1 quota.user
1 razor-agent.log
1 selinux
1 srv
3 tmp
7 bin
8 root
14 sbin
29 etc
30 lib64
38 opt
43 boot
234 lib
5401 usr
17480 var
148041 home
-
This is easy. It's someone storing stuff in their home directory. This is not a system problem but a user problem. Just run the same command, but in /home instead of /, and it will produce the list of your offending users.
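Once the heavy user is known, the largest individual files can also be listed directly with find. This is a sketch of mine, not from the thread; the big_files name and the +100M threshold are arbitrary:

```shell
#!/bin/sh
# List files larger than a given find(1) size threshold, biggest first.
# -xdev keeps find on one filesystem, mirroring du's -x above.
big_files() {
    find "$1" -xdev -type f -size "${2:-+100M}" -exec du -sm {} + 2>/dev/null | sort -rn
}

big_files /home +100M
```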
-
That is 148GB of user data.
-
root@trvbackup [/home]# du -smx * | sort -n
right?
-
Hi SAM,
Since the server was down, I had to install and configure a new one. I will come back as soon as the temporary issues are sorted out.
-
In the future, you might want to consider separating the /home directory out into its own filesystem so that end users cannot impact the system in this way, or using quotas to limit how much damage they can do.
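For the quota route, the rough shape is an fstab mount option plus per-user limits. The device name, username, and limits below are hypothetical, not from this server:

```
# /etc/fstab -- add usrquota to the /home mount options
/dev/mapper/vg_example-lv_home  /home  ext4  defaults,usrquota  1 2

# then remount, build the quota files, and set a limit for a user:
#   mount -o remount /home
#   quotacheck -cum /home
#   quotaon /home
#   setquota -u someuser 50000000 55000000 0 0 /home   # soft/hard 1K blocks, then inodes
```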