Issue

  • Why is space not being freed from disk after deleting a file in Red Hat Enterprise Linux?
  • When deleting a large file or files, the file is deleted successfully but the size of the filesystem does not reflect the change.
  • I’ve deleted some files but the amount of free space on the filesystem has not changed.
  • A java process was holding several very large log files open, some as large as ~30G. The files had been deleted, but only stopping and restarting the jvm/java process released the disk space. The lsof command shows the following output before the java process was restarted:
java 49097 awdmw 77w REG 253,6 33955068440 1283397 /opt/jboss/jboss-eap-5/jboss-as/server/all/log/server.log (deleted)
  • Running df shows the storage as 90+% utilized; however, nowhere near that much data is actually written to the filesystem.

Resolution

Graceful shutdown of the relevant process

  • First, obtain a list of deleted files that are still held open by applications:
    $ /usr/sbin/lsof | grep deleted
    ora    25575 data   33u   REG      65,65  4294983680   31014933 /oradata/DATAPRE/UNDOTBS009.dbf (deleted)
    
  • From the lsof output, we see that the process with PID 25575 has kept the file /oradata/DATAPRE/UNDOTBS009.dbf open with file descriptor (fd) number 33
  • Once the file has been identified, you can free the space it occupies by gracefully shutting down the process in question. If a graceful shutdown does not work, you may issue the kill command, referencing the PID, to stop it forcefully.
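The identification step above can be sketched as a small loop that prints the unique PID/command pairs holding deleted files open, so the corresponding services can be restarted (a minimal sketch; field positions assume lsof's default column layout, where COMMAND is field 1 and PID is field 2):

```shell
#!/bin/sh
# List unique PID/command pairs that hold deleted files open, so the
# relevant services can be restarted gracefully. Assumes lsof's default
# column layout (COMMAND is field 1, PID is field 2).
lsof 2>/dev/null | grep deleted | awk '{ print $2, $1 }' | sort -u |
while read -r pid cmd; do
    echo "PID $pid ($cmd) is holding at least one deleted file open"
done
```

Verify the column positions against your own lsof output before relying on the list.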

Truncate File Size

  • Alternatively, it is possible to de-allocate the space consumed by an in-use file by truncating the file via the proc file system. This is an advanced technique and should only be carried out when the administrator is certain it will cause no adverse effects to running processes. Applications may not be designed to deal gracefully with this situation and may produce inconsistent or undefined behavior when files in use are abruptly truncated in this manner.
    $ echo > /proc/pid/fd/fd_number
  • For example, from the lsof output above
    $ file /proc/25575/fd/33
    /proc/25575/fd/33: broken symbolic link to `/oradata/DATAPRE/UNDOTBS009.dbf (deleted)'
    $ echo > /proc/25575/fd/33
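As a guard against truncating the wrong descriptor, the check-then-truncate step can be wrapped in a small helper (a sketch; `truncate_deleted` is a hypothetical name, and `: >` is used because it truncates to zero bytes, whereas `echo >` leaves a single newline in the file):

```shell
#!/bin/sh
# Sketch: truncate /proc/<pid>/fd/<fd> only after confirming that the
# descriptor really points at a deleted file. The pid/fd pair is the
# one identified from lsof output (e.g. 25575 and 33 above).
truncate_deleted() {
    pid=$1 fd=$2
    target=$(readlink "/proc/$pid/fd/$fd" 2>/dev/null)
    case "$target" in
    *"(deleted)")
        : > "/proc/$pid/fd/$fd"   # ':' writes nothing, so this truncates to 0 bytes
        echo "truncated fd $fd of PID $pid" ;;
    *)
        echo "fd $fd of PID $pid does not point at a deleted file; refusing" >&2
        return 1 ;;
    esac
}
# Usage: truncate_deleted 25575 33
```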
    

NOTE:

  • For the same reason, the du and df commands can report different disk usage; please refer to Why does df show bigger disk usage than du?
  • If you want to know the size (in blocks) these files are occupying, you can use a command like:

Red Hat Enterprise Linux 5

# lsof | grep deleted | awk '{print $7}' | xargs sum 2> /dev/null | awk '{ SUM += $2 } END { print SUM }'

Red Hat Enterprise Linux 6

# lsof | grep deleted | awk '{print $9}' | xargs sum 2> /dev/null | awk '{ SUM += $2 } END { print SUM }'

Red Hat Enterprise Linux 7

# lsof | grep deleted | awk '{print $8}' | xargs sum 2> /dev/null | awk '{ SUM += $2 } END { print SUM }'

Root Cause

  • On Linux and Unix systems, deleting a file via rm or through a file manager application unlinks the file from the filesystem's directory structure. However, if the file is still open (in use by a running process), it remains accessible to that process and continues to occupy space on disk. Such processes may therefore need to be restarted before the file's space is freed on the filesystem.
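This behavior can be reproduced with a few commands (a self-contained sketch that writes a temporary 10 MiB file under /tmp):

```shell
#!/bin/sh
# Reproduce the root cause: an unlinked file keeps occupying disk space
# until the last open descriptor on it is closed.
f=/tmp/unlink_demo.$$
dd if=/dev/zero of="$f" bs=1M count=10 2>/dev/null   # create a 10 MiB file
exec 4< "$f"            # hold a descriptor open on it
rm -f "$f"              # unlink: the directory entry is gone...
ls -l "/proc/$$/fd/4"   # ...but the fd still shows '-> /tmp/unlink_demo.NNN (deleted)'
exec 4<&-               # closing the last descriptor finally frees the blocks
```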
