What Happens When a Linux Server Runs Out of Disk Space (And How to Fix It Safely)
Running out of disk space on a Linux server rarely feels urgent until something actually breaks. At that point websites can show errors, logins may fail, and you might even lose access to the server console.
This guide explains what is really happening under the surface, how to check disk usage, and how to clean space safely without making things worse. It assumes you are comfortable with basic SSH access, but you might be new to Linux administration.
If you need a more general introduction to resource usage first, you may also find our article How to Check CPU, Memory and Disk Usage on a Linux Server useful.
Why Disk Space Matters So Much on a Linux Server

What “disk space” actually is in plain English
On a Linux server, “disk space” is the amount of data your storage device can hold. This might be:
- A virtual disk on a VPS
- An SSD or HDD on a dedicated or virtual dedicated server
- A storage volume from your cloud provider
Linux organises that raw storage into partitions and filesystems. A few useful terms:
- Disk is the physical or virtual storage device.
- Partition is a slice of that disk.
- Filesystem (for example ext4, xfs or btrfs) is how files and folders are stored and looked up.
- Mount point is where that filesystem appears in your directory tree, such as /, /home, or /var.
When we say “the disk is full”, we usually mean “one of the mounted filesystems is at or near 100% capacity”. Other filesystems might still have free space.
How disk space ties into your website or application
Every part of a typical website stack depends on disk space:
- The web server (Apache, Nginx, LiteSpeed) reads configuration files and writes access / error logs.
- PHP or other application runtimes write session data, temporary files and cache contents.
- The database (MySQL, MariaDB, PostgreSQL) writes table data, indexes, temporary sort files and transaction logs.
- The operating system writes system logs and uses temporary space during package updates.
If the filesystem holding any of these paths becomes full, you may see:
- New uploads failing
- Cache systems silently stopping
- Database errors or crashes
- Incomplete or failed system updates
Typical storage layouts on VPS and virtual dedicated servers
On many VPS and virtual dedicated servers, the default layout is simple: a single large partition mounted at / that holds almost everything. Sometimes you will also see:
- /boot as a small separate filesystem for kernel files
- /home on its own filesystem for user data
- /var separated, because it contains logs, databases and growing data
You can see your actual layout with:
lsblk
What this does: lsblk lists block devices (disks) and how they are partitioned and mounted.
Expected output: A tree view showing devices like sda or vda, partitions such as sda1 and their mount points.
Nothing is changed by this command; it is read only.
What Actually Happens When Disk Space Runs Out

Early warning signs when a Linux disk is nearly full
Disk issues typically start before you hit 100%. Warning signs include:
- Alerts from monitoring or control panels that a filesystem is over 80% or 90% used
- Package updates (apt, yum, dnf) failing with “No space left on device”
- Applications complaining they cannot write logs or cache files
- Slow performance as the filesystem becomes fragmented or temporary files fail to be created
It is a good habit to act when a critical filesystem (for example / or /var) goes above roughly 90%, rather than waiting for a crisis.
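A quick way to spot filesystems that have already crossed such a threshold is to filter df output with awk. The 90% figure below is an example; adjust it to your own comfort level.

```shell
# Print any real filesystem at or above 90% used.
# -P forces single-line POSIX output, which is safe to parse;
# tmpfs and devtmpfs are excluded because they live in memory.
df -P -x tmpfs -x devtmpfs | awk 'NR > 1 && int($5) >= 90 { print $6, $5 }'
```

If this prints nothing, no monitored filesystem has crossed the threshold yet.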
What processes start to fail as space hits 100%
Once a filesystem reaches 100%, writes begin to fail. In practice that can mean:
- Log files stop updating or rotate incorrectly
- Database engines cannot write new data, causing queries and transactions to fail
- Services cannot start because their PID files or sockets cannot be created
- SSH sessions may misbehave if they cannot create temporary files
- The system cannot write to its own journals or crash dumps
A related problem is inode exhaustion. Even if there is free space in gigabytes, you can run out of inodes (the internal data structures that track each file). This usually happens on systems with huge numbers of tiny files, such as cache directories.
How this looks for WordPress and WooCommerce sites
For WordPress and WooCommerce, a full disk usually appears as:
- HTTP 500 errors or “Error establishing a database connection”
- In the dashboard, media uploads fail or are stuck at 100%
- Plugins that rely on caching or logs behave inconsistently
- WooCommerce orders might not complete correctly if database writes fail
If PHP cannot write session files, users may be logged out repeatedly. If MySQL cannot write to its data directory or temporary directory, queries may fail or the service may stop entirely.
Serious consequences: data loss, corruption and boot failures
Most of the time, running out of disk space is inconvenient but recoverable. However, if writes fail at the wrong moment you can see more serious issues:
- Incomplete database writes that leave tables corrupted
- Interrupted system updates that leave packages half installed
- Damaged log or configuration files that were being written when space ran out
- Boot failures if critical system files or journals are left in a bad state
This is one reason good backups and careful recovery steps are important. For WordPress or WooCommerce, our guide What Every WordPress Owner Should Know About Backups and Restores is a helpful companion when working on space issues.
How to Check Disk Usage on a Linux Server
Using df to see overall disk usage and mounted filesystems
To see how full each filesystem is, use df. This is usually the first command to run when you suspect space issues.
df -h
What it does: df reports disk usage by filesystem. The -h flag makes sizes human readable (GB, MB).
Expected output: A table with columns for filesystem, size, used, available and mount point. Focus on the Use% and Mounted on columns.
How to adjust: To see only local filesystems (and exclude pseudo filesystems such as tmpfs):
df -hT -x tmpfs -x devtmpfs
This is read only and safe to run as often as you like.
Using du to find which folders are using the most space
Once you know which filesystem is full, you usually need to find the heaviest directories inside it. The du command (“disk usage”) is ideal for this.
To see the top space users directly under /:
sudo du -xh / | sort -h | tail -n 20
What it does:
- du -xh / lists disk usage for files and directories under /
- sort -h sorts the list by size
- tail -n 20 shows the 20 largest entries
Expected output: A list of directories or files with sizes, for example 8.5G /var/lib/mysql or 4.2G /var/log.
How to adjust: Running du on the entire filesystem can be slow on busy servers. A common pattern is to drill down step by step, for example:
sudo du -xh /var | sort -h | tail -n 20
sudo du -xh /var/log | sort -h | tail -n 20
These commands are read only and safe, though they can use CPU and disk I/O on very large trees.
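If GNU du is available, limiting the scan depth is another way to keep the output readable. This sketch assumes GNU coreutils; prefix with sudo when running as a regular user.

```shell
# Summarise only the immediate subdirectories of /var.
# -x stays on one filesystem; errors from unreadable paths are discarded.
du -xh --max-depth=1 /var 2>/dev/null | sort -h | tail -n 10
```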
Spotting inode exhaustion (many small files) with df -i
If df -h shows free space but applications still report “No space left on device”, you may have run out of inodes.
df -i
What it does: Shows inode usage for each filesystem instead of bytes.
Expected output: Similar table to df -h, but with columns for IUsed, IFree and IUse%. If IUse% is near 100% on a filesystem, it cannot create new files, even if bytes are available.
On a server, this often points to:
- Huge numbers of cache files
- Automatically generated thumbnails or session files
- Mail queues with many small messages
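df -i tells you which filesystem is affected, but not which directory holds the files. A simple loop can count files per subdirectory to narrow it down; /var here is just an example starting point.

```shell
# Count regular files in each top-level directory under /var and show
# the ten most file-heavy ones; -xdev keeps find on one filesystem.
for d in /var/*/; do
    printf '%s\t%s\n' "$(find "$d" -xdev -type f 2>/dev/null | wc -l)" "$d"
done | sort -rn | head -n 10
```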
Common mistakes when reading disk usage output
A few pitfalls to avoid:
- Looking only at total disk size. The disk may be 80% used overall, but a specific filesystem like /var can be 100% full.
- Ignoring filesystem type. Some entries in df, such as tmpfs, are memory based and do not reflect persistent storage.
- Confusing du and df. df shows the filesystem as the kernel sees it; du reads files and sums their sizes. They can differ if files have been deleted but are still held open by a running process.
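That last point is worth seeing in action. This small demo on a throwaway file (Linux) shows how data can remain allocated after rm while a process still holds the file open:

```shell
# Create a file, keep a file descriptor open on it, then delete it.
f=$(mktemp)
echo "still here" > "$f"
exec 3< "$f"            # hold the file open on descriptor 3
rm "$f"                 # the directory entry is gone...
cat <&3                 # ...but the data is still readable via the fd
exec 3<&-               # closing the descriptor is what frees the space
```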
Safe First Steps When Your Linux Disk Is Full
Stay calm and avoid risky commands
When a disk is full, it is tempting to start deleting large things quickly. Certain actions can make recovery harder, such as blindly removing database files or system directories.
A few principles:
- Prefer moving or compressing data over deleting it, at least initially.
- Take a snapshot or backup first where possible, especially on production servers.
- Keep a log of every command you run and file you change. This makes rollback and later review easier.
If your hosting includes snapshot features, consider taking one before major changes.
Free a little space quickly so the system can breathe
When space is completely exhausted, even basic tools can fail. Your first goal is to free a small amount of space, perhaps 100–500 MB, so the system can operate normally again.
A common place to look is large compressed logs in /var/log. To list them:
sudo ls -lh /var/log | sort -k5 -h
What it does: Lists log files in /var/log with sizes, sorted from smallest to largest by the size column.
Expected output: A list where the last lines show largest log files, such as mail.log.1 or apache2/access.log.1.gz.
If you see very large rotated logs with names ending in .1 or .2.gz and you are confident they are not needed, you can free some space by removing a few of the oldest ones.
sudo rm /var/log/apache2/access.log.3.gz
Warning: rm is permanent. Double check the full path and filename. Avoid deleting unrotated logs like access.log or syslog at this stage.
How to undo: You cannot easily undo rm without a backup. If you are unsure whether a file is safe to delete, consider moving it instead:
sudo mv /var/log/apache2/access.log.3.gz /root/
This keeps the file recoverable in /root but, since /root is usually on the same filesystem, it does not free space by itself; compress the file or move it to a different filesystem if you need the space back immediately.
Check for large log files and rotate or trim them safely
Once you have breathing space, you can tidy logs more systematically. On most systems log rotation is handled by logrotate.
To trigger a manual rotation:
sudo logrotate -f /etc/logrotate.conf
What it does: Forces logrotate to run based on your existing configuration, rotating logs that meet its rules (for example by size or age).
Expected output: Usually nothing if it succeeds. Check /var/log after running to confirm large logs have been rotated and compressed.
Risk: Log rotation is generally safe, but misconfigured logrotate rules can rotate certain custom logs too aggressively. Always inspect /etc/logrotate.d/ files before editing them.
If a specific log file is enormous and you simply need to truncate it, you can safely reduce its size to zero without deleting the file:
sudo truncate -s 0 /var/log/your-large-log.log
What it does: Sets the file size to zero bytes in place. The file remains, with the same permissions and owner, so services can continue writing to it.
This is much safer than deleting an active log for running services.
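To see why truncation is gentler than deletion, here is a harmless demonstration on a temporary file rather than a real log:

```shell
# Truncating empties the file in place: same path, same inode, same
# permissions, so a process appending to it simply continues.
log=$(mktemp)
echo "old log entries" >> "$log"
truncate -s 0 "$log"     # size drops to zero, file still exists
ls -l "$log"             # same file, now 0 bytes
rm -f "$log"             # clean up the demo file
```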
Dealing with log files still held open by running processes
Sometimes you delete a large log file, but df -h still shows no extra free space. That can happen if a running process still has the file open.
To find which deleted but open files are still in use, you can use lsof if it is installed:
sudo lsof | grep '(deleted)'
What it does: Lists open files, then filters for those whose path includes (deleted).
Expected output: Lines showing a process name, PID and the path of the deleted file. For example a web server process still holding /var/log/apache2/access.log.1.
To actually free the space used by such a file, the holding process must be restarted. For example, if Apache holds a deleted log file:
sudo systemctl restart apache2
Warning: Restarting services like web servers or databases is service affecting. It will briefly interrupt websites or applications. Perform restarts during quieter periods or with user communication where possible.
Common Places Where Disk Space Gets Wasted
Old backups left on the server
Backups stored directly on the same server often become the biggest consumer of space. These might be:
- Control panel backups (cPanel, Plesk, custom scripts)
- Database dumps left in /root or /home
- Full site archives created by plugins
While on server backups feel convenient, they are vulnerable to the same problems as the main data. Our article Backups vs Redundancy: What Actually Protects Your Website explains why off server backups are more reliable.
Application and web server logs that were never rotated
Unrotated logs grow indefinitely. Common culprits include:
- Apache or Nginx access and error logs
- PHP-FPM error logs
- Custom application logs in /var/log or a project directory
For a deeper understanding of what Linux logs exist and which are safe to trim, see Understanding System Logs on Linux and Where to Find Them.
Cache folders, sessions and temporary files
Caches are designed to be disposable, but they can grow uncontrolled:
- PHP or application caches
- Reverse proxy caches (Varnish, Nginx FastCGI cache)
- Temporary upload directories in frameworks or libraries
Before clearing a cache directory, make sure you understand what owns it. Many can be flushed harmlessly, but deleting the wrong directory in /var/tmp or /tmp can interrupt running processes.
Leftover package files, old kernels and unused software
Linux package managers keep caches so they can reinstall packages quickly. Over time, these caches and old kernels can use a significant amount of space, especially on smaller VPS instances.
On Debian/Ubuntu, to see the size of the APT cache:
sudo du -sh /var/cache/apt/archives
On CentOS/Rocky/Alma, YUM/DNF caches typically live in /var/cache/yum or /var/cache/dnf.
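A small loop can report whichever of these cache directories exists on your system; directories that are absent are simply skipped. Run with sudo if the paths are not readable by your user.

```shell
# Show package-cache sizes for both Debian- and RHEL-family layouts.
for d in /var/cache/apt/archives /var/cache/yum /var/cache/dnf; do
    if [ -d "$d" ]; then
        du -sh "$d" 2>/dev/null || true
    fi
done
```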
Uploads and media libraries for WordPress and WooCommerce
For WordPress and WooCommerce, uploads are often the largest part of the site. Under the hood, these live in wp-content/uploads, often with multiple image sizes per upload.
Cleaning images directly from the filesystem can break references in the database or in page content. If possible, manage uploads via the WordPress Media Library, and consider offloading older media to object storage or a CDN.
G7Cloud’s Web hosting performance features include the G7 Acceleration Network, which can optimise images into AVIF and WebP automatically, often reducing actual bandwidth and storage use on your origin server without extra WordPress plugins.
Step‑by‑Step: Cleaning Up Disk Space Safely

1. Identify the problem filesystem and top offenders
Start with a quick overview:
df -h
Identify which filesystem is near or at 100% (for example the line where Use% is 99% or 100%). Note the mount point, such as / or /var.
Next, investigate within that mount point. If / is full:
sudo du -xh / | sort -h | tail -n 30
From there, drill down into the largest directories you see (/var, /home, /var/lib, /var/log and so on) using the same pattern.
2. Tidy logs and temporary data without breaking services
Once you have identified bulky log directories:
- Rotate logs where appropriate: sudo logrotate -f /etc/logrotate.conf
- Truncate oversized current logs rather than deleting them: sudo truncate -s 0 /var/log/your-large-log.log
- Remove the oldest rotated logs if many generations are stored: sudo rm /var/log/your-app.log.10.gz
Check space again after these steps:
df -h
If temporary directories are large, consider cleaning them with application specific tools. For example, many web applications have CLI commands to clear caches, which is safer than manually deleting directories.
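As a hypothetical example for WordPress, WP-CLI offers cache commands that are safer than deleting directories by hand. The site path below is illustrative, and the commands only run if the wp binary is actually installed.

```shell
# Flush the object cache and expired transients via WP-CLI, if present.
if command -v wp >/dev/null 2>&1; then
    wp cache flush --path=/var/www/example.com
    wp transient delete --expired --path=/var/www/example.com
else
    echo "wp-cli not installed; use your application's own cache tooling"
fi
```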
3. Remove unneeded packages and cached package data
With logs under control, package caches and old kernels are a good next step.
Debian / Ubuntu
To remove packages that were installed automatically but are no longer needed:
sudo apt autoremove
To clean out downloaded package files that have already been installed:
sudo apt clean
RHEL / CentOS / Rocky / Alma
To clean all cached packages:
sudo yum clean all
# or
sudo dnf clean all
What these do: They only remove cached packages and unused dependencies, not currently installed software.
Risk: Relatively low, though on very old systems removing old kernels must be done thoughtfully to ensure at least one known good kernel remains. Use distribution specific documentation when managing kernels.
4. Review backups and move them off‑server
If du shows large backup directories such as /backups, /home/backup or panel specific locations, it is usually preferable to move these to another storage system rather than keep them on the same disk.
For example, to move older backups to an attached secondary volume or NFS mount at /mnt/storage:
sudo mkdir -p /mnt/storage/old-backups
sudo mv /backups/*.tar.gz /mnt/storage/old-backups/
What this does: Moves backup archives to a different filesystem, freeing space on the primary one.
Risk: Reasonably low, but always confirm the target path is correct and has enough free space. Verify moved backups on the destination before deleting any copies.
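One concrete way to verify a moved backup before removing anything is to compare checksums. This sketch demonstrates the idea on temporary files; in practice src and dst would be the backup's old and new locations.

```shell
# Compare SHA-256 checksums of the original and the moved copy; only
# delete the original when they match.
src=$(mktemp); dst=$(mktemp)
echo "backup payload" > "$src"
cp "$src" "$dst"                     # stands in for the real move/copy
if [ "$(sha256sum < "$src")" = "$(sha256sum < "$dst")" ]; then
    echo "checksums match - original is safe to remove"
fi
rm -f "$src" "$dst"
```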
For long term safety, consider using off server storage such as S3 compatible object storage or a dedicated backup service, rather than relying on the web server disk.
5. Clean application data carefully (WordPress, WooCommerce and others)
When du shows that application data (for example /var/www or /srv) is the primary space consumer, some extra care is needed.
For WordPress and WooCommerce sites:
- Use the WordPress dashboard to delete unused media, themes and plugins where possible.
- Take a full backup of files and database before removing any application data, especially uploads.
- Consider offloading large media libraries to a storage service or CDN.
If performance and traffic spikes are common issues alongside disk growth, the G7 Acceleration Network (part of our web hosting performance features) can help by:
- Optimising images on the fly into AVIF/WebP, often reducing file sizes by more than 60 percent without changing WordPress plugins
- Filtering abusive or non human traffic before it reaches PHP or the database, which reduces unnecessary log and cache growth
Commands you should treat as high‑risk or avoid
Some commands can cause damage if used without full understanding. Treat these as high risk when working on a full disk:
- rm -rf /path
Recursive deletion is permanent and has no confirm prompt. A typo can wipe critical system directories. Prefer rm on specific files or mv to a holding directory.
- find / -delete
A misapplied find delete across / is almost always catastrophic. Avoid blanket deletes; always print the list first with -print or -ls and inspect it.
- Manual deletion of /var/lib/mysql, /var/lib/postgresql, or similar data directories
This removes live databases and is rarely what you want during a space issue.
- Editing or truncating files in /etc without knowing their purpose
Configuration files are usually small and not a solution to space problems.
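The print-first discipline for find looks like this in practice, demonstrated on a throwaway directory rather than real logs:

```shell
# Always review the candidate list from -print before re-running the
# exact same find with -delete.
tmp=$(mktemp -d)
touch "$tmp/app.log.1.gz" "$tmp/app.log.2.gz" "$tmp/app.log"
find "$tmp" -name '*.gz' -print      # step 1: inspect this list
find "$tmp" -name '*.gz' -delete     # step 2: delete the same selection
ls "$tmp"                            # only the live log remains
rm -rf "$tmp"
```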
If you are in doubt about a deletion, stop and document what you have found so far. This is often the right point to involve an experienced administrator or to consider managed services.
Preventing Disk Space Problems in the First Place
Set sensible log rotation and retention policies
Log rotation is your first line of defence against disks filling silently. Some guidelines:
- Rotate high volume logs by size as well as by time (for example “rotate when > 100 MB or weekly”).
- Limit the number of old log files kept (for example keep 7 or 14 days of history, not 365, unless required for compliance).
- Compress older logs to reduce size on disk.
Most distributions place logrotate rules in /etc/logrotate.d/. Review and adjust them rather than writing ad hoc cleanup scripts.
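As an illustration of the guidelines above, a rule implementing them might look like the following. The file name, paths and numbers are placeholders, not distribution defaults.

```
# /etc/logrotate.d/myapp - illustrative policy, adjust to your needs
/var/log/myapp/*.log {
    weekly            # rotate at least weekly...
    maxsize 100M      # ...or sooner, once a log exceeds 100 MB
    rotate 14         # keep 14 old generations
    compress          # gzip rotated logs
    delaycompress     # leave the newest rotated log uncompressed
    missingok         # do not error if the log is absent
    notifempty        # skip rotation for empty logs
}
```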
Monitor disk usage and alerts before it becomes critical
Disk monitoring is comparatively simple to add and very helpful. Your options include:
- Using your hosting provider’s built in monitoring and alerting
- Installing tools such as Netdata, Prometheus exporters or simple shell scripts triggered by cron
- Control panels that send email when partition usage passes a threshold
A useful pattern is to set warning alerts around 80% usage and critical alerts around 90–95%, for each important filesystem.
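If you have no monitoring stack yet, even a small cron script gives basic coverage. This sketch relies on cron's default behaviour of mailing any output to the crontab owner; the threshold and install path are examples.

```shell
#!/bin/sh
# Save as e.g. /usr/local/bin/disk-alert.sh and schedule hourly with:
#   0 * * * * /usr/local/bin/disk-alert.sh
# Any output is mailed to the crontab owner by cron itself.
THRESHOLD=90
df -P -x tmpfs -x devtmpfs | awk -v t="$THRESHOLD" \
    'NR > 1 && int($5) >= t { printf "Disk warning: %s at %s\n", $6, $5 }'
```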
Use off‑server backups and object storage where appropriate
Storing all backups on the same disk as your live data increases space pressure and does not protect against disk failure. Instead:
- Send regular backups to another server or dedicated backup service.
- Use object storage (S3, compatible services) for large media archives, especially for WordPress or WooCommerce.
- Keep at least one backup copy completely detached from your main environment.
Right‑sizing your VPS or virtual dedicated server storage
Sometimes the simplest solution is to increase disk capacity. When you notice frequent cleanups are needed, it may be a sign that your server size or storage allocation no longer fits your workload.
On many VPS and virtual dedicated servers, you can add additional volumes or resize existing disks. Plan ahead for:
- Growth in media content and user data
- Database growth curves
- Space needed for safe backups and maintenance
Capacity planning is not a one time task. Revisiting it quarterly for active businesses is a good habit.
When managed hosting can safely take this off your plate
Disk management, monitoring and capacity planning are part of ongoing server operations. If your main focus is running a business or developing applications, you may decide that low level storage care is not where you want to spend time.
In that case, a managed environment, such as our managed virtual dedicated servers or Managed WordPress hosting, can take responsibility for many of the routine checks, patching and monitoring, while still giving you control over your applications.
When You Should Not DIY: Signs You Need Help
Symptoms that suggest real data risk or complex recovery
There are situations where it is wise to pause and involve a specialist rather than experimenting:
- The server will not boot after a disk full incident.
- Database tables report corruption or refuse to start even after space is freed.
- Critical system files under /etc or /usr have been deleted.
- Important data appears missing and you do not have recent off server backups.
In these cases, further write activity on the disk can complicate recovery options, especially if you will be relying on filesystem level tools later.
How to capture information before handing over to a specialist
Before you hand over to a hosting provider or independent administrator, gather:
- The output of df -h and df -i
- The output of lsblk
- A list of the largest directories, for example sudo du -xh / | sort -h | tail -n 30
- Any recent error messages from web server logs, database logs and /var/log/syslog or /var/log/messages
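These outputs can be collected into a single file to attach to a support request; the report path is just an example.

```shell
# Gather the key read-only diagnostics into one hand-over file.
report=/tmp/disk-report.txt
{
    echo "== df -h ==";  df -h
    echo "== df -i ==";  df -i
    echo "== lsblk ==";  lsblk 2>/dev/null || true
} > "$report"
echo "Report written to $report"
```

Append the du listing separately once it has finished, since it can take a while on large trees.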
Also, provide a short summary of:
- What changed immediately before the incident (updates, migrations, new plugins)
- Commands you already ran, especially deletions
This context helps support teams and consultants focus their efforts efficiently.
Planning for the future: capacity, backups and responsibilities
Once the immediate problem is resolved, use it as an opportunity to clarify:
- Who is responsible for watching disk usage and performing cleanups
- Where backups are stored, how often they run and how restores are tested
- What level of monitoring and response you need as the environment grows
If you would prefer to delegate ongoing disk and system management, it may be worth reading When Managed Hosting Makes Sense for Growing Businesses and Understanding Hosting Responsibility: What Your Provider Does and Does Not Cover. These explain where your responsibilities begin and end on different hosting models.
Quick Reference: Commands and Checks for Disk Space Issues
Summary table of useful commands, what they do and safe use notes
| Command | Purpose | Notes on safe use |
|---|---|---|
| df -h | Show filesystem usage in human readable form | Read only and safe; first step to find full partitions |
| df -i | Show inode usage per filesystem | Read only; use when “No space left” appears but GBs are free |
| lsblk | List disks, partitions and mount points | Read only; helps understand storage layout |
| du -xh /path \| sort -h \| tail | Find largest directories/files under a path | Read only but can be slow on huge trees; drill down gradually |
| sudo logrotate -f /etc/logrotate.conf | Force log rotation based on existing rules | Generally safe; ensure logrotate configs are sensible first |
| sudo truncate -s 0 file.log | Empty a log file without removing it | Safe alternative to deleting active logs; retains permissions |
| sudo apt autoremove / sudo apt clean | Remove unused packages and cached downloads (Debian/Ubuntu) | Low risk; check the package list before confirming autoremove |
| sudo yum clean all / sudo dnf clean all | Clean package manager caches (RHEL family) | Low risk; can be run periodically |
| sudo lsof \| grep '(deleted)' | Find open but deleted files still using space | Read only; restarting the owning service actually frees the space |
| rm -rf /path | Recursively delete files and directories | High risk. Avoid unless absolutely sure; there is no undo |
If you prefer more background on many of these commands, the manual pages on your server (man df, man du) and the official GNU Coreutils documentation are good references. For example, see the GNU Coreutils manual.
Managing disk space on a Linux server is a steady maintenance task rather than a one‑off job. With a handful of safe commands, sensible log rotation and good backups, it becomes straightforward.
If you decide you would rather concentrate on your sites and applications instead of this day‑to‑day server care, you can explore our managed and unmanaged virtual dedicated servers or Managed WordPress hosting as a next step.