
How to Check CPU, Memory and Disk Usage on a Linux Server


Why checking CPU, memory and disk usage matters

[Diagram: a user request passing through the web server, then PHP and the database, all resting on CPU, memory and disk resources.]

What resource usage actually means for your site

Every request to your website travels through several layers:

  • The web server (for example Nginx or Apache)
  • Application code such as PHP and WordPress
  • The database, often MySQL or MariaDB
  • Underlying server resources: CPU, memory and disk

When people say “the server is slow”, it nearly always comes back to one or more of these resources being heavily used or blocked.

  • CPU controls how many tasks the server can work on at once and how quickly.
  • Memory (RAM) is where active programs and database caches live. Not enough RAM means more swapping and slowdowns.
  • Disk stores your files and database. If it fills up or becomes very busy, everything above it can feel slow or unstable.

Learning how to check these three resources gives you a clear, factual view of what is happening, instead of guessing.

Slow WordPress or WooCommerce: is it really the server?

WordPress and WooCommerce can feel slow for many reasons. Some are server related, others are within the site itself:

  • Poorly optimised plugins or themes
  • Uncached pages or dynamic WooCommerce carts
  • Database queries that are slow or repeated too often
  • High CPU, low RAM or disk problems on the server

Before spending time changing plugins or migrating hosting, it helps to answer a few basic questions:

  • Is CPU consistently very high when the site is slow?
  • Is the server running out of memory or using a lot of swap?
  • Is the disk nearly full or clearly overloaded?

These checks complement application-level diagnostics such as browser DevTools, performance plugins and tools like Query Monitor. You can see how this links together in our guide How to Diagnose Slow WordPress Performance Using Real Tools and Metrics.

Managed vs unmanaged: how much of this you want to own

If you run an unmanaged VPS or virtual dedicated server, you are responsible for:

  • Checking resource usage
  • Applying updates and security patches
  • Fixing performance issues and planning capacity

This can be quite manageable if you are comfortable learning Linux basics, can test changes away from production, and keep a record of what you do.

If you only need to run one or two business sites and prefer not to spend time on server internals, a managed option such as Managed WordPress hosting or a managed virtual dedicated server can offload much of this work. You still benefit from understanding the basics in this article, but you do not have to handle them every day.

Before you start: safely connecting to your Linux server

Logging in with SSH

To check CPU, memory and disk usage, you need terminal access to the server using SSH.

On macOS or Linux you can use the built-in Terminal. On Windows you can use Windows Terminal, PowerShell or an SSH client such as PuTTY.

The basic connection command looks like this:

ssh username@your-server-ip

This tells your computer to open a secure shell session to the server at your-server-ip using the login name username. You will be prompted for your password or SSH key passphrase.
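
If your provider uses a non-standard SSH port or you authenticate with a specific key file, you can pass those details explicitly. The port 2200 and key path below are placeholders rather than values from your hosting account:

ssh -p 2200 -i ~/.ssh/id_ed25519 username@your-server-ip

Here -p sets the port and -i points at the private key to use.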

If SSH is new to you, see our step-by-step guide How to Connect to a Linux Server Securely Using SSH.

Basic command line comfort checks

Once you are logged in, it is wise to verify where you are and what user you are using:

whoami
pwd
ls

  • whoami prints your current user name.
  • pwd (print working directory) shows the directory you are in, often /home/username.
  • ls lists files and folders there.

These commands do not make changes. They are safe and help you orient yourself. For more foundations, our article Essential Linux Commands Every VPS Owner Should Know is a useful reference.

Safety note: look first, change later

This article focuses on read-only commands that inspect the system. They should not change configuration or delete anything.

Some healthy habits:

  • When in doubt, read the manual page with man commandname.
  • Copy commands carefully rather than retyping from memory.
  • Take a snapshot or backup before making larger changes to configuration or removing files.
  • Keep a simple log of what you ran, especially anything you later tweak.

As your skills grow, this makes it much easier to reverse or adjust earlier decisions.

Checking CPU usage on a Linux server

[Figure: a terminal-style view of the top command, with the CPU summary, memory summary and process list highlighted.]

Quick CPU overview with top and htop

The quickest way to see live CPU usage is with top. It is installed on virtually all Linux distributions.

top

When you run top, you will see a constantly updating view of:

  • Overall CPU usage
  • Memory summary
  • A list of processes sorted by CPU usage

Useful keys inside top:

  • q to quit
  • P to sort by CPU usage
  • M to sort by memory usage
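
If you want a one-off snapshot rather than the interactive view, for example to paste into a support ticket, top also has a batch mode. This is a read-only sketch; the output file name is just an example:

top -b -n 1 | head -n 20 > cpu-snapshot.txt

-b runs top in batch mode, -n 1 produces a single iteration, and the result is saved to cpu-snapshot.txt in the current directory.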

If you prefer a clearer layout, many servers also offer htop, which is more visual and keyboard friendly.

htop

If htop is not installed, you can usually add it with your package manager, for example on Ubuntu:

sudo apt update
sudo apt install htop

This updates your package list and installs htop. It is a low‑risk change but still worth doing on a non‑production or staging server first if you are cautious about unexpected package updates.

Understanding load average and CPU cores

At the top of the top output you will see a line like this:

load average: 0.25, 0.40, 0.35

These three numbers show the average number of processes running or waiting to run over the last 1, 5 and 15 minutes. To interpret them, you need to know how many CPU cores your server has.

You can check the number of CPU cores with:

nproc

If nproc prints 4, that means you have 4 CPU cores. Very broadly:

  • Load average lower than or close to the number of cores is usually fine.
  • Load average consistently higher than the number of cores suggests your CPU is struggling to keep up.

Short spikes are normal during backups or traffic bursts. You are looking for sustained high load, especially when users report slow page loads.
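
To compare load and core count at a glance, you can print both together. This one-liner only reads information:

echo "CPU cores: $(nproc)"; uptime

uptime ends with the same three load averages shown by top, so you can line them up against the core count directly.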

Finding which processes are using the most CPU

When CPU is high, the next question is “which process is doing this?” You can ask ps to show the top CPU consumers at that moment:

ps aux --sort=-%cpu | head -n 10

Breaking this down:

  • ps aux lists all processes in a detailed format.
  • --sort=-%cpu sorts them by CPU usage, highest first.
  • | head -n 10 shows only the first 10 lines for readability.

The output will include columns like USER, %CPU, %MEM and the command name. On a web server you might see:

  • php-fpm worker processes using CPU during PHP requests
  • mysqld using CPU during busy database activity
  • apache2 or nginx under heavy web load

You can safely run this as often as you like. It only reads process information.
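
If you want this list to refresh automatically while you watch a slow period, you can wrap it in watch. The 5-second interval is just a suggestion:

watch -n 5 'ps aux --sort=-%cpu | head -n 10'

Press Ctrl+C to stop watching. Nothing is changed on the server.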

Common CPU-related problems on web servers

Some frequent patterns you might spot:

  • PHP at 100% CPU for long periods: often heavy plugins, uncached pages, or a crawl by bots.
  • Database (mysqld) at high CPU: large or inefficient queries, missing indexes, or a very busy WooCommerce store.
  • Many web server workers at high CPU: unusually high concurrent traffic or abuse.

In these cases you can:

  • Use caching and optimisation on the application side.
  • Review bot and crawl traffic, and consider a protective layer (a quick log check is sketched after this list).
  • Plan capacity upgrades if the load is consistently high even after tuning.
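
As a rough sketch of the bot check mentioned above, you can count which client IPs appear most often in your web server's access log. The path below assumes a default Nginx setup; Apache and custom configurations store logs elsewhere:

sudo awk '{print $1}' /var/log/nginx/access.log | sort | uniq -c | sort -rn | head -n 10

This prints the ten busiest client IPs by request count, which often makes aggressive crawlers obvious.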

If you have heavier or unpredictable traffic, a managed virtual dedicated server can help with ongoing tuning instead of having to do it all yourself.

Checking memory (RAM) usage in Linux

Using free -h to see RAM at a glance

To see an overview of memory and swap usage, use:

free -h

This prints something like:

              total        used        free      shared  buff/cache   available
Mem:           4.0G        2.2G        300M        120M        1.5G        1.3G
Swap:          2.0G        200M        1.8G

The -h flag shows values in human‑friendly units such as MB and GB. This command is read‑only and safe.

What “used”, “free”, buffer and cache really mean

Linux tries to use memory efficiently. It is normal for “used” memory to be high, even when the server is healthy. The key columns are:

  • used: memory actively taken by running processes and the kernel; on modern versions of free this does not include buff/cache.
  • buff/cache: memory used for disk caches that can be freed if needed.
  • available: a better estimate of how much memory is really free for new applications.

Focus on available rather than the raw “free” value. If “free” is low but “available” is reasonably high, the server is usually fine.

Spotting swapping and out-of-memory issues

Swap is disk space used when RAM is full. It is much slower than real memory, so heavy swap usage can make your site feel sluggish.

In the free -h output above, look at the Swap line:

  • If swap “used” stays at 0 or a small figure, you likely have enough RAM.
  • If swap “used” grows and stays high, your server is regularly running out of RAM.

You can also watch swap usage over time with top or htop. In htop, swap is shown with coloured bars at the top.
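
For a quick view of swap activity over a short window, vmstat (part of the procps tools found on most distributions) is handy:

vmstat 1 5

This prints five samples, one second apart. Sustained non-zero values in the si and so columns (swap in and swap out) mean the server is actively swapping, not just holding old data in swap.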

Another symptom of memory problems is the kernel’s out‑of‑memory (OOM) killer terminating processes. You can search system logs for this kind of event:

sudo dmesg | grep -i "out of memory"

This scans the kernel message buffer for lines mentioning “out of memory”. It does not modify anything. If you see repeated OOM messages, you likely need to tune memory usage or increase RAM.
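
On systemd-based distributions you can run an equivalent search through the kernel log kept by the journal:

sudo journalctl -k | grep -i "out of memory"

journalctl -k shows kernel messages only; like dmesg, this is purely a read operation.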

Seeing which processes are using the most memory

You can ask ps to sort by memory usage:

ps aux --sort=-%mem | head -n 10

This works like the CPU example earlier but sorted by %MEM. On a typical hosting server, top entries might be:

  • mysqld using memory for database buffers and caches
  • php-fpm pools or Apache workers
  • Background services such as Redis or Elasticsearch if you use them

If a single process uses an unexpectedly large amount of memory, that can point you to the next area to investigate, such as a specific application or service.
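
If you want a rough total for one group of processes, such as PHP workers, you can sum their resident memory. This sketch assumes the processes are literally named php-fpm; on some systems the name carries a version suffix such as php-fpm8.2, so adjust accordingly:

ps -C php-fpm -o rss= | awk '{sum += $1} END {printf "%.0f MB\n", sum/1024}'

ps -C selects processes by name and -o rss= prints their resident memory in kilobytes, which awk then totals in megabytes.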

Checking disk space and disk I/O usage

[Diagram: a server disk divided into segments for directories such as /var, /home and /var/www, showing where space tends to grow.]

Checking free disk space with df -h

If disks fill up, services can fail in surprising ways: databases may stop accepting writes, logs cannot be saved and updates can break. Checking free space is therefore a key health check.

To see disk space usage per filesystem, run:

df -h

This will show output similar to:

Filesystem      Size  Used Avail Use% Mounted on
/dev/sda1        80G   50G   27G  66% /
tmpfs           2.0G     0  2.0G   0% /dev/shm

  • Size is the total size of the filesystem.
  • Used and Avail show used and available space.
  • Use% is the percentage used. Over about 90% deserves attention, and over 95% is urgent.
  • Mounted on shows where the filesystem is attached, such as /, /home, or /var.

Again, this command is read‑only.
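
A filesystem can also "fill up" by running out of inodes (the slots used to track files) even when free space remains, which commonly happens when millions of tiny cache or session files accumulate. You can check inode usage with:

df -i

The IUse% column works like Use%: anything approaching 100% means new files can no longer be created on that filesystem.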

Finding which folders are using the most space with du

Once you know a filesystem is nearly full, you can find which directories are largest with du (disk usage).

To see the largest directories under /var:

sudo du -h --max-depth=1 /var | sort -h

This does the following:

  • du -h --max-depth=1 /var shows the size of /var and its direct subdirectories.
  • | sort -h sorts the output by size, smallest to largest.

Be aware that du can take some time on very large directory trees and puts a bit of load on the disk while it runs. It still does not change data or configuration; it only reads metadata.

If your web files live under /var/www or /home, you can run similar commands there:

sudo du -h --max-depth=1 /var/www | sort -h

This often reveals large log folders, forgotten backups or cache directories that can be cleaned after careful review.
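
If a directory is large but it is not obvious why, listing individual big files can help. The 500M threshold below is arbitrary; raise or lower it to suit your server:

sudo find /var -xdev -type f -size +500M -exec ls -lh {} +

-xdev keeps the search on the same filesystem, -size +500M matches files over 500 MB, and ls -lh prints their sizes in readable units. This is still a read-only operation.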

Safety note: avoid deleting system or application files

Once you find large directories, be cautious about how you free space:

  • Do not remove files in /usr, /bin, /lib or other core system paths unless you are certain they are safe to remove.
  • Do not delete databases directly from the filesystem.
  • Prefer to rotate or truncate logs using your distribution’s log rotation tools (logrotate) rather than deleting random log files.
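
If you are unsure whether log rotation is configured sensibly, logrotate has a debug mode that reports what it would do without touching any files:

sudo logrotate -d /etc/logrotate.conf

The -d flag implies a dry run, so this is safe to run on a live server; it simply prints the rotation decisions it would make.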

Typical safe candidates for clean-up include:

  • Old application backups stored in /home or a custom backup folder.
  • Stale cache folders under wp-content/cache for WordPress, deleted via the application or plugin controls where possible.
  • Very old log archives in designated log directories, after checking they are not needed for compliance.

Before removing large amounts of data, it is wise to:

  • Take a backup or snapshot.
  • Record exactly what you deleted and where from.

Checking disk I/O bottlenecks (when the disk is the problem)

Sometimes plenty of free disk space exists, but the disk is overloaded. This is called an I/O bottleneck. It can make everything feel slow, particularly databases.

Two tools commonly used for this are iostat and iotop. They may not be installed by default.

On Ubuntu or Debian, you can install them like this:

sudo apt update
sudo apt install sysstat iotop

sysstat provides iostat. Installing these packages is low‑risk, but as with any system change, it is best tested off production where possible.

To see basic disk I/O statistics:

iostat -xz 1 5

  • -x shows extended statistics.
  • -z hides devices with no activity.
  • 1 5 means “update every 1 second, 5 times”.

Look at the %util and await columns:

  • %util near 100% for long periods suggests the disk is fully busy.
  • await is the average wait time for I/O requests. Very high numbers indicate slow responses.

To see which processes are doing the most disk I/O in real time:

sudo iotop

This opens an interactive view similar to top, listing processes and their I/O usage. Press q to quit. This helps identify, for example, if a backup process or a log-heavy application is stressing the disk.
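
iotop can also run non-interactively, which is useful for capturing a short sample. The flags below only change how the output is displayed:

sudo iotop -o -b -n 3

-o limits the list to processes actually doing I/O, -b switches to batch output you can redirect to a file, and -n 3 takes three samples before exiting.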

Putting it together: a simple checklist when your site feels slow

Step-by-step: CPU, RAM, disk and network quick health check

When your site feels slow, a quick structured check can save time:

  1. Check CPU:
    • Run top or htop and watch load average and CPU usage for a few minutes.
    • Use ps aux --sort=-%cpu | head to see the top CPU processes.
  2. Check memory:
    • Run free -h and note “available” memory and swap usage.
    • If swap is high, use ps aux --sort=-%mem | head to see top memory users.
  3. Check disk usage:
    • Run df -h to ensure no filesystem is over about 90% usage.
    • If space is tight, use du to locate large directories.
  4. Check disk performance (if CPU and RAM look fine):
    • Run iostat -xz 1 5 to look for high utilisation and wait times.
    • Optionally use sudo iotop to identify heavy I/O processes.
  5. Check network traffic if needed:
    • Use ss -s for a quick connection summary, or a tool such as iftop (usually installed separately), if you suspect a traffic spike or abuse.

By following this order, you work from most common bottlenecks to less common ones.
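
If you run this checklist often, you could collect the read-only commands into a small script. This is a minimal sketch, and the file name quick-check.sh is just a suggestion:

#!/bin/sh
# quick-check.sh - read-only health snapshot; none of these commands change anything
echo "== Load and uptime =="; uptime
echo "== Memory =="; free -h
echo "== Disk space =="; df -h
echo "== Top CPU processes =="; ps aux --sort=-%cpu | head -n 6
echo "== Top memory processes =="; ps aux --sort=-%mem | head -n 6

Save it, make it executable with chmod +x quick-check.sh, and run it with ./quick-check.sh whenever the site feels slow, so you always compare the same numbers.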

How this ties into PHP, MySQL and WordPress performance

These resource checks directly support higher‑level performance work:

  • If CPU is high and php-fpm processes are on top, caching pages or optimising plugins will likely help.
  • If the database is driving CPU and memory, query optimisation, indexing and cleaning up overhead may be needed.
  • If disk I/O is the bottleneck, database tuning, log rotation, or moving to faster storage can have a big impact.

For WordPress and WooCommerce, this might mean:

  • Using page caching and object caching to cut PHP and database load.
  • Optimising product catalog queries and checkout flows to avoid heavy queries on every request.
  • Offloading static assets and image optimisation.

At the hosting edge, the G7 Acceleration Network can automatically convert images to WebP and AVIF on the fly, and filter abusive bot traffic before it hits PHP or the database. This reduces wasted CPU and keeps performance more predictable even at busy times.

When resource upgrades or a managed VDS make more sense

If checks show that CPU, memory or disk are consistently close to their limits, and you have already applied sensible optimisations, it may be time to:

  • Add more CPU or RAM.
  • Move to faster storage.
  • Consider a different architecture, such as separate database and web layers.

Our article Scaling a Website Safely: When Vertical Scaling Stops Working explains when simply adding more resources is no longer the best answer.

At that stage, a managed virtual dedicated server can help by combining extra capacity with ongoing monitoring and tuning, so you are not constantly reacting to resource issues yourself.

Basic continuous monitoring options without becoming a full-time sysadmin

Lightweight tools and host-level graphs

Manual checks are useful, but it also helps to have a sense of trends over days and weeks. Even simple graphs can show:

  • Daily traffic peaks and how they affect CPU and RAM.
  • Slowly increasing disk usage before it becomes critical.
  • Unusual periods of heavy I/O or memory pressure.

Common approaches include:

  • Cloud provider dashboards that show CPU, RAM and disk metrics.
  • Lightweight monitoring agents that send metrics to a central system.
  • Basic log and metric tools like sar from sysstat to view historical data locally.
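
For example, once the sysstat collector is enabled (on many Debian-based systems this means setting ENABLED="true" in /etc/default/sysstat), sar can sample or replay resource usage. The commands below take five live one-second samples:

sar -u 1 5
sar -r 1 5
sar -q 1 5

-u covers CPU, -r memory and -q load average and run queue. Run without the interval and count, sar instead prints the history it has collected for the current day.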

Many managed hosting platforms include these graphs automatically, so you can focus more on interpreting them than on installing monitoring software.

Why uptime and resource alerts matter

Alerts let you know when something is heading out of a comfortable range so you can react calmly instead of dealing with an outage. Useful alerts include:

  • High CPU load sustained for more than a few minutes.
  • Low “available” memory or heavy swap usage.
  • Disks nearing capacity.
  • Web checks that fail to load the homepage or checkout.

These are especially important if you run online shops or time‑sensitive services, where quiet failures can cost orders long before anyone notices manually.
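
Many monitoring services provide these alerts out of the box, but even a tiny home-grown check can catch the disk-space case. This sketch prints a warning for any filesystem at 90% or more and could be scheduled from cron; the threshold and wording are arbitrary:

df -P | awk 'NR > 1 && $5+0 >= 90 {print "WARNING: " $6 " is " $5 " full"}'

df -P guarantees one line per filesystem, and the awk expression compares the Use% column against the threshold. Pair it with cron and your preferred notification channel to turn it into a real alert.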

For more context on how resource exhaustion ties into availability, see Why Websites Go Down: The Most Common Hosting Failure Points.

When to hand off to managed WordPress or managed VDS

There is a balance between learning enough to understand what is happening and taking on a full systems administration role.

Handing off may make sense if:

  • You mainly want to focus on your business or content rather than servers.
  • You have frequent traffic spikes and would rather a team monitor and tune capacity.
  • You run WooCommerce or other transactional sites where small slowdowns or downtime are costly.

For a single or small number of WordPress or WooCommerce sites, Managed WordPress hosting can remove much of the day‑to‑day burden. For more complex or multi‑site setups, a managed virtual dedicated server provides more flexibility while still offloading monitoring, patching and tuning work.

Summary: building confidence with Linux resource checks

Being able to check CPU, memory and disk usage is a core skill for anyone running a Linux server. It shifts you from guessing to informed decisions.

  • CPU checks tell you whether the server has the processing headroom it needs.
  • Memory checks show if RAM is comfortable or if swap and OOM events are a risk.
  • Disk checks confirm that you have space and performance for databases, logs and file storage.

The commands in this guide are observational. Used regularly, they give you a clear baseline of what “normal” looks like, so you can spot problems early.

If you decide you would rather spend more time on your site and less on resource graphs and shell sessions, you might find that managed virtual dedicated servers or Managed WordPress hosting are a natural next step. Either way, the understanding you have gained here will help you talk clearly about what your site needs and why.
