Object Cache
An object cache stores the results of database queries so that the next time they are needed, they are served from the cache instead of the query being run again. This greatly improves the performance of WordPress because it no longer has to query the database for every piece of data needed to build a response.
Redis is generally considered the best option for object caching today, though Memcached and the older Memcache PHP extension remain popular alternatives.
To install Redis, issue the following commands.
sudo apt install redis-server
sudo service php7.4-fpm restart
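Restarting PHP-FPM is only needed so that PHP picks up a Redis client; if you want to use the PhpRedis extension (generally recommended), install it as well and restart PHP-FPM again afterwards. Either way, you can confirm the Redis server is up and get a feel for how values are stored using redis-cli. The key name below is made up for illustration; WordPress manages its own keys through the plugin described next.
# Optional: install the PhpRedis extension so PHP can talk to Redis natively
sudo apt install php-redis
sudo service php7.4-fpm restart
# Verify the Redis server is running; it should reply with PONG
redis-cli ping
# Illustration only: store and read back a value, much like the object cache will
redis-cli set demo:blogname "My Site"
redis-cli get demo:blogname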
In order for WordPress to use Redis as an object cache, you need to install a Redis object cache plugin. Redis Object Cache by Till Krüss is a good choice.
Once the plugin is installed and activated, go to Settings > Redis to enable the object cache.
This is also the screen where you can flush the cache if required.
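The cache can also be flushed from the command line. The commands below assume WP-CLI is available and that Redis is running with its default settings; adjust as needed.
# Flush the WordPress object cache via WP-CLI (run from the WordPress root)
wp cache flush
# Or wipe everything stored in the local Redis instance
redis-cli flushall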
The benefit of object caching can be seen when you look at the average database query time, which has decreased from 2.1ms to 0.3ms. The average query times were measured using Query Monitor.
To see an even bigger leap in performance and a real decrease in server resource usage, we need to avoid the MySQL connection and PHP execution altogether.
Page Cache
Although an object cache can go a long way to improving your WordPress site’s performance, there is still a lot of unnecessary overhead in serving a page request. For many sites, content is rarely updated. It’s therefore inefficient to load WordPress, query the database and build the desired page on every single request. Instead, you should serve a static HTML version of the requested page.
Nginx allows you to automatically cache a static HTML version of a page using the FastCGI module. Any subsequent requests to the page will receive the cached HTML version without ever hitting PHP or MySQL.
Step 1: Edit the Nginx Main Configuration File
Edit the Nginx main configuration file.
sudo nano /etc/nginx/nginx.conf
In the http context, add the following 2 lines:
fastcgi_cache_path /etc/nginx/cache levels=1:2 keys_zone=phpcache:100m max_size=10g inactive=60m use_temp_path=off;
fastcgi_cache_key "$scheme$request_method$host$request_uri";
The first directive fastcgi_cache_path creates a FastCGI cache. This directive is only available under the http context of an Nginx configuration file.
The first argument specifies the cache location in the file system (/etc/nginx/cache/).
The levels parameter sets up a two-level directory hierarchy under /etc/nginx/cache/. Having a large number of files in a single directory can slow down file access, so I recommend a two-level directory for most deployments. If the levels parameter is not included, Nginx puts all files in the same directory. The first-level directory uses one character in its name and the second-level sub-directory uses two characters, both taken from the hash of the cache key.
The 3rd argument specifies the name of the shared memory zone (phpcache) and its size (100M). This memory zone stores the cache keys and metadata such as usage times. Keeping a copy of the keys in memory lets Nginx quickly determine whether a request is a HIT or a MISS without having to go to disk, which greatly speeds up the check. A 1MB zone can store data for about 8,000 keys, so the 100MB zone can store data for about 800,000 keys.
max_size sets the upper limit of the size of the cache (10GB in this example). If it is not specified, the cache can use all remaining disk space. Once the cache reaches its maximum size, the Nginx cache manager removes the least recently used files from the cache.
Data which has not been accessed during the inactive time period (60 minutes) will be purged from the cache by the cache manager, regardless of whether or not it has expired. The default value is 10 minutes. You can also use values like 12h (12 hours) and 7d (7 days).
Nginx first writes files that are destined for the cache to a temporary storage area (/var/lib/nginx/fastcgi/). use_temp_path=off tells Nginx to write them directly to the final cache directory to avoid unnecessary copying of data between file systems.
The 2nd directive, fastcgi_cache_key, defines the key used for cache lookups. Nginx applies an MD5 hash to the cache key and uses the result as the name of the cache file. After entering the two directives in the http context, save and close the file.
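To get a feel for how cache files are named, you can hash a sample key yourself. The key string below is purely illustrative; the real key is assembled by Nginx from the variables in fastcgi_cache_key.
echo -n "httpsGETexample.com/" | md5sum
If the resulting hash were, say, b7f54b2df7773722d382f4809d65029c, then with levels=1:2 the cached response would be stored as /etc/nginx/cache/c/29/b7f54b2df7773722d382f4809d65029c: the last character of the hash names the first-level directory and the next two characters name the sub-directory.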
Step 2: Edit Nginx Server Block
Then open your server block configuration file.
sudo nano /etc/nginx/conf.d/your-domain.conf
(or edit it directly in TinyCP under Web >> domain >> your domain >> custom config)
Scroll down to the location ~ \.php$ section and add the following lines inside it.
location ~ \.php$ {
    fastcgi_cache phpcache;
    fastcgi_cache_valid 200 301 302 60m;
    fastcgi_cache_use_stale error timeout updating invalid_header http_500 http_503;
    fastcgi_cache_min_uses 1;
    fastcgi_cache_lock on;
    add_header X-FastCGI-Cache $upstream_cache_status;
}
The fastcgi_cache directive enables caching, using the memory zone previously created by the fastcgi_cache_path directive.
fastcgi_cache_valid sets the cache time depending on the HTTP status code. In the example above, responses with status codes 200, 301 and 302 are cached for 60 minutes. You can also use time periods like 12h (12 hours) and 7d (7 days).
Nginx can deliver stale content from its cache when it can’t get updated content from the upstream PHP-FPM server, for example when the MySQL/MariaDB database server is down. Rather than relaying the error to clients, Nginx delivers the stale version of the file from its cache. To enable this behaviour, we added the fastcgi_cache_use_stale directive.
fastcgi_cache_min_uses sets the number of times an item must be requested by clients before Nginx caches it. The default value is 1.
With fastcgi_cache_lock enabled, if multiple clients request a file that is not currently in the cache, only the first of those requests is allowed through to the upstream PHP-FPM server. The remaining requests wait for that request to be satisfied and then pull the file from the cache. Without fastcgi_cache_lock enabled, all of those requests go straight to the upstream PHP-FPM server.
The add_header directive adds the X-FastCGI-Cache header to the HTTP response. It can be used to check whether or not a request was served from the FastCGI cache.
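For orientation, here is a rough sketch of what the complete PHP location block might look like after the additions. The include line and the fastcgi_pass socket path are assumptions based on a typical Ubuntu PHP 7.4 setup; keep whatever your existing block already uses.
location ~ \.php$ {
    include snippets/fastcgi-php.conf;
    fastcgi_pass unix:/run/php/php7.4-fpm.sock;

    # FastCGI cache directives added in this step
    fastcgi_cache phpcache;
    fastcgi_cache_valid 200 301 302 60m;
    fastcgi_cache_use_stale error timeout updating invalid_header http_500 http_503;
    fastcgi_cache_min_uses 1;
    fastcgi_cache_lock on;
    add_header X-FastCGI-Cache $upstream_cache_status;
}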
Now save and close the server block configuration file. Then test your Nginx configuration.
sudo nginx -t
If the test is successful, reload Nginx.
sudo service nginx reload
or
sudo systemctl reload nginx
The cache manager now starts and the cache directory (/etc/nginx/cache) will be created automatically.
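You can check that the directory exists; it will remain empty until the first cacheable page has been served.
sudo ls -la /etc/nginx/cache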
Testing Nginx FastCGI Cache
Reload your site’s home page a few times, then use curl to fetch the HTTP response headers.
curl -I https://your-domain.com/
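The exact headers vary from site to site, but the output should look something like this (illustrative only):
HTTP/2 200
server: nginx
content-type: text/html; charset=UTF-8
x-fastcgi-cache: HIT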
Take a look at the X-FastCGI-Cache header. HIT indicates the response was served from the cache; MISS means it was fetched from PHP-FPM and, if cacheable, stored for subsequent requests.
Stuff That Should Not Be Cached
Login sessions, user cookies, POST requests, query strings, the WordPress back-end, sitemaps, feeds and comment authors should not be cached. To disable caching for these items, edit your server block configuration file (or edit it directly in TinyCP under Web >> domain >> your domain >> custom config). Paste the following code into the server context, above the location ~ \.php$ block.
set $skip_cache 0;

# POST requests and URLs with a query string should always go to PHP
if ($request_method = POST) {
    set $skip_cache 1;
}
if ($query_string != "") {
    set $skip_cache 1;
}

# Don't cache URIs containing the following segments
if ($request_uri ~* "/wp-admin/|/xmlrpc.php|wp-.*.php|^/feed/|/tag/.*/feed/|index.php|/.*sitemap.*.(xml|xsl)") {
    set $skip_cache 1;
}

# Don't use the cache for logged-in users or recent commenters
if ($http_cookie ~* "comment_author|wordpress_[a-f0-9]+|wp-postpass|wordpress_no_cache|wordpress_logged_in") {
    set $skip_cache 1;
}
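The $skip_cache variable does nothing on its own; it only takes effect once the PHP location block references it. If the following two directives are not already present in the location ~ \.php$ block from Step 2, add them there as well:
fastcgi_cache_bypass $skip_cache;
fastcgi_no_cache $skip_cache;
fastcgi_cache_bypass tells Nginx not to answer matching requests from the cache, and fastcgi_no_cache tells it not to store their responses.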
Save and close the file. Then test the Nginx configuration.
sudo nginx -t
If the test is successful, reload Nginx.
sudo systemctl reload nginx
That’s it. FastCGI page caching is now fully set up.
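To confirm the exclusions work, request a URL that should skip the cache, for example one with a query string (the URL below is just an example). With the bypass directives in place, the X-FastCGI-Cache header should read BYPASS for such requests, while normal pages keep showing HIT.
curl -I "https://your-domain.com/?s=test"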