Shopware HTTP Cache with Varnish XKey
With version 6.7.0, Shopware removes Redis as a dependency for using Varnish as an HTTP cache and instead provides an alternative based on Varnish XKey. Varnish XKey is part of the Varnish Module Collection, which is not integrated into Varnish Cache itself. We provide a corresponding Debian repository that enables the installation of the Varnish Module Collection via apt and thus simplifies the integration of Varnish XKey.
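Assuming the creoline Debian repository mentioned above is already set up on the Varnish server, the installation can be sketched as follows. The package name varnish-modules is the usual Debian name for the Varnish Module Collection and may differ in your repository:

```shell
# Install Varnish and the Varnish Module Collection (includes vmod_xkey)
apt-get update
apt-get install -y varnish varnish-modules

# Verify that the xkey vmod was installed (the path may vary by distribution)
find /usr/lib -name 'libvmod_xkey*'
```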
For the configuration of Varnish with XKey, Shopware provides a complete reference configuration in the Shopware developer documentation.
Prerequisites
- Shopware 6 Cluster
- Varnish Server with Varnish 7.7
- Varnish Modules Collection
For Managed Servers, the installation of Varnish 7.7 and the Varnish Modules Collection is carried out by our support team. If you would like to have the software installed, please contact our support team.
Trusted Proxies
Enter the source IP addresses of the proxy servers in the TRUSTED_PROXIES configuration. We recommend operating within a creoline VPC network in order to shield the internal infrastructure from the public network.
As of Shopware 6.6, a framework.yaml must first be created, which activates Shopware's internal TRUSTED_PROXIES configuration.
# config/packages/framework.yaml
framework:
    trusted_proxies: '%env(TRUSTED_PROXIES)%'

Example configuration via .env:

TRUSTED_PROXIES=127.0.0.1,10.20.0.1/32,10.20.0.2/32

In this example configuration, the two Varnish instances 10.20.0.1 and 10.20.0.2 are authorized as proxies.
You can find more information on trusted proxies in the official Symfony documentation at Symfony Proxy Docs.
Shopware 6 XKey configuration
In order for Shopware 6 to set the correct Cache-Control header, the reverse proxy must be configured. With the following configuration, Shopware sets the Cache-Control header value public, must-revalidate instead of private, no-cache and invalidates the caches on the Varnish servers accordingly via BAN requests.
Before configuring Shopware, it is worth taking a look at the Error analysis section and applying the settings described there in advance if necessary. This usually avoids downtime, as it is difficult to estimate in advance, especially without a staging cluster, how much overhead will occur in the HTTP headers.
Create the following file and insert the following content accordingly:
nano config/packages/shopware.yml

Be aware that the configuration key changed from storefront.reverse_proxy to shopware.http_cache.reverse_proxy starting with Shopware 6.6.

shopware:
    http_cache:
        reverse_proxy:
            enabled: true
            use_varnish_xkey: true
            hosts:
                # The names for app slave servers start with 2, as no. 1 is the app master server
                - '10.20.0.XX' # <- Varnish cache for app slave 2 (10.20.0.X)
                - '10.20.0.ZZ' # <- Varnish cache for app slave 3 (10.20.0.Z)

In addition, the environment variable SHOPWARE_HTTP_CACHE_ENABLED=1 must be set in the .env file.
Varnish 7.7 Configuration
The Varnish configuration can be edited and rolled out directly via the configuration module in the Customer Center. Navigate to Server → Server X → Configuration files → Varnish to customize the Varnish configuration.
Please note that saving changes to the Varnish configuration immediately triggers a configuration test with a subsequent reload of the Varnish instance. If you are setting up Varnish Cache for the first time, we recommend using a test instance to validate the configuration.
Shopware 6 Varnish XKey VCL
We provide you with the Shopware 6 Varnish XKey VCL and are happy to support you with any questions regarding integration. Please note that it is your responsibility to ensure that the Varnish cache functions correctly in your environment. If plugins are used that do not fully implement the HTTP cache behavior of Shopware 6, this can lead to unexpected cache behavior. We therefore recommend that you first check the Varnish cache in a test instance, or ideally in a staging cluster, in order to detect possible conflicts at an early stage.
The latest Shopware 6 Varnish VCL can be obtained directly from GitHub:
vcl 4.1;
import std;
import xkey;
import cookie;
# Specify your app nodes here. Use round-robin balancing to add more than one.
backend default {
.host = "__SHOPWARE_BACKEND_HOST__";
.port = "__SHOPWARE_BACKEND_PORT__";
}
# ACL for purgers IP. (This needs to contain app server ips)
acl purgers {
"127.0.0.1";
"localhost";
"::1";
__SHOPWARE_ALLOWED_PURGER_IP__;
}
sub vcl_recv {
# Handle PURGE
if (req.method == "PURGE") {
if (client.ip !~ purgers) {
return (synth(403, "Forbidden"));
}
if (req.http.xkey) {
set req.http.n-gone = xkey.purge(req.http.xkey);
return (synth(200, "Invalidated "+req.http.n-gone+" objects"));
} else {
return (purge);
}
}
if (req.method == "BAN") {
if (client.ip !~ purgers) {
return (synth(403, "Forbidden"));
}
ban("req.url ~ "+req.url);
return (synth(200, "BAN URLs containing (" + req.url + ") done."));
}
# Only handle relevant HTTP request methods
if (req.method != "GET" &&
req.method != "HEAD" &&
req.method != "PUT" &&
req.method != "POST" &&
req.method != "PATCH" &&
req.method != "TRACE" &&
req.method != "OPTIONS" &&
req.method != "DELETE") {
return (pipe);
}
if (req.http.Authorization) {
return (pass);
}
# We only deal with GET and HEAD by default
if (req.method != "GET" && req.method != "HEAD") {
return (pass);
}
# Micro-optimization: Always pass these paths directly to php without caching
# to prevent hashing and cache lookup overhead
# Note: virtual URLs might bypass this rule (e.g. /en/checkout)
if (req.url ~ "^/(checkout|account|admin|api)(/.*)?$") {
return (pass);
}
cookie.parse(req.http.cookie);
# set cache-hash cookie value to header for hashing based on vary header
# if header is provided directly the header will take precedence
if (!req.http.sw-cache-hash) {
set req.http.sw-cache-hash = cookie.get("sw-cache-hash");
}
set req.http.currency = cookie.get("sw-currency");
set req.http.states = cookie.get("sw-states");
if (req.url == "/widgets/checkout/info" && (req.http.sw-cache-hash == "" || (cookie.isset("sw-states") && req.http.states !~ "cart-filled"))) {
return (synth(204, ""));
}
# Ignore query strings that are only necessary for the JS on the client. Customize as needed.
# Example: "/products?gclid=abc&page=2" is normalized to "/products?page=2"
if (req.url ~ "(\?|&)(pk_campaign|piwik_campaign|pk_kwd|piwik_kwd|pk_keyword|pixelId|kwid|kw|adid|chl|dv|nk|pa|camid|adgid|cx|ie|cof|siteurl|utm_[a-z]+|_ga|gclid)=") {
# see rfc3986#section-2.3 "Unreserved Characters" for regex
set req.url = regsuball(req.url, "(pk_campaign|piwik_campaign|pk_kwd|piwik_kwd|pk_keyword|pixelId|kwid|kw|adid|chl|dv|nk|pa|camid|adgid|cx|ie|cof|siteurl|utm_[a-z]+|_ga|gclid)=[A-Za-z0-9\-\_\.\~%]+&?", "");
}
set req.url = regsub(req.url, "(\?|\?&|&&)$", "");
# Normalize query arguments
set req.url = std.querysort(req.url);
# Set a header announcing Surrogate Capability to the origin
set req.http.Surrogate-Capability = "shopware=ESI/1.0";
# Make sure that the client IP is forwarded to the backend.
if (req.http.x-forwarded-for) {
set req.http.X-Forwarded-For = req.http.X-Forwarded-For + ", " + client.ip;
} else {
set req.http.X-Forwarded-For = client.ip;
}
return (hash);
}
sub vcl_hash {
# Consider Shopware HTTP cache cookies
if (req.http.sw-cache-hash != "") {
hash_data("+context=" + req.http.sw-cache-hash);
} elseif (req.http.currency != "") {
hash_data("+currency=" + req.http.currency);
}
}
sub vcl_hit {
# Consider client states for response headers
if (req.http.states) {
if (req.http.states ~ "logged-in" && obj.http.sw-invalidation-states ~ "logged-in" ) {
return (pass);
}
if (req.http.states ~ "cart-filled" && obj.http.sw-invalidation-states ~ "cart-filled" ) {
return (pass);
}
}
}
sub vcl_backend_fetch {
unset bereq.http.currency;
unset bereq.http.states;
}
sub vcl_backend_response {
# Serve stale content for three days after object expiration
set beresp.grace = 3d;
unset beresp.http.X-Powered-By;
unset beresp.http.Server;
# This should happen before any early return via deliver, so that ESI can still be processed
if (beresp.http.Surrogate-Control ~ "ESI/1.0") {
unset beresp.http.Surrogate-Control;
set beresp.do_esi = true;
}
# Reducing hit-for-miss duration for dynamically uncacheable responses
if (beresp.http.sw-dynamic-cache-bypass == "1") {
# Mark as "Hit-For-Miss" for the next n seconds
set beresp.ttl = 1s;
set beresp.uncacheable = true;
unset beresp.http.sw-dynamic-cache-bypass;
return (deliver);
}
if (bereq.url ~ "\.js$" || beresp.http.content-type ~ "text") {
set beresp.do_gzip = true;
}
if (beresp.ttl > 0s && (bereq.method == "GET" || bereq.method == "HEAD")) {
unset beresp.http.Set-Cookie;
}
}
sub vcl_deliver {
## we don't want the client to cache anything except assets and store-api responses
if (resp.http.Cache-Control !~ "private" && req.url !~ "^/(theme|media|thumbnail|bundles|store-api)/") {
set resp.http.Pragma = "no-cache";
set resp.http.Expires = "-1";
set resp.http.Cache-Control = "no-store, no-cache, must-revalidate, max-age=0";
}
# Set a cache header to allow us to inspect the response headers during testing
if (obj.hits > 0) {
unset resp.http.set-cookie;
set resp.http.X-Cache = "HIT";
if (obj.ttl <= 0s && obj.grace > 0s) {
set resp.http.X-Cache = "STALE";
}
} else {
set resp.http.X-Cache = "MISS";
}
# invalidation headers are only for internal use
unset resp.http.sw-invalidation-states;
unset resp.http.xkey;
unset resp.http.X-Varnish;
unset resp.http.Via;
unset resp.http.Link;
}

Define backend
In our clusters, a Varnish cache is always operated in a 1:1 ratio to the app slaves for simple scaling. This means that if there are 2 app slave servers, the setup must also contain 2 Varnish cache servers.
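As a side note, the round-robin balancing mentioned in the VCL comment on the default backend refers to vmod_directors, which ships with Varnish. A minimal sketch for balancing one Varnish instance across several app servers, with illustrative backend names and IPs:

```vcl
import directors;

backend app2 { .host = "10.20.0.Y"; .port = "80"; }
backend app3 { .host = "10.20.0.Z"; .port = "80"; }

sub vcl_init {
    new app_pool = directors.round_robin();
    app_pool.add_backend(app2);
    app_pool.add_backend(app3);
}

sub vcl_recv {
    set req.backend_hint = app_pool.backend();
}
```

In the cluster setup described here, however, the 1:1 assignment remains the standard.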
The 1:1 assigned app slave server is specified in the default backend. Example for Varnish server #2:
backend default {
.host = "10.20.0.Y"; # <- VPC IP address app slave #2
.port = "80"; # <- Web server port for 10.20.0.Y
}

Define Purger
Purgers are servers that are allowed to clear the cache via BAN or PURGE requests. These are primarily the app servers; all app servers must be specified.
# ACL for purgers IP. (This needs to contain app server ips)
acl purgers {
"127.0.0.1";
"localhost";
"::1";
"10.20.0.X"; # <- VPC IP address app master #1
"10.20.0.Y"; # <- VPC IP address app slave #2
"10.20.0.Z"; # <- VPC IP address app slave #3
# ...
}

Allow BAN requests via URL through the load balancer
The Varnish XKey setup also supports BAN requests via a path such as shop.creoline-demo.de/products, so that only parts of the website are removed from the HTTP cache. The BAN request is often resolved directly via the domain name, in which case the load balancer must also be added as a purger.
Make sure that BAN requests via the load balancer are only permitted from certain IP addresses, for example by allowing only the WAN IPv4 addresses of the app servers in the load balancer configuration. Otherwise, third parties could clear the cache and degrade the performance of your website.
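With the load balancer authorized as a purger, a path-based invalidation can then be issued via the domain, for example with curl (using the example domain from above):

```shell
# Remove all cached URLs containing /products from the HTTP cache
curl -X BAN https://shop.creoline-demo.de/products
# Varnish answers with a synthetic 200 response: "BAN URLs containing (/products) done."
```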
# ACL for purgers IP. (This needs to contain app server ips)
acl purgers {
"127.0.0.1";
"localhost";
"::1";
"10.20.0.X"; # <- VPC IP address app master #1
"10.20.0.Y"; # <- VPC IP address app slave #2
"10.20.0.Z"; # <- VPC IP address app slave #3
# ...
"10.20.0.1"; # <- VPC IP address load balancer
}

Soft-Purge vs. Hard-Purge
The above Varnish configuration uses hard purges by default: an invalidated page is removed from the cache completely, so the next request for it must be generated by the application and takes longer. To counteract this, soft purges can be used, which deliver the stale page to the client once more while the cache for this page is refreshed in the background.
Soft purges require a small adjustment to the Varnish configuration, see the xkey.purge call in vcl_recv of the above Varnish configuration:
# Hard-Purge
set req.http.n-gone = xkey.purge(req.http.xkey);
# Soft-Purge
set req.http.n-gone = xkey.softpurge(req.http.xkey);

HTTP cache for logged-in users or visitors with shopping carts
By default, the Shopware HTTP cache is only active for visitors who are not logged in and have no items in their shopping cart. If you do not use customizations for logged-in users, the HTTP cache can also be activated for logged-in users and visitors with filled shopping carts.
Before activating this function, make sure that you do not offer customer-specific or customer-group-specific prices.
# config/packages/prod/shopware.yaml
shopware:
    cache:
        invalidation:
            http_cache: []

Debugging
The default configuration removes all cache-related HTTP headers except Age, which can be used to determine the age of a cached object. An Age of 0 means that the cache is not working. This is usually because the web application has not set the Cache-Control: public HTTP header.
The curl command can be used as follows to check this:
curl -vvv -H 'Host: <sales-channel-domain>' <app-server-ip> 1> /dev/null

The command should return a response like the following:
< HTTP/1.1 200 OK
< Cache-Control: public, s-maxage=7200
< Content-Type: text/html; charset=UTF-8
< Xkey: theme.sw-logo-desktop, ...

If the Cache-Control: public or Xkey HTTP header cannot be found in the response, this is probably due to a faulty configuration in the web application itself.
Check the configuration of Shopware 6 to see if the Reverse Proxy Mode has been activated correctly.
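For targeted tests, a single surrogate key from the Xkey response header can also be invalidated manually. The PURGE request must come from one of the IP addresses in the purgers ACL; host, key, and IP are placeholders:

```shell
# Invalidate every cached object tagged with the given surrogate key
curl -X PURGE -H 'xkey: theme.sw-logo-desktop' -H 'Host: <sales-channel-domain>' http://<varnish-server-ip>/
# Varnish answers with a synthetic 200 response: "Invalidated N objects"
```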
Error analysis
HTTP status 503 Backend fetch failed on individual product and/or category pages
Depending on the requirements of your store, the HTTP headers of one or more product and category pages may exceed the default header sizes configured in Varnish.
Check
Connect to your Varnish server via SSH and execute the following command:
curl -I -H "Host: <sales-channel-domain>" -H "X-Forwarded-Proto: https" http://<app-server-ip>/path/with/error/503 | wc -c

If the result exceeds 8192 bytes, the Varnish systemd service must be adjusted to configure larger header size limits.
Solution
Root permissions are required for this. For Managed Servers, the file is available via the configuration module in the Customer Center. If the file is not available, please contact our support team.
Execute the following command as root user:
systemctl edit varnish

Adjust the ExecStart setting as follows:
[Service]
ExecStart=/usr/sbin/varnishd -a :80 -a localhost:8443,PROXY -f /etc/varnish/default.vcl -P %t/%N/varnishd.pid -p feature=+http2 -s malloc,2g \
-p http_req_hdr_len=32k \
-p http_resp_hdr_len=32k \
-p http_resp_size=64k
Restart=on-failure

Then restart Varnish to apply the new daemon parameters. Note that a plain reload only reloads the VCL, not the varnishd start parameters, and that a restart empties the cache.

systemctl restart varnish

If your server is a Managed Server, the commands that must be executed as the root user can only be executed by our support. Please contact our support team so that we can make the desired changes.
HTTP status 502 Bad Gateway
Check FastCGI Proxy Buffers
It is possible that the size of the FastCGI proxy buffers in Nginx is not sufficient for the larger response headers produced with Varnish XKey, in which case the following settings must be added to the server or location directive.
Check the corresponding Nginx error log file for the following error message:
The file name error.log may differ from your configuration.
grep "upstream sent too big header" /var/log/nginx/error.log

Solution
Increase the following settings within your Nginx server or location directive. The fastcgi_buffer_size value controls the buffer for the first part of the upstream response, which contains the response headers, so 32k allows headers of up to 32 KiB.
The file name website.conf may differ from your configuration.
nano /etc/nginx/conf.d/website.conf

Option 1: Server directive
server {
...
fastcgi_buffers 8 16k;
fastcgi_buffer_size 32k;
...
}

Option 2: Location directive
location XXX {
...
fastcgi_buffers 8 16k;
fastcgi_buffer_size 32k;
...
}

Then test and reload the Nginx configuration:
nginx -t && systemctl reload nginx
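Independently of the individual error scenarios, a quick end-to-end check of the cache is possible via the X-Cache header that the VCL above sets in vcl_deliver (host and IP are placeholders, as in the examples above):

```shell
# Request the same page twice; the second response should be served from the cache
curl -sI -H 'Host: <sales-channel-domain>' http://<varnish-server-ip>/ | grep -i '^x-cache'
curl -sI -H 'Host: <sales-channel-domain>' http://<varnish-server-ip>/ | grep -i '^x-cache'
# X-Cache: MISS on the first request and X-Cache: HIT on the second indicates a working cache
```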