In my previous post, I gave an overview of a high-performance caching setup that facilitates SSL everywhere. This is important because one of the most widely used reverse-proxy caches for Drupal sites is Varnish, which doesn't support SSL termination on its own. Since that post, I gave a presentation at DrupalCamp NJ 2013, where I presented the analogy I had developed previously and expanded on it. I prepared a demo with a three-server setup: one server hosted the Pound reverse proxy, one hosted the Varnish reverse-proxy HTTP accelerator, and one hosted my LAMP server with a Drupal site and another quick demo site to make it apparent when Varnish was working.
At the end of my presentation, a number of people asked that I post the configuration files I used in my demo. We use Ubuntu Linux servers at Zivtech, so the instructions here assume such servers are being used. But the configuration files for Varnish, Pound, and Apache are standard across platforms (e.g., the paths may change, but the files themselves remain relevant regardless of your particular system).
Set up the Apache server
In my example, I have three servers, which I've called Alpha, Beta, and Gamma. These servers have the IP addresses 192.168.56.101, 192.168.56.102, and 192.168.56.103, respectively. (I'll actually give Alpha a second address as well, because each SSL site should have its own IP address.) Gamma, the Apache/PHP box, will be the first one set up. Its setup is the simplest because it is just a standard LAMP server, and I won't spend time discussing how to set one up; there's plenty of other documentation out there. Once you're up and running, set up a Drupal site to begin testing. Also set up a simple PHP page to test the initial Varnish setup. Do this because Drupal is complicated: you need to ensure that the most basic case is working before trying to debug a more sophisticated one -- in debugging, always reduce.
The basic page is easy:
<html>
<head>
<style>
  body { background-color: #<?php printf('%06x', mt_rand(0, 0xFFFFFF)); ?>; }
</style>
</head>
<body>
The current time is <?php print time(); ?>
</body>
</html>
It spits out the Unix timestamp and a random hex color for the background of the page. This is handy for testing Varnish because both the timestamp and the color will stop changing on reload once the page is cached.
Make sure to create a URL to which Apache will respond; I created a vhost entry for my test page. If you don't want to bother with real DNS for these experiments, just set some entries in your computer's hosts file. Test the new site. If all goes well, you should see some random colors.
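For reference, a minimal vhost for the test page might look something like this (the DocumentRoot here is just an example path from my demo; substitute your own):

```apache
<VirtualHost *:80>
  ServerName static-pound.local
  DocumentRoot /var/www/static-pound
</VirtualHost>
```

While testing Apache directly, a hosts entry such as "192.168.56.103 static-pound.local" on your workstation will do; later, once Pound is in front, point the name at Alpha instead.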
Set up the Varnish reverse-proxy cache
Okay. Now things get more interesting. We're going to set up the Varnish instance on server Beta. You don't have to do this on a different server if you don't have the resources; Varnish can just as easily run on the same server as Apache and Pound. I decided to use three different servers for my example only to make it very clear how each service remains independent. Having Varnish and Pound on the same server as Apache also means that you'll have to move ports around. If each service has its own server, you can keep Varnish and Apache on port 80, with Pound on 80 and 443. But if you want them all on the same machine, Pound will take 80 and 443, and Varnish and Apache will need to be moved to other unclaimed ports.
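For example, a single-machine layout might look like this (the alternate port numbers are arbitrary; any unclaimed ports will do):

```
Pound   : ports 80 and 443 (public entry point)
Varnish : port 6081 (backend for Pound)
Apache  : port 8080 (backend for Varnish)
```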
On Beta I did a standard install of Varnish.
apt-get install varnish
On Ubuntu 12.04 and above this gives us Varnish 3.x.
Varnish uses a configuration file located at
/etc/varnish/default.vcl
to control how it caches and what backends it should route traffic to. I used a configuration file that was created by Lullabot and adapted it to get up and running quickly for this demo.
There are a couple of lines that I added and want to point out. First, note that our backend for Varnish is pointing at the IP of our Apache server.
backend web1 {
  .host = "192.168.56.103";
  .port = "80";
  .connect_timeout = 600s;
  .first_byte_timeout = 600s;
  .between_bytes_timeout = 600s;
}
Next, you'll see that I added an additional URL to the list of cache passes. This list tells Varnish never to cache pages matching certain URL patterns, which is important for some admin pages. I added
/admin/https-test
to my list of passes because I will be using it to help test my setup.
if (req.url ~ "^/status\.php$" ||
    req.url ~ "^/update\.php$" ||
    req.url ~ "^/ooyala/ping$" ||
    req.url ~ "^/admin/build/features" ||
    req.url ~ "^/info/.*$" ||
    req.url ~ "^/flag/.*$" ||
    req.url ~ "^.*/ajax/.*$" ||
    req.url ~ "^/admin/https-test" ||
    req.url ~ "^.*/ahah/.*$") {
  return (pass);
}
Finally, I added a new cookie regular expression so that secure session cookies will be passed through. That is the line that includes SSESS.
if (req.http.Cookie) {
  set req.http.Cookie = ";" + req.http.Cookie;
  set req.http.Cookie = regsuball(req.http.Cookie, "; +", ";");
  set req.http.Cookie = regsuball(req.http.Cookie, ";(SESS[a-z0-9]+|NO_CACHE)=", "; \1=");
  set req.http.Cookie = regsuball(req.http.Cookie, ";(SSESS[a-z0-9]+|NO_CACHE)=", "; \1=");
  set req.http.Cookie = regsuball(req.http.Cookie, ";[^ ][^;]*", "");
  set req.http.Cookie = regsuball(req.http.Cookie, "^[; ]+|[; ]+$", "");
}
Lastly, I set Varnish to listen on port 80. In this setup Varnish's port doesn't matter much, since we'll be proxying everything through Pound, but for clarity I'll keep it on port 80. This is configured in
/etc/default/varnish
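For reference, the listen port is set via the -a flag in the DAEMON_OPTS line of that file. A typical configuration looks something like this (the storage size and admin port shown are just the Ubuntu package defaults):

```
DAEMON_OPTS="-a :80 \
             -T localhost:6082 \
             -f /etc/varnish/default.vcl \
             -S /etc/varnish/secret \
             -s malloc,256m"
```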
You can now test that this is working by visiting the simple test page from above through Varnish. After Varnish serves the page once, it should cache it; subsequent reloads should show the same timestamp, and the background color should stop changing.
Almost there.
Setting up Pound
Now we set up Pound to listen on both ports 443 and 80. We'll set it to redirect port 80 traffic to port 443, thus achieving SSL for the entire site.
ListenHTTP
  Address 192.168.56.101
  Port 80
  Service
    HeadRequire "Host: static-pound.local"
    Redirect "https://static-pound.local"
  End
End

ListenHTTPS
  HeadRemove "X-Forwarded-Proto"
  AddHeader "X-Forwarded-Proto: https"
  Address 192.168.56.101
  Port 443
  Cert "/etc/pound/demo.pem"
  Service
    BackEnd
      Address 192.168.56.102
      Port 80
    End
  End
End
Note that Pound is listening for the site based on its Host name. Also note that we are adding an X-Forwarded-Proto header. This will get picked up in Apache so that we can alert Drupal that we actually are behind SSL. Drupal issues a secure-only session cookie (starting with SSESS) in that case.
Note that I already put a cert in place; this is a self-signed cert that I made for this test setup. Creating one is easy, and there are plenty of guides available if you would like to do the same.
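As a sketch, generating a self-signed cert can be done with openssl; note that Pound expects the private key and the certificate concatenated into a single PEM file (the filenames and hostname here are just the ones from my demo):

```shell
# Generate a key and self-signed certificate for the demo hostname,
# valid for one year, without prompting for details.
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout demo.key -out demo.crt -days 365 \
  -subj "/CN=static-pound.local"

# Pound wants key + cert in one PEM file.
cat demo.key demo.crt > demo.pem
```

Copy the resulting demo.pem to the path referenced by the Cert directive (e.g., /etc/pound/demo.pem).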
After starting Pound now with
/etc/init.d/pound start
you should be able to visit the Pound server and see your static site being served over HTTPS through Pound and Varnish. Victory!
Final tweaks
Since the Apache server doesn't have SSL enabled, a Drupal instance on that server will not work properly over SSL. To get Drupal to recognize that a request actually arrived over HTTPS, we must tell Apache to set the HTTPS flag. We do this in our vhost file. All we need is to add one line to the VirtualHost configuration:
SetEnvIf X-Forwarded-Proto https HTTPS=on
Now, when Apache reads the X-Forwarded-Proto header set by Pound, it will set the HTTPS flag to true. The SetEnvIf directive comes from an Apache module called mod_setenvif; if that module isn't enabled, the directive will fail, so make sure to enable it first. Drupal will now report that SSL is enabled. To test this, I created a simple module that reports on the status of HTTPS. The final step before production is to download and enable the Varnish module for Drupal, which triggers Varnish cache flushes when you flush your Drupal cache. Everything will work without it, but you may see stale content at times, so it's best to enable it.
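The module itself isn't included here, but the core of such a check is tiny. A standalone sketch (hypothetical, not the actual module code) simply inspects the HTTPS server variable that the SetEnvIf line populates:

```php
<?php
// Report whether Apache believes this request is secure.
// With the SetEnvIf line in place, $_SERVER['HTTPS'] will be set
// for requests that Pound forwarded with X-Forwarded-Proto: https.
$secure = !empty($_SERVER['HTTPS']) && $_SERVER['HTTPS'] !== 'off';
print $secure ? 'HTTPS is enabled' : 'HTTPS is NOT enabled';
```

In my demo this kind of check lived behind the /admin/https-test path, which is why that path was added to the Varnish pass list earlier.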
There's plenty more you can do with this setup, including sophisticated load balancing, a simple CDN setup, or even edge side includes. Now that you have a starting point, I encourage you to experiment with the possibilities.