There was some knowledge floating around that 860 bytes was the
optimal minimum size, I think from an Akamai engineer or something,
but the HTML5 Boilerplate server configs use 256 bytes, and I
actually have HTML content that is less than 860 bytes, so I can
benefit from compressing it. gzip is relatively costly on the
compressing (server) side, but very cheap for the client to
decompress, so this is a good trade-off.
See: https://github.com/h5bp/server-configs-nginx/blob/master/nginx.conf
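A minimal sketch of the relevant directives (the gzip_types list
here is illustrative; text/html is always compressed when gzip is
on):

    gzip on;
    gzip_min_length 256;
    gzip_types text/css application/javascript image/svg+xml;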
The variable name is misleading, as all this really does is install
the certbot client and its dependencies, and we generally want this
to always happen. If a host doesn't want it, they can override it
in their host vars.
Perhaps I should rename this variable to "bootstrap_letsencrypt" or
something so it is more accurate.
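As for the host override mentioned above, e.g. in host_vars
(assuming the variable is currently called use_letsencrypt, as used
elsewhere in this repo):

    use_letsencrypt: 'no'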
The `ConditionFileIsExecutable` directive goes in the [Unit]
section! This fixes the error:
systemd[1]: [/etc/systemd/system/renew-letsencrypt.service:6] Unknown lvalue 'ConditionFileIsExecutable' in section 'Service'
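A sketch of the corrected unit (the description and client path are
assumptions):

    [Unit]
    Description=Renew Let's Encrypt certificates
    ConditionFileIsExecutable=/opt/certbot-auto

    [Service]
    Type=oneshot
    ExecStart=/opt/certbot-auto renew --quiet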
Used to indicate whether a vhost needs PHP configuration or not (a
static site, for example, doesn't). Set in the host's nginx_vhosts
block. Defaults to "False" if unset.
This reverts commit 201165cff6.
Turns out this actually breaks initial deployments: the cache gets
updated in the first task, then you add sources for nginx and
mariadb, but the indexes don't get updated because the cache is
< 3600 seconds old, so you end up getting the distro's versions of
nginx and mariadb.
When you give Ansible the key id it will check if the key exists
before trying to download and add it. I got the long fingerprint
from `sudo apt-key finger`.
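A sketch with the apt_key module (verify the fingerprint yourself
with `sudo apt-key finger`; this one should be nginx's signing
key):

    - name: Add nginx apt key
      apt_key: id=573BFD6B3D8FBC641079A6ABABF5BD827BD9BF62 url=http://nginx.org/keys/nginx_signing.key state=present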
I have added cache_valid_time=3600 to the first task in each tag
that could possibly be running apt-related commands. For example,
the "nginx" tag is also in the "packages" tag, but sometimes you
run the nginx tag by itself (perhaps repeatedly), so you'd want to
skip the update unless the cache is more than 1 hour old.
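Sketch of such a first task (package name and tag layout assumed):

    - name: Install nginx
      apt: pkg=nginx update_cache=yes cache_valid_time=3600
      tags: [ nginx, packages ]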
I never modify files in the git repo, but the WordPress updater
does update files from the web (for example the TwentySixteen
theme), and this always causes the task to fail when I switch
WordPress versions.
This reverts commit a38d822fad.
The docs definitely recommend twice a day. From a note on certbot's
installation page:
> if you're setting up a cron or systemd job, we recommend running
> it twice per day (it won't do anything until your certificates
> are due for renewal or revoked, but running it regularly would
> give your site a chance of staying online in case a Let's
> Encrypt-initiated revocation happened for some reason). Please
> select a random minute within the hour for your renewal tasks.
See: https://certbot.eff.org/#ubuntuxenial-nginx
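A matching cron sketch (minutes picked at random per the note;
client path assumed):

    17 4,16 * * * /opt/certbot-auto renew --quiet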
For idempotence we need to run all apt-related tasks (editing
source files, adding keys, installing packages, etc.) when running
the 'packages' tag.
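In other words, every apt-related task also carries the 'packages'
tag, e.g. (file names assumed):

    - name: Add nginx apt repo
      template: src=nginx.list.j2 dest=/etc/apt/sources.list.d/nginx.list
      tags: [ nginx, packages ]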
Take an opinionated stance on HTTPS and assume that hosts are using
HTTPS for all vhosts. This can either be via custom TLS cert/key
pairs defined in the host's variables (could even be self-signed
certificates on dev boxes) or via Let's Encrypt.
Hosts can specify use_letsencrypt: 'yes' in their host_vars. For
now this assumes that the certificates already exist (i.e., you
have to run Let's Encrypt manually first to register/create the
certs).
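Sketch of the two options in host_vars (the custom cert/key
variable names are assumptions):

    use_letsencrypt: 'yes'
    # or a custom pair:
    #tls_cert_path: /etc/ssl/certs/example.com.pem
    #tls_key_path: /etc/ssl/private/example.com.key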
The creation of the fastcgi cache dir is part of the nginx role and
should be labeled as such. In situations where you only run the
nginx tasks with `-t nginx`, nginx will fail to start due to the
missing cache dir.
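Sketch of the task in the nginx role (path and ownership assumed):

    - name: Create fastcgi cache dir
      file: path=/var/cache/nginx/fastcgi state=directory owner=www-data group=www-data
      tags: nginx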
Google's preload check application pointed out that there was an
extra semicolon in the HTTP header:
$ hstspreload checkdomain alaninkenya.org
Warning:
1. Syntax warning: Header includes an empty directive or extra semicolon.
The tool can be downloaded here: https://github.com/chromium/hstspreload
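The fix, roughly (the max-age value is an example):

    before: add_header Strict-Transport-Security "max-age=31536000; includeSubDomains; preload;";
    after:  add_header Strict-Transport-Security "max-age=31536000; includeSubDomains; preload";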
Signed-off-by: Alan Orth <alan.orth@gmail.com>
Google's PageSpeed Insights tool pointed out that the Genericons
in WordPress' Jetpack module could be compressed.
Signed-off-by: Alan Orth <alan.orth@gmail.com>
Everything is HTTPS now, whether self-signed or otherwise, so it
doesn't make sense to have a config switch for this.
Signed-off-by: Alan Orth <alan.orth@gmail.com>
It's just deduplication, since it's already obvious that the dict
is for nginx-related vars:
- nginx_domain_name→domain_name
- nginx_domain_aliases→domain_aliases
- nginx_enable_https→enable_https
- nginx_enable_hsts→enable_hsts
Signed-off-by: Alan Orth <alan.orth@gmail.com>
It would be better to set these defaults in the role's defaults,
but we can't because they exist in dicts for each of the host's
sites.
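So the templates have to fall back with Jinja's default filter
instead, something like (variable name assumed):

    {% if item.nginx_enable_hsts | default('no') == 'yes' %}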
Signed-off-by: Alan Orth <alan.orth@gmail.com>
The `enable_https` option in host_vars becomes `nginx_enable_https`
to be more consistent with other nginx options used in host_vars.
Signed-off-by: Alan Orth <alan.orth@gmail.com>
Set `use_snakeoil_cert: 'yes'` in host_vars. This is good for dev
hosts where we don't have real domains or real certs. But everything
should have TLS.
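On Debian/Ubuntu the ssl-cert package provides the snakeoil pair at
the standard paths, so presumably the vhost points at:

    ssl_certificate     /etc/ssl/certs/ssl-cert-snakeoil.pem;
    ssl_certificate_key /etc/ssl/private/ssl-cert-snakeoil.key;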
Signed-off-by: Alan Orth <alan.orth@gmail.com>
For now I generated the certs manually, but in the future the
playbook should run the letsencrypt-auto client for us!
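Roughly, per domain (the webroot path and domain are examples):

$ ./letsencrypt-auto certonly --webroot -w /var/www/example.com -d example.com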
Signed-off-by: Alan Orth <alan.orth@gmail.com>
We need to actually check if HSTS was requested before setting the
header in the block handling PHP requests. We check in the main
vhost block, but nginx headers are only inherited if you don't set
ANY headers in child blocks (i.e., headers set in parent blocks are
cleared if you set any new ones in the child).
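Sketch of the pattern in the PHP block (header values are
examples):

    location ~ \.php$ {
        # this add_header clears everything inherited from the server block...
        add_header X-Fastcgi-Cache $upstream_cache_status;
        # ...so the HSTS header must be repeated, behind the same check:
        add_header Strict-Transport-Security "max-age=31536000; includeSubDomains; preload";
    }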
Signed-off-by: Alan Orth <alan.orth@gmail.com>
This is really a per-site setting, so it doesn't make sense to have
a role default. Anyways, HSTS is kinda tricky and potentially
dangerous, so unless a vhost explicitly sets it to "yes" we
shouldn't enable it.
Note: also switch from using a boolean to using a string; it is
still declarative, but at least now I don't have to guess whether
it is being treated as a bool or not.
Signed-off-by: Alan Orth <alan.orth@gmail.com>
For security and predictability clients should only get a response
if they request a hostname we are actually hosting.
If TLS is in use then this will use a self-signed snakeoil cert for
an HTTPS-enabled blank, default vhost.
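One possible shape for the catch-all (the listen options and the
444 close are my choices, not necessarily the repo's):

    server {
        listen 443 ssl default_server;
        server_name _;
        ssl_certificate     /etc/ssl/certs/ssl-cert-snakeoil.pem;
        ssl_certificate_key /etc/ssl/private/ssl-cert-snakeoil.key;
        return 444;    # close the connection without a response
    }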
Signed-off-by: Alan Orth <alan.orth@gmail.com>
I realized there was no need to do a full clone when I was working
in a Vagrant environment in a coffee shop with slow Internet. ;)
Signed-off-by: Alan Orth <alan.orth@gmail.com>
A template is better than ansible's `apt_repository` module because
we can idempotently control the contents of the file based on
variables.
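For instance, a one-line nginx.list.j2 (the template name is
assumed; the repo URL is nginx's real one):

    deb http://nginx.org/packages/ubuntu/ {{ ansible_distribution_release }} nginx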
Signed-off-by: Alan Orth <alan.orth@gmail.com>
I was only setting it on the PHP block, which handles all dynamic
requests (i.e., pages from WordPress), but it should also be the
same for all static files not served from that block.
Signed-off-by: Alan Orth <alan.orth@gmail.com>
Include subdomains in the HTTP Strict Transport Security header,
and include the "preload" directive to inform Google we want to be
included in the HSTS preload list.
See: https://hstspreload.appspot.com/
Signed-off-by: Alan Orth <alan.orth@gmail.com>
I was attempting to make the config easier to use in test environments
where the key is self-signed, but meh, I rarely do that and I think
this logic doesn't actually work.
Signed-off-by: Alan Orth <alan.orth@gmail.com>
nginx inherits headers from higher-level blocks UNLESS we also set
headers in the current block. In this case the FastCGI cache header
was being set, so we weren't getting the extra-security ones.
Signed-off-by: Alan Orth <alan.orth@gmail.com>
nginx blocks inherit headers set in blocks above them UNLESS the
current level also sets headers[0]. This was causing PHP requests
to not have STS headers because of the FastCGI cache header which
is set in that block.
[0] http://nginx.org/en/docs/http/ngx_http_headers_module.html
Fixes GitHub #7.
Signed-off-by: Alan Orth <alan.orth@gmail.com>
nginx is caching HEAD requests, so when users come along and do a
GET request they get an HTTP 200 with no response body. It seems
setting fastcgi_cache_methods to GET doesn't stop nginx from
caching HEADs, so for now just add $request_method to the cache
key.
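The resulting key, roughly (the other variables are the usual ones,
assumed here):

    fastcgi_cache_key "$scheme$request_method$host$request_uri";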
Signed-off-by: Alan Orth <alan.orth@gmail.com>
Bypasses caching for logged-in users (right now only for sessions
where the "wordpress_logged_in" cookie is set). Doubles the
transactions per second as measured by siege:
$ siege -d1 -t1M -c50 https://mjanja.ch
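Sketch of the standard nginx pattern for this (the $skip_cache
variable name is assumed):

    set $skip_cache 0;
    if ($http_cookie ~* "wordpress_logged_in") {
        set $skip_cache 1;
    }
    fastcgi_cache_bypass $skip_cache;
    fastcgi_no_cache $skip_cache;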
Signed-off-by: Alan Orth <alan.orth@gmail.com>
Reduces round trip time for clients. Note: I am using a certificate
chain in the `ssl_certificate` directive, so as I understand it, I
don't need to use an explicit trusted intermediate + root CA cert
with the `ssl_trusted_certificate` option. See the nginx docs for
more[0]. Addresses GitHub Issue #5.
Seems to be working, test with:
$ openssl s_client -connect mjanja.ch:443 -servername mjanja.ch -tls1 -tlsextdebug -status
Look for "OCSP Response" with "Cert Status: good".
[0] http://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_stapling
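The relevant directives, roughly (the resolver address is my
example):

    ssl_stapling on;
    ssl_stapling_verify on;
    resolver 8.8.8.8 valid=300s;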
Signed-off-by: Alan Orth <alan.orth@gmail.com>
Default is 5 minutes, but it seems like unless you're a
high-traffic site, there's no need to expire sessions so quickly.
Also, the istlsfastyet.com configs are using 24 hours, so surely we
can too.
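i.e. (the cache size is an example value):

    ssl_session_cache shared:SSL:10m;
    ssl_session_timeout 24h;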
Signed-off-by: Alan Orth <alan.orth@gmail.com>
Use "public" with "max-age" instead of Expires, as "max-age" is always
preferred if it's present. Note: setting "public" doesn't make the
resource "more cacheable", but it is just more explicit.
Signed-off-by: Alan Orth <alan.orth@gmail.com>
My WordPress blogs have a /wordpress subdirectory in the document
root, but I don't serve from the /wordpress URI.
Technically, all we need are the tweaks to try_files (see the
sketch below):
- `?$args` passes query strings on to php5-fpm
- removing 404 from the vhost's try_files so we don't return 404
  when the requested file doesn't exist (obviously not all request
  URIs in WordPress are actual files on disk)
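The resulting try_files, roughly (the surrounding location block is
assumed):

    location / {
        try_files $uri $uri/ /index.php?$args;
    }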
Signed-off-by: Alan Orth <alan.orth@gmail.com>
Assumes you have a TLS cert for one domain, but not the others,
i.e.:
http://blah.com \
http://blah.net  -> https://blah.io
http://blah.org /
Otherwise, without https, it creates a vhost with all domain names.
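Sketch of the kind of redirect vhost this implies for that example
(301 vs 302 is my assumption):

    server {
        listen 80;
        server_name blah.com blah.net blah.org;
        return 301 https://blah.io$request_uri;
    }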
Signed-off-by: Alan Orth <alan.orth@gmail.com>
Without this, all requests to directory URIs throw 403 errors due
to directory listings not being allowed.
Signed-off-by: Alan Orth <alan.orth@gmail.com>