We should only try to start the nftables service after we have finished
copying all of the config files, in case there is unclean state in one
of them. On a first run this shouldn't matter, but it can happen after
nftables and some of the abuse-list update scripts have run (mostly in
testing!).
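A rough sketch of the intended ordering, with hypothetical task and
variable names (not necessarily the role's actual tasks):

- name: Copy nftables configs
  ansible.builtin.copy:
    src: "{{ item }}"
    dest: /etc/nftables/
    mode: "0644"
  loop: "{{ nftables_config_files }}"

- name: Start and enable nftables
  ansible.builtin.systemd:
    name: nftables
    state: started
    enabled: true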
cron-apt updates the system against the security-only package lists at
night, so many packages appear to be "missing" until you run apt update.
We actually need to update the cache in every apt task, because I might
run tasks by their tag, and the cache currently only gets updated at
the beginning of the playbook.
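A minimal sketch of what that looks like on a task (the package names
are just an illustration); cache_valid_time keeps repeated runs from
hitting the mirrors every time:

- name: Install common packages
  ansible.builtin.apt:
    name: [vim, tmux]
    state: present
    update_cache: true
    cache_valid_time: 3600
  tags: packages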
This opens TCP port 22 on all hosts, TCP ports 80 and 443 on hosts
in the web group, and allows configuration of "extra" rules in the
host or group vars.
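For illustration, the "extra" rules might be declared something like
this (the variable name nftables_extra_rules is an assumption, not
necessarily what the role uses):

# group_vars/web.yml
nftables_extra_rules:
  - 'tcp dport 8080 accept comment "staging app"'
  - 'udp dport 60000-61000 accept comment "mosh"'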
As of Debian 11 I will try using nftables directly instead of via
firewalld, since nftables has replaced the iptables/ipset stack in
recent years and is easier to work with.
This also includes a systemd service, timer, and script to update the
Spamhaus DROP lists as nftables sets.
Still need to add fail2ban support.
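A sketch of how the sets might look in the resulting nftables config
once the script has populated them (the table, chain, and set names
are assumptions):

table inet filter {
    set spamhaus_drop {
        type ipv4_addr
        flags interval
    }

    chain input {
        type filter hook input priority 0; policy accept;
        # drop anything whose source address is in the Spamhaus set
        ip saddr @spamhaus_drop drop
    }
}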
Recommended by ssh-audit, but it has also been the general consensus
for a few years that Encrypt-and-MAC is hard to get right. OpenSSH has
several Encrypt-then-MAC schemes available, so we can use those.
See: https://www.daemonology.net/blog/2009-06-24-encrypt-then-mac.html
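For reference, a sshd_config line restricted to the Encrypt-then-MAC
variants might look like this (these algorithm names ship with current
OpenSSH, but check the output of ssh -Q mac on the target):

MACs hmac-sha2-512-etm@openssh.com,hmac-sha2-256-etm@openssh.com,umac-128-etm@openssh.com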
This was to enable the persistent systemd journal, but it is no longer
needed as of Ubuntu 18.04 and Debian 11. I had removed the tasks long
ago, but forgot to remove this file.
This configures the recommended DROP, EDROP, and DROPv6 lists from
Spamhaus as ipsets in firewalld. First we copy an empty placeholder
ipset to seed firewalld, then we use a shell script to download the
real lists and activate them. The same shell script is run daily as
a service (update-spamhaus-lists.service) by a systemd timer.
I am strictly avoiding any direct ipset commands here because I want
to make sure that this works on older hosts where ipset is used as
well as on newer hosts that have moved to nftables, such as Ubuntu
20.04. So far I have tested this on Ubuntu 16.04, 18.04, and 20.04,
but eventually I need to abstract the tasks and run them on CentOS 7+
as well.
See: https://www.spamhaus.org/drop/
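A sketch of the download-and-activate step for the IPv4 DROP list (the
ipset name and temp path are assumptions):

$ curl -s https://www.spamhaus.org/drop/drop.txt \
    | sed -e '/^;/d' -e 's/ ;.*//' > /tmp/spamhaus-drop.txt
$ firewall-cmd --permanent --ipset=spamhaus-drop \
    --add-entries-from-file=/tmp/spamhaus-drop.txt
$ firewall-cmd --reload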
This comes from the AbuseIPDB blacklist with a minimum confidence
score of 95. I use the following command to download and sort the IPs:
$ curl -G https://api.abuseipdb.com/api/v2/blacklist -d \
confidenceMinimum=95 -H "Key: $ABUSEIPDB_API_KEY" \
-H "Accept: text/plain" | sort | sed -e '/:/w /tmp/ipv6.txt' \
-e '/:/d' > /tmp/ipv4.txt
I manually add the XML formatting to each file and run them through
tidy:
$ tidy -xml -utf8 -m -iq -w 0 roles/common/files/abusers-ipv4.xml
$ tidy -xml -utf8 -m -iq -w 0 roles/common/files/abusers-ipv6.xml
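For reference, the finished files follow firewalld's ipset XML format,
roughly like this (for the IPv6 file the family option would be inet6):

<?xml version="1.0" encoding="utf-8"?>
<ipset type="hash:ip">
  <entry>192.0.2.1</entry>
  <entry>198.51.100.7</entry>
</ipset>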
Instead of manually creating our own self-signed certificate, we can
use the one created automatically by the ssl-cert package on Debian.
This is only used by the dummy default HTTPS vhost.
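For reference, ssl-cert generates its "snakeoil" key pair at fixed
paths, so the dummy vhost can simply point at them; a sketch, assuming
an nginx-style vhost:

server {
    listen 443 ssl default_server;
    server_name _;
    ssl_certificate     /etc/ssl/certs/ssl-cert-snakeoil.pem;
    ssl_certificate_key /etc/ssl/private/ssl-cert-snakeoil.key;
    return 444;
}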
This parameterizes the HTTP Strict Transport Security header so we
can use it consistently across all templates. It also updates the
max-age to ~1 year in seconds (31536000), which is what Google
recommends.
See: https://hstspreload.org/
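Assuming an nginx template, the rendered header might look like this
(whether we set includeSubDomains is a per-template decision, not
something this sketch dictates):

add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;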
The certbot-auto client that I've been using for a long time is now
only supported if you install it using snap. I don't use snap on my
systems, so I decided to switch to the acme.sh client, which is
implemented in POSIX shell with no dependencies. One bonus of this is
that I can start using ECC certificates.
This also configures the .well-known directory so we can use webroot
when installing and renewing certificates. I have yet to understand
how renewal works with regard to webroot, though. I may have to update
the systemd timers to point to /var/lib/letsencrypt/.well-known.
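For illustration, issuing an ECC certificate in webroot mode looks
roughly like this (the domain is a placeholder):

$ acme.sh --issue -d example.com -w /var/lib/letsencrypt --keylength ec-256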
Add skip-name-resolve=1 to disable reverse DNS lookups of client IPs
to hostnames. We need to make sure all accounts use IPs like 127.0.0.1
instead of "localhost" now.
It seems that the usefulness of the query cache has been diminishing
in recent years. If the cache is large, the time taken to scan it can
be longer than running the SQL query itself.
See: https://haydenjames.io/mysql-query-cache-size-performance/
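A sketch of disabling it entirely in the server config (these are the
standard MariaDB/MySQL option names):

[mysqld]
query_cache_type = 0
query_cache_size = 0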
I downloaded the key and checked the fingerprint with gpg:
$ gpg --dry-run --import mariadb_release_signing_key.asc
gpg: key F1656F24C74CD1D8: 6 signatures not checked due to missing keys
gpg: Total number processed: 1
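Newer gpg releases can also print the full fingerprint directly, if
that is easier:

$ gpg --show-keys --with-fingerprint mariadb_release_signing_key.asc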