This comes from the AbuseIPDB with a confidence level of 95%. I use
the following command to download and sort the IPs:
$ curl -G https://api.abuseipdb.com/api/v2/blacklist -d \
confidenceMinimum=95 -H "Key: $ABUSEIPDB_API_KEY" \
-H "Accept: text/plain" | sort | sed -e '/:/w /tmp/ipv6.txt' \
-e '/:/d' > /tmp/ipv4.txt
I manually add the XML formatting to each file and run them through
tidy:
$ tidy -xml -utf8 -m -iq -w 0 roles/common/files/abusers-ipv4.xml
$ tidy -xml -utf8 -m -iq -w 0 roles/common/files/abusers-ipv6.xml
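For reference, the XML wrapping is roughly a firewalld ipset definition
(that format is an assumption here, and the entries below are
placeholders; the real files hold the downloaded addresses):

    <?xml version="1.0" encoding="utf-8"?>
    <ipset type="hash:net">
      <!-- one entry per downloaded address or network -->
      <entry>192.0.2.1</entry>
      <entry>198.51.100.0/24</entry>
    </ipset>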
Note: there were no IPv6 addresses in the top 10,000 this time so I
used a dummy address for the nftables set so the syntax was valid.
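The resulting IPv6 set then looks roughly like the following sketch
(the table and set names, and the ::1 placeholder, are assumptions):

    table inet filter {
        set abusers_ipv6 {
            type ipv6_addr
            flags interval
            elements = { ::1 }
        }
    }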
After the initial acme.sh script is downloaded and bootstrapped we can
remove it. If a host has already been bootstrapped then there is no
need to download it and do it all over again.
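In Ansible terms the guard is roughly the following sketch (the paths,
task names and register variable are assumptions, and the actual
bootstrap/run step is omitted; get.acme.sh is the upstream installer
URL):

    - name: Check if acme.sh is already bootstrapped
      ansible.builtin.stat:
        path: /root/.acme.sh/acme.sh
      register: acme_sh_stat

    - name: Download the acme.sh bootstrap script
      ansible.builtin.get_url:
        url: https://get.acme.sh
        dest: /root/acme.sh-bootstrap
        mode: "0755"
      when: not acme_sh_stat.stat.exists

    - name: Remove the bootstrap script after installation
      ansible.builtin.file:
        path: /root/acme.sh-bootstrap
        state: absent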
It is a standalone package on (at least) Ubuntu 20.04 and Debian 11,
and some cloud images do not have it installed by default (for
example Scaleway).
We have to force these because they are not updated on the host like
the other lists (API limit of five requests per day!). We update
these lists periodically here in git.
First, we cannot do a global check for has_wordpress or needs_php,
as those are defined per nginx vhost. Second, I realized that this
was only working in the past because vhosts that had WordPress or
needed PHP were listed first in the nginx_vhosts dict.
This changes the logic to first check whether any vhosts have
WordPress or need PHP, then set a fact that we can use to decide
whether or not to run the php-fpm tasks.
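A rough sketch of that fact-setting step, assuming nginx_vhosts is a
list of mappings with optional has_wordpress / needs_php keys (the
task and fact names and the php-fpm.yml include are assumptions; if
nginx_vhosts really is a dict, pipe it through dict2items first):

    - name: Check whether any vhost has WordPress or needs PHP
      ansible.builtin.set_fact:
        host_uses_php: "{{ (nginx_vhosts | selectattr('has_wordpress', 'defined') | selectattr('has_wordpress') | list | length > 0) or (nginx_vhosts | selectattr('needs_php', 'defined') | selectattr('needs_php') | list | length > 0) }}"

    - name: Configure php-fpm
      ansible.builtin.include_tasks: php-fpm.yml
      when: host_uses_php | bool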
According to jail.conf we actually need to separate multiple values
with spaces instead of commas. On some versions of fail2ban this is
a fatal error:
> CRITICAL Unhandled exception in Fail2Ban:
> Traceback (most recent call last):
>   File "/usr/lib/python3/dist-packages/fail2ban/server/jailthread.py", line 66, in run_with_except_hook
>     run(*args, **kwargs)
>   File "/usr/lib/python3/dist-packages/fail2ban/server/filtersystemd.py", line 246, in run
>     *self.formatJournalEntry(logentry))
>   File "/usr/lib/python3/dist-packages/fail2ban/server/filter.py", line 432, in processLineAndAdd
>     if self.inIgnoreIPList(ip, log_ignore=True):
>   File "/usr/lib/python3/dist-packages/fail2ban/server/filter.py", line 371, in inIgnoreIPList
>     "(?<=b)1+", bin(DNSUtils.addr2bin(s[1]))).group())
>   File "/usr/lib/python3/dist-packages/fail2ban/server/filter.py", line 928, in addr2bin
>     return struct.unpack("!L", socket.inet_aton(ipstring))[0]
> OSError: illegal IP address string passed to inet_aton
This affects (at least) fail2ban 0.9.3 on Ubuntu 16.04, but I never
noticed.
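The whitelist in jail.local, for example, has to be written like this
(the addresses are placeholders):

    [DEFAULT]
    # multiple values are separated by spaces, not commas
    ignoreip = 127.0.0.1/8 ::1 192.0.2.0/24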
Some hosts can use fail2ban's nginx-botsearch filter to ban anyone
making requests for non-existent files like wp-login.php. There is
no legitimate reason to request such files, so anyone found doing so
can be banned immediately.
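Enabling it is a small jail override, roughly like the following
sketch (the log path is an assumption, and maxretry = 1 reflects the
ban-immediately intent):

    [nginx-botsearch]
    enabled = true
    port = http,https
    logpath = /var/log/nginx/access.log
    maxretry = 1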
In theory I should report them to AbuseIPDB.com, but that will take
a little more wiring up.
For now I am still manually updating this, as we can only hit their
API five times per day, so it is not possible to have each host fetch
the list itself every day with our one API key.
This adds abuse.ch's list of IPs using blacklisted SSL certificates
to nftables. These IPs are high confidence indicators of compromise
and we should not route them. The list is updated daily by a systemd
timer.
See: https://sslbl.abuse.ch/blacklist/
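The timer itself is a plain daily OnCalendar unit, roughly like the
following sketch (the unit name is an assumption, and it assumes a
matching .service that runs the update script):

    # /etc/systemd/system/sslbl-update.timer
    [Unit]
    Description=Update the abuse.ch SSL blacklist IP set daily

    [Timer]
    OnCalendar=daily
    RandomizedDelaySec=1h
    Persistent=true

    [Install]
    WantedBy=timers.target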
We should only try to start the nftables service after we finish
copying all the config files, just in case one of them is in an
unclean state. On a first run this shouldn't matter, but after
nftables and some of the abuse list update scripts have run this
can happen (mostly in testing!).
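In the role this just means the service task comes after all of the
copy tasks, roughly (file and task names are assumptions):

    - name: Copy nftables configuration
      ansible.builtin.copy:
        src: nftables.conf
        dest: /etc/nftables.conf
        mode: "0644"

    - name: Start and enable nftables
      ansible.builtin.systemd:
        name: nftables
        state: started
        enabled: true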
cron-apt updates the system against the security-only sources at
night, so many packages appear "missing" unless you run apt update.
We actually need to update the cache on all apt tasks because I might
run them by tag, and the cache currently only gets updated at the
beginning of the playbook.
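So each apt task gets its own cache update, roughly like this sketch
(the package and cache timeout are arbitrary examples):

    - name: Install fail2ban
      ansible.builtin.apt:
        name: fail2ban
        state: present
        update_cache: true
        cache_valid_time: 3600
      tags: fail2ban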
This opens TCP port 22 on all hosts, TCP ports 80 and 443 on hosts
in the web group, and allows configuration of "extra" rules in the
host or group vars.
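The "extra" rules would live in host or group vars roughly like this
sketch (the variable name and the rule are assumptions):

    # host_vars/example-host.yml
    nftables_extra_rules:
      - "tcp dport 8080 accept"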