CGSpace Notes

Documenting day-to-day work on the CGSpace repository.

December, 2017

2017-12-01

  • Uptime Robot noticed that CGSpace went down
  • The logs say “Timeout waiting for idle object”, which means the database connection pool is exhausted
  • PostgreSQL activity says there are 115 connections currently (one way to check this is sketched after the log listing below)
  • The top client IPs hitting the XMLUI and REST API today:

# cat /var/log/nginx/rest.log  /var/log/nginx/rest.log.1  /var/log/nginx/access.log /var/log/nginx/access.log.1 /var/log/nginx/library-access.log /var/log/nginx/library-access.log.1 | grep -E "1/Dec/2017" | awk '{print $1}' | sort -n | uniq -c | sort -h | tail
    763 2.86.122.76
    907 207.46.13.94
   1018 157.55.39.206
   1021 157.55.39.235
   1407 66.249.66.70
   1411 104.196.152.243
   1503 50.116.102.77
   1805 66.249.66.90
   4007 70.32.83.92
   6061 45.5.184.196
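  • For reference, one way to check the current PostgreSQL connection count directly (a sketch; the -U postgres superuser and local authentication are assumptions, not necessarily how CGSpace is configured):
$ psql -U postgres -c 'SELECT count(*) FROM pg_stat_activity;'   # -U postgres and local auth are assumptions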
  • The number of DSpace sessions isn’t even that high:
$ cat /home/cgspace.cgiar.org/log/dspace.log.2017-12-01 | grep -o -E 'session_id=[A-Z0-9]{32}' | sort -n | uniq | wc -l
5815
  • The top client IPs in the last two hours:
# cat /var/log/nginx/rest.log  /var/log/nginx/rest.log.1  /var/log/nginx/access.log /var/log/nginx/access.log.1 /var/log/nginx/library-access.log /var/log/nginx/library-access.log.1 | grep -E "1/Dec/2017:(09|10)" | awk '{print $1}' | sort -n | uniq -c | sort -h | tail                                                      
     78 93.160.60.22
    101 40.77.167.122
    113 66.249.66.70
    129 157.55.39.206
    130 157.55.39.235
    135 40.77.167.58
    164 68.180.229.254
    177 87.100.118.220
    188 66.249.66.90
    314 2.86.122.76
  • What the fuck is going on?
  • I’ve never seen 2.86.122.76 before; it has made quite a few unique Tomcat sessions today:
$ grep 2.86.122.76 /home/cgspace.cgiar.org/log/dspace.log.2017-12-01 | grep -o -E 'session_id=[A-Z0-9]{32}' | sort -n | uniq | wc -l
822
  • Appears to be some new bot:
2.86.122.76 - - [01/Dec/2017:09:02:53 +0000] "GET /handle/10568/78444?show=full HTTP/1.1" 200 29307 "-" "Mozilla/3.0 (compatible; Indy Library)"
  • I restarted Tomcat and everything came back up
  • I can add Indy Library to the Tomcat crawler session manager valve, but it would be nice if I could simply remap the user agent in nginx
  • I will also add ‘Drupal’ to the Tomcat crawler session manager valve because there are Drupal sites out there harvesting us and they should be treated as bots (a quick tally of the top user agents is sketched after the Drupal listing below)
# cat /var/log/nginx/rest.log  /var/log/nginx/rest.log.1  /var/log/nginx/access.log /var/log/nginx/access.log.1 /var/log/nginx/library-access.log /var/log/nginx/library-access.log.1 | grep -E "1/Dec/2017" | grep Drupal | awk '{print $1}' | sort -n | uniq -c | sort -h | tail
      3 54.75.205.145
      6 70.32.83.92
     14 2a01:7e00::f03c:91ff:fe18:7396
     46 2001:4b99:1:1:216:3eff:fe2c:dc6c
    319 2001:4b99:1:1:216:3eff:fe76:205b
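  • To see which user agents are making the most requests today (useful for deciding what else to add to the crawler session manager valve), a sketch against the same nginx logs, assuming the default combined log format:
# cat /var/log/nginx/access.log /var/log/nginx/rest.log | grep -E "1/Dec/2017" | awk -F'"' '{print $6}' | sort | uniq -c | sort -h | tail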

2017-12-03

  • Linode alerted that CGSpace’s load was 327.5% from 6 to 8 AM again

2017-12-04

  • Linode alerted that CGSpace’s load was 255.5% from 8 to 10 AM again
  • I looked at the Munin stats on DSpace Test (linode02) again to see how the PostgreSQL tweaks from a few weeks ago were holding up:

DSpace Test PostgreSQL connections month

  • The results look fantastic! The random_page_cost tweak is massively important for telling the PostgreSQL query planner that accessing random pages costs essentially no more than sequential reads, since we’re on an SSD!
  • I guess we could probably even reduce the number of database connections configured in DSpace’s pool and in PostgreSQL after this change
  • Run system updates on DSpace Test (linode02) and reboot it
  • I’m going to enable the PostgreSQL random_page_cost tweak on CGSpace (a sketch of the change follows the graph below)
  • For reference, here are the past month’s connections:

CGSpace PostgreSQL connections month
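  • For reference, a minimal sketch of one way to apply the tweak, assuming PostgreSQL 9.4+ so ALTER SYSTEM is available (the value 1.0 and the -U postgres connection are assumptions; the point is just to tell the planner that random reads on an SSD cost about the same as sequential ones):
$ psql -U postgres -c 'ALTER SYSTEM SET random_page_cost = 1.0;'   # 1.0 is an assumed SSD value
$ psql -U postgres -c 'SELECT pg_reload_conf();'                   # reloads the config without a restart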

2017-12-05