CGSpace Notes

Documenting day-to-day work on the CGSpace repository.

March, 2019

2019-03-01

  • I checked IITA's 259 Feb 14 records from last month for duplicates using Atmire's Duplicate Checker on a fresh snapshot of CGSpace on my local machine, and everything looks good
  • I am now only waiting to hear from her about where the items should go, though I assume the Journal Articles will go to the IITA Journal Articles collection, etc…
  • Looking at the other half of Udana's WLE records from 2018-11
    • I finished the ones for Restoring Degraded Landscapes (RDL), but these are for Variability, Risks and Competing Uses (VRC)
    • I did the usual cleanups: fixed whitespace, added regions where they made sense for certain countries, cleaned up the DOI link formats, and added rights information from the publications page for a few items
    • Most worryingly, there are encoding errors in the abstracts of eleven items (a quick check is sketched at the end of this list), for example:
      • 68.15% � 9.45 instead of 68.15% ± 9.45
      • 2003�2013 instead of 2003–2013
  • I think I will need to ask Udana to re-copy and paste the abstracts with more care, using Google Docs
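  • A quick way to count how many fields are affected, assuming the batch is exported to CSV and csvkit is installed (the file name and column name here are made up), is to grep for the Unicode replacement character:
$ csvcut -c 'dc.description.abstract[en_US]' /tmp/wle-vrc.csv | grep -c '�'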
Read more →

February, 2019

2019-02-01

  • Linode has alerted a few times since last night that the CPU usage on CGSpace (linode18) was high despite me increasing the alert threshold last week from 250% to 275%—I might need to increase it again!
  • The top IPs before, during, and after this latest alert tonight were:
# zcat --force /var/log/nginx/*.log /var/log/nginx/*.log.1 | grep -E "01/Feb/2019:(17|18|19|20|21)" | awk '{print $1}' | sort | uniq -c | sort -n | tail -n 10
    245 207.46.13.5
    332 54.70.40.11
    385 5.143.231.38
    405 207.46.13.173
    405 207.46.13.75
   1117 66.249.66.219
   1121 35.237.175.180
   1546 5.9.6.51
   2474 45.5.186.2
   5490 85.25.237.71
  • 85.25.237.71 is the “Linguee Bot” that I first saw last month (see the user-agent check below)
  • The Solr statistics for the past few months have been very high, and I was wondering if the web server logs also showed an increase
  • There were just over 3 million accesses in the nginx logs last month:
# time zcat --force /var/log/nginx/* | grep -cE "[0-9]{1,2}/Jan/2019"
3018243

real    0m19.873s
user    0m22.203s
sys     0m1.979s
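  • To double-check that identification, the user agents for this IP can be pulled from the nginx logs (this assumes the stock combined log format, where the user agent is the sixth quote-delimited field):
# zcat --force /var/log/nginx/*.log /var/log/nginx/*.log.1 | grep 85.25.237.71 | awk -F'"' '{print $6}' | sort | uniq -c | sort -rn | head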
Read more →

January, 2020

2020-01-06

  • Open a ticket with Atmire to request a quote for the upgrade to DSpace 6
  • Last week Altmetric responded about the item that had a lower score than its DOI
    • The score is now linked to the DOI
    • Another item that had the same problem in 2019 has now also linked to the score for its DOI
    • Another item that had the same problem in 2019 has also been fixed

2020-01-07

  • Peter Ballantyne highlighted one more WLE item that is missing the Altmetric score that its DOI has
    • The DOI has a score of 259, but the Handle has no score at all (a quick way to compare the two is sketched below)
    • I tweeted the CGSpace repository link
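  • A minimal way to compare the scores, assuming the public Altmetric badge API and jq (the DOI and Handle below are placeholders):
$ curl -s 'https://api.altmetric.com/v1/doi/10.xxxx/example' | jq .score
$ curl -s 'https://api.altmetric.com/v1/handle/10568/00000' | jq .score
Read more →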

January, 2019

2019-01-02

  • Linode alerted that CGSpace (linode18) had a higher outbound traffic rate than normal early this morning
  • I don't see anything interesting in the web server logs around that time though:
# zcat --force /var/log/nginx/*.log /var/log/nginx/*.log.1 | grep -E "02/Jan/2019:0(1|2|3)" | awk '{print $1}' | sort | uniq -c | sort -n | tail -n 10
     92 40.77.167.4
     99 210.7.29.100
    120 38.126.157.45
    177 35.237.175.180
    177 40.77.167.32
    216 66.249.75.219
    225 18.203.76.93
    261 46.101.86.248
    357 207.46.13.1
    903 54.70.40.11
Read more →

December, 2018

2018-12-01

  • Switch CGSpace (linode18) to use OpenJDK instead of Oracle JDK
  • I manually installed OpenJDK, then removed Oracle JDK, then re-ran the Ansible playbook to update all configuration files, etc. (a rough sketch of the manual steps is below)
  • Then I ran all system updates and restarted the server
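  • The manual steps were something like this (the package names are guesses that assume Ubuntu with the old WebUpd8 Oracle Java PPA):
$ sudo apt install openjdk-8-jdk-headless
$ sudo apt remove oracle-java8-installer oracle-java8-set-default
$ sudo update-alternatives --config java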

2018-12-02

Read more →

November, 2018

2018-11-01

  • Finalize AReS Phase I and Phase II ToRs
  • Send a note about my dspace-statistics-api to the dspace-tech mailing list

2018-11-03

  • Linode has been sending emails a few times a day recently warning that CGSpace (linode18) has had high CPU usage
  • Today these are the top 10 IPs:
Read more →

October, 2018

2018-10-01

  • Phil Thornton got an ORCID identifier so we need to add it to the list on CGSpace and tag his existing items
  • I created GitHub issue #389 to track this, because I'm super busy in Nairobi right now
Read more →

September, 2018

2018-09-02

  • New PostgreSQL JDBC driver version 42.2.5
  • I'll update the DSpace role in our Ansible infrastructure playbooks and run the updated playbooks on CGSpace and DSpace Test
  • Also, I'll re-run the postgresql tasks because the custom PostgreSQL variables are calculated dynamically based on the system's RAM, and we never re-ran them after migrating to larger Linodes last month
  • I'm testing the new DSpace 5.8 branch in my Ubuntu 18.04 environment and I'm getting those autowire errors in Tomcat 8.5.30 again:
Read more →

August, 2018

2018-08-01

  • DSpace Test had crashed at some point yesterday morning and I see the following in dmesg:
[Tue Jul 31 00:00:41 2018] Out of memory: Kill process 1394 (java) score 668 or sacrifice child
[Tue Jul 31 00:00:41 2018] Killed process 1394 (java) total-vm:15601860kB, anon-rss:5355528kB, file-rss:0kB, shmem-rss:0kB
[Tue Jul 31 00:00:41 2018] oom_reaper: reaped process 1394 (java), now anon-rss:0kB, file-rss:0kB, shmem-rss:0kB
  • Judging from the time of the crash it was probably related to the Discovery indexing that starts at midnight
  • From the DSpace log I see that eventually Solr stopped responding, so I guess the java process that was OOM killed above was Tomcat's
  • I'm not sure why Tomcat didn't crash with an OutOfMemoryError…
  • Anyways, perhaps I should increase the JVM heap from 5120m to 6144m like we did a few months ago when we tried to run the whole CGSpace Solr core (a sketch is at the end of this list)
  • The server only has 8GB of RAM so we'll eventually need to upgrade to a larger one because we'll start starving the OS, PostgreSQL, and command line batch processes
  • I ran all system updates on DSpace Test and rebooted it
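  • The heap bump itself is just a change to the -Xms/-Xmx values wherever Tomcat's JVM options are defined on this host, for example (the file path, service name, and other flags are guesses):
# /etc/default/tomcat7
JAVA_OPTS="-Djava.awt.headless=true -Xms6144m -Xmx6144m"
$ sudo systemctl restart tomcat7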
Read more →

July, 2018

2018-07-01

  • I want to upgrade DSpace Test to DSpace 5.8 so I took a backup of its current database just in case:
$ pg_dump -b -v -o --format=custom -U dspace -f dspace-2018-07-01.backup dspace
  • During the mvn package stage on the 5.8 branch I kept getting issues with Java running out of memory (a possible workaround is sketched after the error):
There is insufficient memory for the Java Runtime Environment to continue.
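  • That error usually means the operating system refused the JVM's allocation request rather than the heap being too small, so a possible workaround (the values are guesses) is to check free memory and swap, and to give Maven an explicit, modest heap:
$ free -m
$ export MAVEN_OPTS="-Xmx1024m"
$ mvn package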
Read more →