CGSpace Notes

Documenting day-to-day work on the CGSpace repository.

November, 2021

2021-11-02

  • I experimented with manually sharding the Solr statistics on DSpace Test
  • First I exported all the 2019 stats from CGSpace:
$ ./run.sh -s http://localhost:8081/solr/statistics -f 'time:2019-*' -a export -o statistics-2019.json -k uid
$ zstd statistics-2019.json
$ mkdir -p /home/dspacetest.cgiar.org/solr/statistics-2019/data
# create core in Solr admin
$ curl -s "http://localhost:8081/solr/statistics/update?softCommit=true" -H "Content-Type: text/xml" --data-binary "<delete><query>time:2019-*</query></delete>"
$ ./run.sh -s http://localhost:8081/solr/statistics-2019 -a import -o statistics-2019.json -k uid
  • The key thing above is that you create the core in the Solr admin UI, but the data directory must already exist, so you have to create it on the file system first (a sketch of doing this via the CoreAdmin API instead follows this list)
  • I restarted the server after the import was done to see if the cores would come back up OK
    • I remember last time I tried this the manually created statistics cores didn’t come back up after I rebooted, but this time they did
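  • For the record, the core creation step could probably also be done with Solr’s CoreAdmin API instead of the admin UI (a rough sketch, assuming the new core can re-use the existing statistics core’s instanceDir along with the data directory created above):
$ curl -s 'http://localhost:8081/solr/admin/cores?action=CREATE&name=statistics-2019&instanceDir=statistics&dataDir=/home/dspacetest.cgiar.org/solr/statistics-2019/data'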

2021-11-03

  • While inspecting the stats for the new statistics-2019 shard on DSpace Test I noticed that I can’t find any stats via the DSpace Statistics API for an item that should have some
    • I checked on CGSpace and I can’t find them there either, but I see them in Solr when I query in the admin UI
    • I need to debug that, but it doesn’t seem to be related to the sharding…
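    • A query along these lines shows an item’s raw hits directly in Solr (a sketch with a hypothetical item UUID; type:2 limits the results to item views):
$ curl -s 'http://localhost:8081/solr/statistics-2019/select?q=type:2+AND+id:aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee&rows=0&wt=json'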

2021-11-04

  • I spent a little bit of time debugging the Solr bug with the statistics-2019 shard but couldn’t reproduce it for the few items I tested
    • So that’s good, it seems the sharding worked
  • Linode alerted me to high CPU usage on CGSpace (linode18) yesterday
    • Looking at the Solr hits from yesterday I see 91.213.50.11 making 2,300 requests
    • According to AbuseIPDB.com this is owned by Registrarus LLC (registrarus.ru) and it has been reported for malicious activity by several users
    • The ASN is 50340 (SELECTEL-MSK, RU)
    • They are attempting SQL injection:
91.213.50.11 - - [03/Nov/2021:06:47:20 +0100] "HEAD /bitstream/handle/10568/106239/U19ArtSimonikovaChromosomeInthomNodev.pdf?sequence=1%60%20WHERE%206158%3D6158%20AND%204894%3D4741--%20kIlq&isAllowed=y HTTP/1.1" 200 0 "https://cgspace.cgiar.org:443/bitstream/handle/10568/106239/U19ArtSimonikovaChromosomeInthomNodev.pdf" "Mozilla/5.0 (X11; U; Linux i686; en-CA; rv:1.8.0.10) Gecko/20070223 Fedora/1.5.0.10-1.fc5 Firefox/1.5.0.10"
  • Another IP is in China, and it grabbed about 1,200 PDFs from the REST API in under an hour:
# zgrep 222.129.53.160 /var/log/nginx/rest.log.2.gz | wc -l
1178
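  • A facet on the ip field is one way to tally yesterday’s Solr hits per address (a sketch, using the same time wildcard syntax as the export above):
$ curl -s 'http://localhost:8081/solr/statistics/select?q=time:2021-11-03*&rows=0&facet=true&facet.field=ip&facet.limit=10&wt=json'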
  • I will continue to split the Solr statistics back into year-shards on DSpace Test (linode26)
    • Today I did all 2018 stats…
    • I want to see if there is a noticeable change in JVM memory, Solr response time, etc

2021-11-07

  • Update all Docker containers on AReS and rebuild OpenRXV:
$ docker images | grep -v ^REPO | sed 's/ \+/:/g' | cut -d: -f1,2 | xargs -L1 docker pull
$ docker-compose build
  • Then restart the server and start a fresh harvest
  • Continue splitting the Solr statistics into yearly shards on DSpace Test (doing 2017, 2016, 2015, and 2014 today)
  • Several users wrote to me last week to say that workflow emails haven’t been working since 2021-10-21 or so
    • I did a test on CGSpace and it’s indeed broken:
$ dspace test-email

About to send test email:
 - To: fuuuu
 - Subject: DSpace test email
 - Server: smtp.office365.com

Error sending email:
 - Error: javax.mail.SendFailedException: Send failure (javax.mail.AuthenticationFailedException: 535 5.7.139 Authentication unsuccessful, the user credentials were incorrect. [AM5PR0701CA0005.eurprd07.prod.outlook.com]
)

Please see the DSpace documentation for assistance.
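  • The SMTP account settings live in DSpace’s configuration, so once the credentials are sorted out the same test can be repeated (a sketch, assuming the standard [dspace]/config layout rather than CGSpace’s exact paths):
$ grep -E '^mail\.server' [dspace]/config/local.cfg
$ dspace test-email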
  • I sent a message to ILRI ICT to ask them to check the account/password
  • I want to do one last test of the Elasticsearch updates on OpenRXV, so I took a snapshot of the latest Elasticsearch volume used on the production AReS instance:
# tar cJf openrxv_esData_7.tar.xz /var/lib/docker/volumes/openrxv_esData_7
  • Then on my local server:
$ mv ~/.local/share/containers/storage/volumes/openrxv_esData_7/ ~/.local/share/containers/storage/volumes/openrxv_esData_7.2021-11-07.bak
$ tar xf /tmp/openrxv_esData_7.tar.xz -C ~/.local/share/containers/storage/volumes --strip-components=4
$ find ~/.local/share/containers/storage/volumes/openrxv_esData_7 -type f -exec chmod 660 {} \;
$ find ~/.local/share/containers/storage/volumes/openrxv_esData_7 -type d -exec chmod 770 {} \;
# on the production server, copy backend/data to /tmp, then pull it here to get the repository setup/layout
$ rsync -av --partial --progress --delete provisioning@ares:/tmp/data/ backend/data
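  • A quick sanity check on the restored volume is to list the indices in the local Elasticsearch once the containers are up (a sketch, assuming the default port mapping on localhost):
$ curl -s 'http://localhost:9200/_cat/indices?v'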
  • This seems to work: all items, stats, and repository setup/layout are OK
  • I merged my Elasticsearch pull request from last month into OpenRXV

2021-11-08

  • File an issue for the Angular flash of unstyled content on DSpace 7
  • Help Udana from IWMI with a question about CGSpace statistics
    • He found conflicting numbers when using the community and collection modes in Content and Usage Analysis
    • I sent him more numbers directly from the DSpace Statistics API

2021-11-09

  • I migrated the 2013, 2012, and 2011 statistics to yearly shards on DSpace Test’s Solr to continue my testing of memory / latency impact
  • I found out why the CI jobs for the DSpace Statistics API had been failing the past few weeks
    • When I reverted to using the original falcon-swagger-ui project after they apparently merged my Falcon 3 changes, it seems that they actually only merged the Swagger UI changes, not the Falcon 3 fix!
    • I switched back to using my own fork and now it’s working
    • Unfortunately now I’m getting an error installing my dependencies with Poetry:
RuntimeError

Unable to find installation candidates for regex (2021.11.9)

at /usr/lib/python3.9/site-packages/poetry/installation/chooser.py:72 in choose_for
     68│
     69│             links.append(link)
     70│
     71│         if not links:
  →  72│             raise RuntimeError(
     73│                 "Unable to find installation candidates for {}".format(package)
     74│             )
     75│
     76│         # Get the best link
  • So that’s super annoying… I’m going to try using Pipenv again…
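  • When Poetry can’t find installation candidates it is sometimes just a stale package cache or a lock that needs refreshing, so something like this might be worth trying first (a sketch, not necessarily the fix for this particular case):
$ poetry cache clear pypi --all
$ poetry update regex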

2021-11-10

  • 93.158.91.62 is scraping us again
    • That’s an IP in Sweden that is clearly a bot, but pretending to use a normal user agent
    • I added them to the “bot” list in nginx so their requests will share a common DSpace session with other bots and not create Solr hits, but they are still causing high outbound traffic
    • I modified the nginx configuration to send them an HTTP 403 and tell them to use a bot user agent
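    • Roughly, the idea is a rule like if ($remote_addr = 93.158.91.62) { return 403 "Please use a bot user agent"; } in the relevant nginx server block (a hypothetical sketch, not the exact production config), then validate and reload:
# nginx -t && systemctl reload nginx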

2021-11-14

  • I decided to update AReS to the latest OpenRXV version with Elasticsearch 7.13
    • First I took backups of the Elasticsearch volume and OpenRXV backend data:
$ docker-compose down
$ sudo tar cJf openrxv_esData_7-2021-11-14.tar.xz /var/lib/docker/volumes/openrxv_esData_7
$ cp -a backend/data backend/data.2021-11-14
  • Then I checked out the latest git commit, updated all images, rebuilt the project:
$ docker images | grep -v ^REPO | sed 's/ \+/:/g' | cut -d: -f1,2 | xargs -L1 docker pull
$ docker-compose build
$ docker-compose up -d
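  • Before re-harvesting it’s worth confirming that the rebuilt stack is actually running Elasticsearch 7.13.x (a quick sketch, assuming the default port mapping on localhost):
$ curl -s 'http://localhost:9200' | grep '"number"'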
  • Then I updated the repository configurations and started a fresh harvest
  • Help Francesca from the Alliance with a question about embargos on CGSpace items
    • I logged in as a normal user and a CGIAR user, and I was unable to access the PDF or full text of the item
    • I was only able to access the PDF when I was logged in as an admin

2021-11-21

  • Update all Docker images on AReS (linode20) and re-build OpenRXV
    • Run all system updates and reboot the server
    • Start a full harvest, but I noticed that the number of items being harvested was not complete, so I stopped it
  • Run all system updates on CGSpace (linode18) and DSpace Test (linode26) and reboot them