CGSpace Notes

Documenting day-to-day work on the CGSpace repository.

August, 2020

2020-08-02

  • I spent a few days working on a Java-based curation task to tag items with ISO 3166-1 Alpha-2 country codes based on their cg.coverage.country text values
    • It looks up the names in ISO 3166-1 first, and then in our CGSpace countries mapping (which has five or so of Peter’s preferred “display” country names)
    • It implements a “force” mode too that will clear existing country codes and re-tag everything
    • It is class-based so I can easily add support for other vocabularies, and the technique could even be used for organizations with mappings to ROR and Clarisa…
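The lookup order is simple enough to sketch in shell (a hypothetical illustration of the logic, not the actual Java code; the function name and the inline vocabularies are stand-ins for the real ISO 3166-1 list and the CGSpace mapping):

```shell
# Hypothetical sketch: resolve a country name to an Alpha-2 code by checking
# ISO 3166-1 first, then falling back to the CGSpace "display name" mapping
lookup_country_code() {
    local name="$1"
    # tiny stand-in for the ISO 3166-1 vocabulary (name,code)
    local iso3166="KENYA,KE
CABO VERDE,CV"
    # stand-in for the CGSpace mapping of Peter's preferred display names
    local cgspace="CONGO, DR,CD"
    local code
    # first try an exact match against ISO 3166-1
    code=$(printf '%s\n' "$iso3166" | awk -F',' -v n="$name" '$1 == n {print $NF}')
    # fall back to the CGSpace mapping (names may themselves contain commas)
    if [ -z "$code" ]; then
        code=$(printf '%s\n' "$cgspace" | awk -v n="$name" 'index($0, n",") == 1 {print substr($0, length(n)+2)}')
    fi
    printf '%s\n' "$code"
}

lookup_country_code "KENYA"      # KE (found in ISO 3166-1)
lookup_country_code "CONGO, DR"  # CD (found in the CGSpace mapping)
```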

2020-08-03

  • Atmire responded to the ticket about the ongoing upgrade issues
    • They pushed an RC2 version of the CUA module that fixes the FontAwesome issue by using CSS classes instead of Unicode hex characters, so our JS + SVG setup works now!
    • They also said they have never experienced the type: 5 site statistics issue, so I need to try to purge those and continue with the stats processing
  • I purged all unmigrated stats in a few cores and then restarted processing:
$ curl -s "http://localhost:8081/solr/statistics/update?softCommit=true" -H "Content-Type: text/xml" --data-binary '<delete><query>id:/.*unmigrated.*/</query></delete>'
$ export JAVA_OPTS='-Dfile.encoding=UTF-8 -Xmx2048m'
$ chrt -b 0 dspace dsrun com.atmire.statistics.util.update.atomic.AtomicStatisticsUpdateCLI -t 12 -c statistics
  • Andrea from Macaroni Bros emailed me a few days ago to say he’s having issues with the CGSpace REST API

2020-08-04

$ http 'http://localhost:8080/rest/collections/1445' | json_pp | grep numberItems
   "numberItems" : 63,
$ http 'http://localhost:8080/rest/collections/1445/items' | jq '. | length'
61
  • Also on DSpace Test (which is running DSpace 6!), though the issue is slightly different there:
$ http 'https://dspacetest.cgiar.org/rest/collections/5471c3aa-202e-42f0-96c2-497a18e3b708' | json_pp | grep numberItems
   "numberItems" : 61,
$ http 'https://dspacetest.cgiar.org/rest/collections/5471c3aa-202e-42f0-96c2-497a18e3b708/items' | jq '. | length'
59
  • Ah! I exported that collection’s metadata and checked it in OpenRefine, where I noticed that two items are mapped twice
    • I dealt with this problem in 2017-01 and the solution is to check the collection2item table:
dspace=# SELECT * FROM collection2item WHERE item_id = '107687';
   id   | collection_id | item_id
--------+---------------+---------
 133698 |           966 |  107687
 134685 |          1445 |  107687
 134686 |          1445 |  107687
(3 rows)
  • So for each id you can delete one duplicate mapping:
dspace=# DELETE FROM collection2item WHERE id='134686';
dspace=# DELETE FROM collection2item WHERE id='128819';
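Rather than checking items one by one, a single query could list every duplicated mapping in one go (a sketch against the same collection2item table; I haven't run this exact query):

```sql
-- list all (collection, item) pairs that are mapped more than once
SELECT collection_id, item_id, COUNT(*) AS mappings
FROM collection2item
GROUP BY collection_id, item_id
HAVING COUNT(*) > 1;
```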
  • Update countries on CGSpace to be closer to ISO 3166-1 with some minor differences based on Peter’s preferred display names
$ cat 2020-08-04-PB-new-countries.csv
cg.coverage.country,correct
CAPE VERDE,CABO VERDE
COCOS ISLANDS,COCOS (KEELING) ISLANDS
"CONGO, DR","CONGO, DEMOCRATIC REPUBLIC OF"
COTE D'IVOIRE,CÔTE D'IVOIRE
"KOREA, REPUBLIC","KOREA, REPUBLIC OF"
PALESTINE,"PALESTINE, STATE OF"
$ ./fix-metadata-values.py -i 2020-08-04-PB-new-countries.csv -db dspace -u dspace -p 'fuuu' -f cg.coverage.country -t 'correct' -m 228
  • I had to restart Tomcat 7 three times before all the Solr statistics cores came up properly
    • I started a full Discovery re-indexing

2020-08-05

  • Port my dspace-curation-tasks to DSpace 6 and tag version 6.0-SNAPSHOT
  • I downloaded the UN M.49 CSV file to start working on updating the CGSpace regions
    • The first issue is that they don’t version the file, so you have no idea when it was released
    • The second issue is that three rows have errors because “China, Macao Special Administrative Region” is not wrapped in quotes
  • Bizu said she was having problems approving tasks on CGSpace
    • I looked at the PostgreSQL locks and they have skyrocketed since yesterday:

(Figure: PostgreSQL locks per day)

(Figure: PostgreSQL query length per day)

  • Seems that something happened yesterday afternoon at around 5PM…
    • For now I will just run all updates on the server and reboot it, as I have no idea what causes this issue
    • I had to restart Tomcat 7 three times after the server came back up before all Solr statistics cores came up properly
  • I checked the nginx logs around 5PM yesterday to see who was accessing the server:
# cat /var/log/nginx/*.log /var/log/nginx/*.log.1 | grep -E '04/Aug/2020:(17|18)' | goaccess --log-format=COMBINED -
  • I see the Macaroni Bros are using their new user agent for harvesting: RTB website BOT
    • But that pattern doesn’t match in the nginx bot list or Tomcat’s crawler session manager valve because we’re only checking for [Bb]ot!
    • So they have created thousands of Tomcat sessions:
$ cat dspace.log.2020-08-04 | grep -E "(63.32.242.35|64.62.202.71)" | grep -E 'session_id=[A-Z0-9]{32}' | sort | uniq | wc -l
5693
  • DSpace itself uses a case-sensitive regex for user agents so there are no hits from those IPs in Solr, but I need to tweak the other regexes so they don’t misuse the resources
    • Perhaps [Bb][Oo][Tt]
  • I see another IP 104.198.96.245, which is also using the “RTB website BOT” but there are 70,000 hits in Solr from earlier this year before they started using the user agent
    • I purged all the hits from Solr, including a few thousand from 64.62.202.71
  • A few more IPs causing lots of Tomcat sessions yesterday:
$ cat dspace.log.2020-08-04 | grep "38.128.66.10" | grep -E 'session_id=[A-Z0-9]{32}' | sort | uniq | wc -l
1585
$ cat dspace.log.2020-08-04 | grep "64.62.202.71" | grep -E 'session_id=[A-Z0-9]{32}' | sort | uniq | wc -l
5691
  • 38.128.66.10 isn’t creating any Solr statistics due to our DSpace agents pattern, but they are creating lots of sessions so perhaps I need to force them to use one session in Tomcat:
Mozilla/5.0 (Windows NT 5.1) brokenlinkcheck.com/1.2
  • 64.62.202.71 is using a user agent I’ve never seen before:
Mozilla/5.0 (compatible; +centuryb.o.t9[at]gmail.com)
  • So now our “bot” regex can’t even match that…
    • Unless we change it to [Bb]\.?[Oo]\.?[Tt]\.?… which seems to match all variations of “bot” I can think of right now, according to regexr.com:
RTB website BOT
Altmetribot
Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)
Mozilla/5.0 (compatible; +centuryb.o.t9[at]gmail.com)
Mozilla/5.0 (compatible; BLEXBot/1.0; +http://webmeup-crawler.com/)
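A quick way to sanity-check the proposed pattern locally is to run it over those user agents with grep -E:

```shell
# Write the sample user agents to a file and count how many match the
# proposed case-variant "bot" pattern (all five should match, so this prints 5)
cat > /tmp/user-agents.txt <<'EOF'
RTB website BOT
Altmetribot
Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)
Mozilla/5.0 (compatible; +centuryb.o.t9[at]gmail.com)
Mozilla/5.0 (compatible; BLEXBot/1.0; +http://webmeup-crawler.com/)
EOF
grep -cE '[Bb]\.?[Oo]\.?[Tt]\.?' /tmp/user-agents.txt
```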
  • And another IP belonging to Turnitin (the alternate user agent of Turnitinbot):
$ cat dspace.log.2020-08-04 | grep "199.47.87.145" | grep -E 'session_id=[A-Z0-9]{32}' | sort | uniq | wc -l
2777
  • I will add Turnitin to the Tomcat Crawler Session Manager Valve regex as well…
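For reference, the valve lives in Tomcat’s server.xml; the change would look something like this (a sketch, not our exact configuration — the stock crawlerUserAgents pattern only matches literal “bot” variants, so the extra alternations are the whole point):

```xml
<!-- Sketch of the Crawler Session Manager Valve in Tomcat's server.xml
     (not our exact config): crawlerUserAgents overrides the default pattern
     so that "b.o.t"-style agents, Turnitin, and the RTB bot each share one
     session instead of creating thousands -->
<Valve className="org.apache.catalina.valves.CrawlerSessionManagerValve"
       crawlerUserAgents=".*[bB]\.?[oO]\.?[tT]\.?.*|.*Yahoo! Slurp.*|.*Turnitin.*|.*RTB website.*" />
```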

2020-08-06

  • I have been working on processing the Solr statistics with the Atmire tool on DSpace Test the last few days:
    • statistics:
      • 2,040,385 docs: 2h 28m 49s
    • statistics-2019:
      • 8,960,000 docs: 12h 7s
      • 1,780,575 docs: 2h 7m 29s
    • statistics-2018:
      • 1,970,000 docs: 12h 1m 28s
      • 360,000 docs: 2h 54m 56s (Linode rebooted)
      • 1,110,000 docs: 7h 1m 44s (Restarted Tomcat, oops)
  • I decided to start the 2018 core over again, so I re-synced it from CGSpace, started again from the solr-upgrade-statistics-6x tool, and now I’m having the same Java heap space issues that I had last month
    • The process kept crashing due to memory, so I increased the memory to 3072m and finally 4096m…
    • Also, I decided to try to purge all the -unmigrated docs that it had found so far to see if that helps…
    • There were about 466,000 records unmigrated so far, most of which were type: 5 (SITE statistics)
    • Now it is processing again…
  • I developed a small Java class called FixJpgJpgThumbnails to remove “.jpg.jpg” thumbnails from the THUMBNAIL bundle and replace them with their originals from the ORIGINAL bundle

2020-08-07

  • I improved RemovePNGThumbnailsForPDFs.java a bit more to exclude infographics and original bitstreams larger than 100 KiB
    • I ran it on CGSpace and it cleaned up 3,769 thumbnails!
    • Afterwards I ran dspace cleanup -v to remove the deleted thumbnails

2020-08-08

  • The Atmire stats processing for the statistics-2018 Solr core keeps stopping with this error:
Exception: 50 consecutive records couldn't be saved. There's most likely an issue with the connection to the solr server. Shutting down.
java.lang.RuntimeException: 50 consecutive records couldn't be saved. There's most likely an issue with the connection to the solr server. Shutting down.
        at com.atmire.statistics.util.update.atomic.AtomicStatisticsUpdater.storeOnServer(SourceFile:317)
        at com.atmire.statistics.util.update.atomic.AtomicStatisticsUpdater.processRecords(SourceFile:177)
        at com.atmire.statistics.util.update.atomic.AtomicStatisticsUpdater.performRun(SourceFile:161)
        at com.atmire.statistics.util.update.atomic.AtomicStatisticsUpdater.update(SourceFile:128)
        at com.atmire.statistics.util.update.atomic.AtomicStatisticsUpdateCLI.main(SourceFile:78)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.dspace.app.launcher.ScriptLauncher.runOneCommand(ScriptLauncher.java:229)
        at org.dspace.app.launcher.ScriptLauncher.main(ScriptLauncher.java:81)
  • It lists a few of the records that it is having issues with and they all have integer IDs
    • When I checked Solr I see 8,000 of them, some of which have type 0 and some with no type…
    • I purged them and then the process continues:
$ curl -s "http://localhost:8081/solr/statistics-2018/update?softCommit=true" -H "Content-Type: text/xml" --data-binary '<delete><query>id:/[0-9]+/</query></delete>'

2020-08-09

  • The Atmire script did something to the server and created 132GB of log files so the root partition ran out of space…
  • I removed the log file and tried to re-run the process but it seems to be looping over 11,000 records and failing, creating millions of lines in the logs again:
# grep -oE "Record uid: ([a-f0-9\\-]*){1} couldn't be processed" /home/dspacetest.cgiar.org/log/dspace.log.2020-08-09 > /tmp/not-processed-errors.txt
# wc -l /tmp/not-processed-errors.txt
2202973 /tmp/not-processed-errors.txt
# sort /tmp/not-processed-errors.txt | uniq -c | tail -n 10
    220 Record uid: ffe52878-ba23-44fb-8df7-a261bb358abc couldn't be processed
    220 Record uid: ffecb2b0-944d-4629-afdf-5ad995facaf9 couldn't be processed
    220 Record uid: ffedde6b-0782-4d9f-93ff-d1ba1a737585 couldn't be processed
    220 Record uid: ffedfb13-e929-4909-b600-a18295520a97 couldn't be processed
    220 Record uid: fff116fb-a1a0-40d0-b0fb-b71e9bb898e5 couldn't be processed
    221 Record uid: fff1349d-79d5-4ceb-89a1-ce78107d982d couldn't be processed
    220 Record uid: fff13ddb-b2a2-410a-9baa-97e333118c74 couldn't be processed
    220 Record uid: fff232a6-a008-47d0-ad83-6e209bb6cdf9 couldn't be processed
    221 Record uid: fff75243-c3be-48a0-98f8-a656f925cb68 couldn't be processed
    221 Record uid: fff88af8-88d4-4f79-ba1a-79853973c872 couldn't be processed
  • I looked at some of those records and saw strange objects in their containerCommunity, containerCollection, etc…
{
  "responseHeader": {
    "status": 0,
    "QTime": 0,
    "params": {
      "q": "uid:fff1349d-79d5-4ceb-89a1-ce78107d982d",
      "indent": "true",
      "wt": "json",
      "_": "1596957629970"
    }
  },
  "response": {
    "numFound": 1,
    "start": 0,
    "docs": [
      {
        "containerCommunity": [
          "155",
          "155",
          "{set=null}"
        ],
        "uid": "fff1349d-79d5-4ceb-89a1-ce78107d982d",
        "containerCollection": [
          "1099",
          "830",
          "{set=830}"
        ],
        "owningComm": [
          "155",
          "155",
          "{set=null}"
        ],
        "isInternal": false,
        "isBot": false,
        "statistics_type": "view",
        "time": "2018-05-08T23:17:00.157Z",
        "owningColl": [
          "1099",
          "830",
          "{set=830}"
        ],
        "_version_": 1621500445042147300
      }
    ]
  }
}
  • I deleted those 11,724 records with the strange “set” object in the collections and communities, as well as 360,000 records with id: -1
$ curl -s "http://localhost:8081/solr/statistics-2018/update?softCommit=true" -H "Content-Type: text/xml" --data-binary '<delete><query>owningColl:/.*set.*/</query></delete>'
$ curl -s "http://localhost:8081/solr/statistics-2018/update?softCommit=true" -H "Content-Type: text/xml" --data-binary '<delete><query>id:\-1</query></delete>'
  • I was going to compare the CUA stats for 2018 and 2019 on CGSpace and DSpace Test, but after Linode rebooted CGSpace (linode18) for maintenance yesterday the Solr cores didn’t all come back up OK
    • I had to restart Tomcat five times before they all came up!
    • After that I generated a report for 2018 and 2019 on each server and found that the difference is only about 10,000–20,000 hits per month, which is much less than I was expecting
  • I noticed some authors that should have ORCID identifiers, but didn’t (perhaps older items before we were tagging ORCID metadata)
    • With the simple list below I added 1,341 identifiers!
$ cat 2020-08-09-add-ILRI-orcids.csv
dc.contributor.author,cg.creator.id
"Grace, Delia","Delia Grace: 0000-0002-0195-9489"
"Delia Grace","Delia Grace: 0000-0002-0195-9489"
"Baker, Derek","Derek Baker: 0000-0001-6020-6973"
"Ngan Tran Thi","Tran Thi Ngan: 0000-0002-7184-3086"
"Dang Xuan Sinh","Sinh Dang-Xuan: 0000-0002-0522-7808"
"Hung Nguyen-Viet","Hung Nguyen-Viet: 0000-0001-9877-0596"
"Pham Van Hung","Pham Anh Hung: 0000-0001-9366-0259"
"Lindahl, Johanna F.","Johanna Lindahl: 0000-0002-1175-0398"
"Teufel, Nils","Nils Teufel: 0000-0001-5305-6620"
"Duncan, Alan J.","Alan Duncan: 0000-0002-3954-3067"
"Moodley, Arshnee","Arshnee Moodley: 0000-0002-6469-3948"
  • That got me curious, so I generated a list of all the unique ORCID identifiers we have in the database:
dspace=# \COPY (SELECT DISTINCT text_value FROM metadatavalue WHERE resource_type_id=2 AND metadata_field_id=240) TO /tmp/2020-08-09-orcid-identifiers.csv;
COPY 2095
dspace=# \q
$ grep -oE '[A-Z0-9]{4}-[A-Z0-9]{4}-[A-Z0-9]{4}-[A-Z0-9]{4}' /tmp/2020-08-09-orcid-identifiers.csv | sort | uniq > /tmp/2020-08-09-orcid-identifiers-uniq.csv
$ wc -l /tmp/2020-08-09-orcid-identifiers-uniq.csv
1949 /tmp/2020-08-09-orcid-identifiers-uniq.csv
  • I looked into the strange Solr record above that had “{set=830}” in the communities and collections
    • There are exactly 11,724 records like this in the current CGSpace (DSpace 5.8) statistics-2018 Solr core
    • None of them have an id or type field!
    • I see 242,000 of them in the statistics-2017 core, 185,063 in the statistics-2016 core… all the way to 2010, but not in 2019 or the current statistics core
    • I decided to purge all of these records from CGSpace right now so they don’t even have a chance at being an issue on the real migration:
$ curl -s "http://localhost:8081/solr/statistics-2018/update?softCommit=true" -H "Content-Type: text/xml" --data-binary '<delete><query>owningColl:/.*set.*/</query></delete>'
...
$ curl -s "http://localhost:8081/solr/statistics-2010/update?softCommit=true" -H "Content-Type: text/xml" --data-binary '<delete><query>owningColl:/.*set.*/</query></delete>'
  • I added Googlebot and Twitterbot to the list of explicitly allowed bots
    • In Google’s case, they were getting lumped in with all the other bad bots and then important links like the sitemaps were returning HTTP 503, but they generally respect robots.txt so we should just allow them (perhaps we can control the crawl rate in the webmaster console)
    • In Twitter’s case they were also getting lumped in with the bad bots too, but really they only make ~50 or so requests a day when someone posts a CGSpace link on Twitter
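One way to express that distinction in nginx is a map on the user agent, so known-good crawlers are exempted from the bad-bot handling (a hypothetical sketch, not the actual CGSpace configuration):

```nginx
# Classify user agents: known good crawlers are "allowed" (exempt from the
# bad-bot rate limiting), anything else matching "bot" gets the restricted
# treatment; the earliest matching regex wins, so the allowlist comes first
map $http_user_agent $ua_bot_class {
    default        "";
    ~*Googlebot    "allowed";
    ~*Twitterbot   "allowed";
    ~*bot          "limited";
}
```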
  • I tagged the ISO 3166-1 Alpha-2 country codes on all items on CGSpace using my CountryCodeTagger curation task
    • I still need to set up a cron job for it…
    • This tagged 50,000 countries!
dspace=# SELECT count(text_value) FROM metadatavalue WHERE metadata_field_id = 243 AND resource_type_id = 2;
 count
-------
 50812
(1 row)
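The missing cron job could look something like this in the dspace user’s crontab (a sketch only; the task name and the install path are assumptions, not the real configuration):

```shell
# Hypothetical crontab entry: run the CountryCodeTagger curation task over the
# whole repository every Sunday night (task name and dspace path are assumed)
0 1 * * 0 /home/cgspace.cgiar.org/bin/dspace curate -t countrycodetagger -i all
```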

2020-08-11

  • I noticed some more hits from Macaroni’s WordPress harvester that I hadn’t caught last week
    • 104.198.13.34 made many requests without a user agent, with a “WordPress” user agent, and with their new “RTB website BOT” user agent, about 100,000 in total in 2020, and maybe another 70,000 in the other years
    • I will purge them and add them to the Tomcat Crawler Session Manager and the DSpace bots list so they don’t get logged in Solr
  • I noticed a bunch of user agents with “Crawl” in the Solr stats, which is strange because the DSpace spider agents file has had “crawl” for a long time (and it is case insensitive)
    • In any case I will purge them and add them to the Tomcat Crawler Session Manager Valve so that at least their sessions get re-used

2020-08-13

  • Linode keeps sending emails saying that the load and outgoing bandwidth are above the threshold
    • I took a look briefly and found two IPs with the “Delphi 2009” user agent
    • Then there is 88.99.115.53 which made 82,000 requests in 2020 so far with no user agent
    • 64.62.202.73 has made 7,000 requests with this user agent Mozilla/5.0 (compatible; +centuryb.o.t9[at]gmail.com)
    • I had added it to the Tomcat Crawler Session Manager Valve last week but never purged the hits from Solr
    • 195.54.160.163 is making thousands of requests with user agents like this:

(CASE WHEN 2850=9474 THEN 2850 ELSE NULL END)

  • I purged 150,000 hits from 2019 and 2020 from these user agents and hosts

2020-08-14

  • Last night I started the processing of the statistics-2016 core with the Atmire stats util and I see some errors like this:
Record uid: f6b288d7-d60d-4df9-b311-1696b88552a0 couldn't be processed
com.atmire.statistics.util.update.atomic.ProcessingException: something went wrong while processing record uid: f6b288d7-d60d-4df9-b311-1696b88552a0, an error occured in the com.atmire.statistics.util.update.atomic.processor.ContainerOwnerDBProcessor
        at com.atmire.statistics.util.update.atomic.AtomicStatisticsUpdater.applyProcessors(SourceFile:304)
        at com.atmire.statistics.util.update.atomic.AtomicStatisticsUpdater.processRecords(SourceFile:176)
        at com.atmire.statistics.util.update.atomic.AtomicStatisticsUpdater.performRun(SourceFile:161)
        at com.atmire.statistics.util.update.atomic.AtomicStatisticsUpdater.update(SourceFile:128)
        at com.atmire.statistics.util.update.atomic.AtomicStatisticsUpdateCLI.main(SourceFile:78)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.dspace.app.launcher.ScriptLauncher.runOneCommand(ScriptLauncher.java:229)
        at org.dspace.app.launcher.ScriptLauncher.main(ScriptLauncher.java:81)
Caused by: java.lang.NullPointerException
  • I see it has id: 980-unmigrated and type: 0
  • The 2016 core has 629,983 unmigrated docs, mostly:
    • type: 5: 620311
    • type: 0: 7255
    • type: 3: 1333
  • I purged the unmigrated docs and continued processing:
$ curl -s "http://localhost:8081/solr/statistics-2016/update?softCommit=true" -H "Content-Type: text/xml" --data-binary '<delete><query>id:/.*unmigrated.*/</query></delete>'
$ export JAVA_OPTS='-Dfile.encoding=UTF-8 -Xmx2048m'
$ chrt -b 0 dspace dsrun com.atmire.statistics.util.update.atomic.AtomicStatisticsUpdateCLI -t 12 -c statistics-2016
  • Then I see there are 849,000 docs with id: -1 and type: 5, so I should probably purge those too:
$ curl -s "http://localhost:8081/solr/statistics-2017/update?softCommit=true" -H "Content-Type: text/xml" --data-binary '<delete><query>id:\-1</query></delete>'
  • Altmetric asked for a dump of CGSpace’s OAI “sets” so they can update their affiliation mappings
    • I did it in a kinda ghetto way:
$ http 'https://cgspace.cgiar.org/oai/request?verb=ListSets' > /tmp/0.xml
$ for num in {100..1300..100}; do http "https://cgspace.cgiar.org/oai/request?verb=ListSets&resumptionToken=////$num" > /tmp/$num.xml; sleep 2; done
$ for num in {0..1300..100}; do cat /tmp/$num.xml >> /tmp/cgspace-oai-sets.xml; done
  • This produces one file that has all the sets, albeit with 14 pages of responses concatenated into one document, but that’s how theirs was in the first place…
  • Help Bizu with a restricted item for CIAT