CGSpace Notes

Documenting day-to-day work on the CGSpace repository.

August, 2023

2023-08-03

  • I finally got around to working on Peter’s cleanups for affiliations, authors, and donors from last week
    • I did some minor cleanups myself and applied them to CGSpace
  • Start working on some batch uploads for IFPRI

2023-08-05

  • Export CGSpace to check for missing Initiative collection mappings
  • Start a harvest on AReS

2023-08-07

  • I’m checking the PostgreSQL logs now that statement logging has been enabled for a few days on DSpace Test
    • I see the logs are about 7 or 8 GB, which is larger than expected—and this is the test server!
    • I will now play with pgBadger to see if it gives any useful insights
    • Hmm, it seems the log_statement advice was outdated, as pgBadger itself says:

Do not enable log_statement as its log format will not be parsed by pgBadger.

… and:

Warning: Do not enable both log_min_duration_statement, log_duration and log_statement all together, this will result in wrong counter values. Note that this will also increase drastically the size of your log. log_min_duration_statement should always be preferred.

  • So we need to follow pgBadger’s instructions instead to get a suitable log file (settings sketched below)
    • After enabling the new settings I see that our log file is going to be really big… I will check again tomorrow morning
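    • For reference, pgBadger’s documentation suggests settings along these lines in postgresql.conf (a sketch based on my reading of their docs; log_min_duration_statement = 0 logs every statement along with its duration):
log_min_duration_statement = 0
log_checkpoints = on
log_connections = on
log_disconnections = on
log_lock_waits = on
log_temp_files = 0
log_autovacuum_min_duration = 0
log_error_verbosity = default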
  • More work on the IFPRI batch uploads

2023-08-08

  • Apply more corrections to authors from Peter on CGSpace
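  • As usual this was something like the following, assuming our fix-metadata-values.py helper and a hypothetical CSV of corrections (-m 3 being the metadata field ID for dc.contributor.author):
$ ./ilri/fix-metadata-values.py -i /tmp/2023-08-08-fix-authors.csv -db dspace -u dspace -p 'fuuu' -f dc.contributor.author -t correct -m 3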
  • I finally figured out a log_line_prefix for PostgreSQL that works for pgBadger:
log_line_prefix = '%t [%p]: user=%u,db=%d,app=%a,client=%h '
  • Now I can generate reports:
# /usr/bin/pgbadger -I -q /var/log/postgresql/postgresql-14-main.log -O /srv/www/pgbadger
  • Ideally we would run this incremental report every day on postgresql-14-main.log.1, i.e. yesterday’s version of the log file, after it is rotated (see the cron sketch below)
    • Now I have to see how large the file will be…
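    • A hypothetical cron entry for that (the schedule is an assumption and nothing is deployed yet):
# run pgBadger on yesterday's rotated log shortly after midnight
30 0 * * * /usr/bin/pgbadger -I -q /var/log/postgresql/postgresql-14-main.log.1 -O /srv/www/pgbadger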
  • I did some final updates to the ninety IFPRI records and uploaded them to DSpace Test first, then to CGSpace

2023-08-11

  • Fix bug with header background on DSpace 7 on mobile

2023-08-12

  • Export CGSpace to check for missing Initiative collection mappings
  • I deployed the latest OpenRXV master branch with Angular v14 and backend updates on the server
  • Start a harvest on AReS

2023-08-14

  • I ported the DSpace 6.x REST API patch to allow specifying a bundle name when POSTing a bitstream to the legacy REST API in DSpace 7.6

2023-08-16

  • I noticed that the DSpace statistics pages don’t seem to work on communities or collections
    • I finally took some time to look in the DSpace log file and found this error for one of them:
2023-08-16 14:30:31,873 WARN  dace8f96-f034-488e-b38c-9f2eb5d0e002 6cbd0b18-6852-4294-99a5-02dfcab0a469 org.dspace.app.rest.exception.DSpaceApiExceptionControllerAdvice @ Request is invalid or incorrect (status:400 exception: Invalid UUID string: -1 at: java.base/java.util.UUID.fromString1(UUID.java:280))
  • I’m surprised to see this because those should have been dealt with when we upgraded to DSpace 6
    • Looking in the Solr statistics core I see ~1,000,000 documents with the ID -1, and about 57,000,000 without it
    • Also interesting, faceting by dateYear I see:
      • 2023: 209566
      • 2022: 403871
      • 2021: 336548
      • 2020: 31659
      • … none before 2020
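    • Those counts came from something like this facet query (a sketch; quoting the ID keeps Solr from parsing the leading dash as a negation):
$ curl -s 'http://localhost:8081/solr/statistics/select?q=id%3A%22-1%22+AND+type%3A5&rows=0&facet=true&facet.field=dateYear'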
    • They are all type 5, which is “Site” aka the home page, according to dspace-api/src/main/java/org/dspace/core/Constants.java
    • Ah hah, and I can see in my DSpace 7 test Solr there are a bunch of hits with type: 5 that have “-1” of course, but also newer ones that have an actual UUID
    • I used the /server/api/dso/find?uuid=3945ec23-2426-4fce-a2ea-48b38b91547f endpoint to find out that there is a new /server/api/core/sites endpoint listing exactly one site (the home page) with this ID
    • So for now I can replace all the “-1” documents with this ID on the test server at least, then I will have to remember to do that during the migration of the production instance
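    • The lookups were along these lines (the hostname is a placeholder for the DSpace 7 test server):
$ curl -s 'https://dspace7.example.org/server/api/dso/find?uuid=3945ec23-2426-4fce-a2ea-48b38b91547f'
$ curl -s 'https://dspace7.example.org/server/api/core/sites'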
    • I did a new export from DSpace 6 using solr-import-export-json with a query limiting it to documents of type 5 and negative 1 ID:
$ chrt -b 0 ./run.sh -s http://localhost:8081/solr/statistics -a export -o /tmp/statistics-fix-uuid.json -f 'id:\-1 AND type:5 AND time:[2020-01-01T00\:00\:00Z TO 2023-12-31T23\:59\:59Z]' -k uid -S actingGroupId,actingGroupParentId,actorMemberGroupId,author_mtdt,author_mtdt_search,bitstreamCount,bitstreamId,complete_query,complete_query_search,containerBitstream,containerCollection,containerCommunity,containerItem,core_update_run_nb,countryCode_ngram,countryCode_search,cua_version,dateYear,dateYearMonth,file_id,filterquery,first_name,geoipcountrycode,geoIpCountryCode,group_id,group_map,group_name,ip_ngram,ip_search,isArchived,isInternal,iso_mtdt,iso_mtdt_search,isWithdrawn,last_name,name,ngram_query_search,ngram_simplequery_search,orphaned,parent_count,p_communities_id,p_communities_map,p_communities_name,p_group_id,p_group_map,p_group_name,range,rangeDescription,rangeDescription_ngram,rangeDescription_search,range_ngram,range_search,referrer_ngram,referrer_search,simple_query,simple_query_search,solr_update_time_stamp,storage_nb_of_bitstreams,storage_size,storage_statistics_type,subject_mtdt,subject_mtdt_search,text,userAgent_ngram,userAgent_search,version_id,workflowItemId
  • Then I replaced the IDs with the UUID of the site homepage on DSpace 7 Test:
$ sed -i 's/"id":"-1"/"id":"3945ec23-2426-4fce-a2ea-48b38b91547f"/' /tmp/statistics-fix-uuid.json
  • I re-imported those records and I no longer see the “-1” IDs, but still get the same error in the log
    • I don’t understand, maybe there is some voodoo, so I rebooted the server
    • Hmm, no, it’s not a voodoo cache issue, so I really need to debug this:
2023-08-16 15:44:07,122 WARN  dace8f96-f034-488e-b38c-9f2eb5d0e002 036b88e6-7548-4852-9646-f345ce3bfcc2 org.dspace.app.rest.exception.DSpaceApiExceptionControllerAdvice @ Request is invalid or incorrect (status:400 exception: Invalid UUID string: -1 at: java.base/java.util.UUID.fromString1(UUID.java:280))
  • On a related note, I figured out that the root site already has a UUID in DSpace 6, and it’s exactly the one above (3945ec23-2426-4fce-a2ea-48b38b91547f)
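  • A hypothetical way to confirm that from the database, assuming I remember the DSpace 6 schema correctly (the site table should hold the root site DSO):
$ psql -d dspace -c 'SELECT uuid FROM site;'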

2023-08-17

  • I decided to update the “-1” IDs in Solr on DSpace 6
    • Unfortunately, Solr has no way to update documents matching a query in place, so we have to export, modify, and re-import
    • I exported all documents with “type:5” (Homepage) and replaced the ID in the JSON:
$ chrt -b 0 ./run.sh -s http://localhost:8081/solr/statistics -a export -o /tmp/statistics-fix-uuid.json -f 'type:5' -k uid -S actingGroupId,actingGroupParentId,actorMemberGroupId,author_mtdt,author_mtdt_search,bitstreamCount,bitstreamId,complete_query,complete_query_search,containerBitstream,containerCollection,containerCommunity,containerItem,core_update_run_nb,countryCode_ngram,countryCode_search,cua_version,dateYear,dateYearMonth,file_id,filterquery,first_name,geoipcountrycode,geoIpCountryCode,group_id,group_map,group_name,ip_ngram,ip_search,isArchived,isInternal,iso_mtdt,iso_mtdt_search,isWithdrawn,last_name,name,ngram_query_search,ngram_simplequery_search,orphaned,parent_count,p_communities_id,p_communities_map,p_communities_name,p_group_id,p_group_map,p_group_name,range,rangeDescription,rangeDescription_ngram,rangeDescription_search,range_ngram,range_search,referrer_ngram,referrer_search,simple_query,simple_query_search,solr_update_time_stamp,storage_nb_of_bitstreams,storage_size,storage_statistics_type,subject_mtdt,subject_mtdt_search,text,userAgent_ngram,userAgent_search,version_id,workflowItemId
$ sed -i 's/"id":"-1"/"id":"3945ec23-2426-4fce-a2ea-48b38b91547f"/' /tmp/statistics-fix-uuid.json 
  • (Oops, skipping the fields above was not necessary, since I’m importing back into DSpace 6 where those fields exist)
  • Then I re-imported:
$ ./run.sh -s http://localhost:8081/solr/statistics -a import -o /tmp/statistics-fix-uuid.json -k uid
  • This worked, but I still see new records coming in that have “id:-1” so I will need to repeat this during the migration.
  • I also notice many stats records that have erroneous cities:
    • "city":"com.maxmind.geoip2.record.City [ {} ]"
    • "city":"com.maxmind.geoip2.record.City [ {\"geoname_id\":1002145,\"names\":{\"de\":\"George\",\"en\":\"George\",\"ru\":\"Джордж\",\"fr\":\"George\",\"ja\":\"ジョージ\"}} ]"

2023-08-18

  • Export CGSpace to check for missing Initiative collection mappings

2023-08-19

  • Start a harvest on AReS

2023-08-21

  • Experiment with the DSpace 7 REST API
    • I wrote a Python script to benchmark harvesting all 100,000+ items using the /api/discover/search/objects endpoint 100 items at a time
    • I was able to harvest all 106,000 items in fifty-two minutes, which seems slow, but is about ten times faster than with the legacy REST API… (see the paging sketch below)
    • Still, I need to benchmark a bit more, as the item response doesn’t include collection mappings or thumbnails
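    • A minimal sketch of that paging pattern using curl and jq (placeholder hostname; the page count assumes ~106,000 items, and my actual benchmark script is Python):
$ for page in $(seq 0 1060); do
    curl -s "https://dspace7.example.org/server/api/discover/search/objects?dsoType=item&page=${page}&size=100" |
      jq -r '._embedded.searchResult._embedded.objects[]._embedded.indexableObject.uuid'
  done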
  • Reading the API docs it seems that we should be able to use the standard If-Modified-Since header for some endpoints
    • I tried it on the /api/discover/search/objects and /api/core/items endpoints, but apparently those don’t support this header because I don’t see a Last-Modified header in the response
    • According to the docs, the absence of a Last-Modified header means these endpoints indeed don’t support it… (a quick check is sketched below)
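    • A quick hypothetical check with curl (placeholder hostname and UUID); a 304 status would indicate support, but I get a normal response with no Last-Modified header:
$ curl -s -o /dev/null -w '%{http_code}\n' -H 'If-Modified-Since: Mon, 21 Aug 2023 00:00:00 GMT' 'https://dspace7.example.org/server/api/core/items/3945ec23-2426-4fce-a2ea-48b38b91547f'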

2023-08-23

  • I benchmarked the DSpace 7 REST API with the new embeds and it took four hours and seventeen minutes to get all 106,000 items on DSpace 7 Test
    • So this is much slower than the results I saw earlier this week, but maybe slightly faster than DSpace 6?
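    • The request with embeds was presumably something like this (a sketch; thumbnail and mappedCollections are the embeds implied by my note from 2023-08-21 about what the plain response lacks):
$ curl -s 'https://dspace7.example.org/server/api/discover/search/objects?dsoType=item&size=100&embed=thumbnail&embed=mappedCollections'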
  • Maria from Alliance contacted me to say they have agreed to use UN M.49 regions more strictly in TIP, so they want to replace our non-standard “Latin America” region with “Latin America and the Caribbean”, “Caribbean”, and “Americas” on all Alliance outputs
    • I exported their community on CGSpace and fixed the metadata in OpenRefine
  • I tried to run dspace cleanup -v on CGSpace, but got this error:
Caused by: org.postgresql.util.PSQLException: ERROR: update or delete on table "bitstream" violates foreign key constraint "bundle_primary_bitstream_id_fkey" on table "bundle"
  Detail: Key (uuid)=(61bff7da-c8e3-420f-841c-ec5e8238d716) is still referenced from table "bundle".
  • The solution, as always, is to clear those stale references manually in PostgreSQL:
$ psql -d dspace -c "UPDATE bundle SET primary_bitstream_id=NULL WHERE primary_bitstream_id IN ('61bff7da-c8e3-420f-841c-ec5e8238d716');"
UPDATE 1
  • I also tried to delete all users who haven’t logged in since 2017 using the groomer script, but it crashes because those users still have associated items, workflows, or other objects:
$ dspace dsrun org.dspace.eperson.Groomer -a -b 08/23/2017 -d

2023-08-24

  • I spent some time trying to get themes to extend in DSpace 7
    • I finally got a basic ILRI theme working, but there is a bug that causes theme components to get duplicated

2023-08-25

  • Meeting with Altmetric about the next phase of their integration with CGSpace
  • A bit of cleanup on CGSpace metadata
    • I fixed DOIs, licenses, dates, subjects, affiliations, titles, publishers, and types in 1,240 items