- I also did a duplicate check and found thirteen items that seem to be duplicates, so I sent them to Leigh to check
- I read this [interesting blog post about PostgreSQL's `log_statement` function](https://www.endpointdev.com/blog/2012/06/logstatement-postgres-all-full-logging/)
- Someone pointed out that this also lets you take advantage of [PgBadger](https://github.com/darold/pgbadger) analysis
- I enabled statement logging on DSpace Test and I will check it in a few days
- Reading about DSpace 7 REST API again
- Here is how to get the first page of 100 items: https://dspace7test.ilri.org/server/api/discover/search/objects?dsoType=item&page=1&size=100
- I really want to benchmark this to see how fast we can get all the pages
- Another thing I notice is that the bitstreams are not included in the response, so retrieving them will require an extra call...
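- For reference, a quick way to poke at this endpoint from the command line (a sketch; the `_embedded.searchResult` JSON paths are from memory, so verify them against a real response):

```console
$ # show the pagination metadata (totalElements, totalPages, number)
$ curl -s 'https://dspace7test.ilri.org/server/api/discover/search/objects?dsoType=item&page=1&size=100' | jq '._embedded.searchResult.page'
$ # list the UUIDs of the items on this page
$ curl -s 'https://dspace7test.ilri.org/server/api/discover/search/objects?dsoType=item&page=1&size=100' | jq -r '._embedded.searchResult._embedded.objects[]._embedded.indexableObject.uuid'
```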
- I'm checking the PostgreSQL logs now that statement logging has been enabled for a few days on DSpace Test
- I see the logs are about 7 or 8 GB, which is larger than expected—and this is the test server!
- I will now play with pgbadger to see if it gives any useful insights
- Hmm, it seems the `log_statement` advice was outdated, as pgBadger itself says:
> Do not enable log_statement as its log format will not be parsed by pgBadger.
... and:
> Warning: Do not enable both log_min_duration_statement, log_duration and log_statement all together, this will result in wrong counter values. Note that this will also increase drastically the size of your log. log_min_duration_statement should always be preferred.
- So we need to follow pgBadger's instructions instead in order to get a suitable log file
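- pgBadger wants `log_min_duration_statement` and a parseable `log_line_prefix`; a sketch of one way to set them (the prefix is as I remember it from the pgBadger docs, and these could just as well be set in `postgresql.conf`):

```console
$ # run as the postgres superuser; turn off log_statement and log everything with durations instead
$ psql -c "ALTER SYSTEM SET log_statement = 'none';"
$ psql -c "ALTER SYSTEM SET log_min_duration_statement = 0;"
$ psql -c "ALTER SYSTEM SET log_line_prefix = '%t [%p]: user=%u,db=%d,app=%a,client=%h ';"
$ psql -c "SELECT pg_reload_conf();"
```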
- After enabling the new settings I see that our log file is going to be reaallllly big... hmmmm will check tomorrow morning
- I noticed that the DSpace statistics pages don't seem to work on communities or collections
- I finally took time to look in the DSpace log file and found this for one:
```console
2023-08-16 14:30:31,873 WARN dace8f96-f034-488e-b38c-9f2eb5d0e002 6cbd0b18-6852-4294-99a5-02dfcab0a469 org.dspace.app.rest.exception.DSpaceApiExceptionControllerAdvice @ Request is invalid or incorrect (status:400 exception: Invalid UUID string: -1 at: java.base/java.util.UUID.fromString1(UUID.java:280))
```
- I'm surprised to see this because those should have been dealt with when we upgraded to DSpace 6
- Looking in the Solr statistics core I see ~1,000,000 documents with the ID `-1`, and about 57,000,000 that don't
- Also interesting, faceting by `dateYear` I see:
- 2023: 209566
- 2022: 403871
- 2021: 336548
- 2020: 31659
- ... none before 2020
- They are all type 5, which is "Site" aka the home page, according to `dspace-api/src/main/java/org/dspace/core/Constants.java`
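- For reference, the counts and facets above come from queries along these lines against the statistics core (a sketch; adjust the host, port, and query escaping for your Solr version):

```console
$ # count documents with id "-1" and type 5, faceting on dateYear
$ curl -s 'http://localhost:8081/solr/statistics/select' \
    --data-urlencode 'q=id:"-1" AND type:5' \
    --data-urlencode 'rows=0' \
    --data-urlencode 'facet=true' \
    --data-urlencode 'facet.field=dateYear' \
    --data-urlencode 'wt=json' \
    | jq '.response.numFound, .facet_counts.facet_fields.dateYear'
```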
- Ah hah, and I can see in my DSpace 7 test Solr there are a bunch of hits with `type: 5` that have "-1" of course, but also newer ones that have an actual UUID
- I used the `/server/api/dso/find?uuid=3945ec23-2426-4fce-a2ea-48b38b91547f` endpoint to find out that there is a new `/server/api/core/sites` endpoint listing exactly one site (the home page) with this ID
- So for now I can replace all the "-1" documents with this ID on the test server at least, then I will have to remember to do that during the migration of the production instance
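- The sites endpoint is easy to check (a sketch; I'm assuming the embedded key in the response is `sites`):

```console
$ curl -s 'https://dspace7test.ilri.org/server/api/core/sites' | jq '._embedded.sites[] | {uuid, name}'
```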
- I did a new export from DSpace 6 using solr-import-export-json with a query limiting it to documents with type 5 and an ID of -1:
```console
$ chrt -b 0 ./run.sh -s http://localhost:8081/solr/statistics -a export -o /tmp/statistics-fix-uuid.json -f 'id:\-1 AND type:5 AND time:[2020-01-01T00\:00\:00Z TO 2023-12-31T23\:59\:59Z]' -k uid -S actingGroupId,actingGroupParentId,actorMemberGroupId,author_mtdt,author_mtdt_search,bitstreamCount,bitstreamId,complete_query,complete_query_search,containerBitstream,containerCollection,containerCommunity,containerItem,core_update_run_nb,countryCode_ngram,countryCode_search,cua_version,dateYear,dateYearMonth,file_id,filterquery,first_name,geoipcountrycode,geoIpCountryCode,group_id,group_map,group_name,ip_ngram,ip_search,isArchived,isInternal,iso_mtdt,iso_mtdt_search,isWithdrawn,last_name,name,ngram_query_search,ngram_simplequery_search,orphaned,parent_count,p_communities_id,p_communities_map,p_communities_name,p_group_id,p_group_map,p_group_name,range,rangeDescription,rangeDescription_ngram,rangeDescription_search,range_ngram,range_search,referrer_ngram,referrer_search,simple_query,simple_query_search,solr_update_time_stamp,storage_nb_of_bitstreams,storage_size,storage_statistics_type,subject_mtdt,subject_mtdt_search,text,userAgent_ngram,userAgent_search,version_id,workflowItemId
```
- Then I replaced the IDs with the UUID of the site homepage on DSpace 7 Test:
```console
$ sed -i 's/"id":"-1"/"id":"3945ec23-2426-4fce-a2ea-48b38b91547f"/' /tmp/statistics-fix-uuid.json
```
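- Then a re-import with the same tool, roughly (a sketch; I'm assuming `run.sh` accepts `-a import` with the same options, and that DSpace 7 Test's Solr statistics core is on localhost:8983):

```console
$ chrt -b 0 ./run.sh -s http://localhost:8983/solr/statistics -a import -o /tmp/statistics-fix-uuid.json -k uid
```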
- After re-importing those records I no longer see the "-1" IDs, but I still get the same error in the log
- I don't understand, maybe there is some voodoo, so I rebooted the server
- Hmm, no, it's not a voodoo cache issue, so I really need to debug this:
```console
2023-08-16 15:44:07,122 WARN dace8f96-f034-488e-b38c-9f2eb5d0e002 036b88e6-7548-4852-9646-f345ce3bfcc2 org.dspace.app.rest.exception.DSpaceApiExceptionControllerAdvice @ Request is invalid or incorrect (status:400 exception: Invalid UUID string: -1 at: java.base/java.util.UUID.fromString1(UUID.java:280))
```
- On a related note, I figured out that the root site already has a UUID in DSpace 6, and it's exactly the one above (3945ec23-2426-4fce-a2ea-48b38b91547f)
- I noticed it while looking at the [DSpace 6 REST API's hierarchy page](https://cgspace.cgiar.org/rest/hierarchy)
- So I think I can update these "-1" IDs on the "type:5" documents in our production instance as well...
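- For example (a sketch; I'm assuming the root object of the hierarchy response has an `id` field):

```console
$ curl -s 'https://cgspace.cgiar.org/rest/hierarchy' | jq '.id'
```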
## 2023-08-17
- I decided to update the "-1" IDs in Solr on DSpace 6
- Unfortunately, in Solr there is no way to update only documents matching a query, so we have to export and re-import
- I exported all documents with "type:5" (Homepage) and replaced the ID in the JSON, along the lines of the earlier commands (a sketch follows; the file name is arbitrary and I've omitted the long field-exclusion list):
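```console
$ # same approach as on DSpace 7 Test, but against the DSpace 6 statistics core
$ chrt -b 0 ./run.sh -s http://localhost:8081/solr/statistics -a export -o /tmp/statistics-type5.json -f 'id:\-1 AND type:5' -k uid
$ sed -i 's/"id":"-1"/"id":"3945ec23-2426-4fce-a2ea-48b38b91547f"/' /tmp/statistics-type5.json
$ chrt -b 0 ./run.sh -s http://localhost:8081/solr/statistics -a import -o /tmp/statistics-type5.json -k uid
```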
- I wrote a Python script to benchmark harvesting all 100,000+ items using the `/api/discover/search/objects` endpoint 100 items at a time
- I was able to harvest the entire 106,000 items in fifty-two minutes, which seems slow, but that's about ten times faster than with the legacy REST API...
- Still, I need to benchmark a bit more, as the item response doesn't include collection mappings or thumbnails
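- The script is essentially just paging through the discover endpoint; in shell terms the idea is something like this (a sketch, not the actual Python script; JSON paths assumed as in the earlier example):

```console
$ # ~106,000 items at 100 per page is roughly 1,060 pages
$ # page numbering assumed zero-based here; adjust if the API counts from 1
$ for page in $(seq 0 1060); do \
    curl -s "https://dspace7test.ilri.org/server/api/discover/search/objects?dsoType=item&page=${page}&size=100" \
      | jq -r '._embedded.searchResult._embedded.objects[]._embedded.indexableObject.uuid'; \
  done > /tmp/dspace7-item-uuids.txt
```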
- Reading the [API docs](https://github.com/DSpace/RestContract/blob/main/README.md#etags--conditional-headers) it seems that we should be able to use the standard `If-Modified-Since` header for some endpoints
- I tried it on the `/api/discover/search/objects` and `/api/core/items` endpoints, but apparently those don't support this header because I don't see a `Last-Modified` header in the response
- According to the docs, it means that these endpoints indeed don't support it...
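- The check is simply to look for the header and try a conditional request (a sketch):

```console
$ # no Last-Modified header comes back, so conditional requests can't work here
$ curl -s -D - -o /dev/null 'https://dspace7test.ilri.org/server/api/core/items' | grep -i 'last-modified'
$ # if the header were supported this would return 304 instead of 200
$ curl -s -o /dev/null -w '%{http_code}\n' -H 'If-Modified-Since: Mon, 14 Aug 2023 00:00:00 GMT' 'https://dspace7test.ilri.org/server/api/core/items'
```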
## 2023-08-22
- I was experimenting with the DSpace 7 REST API again
- This time looking at the thumbnail responses in item endpoints
- According to [the documentation](https://github.com/DSpace/RestContract/blob/main/items.md#main-thumbnail) the API will respond with HTTP 200 if there is a thumbnail, and HTTP 204 if there is no content
- That means we need to make the request before we can even find out!
- Tim on DSpace Slack pointed out the DSpace 7 REST API's [projections](https://github.com/DSpace/RestContract/blob/main/projections.md)
- This means we can embed resources like thumbnail and owningCollection in the item (and other) requests, for example: https://dspace7test.ilri.org/server/api/discover/search/objects?dsoType=item&embed=thumbnail,owningCollection
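- A quick way to confirm the embedded resources come back (a sketch; JSON paths assumed as in the earlier example):

```console
$ curl -s 'https://dspace7test.ilri.org/server/api/discover/search/objects?dsoType=item&embed=thumbnail,owningCollection&size=1' \
    | jq '._embedded.searchResult._embedded.objects[0]._embedded.indexableObject._embedded | keys'
```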
## 2023-08-23
- I benchmarked the DSpace 7 REST API with the new embeds and it took four hours and seventeen minutes to get all 106,000 items on DSpace 7 Test
- So this is much slower than the results I saw earlier this week, but maybe slightly faster than DSpace 6?
- Maria from Alliance contacted me to say they have agreed to use UN M.49 regions more strictly in TIP, so they want to replace our non-standard "Latin America" region with "Latin America and the Caribbean", "Caribbean" and "Americas" on all Alliance outputs
- I exported their community on CGSpace and fixed the metadata in OpenRefine
- I tried to run `dspace cleanup -v` on CGSpace, but got this error:
```console
Caused by: org.postgresql.util.PSQLException: ERROR: update or delete on table "bitstream" violates foreign key constraint "bundle_primary_bitstream_id_fkey" on table "bundle"
Detail: Key (uuid)=(61bff7da-c8e3-420f-841c-ec5e8238d716) is still referenced from table "bundle".
```
- The solution, as always, is to manually clear those stale primary bitstream references in PostgreSQL so the cleanup can proceed:
```console
$ psql -d dspace -c "UPDATE bundle SET primary_bitstream_id=NULL WHERE primary_bitstream_id IN ('61bff7da-c8e3-420f-841c-ec5e8238d716');"
UPDATE 1
```
- I also tried to delete all users who haven't logged in since 2017 using the groomer script, but it crashes due to those users still having items or workflows or whatever:
```console
$ dspace dsrun org.dspace.eperson.Groomer -a -b 08/23/2017 -d
```
- I see that it is now [possible in DSpace 7 to delete such users](https://github.com/DSpace/DSpace/pull/2229) so we will have to wait
- A few weeks ago we received a request from the Fruits and Vegetables Initiative saying that they've gotten approval to begin using the long name instead of the short one everywhere, apparently for SEO reasons
- After communicating with PRMS and other teams working on systems using this metadata I finally updated them in CGSpace
- Export CGSpace to check for missing Initiative collection mappings