<li>I finally got around to working on Peter’s cleanups for affiliations, authors, and donors from last week
<ul>
<li>I did some minor cleanups myself and applied them to CGSpace</li>
</ul>
</li>
<li>Start working on some batch uploads for IFPRI</li>
</ul>
<h2 id="2023-08-04">2023-08-04</h2>
<ul>
<li>Minor cleanups on IFPRI’s batch uploads
<ul>
<li>I also did a duplicate check and found thirteen items that seem to be duplicates, so I sent them to Leigh to check</li>
</ul>
</li>
<li>I read this <a href="https://www.endpointdev.com/blog/2012/06/logstatement-postgres-all-full-logging/">interesting blog post about PostgreSQL’s <code>log_statement</code> setting</a>
<ul>
<li>Someone pointed out that this also lets you take advantage of <a href="https://github.com/darold/pgbadger">PgBadger</a> analysis</li>
<li>I enabled statement logging on DSpace Test and I will check it in a few days</li>
</ul>
</li>
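</ul>
<p>For the record, enabling it on DSpace Test was something like the following, as a sketch (setting it in postgresql.conf and reloading would work just as well):</p>
<pre tabindex="0"><code>$ sudo -u postgres psql -c "ALTER SYSTEM SET log_statement = 'all';"
$ sudo -u postgres psql -c 'SELECT pg_reload_conf();'
</code></pre><ul>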
<li>Reading about DSpace 7 REST API again
<ul>
<li>Here is how to get the first page of 100 items: <a href="https://dspace7test.ilri.org/server/api/discover/search/objects?dsoType=item&page=1&size=100">https://dspace7test.ilri.org/server/api/discover/search/objects?dsoType=item&page=1&size=100</a></li>
<li>I really want to benchmark this to see how fast we can get all the pages (a rough sketch follows below)</li>
<li>Another thing I notice is that the bitstreams are not here, so that will be an extra call…</li>
</ul>
</li>
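</ul>
<p>As a rough sketch of that benchmark (the loop bound is arbitrary; a real run would read the total number of pages from the first response), I could time something like:</p>
<pre tabindex="0"><code>$ time for page in $(seq 1 10); do curl -s -o /dev/null "https://dspace7test.ilri.org/server/api/discover/search/objects?dsoType=item&page=$page&size=100"; done
</code></pre><ul>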
<li>I’m checking the PostgreSQL logs now that statement logging has been enabled for a few days on DSpace Test
<ul>
<li>I see the logs are about 7 or 8 GB, which is larger than expected—and this is the test server!</li>
<li>I will now play with pgbadger to see if it gives any useful insights</li>
<li>Hmm, it seems the <code>log_statement</code> advice was outdated, as pgbadger itself says:</li>
</ul>
</li>
</ul>
<blockquote>
<p>Do not enable log_statement as its log format will not be parsed by pgBadger.</p>
</blockquote>
<p>… and:</p>
<blockquote>
<p>Warning: Do not enable both log_min_duration_statement, log_duration and log_statement all together, this will result in wrong counter values. Note that this will also increase drastically the size of your log. log_min_duration_statement should always be preferred.</p>
</blockquote>
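<p>The configuration pgbadger’s documentation recommends instead revolves around <code>log_min_duration_statement</code> and a parseable <code>log_line_prefix</code>. Roughly something like this, as a sketch based on the pgbadger README (worth double checking the exact prefix there):</p>
<pre tabindex="0"><code>$ sudo -u postgres psql -c "ALTER SYSTEM SET log_statement = 'none';"
$ sudo -u postgres psql -c "ALTER SYSTEM SET log_min_duration_statement = 0;"
$ sudo -u postgres psql -c "ALTER SYSTEM SET log_line_prefix = '%t [%p]: user=%u,db=%d,app=%a,client=%h ';"
$ sudo -u postgres psql -c 'SELECT pg_reload_conf();'
</code></pre>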
<ul>
<li>So we need to follow pgbadger’s instructions instead in order to get a suitable log file
<ul>
<li>After enabling the new settings I see that our log file is going to be really big… hmm, I will check tomorrow morning</li>
<li>Ideally we would run this incremental report every day on postgresql-14-main.log.1, aka yesterday’s log file after it is rotated (sketched below)
<ul>
<li>Now I have to see how large the file will be…</li>
</ul>
</li>
</ul>
</li>
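</ul>
<p>The daily run I have in mind is roughly this, as a sketch (the output directory is arbitrary; pgbadger’s incremental mode keeps its own state and per-day reports there):</p>
<pre tabindex="0"><code>$ pgbadger -I -q -O /srv/www/pgbadger /var/log/postgresql/postgresql-14-main.log.1
</code></pre><ul>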
<li>I did some final updates to the ninety IFPRI records and uploaded them to DSpace Test first, then to CGSpace</li>
<li>I noticed that the DSpace statistics pages don’t seem to work on communities or collections
<ul>
<li>I finally took time to look in the DSpace log file and found this for one:</li>
</ul>
</li>
</ul>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-console" data-lang="console"><span style="display:flex;"><span>2023-08-16 14:30:31,873 WARN dace8f96-f034-488e-b38c-9f2eb5d0e002 6cbd0b18-6852-4294-99a5-02dfcab0a469 org.dspace.app.rest.exception.DSpaceApiExceptionControllerAdvice @ Request is invalid or incorrect (status:400 exception: Invalid UUID string: -1 at: java.base/java.util.UUID.fromString1(UUID.java:280))
</span></span></code></pre></div><ul>
<li>I’m surprised to see this because those should have been dealt with when we upgraded to DSpace 6
<ul>
<li>Looking in the Solr statistics core I see ~1,000,000 documents with the ID <code>-1</code>, and about 57,000,000 that don’t</li>
<li>Also interesting, faceting by <code>dateYear</code> I see:
<ul>
<li>2023: 209566</li>
<li>2022: 403871</li>
<li>2021: 336548</li>
<li>2020: 31659</li>
<li>… none before 2020</li>
</ul>
</li>
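</ul>
<p>One way to reproduce those counts is a facet query against the statistics core, something like this as a sketch (assuming the same Solr base URL as in the export below; quoting the ID avoids escaping the leading hyphen):</p>
<pre tabindex="0"><code>$ curl -s 'http://localhost:8081/solr/statistics/select' --data-urlencode 'q=id:"-1"' --data-urlencode 'rows=0' --data-urlencode 'facet=true' --data-urlencode 'facet.field=dateYear'
</code></pre>
<ul>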
<li>They are all type 5, which is “Site” aka the home page, according to <code>dspace-api/src/main/java/org/dspace/core/Constants.java</code></li>
<li>Ah hah, and I can see in my DSpace 7 test Solr there are a bunch of hits with <code>type: 5</code> that have “-1” of course, but also newer ones that have an actual UUID</li>
<li>I used the <code>/server/api/dso/find?uuid=3945ec23-2426-4fce-a2ea-48b38b91547f</code> endpoint to find out that there is a new <code>/server/api/core/sites</code> endpoint listing exactly one site (the home page) with this ID</li>
<li>So for now I can replace all the “-1” documents with this ID on the test server at least, then I will have to remember to do that during the migration of the production instance</li>
<li>I did a new export from DSpace 6 using solr-import-export-json with a query limiting it to documents of type 5 and negative 1 ID:</li>
</ul>
</li>
</ul>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-console" data-lang="console"><span style="display:flex;"><span>$ chrt -b <span style="color:#ae81ff">0</span> ./run.sh -s http://localhost:8081/solr/statistics -a export -o /tmp/statistics-fix-uuid.json -f <span style="color:#e6db74">'id:\-1 AND type:5 AND time:[2020-01-01T00\:00\:00Z TO 2023-12-31T23\:59\:59Z]'</span> -k uid -S actingGroupId,actingGroupParentId,actorMemberGroupId,author_mtdt,author_mtdt_search,bitstreamCount,bitstreamId,complete_query,complete_query_search,containerBitstream,containerCollection,containerCommunity,containerItem,core_update_run_nb,countryCode_ngram,countryCode_search,cua_version,dateYear,dateYearMonth,file_id,filterquery,first_name,geoipcountrycode,geoIpCountryCode,group_id,group_map,group_name,ip_ngram,ip_search,isArchived,isInternal,iso_mtdt,iso_mtdt_search,isWithdrawn,last_name,name,ngram_query_search,ngram_simplequery_search,orphaned,parent_count,p_communities_id,p_communities_map,p_communities_name,p_group_id,p_group_map,p_group_name,range,rangeDescription,rangeDescription_ngram,rangeDescription_search,range_ngram,range_search,referrer_ngram,referrer_search,simple_query,simple_query_search,solr_update_time_stamp,storage_nb_of_bitstreams,storage_size,storage_statistics_type,subject_mtdt,subject_mtdt_search,text,userAgent_ngram,userAgent_search,version_id,workflowItemId
</span></span></code></pre></div><ul>
<li>Then I replaced the IDs with the UUID of the site homepage on DSpace 7 Test:</li>
</ul>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-console" data-lang="console"><span style="display:flex;"><span>$ sed -i <span style="color:#e6db74">'s/"id":"-1"/"id":"3945ec23-2426-4fce-a2ea-48b38b91547f"/'</span> /tmp/statistics-fix-uuid.json
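</span></span><span style="display:flex;"><span># then re-import into the DSpace 7 Test statistics core (a sketch; I need to double check the Solr URL for the test instance)
</span></span><span style="display:flex;"><span>$ ./run.sh -s http://localhost:8983/solr/statistics -a import -o /tmp/statistics-fix-uuid.json -k uid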
</span></span></code></pre></div><ul>
<li>I re-imported those records and I no longer see the “-1” IDs, but still get the same error in the log
<ul>
<li>I don’t understand, maybe there is some voodoo, so I rebooted the server</li>
<li>Hmm, no, it’s not a voodoo cache issue, so I really need to debug this:</li>
</ul>
</li>
</ul>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-console" data-lang="console"><span style="display:flex;"><span>2023-08-16 15:44:07,122 WARN dace8f96-f034-488e-b38c-9f2eb5d0e002 036b88e6-7548-4852-9646-f345ce3bfcc2 org.dspace.app.rest.exception.DSpaceApiExceptionControllerAdvice @ Request is invalid or incorrect (status:400 exception: Invalid UUID string: -1 at: java.base/java.util.UUID.fromString1(UUID.java:280))
</span></span></code></pre></div><ul>
<li>On a related note, I figured out that the root site already has a UUID in DSpace 6, and it’s exactly the one above (3945ec23-2426-4fce-a2ea-48b38b91547f)
<ul>
<li>I noticed it while looking at the <a href="https://cgspace.cgiar.org/rest/hierarchy">DSpace 6 REST API’s hierarchy page</a> (a quick check is sketched below)</li>
<li>So I can update these “-1” IDs with “type:5” in our production I think…</li>
</ul>
</li>
</ul>
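<p>A quick way to confirm that, as a sketch (assuming the site UUID is exposed in the <code>id</code> field of the top-level object returned by the hierarchy endpoint):</p>
<pre tabindex="0"><code>$ curl -s https://cgspace.cgiar.org/rest/hierarchy | jq -r '.id'
</code></pre>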
<h2id="2023-08-17">2023-08-17</h2>
<ul>
<li>I decided to update the “-1” IDs in Solr on DSpace 6
<ul>
<li>Unfortunately, in Solr there is no way to update only documents matching a query, so we have to export and re-import</li>
<li>I exported all documents with “type:5” (Homepage) using solr-import-export-json as before, then replaced the ID in the JSON:</li>
</ul>
</li>
</ul>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-console" data-lang="console"><span style="display:flex;"><span>$ sed -i <span style="color:#e6db74">'s/"id":"-1"/"id":"3945ec23-2426-4fce-a2ea-48b38b91547f"/'</span> /tmp/statistics-fix-uuid.json
</span></span></code></pre></div><ul>
<li>(Oops, skipping the fields as in the earlier export was not necessary, since I’m importing back into the same DSpace 6 where those fields exist)</li>
<li>Then I re-imported:</li>
</ul>
<pre tabindex="0"><code>$ ./run.sh -s http://localhost:8081/solr/statistics -a import -o /tmp/statistics-fix-uuid.json -k uid
</code></pre><ul>
<li>This worked, but I still see new records coming in that have “id:-1” so I will need to repeat this during the migration.</li>
<li>I also notice many stats records that have erroneous cities: