<li>Work on preparation of new server for DSpace 7 migration
<ul>
<li>I’m not quite sure what we need to do for the Handle server</li>
<li>For now I just ran the <code>dspace make-handle-config</code> script and diffed it with the one from DSpace 6</li>
<li>I sent the bundle to the Handle admins to make sure it’s OK before we do the migration</li>
</ul>
</li>
<li>Continue testing and debugging the cgspace-java-helpers on DSpace 7</li>
<li>Work on IFPRI ISNAR archive cleanup</li>
</ul>
<h2id="2024-01-03">2024-01-03</h2>
<ul>
<li>I haven’t heard from the Handle admins so I’m preparing a backup solution using nginx streams</li>
<li>This seems to work in my simple tests (this must be outside the <code>http {}</code> block):</li>
</ul>
<pretabindex="0"><code>stream {
upstream handle_tcp_9000 {
server 188.34.177.10:9000;
}
server {
listen 9000;
proxy_connect_timeout 1s;
proxy_timeout 3s;
proxy_pass handle_tcp_9000;
}
}
</code></pre><ul>
<li>Here I forwarded TCP port 9000 from one server to another and was able to retrieve a test HTML page that was being served on the target
<ul>
<li>I will have to do TCP and UDP on port 2641, and TCP/HTTP on port 8000 (see the sketch after this list).</li>
</ul>
</li>
<li>I did some more minor work on the IFPRI ISNAR archive
<ul>
<li>I got some PDFs from the UMN AgEcon search and fixed some metadata</li>
<li>Then I did some duplicate checking and found five items already on CGSpace</li>
</ul>
</li>
</ul>
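<ul>
<li>A rough sketch of what the full stream configuration could look like, reusing the same upstream address from my port 9000 test above (I have not tested the UDP part yet, so this is just a starting point):</li>
</ul>
<pre tabindex="0"><code>stream {
    # Handle protocol: TCP and UDP on 2641
    upstream handle_tcp_2641 {
        server 188.34.177.10:2641;
    }
    upstream handle_udp_2641 {
        server 188.34.177.10:2641;
    }
    # Handle HTTP interface on 8000
    upstream handle_tcp_8000 {
        server 188.34.177.10:8000;
    }
    server {
        listen 2641;
        proxy_pass handle_tcp_2641;
    }
    server {
        listen 2641 udp;
        proxy_pass handle_udp_2641;
    }
    server {
        listen 8000;
        proxy_pass handle_tcp_8000;
    }
}
</code></pre>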
<h2id="2024-01-04">2024-01-04</h2>
<ul>
<li>Upload 692 items for the ISNAR archive to CGSpace (a sketch of the import command follows this list): <a href="https://cgspace.cgiar.org/handle/10568/136192">https://cgspace.cgiar.org/handle/10568/136192</a></li>
<li>Help Peter proof and upload 252 items from the 2023 Gender conference to CGSpace</li>
<li>Meeting with IFPRI to discuss their migration to CGSpace
<ul>
<li>We agreed to add two new fields, one for IFPRI project and one for IFPRI publication ranking</li>
<li>Most likely we will use <code>cg.identifier.project</code> as a general field and consolidate other project fields there</li>
<li>Not sure which field to use for the publication rank…</li>
</ul>
</li>
</ul>
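<ul>
<li>I didn’t note the exact commands here, but these bulk uploads are the usual Simple Archive Format imports, roughly like this (the eperson, source path, and mapfile are placeholders; the collection handle is the ISNAR archive above):</li>
</ul>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-console" data-lang="console"><span style="display:flex;"><span>$ dspace import --add --eperson=user@example.com --collection=10568/136192 --source=/tmp/isnar-saf --mapfile=/tmp/isnar.map
</span></span></code></pre></div>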
<h2id="2024-01-05">2024-01-05</h2>
<ul>
<li>Proof and upload 51 items in bulk for IFPRI</li>
<li>I did a big cleanup of user groups in anticipation of complaints about slow workflow tasks and other issues in DSpace 7
<ul>
<li>I removed ILRI editors from all the dozens of CCAFS community and collection groups, and I should do the same for the other CRPs since they have been closed for two years now</li>
<li>High load on the server and UptimeRobot saying the frontend is flapping
<ul>
<li>I noticed tons of logs from pm2 in the systemd journal, so I disabled those in the systemd unit because they are available from pm2’s log directory anyway</li>
<li>I also noticed the same for Solr, so I disabled stdout for that systemd unit as well</li>
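<li>The change is just a systemd drop-in, roughly like this (the unit and file names here are from memory, not copied from the server):</li>
</ul>
<pre tabindex="0"><code># /etc/systemd/system/solr.service.d/override.conf
[Service]
StandardOutput=null
</code></pre><ul>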
</ul>
</li>
<li>I spent a lot of time bringing back the nginx rate limits we used in DSpace 6 and it seems to have helped</li>
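<li>The limits are the standard nginx limit_req setup, along these lines (the zone name, size, and rates here are illustrative, not the exact values from our configuration):</li>
</ul>
<pre tabindex="0"><code># in the http {} block
limit_req_zone $binary_remote_addr zone=dspace_limit:16m rate=5r/s;

# in the relevant location {} blocks
limit_req zone=dspace_limit burst=20 nodelay;
</code></pre><ul>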
<li>Export list of publishers for Peter to select some amount to use as a controlled vocabulary:</li>
</ul>
<divclass="highlight"><pretabindex="0"style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><codeclass="language-console"data-lang="console"><spanstyle="display:flex;"><span>localhost/dspace7= ☘ \COPY (SELECT DISTINCT text_value AS "dcterms.publisher", count(*) FROM metadatavalue WHERE dspace_object_id in (SELECT dspace_object_id FROM item) AND metadata_field_id = 178 GROUP BY "dcterms.publisher" ORDER BY count DESC) to /tmp/2024-01-publishers.csv WITH CSV HEADER;
<li>Address some feedback on DSpace 7 from users, including filing some issues on GitHub
<ul>
<li><ahref="https://github.com/DSpace/dspace-angular/issues/2730">https://github.com/DSpace/dspace-angular/issues/2730</a>: List of available metadata fields is truncated when adding new metadata in “Edit Item”</li>
</ul>
</li>
<li>The Alliance TIP team was having issues posting to one collection via the legacy DSpace 6 REST API
<ul>
<li>In the DSpace logs I see the same issue that they had last month:</li>
</ul>
</li>
</ul>
<pretabindex="0"><code>ERROR unknown unknown org.dspace.rest.Resource @ Something get wrong. Aborting context in finally statement.
</code></pre><h2id="2024-01-09">2024-01-09</h2>
<ul>
<li>I restarted Tomcat to see if it helps the REST issue</li>
<li>After talking with Peter about publishers we decided to get a clean list of the top ~100 publishers and then make sure all CGIAR centers, Initiatives, and Impact Platforms are there as well
<ul>
<li>I exported a list from PostgreSQL, filtered by count > 40 in OpenRefine, and extracted the metadata values</li>
<li>Export a list of ORCID identifiers from PostgreSQL to look them up on ORCID and update our controlled vocabulary:</li>
</ul>
<divclass="highlight"><pretabindex="0"style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><codeclass="language-console"data-lang="console"><spanstyle="display:flex;"><span>localhost/dspace7= ☘ \COPY (SELECT DISTINCT(text_value) FROM metadatavalue WHERE dspace_object_id IN (SELECT uuid FROM item) AND metadata_field_id=247) to /tmp/2024-01-09-orcid-identifiers.txt;
<li>Bizu seems to be having issues due to belonging to too many groups
<ul>
<li>I see some messages from Solr in the DSpace log:</li>
</ul>
</li>
</ul>
<pretabindex="0"><code>2024-01-09 06:23:35,893 ERROR unknown unknown org.dspace.authorize.AuthorizeServiceImpl @ Failed getting getting community/collection admin status for bahhhhh@cgiar.org The search error is: Error from server at http://localhost:8983/solr/search: org.apache.solr.search.SyntaxError: Cannot parse 'search.resourcetype:Community AND (admin:eef481147-daf3-4fd2-bb8d-e18af8131d8c OR admin:g80199ef9-bcd6-4961-9512-501dea076607 OR admin:g4ac29263-cf0c-48d0-8be7-7f09317d50ec OR admin:g0e594148-a0f6-4f00-970d-6b7812f89540 OR admin:g0265b87a-2183-4357-a971-7a5b0c7add3a OR admin:g371ae807-f014-4305-b4ec-f2a8f6f0dcfa OR admin:gdc5cb27c-4a5a-45c2-b656-a399fded70de OR admin:ge36d0ece-7a52-4925-afeb-6641d6a348cc OR admin:g15dc1173-7ddf-43cf-a89a-77a7f81c4cfc OR admin:gc3a599d3-c758-46cd-9855-c98f6ab58ae4 OR admin:g3d648c3e-58c3-4342-b500-07cba10ba52d OR admin:g82bf5168-65c1-4627-8eb4-724fa0ea51a7 OR admin:ge751e973-697d-419c-b59b-5a5644702874 OR admin:g44dd0a80-c1e6-4274-9be4-9f342d74928c OR admin:g4842f9c2-73ed-476a-a81a-7167d8aa7946 OR admin:g5f279b3f-c2ce-4c75-b151-1de52c1a540e OR admin:ga6df8adc-2e1d-40f2-8f1e-f77796d0eecd OR admin:gfdfc1621-382e-437a-8674-c9007627565c OR admin:g15cd114a-0b89-442b-a1b4-1febb6959571 OR admin:g12aede99-d018-4c00-b4d4-a732541d0017 OR admin:gc59529d7-002a-4216-b2e1-d909afd2d4a9 OR admin:gd0806714-bc13-460d-bedd-121bdd5436a4 OR admin:gce70739a-8820-4d56-b19c-f191855479e4 OR admin:g7d3409eb-81e3-4156-afb1-7f02de22065f OR admin:g54bc009e-2954-4dad-8c30-be6a09dc5093 OR admin:gc5e1d6b7-4603-40d7-852f-6654c159dec9 OR admin:g0046214d-c85b-4f12-a5e6-2f57a2c3abb0 OR admin:g4c7b4fd0-938f-40e9-ab3e-447c317296c1 OR admin:gcfae9b69-d8dd-4cf3-9a4e-d6e31ff68731 OR ... admin:g20f366c0-96c0-4416-ad0b-46884010925f)': too many boolean clauses The search resourceType filter was: search.resourcetype:Community
</code></pre><ul>
<li>There are 1,805 OR clauses in the full log!
<ul>
<li>We previously had this issue in 2020-01 and 2020-02 with DSpace 5 and DSpace 6</li>
<li>At the time the solution was to increase the <code>maxBooleanClauses</code> in Solr and to disable access rights awareness, but I don’t think we want to do the second one now</li>
<li>I have seen Solr users in other applications increase this to obscenely high numbers, so I think we should be OK to increase it from 1024 to 2048</li>
</ul>
</li>
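<li>If we do raise it, one low-tech way on the Solr that ships with DSpace 7 is the global system property, something like this (a sketch from memory; the per-core <code>maxBooleanClauses</code> setting in solrconfig.xml is the other place to change it):</li>
</ul>
<pre tabindex="0"><code># in /etc/default/solr.in.sh (path assumed, depends on how Solr was installed)
SOLR_OPTS="$SOLR_OPTS -Dsolr.max.booleanClauses=2048"
</code></pre><ul>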
<li>Re-visiting the DSpace user groomer to delete inactive users
<ul>
<li>In 2023-08 I noticed that this was now <a href="https://github.com/DSpace/DSpace/pull/2928">possible in DSpace 7</a></li>
<li>As a test I tried to delete all users who have been inactive since six years ago (January 9, 2018):</li>
</ul>
</li>
</ul>
<divclass="highlight"><pretabindex="0"style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><codeclass="language-console"data-lang="console"><spanstyle="display:flex;"><span>$ dspace dsrun org.dspace.eperson.Groomer -a -b 01/09/2018 -d
</span></span></code></pre></div><ul>
<li>I tested it on DSpace 7 Test and it worked… I am debating running it on CGSpace…
<ul>
<li>I see we have almost 9,000 users:</li>
</ul>
</li>
</ul>
<divclass="highlight"><pretabindex="0"style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><codeclass="language-console"data-lang="console"><spanstyle="display:flex;"><span>$ dspace user -L > /tmp/users-before.txt
<li>I spent some time deleting old groups on CGSpace</li>
<li>I looked into the use of the <code>cg.identifier.ciatproject</code> field and found there are only a handful of uses, with some even seeming to be a mistake:</li>
</ul>
<divclass="highlight"><pretabindex="0"style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><codeclass="language-console"data-lang="console"><spanstyle="display:flex;"><span>localhost/dspace7= ☘ SELECT DISTINCT text_value AS "cg.identifier.ciatproject", count(*) FROM metadatavalue WHERE dspace_object_id in (SELECT dspace_object_id FROM item) AND metadata
</span></span><spanstyle="display:flex;"><span>_field_id = 232 GROUP BY "cg.identifier.ciatproject" ORDER BY count DESC;
<li>Export a list of affiliations to do some cleanup:</li>
</ul>
<divclass="highlight"><pretabindex="0"style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><codeclass="language-console"data-lang="console"><spanstyle="display:flex;"><span>localhost/dspace7= ☘ \COPY (SELECT DISTINCT text_value AS "cg.contributor.affiliation", count(*) FROM metadatavalue WHERE dspace_object_id in (SELECT dspace_object_id FROM item) AND metadata_field_id = 211 GROUP BY "cg.contributor.affiliation" ORDER BY count DESC) to /tmp/2024-01-affiliations.csv WITH CSV HEADER;
<li>I first did some clustering and editing in OpenRefine, then I’ll import those back into CGSpace and then do another export</li>
<li>Troubleshooting the statistics pages that aren’t working on DSpace 7
<ul>
<li>On a hunch, I queried for Solr statistics documents that <strong>did not have an <code>id</code> matching the 36-character UUID pattern</strong></li>
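<li>Roughly like this, using Solr’s regexp term syntax to match ids that are not exactly thirty-six characters long (a reconstruction, not necessarily the exact query I ran; curl’s -g stops it from globbing the braces):</li>
</ul>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-console" data-lang="console"><span style="display:flex;"><span>$ curl -g -s 'http://localhost:8983/solr/statistics/select?q=-id:/.{36}/&rows=0'
</span></span></code></pre></div><ul>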
<li>I see that we had 31,744 statistic events yesterday, and 799 have no <code>id</code>!</li>
<li>I asked about this on Slack and will file an issue on GitHub if someone else also finds such records
<ul>
<li>Several people said they have them, so it’s a bug of some sort in DSpace, not our configuration</li>
</ul>
</li>
</ul>
<h2id="2024-01-13">2024-01-13</h2>
<ul>
<li>Yesterday alone we had 37,000 unique IPs making requests to nginx
<ul>
<li>I looked up the ASNs and found 6,000 IPs from 47.128.0.0/14, a network in Amazon Singapore (counted roughly as sketched after this list)</li>
</ul>
</li>
</ul>
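<ul>
<li>One rough way to get these counts from the nginx access logs, assuming the default combined log format (the log file name and date here are illustrative):</li>
</ul>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-console" data-lang="console"><span style="display:flex;"><span>$ awk '{print $1}' /var/log/nginx/access.log.1 | sort -u > /tmp/2024-01-12-ips.txt
</span></span><span style="display:flex;"><span>$ grepcidr 47.128.0.0/14 /tmp/2024-01-12-ips.txt | wc -l
</span></span></code></pre></div>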
<h2id="2024-01-15">2024-01-15</h2>
<ul>
<li>Investigating the CSS selector warning that I’ve seen in PM2 logs:</li>
</ul>
<divclass="highlight"><pretabindex="0"style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><codeclass="language-console"data-lang="console"><spanstyle="display:flex;"><span>0|dspace-ui | 1 rules skipped due to selector errors:
<li>It seems to be a bug in Angular, as this selector comes from Bootstrap 4.6.x and is not actually invalid
<ul>
<li>But that led me to a more interesting issue with <code>inlineCritical</code> optimization for styles in Angular SSR that might be responsible for causing high load in the frontend</li>
<li>Looking these IPs up I see there are 18,000 coming from Comcast, 10,000 from AT&T, 4,110 from Charter, 3,500 from Cox, and dozens of other residential ISPs
<ul>
<li>I highly doubt these are home users browsing CGSpace… seems super fishy</li>
<li>Also, over 1,000 IPs from SpaceX Starlink in the last week. RIGHT</li>
<li>I will temporarily add a few new datacenter ISP network blocks to our rate limit:
<ul>
<li>16509 Amazon-02</li>
<li>701 UUNET</li>
<li>8075 Microsoft</li>
<li>15169 Google</li>
<li>14618 Amazon-AES</li>
<li>396982 Google Cloud</li>
</ul>
</li>
<li>The load on the server <em>immediately</em> dropped</li>