<li>I tried to run the <code>AtomicStatisticsUpdateCLI</code> CUA migration script on DSpace Test (linode26) again and it is still going very slowly and has tons of errors like I noticed yesterday
<ul>
<li>I sent Atmire the dspace.log from today and told them to log into the server to debug the process</li>
</ul>
</li>
<li>In other news, I checked the statistics API on DSpace 6 and it’s working</li>
<li>I tried to build the OAI registry on the freshly migrated DSpace 6 on DSpace Test and I get an error:</li>
</ul>
<pre><code>$ dspace oai import -c
OAI 2.0 manager action started
Loading @mire database changes for module MQM
Changes have been processed
Clearing index
Index cleared
Using full import.
Full import
java.lang.NullPointerException
at org.dspace.xoai.app.XOAI.willChangeStatus(XOAI.java:438)
at org.dspace.xoai.app.XOAI.index(XOAI.java:368)
at org.dspace.xoai.app.XOAI.index(XOAI.java:280)
at org.dspace.xoai.app.XOAI.indexAll(XOAI.java:227)
at org.dspace.xoai.app.XOAI.index(XOAI.java:134)
at org.dspace.xoai.app.XOAI.main(XOAI.java:560)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.dspace.app.launcher.ScriptLauncher.runOneCommand(ScriptLauncher.java:229)
at org.dspace.app.launcher.ScriptLauncher.main(ScriptLauncher.java:81)
</code></pre><h2 id="2020-06-02">2020-06-02</h2>
<ul>
<li>I noticed that I was able to do a partial OAI import (i.e., without <code>-c</code>)
<ul>
<li>Then I tried to clear the OAI Solr core and import, but I get the same error:</li>
</ul>
</li>
</ul>
<pre><code>There are no indexed documents, using full import.
Full import
java.lang.NullPointerException
at org.dspace.xoai.app.XOAI.willChangeStatus(XOAI.java:438)
at org.dspace.xoai.app.XOAI.index(XOAI.java:368)
at org.dspace.xoai.app.XOAI.index(XOAI.java:280)
at org.dspace.xoai.app.XOAI.indexAll(XOAI.java:227)
at org.dspace.xoai.app.XOAI.index(XOAI.java:143)
at org.dspace.xoai.app.XOAI.main(XOAI.java:560)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.dspace.app.launcher.ScriptLauncher.runOneCommand(ScriptLauncher.java:229)
at org.dspace.app.launcher.ScriptLauncher.main(ScriptLauncher.java:81)
</code></pre><ul>
<li>I found a <a href="https://jira.lyrasis.org/browse/DS-4363">bug report on DSpace Jira</a> describing this issue affecting someone else running DSpace 6.3
<ul>
<li>They suspect it has to do with the item having some missing group names in its authorization policies</li>
<li>I added some debugging to <code>dspace-oai/src/main/java/org/dspace/xoai/app/XOAI.java</code> to print the Handle of the item that causes the crash and then I looked at its authorization policies</li>
<li>Indeed there are some blank group names:</li>
</ul>
</li>
</ul>
<p><img src="/cgspace-notes/2020/06/item-authorizations-dspace63.png" alt="Missing group names in DSpace 6.3 item authorization policy"></p>
<ul>
<li>The same item on CGSpace (DSpace 5.8) also has groups with no name:</li>
</ul>
<p><img src="/cgspace-notes/2020/06/item-authorizations-dspace58.png" alt="Missing group names in DSpace 5.8 item authorization policy"></p>
<ul>
<li>I added some debugging and found exactly where this happens
<ul>
<li>As it turns out we can just check whether the policy’s group is null there, which allows the OAI import to proceed (see the Java sketch after this list)</li>
<li>Aaaaand as it turns out, this was fixed in <code>dspace-6_x</code> in 2018 after DSpace 6.3 was released (see <a href="https://jira.lyrasis.org/browse/DS-4019">DS-4019</a>), so that was a waste of three hours.</li>
<li>I cherry-picked 150e83558103ed7f50e8f323b6407b9cbdf33717 into our current <code>6_x-dev-atmire-modules</code> branch</li>
</ul>
</li>
</ul>
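<p>For reference, the shape of that guard is roughly the following (an illustrative sketch, not the verbatim DS-4019 patch; the class and method names here are mine):</p>
<pre><code>import org.dspace.authorize.ResourcePolicy;
import org.dspace.eperson.Group;

public class AnonymousReadCheck {
    // Returns true if any of the given READ policies belongs to the Anonymous group.
    // The important part is the null check on policy.getGroup(): a policy with a
    // missing group is skipped instead of triggering the NullPointerException we
    // saw at XOAI.java:438.
    public static boolean hasAnonymousRead(ResourcePolicy[] readPolicies) {
        for (ResourcePolicy policy : readPolicies) {
            Group group = policy.getGroup(); // can be null for broken policies
            if (group == null) {
                continue;
            }
            if ("Anonymous".equals(group.getName())) {
                return true;
            }
        }
        return false;
    }
}
</code></pre>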
<ul>
<li>Maria was asking about some items they are trying to map from the CGIAR Big Data collection into their Alliance of Bioversity and CIAT journal articles collection, but for some reason the items don’t show up in the item mapper
<ul>
<li>The items don’t even show up in the XMLUI Discover advanced search, and actually I don’t even see any recent items on the recently submitted part of the collection (but the item pages exist of course)</li>
<li>Perhaps I need to try a full Discovery re-index:</li>
</ul>
</li>
</ul>
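<p>The full re-index is just DSpace’s <code>index-discovery</code> launcher with <code>-b</code> (delete and rebuild the whole Discovery index rather than doing an incremental update), run as the DSpace user on the server; something like:</p>
<pre><code>$ time dspace index-discovery -b
</code></pre>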
<ul>
<li>Still I don’t see the item in XMLUI search or in the item mapper (and I made sure to clear the Cocoon cache)
<ul>
<li>I’m starting to think it’s something related to the database transaction issue…</li>
<li>I removed our custom JDBC driver from <code>/usr/local/apache-tomcat...</code> so that DSpace will use its own much older one, version 9.1-901-1.jdbc4</li>
<li>I ran all system updates on the server (linode18) and rebooted it</li>
<li>After it came back up I had to restart Tomcat five times before all Solr statistics cores came up properly</li>
<li>Unfortunately this means that the Tomcat JDBC pooling via JNDI doesn’t work, so we’re using only the 30 connections reserved for the DSpace CLI from DSpace’s own internal pool</li>
<li>Perhaps our previous issues with the database pool from a few years ago will be less now that we have much more aggressive blocking and rate limiting of bots in nginx</li>
<li>I will also import a fresh database snapshot from CGSpace and check if I can map the item in my local environment
<ul>
<li>After importing and forcing a full reindex locally I can see the item in search and in the item mapper</li>
</ul>
</li>
<li>Abenet sent another message about two users who are having issues with submission, and I see the number of locks in PostgreSQL has skyrocketed again as of a few days ago:</li>
</ul>
</li>
</ul>
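<p>A quick way to eyeball this is to join <code>pg_locks</code> against <code>pg_stat_activity</code> and to count connections by state (a sketch; adjust the connection options and database name for the environment):</p>
<pre><code>$ psql -c "SELECT count(*) FROM pg_locks pl LEFT JOIN pg_stat_activity psa ON pl.pid = psa.pid WHERE psa.datname = 'dspace';"
$ psql -c "SELECT state, count(*) FROM pg_stat_activity WHERE datname = 'dspace' GROUP BY state;"
</code></pre>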
<ul>
<li>I think I need to just leave this as is with the DSpace default JDBC driver for now, but perhaps I could also downgrade the Tomcat version (I deployed Tomcat 7.0.103 in March, so perhaps that’s relevant)</li>
<li>Also, I’ll start <em>another</em> full reindexing to see if the issue with mapping is somehow also resolved now that the database connections are working better
<ul>
<li>Perhaps related, but this one finished much faster:</li>
<li>Peter said he was annoyed with a CSV export from CGSpace because of the different <code>text_lang</code> attributes and asked if we can fix it</li>
<li>The last time I normalized these was in 2019-06, and currently it looks like this:</li>
</ul>
</li>
</ul>
<pre><code>dspace=# SELECT DISTINCT text_lang, count(text_lang) FROM metadatavalue WHERE resource_type_id=2 GROUP BY text_lang ORDER BY count DESC;
</code></pre><ul>
<li>In theory we can have different languages for metadata fields but in practice we don’t do that, so we might as well normalize everything to “en_US” (and perhaps I should make a curation task to do this)</li>
<li>For now I will do it manually on CGSpace and DSpace Test:</li>
</ul>
<pre><code>dspace=# UPDATE metadatavalue SET text_lang='en_US' WHERE resource_type_id=2;
UPDATE 2414738
</code></pre><ul>
<li>Note: DSpace Test doesn’t have the <code>resource_type_id</code> column because it’s running DSpace 6 and <a href="https://wiki.lyrasis.org/display/DSPACE/DSpace+Service+based+api">the schema changed to use an object model there</a>
<ul>
<li>We need to use this on DSpace 6:</li>
</ul>
</li>
</ul>
<pre><code>dspace=# UPDATE metadatavalue SET text_lang='en_US' WHERE dspace_object_id IN (SELECT uuid FROM item);
</code></pre><ul>
<li>Peter asked if it was possible to find all ILRI items that have “zoonoses” or “zoonotic” in their titles and check if they have the ILRI subject “ZOONOTIC DISEASES” (and add it if not)
<ul>
<li>Unfortunately the only way we currently have is to export the entire ILRI community as a CSV and filter/edit it in OpenRefine</li>
</ul>
</li>
</ul>
<h2 id="2020-06-08">2020-06-08</h2>
<ul>
<li>I manually mapped the two Big Data items that Maria had asked about last week by exporting their metadata to CSV and re-importing it (see the sketch after this list)
<ul>
<li>I still need to look into the underlying issue there; it seems to be something in Solr</li>
<li>Something strange is that, when I search for part of the title in Discovery I get 2,000 results on CGSpace, while on my local DSpace 5.8 environment I get 2!</li>
<li>On DSpace Test, which is currently running DSpace 6, I get 2,000 results but the top one is the correct match and the item does show up in the item mapper
<ul>
<li>Interestingly, if I search directly in the Solr <code>search</code> core on CGSpace with a query like <code>handle:10568/108315</code> I don’t see the item, but on my local Solr I see them!</li>
</ul>
</li>
</ul>
</li>
</ul>
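<p>The manual mapping is just DSpace’s batch metadata editing: export the item’s metadata, append the target collection’s handle to the <code>collection</code> column (the first handle is the owning collection, any further ones are mappings), and re-import. Roughly, with placeholder handles and email:</p>
<pre><code>$ dspace metadata-export -i 10568/XXXXX -f /tmp/item.csv
# edit /tmp/item.csv so the collection column reads: 10568/OWNING||10568/TARGET
$ dspace metadata-import -f /tmp/item.csv -e admin@example.com
</code></pre>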
<ul>
<li>Peter asked if it was easy for me to add ILRI subject “ZOONOTIC DISEASES” to any items in the ILRI community that had “zoonotic” or “zoonoses” in their title, but were missing the ILRI subject
<ul>
<li>I exported the ILRI community metadata, cut the three fields I needed, and then filtered and edited the CSV in OpenRefine:</li>
</ul>
</li>
</ul>
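<p>Roughly something like this, assuming the ILRI community handle and the usual CGSpace column names (both placeholders here):</p>
<pre><code>$ dspace metadata-export -i 10568/XXXXX -f /tmp/2020-06-08-ILRI.csv
$ csvcut -c 'id,dc.title[en_US],cg.subject.ilri[en_US]' /tmp/2020-06-08-ILRI.csv > /tmp/ilri-zoonoses.csv
</code></pre>
<p>Then in OpenRefine it is just a text filter on the title column for “zoono” and filling in “ZOONOTIC DISEASES” in the ILRI subject column where it is missing, before re-importing the CSV.</p>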
<ul>
<li>So it seems to be related to the database, perhaps because there are fewer connections in the pool?
<ul>
<li>… and on that note, working without the custom JDBC driver (i.e., with DSpace’s built-in connection pool) since 2020-06-04 hasn’t actually solved anything: the issue with locks and idle-in-transaction connections is creeping up again!</li>
<li>It seems to have started today around 10:00 AM… I need to pore over the logs to see if there is a correlation
<ul>
<li>I think there is some kind of attack going on because I see a bunch of requests for sequential Handles from a similar IP range in a datacenter in Sweden where the user <em>does not</em> re-use their DSpace <code>session_id</code></li>
<li>Looking in the nginx logs I see most (all?) of these requests are using the following user agent:</li>
</ul>
</li>
</ul>
</li>
</ul>
<pre><code>Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/74.0.3729.169 Safari/537.36
</code></pre><ul>
<li>Looking at the nginx access logs I see that, other than something that seems like Google Feedburner, all hosts using this user agent are in Sweden!</li>
<li>Purging the hits from these IPs in the Solr statistics core:</li>
</ul>
<pre><code>Purging 1423 hits from 192.36.136.246 in statistics
Purging 1387 hits from 192.36.241.95 in statistics
Purging 1398 hits from 192.165.45.204 in statistics
Purging 1413 hits from 192.36.119.28 in statistics
Purging 1418 hits from 192.36.217.7 in statistics
Purging 1418 hits from 192.121.146.160 in statistics
Purging 1416 hits from 192.36.23.35 in statistics
Purging 1449 hits from 192.36.109.94 in statistics
Purging 1440 hits from 192.36.24.93 in statistics
Purging 1465 hits from 192.36.154.13 in statistics
Purging 1447 hits from 192.36.137.125 in statistics
Purging 1453 hits from 192.176.249.42 in statistics
Purging 1462 hits from 192.36.166.120 in statistics
Purging 1499 hits from 192.36.172.86 in statistics
Purging 1457 hits from 192.36.198.145 in statistics
Purging 1467 hits from 192.36.226.212 in statistics
Purging 1489 hits from 192.121.136.49 in statistics
Purging 1478 hits from 192.36.207.54 in statistics
Purging 1502 hits from 192.36.121.98 in statistics
Purging 1544 hits from 192.36.173.93 in statistics
Total number of bot hits purged: 29025
</code></pre><ul>
<li>Skype with Enrico, Moayad, Jane, Peter, and Abenet to see the latest OpenRXV/AReS developments
<ul>
<li>One thing Enrico mentioned to me during the call was that they had issues with Altmetric’s user agents, and he said they are apparently using <code>Altmetribot</code> and <code>Postgenomic V2</code></li>
<li>I looked in our logs and indeed we have those, so I will add them to the nginx rate limit bypass</li>
<li>I checked the Solr stats and it seems there are only a few thousand hits in 2016 and a few hundred in other years, so I won’t bother adding them to the DSpace robot user agents list</li>
</ul>
</li>
<li>Atmire sent an updated pull request for the Font Awesome 5 update for CUA (<a href="https://github.com/ilri/DSpace/pull/445">#445</a>) so I filed feedback on <a href="https://tracker.atmire.com/tickets-cgiar-ilri/view-ticket?id=706">their tracker</a></li>
<li>Atmire sent some questions about DSpace Test related to our ongoing CUA indexing issue
<ul>
<li>I had to clarify a few build steps and directories on the test server</li>
</ul>
</li>
<li>I notice that the PostgreSQL connection issues have not come back since 2020-06-09 when I downgraded Tomcat to 7.0.99… fingers crossed that it was something related to that!
<ul>
<li>On that note I notice that the AReS explorer is still not harvesting CGSpace properly…</li>
<li>I looked at the REST API logs on CGSpace (linode18) and saw that the AReS harvester is being denied due to not having a user agent, oops:</li>
<li>I created an nginx map based on the host’s IP address that sets a temporary user agent (ua) and then changed the conditional in the REST API location block so that it checks this mapped ua instead of the default one (see the nginx sketch below)
<ul>
<li>That should allow AReS to harvest for now until they update their user agent</li>
<li>I restarted the AReS server’s docker containers with <code>docker-compose down</code> and <code>docker-compose up -d</code> and the next day I finally saw CGSpace in AReS again</li>
</ul>
</li>
</ul>
</li>
</ul>
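<p>The nginx change is essentially a <code>map</code> keyed on the client address that supplies a stand-in user agent, which the REST API location block then tests instead of <code>$http_user_agent</code> directly. A sketch, with a placeholder address and agent string:</p>
<pre><code># in the http context: give the AReS harvester's IP a temporary user agent so
# the empty-user-agent check in the REST API block no longer denies it
map $remote_addr $ua {
    default        $http_user_agent;
    192.0.2.10     'AReS harvester (temporary)';
}

# in the server context: test the mapped value instead of $http_user_agent
location /rest {
    if ($ua = '') {
        return 403;
    }
    # ... existing proxy_pass and other REST API settings
}
</code></pre>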
<ul>
<li>Then I formatted the list of collection handles into a SQL query and exported a CSV of the authors:</li>
</ul>
<pre><code>dspace=# \COPY (SELECT DISTINCT text_value AS author, COUNT(*) FROM metadatavalue WHERE metadata_field_id = (SELECT metadata_field_id FROM metadatafieldregistry WHERE element = 'contributor' AND qualifier = 'author') AND resource_type_id = 2 AND resource_id IN (SELECT item_id FROM collection2item WHERE collection_id IN (SELECT resource_id FROM hANDle WHERE hANDle IN ('10568/100533', '10568/100653', '10568/101955', '10568/106580', '10568/108469', '10568/51671', '10568/53085', '10568/53086', '10568/53087', '10568/53088', '10568/53089', '10568/53090', '10568/53091', '10568/53092', '10568/53093', '10568/53094', '10568/64874', '10568/69069', '10568/70150', '10568/88229', '10568/89346', '10568/89347', '10568/99301', '10568/99302', '10568/99303', '10568/99304', '10568/99428'))) GROUP BY text_value ORDER BY count DESC) TO /tmp/cip-authors.csv WITH CSV;