- I tried to run the `AtomicStatisticsUpdateCLI` CUA migration script on DSpace Test (linode26) again and it is still going very slowly and has tons of errors like I noticed yesterday
- I sent Atmire the dspace.log from today and told them to log into the server to debug the process
- In other news, I checked the statistics API on DSpace 6 and it's working
- I tried to build the OAI registry on the freshly migrated DSpace 6 on DSpace Test and I get an error:
<!--more-->
```
$ dspace oai import -c
OAI 2.0 manager action started
Loading @mire database changes for module MQM
Changes have been processed
Clearing index
Index cleared
Using full import.
Full import
java.lang.NullPointerException
at org.dspace.xoai.app.XOAI.willChangeStatus(XOAI.java:438)
at org.dspace.xoai.app.XOAI.index(XOAI.java:368)
at org.dspace.xoai.app.XOAI.index(XOAI.java:280)
at org.dspace.xoai.app.XOAI.indexAll(XOAI.java:227)
at org.dspace.xoai.app.XOAI.index(XOAI.java:134)
at org.dspace.xoai.app.XOAI.main(XOAI.java:560)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.dspace.app.launcher.ScriptLauncher.runOneCommand(ScriptLauncher.java:229)
at org.dspace.app.launcher.ScriptLauncher.main(ScriptLauncher.java:81)
There are no indexed documents, using full import.
Full import
java.lang.NullPointerException
at org.dspace.xoai.app.XOAI.willChangeStatus(XOAI.java:438)
at org.dspace.xoai.app.XOAI.index(XOAI.java:368)
at org.dspace.xoai.app.XOAI.index(XOAI.java:280)
at org.dspace.xoai.app.XOAI.indexAll(XOAI.java:227)
at org.dspace.xoai.app.XOAI.index(XOAI.java:143)
at org.dspace.xoai.app.XOAI.main(XOAI.java:560)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.dspace.app.launcher.ScriptLauncher.runOneCommand(ScriptLauncher.java:229)
at org.dspace.app.launcher.ScriptLauncher.main(ScriptLauncher.java:81)
```
- I found a [bug report on DSpace Jira](https://jira.lyrasis.org/browse/DS-4363) describing this issue affecting someone else running DSpace 6.3
- They suspect it has to do with the item having some missing group names in its authorization policies
- I added some debugging to `dspace-oai/src/main/java/org/dspace/xoai/app/XOAI.java` to print the Handle of the item that causes the crash and then I looked at its authorization policies
- Indeed there are some blank group names:
![Missing group names in DSpace 6.3 item authorization policy](/cgspace-notes/2020/06/item-authorizations-dspace63.png)
- The same item on CGSpace (DSpace 5.8) also has groups with no name:
![Missing group names in DSpace 5.8 item authorization policy](/cgspace-notes/2020/06/item-authorizations-dspace58.png)
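- One way to see these directly in the database on DSpace 5 is to look for policies on the item that are tied to neither a group nor an eperson, which is my reading of the blank rows (a sketch from memory of the schema; the `resource_id` is a placeholder for the item's internal ID):

```
dspace=# SELECT policy_id, action_id, eperson_id, epersongroup_id FROM resourcepolicy WHERE resource_type_id=2 AND resource_id=12345 AND eperson_id IS NULL AND epersongroup_id IS NULL;
```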
- I added some debugging and found exactly where this happens
- As it turns out we can just check whether the policy's group is null there, which allows the OAI import to proceed
- Aaaaand as it turns out, this was fixed in `dspace-6_x` in 2018 after DSpace 6.3 was released (see [DS-4019](https://jira.lyrasis.org/browse/DS-4019)), so that was a waste of three hours.
- I cherry picked 150e83558103ed7f50e8f323b6407b9cbdf33717 into our current `6_x-dev-atmire-modules` branch
- Maria was asking about some items they are trying to map from the CGIAR Big Data collection into their Alliance of Bioversity and CIAT journal articles collection, but for some reason the items don't show up in the item mapper
- The items don't even show up in the XMLUI Discover advanced search, and actually I don't even see any recent items on the recently submitted part of the collection (but the item pages exist of course)
- Perhaps I need to try a full Discovery re-index:
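- That is just the stock `index-discovery` launcher command with the `-b` flag to wipe and rebuild the whole index:

```
$ dspace index-discovery -b
```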
- Still I don't see the item in XMLUI search or in the item mapper (and I made sure to clear the Cocoon cache)
- I'm starting to think it's something related to the database transaction issue...
- I removed our custom JDBC driver from `/usr/local/apache-tomcat...` so that DSpace will use its own much older one, version 9.1-901-1.jdbc4
- I ran all system updates on the server (linode18) and rebooted it
- After it came back up I had to restart Tomcat five times before all Solr statistics cores came up properly
- Unfortunately this means that the Tomcat JDBC pooling via JNDI doesn't work, so we're using only the 30 connections reserved for the DSpace CLI from DSpace's own internal pool
- Perhaps the database pool issues we had a few years ago will be less severe now that we have much more aggressive blocking and rate limiting of bots in nginx
- I will also import a fresh database snapshot from CGSpace and check if I can map the item in my local environment
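- The local restore is along these lines (a sketch; the local database name, PostgreSQL superuser, and dump filename are placeholders):

```
$ dropdb -h localhost -U postgres dspacetest
$ createdb -h localhost -U postgres -O dspacetest --encoding=UNICODE dspacetest
$ pg_restore -h localhost -U postgres -d dspacetest -O --role=dspacetest /tmp/cgspace.backup
```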
- After importing and forcing a full reindex locally I can see the item in search and in the item mapper
- Abenet sent another message about two users who are having issues with submission, and I see the number of locks in PostgreSQL has skyrocketed again as of a few days ago
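- A quick way to eyeball those numbers from the command line (a sketch, assuming the database is named `dspace` and a user that is allowed to connect; these are the same figures the Munin graphs track):

```
$ psql -c 'SELECT count(*) FROM pg_locks;' dspace
$ psql -c "SELECT count(*) FROM pg_stat_activity WHERE state = 'idle in transaction';" dspace
```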
- I think I need to just leave this as is with the DSpace default JDBC driver for now, but perhaps I could also downgrade the Tomcat version (I deployed Tomcat 7.0.103 in March, so perhaps that's relevant)
- Also, I'll start *another* full reindexing to see if the issue with mapping is somehow also resolved now that the database connections are working better
- Perhaps related, but this one finished much faster
- Peter said he was annoyed with a CSV export from CGSpace because of the different `text_lang` attributes and asked if we can fix it
- The last time I normalized these was in 2019-06, and currently it looks like this:
```
dspace=# SELECT DISTINCT text_lang, count(text_lang) FROM metadatavalue WHERE resource_type_id=2 GROUP BY text_lang ORDER BY count DESC;
  text_lang  |  count
-------------+---------
 en_US       | 2158377
 en          |  149540
             |   49206
 es_ES       |      18
 fr          |       4
 Peer Review |       1
             |       0
(7 rows)
```
- In theory we can have different languages for metadata fields but in practice we don't do that, so we might as well normalize everything to "en_US" (and perhaps I should make a curation task to do this)
- For now I will do it manually on CGSpace and DSpace Test:
```
dspace=# UPDATE metadatavalue SET text_lang='en_US' WHERE resource_type_id=2;
UPDATE 2414738
```
- Note: DSpace Test doesn't have the `resource_type_id` column because it's running DSpace 6 and [the schema changed to use an object model there](https://wiki.lyrasis.org/display/DSPACE/DSpace+Service+based+api)
- We need to use this on DSpace 6:
```
dspace=# UPDATE metadatavalue SET text_lang='en_US' WHERE dspace_object_id IN (SELECT uuid FROM item);
```
- Peter asked if it was possible to find all ILRI items that have "zoonoses" or "zoonotic" in their titles and check if they have the ILRI subject "ZOONOTIC DISEASES" (and add it if not)
- Unfortunately the only way we have currently would be to export the entire ILRI community as a CSV and filter/edit it in OpenRefine
## 2020-06-08
- I manually mapped the two Big Data items that Maria had asked about last week by exporting their metadata to CSV and re-importing it
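- The trick is that the `collection` column in the metadata CSV can hold several collection handles separated by `||`, with the owning collection first, so appending the target collection and re-importing maps the item (a sketch; all three handles below are placeholders):

```
$ dspace metadata-export -i 10568/00001 -f /tmp/item.csv
$ # edit /tmp/item.csv so the collection column reads 10568/00002||10568/00003
$ dspace metadata-import -f /tmp/item.csv
```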
- I still need to look into the underlying issue there, seems to be something in Solr
- Something strange is that, when I search for part of the title in Discovery I get 2,000 results on CGSpace, while on my local DSpace 5.8 environment I get 2!
- On DSpace Test, which is currently running DSpace 6, I get 2,000 results but the top one is the correct match and the item does show up in the item mapper
- Interestingly, if I search directly in the Solr `search` core on CGSpace with a query like `handle:10568/108315` I don't see the item, but on my local Solr I see them!
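- The query was along these lines (the port is just where Solr listens locally on our servers, so adjust as needed):

```
$ curl -s 'http://localhost:8081/solr/search/select' --data-urlencode 'q=handle:"10568/108315"' --data-urlencode 'rows=0'
```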
- Peter asked if it was easy for me to add ILRI subject "ZOONOTIC DISEASES" to any items in the ILRI community that had "zoonotic" or "zoonoses" in their title, but were missing the ILRI subject
- I exported the ILRI community metadata, cut the three fields I needed, and then filtered and edited the CSV in OpenRefine:
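- Roughly like this (the ILRI community handle and the exact column names are from memory, so treat them as placeholders):

```
$ dspace metadata-export -i 10568/1 -f /tmp/ilri.csv
$ csvcut -c 'id,dc.title[en_US],cg.subject.ilri[en_US]' /tmp/ilri.csv > /tmp/ilri-zoonoses.csv
```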
- So it seems to be related to the database, perhaps because there are fewer connections in the pool?
- ... and on that note, running without our custom JDBC driver and using DSpace's built-in connection pool since 2020-06-04 hasn't actually solved anything: the issue with locks and "idle in transaction" connections is creeping up again!
- It seems to have started today around 10:00 AM... I need to pore over the logs to see if there is a correlation
- I think there is some kind of attack going on because I see a bunch of requests for sequential Handles from a similar IP range in a datacenter in Sweden where the user *does not* re-use their DSpace `session_id`
- Looking in the nginx logs I see most (all?) of these requests are using the following user agent:
```
Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/74.0.3729.169 Safari/537.36
```
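- A quick tally of the hosts sending that user agent, assuming the default combined log format where the client IP is the first field:

```
# zcat --force /var/log/nginx/access.log* | grep 'Chrome/74.0.3729.169' | awk '{print $1}' | sort | uniq -c | sort -rn | head
```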
- Looking at the nginx access logs I see that, other than something that seems like Google Feedburner, all hosts using this user agent are in Sweden, so I purged their hits from the Solr statistics:
```
Purging 1423 hits from 192.36.136.246 in statistics
Purging 1387 hits from 192.36.241.95 in statistics
Purging 1398 hits from 192.165.45.204 in statistics
Purging 1413 hits from 192.36.119.28 in statistics
Purging 1418 hits from 192.36.217.7 in statistics
Purging 1418 hits from 192.121.146.160 in statistics
Purging 1416 hits from 192.36.23.35 in statistics
Purging 1449 hits from 192.36.109.94 in statistics
Purging 1440 hits from 192.36.24.93 in statistics
Purging 1465 hits from 192.36.154.13 in statistics
Purging 1447 hits from 192.36.137.125 in statistics
Purging 1453 hits from 192.176.249.42 in statistics
Purging 1462 hits from 192.36.166.120 in statistics
Purging 1499 hits from 192.36.172.86 in statistics
Purging 1457 hits from 192.36.198.145 in statistics
Purging 1467 hits from 192.36.226.212 in statistics
Purging 1489 hits from 192.121.136.49 in statistics
Purging 1478 hits from 192.36.207.54 in statistics
Purging 1502 hits from 192.36.121.98 in statistics
Purging 1544 hits from 192.36.173.93 in statistics
Total number of bot hits purged: 29025
```
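- The purging itself boils down to a Solr delete-by-query on the statistics core for each IP, something like this (the local Solr port here is an assumption):

```
$ curl -s 'http://localhost:8081/solr/statistics/update?softCommit=true' -H 'Content-Type: text/xml' --data-binary '<delete><query>ip:192.36.136.246</query></delete>'
```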
- Skype with Enrico, Moayad, Jane, Peter, and Abenet to see the latest OpenRXV/AReS developments
- One thing Enrico mentioned to me during the call was that they had issues with Altmetric's user agents, and he said they are apparently using `Altmetribot` and `Postgenomic V2`
- I looked in our logs and indeed we have those, so I will add them to the nginx rate limit bypass
- I checked the Solr stats and it seems there are only a few thousand hits in 2016 and a few hundred in other years, so I won't bother adding them to the DSpace robot user agents list
- Atmire sent an updated pull request for the Font Awesome 5 update for CUA ([#445](https://github.com/ilri/DSpace/pull/445)) so I filed feedback on [their tracker](https://tracker.atmire.com/tickets-cgiar-ilri/view-ticket?id=706)
## 2020-06-09
- I'm still thinking about the issue with PostgreSQL "idle in transaction" and "waiting for lock" connections
- As far as I can see from the Munin graphs this issue started in late April or early May
- I don't see any PostgreSQL updates around then, though I did update Tomcat to version 7.0.103 in March
- I will try to downgrade Tomcat to 7.0.99, which was the version I was using until early February, before we had seen any issues
- Also, I will use the PostgreSQL JDBC driver version 42.2.9, which is what we were using back then as well
- After deploying Tomcat 7.0.99 I had to restart Tomcat three times before all the Solr statistics cores came up OK
- Well look at that, the "idle in transaction" and locking issues started in April on DSpace Test too...
![PostgreSQL connections year DSpace Test](/cgspace-notes/2020/06/postgres_connections_ALL-day2.png)
- Atmire sent some questions about DSpace Test related to our ongoing CUA indexing issue
- I had to clarify a few build steps and directories on the test server
- I notice that the PostgreSQL connection issues have not come back since 2020-06-09 when I downgraded Tomcat to 7.0.99... fingers crossed that it was something related to that!
- On that note I notice that the AReS explorer is still not harvesting CGSpace properly...
- I looked at the REST API logs on CGSpace (linode18) and saw that the AReS harvester is being denied due to not having a user agent, oops:
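- For example, requests that arrive with an empty user agent show up like this (assuming the combined log format, where the user agent is the sixth quote-delimited field; the log path is from memory):

```
# awk -F'"' '$6 == "-"' /var/log/nginx/rest.log | tail -n 5
```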
- I created an nginx map based on the host's IP address that sets a temporary user agent (ua) and then changed the conditional in the REST API location block so that it checks this mapped ua instead of the default one
- That should allow AReS to harvest for now until they update their user agent
- I restarted the AReS server's docker containers with `docker-compose down` and `docker-compose up -d` and the next day I saw CGSpace was in AReS again finally
- Then I formatted it into a SQL query and exported a CSV:
```
dspace=# \COPY (SELECT DISTINCT text_value AS author, COUNT(*) FROM metadatavalue WHERE metadata_field_id = (SELECT metadata_field_id FROM metadatafieldregistry WHERE element = 'contributor' AND qualifier = 'author') AND resource_type_id = 2 AND resource_id IN (SELECT item_id FROM collection2item WHERE collection_id IN (SELECT resource_id FROM handle WHERE handle IN ('10568/100533', '10568/100653', '10568/101955', '10568/106580', '10568/108469', '10568/51671', '10568/53085', '10568/53086', '10568/53087', '10568/53088', '10568/53089', '10568/53090', '10568/53091', '10568/53092', '10568/53093', '10568/53094', '10568/64874', '10568/69069', '10568/70150', '10568/88229', '10568/89346', '10568/89347', '10568/99301', '10568/99302', '10568/99303', '10568/99304', '10568/99428'))) GROUP BY text_value ORDER BY count DESC) TO /tmp/cip-authors.csv WITH CSV;
```
- Macaroni Bros emailed me to say that they are having issues with thumbnail links on the REST API
- For some reason all the bitstream `retrieveLink` links are wrong because they use `/bitstreams/` instead of `/rest/bitstreams/`, which leads to an HTTP 404
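- For example, the bad `retrieveLink` values are easy to see with a quick query of the DSpace 5 REST API (the item ID is a placeholder):

```
$ curl -s -H 'Accept: application/json' 'https://cgspace.cgiar.org/rest/items/12345/bitstreams' | jq '.[].retrieveLink'
```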
- I looked on DSpace Test, which is running DSpace 6 dev branch right now, and the links are OK there!
- Looks like someone [reported this issue on DSpace 6](https://jira.lyrasis.org/browse/DS-3193)
- Other large DSpace 5 sites have this same issue: https://openknowledge.worldbank.org/handle/10986/30568
- I can't believe nobody ever noticed this before...
- I tried to port the patch from DS-3193 to DSpace 5.x and it builds, but causes an HTTP 500 Internal Server error when generating bitstream links
- Well, the correct URL should have `/rest/` anyway, and that's how the URLs are in DSpace 6, so I will tell Macaroni to just make sure that those links use `/rest/`
## 2020-06-16
- Looks like the PostgreSQL connection/lock issue might be fixed because it's been six days with no recurrence
- The PostgreSQL connections and locks still look OK on both CGSpace (linode18) and DSpace Test (linode26) after ten days or so
- I decided to upgrade the JDBC driver on DSpace Test from 42.2.9 to 42.2.14 to leave it for a few weeks and see if the issue comes back, as it was present on the test server with DSpace 6 as well
- As far as I can tell the issue is related to something in Tomcat > 7.0.99
- Run system updates and reboot DSpace Test
- Re-deploy `5_x-prod` branch on CGSpace, run system updates, and reboot the server
- I had to restart Tomcat 7 once after the server came back up because some of the Solr statistics cores didn't come up the first time
- Unfortunately I realized that I had forgotten to update the `5_x-prod` branch to include the REST API fix from last week so I had to rebuild and restart Tomcat again
## 2020-06-22
- Skype with Peter, Abenet, Enrico, Jane, and Moayad about the latest OpenRXV developments
- I will go visit CodeObia later this week to run through the list of issues and close the ones that are finished
- Peter said he's having problems submitting an item on CGSpace and shit, it seems to be the same fucking PostgreSQL "idle in transaction" and "waiting for lock" issue we've been having sporadically the last few months
![PostgreSQL connections year CGSpace](/cgspace-notes/2020/06/postgres_connections_ALL-day3.png)
- The issue hadn't occurred in almost two weeks after I downgraded Tomcat to 7.0.99 with the PostgreSQL JDBC driver version 42.2.9 so I thought it was fixed
- Apparently it's not related to the Tomcat or JDBC driver version, as even when I reverted back to DSpace's really old built-in JDBC driver it still did the same thing!
- Could it be a memory leak or something? Why now?
- For now I will revert to the latest Tomcat 7.0.104 and PostgreSQL JDBC 42.2.14