- Discuss some OpenRXV issues with Abdullah from CodeObia
- He's trying to work on the DSpace 6+ metadata schema autoimport using the DSpace 6+ REST API
- Also, we found some issues building and running OpenRXV currently due to ecosystem shift in the Node.js dependencies
<!--more-->
## 2021-03-02
- I fixed three build and runtime issues in OpenRXV:
  - [fix highcharts-angular and ngx-tour-core build](https://github.com/ilri/OpenRXV/pull/80)
  - [frontend/package.json: Pin @types/ramda at 0.27.34](https://github.com/ilri/OpenRXV/pull/82)
- Then I merged a few fixes that Abdullah had worked on last week
## 2021-03-03
- I [fixed another frontend build warning on OpenRXV](https://github.com/ilri/OpenRXV/issues/83)
- Then I [updated the frontend container to use Node.js 12 and Ubuntu 20.04](https://github.com/ilri/OpenRXV/pull/84)
- Also, I [added a GitHub Actions workflow to build the frontend](https://github.com/ilri/OpenRXV/pull/85)
- I did some testing of Abdullah's patch for the values mapping search on OpenRXV
- It still doesn't work with multi-word values, so I recorded a video with wf-recorder and uploaded it to [the issue](https://github.com/ilri/OpenRXV/issues/43) for him to investigate
## 2021-03-04
- Peter has been having issues with the workflow since yesterday
- I looked at the Munin stats and saw a high number of database locks since yesterday
- I looked at the number of connections in PostgreSQL and it's definitely high again:
```console
$ psql -c 'SELECT * FROM pg_locks pl LEFT JOIN pg_stat_activity psa ON pl.pid = psa.pid;' | wc -l
1020
```
- I reported it to Atmire so they can take a look, on the [same issue](https://tracker.atmire.com/tickets-cgiar-ilri/view-ticket?id=851) where we had been tracking this before
- Abenet asked me to add a new ORCID for ILRI staff member Zoe Campbell
- I added it to the controlled vocabulary and then tagged her existing items on CGSpace using my `add-orcid-identifier.py` script
- I still need to do cleanup on the journal articles metadata
- Peter sent me some cleanups but I can't use them in the search/replace format he gave
- I think it's better to export the metadata values with IDs and import cleaned up ones as CSV
```console
localhost/dspace63= > \COPY (SELECT dspace_object_id AS id, text_value as "cg.journal" FROM metadatavalue WHERE dspace_object_id IN (SELECT uuid FROM item) AND metadata_field_id=251) to /tmp/2021-02-24-journals.csv WITH CSV HEADER;
COPY 32087
```
- I used OpenRefine to remove all journal values that didn't contain one of these characters: `;`, `(`, or `)`
- Then I cloned the `cg.journal` field to `cg.volume` and `cg.issue`
- I used some GREL expressions like these to extract the journal name, volume, and issue:
```console
value.partition(';')[0].trim() # to get journal names
value.partition(/[0-9]+\([0-9]+\)/)[1].replace(/^(\d+)\(\d+\)/,"$1") # to get journal volumes
value.partition(/[0-9]+\([0-9]+\)/)[1].replace(/^\d+\((\d+)\)/,"$1") # to get journal issues
```
- Then I uploaded the changes to CGSpace using `dspace metadata-import`
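- A minimal sketch of the import invocation (the CSV path is a placeholder; `-f` points at the metadata CSV and `-e` at the e-person performing the import):

```console
$ dspace metadata-import -f /tmp/2021-03-04-journals-cleaned.csv -e dspace@example.com
```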
- Margarita from CCAFS was asking about an error deleting some items that were showing up in Google and should have been private
- The error was "Authorization denied for action OBSOLETE (DELETE) on BITSTREAM:bd157345-448e ..."
- I searched the DSpace issue tracker and found several issues reporting this:
  - [DS-4004 Authorization denied Exception when trying to delete permanently an item, collection or community as a non-Admin user](https://jira.lyrasis.org/browse/DS-4004)
  - [DS-4297 Authorization error when trying to delete item by submitter/administrator](https://jira.lyrasis.org/browse/DS-4297)
- The issue is apparently with non-admin users who are in the admin and submit groups of the owning collection...
- In this case the item was uploaded to the CCAFS Reports collection, and Margarita is a non-admin user who is a member of the collection's admin and submit groups, exactly as the issue described
- I added a comment about our issue to [DS-4297](https://jira.lyrasis.org/browse/DS-4297)
- Yesterday Abenet added me to the approver/editor steps of a WLE collection so we can try to figure out why Niroshini is having issues adding metadata to Udana's submissions
- I edited Udana's submission to CGSpace:
  - corrected the title
  - added language English
  - changed the link to the external item page instead of PDF
  - added SDGs from the external item page
  - added AGROVOC subjects from the external item page
  - added pagination (extent)
  - changed the license to "other" because CC-BY-NC-ND is not printed anywhere in the PDF or external item page
- I realized there is something wrong with the Elasticsearch indexes on AReS
- On a new test environment I see `openrxv-items` is correctly an alias of `openrxv-items-final`:
```console
$ curl -s 'http://localhost:9200/_alias/' | python -m json.tool | less
...
"openrxv-items-final": {
"aliases": {
"openrxv-items": {}
}
},
```
- But on AReS production `openrxv-items` has somehow become an index:
```console
$ curl -s 'http://localhost:9200/_alias/' | python -m json.tool | less
...
"openrxv-items": {
"aliases": {}
},
"openrxv-items-final": {
"aliases": {}
},
"openrxv-items-temp": {
"aliases": {}
},
```
- I fixed the issue on production by cloning the `openrxv-items` index to `openrxv-items-final`, deleting `openrxv-items`, and then re-creating it as an alias:
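- A sketch of the Elasticsearch calls for a fix like that (assuming the stale `openrxv-items-final` index is deleted first; the clone API also requires the source index to be write-blocked):

```console
# delete the stale final index and block writes on the source before cloning
$ curl -X DELETE 'http://localhost:9200/openrxv-items-final'
$ curl -X PUT 'http://localhost:9200/openrxv-items/_settings' -H 'Content-Type: application/json' -d '{"index.blocks.write": true}'
$ curl -X POST 'http://localhost:9200/openrxv-items/_clone/openrxv-items-final'
# replace the old index with an alias pointing at the clone
$ curl -X DELETE 'http://localhost:9200/openrxv-items'
$ curl -X PUT 'http://localhost:9200/openrxv-items-final/_alias/openrxv-items'
```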
- They seem to make requests twice, once with the Delphi user agent that we know and already mark as a bot, and once with a "normal" user agent
- Looking in Solr I see they have been using this IP for a while, as they have 100,000 hits going back into 2020
- I will add this IP to the list of bots in nginx and purge it from Solr with my `check-spider-ip-hits.sh` script
- I made a few changes to OpenRXV:
  - [Migrated away from links to use networks](https://github.com/ilri/OpenRXV/issues/89)
  - [Converted the backend container to use a custom image that includes `unoconv`](https://github.com/ilri/OpenRXV/issues/68) so we don't have to manually install it anymore
- I approved the WLE item that I edited last week, and all the metadata is there: https://hdl.handle.net/10568/111810
- So I'm not sure what Niroshini's issue with metadata is...
- Peter sent a message yesterday saying that his item finally got committed
- I looked at the Munin graphs and there was a MASSIVE spike in database activity two days ago, and now database locks are back down to normal levels (from 1000+):
```console
$ psql -c 'SELECT * FROM pg_locks pl LEFT JOIN pg_stat_activity psa ON pl.pid = psa.pid;' | wc -l
13
```
- On 2021-03-03 the PostgreSQL transactions started rising
- I sent another message to Atmire to ask if they have time to look into this
- CIFOR is pressuring me to upload the batch items from last week
- Vika sent me a final file with some duplicates that Peter identified removed
- I extracted and re-applied my basic corrections from last week in OpenRefine, then ran the items through the `csv-metadata-quality` checker and uploaded them to CGSpace
- In total there are 1,088 items
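- For reference, the checker invocation is roughly like this (file names are placeholders; `-i` and `-o` are the tool's input and output options):

```console
$ csv-metadata-quality -i /tmp/cifor-items.csv -o /tmp/cifor-items-cleaned.csv
```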
- Udana from IWMI emailed to ask about CGSpace thumbnails
- Udana from IWMI emailed to ask about an item uploaded recently that does not appear in AReS
- [The item](https://hdl.handle.net/10568/111794) was added to the archive on 2021-03-05, and I last harvested on 2021-03-06, so this might be an issue of a missing item
- Abenet got a quote from Atmire to buy 125 credits for 3750€
- Maria at Bioversity sent some feedback about duplicate items on AReS
- I'm wondering if the issue of the `openrxv-items-final` index not getting cleared after a successful harvest (which results in 200,000, then 300,000, etc. items) has to do with the alias issue I fixed yesterday
- I will start a fresh harvest on AReS now to check, but first back up the current index just in case:
```console
$ curl -s -X POST http://localhost:9200/openrxv-items-final/_clone/openrxv-items-final-2021-03-08
# start harvesting on AReS
```
- As I saw on my local test instance, even when you cancel a harvesting, it replaces the `openrxv-items-final` index with whatever is in `openrxv-items-temp` automatically, so I assume it will do the same now
- The harvesting on AReS finished last night and everything worked as expected, with no manual intervention
- This means that [the issue](https://github.com/ilri/OpenRXV/issues/64) we were facing for a few months was due to the `openrxv-items` index being deleted and re-created as a standalone index instead of an alias of `openrxv-items-final`
- Talk to Moayad about OpenRXV development
- We realized that the missing/duplicate items issue is probably due to the long harvesting time on the REST API: in the time between starting the harvest on page 0 and finishing it on page 900 (in the CGSpace example), some items will have been added to the repository, which causes the pages to shift
- I proposed a solution in the [GitHub issue](https://github.com/ilri/OpenRXV/issues/67), where we consult the site's XML sitemap after harvesting to see if we missed any items, and then we harvest them individually
- Peter sent me a list of 356 DOIs from Altmetric that don't have our Handles, so we need to tweet them
- I used my `doi-to-handle.py` script to generate a list of handles and titles for him
- Colleagues from ICARDA asked about how we should handle ISI journals in CG Core, as CGSpace uses `cg.isijournal` and MELSpace uses `mel.impact-factor`
- I filed [an issue](https://github.com/AgriculturalSemantics/cg-core/issues/39) on the cg-core project to ask colleagues for ideas
- Peter said he doesn't see "Source Code" or "Software" in the [output type facet on the ILRI community](https://cgspace.cgiar.org/handle/10568/1/search-filter?field=type), but I see it on the home page, so I will try to do a full Discovery re-index:
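- The full re-index is the standard DSpace command, run as the `dspace` user:

```console
$ dspace index-discovery -b
```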
- After the harvesting finished it seems the indexes got messed up again, as `openrxv-items` is an alias of `openrxv-items-temp` instead of `openrxv-items-final`:
```console
$ curl -s 'http://localhost:9200/_alias/' | python -m json.tool | less
...
"openrxv-items-final": {
"aliases": {}
},
"openrxv-items-temp": {
"aliases": {
"openrxv-items": {}
}
},
```
- Anyways, the number of items in `openrxv-items` seems OK and the AReS Explorer UI is working fine
- I will have to manually fix the indexes before the next harvesting
- Publish on GitHub the web version of the DSpace CSV Metadata Quality checker tool that I wrote this weekend: https://github.com/ilri/csv-metadata-quality-web
- Also, it is deployed on Heroku: https://fierce-ocean-30836.herokuapp.com/
- I was running it on Google App Engine originally, but they have *way* too aggressive caching of static assets
- I added the ability to check for, and fix, "mojibake" characters in csv-metadata-quality
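- As far as I recall the check relies on the ftfy Python library; a quick illustration with a made-up string:

```console
$ python3 -c "import ftfy; print(ftfy.fix_text('CIAT PublicaÃ§Ã£o'))"
CIAT Publicação
```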
## 2021-03-21
- Last week Atmire asked me which browser I was using to test the duplicate checker, which I had [reported](https://tracker.atmire.com/tickets-cgiar-ilri/view-ticket?id=934) as not loading
- I tried to load it in Chrome and it works... hmmm
- Back up the current `openrxv-items-final` index to start a fresh AReS harvest:

```console
$ curl -s 'http://localhost:9200/_alias/' | python -m json.tool | less
...
"openrxv-items-temp": {
"aliases": {}
},
"openrxv-items-final": {
"aliases": {
"openrxv-items": {}
}
}
```
- Then I started a new harvesting
- I switched Node.js to v12 in the [Ansible infrastructure scripts](https://github.com/ilri/rmg-ansible-public) since v10 will cease to be supported soon
- I re-deployed DSpace Test (linode26) with Node.js 12 and restarted the server
- The AReS harvest finally finished, with 1047 pages of items, but the `openrxv-items-final` index is empty and the `openrxv-items-temp` index has 103,000 items
- I looked in the Docker logs for Elasticsearch and saw a few memory errors:
```console
java.lang.OutOfMemoryError: Java heap space
```
- According to `/usr/share/elasticsearch/config/jvm.options` in the Elasticsearch container the default JVM heap is 1g
- I see the running Java process has `-Xms1g -Xmx1g` in its invocation, so I guess it must indeed be using 1g
- We can [change the heap size with the ES_JAVA_OPTS environment variable](https://www.elastic.co/guide/en/elasticsearch/reference/current/docker.html)
- Or perhaps better, we should [use a jvm.options.d file](https://www.elastic.co/guide/en/elasticsearch/reference/master/jvm-options.html) because if you use the environment variable it overrides all other JVM options from the default `jvm.options`
- I tried to set memory to 1536m by binding an options file and restarting the container, but it didn't seem to work
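- For reference, the environment-variable fallback would look roughly like this with the official image (the image tag and heap size here are assumptions):

```console
$ docker run -d --name elasticsearch \
    -e ES_JAVA_OPTS='-Xms1536m -Xmx1536m' \
    -e discovery.type=single-node \
    docker.elastic.co/elasticsearch/elasticsearch:7.6.2
```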
- Nevertheless, after restarting I see 103,000 items in the Explorer...
- But the indexes are still kinda messed up... the `openrxv-items` index is an alias of the wrong index!
```console
"openrxv-items-final": {
"aliases": {}
},
"openrxv-items-temp": {
"aliases": {
"openrxv-items": {}
}
},
```
## 2021-03-23
- For reference you can also get the Elasticsearch JVM stats from the API:
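- For example, the node stats endpoint reports heap usage (assuming Elasticsearch is listening on localhost:9200):

```console
$ curl -s 'http://localhost:9200/_nodes/stats/jvm?human' | python -m json.tool | less
```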
- Atmire responded to the [ticket about the Duplicate Checker](https://tracker.atmire.com/tickets-cgiar-ilri/view-ticket?id=934)
- He says it works for him in Firefox, so I checked and it seems to have been an issue with my LocalCDN addon
- I re-deployed DSpace Test (linode26) from the latest CGSpace (linode18) data
- I want to try to finish up processing the duplicates in Solr that [Atmire advised on last month](https://tracker.atmire.com/tickets-cgiar-ilri/view-ticket?id=839)
- The current statistics core is 57861236 kilobytes:
```console
# du -s /home/dspacetest.cgiar.org/solr/statistics
57861236	/home/dspacetest.cgiar.org/solr/statistics
```
- The AReS harvesting that I started yesterday finished successfully and all indexes look OK:
  - `openrxv-items` is an alias of `openrxv-items-final` and has a correct number of items
- Last week Bosede from IITA said she was trying to move an item from one collection to another and the system was "rolling" and never finished
- I looked in Munin and I don't see anything particularly wrong that day, so I told her to try again
- Marianne Gadeberg asked about mapping an item last week
- I searched the item mapper for [the item](https://hdl.handle.net/10568/110633)'s handle, the title, the title in quotes, the title with pluses instead of spaces, the UUID, etc., but I can never find it in the results
- I see someone has reported this issue on Jira in DSpace 5.x's XMLUI item mapper: https://jira.lyrasis.org/browse/DS-2761
- The Solr log shows that my query (with and without quotes, etc.) has 143 results
- Marianne Gadeberg wrote to ask why the item she wanted to map a few days ago still doesn't appear in the mapped collection
- I looked at the item page itself and it lists the collection, but the item doesn't appear in that collection's item list
- I tried to forcibly reindex the collection and the item, but it didn't seem to work
- Now I will try a complete Discovery re-index
## 2021-03-31
- The Discovery re-index finished, but [the CIP item](https://hdl.handle.net/10568/110633) still does not appear in the GENDER Platform grants collection
- The item page itself DOES list the grants collection! WTF
- I sent a message to the dspace-tech mailing list to see if someone can comment
- I even tried unmapping and re-mapping, but it doesn't change anything: the item still doesn't appear in the collection, but I can see that it is mapped
- I signed up for a SHERPA API key so I can try to write something to get journal names from ISSN
- This code seems to get a journal title, though I only tried it with a few ISSNs:
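- A rough equivalent with curl against the SHERPA v2 `retrieve_by_id` endpoint (the parameters and JSON structure here are my assumptions, not the exact snippet):

```console
$ curl -s 'https://v2.sherpa.ac.uk/cgi/retrieve_by_id?item-type=publication&api-key=XXXX&format=Json&identifier=0011-183X' | python -m json.tool | less
```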
- I exported a list of all our ISSNs from CGSpace:
```console
localhost/dspace63= > \COPY (SELECT DISTINCT text_value FROM metadatavalue WHERE dspace_object_id IN (SELECT uuid FROM item) AND metadata_field_id=253) to /tmp/2021-03-31-issns.csv;
```