- Minor edits to the `crossref_doi_lookup.py` script while running some checks on 22,000 CGSpace DOIs
## 2023-07-03
- I analyzed the licenses declared by Crossref and found with high confidence that ~400 of ours were incorrect
- I took the more accurate ones from Crossref and updated the items on CGSpace
- I also took a few hundred ISBNs from Crossref for items where we were missing them
- I also tagged ~4,700 items with missing licenses as "Copyrighted; all rights reserved" based on Crossref only declaring a TDM (text and data mining) license for them, mostly from Elsevier, Wiley, and Springer
- Checking a dozen or so manually, I confirmed that if Crossref only has a TDM license then it's usually copyrighted (could still be open access, but we can't tell via Crossref)
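- As a rough sketch (not the actual `crossref_doi_lookup.py` logic), checking which license content versions Crossref declares for a DOI looks something like this:
```python
# Rough sketch: list the license "content-version" values Crossref declares
# for a DOI; a DOI with only "tdm" licenses is the case treated as copyrighted.
import requests

def crossref_license_versions(doi: str) -> list[str]:
    url = f"https://api.crossref.org/works/{doi}"
    # The mailto parameter puts the request in Crossref's "polite" pool
    response = requests.get(url, params={"mailto": "you@example.org"}, timeout=30)
    response.raise_for_status()
    licenses = response.json()["message"].get("license", [])
    return [license_.get("content-version") for license_ in licenses]

versions = crossref_license_versions("10.xxxx/xxxxx")  # placeholder DOI
if versions and set(versions) == {"tdm"}:
    print("Only TDM license(s) declared, so likely copyrighted / all rights reserved")
```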
- I would be curious to write a script to check the Unpaywall API for open access status...
- In the past I found that their *license* status was not very accurate, but the open access status might be more reliable
- More minor work on the DSpace 7 item views
- I learned some new Angular template syntax
- I created a custom component to show Creative Commons licenses on the simple item page
- I also decided that I don't like the Impact Area icons as a component because they don't have any visual meaning
- I spent some time trying to update OpenRXV from Angular 9 to 10 to 11 to 12 to 13
- Most things work but there are some minor bugs it seems
- Mishell from CIP emailed me to say she was having problems approving an item on CGSpace
- Looking at PostgreSQL I saw a dozen or so locks that were several hours old (and even over one day old), so I killed those processes and told her to try again
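- For reference, a sketch of one way to find long-held locks and stale sessions in PostgreSQL (the connection settings and two-hour threshold are only illustrative):
```python
# Sketch: list sessions that have been in a non-idle state for a long time,
# which is usually what is holding the locks that block submissions.
import psycopg2

connection = psycopg2.connect("dbname=dspace user=dspace host=localhost")
with connection, connection.cursor() as cursor:
    cursor.execute(
        """
        SELECT pid, state, now() - state_change AS age, left(query, 60)
        FROM pg_stat_activity
        WHERE datname = 'dspace'
          AND state <> 'idle'
          AND now() - state_change > interval '2 hours'
        ORDER BY age DESC
        """
    )
    for pid, state, age, query in cursor.fetchall():
        # Stale sessions can then be killed with SELECT pg_terminate_backend(pid)
        print(pid, state, age, query)
connection.close()
```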
## 2023-07-06
- Types meeting
- I wrote a Python script to check Unpaywall for some information about DOIs
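- Not the script itself, but a minimal sketch of checking a DOI's open access status against the Unpaywall REST API (the contact email is a placeholder the API requires):
```python
# Minimal sketch of an Unpaywall lookup for one DOI.
import requests

def unpaywall_status(doi: str, email: str = "you@example.org") -> dict:
    url = f"https://api.unpaywall.org/v2/{doi}"
    response = requests.get(url, params={"email": email}, timeout=30)
    response.raise_for_status()
    data = response.json()
    best = data.get("best_oa_location") or {}
    return {
        "is_oa": data.get("is_oa"),
        "host_type": best.get("host_type"),  # "publisher" is the most convincing signal
        "version": best.get("version"),      # "publishedVersion" vs "acceptedVersion"
        "license": best.get("license"),
    }

print(unpaywall_status("10.xxxx/xxxxx"))  # placeholder DOI
```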
## 2023-07-07
- Continue exploring Unpaywall data for some of our DOIs
- In the past I've found their _licensing_ information to not be very reliable (preferring Crossref), but I think their _open access status_ is more reliable, especially when the provider is listed as being the publisher
- Even so, sometimes the version can be "acceptedVersion", which is presumably the author's version, as opposed to the "publishedVersion", which means it's available as open access on the publisher's website
- I did some quality assurance and found ~100 that were marked as Limited Access, but should have been Open Access, and fixed a handful of licenses
- Delete duplicate metadata as described in my DSpace issue from last year: https://github.com/DSpace/DSpace/issues/8253
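- A sketch of one way to locate such duplicate rows before deleting them (the deletion itself is what the issue describes):
```python
# Sketch: find metadata rows with the same object, field, and value.
import psycopg2

connection = psycopg2.connect("dbname=dspace user=dspace host=localhost")
with connection, connection.cursor() as cursor:
    cursor.execute(
        """
        SELECT dspace_object_id, metadata_field_id, text_value, COUNT(*)
        FROM metadatavalue
        GROUP BY dspace_object_id, metadata_field_id, text_value
        HAVING COUNT(*) > 1
        """
    )
    for row in cursor.fetchall():
        print(row)
connection.close()
```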
- Start working on some statistics on AGROVOC usage for my presentation next week
- I used the following SQL query to dump values from all subject fields and lower case them:
```console
localhost/dspacetest= ☘ \COPY (SELECT DISTINCT(lower(text_value)) AS "subject" FROM metadatavalue WHERE dspace_object_id in (SELECT dspace_object_id FROM item) AND metadata_field_id IN (187, 120, 210, 122, 215, 127, 208, 124, 128, 123, 125, 135, 203, 236, 238, 119)) to /tmp/2023-07-07-cgspace-subjects.csv WITH CSV HEADER;
COPY 26443
Time: 2564.851 ms (00:02.565)
```
- Then I extracted the subjects in order to look them up against AGROVOC:
```console
$ csvcut -c subject /tmp/2023-07-07-cgspace-subjects.csv | sed '1d' > /tmp/2023-07-07-cgspace-subjects.txt
```
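- The lookup itself can be sketched against the public AGROVOC Skosmos REST API (the endpoint and parameters here are assumptions rather than the exact script):
```python
# Rough sketch: check each extracted subject against the AGROVOC search API.
import requests

API = "https://agrovoc.fao.org/browse/rest/v1/search"

def in_agrovoc(subject: str) -> bool:
    response = requests.get(API, params={"query": subject, "lang": "en"}, timeout=30)
    response.raise_for_status()
    return len(response.json().get("results", [])) > 0

with open("/tmp/2023-07-07-cgspace-subjects.txt") as subjects:
    for subject in map(str.strip, subjects):
        if not subject:
            continue
        print(f"{subject},{'matched' if in_agrovoc(subject) else 'unmatched'}")
```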
- I did a lot more work on OpenRXV to test and update dependencies
- I deployed the latest version on the production server
## 2023-07-12
- CGSpace upgrade meeting with Americas and Africa group
## 2023-07-13
- Michael Victor asked me to help Aditi extract some information from CGSpace
- She was interested in journal articles published between 2018 and 2023 with a range of subjects related to drought, flooding, resilience, etc.
- I used an advanced query with some AGROVOC terms:
```console
dcterms.issued:[2018 TO 2023] AND dcterms.type:"Journal Article" AND (dcterms.subject:flooding OR dcterms.subject:flood OR dcterms.subject:"extreme weather events" OR dcterms.subject:drought OR dcterms.subject:"drought resistance" OR dcterms.subject:"drought tolerance" OR dcterms.subject:"soil salinity" OR dcterms.subject:"pests of plants" OR dcterms.subject:pests OR dcterms.subject:heat OR dcterms.subject:fertilizers OR dcterms.subject:"fertilizer technology" OR dcterms.subject:"rice fields" OR dcterms.subject:"landscape conservation" OR dcterms.subject:"landscape restoration" OR dcterms.subject:livestock)
```
- Interestingly, some variations of this exact query produce no search results, and I see this error in the DSpace log:
```console
org.dspace.discovery.SearchServiceException: org.apache.solr.search.SyntaxError: Cannot parse 'dcterms.issued:[2018 TO 2023] AND dcterms.type:"Journal Article" AND (dcterms.subject:flooding OR dcterms.subject:flood OR dcterms.subject:"extreme weather events" OR dcterms.subject:drought OR dcterms.subject:"drought resistance" OR dcterms.subject:"drought tolerance" OR dcterms.subject:"soil salinity" OR dcterms.subject:"pests of plants" OR dcterms.subject:pests OR dcterms.subject:heat OR dcterms.subject:fertilizers OR dcterms.subject:"fertilizer technology" OR dcterms.subject:"rice fields" OR dcterms.subject:livestock OR dcterms.subject:"landscape conservation" OR dcterms.subject:"landscape restoration\"\)': Lexical error at line 1, column 617. Encountered: <EOF> after : "\"landscape restoration\\\"\\)"
```
- It seems to happen when there is a quoted search term at the end of the parenthesized group
- For what it's worth this same query worked fine on DSpace 7.6
## 2023-07-15
- Export CGSpace to fix missing Initiative collection mappings
- Start a harvest on AReS
## 2023-07-17
- Rasika had sent me a list of new ORCID identifiers for new IWMI staff, so I combined them with our existing list and ran `resolve_orcids.py` to refresh the names in our database
- I updated the list, updated names in the database, and tagged new authors with missing identifiers in existing items
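- Not `resolve_orcids.py` itself, but a minimal sketch of resolving a display name from the public ORCID API for a single identifier:
```python
# Sketch: fetch the public "person" record for an ORCID iD and build a name.
import requests

def resolve_orcid_name(orcid: str) -> str:
    url = f"https://pub.orcid.org/v3.0/{orcid}/person"
    response = requests.get(url, headers={"Accept": "application/json"}, timeout=30)
    response.raise_for_status()
    name = response.json().get("name") or {}
    given = (name.get("given-names") or {}).get("value", "")
    family = (name.get("family-name") or {}).get("value", "")
    return f"{given} {family}".strip()

# ORCID's documented example identifier (Josiah Carberry)
print(resolve_orcid_name("0000-0002-1825-0097"))
```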
## 2023-07-18
- Meeting with IWMI, IRRI, and IITA colleagues about CGSpace upgrade plans
- Maria from the Alliance mentioned having some submissions stuck on CGSpace
- I looked and found a number of locks that had been stuck for nineteen, eighteen, and more hours...
- I started exporting the Solr statistics from CGSpace and I'm curious how long it takes and how much data there will be
- The size of the Solr data directory is currently 82GB
- The export took about 2.5 hours and created 6,000 individual CSVs, one for each day of Solr stats
- The size of the exported CSVs is about 88GB
- I will copy just a few years to import on the DSpace 7 test server
- Importing these into DSpace 7 is going to require removing the Atmire custom fields, because the import fails otherwise:
```console
$ dspace solr-import-statistics -i statistics
Exception: Error from server at http://localhost:8983/solr/statistics: ERROR: [doc=1a92472e-e39d-4602-9b4d-da022df8f233] unknown field 'containerCommunity'
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error from server at http://localhost:8983/solr/statistics: ERROR: [doc=1a92472e-e39d-4602-9b4d-da022df8f233] unknown field 'containerCommunity'
at org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:681)
at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:266)
at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:248)
at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1290)
at org.dspace.util.SolrImportExport.importIndex(SolrImportExport.java:465)
at org.dspace.util.SolrImportExport.main(SolrImportExport.java:148)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:568)
at org.dspace.app.launcher.ScriptLauncher.runOneCommand(ScriptLauncher.java:277)
at org.dspace.app.launcher.ScriptLauncher.handleScript(ScriptLauncher.java:133)
at org.dspace.app.launcher.ScriptLauncher.main(ScriptLauncher.java:98)
```
- I will try using solr-import-export-json, which I've used in the past to skip Atmire custom fields in Solr:
```console
$ chrt -b 0 ./run.sh -s http://localhost:8081/solr/statistics -a export -o /tmp/statistics-2022.json -f 'time:[2022-01-01T00\:00\:00Z TO 2022-12-31T23\:59\:59Z]' -k uid -S author_mtdt,author_mtdt_search,iso_mtdt_search,iso_mtdt,subject_mtdt,subject_mtdt_search,containerCollection,containerCommunity,containerItem,countryCode_ngram,countryCode_search,cua_version,dateYear,dateYearMonth,geoipcountrycode,geoIpCountryCode,ip_ngram,ip_search,isArchived,isInternal,isWithdrawn,containerBitstream,file_id,referrer_ngram,referrer_search,userAgent_ngram,userAgent_search,version_id,complete_query,complete_query_search,filterquery,ngram_query_search,ngram_simplequery_search,simple_query,simple_query_search,range,rangeDescription,rangeDescription_ngram,rangeDescription_search,range_ngram,range_search,actingGroupId,actorMemberGroupId,bitstreamCount,solr_update_time_stamp,bitstreamId,core_update_run_nb
```
- Some users complained that CGSpace was slow, and I found a handful of locks that were hours and even days old...
- I killed those and told them to try again
- After importing the Solr statistics into DSpace 7 I realized that my DSpace Statistics API will work fine
- I made some minor modifications to the Ansible infrastructure scripts to make sure the DSpace Statistics API is enabled, and then activated it on DSpace 7 Test