title: April, 2020
date: 2020-04-02T10:53:24+03:00
author: Alan Orth
categories: Notes

2020-04-02

  • Maria asked me to update Charles Staver's ORCID iD in the submission template and on CGSpace, as his name was in lower case before and he has now corrected it
    • I updated the fifty-eight existing items on CGSpace (see the sketch after this list)
  • Looking into the items Udana had asked about last week that were missing Altmetric donuts:
  • On the same note, the one item Abenet pointed out last week now has a donut with a score of 104 after I tweeted it
  • Altmetric responded about one item that had no donut since at least 2019-12 and said they fixed some problems with their bot's user agent
    • I decided to tweet the item, as I can't remember if I ever did it before
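  • Going back to the Charles Staver fix above, the bulk update of the fifty-eight items was essentially one metadata UPDATE on the cg.creator.id field, something like this (a sketch: the ORCID digits below are a placeholder, not his real identifier):
dspace=# UPDATE metadatavalue SET text_value='Charles Staver: 0000-0002-XXXX-XXXX' WHERE resource_type_id=2 AND metadata_field_id=240 AND text_value LIKE 'charles staver:%';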

2020-04-05

  • Update PostgreSQL JDBC driver to version 42.2.12

2020-04-07

  • Yesterday Atmire sent me their pull request for DSpace 6 modules
  • Peter pointed out that some items have his ORCID identifier (cg.creator.id) twice
    • I think this is because my early version of the add-orcid-identifiers-csv.py script was adding identifiers to existing records without properly checking if there was already one present (at first it only checked if there was one with the exact place value)
    • As a test I dropped all his ORCID identifiers and added them back with the add-orcid-identifiers-csv.py script:
$ psql -h localhost -U postgres dspace -c "DELETE FROM metadatavalue WHERE resource_type_id=2 AND metadata_field_id=240 AND text_value LIKE '%Ballantyne%';"
DELETE 97
$ ./add-orcid-identifiers-csv.py -i 2020-04-07-peter-orcids.csv -db dspace -u dspace -p 'fuuu' -d
  • I used this CSV with the script (all records with his name have the name standardized like this):
dc.contributor.author,cg.creator.id
"Ballantyne, Peter G.","Peter G. Ballantyne: 0000-0001-9346-2893"
  • Then I tried another way, to identify all duplicate ORCID identifiers for a given resource ID and group them so I can see if count is greater than 1:
dspace=# \COPY (SELECT DISTINCT(resource_id, text_value) as distinct_orcid, COUNT(*) FROM metadatavalue WHERE resource_type_id = 2 AND metadata_field_id = 240 GROUP BY distinct_orcid ORDER BY count DESC) TO /tmp/2020-04-07-duplicate-orcids.csv WITH CSV HEADER;
COPY 15209
  • Of those, about nine authors had duplicate ORCID identifiers over about thirty records, so I created a CSV with all their name variations and ORCID identifiers:
dc.contributor.author,cg.creator.id
"Ballantyne, Peter G.","Peter G. Ballantyne: 0000-0001-9346-2893"
"Ramirez-Villegas, Julian","Julian Ramirez-Villegas: 0000-0002-8044-583X"
"Villegas-Ramirez, J","Julian Ramirez-Villegas: 0000-0002-8044-583X"
"Ishitani, Manabu","Manabu Ishitani: 0000-0002-6950-4018"
"Manabu, Ishitani","Manabu Ishitani: 0000-0002-6950-4018"
"Ishitani, M.","Manabu Ishitani: 0000-0002-6950-4018"
"Ishitani, M.","Manabu Ishitani: 0000-0002-6950-4018"
"Buruchara, Robin A.","Robin Buruchara: 0000-0003-0934-1218"
"Buruchara, Robin","Robin Buruchara: 0000-0003-0934-1218"
"Jarvis, Andy","Andy Jarvis: 0000-0001-6543-0798"
"Jarvis, Andrew","Andy Jarvis: 0000-0001-6543-0798"
"Jarvis, A.","Andy Jarvis: 0000-0001-6543-0798"
"Tohme, Joseph M.","Joe Tohme: 0000-0003-2765-7101"
"Hansen, James","James Hansen: 0000-0002-8599-7895"
"Hansen, James W.","James Hansen: 0000-0002-8599-7895"
"Asseng, Senthold","Senthold Asseng: 0000-0002-7583-3811"
  • Then I deleted all their existing ORCID identifier records:
dspace=# DELETE FROM metadatavalue WHERE resource_type_id=2 AND metadata_field_id=240 AND text_value SIMILAR TO '%(0000-0001-6543-0798|0000-0001-9346-2893|0000-0002-6950-4018|0000-0002-7583-3811|0000-0002-8044-583X|0000-0002-8599-7895|0000-0003-0934-1218|0000-0003-2765-7101)%';
DELETE 994
  • And then I added them again with the add-orcid-identifiers-csv.py script:
$ ./add-orcid-identifiers-csv.py -i 2020-04-07-fix-duplicate-orcids.csv -db dspace -u dspace -p 'fuuu' -d
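  • To verify the fix, the same grouping idea can be re-run to list only the records that still have more than one identical ORCID identifier, something like this (a sketch):
dspace=# SELECT resource_id, text_value, COUNT(*) FROM metadatavalue WHERE resource_type_id=2 AND metadata_field_id=240 GROUP BY resource_id, text_value HAVING COUNT(*) > 1;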
  • I ran the fixes on DSpace Test and CGSpace as well
  • I started testing the pull request sent by Atmire yesterday
    • I notice that we now need yarn to build, and I need to bump the Node.js engine version in our Mirage 2 theme in order to get it to build on Node.js 10.x
    • Font Awesome icons for GitHub etc. weren't loading, and after a bit of troubleshooting I replaced version 4.5.0 with 5.13.0; to my surprise it now includes Mendeley and ORCID icons, so we can get rid of the Academicons dependency

2020-04-12

  • Testing the Atmire DSpace 6.3 code with a clean CGSpace DSpace 5.8 database snapshot
    • One Flyway migration failed so I had to manually remove it (and of course create the pgcrypto extension):
dspace63=# DELETE FROM schema_version WHERE version IN ('5.8.2015.12.03.3');
dspace63=# CREATE EXTENSION pgcrypto;
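  • For reference, loading the snapshot into my local environment is the usual createdb and pg_restore routine, something like this (a sketch; the database name, owner role, and dump filename here are assumptions):
$ createdb -h localhost -U postgres -O dspacetest --encoding=UNICODE dspace63
$ pg_restore -h localhost -U postgres -d dspace63 -O --role=dspacetest /tmp/cgspace_2020-04-12.backup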
  • Then DSpace 6.3 started up OK and I was able to see some statistics in the Content and Usage Analysis (CUA) module, but not on community, collection, or item pages
    • I also noticed at least one of these errors in the DSpace log:
2020-04-12 16:34:33,363 ERROR com.atmire.dspace.app.xmlui.aspect.statistics.editorparts.DataTableTransformer @ java.lang.IllegalArgumentException: Invalid UUID string: 1
  • And I remembered I actually need to run the DSpace 6.4 Solr UUID migrations:
$ export JAVA_OPTS="-Xmx1024m -Dfile.encoding=UTF-8"
$ ~/dspace63/bin/dspace solr-upgrade-statistics-6x
  • Run system updates on DSpace Test (linode26) and reboot it
  • More work on the DSpace 6.3 stuff, improving the GDPR consent logic to use haven instead of cookieconsent
    • It works better by injecting the Google Analytics script after the user clicks agree, and it also has a preferences section that gets automatically injected on the privacy page!

2020-04-13

  • I realized that solr-upgrade-statistics-6x only processes 100,000 records by default so I think we actually need to finish running it for all legacy Solr records before asking Atmire why CUA statlets and detailed statistics aren't working
  • For now I am just doing 250,000 records at a time on my local environment:
$ export JAVA_OPTS="-Xmx2000m -Dfile.encoding=UTF-8"
$ ~/dspace63/bin/dspace solr-upgrade-statistics-6x -n 250000
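  • Rather than re-running it by hand each time, it could be looped until the whole core is covered, something like this (a sketch; six runs of 250,000 would cover my ~1.5 million local records):
$ for run in $(seq 1 6); do ~/dspace63/bin/dspace solr-upgrade-statistics-6x -n 250000; done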
  • Despite running the migration for all 1.5 million of my local Solr records, I still see a few hundred thousand documents with IDs like -1 and 0-unmigrated
    • I will purge them all and try to import only a subset...
    • After importing again I see there are indeed tens of thousands of these documents with IDs "-1" and "0"
    • They are all type: 5, which is "SITE" according to Constants.java:
/** DSpace site type */
public static final int SITE = 5;
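  • For reference, those site-type documents can be purged from the statistics core with a Solr delete-by-query, something like this (a sketch; the Solr port is an assumption):
$ curl -s 'http://localhost:8080/solr/statistics/update?softCommit=true' -H 'Content-Type: text/xml' --data-binary '<delete><query>(id:"-1" OR id:"0") AND type:5</query></delete>'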
  • Even after deleting those documents and re-running solr-upgrade-statistics-6x I still get the UUID errors when using CUA and the statlets
  • I have sent some feedback and questions to Atmire (including about the issue with glyphicons in the header trail)
  • In other news, my local Artifactory container stopped working for some reason so I re-created it and it seems some things have changed upstream (port 8082 for web UI?):
$ podman rm artifactory
$ podman pull docker.bintray.io/jfrog/artifactory-oss:latest
$ podman create --ulimit nofile=32000:32000 --name artifactory -v artifactory_data:/var/opt/jfrog/artifactory -p 8081-8082:8081-8082 docker.bintray.io/jfrog/artifactory-oss
$ podman start artifactory

2020-04-14

  • A few days ago Peter asked me to update an author's name on CGSpace and in the controlled vocabularies:
dspace=# UPDATE metadatavalue SET text_value='Knight-Jones, Theodore J.D.' WHERE resource_type_id=2 AND metadata_field_id=3 AND text_value='Knight-Jones, T.J.D.';
  • I updated his existing records on CGSpace, changed the controlled lists, added his ORCID identifier to the controlled list, and tagged his thirty-nine items with the ORCID iD
  • The new DSpace 6 stuff that Atmire sent modifies the Mirage 2 pom.xml to copy each theme's resulting node_modules into the theme after building and installing with ant update, because they moved some packages from bower to npm and now reference them in page-structure.xsl
    • This is a good idea, because bower is no longer supported, and npm has gotten a lot better, but it causes an extra 200,000 files to get copied!
    • Most scripts are concatenated into theme.js during build, so we don't need the node_modules after that, but there are three scripts in page-structure.xsl that are not included there
    • The scripts are html5shiv, respond.js, and a very old version of modernizr that is not even available on npm
    • For modernizr I can simply download a static copy and put it in 0_CGIAR/scripts and concatenate it into theme.js
    • For the others, I can revert to using them from bower's vendor directory, which is installed by the parent XMLUI Mirage 2 theme
    • During this process I also realized that mvn clean doesn't actually clean everything: dspace/modules/xmlui-mirage2/target is left over from previous builds and contains a bunch of shit (including all the themes which I was trying to build without!)
      • This must be a DSpace bug, but I should theoretically check on vanilla DSpace and then file a bug...
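      • For now the obvious workaround is to remove the stale directory by hand before building, something like:
$ rm -rf dspace/modules/xmlui-mirage2/target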

2020-04-17

  • Atmire responded to some of the issues I raised earlier this week about the DSpace 6 pull request
    • They said they don't think the glyphicon encoding issue is due to their changes, but I built a new clean version of the vanilla 6_x-dev branch from before their pull request and it does not have the encoding issue in the Mirage 2 header trails
    • Also, they said we need to use something called AtomicStatisticsUpdateCLI to do the Solr legacy integer ID to UUID conversion so I asked for more information about that workflow

2020-04-20

  • Looking into a high rate of outgoing bandwidth from yesterday on CGSpace (linode18):
# cat /var/log/nginx/*.log /var/log/nginx/*.log.1 | grep -E "19/Apr/2020:0[6789]" | goaccess --log-format=COMBINED -
  • One host in Russia (91.241.19.70) downloaded 23 GiB over those few hours in the morning
    • It looks like all the requests were for one single item's bitstreams:
# grep -c 91.241.19.70 /var/log/nginx/access.log.1
8900
# grep 91.241.19.70 /var/log/nginx/access.log.1 | grep -c '10568/35187'
8900
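  • One way to double-check the transfer volume is to sum the response sizes from the access log, something like this (a sketch, assuming the default combined log format where the body bytes are the tenth field):
# grep 91.241.19.70 /var/log/nginx/access.log.1 | awk '{sum+=$10} END {print sum/1024/1024/1024, "GiB"}'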
  • I thought the host might have been Yandex misbehaving, but its user agent is:
Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10_5_3; nl-nl) AppleWebKit/527  (KHTML, like Gecko) Version/3.1.1 Safari/525.20
  • I will purge that IP from the Solr statistics using my check-spider-ip-hits.sh script:
$ ./check-spider-ip-hits.sh -d -f /tmp/ip -p
(DEBUG) Using spider IPs file: /tmp/ip
(DEBUG) Checking for hits from spider IP: 91.241.19.70
Purging 8909 hits from 91.241.19.70 in statistics

Total number of bot hits purged: 8909
  • While investigating that I noticed ORCID identifiers missing from a few authors' names, so I added them with my add-orcid-identifiers-csv.py script:
$ ./add-orcid-identifiers-csv.py -i 2020-04-20-add-orcids.csv -db dspace -u dspace -p 'fuuu' -d
  • The contents of 2020-04-20-add-orcids.csv was:
dc.contributor.author,cg.creator.id
"Schut, Marc","Marc Schut: 0000-0002-3361-4581"
"Schut, M.","Marc Schut: 0000-0002-3361-4581"
"Kamau, G.","Geoffrey Kamau: 0000-0002-6995-4801"
"Kamau, G","Geoffrey Kamau: 0000-0002-6995-4801"
"Triomphe, Bernard","Bernard Triomphe: 0000-0001-6657-3002"
"Waters-Bayer, Ann","Ann Waters-Bayer: 0000-0003-1887-7903"
"Klerkx, Laurens","Laurens Klerkx: 0000-0002-1664-886X"
  • I confirmed some of the authors' names from the report itself, then by looking at their profiles on ORCID.org
  • Add new ILRI subject "COVID19" to the 5_x-prod branch
  • Add new CCAFS Phase II project tags to the 5_x-prod branch
  • I will deploy these to CGSpace in the next few days

2020-04-24

  • Atmire responded to my ticket about the issue with glyphicons and said their test server does not show this same issue
    • They asked if I am using the JAVA_OPTS=-Dfile.encoding=UTF-8 when building DSpace and running Tomcat
    • I set it explicitly for Maven and Ant just now (and cleared all XMLUI caches) but the issue is still there
    • I asked them if they are building on macOS or Linux, and which Node.js version (I'm using 10.20.1, which is the current LTS branch).
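    • For reference, the encoding can be set explicitly for the build tools like this (a sketch; Maven reads MAVEN_OPTS and Ant reads ANT_OPTS, while Tomcat picks up JAVA_OPTS from its startup scripts):
$ export MAVEN_OPTS="-Dfile.encoding=UTF-8"
$ export ANT_OPTS="-Dfile.encoding=UTF-8"
$ mvn package
$ ant update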