2022-12-01
- Fix some incorrect regions on CGSpace
- I exported the CCAFS and IITA communities, extracted just the country and region columns, then ran them through csv-metadata-quality to fix the regions
- Add a few more authors to my CSV with author names and ORCID identifiers and tag 283 items!
- Replace “East Asia” with “Eastern Asia” region on CGSpace (UN M.49 region)
- CGSpace and PRMS information session with Enrico and a bunch of researchers
- I noticed some minor issues with SPDX licenses and AGROVOC terms in items submitted by TIP so I sent a message to Daniel from Alliance
- I started a harvest on AReS since we’ve updated so much metadata recently
2022-12-02
- File some issues related to metadata on the MEL issue tracker
2022-12-03
- I downloaded a fresh copy of CLARISA’s institutions list as well as ROR’s latest dump from 2022-12-01 to check how many are matching:
```console
$ curl -s https://api.clarisa.cgiar.org/api/institutions | json_pp > ~/Downloads/2022-12-03-CLARISA-institutions.json
$ jq -r '.[] | .name' ~/Downloads/2022-12-03-CLARISA-institutions.json > ~/Downloads/2022-12-03-CLARISA-institutions.txt
$ ./ilri/ror-lookup.py -i ~/Downloads/2022-12-03-CLARISA-institutions.txt -o /tmp/clarisa-ror-matches.csv -r v1.15-2022-12-01-ror-data.json
$ csvgrep -c matched -m true /tmp/clarisa-ror-matches.csv | wc -l
1864
$ wc -l ~/Downloads/2022-12-03-CLARISA-institutions.txt
7060 /home/aorth/Downloads/2022-12-03-CLARISA-institutions.txt
```
- Out of the box 26.4% match, but many institution names contain multiple language variants in the text value, as well as trailing countries in parentheses, so I think the real rate could be higher
- If I replace the slashes and remove the countries at the end there are slightly more matches, around 29%:
```console
$ sed -e 's_ / _\n_' -e 's_/_\n_' -e 's/ \?(.*)$//' ~/Downloads/2022-12-03-CLARISA-institutions.txt > ~/Downloads/2022-12-03-CLARISA-institutions-alan.txt
```
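As a sanity check, the same substitutions can be exercised on a made-up CLARISA-style value (this institution name is hypothetical):

```shell
# A hypothetical value: two language variants separated by a slash,
# with the country in parentheses at the end
sample='Universidad Nacional / National University (Colombia)'

# Same normalization as above: split on slashes, drop the trailing country
printf '%s\n' "$sample" | sed -e 's_ / _\n_' -e 's_/_\n_' -e 's/ \?(.*)$//'
# → Universidad Nacional
# → National University
```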
- I checked CGSpace’s top 1,000 affiliations too, first exporting from PostgreSQL:
```console
localhost/dspacetest= ☘ \COPY (SELECT DISTINCT text_value as "cg.contributor.affiliation", count(*) FROM metadatavalue WHERE dspace_object_id IN (SELECT uuid FROM item) AND metadata_field_id = 211 GROUP BY text_value ORDER BY count DESC LIMIT 1000) to /tmp/2022-11-22-affiliations.csv;
```
- Then cutting (tab is the default delimiter):
```console
$ cut -f 1 /tmp/2022-11-22-affiliations.csv > 2022-11-22-affiliations.txt
$ ./ilri/ror-lookup.py -i 2022-11-22-affiliations.txt -o /tmp/cgspace-matches.csv -r v1.15-2022-12-01-ror-data.json
$ csvgrep -c matched -m true /tmp/cgspace-matches.csv | wc -l
542
```
- So that’s a 54% match for our top affiliations
- I realized we should actually check affiliations and sponsors, since those are stored in separate fields
- When I add those the matches go down a bit to 45%
- Oh man, I realized institutions like `Université d'Abomey Calavi` don’t match in ROR because they are like this in the JSON: `"name": "Universit\u00e9 d'Abomey-Calavi"`
- So we likely match a bunch more than 50%…
- I exported a list of affiliations and donors from CGSpace for Peter to look over and send corrections
2022-12-05
- First day of PRMS technical workshop in Rome
- Last night I submitted a CSV import with changes to 1,500 Alliance items (adding regions) and it hadn’t completed after twenty-four hours so I canceled it
- Not sure if there is some rollback that will happen or what state the database will be in, so I will wait a few hours to see what happens before trying to modify those items again
- I started it again a few hours later with a subset of the items and 4GB of RAM instead of 2
- It completed successfully…
2022-12-07
- I found a bug in my csv-metadata-quality script regarding the regions
- I was accidentally checking `cg.coverage.subregion` due to a sloppy regex
- This means I’ve added a few thousand UN M.49 regions to the `cg.coverage.subregion` field in the last few days
- I had to extract them from CGSpace and delete them using `delete-metadata-values.py`
- My DSpace 7.x pull request to tell ImageMagick about the PDF CropBox was merged
- Start a harvest on AReS
2022-12-08
- While on the plane I decided to fix some ORCID identifiers, as I had seen some poorly formatted ones
- I couldn’t remember the XPath syntax so this was kinda hacky:
```console
$ xmllint --xpath '//node/isComposedBy/node()' dspace/config/controlled-vocabularies/cg-creator-identifier.xml | grep -oE 'label=".*"' | sed -e 's/label="//' -e 's/"$//' > /tmp/orcid-names.txt
$ ./ilri/update-orcids.py -i /tmp/orcid-names.txt -db dspace -u dspace -p 'fuuu' -m 247
```
- After that there were still some poorly formatted ones that my script didn’t fix, so perhaps these are new ones not in our list
- I dumped them and combined with the existing ones to resolve later:
```console
localhost/dspace= ☘ \COPY (SELECT dspace_object_id,text_value FROM metadatavalue WHERE metadata_field_id=247 AND text_value LIKE '%http%') to /tmp/orcid-formatting.txt;
COPY 36
```
- I think there are really just some new ones…
```console
$ cat ~/src/git/DSpace/dspace/config/controlled-vocabularies/cg-creator-identifier.xml /tmp/orcid-formatting.txt | grep -oE '[A-Z0-9]{4}-[A-Z0-9]{4}-[A-Z0-9]{4}-[A-Z0-9]{4}' | sort -u > /tmp/2022-12-08-orcids.txt
$ cat ~/src/git/DSpace/dspace/config/controlled-vocabularies/cg-creator-identifier.xml | grep -oE '[A-Z0-9]{4}-[A-Z0-9]{4}-[A-Z0-9]{4}-[A-Z0-9]{4}' | sort -u | wc -l
1907
$ wc -l /tmp/2022-12-08-orcids.txt
1939 /tmp/2022-12-08-orcids.txt
```
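The `grep -oE` pattern can be spot-checked against a vocabulary-style label line (this name and iD are ORCID’s documented example, not one of ours):

```shell
# Four groups of four; [A-Z0-9] also covers the X check digit
printf 'label="Josiah Carberry: 0000-0002-1825-0097"\n' \
  | grep -oE '[A-Z0-9]{4}-[A-Z0-9]{4}-[A-Z0-9]{4}-[A-Z0-9]{4}'
# → 0000-0002-1825-0097
```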
- Then I applied these updates on CGSpace
- Maria mentioned that she was getting a lot more items in her daily subscription emails
- I had a hunch it was related to me updating the `last_modified` timestamp after updating a bunch of countries, regions, etc. in items
- Then today I noticed this option in `dspace.cfg`: `eperson.subscription.onlynew`
- By default DSpace sends notifications for modified items too! I’ve disabled it now…
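For reference, this is the property as I understand it (the `true` value here is my assumption for “only notify about newly archived items”; check the comments in your `dspace.cfg` for the exact semantics):

```
# Only send subscription emails for newly archived items,
# not for items that were merely modified
eperson.subscription.onlynew = true
```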
- I applied 498 fixes and two deletions to affiliations sent by Peter
- I applied 206 fixes and eighty-one deletions to donors sent by Peter
- I tried to figure out how to authenticate to the DSpace 7 REST API
- First you need a CSRF token, before you can even try to authenticate
- Then you can authenticate, but I can’t get it to work:
```console
$ curl -v https://dspace7test.ilri.org/server/api
...
dspace-xsrf-token: 0b7861fb-9c8a-4eea-be70-b3be3bd0a0b4
...
$ curl -v -X POST --data "user=aorth@omg.com&password=myPassword" "https://dspace7test.ilri.org/server/authn/login" -H "X-XSRF-TOKEN: 0b7861fb-9c8a-4eea-be70-b3be3bd0a0b4"
```
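For what it’s worth, here is a sketch of the flow I’d expect to work based on the DSpace 7 REST contract: the XSRF token has to go back both as the cookie curl saved and as the `X-XSRF-TOKEN` header, and the login endpoint lives under `/server/api/authn/login` (the host and credentials below are placeholders):

```shell
# Sketch of the DSpace 7 login flow; on success the JWT comes back
# in the Authorization response header
dspace7_login() {
    local host=$1 user=$2 pass=$3 jar=/tmp/cookies.txt

    # 1. Any request sets the DSPACE-XSRF-COOKIE cookie and returns a
    #    matching dspace-xsrf-token header; capture both
    local token
    token=$(curl -s -c "$jar" -D - -o /dev/null "$host/server/api" \
        | tr -d '\r' | awk 'tolower($1) == "dspace-xsrf-token:" {print $2}')

    # 2. POST the credentials with the cookie jar AND the token header
    curl -s -b "$jar" -D - -o /dev/null \
        -H "X-XSRF-TOKEN: $token" \
        --data "user=$user&password=$pass" \
        "$host/server/api/authn/login" | grep -i '^authorization:'
}

# Usage: dspace7_login https://dspace7test.ilri.org aorth@omg.com myPassword
```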
2022-12-09
- I found a way to check the owner of a Handle prefix
2022-12-11
- I got LDAP authentication working on DSpace 7
2022-12-12
- Submit some issues to MEL GitHub:
- PRMS planning meeting before tomorrow’s meeting with researchers and submitters
2022-12-13
- I made some minor changes to csv-metadata-quality
- I switched to using the SPDX license data as a JSON directly from SPDX, instead of via the now-deprecated spdx-license-list package on pypi
- I exported the Initiatives collection to tag missing regions
- I submitted an issue to MEL GitHub:
- Submit a pull request to fix the Handle link in the Citizen Lab test URLs for Iran
- I had originally submitted this in 2018, but it seems someone updated the URL in 2020… hmmm
- I normalized the `text_lang` values on CGSpace again:
```console
dspace=# SELECT DISTINCT text_lang, count(text_lang) FROM metadatavalue WHERE dspace_object_id IN (SELECT uuid FROM item) GROUP BY text_lang ORDER BY count DESC;
 text_lang |  count
-----------+---------
 en_US     | 3050302
 en        |     618
           |     605
 fr        |       2
 vi        |       2
 es        |       1
           |       0
(7 rows)
dspace=# BEGIN;
BEGIN
dspace=# UPDATE metadatavalue SET text_lang='en_US' WHERE dspace_object_id IN (SELECT uuid FROM item) AND text_lang IN ('en', '', NULL);
UPDATE 1223
dspace=# COMMIT;
COMMIT
```
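One gotcha in the UPDATE above: `text_lang IN ('en', '', NULL)` never matches rows where `text_lang` is actually NULL, because `NULL IN (...)` evaluates to NULL rather than true (and `count(text_lang)` only counts non-NULL values, so the `0` in that last group hides how many such rows exist). The 1,223 rows updated are exactly the 618 `en` plus 605 empty-string ones. The defensive form would be something like:

```
UPDATE metadatavalue SET text_lang='en_US' WHERE dspace_object_id IN (SELECT uuid FROM item) AND (text_lang IN ('en', '') OR text_lang IS NULL);
```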
- I wrote an initial version of a script to map CGSpace items to Initiative collections based on their `cg.contributor.initiative` metadata
- I am still considering if I want to add a mode to un-map items that are mapped to collections, but do not have the corresponding metadata tag
2022-12-14
- Lots of work on PRMS related metadata issues with CGSpace
- We noticed that PRMS uses `cg.identifier.dataurl` for the FAIR score, but not `cg.identifier.url`
- We don’t use these consistently for datasets in CGSpace, so I decided to move the dataset links to the dataurl field; we will also ask the PRMS team to consider the normal URL field, as other external resources related to the knowledge product are commonly listed there
- I updated the `move-metadata-values.py` script to use the latest best practices from my other scripts and some of the helper functions from `util.py`
- Then I exported a list of text values pointing to Dataverse instances from `cg.identifier.url`:
```console
localhost/dspace= ☘ \COPY (SELECT text_value FROM metadatavalue WHERE dspace_object_id IN (SELECT uuid FROM item) AND metadata_field_id=219 AND (text_value LIKE '%persistentId%' OR text_value LIKE '%20.500.11766.1/%')) to /tmp/data.txt;
COPY 61
```
- Then I moved them to `cg.identifier.dataurl` on CGSpace:

```console
$ ./ilri/move-metadata-values.py -i /tmp/data.txt -db dspace -u dspace -p 'dom@in34sniper' -f cg.identifier.url -t cg.identifier.dataurl
```
- I still need to add a note to the CGSpace submission form to inform submitters about the correct field for dataset URLs
- I finalized work on my new `fix-initiative-mappings.py` script
- It has two modes:
- Check item metadata to see which Initiatives are tagged and then map the item if it is not yet mapped to the corresponding Initiative collection
- Check item collections to see which Initiatives are mapped and then unmap the item if the corresponding Initiative metadata is missing
- The second one is disabled by default until I can get more feedback from Abenet, Michael, and others
- After I applied a handful of collection mappings I started a harvest on AReS
2022-12-15
- I did some metadata quality checks on the Initiatives collection, adding some missing regions and removing a few duplicate ones