October, 2019
2019-10-01
Udana from IWMI asked me for a CSV export of their community on CGSpace
- I exported it, but a quick run through the csv-metadata-quality tool shows that there are some low-hanging fruits we can fix before I send him the data
- I will limit the scope to the titles, regions, subregions, and river basins for now to manually fix some non-breaking spaces (U+00A0) there that would otherwise be removed by the csv-metadata-quality script’s “unnecessary Unicode” fix:
$ csvcut -c 'id,dc.title[en_US],cg.coverage.region[en_US],cg.coverage.subregion[en_US],cg.river.basin[en_US]' ~/Downloads/10568-16814.csv > /tmp/iwmi-title-region-subregion-river.csv
Then I replace them in vim with the following, because I can’t figure out the correct sed syntax to do it directly from the pipe above:
:% s/\%u00a0/ /g
- I uploaded those to CGSpace and then re-exported the metadata
Now that I think about it, I shouldn’t be removing non-breaking spaces (U+00A0), I should be replacing them with normal spaces!
I modified the script so it replaces the non-breaking spaces instead of removing them
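For reference, a minimal sketch of that kind of fix in Python (this is not the actual csv-metadata-quality code, and the output filename is just an example):

```python
import csv

def fix_nbsp(value):
    # Replace non-breaking spaces (U+00A0) with normal spaces instead of removing them
    return value.replace('\u00a0', ' ') if value else value

with open('/tmp/iwmi-title-region-subregion-river.csv', newline='') as infile, \
     open('/tmp/iwmi-fixed.csv', 'w', newline='') as outfile:
    reader = csv.DictReader(infile)
    writer = csv.DictWriter(outfile, fieldnames=reader.fieldnames)
    writer.writeheader()
    for row in reader:
        writer.writerow({k: fix_nbsp(v) for k, v in row.items()})
```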
Then I ran the csv-metadata-quality script to do some general cleanups (though I temporarily commented out the whitespace fixes because it was too many thousands of rows):
$ csv-metadata-quality -i ~/Downloads/10568-16814.csv -o /tmp/iwmi.csv -x 'dc.date.issued,dc.date.issued[],dc.date.issued[en_US]' -u
That fixed 153 items (unnecessary Unicode, duplicates, comma–space fixes, etc)
Release version 0.3.1 of the csv-metadata-quality script with the non-breaking spaces change
2019-10-03
- Upload the 117 IITA records that we had been working on last month (aka 20196th.xls aka Sept 6) to CGSpace
2019-10-04
Create an account for Bioversity’s ICT consultant Francesco on DSpace Test:
$ dspace user -a -m blah@mail.it -g Francesco -s Vernocchi -p 'fffff'
Email Francesca and Carol to ask for follow up about the test upload I did on 2019-09-21
- I suggested that if they still want to do value addition of those records (like adding countries, regions, etc) that they could maybe do it after we migrate the records to CGSpace
- Carol responded to tell me where to map the items with type Brochure, Journal Item, and Thesis, so I applied them to the collection on DSpace Test
2019-10-06
- Hector from CCAFS responded about my feedback of their CLARISA API
- He made some fixes to the metadata values they are using based on my feedback and said they are happy if we would use it
- Gabriela from CIP asked me if it was possible to generate an RSS feed of items that have the CIP subject “POTATO AGRI-FOOD SYSTEMS”
- I notice that there is a similar term “SWEETPOTATO AGRI-FOOD SYSTEMS” so I had to come up with a way to exclude that using the boolean “AND NOT” in the OpenSearch query (a sketch is below)
- Again, the sort_by=3 parameter is the accession date, as configured in dspace.cfg
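A sketch of what that OpenSearch request could look like; the cipsubject index name is an assumption and would need to match whatever Discovery search filter CGSpace actually exposes for the CIP subject field, and format=rss requests RSS output instead of the default feed format:

```
https://cgspace.cgiar.org/open-search/discover?query=cipsubject:%22POTATO+AGRI-FOOD+SYSTEMS%22+AND+NOT+cipsubject:%22SWEETPOTATO+AGRI-FOOD+SYSTEMS%22&format=rss&sort_by=3
```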
2019-10-08
- Fix 108 more issues with authors in the ongoing Bioversity migration on DSpace Test, for example:
- Europeanooperative Programme for Plant Genetic Resources
- Bioversity International. Capacity Development Unit
- W.M. van der Heide, W.M., Tripp, R.
- Internationallant Genetic Resources Institute
- Start looking at duplicates in the Bioversity migration data on DSpace Test
- I’m keeping track of the originals and duplicates in a Google Docs spreadsheet that I will share with Bioversity
2019-10-09
- Continue working on identifying duplicates in the Bioversity migration
- I have been recording the originals and duplicates in a spreadsheet so I can map them later
- For now I am just reconciling any incorrect or missing metadata in the original items on CGSpace, deleting the duplicate from DSpace Test, and mapping the original to the correct place on CGSpace
- So far I have deleted thirty duplicates and mapped fourteen
- Run all system updates on DSpace Test (linode19) and reboot the server
2019-10-10
Felix Shaw from Earlham emailed me to ask about his admin account on DSpace Test
- His old one got lost when I re-sync’d DSpace Test with CGSpace a few weeks ago
I added a new account for him and added it to the Administrators group:
$ dspace user -a -m wow@me.com -g Felix -s Shaw -p 'fuananaaa'
2019-10-11
I ran the DSpace cleanup function on CGSpace and it found some errors:
$ dspace cleanup -v
...
Error: ERROR: update or delete on table "bitstream" violates foreign key constraint "bundle_primary_bitstream_id_fkey" on table "bundle"
  Detail: Key (bitstream_id)=(171221) is still referenced from table "bundle".
The solution, as always, is (repeat as many times as needed):
# su - postgres
$ psql dspace -c 'update bundle set primary_bitstream_id=NULL where primary_bitstream_id in (171221);'
UPDATE 1
2019-10-12
- More work on identifying duplicates in the Bioversity migration data on DSpace Test
- I mapped twenty-five more items on CGSpace and deleted them from the migration test collection on DSpace Test
- After a few hours I think I finished all the duplicates that were identified by Atmire’s Duplicate Checker module
- According to my spreadsheet there were fifty-two in total
I was preparing to check the affiliations on the Bioversity records when I noticed that the last list of top affiliations I generated has some anomalies
I made some corrections in a CSV:
from,to
CIAT,International Center for Tropical Agriculture
International Centre for Tropical Agriculture,International Center for Tropical Agriculture
International Maize and Wheat Improvement Center (CIMMYT),International Maize and Wheat Improvement Center
International Centre for Agricultural Research in the Dry Areas,International Center for Agricultural Research in the Dry Areas
International Maize and Wheat Improvement Centre,International Maize and Wheat Improvement Center
"Agricultural Information Resource Centre, Kenya.","Agricultural Information Resource Centre, Kenya"
"Centre for Livestock and Agricultural Development, Cambodia","Centre for Livestock and Agriculture Development, Cambodia"
Then I applied it with my fix-metadata-values.py script on CGSpace:
$ ./fix-metadata-values.py -i /tmp/affiliations.csv -db dspace -u dspace -p 'fuuu' -f from -m 211 -t to
I did some manual curation of about 300 authors in OpenRefine in preparation for telling Peter and Abenet that the migration is almost ready
- I would still like to perhaps (re)move institutional authors from dc.contributor.author to cg.contributor.affiliation, but I will have to run that by Francesca, Carol, and Abenet
- I could use a custom text facet like this in OpenRefine to find authors that likely match the “Last, F.” pattern:
isNotNull(value.match(/^.*, \p{Lu}\.?.*$/))
- The \p{Lu} is a cool regex character class to make sure this works for letters with accents (see the sketch after this list)
- As cool as that is, it’s actually more effective to just search for authors that have “.” in them!
- I’ve decided to add a cg.contributor.affiliation column to 1,025 items based on the logic above where the author name is not an actual person
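A minimal sketch of that pattern outside of OpenRefine, using Python’s third-party regex module (the standard re module doesn’t support \p{Lu}); the accented name is just a made-up example:

```python
import regex  # third-party module (pip install regex); supports \p{Lu}

# Authors matching the "Last, F." pattern are probably real people;
# anything that doesn't match is likely an institutional author
pattern = regex.compile(r'^.*, \p{Lu}\.?.*$')

names = [
    'Tripp, R.',
    'Sánchez, Á.',  # made-up example with an accented initial
    'Bioversity International. Capacity Development Unit',
]
for name in names:
    kind = 'person' if pattern.match(name) else 'institution'
    print(f'{kind}: {name}')
```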
2019-10-13
- More cleanup work on the authors in the Bioversity migration
- Now I sent the final feedback to Francesca, Carol, and Abenet
Peter is still seeing some authors listed with “|” in the “Top Authors” statistics for some collections
- I looked in some of the items that are listed and the author field does not contain those invalid separators
I decided to try doing a full Discovery re-indexing on CGSpace (linode18):
$ time schedtool -B -e ionice -c2 -n7 nice -n19 dspace index-discovery -b

real    82m35.993s
After the re-indexing the top authors still list the following:
Jagwe, J.|Ouma, E.A.|Brandes-van Dorresteijn, D.|Kawuma, Brian|Smith, J.
I looked in the database to find authors that had “|” in them:
dspace=# SELECT text_value, resource_id FROM metadatavalue WHERE resource_type_id=2 AND metadata_field_id=3 AND text_value LIKE '%|%';
            text_value            | resource_id
----------------------------------+-------------
 Anandajayasekeram, P.|Puskur, R. |         157
 Morales, J.|Renner, I.           |       22779
 Zahid, A.|Haque, M.A.            |       25492
(3 rows)
Then I found their handles and corrected them, for example:
dspacetest=# select handle from item, handle where handle.resource_id = item.item_id AND item.item_id = '157' and handle.resource_type_id=2;
  handle
-----------
 10568/129
(1 row)
So I’m still not sure where these weird authors in the “Top Author” stats are coming from
2019-10-14
- I talked to Peter about the Bioversity items and he said that we should add the institutional authors back to dc.contributor.author, because I had moved them to cg.contributor.affiliation
- Otherwise he said the data looks good
2019-10-15
I did a test export / import of the Bioversity migration items on DSpace Test
First export them:
$ export JAVA_OPTS='-Dfile.encoding=UTF-8 -Xmx512m'
$ mkdir 2019-10-15-Bioversity
$ dspace export -i 10568/108684 -t COLLECTION -m -n 0 -d 2019-10-15-Bioversity
$ sed -i '/<dcvalue element="identifier" qualifier="uri">/d' 2019-10-15-Bioversity/*/dublin_core.xml
It’s really stupid, but for some reason the handles are included even though I specified the -m option, so after the export I removed the dc.identifier.uri metadata values from the items
Then I imported a test subset of them in my local test environment:
$ ~/dspace/bin/dspace import -a -c 10568/104049 -e fuu@cgiar.org -m 2019-10-15-Bioversity.map -s /tmp/2019-10-15-Bioversity
I had forgotten (again) that the dspace export command doesn’t preserve collection ownership or mappings, so I will have to create a temporary collection on CGSpace to import these to, then do the mappings again after import…
On CGSpace I will increase the RAM of the command line Java process for good luck before import…
$ export JAVA_OPTS="-Dfile.encoding=UTF-8 -Xmx1024m"
$ dspace import -a -c 10568/104057 -e fuu@cgiar.org -m 2019-10-15-Bioversity.map -s 2019-10-15-Bioversity
After importing the 1,367 items I re-exported the metadata, changed the owning collections to those based on their type, then re-imported them
2019-10-21
- Re-sync the DSpace Test database and assetstore with CGSpace
- Run system updates on DSpace Test (linode19) and reboot it