2018-05-01
- I cleared the Solr statistics core on DSpace Test by issuing two commands directly to the Solr admin interface:
- Then I reduced the JVM heap size from 6144m back to 5120m
- Also, I switched it to use OpenJDK instead of Oracle Java, as well as re-worked the Ansible infrastructure scripts to support hosts choosing which distribution they want to use
2018-05-02
- Advise Fabio Fidanza about integrating CGSpace content in the new CGIAR corporate website
- I think they can mostly rely on using the `cg.contributor.crp` field
- Looking over some IITA records for Sisay
- Other than trimming and collapsing consecutive whitespace, I made some other corrections
- I need to check the correct formatting of COTE D'IVOIRE (straight apostrophe, 0x0027) vs COTE D’IVOIRE (right single quotation mark, 0x2019)
- I replaced all DOIs with HTTPS
- I checked a few DOIs and found at least one that was missing, so I Googled the title of the paper and found the correct DOI
- Also, I found an FAQ for DOI that says the `dx.doi.org` syntax is older, so I will replace all the DOIs with `doi.org` instead
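A bulk replacement like this could be sketched with GNU sed (the sample file below is made up — the real edits were made on the metadata export):

```shell
# Sketch: normalize legacy dx.doi.org links (http or https) to https://doi.org
# using GNU sed. The sample file stands in for the real metadata export.
printf 'http://dx.doi.org/10.1016/j.cropro.2008.07.003\n' > /tmp/dois.txt
sed -i 's|https\?://dx\.doi\.org|https://doi.org|g' /tmp/dois.txt
cat /tmp/dois.txt
# → https://doi.org/10.1016/j.cropro.2008.07.003
```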
- I found five records with “ISI Jounal” instead of “ISI Journal”
- I found one item with IITA subject “.”
- Need to remember to check the facets for things like this in sponsorship:
  - Deutsche Gesellschaft für Internationale Zusammenarbeit
  - Deutsche Gesellschaft fur Internationale Zusammenarbeit
- Eight records with language “fn” instead of “fr”
- One incorrect type (lowercase “proceedings”): Conference proceedings
- Found some capitalized CRPs in `cg.contributor.crp`
- Found some incorrect author affiliations, i.e., “Institut de Recherche pour le Developpement Agricolc” should be “Institut de Recherche pour le Developpement Agricole”
- Wow, and for sponsors there are the following:
  - Incorrect: Flemish Agency for Development Cooperation and Technical Assistance
  - Incorrect: Flemish Organization for Development Cooperation and Technical Assistance
  - Correct: Flemish Association for Development Cooperation and Technical Assistance
- One item had region “WEST” (I corrected it to “WEST AFRICA”)
2018-05-03
- It turns out that the IITA records that I was helping Sisay with in March were imported in 2018-04 without a final check by Abenet or me
- There are lots of errors on language, CRP, and even some encoding errors on abstract fields
- I export them and include the hidden metadata fields like `dc.date.accessioned` so I can filter the ones from 2018-04 and correct them in OpenRefine:
$ dspace metadata-export -a -f /tmp/iita.csv -i 10568/68616
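The date filtering itself happened in OpenRefine, but the idea can be sketched on a toy CSV (a real DSpace export has quoted, comma-containing fields, so use OpenRefine or a CSV-aware tool there):

```shell
# Toy sketch of filtering rows accessioned in 2018-04. The sample CSV is
# made up; real exports need a CSV-aware tool because fields contain commas.
cat > /tmp/iita-sample.csv <<'EOF'
id,dc.date.accessioned,dc.title
1,2018-04-18T10:11:12Z,First item
2,2018-01-05T09:08:07Z,Second item
EOF
awk -F',' 'NR==1 || $2 ~ /^2018-04/' /tmp/iita-sample.csv
# → prints the header plus the 2018-04 row
```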
- Abenet sent a list of 46 ORCID identifiers for ILRI authors so I need to get their names using my resolve-orcids.py script and merge them into our controlled vocabulary
- On the messed up IITA records from 2018-04 I see sixty DOIs in an incorrect format (`cg.identifier.doi`)
2018-05-06
- Fixing the IITA records from Sisay, sixty DOIs have a completely invalid format like `http:dx.doi.org10.1016j.cropro.2008.07.003`
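The corrections themselves aren't recorded here, but one class of the mangling (the dropped slashes) could be repaired with sed, assuming the common `10.NNNN/` prefix pattern:

```shell
# Sketch: restore the scheme and the slash after the DOI prefix for one
# pattern of mangled DOI. Only handles prefixes that look like 10.NNNN.
echo 'http:dx.doi.org10.1016j.cropro.2008.07.003' |
  sed -e 's|^http:dx\.doi\.org|https://doi.org/|' \
      -e 's|\(10\.[0-9]\{4,\}\)|\1/|'
# → https://doi.org/10.1016/j.cropro.2008.07.003
```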
- I corrected all the DOIs and then checked them for validity with a quick bash loop:
$ for line in $(< /tmp/links.txt); do echo $line; http --print h $line; done
- Most of the links are good, though one is a duplicate and one seems to be incorrect even on the publisher’s site so…
- Also, there are some duplicates:
  - `10568/92241` and `10568/92230` (same DOI)
  - `10568/92151` and `10568/92150` (same ISBN)
  - `10568/92291` and `10568/92286` (same citation, title, authors, year)
- Messed up abstracts:
- Fixed some issues in regions, countries, sponsors, ISSN, and cleaned whitespace errors from citation, abstract, author, and titles
- Fixed all issues with CRPs
- A few more interesting Unicode characters to look for in text fields like author, abstracts, and citations might be: `’` (0x2019), `·` (0x00b7), and `€` (0x20ac)
- A custom text facet in OpenRefine with this GREL expression could be good for finding invalid characters or encoding errors in authors, abstracts, etc:
or(
isNotNull(value.match(/.*[(|)].*/)),
isNotNull(value.match(/.*\uFFFD.*/)),
isNotNull(value.match(/.*\u00A0.*/)),
isNotNull(value.match(/.*\u200A.*/)),
isNotNull(value.match(/.*\u2019.*/)),
isNotNull(value.match(/.*\u00b7.*/)),
isNotNull(value.match(/.*\u20ac.*/))
)
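Outside OpenRefine, a rough grep equivalent can scan an export for the same characters by matching their raw UTF-8 byte sequences (the sample file is made up):

```shell
# Rough shell equivalent of the GREL facet: flag lines containing U+FFFD,
# U+00A0, U+200A, U+2019, U+00B7, or U+20AC. The sample data is made up.
printf 'Clean title\nSmart\xe2\x80\x99quote in an abstract\n' > /tmp/fields.txt
bad=$(printf '\xef\xbf\xbd\xc2\xa0\xe2\x80\x8a\xe2\x80\x99\xc2\xb7\xe2\x82\xac')
grep -n "[$bad]" /tmp/fields.txt
# → 2:Smart’quote in an abstract
```

In a non-UTF-8 locale the bracket expression matches individual bytes rather than characters, which can over-flag other non-ASCII text (like legitimate accents), so treat hits as candidates to eyeball rather than definite errors.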
- I found some more IITA records that Sisay imported on 2018-03-23 that have invalid CRP names, so now I kinda want to check those ones!
- Combine the ORCID identifiers Abenet sent with our existing list and resolve their names using the resolve-orcids.py script:
$ cat ~/src/git/DSpace/dspace/config/controlled-vocabularies/cg-creator-id.xml /tmp/ilri-orcids.txt | grep -oE '[A-Z0-9]{4}-[A-Z0-9]{4}-[A-Z0-9]{4}-[A-Z0-9]{4}' | sort | uniq > /tmp/2018-05-06-combined.txt
$ ./resolve-orcids.py -i /tmp/2018-05-06-combined.txt -o /tmp/2018-05-06-combined-names.txt -d
# sort names, copy to cg-creator-id.xml, add XML formatting, and then format with tidy (preserving accents)
$ tidy -xml -utf8 -iq -m -w 0 dspace/config/controlled-vocabularies/cg-creator-id.xml
- I made a pull request (#373) for this that I’ll merge some time next week (I’m expecting Atmire to get back to us about DSpace 5.8 soon)
- After testing quickly I just decided to merge it, and I noticed that I don’t even need to restart Tomcat for the changes to get loaded
2018-05-07
- I spent a bit of time playing with conciliator and Solr, trying to figure out how to reconcile columns in OpenRefine with data in our existing Solr cores (like CRP subjects)
- The documentation regarding the Solr stuff is limited, and I cannot figure out what all the fields in `conciliator.properties` are supposed to be
- But then I found reconcile-csv, which allows you to reconcile against values in a CSV file!
- That, combined with splitting our multi-value fields on “||” in OpenRefine is amaaaaazing, because after reconciliation you can just join them again
- Oh wow, you can also facet on the individual values once you’ve split them! That’s going to be amazing for proofing CRPs, subjects, etc.
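The round trip for a single multi-value cell looks roughly like this in shell (OpenRefine does the same with its split/join multi-valued cells operations; `sort` here is just a stand-in for the per-value reconciliation step, and the CRP values are examples):

```shell
# Split a multi-value field on "||", process each value, join back with "||".
# `sort` stands in for the per-value reconciliation step.
echo 'Humidtropics||WLE||CCAFS' |
  tr '|' '\n' | sed '/^$/d' |
  sort |
  paste -sd'|' - | sed 's/|/||/g'
# → CCAFS||Humidtropics||WLE
```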