- Then I reduced the JVM heap size from 6144m back to 5120m
- Also, I switched it to use OpenJDK instead of Oracle Java, as well as re-worked the [Ansible infrastructure scripts](https://github.com/ilri/rmg-ansible-public) to support hosts choosing which distribution they want to use
- Advise Fabio Fidanza about integrating CGSpace content in the new CGIAR corporate website
- I think they can mostly rely on using the `cg.contributor.crp` field
- Looking over some IITA records for Sisay
- Other than trimming and collapsing consecutive whitespace, I made some other corrections
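- The whitespace cleanup could be sketched in Python (a hypothetical helper, not part of any DSpace or OpenRefine tooling):

```python
import re

def clean_whitespace(value: str) -> str:
    """Trim leading/trailing whitespace and collapse runs of internal
    whitespace (including non-breaking spaces) to a single space."""
    # In Python 3, \s already matches Unicode whitespace like U+00A0,
    # but we list it explicitly for clarity
    return re.sub(r"[\s\u00a0]+", " ", value).strip()

print(clean_whitespace("  Maize   yields\u00a0in  Nigeria "))
# → Maize yields in Nigeria
```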
- I need to check the correct formatting of COTE D'IVOIRE vs COTE D’IVOIRE
- I replaced all DOIs with HTTPS
- I checked a few DOIs and found at least one that was missing, so I Googled the title of the paper and found the correct DOI
- Also, I found an [FAQ for DOI that says the `dx.doi.org` syntax is older](https://www.doi.org/factsheets/DOI_PURL.html), so I will replace all the DOIs with `doi.org` instead
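- The rewrite could be sketched like this (a hypothetical normalizer, not an existing script):

```python
import re

def normalize_doi(url: str) -> str:
    """Rewrite http://dx.doi.org/... and similar variants to the
    canonical https://doi.org/... form recommended by the DOI FAQ."""
    match = re.match(r"https?://(?:dx\.)?doi\.org/(.+)", url)
    if match:
        return "https://doi.org/" + match.group(1)
    return url  # leave non-DOI values untouched

print(normalize_doi("http://dx.doi.org/10.1016/j.cropro.2008.07.003"))
# → https://doi.org/10.1016/j.cropro.2008.07.003
```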
- I found five records with "ISI Jounal" instead of "ISI Journal"
- I found one item with IITA subject "."
- Need to remember to check the facets for things like this in sponsorship:
- Deutsche Gesellschaft für Internationale Zusammenarbeit
- Deutsche Gesellschaft fur Internationale Zusammenarbeit
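- One way to flag such pairs programmatically is to strip diacritics and group values by the folded form (a sketch, with the two variants above as sample data):

```python
import unicodedata
from collections import defaultdict

def fold(value: str) -> str:
    """Remove diacritics so 'für' and 'fur' compare equal."""
    decomposed = unicodedata.normalize("NFKD", value)
    return "".join(c for c in decomposed if not unicodedata.combining(c))

sponsors = [
    "Deutsche Gesellschaft für Internationale Zusammenarbeit",
    "Deutsche Gesellschaft fur Internationale Zusammenarbeit",
]
groups = defaultdict(set)
for sponsor in sponsors:
    groups[fold(sponsor)].add(sponsor)

# Any folded key with more than one original spelling is a candidate duplicate
for variants in groups.values():
    if len(variants) > 1:
        print(sorted(variants))
```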
- Eight records with language "fn" instead of "fr"
- One incorrect type (lowercase "proceedings"): Conference proceedings
- Found some capitalized CRPs in `cg.contributor.crp`
- Found some incorrect author affiliations, i.e. "Institut de Recherche pour le Developpement Agricolc" should be "Institut de Recherche pour le Developpement *Agricole*"
- Wow, and for sponsors there are the following:
- Incorrect: Flemish Agency for Development Cooperation and Technical Assistance
- Incorrect: Flemish Organization for Development Cooperation and Technical Assistance
- Correct: Flemish *Association* for Development Cooperation and Technical Assistance
- One item had region "WEST" (I corrected it to "WEST AFRICA")
- I export them and include the hidden metadata fields like `dc.date.accessioned` so I can filter the ones from 2018-04 and correct them in OpenRefine:

```
$ dspace metadata-export -a -f /tmp/iita.csv -i 10568/68616
```
- Abenet sent a list of 46 ORCID identifiers for ILRI authors so I need to get their names using my [resolve-orcids.py](https://gist.github.com/alanorth/57a88379126d844563c1410bd7b8d12b) script and merge them into our controlled vocabulary
- On the messed up IITA records from 2018-04 I see sixty DOIs in incorrect format (`cg.identifier.doi`)
## 2018-05-06
- Fixing the IITA records from Sisay, sixty DOIs have completely invalid format like `http:dx.doi.org10.1016j.cropro.2008.07.003`
- I corrected all the DOIs and then checked them for validity with a quick bash loop:
```
$ for line in $(< /tmp/links.txt); do echo $line; http --print h $line; done
```
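- A quick way to flag DOIs in this broken form, rather than fix them automatically (the missing slashes can't always be reinserted unambiguously), might be a validity regex like this sketch:

```python
import re

# A well-formed DOI URL: scheme, doi.org host, then 10.<registrant>/<suffix>
VALID_DOI = re.compile(r"^https?://(?:dx\.)?doi\.org/10\.\d{4,9}/\S+$")

dois = [
    "http:dx.doi.org10.1016j.cropro.2008.07.003",    # broken: missing // and /
    "https://doi.org/10.1016/j.cropro.2008.07.003",  # well formed
]
for doi in dois:
    status = "OK" if VALID_DOI.match(doi) else "INVALID"
    print(f"{status}: {doi}")
```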
- Most of the links are good, though one is a duplicate and one seems to be incorrect even on the publisher's site, so...
- Also, there are some duplicates:
  - `10568/92241` and `10568/92230` (same DOI)
  - `10568/92151` and `10568/92150` (same ISBN)
  - `10568/92291` and `10568/92286` (same citation, title, authors, year)
- Messed up abstracts:
  - `10568/92309`
- Fixed some issues in regions, countries, sponsors, ISSN, and cleaned whitespace errors from citation, abstract, author, and titles
- Fixed all issues with CRPs
- A few more interesting Unicode characters to look for in text fields like author, abstracts, and citations might be: `’` (0x2019), `·` (0x00b7), and `€` (0x20ac)
- A custom text facet in OpenRefine with this GREL expression could be good for finding invalid characters or encoding errors in authors, abstracts, etc:
```
or(
isNotNull(value.match(/.*[(|)].*/)),
isNotNull(value.match(/.*\uFFFD.*/)),
isNotNull(value.match(/.*\u00A0.*/)),
isNotNull(value.match(/.*\u200A.*/)),
isNotNull(value.match(/.*\u2019.*/)),
isNotNull(value.match(/.*\u00b7.*/)),
isNotNull(value.match(/.*\u20ac.*/))
)
```
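- The same character check could also be run outside OpenRefine, e.g. over values from a metadata export, with a Python sketch like this (sample values are hypothetical):

```python
import re

# The same suspicious characters as the GREL facet above, as one character class
SUSPECT = re.compile("[()|\ufffd\u00a0\u200a\u2019\u00b7\u20ac]")

values = [
    "Institut de Recherche pour le Developpement Agricole",  # clean
    "maize\u00a0yields",           # non-breaking space
    "farmers\u2019 preferences",   # right single quotation mark
]
flagged = [v for v in values if SUSPECT.search(v)]
print(flagged)
```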
- I found some more IITA records that Sisay imported on 2018-03-23 that have invalid CRP names, so now I kinda want to check those ones!
- Combine the ORCID identifiers Abenet sent with our existing list and resolve their names using the [resolve-orcids.py](https://gist.github.com/alanorth/57a88379126d844563c1410bd7b8d12b) script:
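- The combining step (before resolving names) is just a de-duplicating merge of the two lists; a sketch with hypothetical sample identifiers:

```python
def merge_orcids(existing: list[str], new: list[str]) -> list[str]:
    """Combine two lists of ORCID identifiers, dropping duplicates
    while keeping the original order (existing entries first)."""
    seen = set()
    merged = []
    for orcid in existing + new:
        if orcid not in seen:
            seen.add(orcid)
            merged.append(orcid)
    return merged

existing = ["0000-0002-1825-0097", "0000-0001-5109-3700"]
new = ["0000-0001-5109-3700", "0000-0002-1694-233X"]  # one overlap
print(merge_orcids(existing, new))
# → ['0000-0002-1825-0097', '0000-0001-5109-3700', '0000-0002-1694-233X']
```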