- It turns out that the IITA records that I was helping Sisay with in March were imported in 2018-04 without a final check by Abenet or me
- There are lots of errors in the language and CRP fields, and even some encoding errors in abstracts
- I export them, including the hidden metadata fields like `dc.date.accessioned`, so I can filter the ones from 2018-04 and correct them in OpenRefine:
```
$ dspace metadata-export -a -f /tmp/iita.csv -i 10568/68616
```
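The filtering step could be sketched in Python — a minimal sketch assuming the export has a `dc.date.accessioned` column (DSpace's exact CSV headers may differ), with inline sample rows standing in for `/tmp/iita.csv`:

```python
import csv
import io

# Inline sample standing in for the real export at /tmp/iita.csv
# (the exact column header is an assumption about the DSpace export)
sample = io.StringIO(
    "id,dc.date.accessioned,dc.title\n"
    "10568/1,2018-04-02T10:00:00Z,First record\n"
    "10568/2,2017-11-20T09:00:00Z,Older record\n"
)

# Keep only rows accessioned in April 2018
rows = [r for r in csv.DictReader(sample)
        if r["dc.date.accessioned"].startswith("2018-04")]
print([r["id"] for r in rows])  # ['10568/1']
```

In OpenRefine itself the equivalent is a text filter on the `dc.date.accessioned` column.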
- Abenet sent a list of 46 ORCID identifiers for ILRI authors so I need to get their names using my [resolve-orcids.py](https://gist.github.com/alanorth/57a88379126d844563c1410bd7b8d12b) script and merge them into our controlled vocabulary
- On the messed up IITA records from 2018-04 I see sixty DOIs in an incorrect format (`cg.identifier.doi`)
## 2018-05-06
- Fixing the IITA records from Sisay; sixty DOIs have a completely invalid format like `http:dx.doi.org10.1016j.cropro.2008.07.003`
- I corrected all the DOIs and then checked them for validity with a quick bash loop:
```
$ for line in $(< /tmp/links.txt); do echo $line; http --print h $line; done
```
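Before (or instead of) hitting the network, a cheap offline syntax check can catch mangled DOIs like the one above — a sketch, where the regex is my rough approximation of a DOI URL's shape, not the official rules:

```python
import re

# Approximate shape of a DOI URL: resolver host, then 10.<registrant>/<suffix>
DOI_URL = re.compile(r"^https?://(dx\.)?doi\.org/10\.\d{4,9}/\S+$")

def looks_like_doi_url(url: str) -> bool:
    """Cheap syntactic check; True still says nothing about resolvability."""
    return bool(DOI_URL.match(url))

print(looks_like_doi_url("https://dx.doi.org/10.1016/j.cropro.2008.07.003"))  # True
print(looks_like_doi_url("http:dx.doi.org10.1016j.cropro.2008.07.003"))       # False
```

Anything failing the syntax check is broken for sure; anything passing still needs the HTTP check above to confirm it resolves.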
- Most of the links are good, though one is a duplicate and one even seems to be incorrect on the publisher's site, so...
- Also, there are some duplicates:
- `10568/92241` and `10568/92230` (same DOI)
- `10568/92151` and `10568/92150` (same ISBN)
- `10568/92291` and `10568/92286` (same citation, title, authors, year)
- Messed up abstracts:
- `10568/92309`
- Fixed some issues in regions, countries, sponsors, ISSN, and cleaned whitespace errors from citation, abstract, author, and titles
- Fixed all issues with CRPs
- A few more interesting Unicode characters to look for in text fields like author, abstracts, and citations might be: `’` (0x2019), `·` (0x00b7), and `€` (0x20ac)
- A custom text facet in OpenRefine with this GREL expression could be good for finding invalid characters or encoding errors in authors, abstracts, etc:
```
or(
isNotNull(value.match(/.*[(|)].*/)),
isNotNull(value.match(/.*\uFFFD.*/)),
isNotNull(value.match(/.*\u00A0.*/)),
isNotNull(value.match(/.*\u200A.*/)),
isNotNull(value.match(/.*\u2019.*/)),
isNotNull(value.match(/.*\u00b7.*/)),
isNotNull(value.match(/.*\u20ac.*/))
)
```
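The same scan can be run outside OpenRefine too — a Python sketch over the same code points the GREL facet checks (the sample strings are made up):

```python
# Code points from the facet: parens/pipe, replacement character, no-break
# space, hair space, right single quote, middle dot, euro sign
SUSPECT = set("(|)\ufffd\u00a0\u200a\u2019\u00b7\u20ac")

def suspicious(text: str) -> bool:
    """True if the value contains any of the characters the facet flags."""
    return any(ch in SUSPECT for ch in text)

# Made-up sample values
print(suspicious("Farmers\u2019 preferences"))  # True  (curly apostrophe)
print(suspicious("Maize yield in Nigeria"))     # False
print(suspicious("Caf\u00e9"))                  # False (legitimate accents pass)
```

Note that a hit only means "look at this value" — some of these characters are fine in abstracts and only suspicious in author fields.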
- I found some more IITA records that Sisay imported on 2018-03-23 that have invalid CRP names, so now I kinda want to check those too!
- Combine the ORCID identifiers Abenet sent with our existing list and resolve their names using the [resolve-orcids.py](https://gist.github.com/alanorth/57a88379126d844563c1410bd7b8d12b) script:
```
$ cat ~/src/git/DSpace/dspace/config/controlled-vocabularies/cg-creator-id.xml /tmp/ilri-orcids.txt | grep -oE '[A-Z0-9]{4}-[A-Z0-9]{4}-[A-Z0-9]{4}-[A-Z0-9]{4}' | sort | uniq > /tmp/2018-05-06-combined.txt
$ ./resolve-orcids.py -i /tmp/2018-05-06-combined.txt -o /tmp/2018-05-06-combined-names.txt -d
# sort names, copy to cg-creator-id.xml, add XML formatting, and then format with tidy (preserving accents)
$ tidy -xml -utf8 -iq -m -w 0 dspace/config/controlled-vocabularies/cg-creator-id.xml
```
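The grep/sort/uniq combine step could equivalently be sketched in Python (the inline sample text is illustrative, standing in for the XML vocabulary plus the new list; note real ORCID iDs may end in "X", which the character class in the grep above also allows):

```python
import re

# Same shape as the grep pattern above: four hyphenated groups,
# with the final character allowed to be a digit or "X"
ORCID = re.compile(r"\d{4}-\d{4}-\d{4}-\d{3}[\dX]")

# Inline sample standing in for cg-creator-id.xml plus the new ORCID list
sample = """
<node id="0000-0002-1825-0097" label="Josiah Carberry"/>
0000-0002-1825-0097
0000-0001-5109-3700
"""

# Extract, deduplicate, and sort — same effect as grep -oE | sort | uniq
combined = sorted(set(ORCID.findall(sample)))
print(combined)  # ['0000-0001-5109-3700', '0000-0002-1825-0097']
```

The deduplicated list is what then gets fed to resolve-orcids.py to look up display names.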