- It turns out that the IITA records I was helping Sisay with in March were imported in 2018-04 without a final check by Abenet or me
- There are lots of errors on language, CRP, and even some encoding errors in abstract fields
- I export them, including hidden metadata fields like `dc.date.accessioned`, so I can filter the ones from 2018-04 and correct them in OpenRefine (a quick filtering sketch follows the command below):

```
$ dspace metadata-export -a -f /tmp/iita.csv -i 10568/68616
```

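- To isolate just the 2018-04 rows before loading them into OpenRefine, something like csvkit's `csvgrep` could work on the export (a sketch, assuming csvkit is installed and that the exported header is literally `dc.date.accessioned` — DSpace sometimes adds a language qualifier to column names; `csvgrep -m` does substring matching):

```
# keep only rows whose accession date contains "2018-04"
$ csvgrep -c 'dc.date.accessioned' -m '2018-04' /tmp/iita.csv > /tmp/iita-2018-04.csv
```
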
- Abenet sent a list of 46 ORCID identifiers for ILRI authors so I need to get their names using my [resolve-orcids.py](https://gist.github.com/alanorth/57a88379126d844563c1410bd7b8d12b) script and merge them into our controlled vocabulary
- On the messed up IITA records from 2018-04 I see sixty DOIs in an incorrect format (`cg.identifier.doi`)
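- One way to surface badly formatted DOIs like these from the export might be to pull out that column and inverse-match on a proper doi.org URL prefix (a sketch with csvkit again; the unqualified `cg.identifier.doi` header name is an assumption):

```
# print distinct non-empty DOI values that do not start with a proper doi.org URL
$ csvcut -c 'cg.identifier.doi' /tmp/iita.csv | tail -n +2 | grep . | grep -vE '^https?://(dx\.)?doi\.org/10\.' | sort -u
```
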
## 2018-05-06

- Fixing the IITA records from Sisay: sixty DOIs have a completely invalid format like `http:dx.doi.org10.1016j.cropro.2008.07.003`
- I corrected all the DOIs and then checked them for validity with a quick bash loop:

```
$ for line in $(< /tmp/links.txt); do echo $line; http --print h $line; done
```

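- If HTTPie isn't handy, a curl variant of the same check might look like this (it prints the final HTTP status code next to each URL; `-L` follows the doi.org redirects):

```
# discard the body (-o /dev/null), stay quiet (-s), follow redirects (-L), print the final status code
$ while read -r url; do printf '%s %s\n' "$(curl -o /dev/null -sL -w '%{http_code}' "$url")" "$url"; done < /tmp/links.txt
```
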
- Most of the links are good, though one is a duplicate and one even seems to be incorrect on the publisher's site so…
- Also, there are some duplicates:
  - `10568/92241` and `10568/92230` (same DOI)
  - `10568/92151` and `10568/92150` (same ISBN)
  - `10568/92291` and `10568/92286` (same citation, title, authors, year)
- Messed up abstracts:
  - `10568/92309`
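- Duplicates like these can be spotted from the export by looking for repeated values in a column (a sketch, again assuming csvkit and the unqualified `cg.identifier.doi` header):

```
# print any non-empty DOI value that occurs on more than one row
$ csvcut -c 'cg.identifier.doi' /tmp/iita.csv | tail -n +2 | grep . | sort | uniq -d
```
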
- Fixed some issues in regions, countries, sponsors, and ISSNs, and cleaned whitespace errors from citation, abstract, author, and title fields
- Fixed all issues with CRPs
- A few more interesting Unicode characters to look for in text fields like authors, abstracts, and citations might be: `’` (0x2019), `·` (0x00b7), and `€` (0x20ac)
- A custom text facet in OpenRefine with this GREL expression could be good for finding invalid characters or encoding errors in authors, abstracts, etc. (a grep equivalent is sketched after the block):

```
or(
  isNotNull(value.match(/.*[(|)].*/)),
  isNotNull(value.match(/.*\uFFFD.*/)),
  isNotNull(value.match(/.*\u00A0.*/)),
  isNotNull(value.match(/.*\u200A.*/)),
  isNotNull(value.match(/.*\u2019.*/)),
  isNotNull(value.match(/.*\u00b7.*/)),
  isNotNull(value.match(/.*\u20ac.*/))
)
```

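- The same scan can be approximated outside OpenRefine with grep, assuming a GNU grep built with PCRE support (`-P`) and a UTF-8 locale:

```
# show lines (with line numbers) containing the replacement character,
# non-breaking/hair spaces, or the suspicious punctuation above
$ grep -nP '\x{FFFD}|\x{00A0}|\x{200A}|\x{2019}|\x{00B7}|\x{20AC}' /tmp/iita.csv
```
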
- I found some more IITA records that Sisay imported on 2018-03-23 that have invalid CRP names, so now I kinda want to check those too!
- Combine the ORCID identifiers Abenet sent with our existing list and resolve their names using the [resolve-orcids.py](https://gist.github.com/alanorth/57a88379126d844563c1410bd7b8d12b) script:

```
$ cat ~/src/git/DSpace/dspace/config/controlled-vocabularies/cg-creator-id.xml /tmp/ilri-orcids.txt | grep -oE '[A-Z0-9]{4}-[A-Z0-9]{4}-[A-Z0-9]{4}-[A-Z0-9]{4}' | sort | uniq > /tmp/2018-05-06-combined.txt
$ ./resolve-orcids.py -i /tmp/2018-05-06-combined.txt -o /tmp/2018-05-06-combined-names.txt -d
# sort names, copy to cg-creator-id.xml, add XML formatting, and then format with tidy (preserving accents)
$ tidy -xml -utf8 -iq -m -w 0 dspace/config/controlled-vocabularies/cg-creator-id.xml
```