<!DOCTYPE html>
<html lang="en" >
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no">
<meta property="og:title" content="March, 2021" />
<meta property="og:description" content="2021-03-01
Discuss some OpenRXV issues with Abdullah from CodeObia
He&rsquo;s trying to work on the DSpace 6&#43; metadata schema autoimport using the DSpace 6&#43; REST API
Also, we found some issues building and running OpenRXV currently due to ecosystem shift in the Node.js dependencies
" />
<meta property="og:type" content="article" />
<meta property="og:url" content="https://alanorth.github.io/cgspace-notes/2021-03/" />
<meta property="article:published_time" content="2021-03-01T10:13:54+02:00" />
<meta property="article:modified_time" content="2021-03-23T09:34:40+02:00" />
<meta name="twitter:card" content="summary"/>
<meta name="twitter:title" content="March, 2021"/>
<meta name="twitter:description" content="2021-03-01
Discuss some OpenRXV issues with Abdullah from CodeObia
He&rsquo;s trying to work on the DSpace 6&#43; metadata schema autoimport using the DSpace 6&#43; REST API
Also, we found some issues building and running OpenRXV currently due to ecosystem shift in the Node.js dependencies
"/>
<meta name="generator" content="Hugo 0.82.0" />
<script type="application/ld+json">
{
"@context": "http://schema.org",
"@type": "BlogPosting",
"headline": "March, 2021",
"url": "https://alanorth.github.io/cgspace-notes/2021-03/",
"wordCount": "2914",
"datePublished": "2021-03-01T10:13:54+02:00",
"dateModified": "2021-03-23T09:34:40+02:00",
"author": {
"@type": "Person",
"name": "Alan Orth"
},
"keywords": "Notes"
}
</script>
<link rel="canonical" href="https://alanorth.github.io/cgspace-notes/2021-03/">
<title>March, 2021 | CGSpace Notes</title>
<!-- combined, minified CSS -->
<link href="https://alanorth.github.io/cgspace-notes/css/style.beb8012edc08ba10be012f079d618dc243812267efe62e11f22fe49618f976a4.css" rel="stylesheet" integrity="sha256-vrgBLtwIuhC&#43;AS8HnWGNwkOBImfv5i4R8i/klhj5dqQ=" crossorigin="anonymous">
<!-- minified Font Awesome for SVG icons -->
<script defer src="https://alanorth.github.io/cgspace-notes/js/fontawesome.min.ffbfea088a9a1666ec65c3a8cb4906e2a0e4f92dc70dbbf400a125ad2422123a.js" integrity="sha256-/7/qCIqaFmbsZcOoy0kG4qDk&#43;S3HDbv0AKElrSQiEjo=" crossorigin="anonymous"></script>
<!-- RSS 2.0 feed -->
</head>
<body>
<div class="blog-masthead">
<div class="container">
<nav class="nav blog-nav">
<a class="nav-link " href="https://alanorth.github.io/cgspace-notes/">Home</a>
</nav>
</div>
</div>
<header class="blog-header">
<div class="container">
<h1 class="blog-title" dir="auto"><a href="https://alanorth.github.io/cgspace-notes/" rel="home">CGSpace Notes</a></h1>
<p class="lead blog-description" dir="auto">Documenting day-to-day work on the <a href="https://cgspace.cgiar.org">CGSpace</a> repository.</p>
</div>
</header>
<div class="container">
<div class="row">
<div class="col-sm-8 blog-main">
<article class="blog-post">
<header>
<h2 class="blog-post-title" dir="auto"><a href="https://alanorth.github.io/cgspace-notes/2021-03/">March, 2021</a></h2>
<p class="blog-post-meta">
<time datetime="2021-03-01T10:13:54+02:00">Mon Mar 01, 2021</time>
in
<span class="fas fa-folder" aria-hidden="true"></span>&nbsp;<a href="/cgspace-notes/categories/notes/" rel="category tag">Notes</a>
</p>
</header>
<h2 id="2021-03-01">2021-03-01</h2>
<ul>
<li>Discuss some OpenRXV issues with Abdullah from CodeObia
<ul>
<li>He&rsquo;s trying to work on the DSpace 6+ metadata schema autoimport using the DSpace 6+ REST API</li>
<li>Also, we found some issues building and running OpenRXV currently due to ecosystem shift in the Node.js dependencies</li>
</ul>
</li>
</ul>
<h2 id="2021-03-02">2021-03-02</h2>
<ul>
<li>I fixed three build and runtime issues in OpenRXV:
<ul>
<li><a href="https://github.com/ilri/OpenRXV/pull/80">fix highcharts-angular and ngx-tour-core build</a></li>
<li><a href="https://github.com/ilri/OpenRXV/pull/82">frontend/package.json: Pin @types/ramda at 0.27.34</a></li>
</ul>
</li>
<li>Then I merged a few fixes that Abdullah had worked on last week</li>
</ul>
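<ul>
<li>For reference, the equivalent npm invocation for the <code>@types/ramda</code> pin would be something like this (a sketch; the pin may simply have been made by editing <code>frontend/package.json</code> by hand):</li>
</ul>
<pre><code class="language-console" data-lang="console">$ cd frontend
$ npm install --save-exact @types/ramda@0.27.34
# add -D / --save-dev if the package lives under devDependencies
</code></pre>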
<h2 id="2021-03-03">2021-03-03</h2>
<ul>
<li>I <a href="https://github.com/ilri/OpenRXV/issues/83">fixed another frontend build warning on OpenRXV</a></li>
<li>Then I <a href="https://github.com/ilri/OpenRXV/pull/84">updated the frontend container to use Node.js 12 and Ubuntu 20.04</a></li>
<li>Also, I <a href="https://github.com/ilri/OpenRXV/pull/85">added a GitHub Actions workflow to build the frontend</a></li>
<li>I did some testing of Abdullah&rsquo;s patch for the values mapping search on OpenRXV
<ul>
<li>It still doesn&rsquo;t work with multi-word values, so I recorded a video with wf-recorder and uploaded it to <a href="https://github.com/ilri/OpenRXV/issues/43">the issue</a> for him to investigate</li>
</ul>
</li>
</ul>
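<ul>
<li>For reference, the recording was made with something like the following wf-recorder invocation (a minimal sketch; the <code>slurp</code> region selection and the output path are just examples for a wlroots compositor):</li>
</ul>
<pre><code class="language-console" data-lang="console">$ wf-recorder -g &quot;$(slurp)&quot; -f /tmp/openrxv-values-mapping.mp4
# stop the recording with Ctrl+C when finished
</code></pre>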
<h2 id="2021-03-04">2021-03-04</h2>
<ul>
<li>Peter is having issues with the workflow since yesterday
<ul>
<li>I looked at the Munin stats and see a high number of database locks since yesterday</li>
</ul>
</li>
</ul>
<p><img src="/cgspace-notes/2021/03/postgres_locks_ALL-week.png" alt="PostgreSQL locks week">
<img src="/cgspace-notes/2021/03/postgres_connections_cgspace-week.png" alt="PostgreSQL connections week"></p>
<ul>
<li>I looked at the number of connections in PostgreSQL and it&rsquo;s definitely high again:</li>
</ul>
<pre><code class="language-console" data-lang="console">$ psql -c 'SELECT * FROM pg_locks pl LEFT JOIN pg_stat_activity psa ON pl.pid = psa.pid;' | wc -l
1020
</code></pre><ul>
<li>I reported it to Atmire on the <a href="https://tracker.atmire.com/tickets-cgiar-ilri/view-ticket?id=851">same issue</a> where we had been tracking this before, so they can take a look</li>
<li>Abenet asked me to add a new ORCID for ILRI staff member Zoe Campbell</li>
<li>I added it to the controlled vocabulary and then tagged her existing items on CGSpace using my <code>add-orcid-identifiers-csv.py</code> script:</li>
</ul>
<pre><code class="language-console" data-lang="console">$ cat 2021-03-04-add-zoe-campbell-orcid.csv
dc.contributor.author,cg.creator.identifier
&quot;Campbell, Zoë&quot;,&quot;Zoe Campbell: 0000-0002-4759-9976&quot;
&quot;Campbell, Zoe A.&quot;,&quot;Zoe Campbell: 0000-0002-4759-9976&quot;
$ ./ilri/add-orcid-identifiers-csv.py -i 2021-03-04-add-zoe-campbell-orcid.csv -db dspace -u dspace -p 'fuuu'
</code></pre><ul>
<li>I still need to do cleanup on the journal articles metadata
<ul>
<li>Peter sent me some cleanups but I can&rsquo;t use them in the search/replace format he sent them in</li>
<li>I think it&rsquo;s better to export the metadata values with IDs and import cleaned up ones as CSV</li>
</ul>
</li>
</ul>
<pre><code class="language-console" data-lang="console">localhost/dspace63= &gt; \COPY (SELECT dspace_object_id AS id, text_value as &quot;cg.journal&quot; FROM metadatavalue WHERE dspace_object_id IN (SELECT uuid FROM item) AND metadata_field_id=251) to /tmp/2021-02-24-journals.csv WITH CSV HEADER;
COPY 32087
</code></pre><ul>
<li>I used OpenRefine to remove all journal values that didn&rsquo;t contain one of these characters: ; ( )
<ul>
<li>Then I cloned the <code>cg.journal</code> field to <code>cg.volume</code> and <code>cg.issue</code></li>
<li>I used some GREL expressions like these to extract the journal name, volume, and issue:</li>
</ul>
</li>
</ul>
<pre><code class="language-console" data-lang="console">value.partition(';')[0].trim() # to get journal names
value.partition(/[0-9]+\([0-9]+\)/)[1].replace(/^(\d+)\(\d+\)/,&quot;$1&quot;) # to get journal volumes
value.partition(/[0-9]+\([0-9]+\)/)[1].replace(/^\d+\((\d+)\)/,&quot;$1&quot;) # to get journal issues
</code></pre><ul>
<li>Then I uploaded the changes to CGSpace using <code>dspace metadata-import</code> (the command is sketched at the end of this section)</li>
<li>Margarita from CCAFS was asking about an error deleting some items that were showing up in Google and should have been private
<ul>
<li>The error was &ldquo;Authorization denied for action OBSOLETE (DELETE) on BITSTREAM:bd157345-448e &hellip;&rdquo;</li>
<li>I searched the DSpace issue tracker and found several issues reporting this:
<ul>
<li><a href="https://jira.lyrasis.org/browse/DS-3985">DS-3985 Delete item fails</a></li>
<li><a href="https://jira.lyrasis.org/browse/DS-4004">DS-4004 Authorization denied Exception when trying to delete permanently an item, collection or community as a non-Admin user</a></li>
<li><a href="https://jira.lyrasis.org/browse/DS-4297">DS-4297 Authorization error when trying to delete item by submitter/administrator</a></li>
</ul>
</li>
<li>The issue is apparently with non-admin users who are in the admin and submit groups of the owning collection&hellip;</li>
<li>In this case the item was uploaded to the CCAFS Reports collection, and Margarita is a non-admin user who is a member of the collection&rsquo;s admin and submit groups, exactly as the issue described</li>
<li>I added a comment about our issue to <a href="https://jira.lyrasis.org/browse/DS-4297">DS-4297</a></li>
</ul>
</li>
<li>Yesterday Abenet added me to the approver/editor steps of a WLE collection so we can try to figure out why Niroshini is having issues adding metadata to Udana&rsquo;s submissions
<ul>
<li>I edited Udana&rsquo;s submission to CGSpace:
<ul>
<li>corrected the title</li>
<li>added language English</li>
<li>changed the link to the external item page instead of PDF</li>
<li>added SDGs from the external item page</li>
<li>added AGROVOC subjects from the external item page</li>
<li>added pagination (extent)</li>
<li>changed the license to &ldquo;other&rdquo; because CC-BY-NC-ND is not printed anywhere in the PDF or external item page</li>
</ul>
</li>
</ul>
</li>
</ul>
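<ul>
<li>For the journal cleanup above, the <code>dspace metadata-import</code> step looks roughly like this (the CSV path is a placeholder and the eperson email is just an example):</li>
</ul>
<pre><code class="language-console" data-lang="console">$ dspace metadata-import -f /tmp/2021-03-04-journals-fixed.csv -e user@example.com
</code></pre>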
<h2 id="2021-03-05">2021-03-05</h2>
<ul>
<li>I migrated the Docker bind mount for the AReS Elasticsearch container to a Docker volume:</li>
</ul>
<pre><code class="language-console" data-lang="console">$ docker-compose -f docker/docker-compose.yml down
$ docker volume create docker_esData_7
$ docker container create --name es_dummy -v docker_esData_7:/usr/share/elasticsearch/data:rw elasticsearch:7.6.2
$ docker cp docker/esData_7/nodes es_dummy:/usr/share/elasticsearch/data
$ docker rm es_dummy
# edit docker/docker-compose.yml to switch from bind mount to volume
$ docker-compose -f docker/docker-compose.yml up -d
</code></pre><ul>
<li>The trick is that when you create a volume like &ldquo;myvolume&rdquo; from a <code>docker-compose.yml</code> file, Docker will create it with the name &ldquo;docker_myvolume&rdquo;
<ul>
<li>If you create it manually on the command line with <code>docker volume create myvolume</code> then the name is literally &ldquo;myvolume&rdquo;</li>
</ul>
</li>
<li>I still need to make the changes to git master and add these notes to the pull request so Moayad and others can benefit</li>
<li>Delete the <code>openrxv-items-temp</code> index to test a fresh harvesting:</li>
</ul>
<pre><code class="language-console" data-lang="console">$ curl -XDELETE 'http://localhost:9200/openrxv-items-temp'
</code></pre><h2 id="2021-03-06">2021-03-06</h2>
<ul>
<li>Check the results of the AReS harvesting from last night:</li>
</ul>
<pre><code class="language-console" data-lang="console">$ curl -s 'http://localhost:9200/openrxv-items-temp/_count?q=*&amp;pretty'
{
&quot;count&quot; : 101761,
&quot;_shards&quot; : {
&quot;total&quot; : 1,
&quot;successful&quot; : 1,
&quot;skipped&quot; : 0,
&quot;failed&quot; : 0
}
}
</code></pre><ul>
<li>Set the current items index to read only and make a backup:</li>
</ul>
<pre><code class="language-console" data-lang="console">$ curl -X PUT &quot;localhost:9200/openrxv-items/_settings&quot; -H 'Content-Type: application/json' -d' {&quot;settings&quot;: {&quot;index.blocks.write&quot;:true}}'
$ curl -s -X POST http://localhost:9200/openrxv-items/_clone/openrxv-items-2021-03-05
</code></pre><ul>
<li>Delete the current items index and clone the temp one to it:</li>
</ul>
<pre><code class="language-console" data-lang="console">$ curl -XDELETE 'http://localhost:9200/openrxv-items'
$ curl -X PUT &quot;localhost:9200/openrxv-items-temp/_settings&quot; -H 'Content-Type: application/json' -d'{&quot;settings&quot;: {&quot;index.blocks.write&quot;: true}}'
$ curl -s -X POST http://localhost:9200/openrxv-items-temp/_clone/openrxv-items
</code></pre><ul>
<li>Then delete the temp and backup:</li>
</ul>
<pre><code class="language-console" data-lang="console">$ curl -XDELETE 'http://localhost:9200/openrxv-items-temp'
{&quot;acknowledged&quot;:true}%
$ curl -XDELETE 'http://localhost:9200/openrxv-items-2021-03-05'
</code></pre><ul>
<li>I made some pull requests to OpenRXV:
<ul>
<li><a href="https://github.com/ilri/OpenRXV/pull/86">docker/docker-compose.yml: Use docker volumes</a></li>
<li><a href="https://github.com/ilri/OpenRXV/pull/87">docker/docker-compose.yml: Pin Redis to version 5</a></li>
</ul>
</li>
<li>I deployed the latest changes from the last few days on AReS production</li>
</ul>
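<ul>
<li>As a quick demonstration of the volume-naming behaviour noted under 2021-03-05 above (not part of the actual migration):</li>
</ul>
<pre><code class="language-console" data-lang="console">$ docker volume create myvolume                      # created by hand, so the name is literally &quot;myvolume&quot;
$ docker-compose -f docker/docker-compose.yml up -d  # volumes declared in the compose file get the project prefix
$ docker volume ls                                   # lists both, for example &quot;myvolume&quot; and &quot;docker_esData_7&quot;
</code></pre>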
<h2 id="2021-03-07">2021-03-07</h2>
<ul>
<li>I realized there is something wrong with the Elasticsearch indexes on AReS
<ul>
<li>On a new test environment I see <code>openrxv-items</code> is correctly an alias of <code>openrxv-items-final</code>:</li>
</ul>
</li>
</ul>
<pre><code class="language-console" data-lang="console">$ curl -s 'http://localhost:9200/_alias/' | python -m json.tool | less
...
&quot;openrxv-items-final&quot;: {
&quot;aliases&quot;: {
&quot;openrxv-items&quot;: {}
}
},
</code></pre><ul>
<li>But on AReS production <code>openrxv-items</code> has somehow become an index:</li>
</ul>
<pre><code class="language-console" data-lang="console">$ curl -s 'http://localhost:9200/_alias/' | python -m json.tool | less
...
&quot;openrxv-items&quot;: {
&quot;aliases&quot;: {}
},
&quot;openrxv-items-final&quot;: {
&quot;aliases&quot;: {}
},
&quot;openrxv-items-temp&quot;: {
&quot;aliases&quot;: {}
},
</code></pre><ul>
<li>I fixed the issue on production by cloning the <code>openrxv-items</code> index to <code>openrxv-items-final</code>, deleting <code>openrxv-items</code>, and then re-creating it as an alias:</li>
</ul>
<pre><code class="language-console" data-lang="console">$ curl -X PUT &quot;localhost:9200/openrxv-items/_settings&quot; -H 'Content-Type: application/json' -d'{&quot;settings&quot;: {&quot;index.blocks.write&quot;: true}}'
$ curl -s -X POST http://localhost:9200/openrxv-items/_clone/openrxv-items-2021-03-07
$ curl -XDELETE 'http://localhost:9200/openrxv-items-final'
$ curl -s -X POST http://localhost:9200/openrxv-items/_clone/openrxv-items-final
$ curl -XDELETE 'http://localhost:9200/openrxv-items'
$ curl -s -X POST 'http://localhost:9200/_aliases' -H 'Content-Type: application/json' -d'{&quot;actions&quot; : [{&quot;add&quot; : { &quot;index&quot; : &quot;openrxv-items-final&quot;, &quot;alias&quot; : &quot;openrxv-items&quot;}}]}'
</code></pre><ul>
<li>Delete backups and remove read-only mode on <code>openrxv-items</code>:</li>
</ul>
<pre><code class="language-console" data-lang="console">$ curl -XDELETE 'http://localhost:9200/openrxv-items-2021-03-07'
$ curl -X PUT &quot;localhost:9200/openrxv-items/_settings&quot; -H 'Content-Type: application/json' -d'{&quot;settings&quot;: {&quot;index.blocks.write&quot;: false}}'
</code></pre><ul>
<li>Linode sent alerts about the CPU usage on CGSpace yesterday and the day before
<ul>
<li>Looking in the logs I see a few IPs making heavy usage on the REST API and XMLUI:</li>
</ul>
</li>
</ul>
<pre><code class="language-console" data-lang="console"># zcat --force /var/log/nginx/access.log /var/log/nginx/access.log.1 /var/log/nginx/access.log.2.gz /var/log/nginx/access.log.3.gz | grep -E '0[56]/Mar/2021' | goaccess --log-format=COMBINED -
</code></pre><ul>
<li>I see the usual IPs for CCAFS and ILRI importer bots, but also <code>143.233.242.132</code> which appears to be for GARDIAN:</li>
</ul>
<pre><code class="language-console" data-lang="console"># zgrep '143.233.242.132' /var/log/nginx/access.log.1 | grep -c Delphi
6237
# zgrep '143.233.242.132' /var/log/nginx/access.log.1 | grep -c -v Delphi
6418
</code></pre><ul>
<li>They seem to make requests twice, once with the Delphi user agent that we know and already mark as a bot, and once with a &ldquo;normal&rdquo; user agent
<ul>
<li>Looking in Solr I see they have been using this IP for a while, as they have 100,000 hits going back into 2020 (a sample query is sketched after this list)</li>
<li>I will add this IP to the list of bots in nginx and purge it from Solr with my <code>check-spider-ip-hits.sh</code> script</li>
</ul>
</li>
<li>I made a few changes to OpenRXV:
<ul>
<li><a href="https://github.com/ilri/OpenRXV/issues/89">Migrated away from links to use networks</a></li>
<li><a href="https://github.com/ilri/OpenRXV/issues/68">Converted the backend container to use a custom image that includes <code>unoconv</code></a> so we don&rsquo;t have to manually install it anymore</li>
</ul>
</li>
</ul>
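<ul>
<li>A sample query for counting that IP&rsquo;s hits in the Solr statistics core before purging (assuming Solr is listening on localhost:8081 with a <code>statistics</code> core; adjust if needed):</li>
</ul>
<pre><code class="language-console" data-lang="console">$ curl -s 'http://localhost:8081/solr/statistics/select?q=ip:143.233.242.132&amp;rows=0&amp;wt=json&amp;indent=true'
# the numFound value in the response is the number of hits that would be purged
</code></pre>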
<h2 id="2021-03-08">2021-03-08</h2>
<ul>
<li>I approved the WLE item that I edited last week, and all the metadata is there: <a href="https://hdl.handle.net/10568/111810">https://hdl.handle.net/10568/111810</a>
<ul>
<li>So I&rsquo;m not sure what Niroshini&rsquo;s issue with metadata is&hellip;</li>
</ul>
</li>
<li>Peter sent a message yesterday saying that his item finally got committed
<ul>
<li>I looked at the Munin graphs and there was a MASSIVE spike in database activity two days ago, and now database locks are back down to normal levels (from 1000+):</li>
</ul>
</li>
</ul>
<pre><code class="language-console" data-lang="console">$ psql -c 'SELECT * FROM pg_locks pl LEFT JOIN pg_stat_activity psa ON pl.pid = psa.pid;' | wc -l
13
</code></pre><ul>
<li>On 2021-03-03 the PostgreSQL transactions started rising:</li>
</ul>
<p><img src="/cgspace-notes/2021/03/postgres_querylength_ALL-week.png" alt="PostgreSQL query length week"></p>
<ul>
<li>After that the connections and locks started going up, peaking on 2021-03-06:</li>
</ul>
<p><img src="/cgspace-notes/2021/03/postgres_locks_ALL-week.png" alt="PostgreSQL locks week">
<img src="/cgspace-notes/2021/03/postgres_connections_ALL-week.png" alt="PostgreSQL connections week"></p>
<ul>
<li>I sent another message to Atmire to ask if they have time to look into this</li>
<li>CIFOR is pressuring me to upload the batch items from last week
<ul>
<li>Vika sent me a final file with some duplicates that Peter identified removed</li>
<li>I extracted and re-applied my basic corrections from last week in OpenRefine, then ran the items through <code>csv-metadata-quality</code> checker and uploaded them to CGSpace</li>
<li>In total there are 1,088 items</li>
</ul>
</li>
<li>Udana from IWMI emailed to ask about CGSpace thumbnails</li>
<li>Udana from IWMI emailed to ask about an item uploaded recently that does not appear in AReS
<ul>
<li><a href="https://hdl.handle.net/10568/111794">The item</a> was added to the archive on 2021-03-05, and I last harvested on 2021-03-06, so this might be an issue of a missing item</li>
</ul>
</li>
<li>Abenet got a quote from Atmire to buy 125 credits for 3750€</li>
<li>Maria at Bioversity sent some feedback about duplicate items on AReS</li>
<li>I&rsquo;m wondering if the issue of the <code>openrxv-items-final</code> index not getting cleared after a successful harvest (which results in having 200,000, then 300,000, etc items) has to do with the alias issue I fixed yesterday
<ul>
<li>I will start a fresh harvest on AReS now to check, but first back up the current index just in case:</li>
</ul>
</li>
</ul>
<pre><code class="language-console" data-lang="console">$ curl -X PUT &quot;localhost:9200/openrxv-items-final/_settings&quot; -H 'Content-Type: application/json' -d'{&quot;settings&quot;: {&quot;index.blocks.write&quot;: true}}'
$ curl -s -X POST http://localhost:9200/openrxv-items-final/_clone/openrxv-items-final-2021-03-08
# start harvesting on AReS
</code></pre><ul>
<li>As I saw on my local test instance, even when you cancel a harvesting, it replaces the <code>openrxv-items-final</code> index with whatever is in <code>openrxv-items-temp</code> automatically, so I assume it will do the same now</li>
</ul>
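<ul>
<li>A quick way to keep an eye on both indexes while the harvest runs, using the standard Elasticsearch cat APIs:</li>
</ul>
<pre><code class="language-console" data-lang="console">$ curl -s 'http://localhost:9200/_cat/indices?v'
$ curl -s 'http://localhost:9200/_cat/aliases?v'
</code></pre>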
<h2 id="2021-03-09">2021-03-09</h2>
<ul>
<li>The harvesting on AReS finished last night and everything worked as expected, with no manual intervention
<ul>
<li>This means that <a href="https://github.com/ilri/OpenRXV/issues/64">the issue</a> we were facing for a few months was due to the <code>openrxv-items</code> index being deleted and re-created as a standalone index instead of an alias of <code>openrxv-items-final</code></li>
</ul>
</li>
<li>Talk to Moayad about OpenRXV development
<ul>
<li>We realized that the missing/duplicate items issue is probably due to the long harvesting time on the REST API: in the time between starting the harvest on page 0 and finishing it on page 900 (in the CGSpace example), some items will have been added to the repository, which causes the pages to shift</li>
<li>I proposed a solution in the <a href="https://github.com/ilri/OpenRXV/issues/67">GitHub issue</a>, where we consult the site&rsquo;s XML sitemap after harvesting to see if we missed any items, and then we harvest them individually</li>
</ul>
</li>
<li>Peter sent me a list of 356 DOIs from Altmetric that don&rsquo;t have our Handles, so we need to Tweet them
<ul>
<li>I used my <code>doi-to-handle.py</code> script to generate a list of handles and titles for him:</li>
</ul>
</li>
</ul>
<pre><code class="language-console" data-lang="console">$ ./ilri/doi-to-handle.py -i /tmp/dois.txt -o /tmp/handles.txt -db dspace -u dspace -p 'fuuu'
</code></pre><h2 id="2021-03-10">2021-03-10</h2>
<ul>
<li>Colleagues from ICARDA asked about how we should handle ISI journals in CG Core, as CGSpace uses <code>cg.isijournal</code> and MELSpace uses <code>mel.impact-factor</code>
<ul>
<li>I filed <a href="https://github.com/AgriculturalSemantics/cg-core/issues/39">an issue</a> on the cg-core project to ask colleagues for ideas</li>
</ul>
</li>
<li>Peter said he doesn&rsquo;t see &ldquo;Source Code&rdquo; or &ldquo;Software&rdquo; in the <a href="https://cgspace.cgiar.org/handle/10568/1/search-filter?field=type">output type facet on the ILRI community</a>, but I see it on the home page, so I will try to do a full Discovery re-index:</li>
</ul>
<pre><code class="language-console" data-lang="console">$ time chrt -b 0 ionice -c2 -n7 nice -n19 dspace index-discovery -b
real 318m20.485s
user 215m15.196s
sys 2m51.529s
</code></pre><ul>
<li>Now I see ten items for &ldquo;Source Code&rdquo; in the facets&hellip;</li>
<li>Add GPL and MIT licenses to the list of licenses on the CGSpace input form since we will start capturing more software and source code</li>
<li>Added the ability to check <code>dcterms.license</code> values against the SPDX licenses in the csv-metadata-quality tool
<ul>
<li>Also, I made some other minor fixes and released <a href="https://github.com/ilri/csv-metadata-quality/releases/tag/v0.4.6">version 0.4.6</a> on GitHub</li>
</ul>
</li>
<li>Proof and upload twenty-seven items to CGSpace for Peter Ballantyne
<ul>
<li>Mostly Ugandan outputs for CRP Livestock and Livestock and Fish</li>
</ul>
</li>
</ul>
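<ul>
<li>Assuming the usual Simple Archive Format route, the batch import for those items would look roughly like this (the SAF path, mapfile, eperson, and collection handle are placeholders):</li>
</ul>
<pre><code class="language-console" data-lang="console">$ dspace import --add --eperson=user@example.com --collection=10568/xxxxx --source=/tmp/2021-03-saf --mapfile=/tmp/2021-03-import.map
</code></pre>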
<h2 id="2021-03-14">2021-03-14</h2>
<ul>
<li>Switch to linux-kvm kernel on linode20 and linode18:</li>
</ul>
<pre><code class="language-console" data-lang="console"># apt update &amp;&amp; apt full-upgrade
# apt install linux-kvm
# apt remove linux-generic linux-image-generic linux-headers-generic linux-firmware
# apt autoremove &amp;&amp; apt autoclean
# reboot
</code></pre><ul>
<li>Deploy latest changes from <code>6_x-prod</code> branch on CGSpace</li>
<li>Deploy latest changes from OpenRXV <code>master</code> branch on AReS</li>
<li>Last week Peter added OpenRXV to CGSpace: <a href="https://hdl.handle.net/10568/112982">https://hdl.handle.net/10568/112982</a></li>
<li>Back up the current <code>openrxv-items-final</code> index on AReS to start a new harvest:</li>
</ul>
<pre><code class="language-console" data-lang="console">$ curl -X PUT &quot;localhost:9200/openrxv-items-final/_settings&quot; -H 'Content-Type: application/json' -d'{&quot;settings&quot;: {&quot;index.blocks.write&quot;: true}}'
$ curl -s -X POST http://localhost:9200/openrxv-items-final/_clone/openrxv-items-final-2021-03-14
$ curl -X PUT &quot;localhost:9200/openrxv-items-final/_settings&quot; -H 'Content-Type: application/json' -d'{&quot;settings&quot;: {&quot;index.blocks.write&quot;: false}}'
</code></pre><ul>
<li>After the harvesting finished it seems the indexes got messed up again, as <code>openrxv-items</code> is an alias of <code>openrxv-items-temp</code> instead of <code>openrxv-items-final</code>:</li>
</ul>
<pre><code class="language-console" data-lang="console">$ curl -s 'http://localhost:9200/_alias/' | python -m json.tool | less
...
&quot;openrxv-items-final&quot;: {
&quot;aliases&quot;: {}
},
&quot;openrxv-items-temp&quot;: {
&quot;aliases&quot;: {
&quot;openrxv-items&quot;: {}
}
},
</code></pre><ul>
<li>Anyways, the number of items in <code>openrxv-items</code> seems OK and the AReS Explorer UI is working fine
<ul>
<li>I will have to manually fix the indexes before the next harvesting (a sketch of the fix is at the end of this section)</li>
</ul>
</li>
<li>Publish the web version of the DSpace CSV Metadata Quality checker tool that I wrote this weekend on GitHub: <a href="https://github.com/ilri/csv-metadata-quality-web">https://github.com/ilri/csv-metadata-quality-web</a>
<ul>
<li>Also, it is deployed on Heroku: <a href="https://fierce-ocean-30836.herokuapp.com/">https://fierce-ocean-30836.herokuapp.com/</a></li>
<li>I was running it on Google App Engine originally, but they have <em>way</em> too aggressive caching of static assets</li>
</ul>
</li>
</ul>
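<ul>
<li>A sketch of the alias part of that manual fix (the temp index would also need to be cloned to <code>openrxv-items-final</code> first): atomically re-point the <code>openrxv-items</code> alias with the aliases API:</li>
</ul>
<pre><code class="language-console" data-lang="console">$ curl -s -X POST 'http://localhost:9200/_aliases' -H 'Content-Type: application/json' -d'{&quot;actions&quot;: [{&quot;remove&quot;: {&quot;index&quot;: &quot;openrxv-items-temp&quot;, &quot;alias&quot;: &quot;openrxv-items&quot;}}, {&quot;add&quot;: {&quot;index&quot;: &quot;openrxv-items-final&quot;, &quot;alias&quot;: &quot;openrxv-items&quot;}}]}'
</code></pre>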
<h2 id="2021-03-16">2021-03-16</h2>
<ul>
<li>Review ten items for Livestock and Fish and Dryland Systems from Peter
<ul>
<li>I told him to try the new web-based CSV Metadata Quality checker and he thought it was cool</li>
<li>I found one exact duplicate item and it gave me an idea to try to detect this in the tool</li>
</ul>
</li>
</ul>
<h2 id="2021-03-17">2021-03-17</h2>
<ul>
<li>I added the ability to check for duplicate items to csv-metadata-quality</li>
<li>I also made some minor optimizations in the Pandas code</li>
<li>I <a href="https://github.com/ilri/csv-metadata-quality/releases/tag/v0.4.7">tagged version 0.4.7 of csv-metadata-quality on GitHub</a></li>
</ul>
<h2 id="2021-03-18">2021-03-18</h2>
<ul>
<li>I added the ability to check for, and fix, &ldquo;mojibake&rdquo; characters in csv-metadata-quality</li>
</ul>
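<ul>
<li>For reference, the checker is run against a CSV export like this (the paths are examples; if I recall correctly the mojibake fix, like the other &ldquo;unsafe&rdquo; fixes, is behind the <code>-u</code> flag):</li>
</ul>
<pre><code class="language-console" data-lang="console">$ csv-metadata-quality -i /tmp/2021-03-18-items.csv -o /tmp/2021-03-18-items-cleaned.csv -u
</code></pre>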
<h2 id="2021-03-21">2021-03-21</h2>
<ul>
<li>Last week Atmire asked me which browser I was using to test the duplicate checker, which I had <a href="https://tracker.atmire.com/tickets-cgiar-ilri/view-ticket?id=934">reported</a> as not loading
<ul>
<li>I tried to load it in Chrome and it works&hellip; hmmm</li>
</ul>
</li>
<li>Back up the current <code>openrxv-items-final</code> index to start a fresh AReS Harvest:</li>
</ul>
<pre><code class="language-console" data-lang="console">$ curl -X PUT &quot;localhost:9200/openrxv-items-final/_settings&quot; -H 'Content-Type: application/json' -d'{&quot;settings&quot;: {&quot;index.blocks.write&quot;: true}}'
$ curl -s -X POST http://localhost:9200/openrxv-items-final/_clone/openrxv-items-final-2021-03-21
$ curl -X PUT &quot;localhost:9200/openrxv-items-final/_settings&quot; -H 'Content-Type: application/json' -d'{&quot;settings&quot;: {&quot;index.blocks.write&quot;: false}}'
</code></pre><ul>
<li>Then start harvesting in the AReS Explorer admin UI</li>
</ul>
<h2 id="2021-03-22">2021-03-22</h2>
<ul>
<li>The harvesting on AReS yesterday completed, but somehow I have twice the number of items:</li>
</ul>
<pre><code class="language-console" data-lang="console">$ curl -s 'http://localhost:9200/openrxv-items-final/_count?q=*&amp;pretty'
{
&quot;count&quot; : 206204,
&quot;_shards&quot; : {
&quot;total&quot; : 1,
&quot;successful&quot; : 1,
&quot;skipped&quot; : 0,
&quot;failed&quot; : 0
}
}
</code></pre><ul>
<li>Hmmm and even my backup index has a strange number of items:</li>
</ul>
<pre><code class="language-console" data-lang="console">$ curl -s 'http://localhost:9200/openrxv-items-final-2021-03-21/_count?q=*&amp;pretty'
{
&quot;count&quot; : 844,
&quot;_shards&quot; : {
&quot;total&quot; : 1,
&quot;successful&quot; : 1,
&quot;skipped&quot; : 0,
&quot;failed&quot; : 0
}
}
</code></pre><ul>
<li>I deleted all indexes and re-created the openrxv-items alias:</li>
</ul>
<pre><code class="language-console" data-lang="console">$ curl -s -X POST 'http://localhost:9200/_aliases' -H 'Content-Type: application/json' -d'{&quot;actions&quot; : [{&quot;add&quot; : { &quot;index&quot; : &quot;openrxv-items-final&quot;, &quot;alias&quot; : &quot;openrxv-items&quot;}}]}'
$ curl -s 'http://localhost:9200/_alias/' | python -m json.tool | less
...
&quot;openrxv-items-temp&quot;: {
&quot;aliases&quot;: {}
},
&quot;openrxv-items-final&quot;: {
&quot;aliases&quot;: {
&quot;openrxv-items&quot;: {}
}
}
</code></pre><ul>
<li>Then I started a new harvesting</li>
<li>I switched Node.js to v12 in the <a href="https://github.com/ilri/rmg-ansible-public">Ansible infrastructure scripts</a> since v10 will cease to be supported soon
<ul>
<li>I re-deployed DSpace Test (linode26) with Node.js 12 and restarted the server</li>
</ul>
</li>
<li>The AReS harvest finally finished, with 1,047 pages of items, but the <code>openrxv-items-final</code> index is empty and the <code>openrxv-items-temp</code> index has 103,000 items:</li>
</ul>
<pre><code class="language-console" data-lang="console">$ curl -s 'http://localhost:9200/openrxv-items-temp/_count?q=*&amp;pretty'
{
&quot;count&quot; : 103162,
&quot;_shards&quot; : {
&quot;total&quot; : 1,
&quot;successful&quot; : 1,
&quot;skipped&quot; : 0,
&quot;failed&quot; : 0
}
}
</code></pre><ul>
<li>I tried to clone the temp index to the final, but got an error:</li>
</ul>
<pre><code class="language-console" data-lang="console">$ curl -s -X POST http://localhost:9200/openrxv-items-temp/_clone/openrxv-items-final
{&quot;error&quot;:{&quot;root_cause&quot;:[{&quot;type&quot;:&quot;resource_already_exists_exception&quot;,&quot;reason&quot;:&quot;index [openrxv-items-final/LmxH-rQsTRmTyWex2d8jxw] already exists&quot;,&quot;index_uuid&quot;:&quot;LmxH-rQsTRmTyWex2d8jxw&quot;,&quot;index&quot;:&quot;openrxv-items-final&quot;}],&quot;type&quot;:&quot;resource_already_exists_exception&quot;,&quot;reason&quot;:&quot;index [openrxv-items-final/LmxH-rQsTRmTyWex2d8jxw] already exists&quot;,&quot;index_uuid&quot;:&quot;LmxH-rQsTRmTyWex2d8jxw&quot;,&quot;index&quot;:&quot;openrxv-items-final&quot;},&quot;status&quot;:400}%
</code></pre><ul>
<li>I looked in the Docker logs for Elasticsearch and saw a few memory errors:</li>
</ul>
<pre><code class="language-console" data-lang="console">java.lang.OutOfMemoryError: Java heap space
</code></pre><ul>
<li>According to <code>/usr/share/elasticsearch/config/jvm.options</code> in the Elasticsearch container the default JVM heap is 1g
<ul>
<li>I see the running Java process has <code>-Xms1g -Xmx1g</code> in its process invocation so I guess that it must indeed be using 1g</li>
<li>We can <a href="https://www.elastic.co/guide/en/elasticsearch/reference/current/docker.html">change the heap size with the ES_JAVA_OPTS environment variable</a></li>
<li>Or perhaps better, we should <a href="https://www.elastic.co/guide/en/elasticsearch/reference/master/jvm-options.html">use a jvm.options.d file</a> because if you use the environment variable it overrides all other JVM options from the default <code>jvm.options</code></li>
<li>I tried to set the heap to 1536m by bind-mounting an options file and restarting the container, but it didn&rsquo;t seem to work</li>
<li>Nevertheless, after restarting I see 103,000 items in the Explorer&hellip;</li>
<li>But the indexes are still kinda messed up&hellip; the <code>openrxv-items</code> index is an alias of the wrong index!</li>
</ul>
</li>
</ul>
<pre><code class="language-console" data-lang="console"> &quot;openrxv-items-final&quot;: {
&quot;aliases&quot;: {}
},
&quot;openrxv-items-temp&quot;: {
&quot;aliases&quot;: {
&quot;openrxv-items&quot;: {}
}
},
</code></pre><h2 id="2021-03-23">2021-03-23</h2>
<ul>
<li>For reference you can also get the Elasticsearch JVM stats from the API:</li>
</ul>
<pre><code class="language-console" data-lang="console">$ curl -s 'http://localhost:9200/_nodes/jvm?human' | python -m json.tool
</code></pre><ul>
<li>I re-deployed AReS with 1.5GB of heap using the <code>ES_JAVA_OPTS</code> environment variable
<ul>
<li>It turns out that this <em>is</em> the recommended way to set the heap: <a href="https://www.elastic.co/guide/en/elasticsearch/reference/7.6/jvm-options.html">https://www.elastic.co/guide/en/elasticsearch/reference/7.6/jvm-options.html</a></li>
</ul>
</li>
<li>Then I fixed the aliases to make sure <code>openrxv-items</code> was an alias of <code>openrxv-items-final</code>, similar to how I did a few weeks ago</li>
<li>I re-created the temp index:</li>
</ul>
<pre><code class="language-console" data-lang="console">$ curl -XPUT 'http://localhost:9200/openrxv-items-temp'
</code></pre><!-- raw HTML omitted -->
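<ul>
<li>For reference, passing the heap settings via <code>ES_JAVA_OPTS</code> looks like this (a standalone <code>docker run</code> example rather than our actual docker-compose setup), and the second command verifies what the JVM actually got:</li>
</ul>
<pre><code class="language-console" data-lang="console">$ docker run -d --name es-test -p 9200:9200 -e &quot;discovery.type=single-node&quot; -e ES_JAVA_OPTS=&quot;-Xms1536m -Xmx1536m&quot; elasticsearch:7.6.2
$ curl -s 'http://localhost:9200/_nodes/jvm?human' | python -m json.tool | grep heap_max
</code></pre>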
</article>
</div> <!-- /.blog-main -->
<aside class="col-sm-3 ml-auto blog-sidebar">
<section class="sidebar-module">
<h4>Recent Posts</h4>
<ol class="list-unstyled">
<li><a href="/cgspace-notes/2021-03/">March, 2021</a></li>
<li><a href="/cgspace-notes/2021-02/">February, 2021</a></li>
<li><a href="/cgspace-notes/2021-01/">January, 2021</a></li>
<li><a href="/cgspace-notes/2020-12/">December, 2020</a></li>
<li><a href="/cgspace-notes/cgspace-dspace6-upgrade/">CGSpace DSpace 6 Upgrade</a></li>
</ol>
</section>
<section class="sidebar-module">
<h4>Links</h4>
<ol class="list-unstyled">
<li><a href="https://cgspace.cgiar.org">CGSpace</a></li>
<li><a href="https://dspacetest.cgiar.org">DSpace Test</a></li>
<li><a href="https://github.com/ilri/DSpace">CGSpace @ GitHub</a></li>
</ol>
</section>
</aside>
</div> <!-- /.row -->
</div> <!-- /.container -->
<footer class="blog-footer">
<p dir="auto">
Blog template created by <a href="https://twitter.com/mdo">@mdo</a>, ported to Hugo by <a href='https://twitter.com/mralanorth'>@mralanorth</a>.
</p>
<p>
<a href="#">Back to top</a>
</p>
</footer>
</body>
</html>