Add notes for 2019-12-17

2019-12-17 14:49:24 +02:00
parent d83c951532
commit d54e5b69f1
90 changed files with 1420 additions and 1377 deletions

View File

@@ -14,7 +14,7 @@
 <meta name="twitter:card" content="summary"/>
 <meta name="twitter:title" content="Notes"/>
 <meta name="twitter:description" content="Documenting day-to-day work on the [CGSpace](https://cgspace.cgiar.org) repository."/>
-<meta name="generator" content="Hugo 0.60.1" />
+<meta name="generator" content="Hugo 0.61.0" />
@@ -84,7 +84,7 @@
 </p>
 </header>
-<h2 id="20190301">2019-03-01</h2>
+<h2 id="2019-03-01">2019-03-01</h2>
 <ul>
 <li>I checked IITA's 259 Feb 14 records from last month for duplicates using Atmire's Duplicate Checker on a fresh snapshot of CGSpace on my local machine and everything looks good</li>
 <li>I am now only waiting to hear from her about where the items should go, though I assume Journal Articles go to IITA Journal Articles collection, etc&hellip;</li>
@@ -116,7 +116,7 @@
 </p>
 </header>
-<h2 id="20190201">2019-02-01</h2>
+<h2 id="2019-02-01">2019-02-01</h2>
 <ul>
 <li>Linode has alerted a few times since last night that the CPU usage on CGSpace (linode18) was high despite me increasing the alert threshold last week from 250% to 275%—I might need to increase it again!</li>
 <li>The top IPs before, during, and after this latest alert tonight were:</li>
@@ -161,7 +161,7 @@ sys 0m1.979s
 </p>
 </header>
-<h2 id="20190102">2019-01-02</h2>
+<h2 id="2019-01-02">2019-01-02</h2>
 <ul>
 <li>Linode alerted that CGSpace (linode18) had a higher outbound traffic rate than normal early this morning</li>
 <li>I don't see anything interesting in the web server logs around that time though:</li>
@@ -195,13 +195,13 @@ sys 0m1.979s
 </p>
 </header>
-<h2 id="20181201">2018-12-01</h2>
+<h2 id="2018-12-01">2018-12-01</h2>
 <ul>
 <li>Switch CGSpace (linode18) to use OpenJDK instead of Oracle JDK</li>
 <li>I manually installed OpenJDK, then removed Oracle JDK, then re-ran the <a href="http://github.com/ilri/rmg-ansible-public">Ansible playbook</a> to update all configuration files, etc</li>
 <li>Then I ran all system updates and restarted the server</li>
 </ul>
-<h2 id="20181202">2018-12-02</h2>
+<h2 id="2018-12-02">2018-12-02</h2>
 <ul>
 <li>I noticed that there is another issue with PDF thumbnails on CGSpace, and I see there was another <a href="https://usn.ubuntu.com/3831-1/">Ghostscript vulnerability last week</a></li>
 </ul>
@@ -222,12 +222,12 @@ sys 0m1.979s
 </p>
 </header>
-<h2 id="20181101">2018-11-01</h2>
+<h2 id="2018-11-01">2018-11-01</h2>
 <ul>
 <li>Finalize AReS Phase I and Phase II ToRs</li>
 <li>Send a note about my <a href="https://github.com/ilri/dspace-statistics-api">dspace-statistics-api</a> to the dspace-tech mailing list</li>
 </ul>
-<h2 id="20181103">2018-11-03</h2>
+<h2 id="2018-11-03">2018-11-03</h2>
 <ul>
 <li>Linode has been sending mails a few times a day recently that CGSpace (linode18) has had high CPU usage</li>
 <li>Today these are the top 10 IPs:</li>
@@ -249,7 +249,7 @@ sys 0m1.979s
 </p>
 </header>
-<h2 id="20181001">2018-10-01</h2>
+<h2 id="2018-10-01">2018-10-01</h2>
 <ul>
 <li>Phil Thornton got an ORCID identifier so we need to add it to the list on CGSpace and tag his existing items</li>
 <li>I created a GitHub issue to track this <a href="https://github.com/ilri/DSpace/issues/389">#389</a>, because I'm super busy in Nairobi right now</li>
@@ -271,7 +271,7 @@ sys 0m1.979s
 </p>
 </header>
-<h2 id="20180902">2018-09-02</h2>
+<h2 id="2018-09-02">2018-09-02</h2>
 <ul>
 <li>New <a href="https://jdbc.postgresql.org/documentation/changelog.html#version_42.2.5">PostgreSQL JDBC driver version 42.2.5</a></li>
 <li>I'll update the DSpace role in our <a href="https://github.com/ilri/rmg-ansible-public">Ansible infrastructure playbooks</a> and run the updated playbooks on CGSpace and DSpace Test</li>
@@ -295,7 +295,7 @@ sys 0m1.979s
 </p>
 </header>
-<h2 id="20180801">2018-08-01</h2>
+<h2 id="2018-08-01">2018-08-01</h2>
 <ul>
 <li>DSpace Test had crashed at some point yesterday morning and I see the following in <code>dmesg</code>:</li>
 </ul>
@@ -327,7 +327,7 @@ sys 0m1.979s
 </p>
 </header>
-<h2 id="20180701">2018-07-01</h2>
+<h2 id="2018-07-01">2018-07-01</h2>
 <ul>
 <li>I want to upgrade DSpace Test to DSpace 5.8 so I took a backup of its current database just in case:</li>
 </ul>
@@ -354,7 +354,7 @@ sys 0m1.979s
 </p>
 </header>
-<h2 id="20180604">2018-06-04</h2>
+<h2 id="2018-06-04">2018-06-04</h2>
 <ul>
 <li>Test the <a href="https://tracker.atmire.com/tickets-cgiar-ilri/view-ticket?id=560">DSpace 5.8 module upgrades from Atmire</a> (<a href="https://github.com/ilri/DSpace/pull/378">#378</a>)
 <ul>

View File

@@ -14,7 +14,7 @@
 <meta name="twitter:card" content="summary"/>
 <meta name="twitter:title" content="Notes"/>
 <meta name="twitter:description" content="Documenting day-to-day work on the [CGSpace](https://cgspace.cgiar.org) repository."/>
-<meta name="generator" content="Hugo 0.60.1" />
+<meta name="generator" content="Hugo 0.61.0" />
@@ -84,7 +84,7 @@
 </p>
 </header>
-<h2 id="20180501">2018-05-01</h2>
+<h2 id="2018-05-01">2018-05-01</h2>
 <ul>
 <li>I cleared the Solr statistics core on DSpace Test by issuing two commands directly to the Solr admin interface:
 <ul>
@@ -112,7 +112,7 @@
 </p>
 </header>
-<h2 id="20180401">2018-04-01</h2>
+<h2 id="2018-04-01">2018-04-01</h2>
 <ul>
 <li>I tried to test something on DSpace Test but noticed that it's down since god knows when</li>
 <li>Catalina logs at least show some memory errors yesterday:</li>
@@ -134,7 +134,7 @@
 </p>
 </header>
-<h2 id="20180302">2018-03-02</h2>
+<h2 id="2018-03-02">2018-03-02</h2>
 <ul>
 <li>Export a CSV of the IITA community metadata for Martin Mueller</li>
 </ul>
@@ -155,7 +155,7 @@
 </p>
 </header>
-<h2 id="20180201">2018-02-01</h2>
+<h2 id="2018-02-01">2018-02-01</h2>
 <ul>
 <li>Peter gave feedback on the <code>dc.rights</code> proof of concept that I had sent him last week</li>
 <li>We don't need to distinguish between internal and external works, so that makes it just a simple list</li>
@@ -179,7 +179,7 @@
 </p>
 </header>
-<h2 id="20180102">2018-01-02</h2>
+<h2 id="2018-01-02">2018-01-02</h2>
 <ul>
 <li>Uptime Robot noticed that CGSpace went down and up a few times last night, for a few minutes each time</li>
 <li>I didn't get any load alerts from Linode and the REST and XMLUI logs don't show anything out of the ordinary</li>
@@ -263,7 +263,7 @@ dspace.log.2018-01-02:34
 </p>
 </header>
-<h2 id="20171201">2017-12-01</h2>
+<h2 id="2017-12-01">2017-12-01</h2>
 <ul>
 <li>Uptime Robot noticed that CGSpace went down</li>
 <li>The logs say &ldquo;Timeout waiting for idle object&rdquo;</li>
@@ -287,11 +287,11 @@ dspace.log.2018-01-02:34
 </p>
 </header>
-<h2 id="20171101">2017-11-01</h2>
+<h2 id="2017-11-01">2017-11-01</h2>
 <ul>
 <li>The CORE developers responded to say they are looking into their bot not respecting our robots.txt</li>
 </ul>
-<h2 id="20171102">2017-11-02</h2>
+<h2 id="2017-11-02">2017-11-02</h2>
 <ul>
 <li>Today there have been no hits by CORE and no alerts from Linode (coincidence?)</li>
 </ul>
@@ -320,7 +320,7 @@ COPY 54701
 </p>
 </header>
-<h2 id="20171001">2017-10-01</h2>
+<h2 id="2017-10-01">2017-10-01</h2>
 <ul>
 <li>Peter emailed to point out that many items in the <a href="https://cgspace.cgiar.org/handle/10568/2703">ILRI archive collection</a> have multiple handles:</li>
 </ul>
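Every hunk in this commit applies the same two mechanical changes: the `<meta name="generator">` bump from Hugo 0.60.1 to 0.61.0, and date-heading ids rewritten from a compact form (`20190301`) to a hyphenated one (`2019-03-01`), meaning old `#20190301` fragment links no longer resolve. The ids here were regenerated by Hugo itself when the site was rebuilt; purely as an illustration of the transformation (the `sed` invocation below is hypothetical, not the author's workflow), the rewrite looks like:

```shell
# Illustrative only: split a compact YYYYMMDD heading id into YYYY-MM-DD.
# Hugo 0.61.0 produced the new ids during the site rebuild; this sed
# expression just demonstrates the pattern the diff applies everywhere.
echo '<h2 id="20190301">2019-03-01</h2>' \
  | sed -E 's/id="([0-9]{4})([0-9]{2})([0-9]{2})"/id="\1-\2-\3"/'
# -> <h2 id="2019-03-01">2019-03-01</h2>
```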