<li><p>I restarted Tomcat and everything came back up</p></li>
<li><p>I can add Indy Library to the Tomcat crawler session manager valve but it would be nice if I could simply remap the user agent in nginx (a sketch of such a mapping follows this list)</p></li>
<li><p>I will also add ‘Drupal’ to the Tomcat crawler session manager valve because there are Drupal-based sites out there harvesting and they should be considered as bots</p></li>
<li>Linode alerted that CGSpace’s load was 327.5% from 6 to 8 AM again</li>
</ul>
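<p>A rough sketch of what that remapping could look like in nginx (the <code>$ua</code> variable and the ‘bot’ value are just placeholders, and the <code>map</code> would live in the <code>http</code> context):</p>

<pre><code>map $http_user_agent $ua {
    # pass most user agents through unchanged
    default              $http_user_agent;

    # present the Indy Library harvester as a generic bot so that
    # Tomcat's Crawler Session Manager Valve re-uses one session for it
    '~Indy Library'      'bot';
}
</code></pre>

<p>The remapped value would then need to be passed upstream with something like <code>proxy_set_header User-Agent $ua;</code> in the proxy <code>location</code> blocks.</p>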
<h2id="2017-12-04">2017-12-04</h2>
<ul>
<li>Linode alerted that CGSpace’s load was 255.5% from 8 to 10 AM again</li>
<li>I looked at the Munin stats on DSpace Test (linode02) again to see how the PostgreSQL tweaks from a few weeks ago were holding up:</li>
</ul>
<p><img src="/cgspace-notes/2017/12/postgres-connections-month.png" alt="DSpace Test PostgreSQL connections month" /></p>
<ul>
<li>The results look fantastic! So the <code>random_page_cost</code> tweak is massively important for telling the PostgreSQL query planner that random page reads are no more expensive than sequential ones, as we’re on an SSD!</li>
<li>I guess we could probably even reduce the PostgreSQL connections in DSpace / PostgreSQL after using this</li>
<li>Run system updates on DSpace Test (linode02) and reboot it</li>
<li>I’m going to enable the PostgreSQL <code>random_page_cost</code> tweak on CGSpace (see the sketch after this list)</li>
<li>For reference, here are the past month’s connections:</li>
<li>Linode alerted again that the CPU usage on CGSpace was high this morning from 8 to 10 AM</li>
<li>CORE updated the entry for CGSpace on their index: <a href="https://core.ac.uk/search?q=repositories.id:(1016)&fullTextOnly=false">https://core.ac.uk/search?q=repositories.id:(1016)&fullTextOnly=false</a></li>
<li>Linode alerted again that the CPU usage on CGSpace was high this evening from 8 to 10 PM</li>
</ul>
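<p>For the record, the tweak itself is just one query planner setting; a sketch of how to apply it (a value of 1, the same as <code>seq_page_cost</code>, is the usual choice for SSDs, and the default is 4.0):</p>

<pre><code>-- tell the planner that random page reads are as cheap as sequential
-- ones, which is reasonable on SSD-backed storage (the default is 4.0)
ALTER SYSTEM SET random_page_cost = 1;

-- reload the configuration so the new setting takes effect
SELECT pg_reload_conf();
</code></pre>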
<h2id="2017-12-06">2017-12-06</h2>
<ul>
<li>Linode alerted again that the CPU usage on CGSpace was high this morning from 6 to 8 AM</li>
<li>Uptime Robot alerted that the server went down and up around 8:53 this morning</li>
<li>Uptime Robot alerted that CGSpace was down and up again a few minutes later</li>
<li>I don’t see any errors in the DSpace logs but I see in nginx’s access.log that UptimeRobot’s requests were returned with HTTP 499 status (Client Closed Request)</li>
<li><p>I’ve adjusted the nginx IP mapping that I set up last month to account for 124.17.34.60 and 124.17.34.59 using a regex, as it’s the same bot on the same subnet (a sketch follows this list)</p></li>
<li><p>I was running the DSpace cleanup task manually and it hit an error:</p>
<li>Linode alerted that CGSpace was using high CPU from 10:13 to 12:13 this morning</li>
</ul>
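<p>The IP part of the mapping looks roughly like this; it’s only a sketch that would need to be merged with the existing user agent logic, and the ‘bot’ value is again a placeholder:</p>

<pre><code>map $remote_addr $ua {
    default                    $http_user_agent;

    # 124.17.34.59 and 124.17.34.60 are the same bot on the same
    # subnet, so catch both with one regular expression
    ~^124\.17\.34\.(59|60)$    'bot';
}
</code></pre>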
<h2id="2017-12-16">2017-12-16</h2>
<ul>
<li>Re-work the XMLUI base theme to allow child themes to override the header logo’s image and link destination: <a href="https://github.com/ilri/DSpace/pull/349">#349</a></li>
<li>This required a little bit of work to restructure the XSL templates</li>
<li>Optimize PNG and SVG image assets in the CGIAR base theme using pngquant and svgo (example commands after this list): <a href="https://github.com/ilri/DSpace/pull/350">#350</a></li>
</ul>
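<p>For reference, the optimization is just a pass over the theme’s image assets with the two command line tools; the theme path and the pngquant quality range here are assumptions rather than exactly what went into the pull request:</p>

<pre><code>$ cd dspace/modules/xmlui-mirage2/src/main/webapp/themes/0_CGIAR
$ find . -iname '*.png' -exec pngquant --quality 65-80 --ext .png --force {} \;
$ find . -iname '*.svg' -exec svgo {} \;
</code></pre>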
<h2id="2017-12-17">2017-12-17</h2>
<ul>
<li>Reboot DSpace Test to get new Linode Linux kernel</li>
<li>Looking at CCAFS bulk import for Magdalena Haman (she originally sent them in November but some of the thumbnails were missing and dates were messed up so she resent them now)</li>
<li>A few issues with the data and thumbnails:
<ul>
<li>Her thumbnail files all use capital JPG so I had to rename them to lowercase: <code>rename -fc *.JPG</code></li>
<li>thumbnail20.jpg is 1.7MB so I have to resize it</li>
<li>I also had to add the .jpg to the thumbnail string in the CSV</li>
<li>The thumbnail11.jpg is missing</li>
<li>The dates are in super long ISO8601 format (from Excel?) like <code>2016-02-07T00:00:00Z</code> so I converted them to simpler forms in GREL: <code>value.toString("yyyy-MM-dd")</code></li>
<li>I trimmed the whitespace in a few fields, but there wasn’t much of it</li>
<li>Rename her thumbnail column to filename, and format it so SAFBuilder adds the files to the thumbnail bundle with this GREL in OpenRefine: <code>value + "__bundle:THUMBNAIL"</code></li>
<li>Rename dc.identifier.status and dc.identifier.url columns to cg.identifier.status and cg.identifier.url</li>
<li>Item 4 has weird characters in citation, ie: Nagoya et de Trait</li>
<li>Some author names need normalization, ie: <code>Aggarwal, Pramod</code> and <code>Aggarwal, Pramod K.</code></li>
<li>Something weird going on with duplicate authors that have the same text value, like <code>Berto, Jayson C.</code> and <code>Balmeo, Katherine P.</code></li>
<li>I will send her feedback on some author names like UNEP and ICRISAT and ask her for the missing thumbnail11.jpg</li>
</ul>
</li>
<li><p>I did a test import of the data locally after building with SAFBuilder but for some reason I had to specify the collection (even though the collections were specified in the <code>collection</code> field)</p>
<li><p>We’re on DSpace 5.5 but there is a one-word fix to the addItem() function here: <a href="https://github.com/DSpace/DSpace/pull/1731">https://github.com/DSpace/DSpace/pull/1731</a></p></li>
<li><p>I will apply it on our branch but I need to make a note to NOT cherry-pick it when I rebase on to the latest 5.x upstream later</p></li>
<li><p>On the API side (REST and OAI) there is still the same CIAT bot (45.5.184.196) from last night making quite a number of requests this morning:</p>
<li><p>I need to keep an eye on this issue because it has nice fixes for reducing the number of database connections in DSpace 5.7: <a href="https://jira.duraspace.org/browse/DS-3551">https://jira.duraspace.org/browse/DS-3551</a></p></li>
<li><p>Update text on CGSpace about page to give some tips to developers about using the resources more wisely (<a href="https://github.com/ilri/DSpace/pull/352">#352</a>)</p></li>
<li><p>I made a small fix to my <code>move-collections.sh</code> script so that it handles the case when a “to” or “from” community doesn’t exist</p></li>
<li><p>Major reorganization of four of CTA’s French collections</p></li>
<li><p>Basically moving their items into the English ones, then moving the English ones to the top-level of the CTA community, and deleting the old sub-communities</p></li>
<li><p>Move collection 10568/51821 from 10568/42212 to 10568/42211</p></li>
<li><p>Move collection 10568/51400 from 10568/42214 to 10568/42211</p></li>
<li><p>Move collection 10568/56992 from 10568/42216 to 10568/42211</p></li>
<li><p>Move collection 10568/42218 from 10568/42217 to 10568/42211</p></li>
<li><p>Export CSV of collection 10568/63484 and move items to collection 10568/51400</p></li>
<li><p>Export CSV of collection 10568/64403 and move items to collection 10568/56992</p></li>
<li><p>Export CSV of collection 10568/56994 and move items to collection 10568/42218</p></li>
<li><p>There are blank lines in this metadata, which causes DSpace to not detect changes in the CSVs</p></li>
<li><p>I had to use OpenRefine to remove all columns from the CSV except <code>id</code> and <code>collection</code>, and then update the <code>collection</code> field for the new mappings</p></li>
<li><p>I was in the middle of applying the metadata imports on CGSpace and the system ran out of PostgreSQL connections…</p></li>
<li><p>There were 128 PostgreSQL connections at the time… grrrr.</p></li>
<li><p>So I restarted Tomcat 7 and restarted the imports</p></li>
<li><p>I assume the PostgreSQL transactions were fine, but I will remove the Discovery index for their community and re-run the light-weight indexing to hopefully re-construct everything (a sketch of the commands follows this list)</p></li>
<li>Briefly had PostgreSQL connection issues on CGSpace for the millionth time</li>
<li>I’m fucking sick of this!</li>
<li>The connection graph on CGSpace shows shit tons of connections idle</li>
</ul>
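<p>The “light-weight” indexing mentioned above is the plain <code>index-discovery</code> run without the <code>-b</code> (full rebuild) flag; a sketch, using 10568/42211 from the moves above purely as an example handle:</p>

<pre><code>$ # remove the community and its items from the Discovery index
$ dspace index-discovery -r 10568/42211

$ # then run the normal incremental index to add everything back
$ dspace index-discovery
</code></pre>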
<p><imgsrc="/cgspace-notes/2017/12/postgres-connections-month-cgspace-2.png"alt="Idle PostgreSQL connections on CGSpace"/></p>
<ul>
<li>And I only now just realized that DSpace’s <code>db.maxidle</code> parameter is not a number of seconds, but the maximum number of idle connections to allow.</li>
<li>So theoretically, because each webapp has its own pool, this could be 20 per app—so no wonder we have 50 idle connections!</li>
<li>I notice that this number will be set to 10 by default in DSpace 6.1 and 7.0: <a href="https://jira.duraspace.org/browse/DS-3564">https://jira.duraspace.org/browse/DS-3564</a></li>
<li>So I’m going to reduce ours from 20 to 10 and start trying to figure out how the hell to supply a database pool using Tomcat JNDI</li>
<li>I re-deployed the <code>5_x-prod</code> branch on CGSpace, applied all system updates, and restarted the server</li>
<li><p>I don’t have time now to look into this but the Solr sharding has long been an issue!</p></li>
<li><p>Looking into using JDBC / JNDI to provide a database pool to DSpace</p></li>
<li><p>The <a href="https://wiki.duraspace.org/display/DSDOC6x/Configuration+Reference">DSpace 6.x configuration docs</a> have more notes about setting up the database pool than the 5.x ones (which actually have none!)</p></li>
<li><p>First, I uncomment <code>db.jndi</code> in <em>dspace/config/dspace.cfg</em></p></li>
<li><p>Then I create a global <code>Resource</code> in the main Tomcat <em>server.xml</em> (inside <code>GlobalNamingResources</code>):</p>
<li><p>Most of the parameters are from comments by Mark Wood about his JNDI setup: <a href="https://jira.duraspace.org/browse/DS-3564">https://jira.duraspace.org/browse/DS-3564</a></p></li>
<li><p>Then I add a <code>ResourceLink</code> to each web application context:</p>
<li><p>I am not sure why several guides show configuration snippets for <em>server.xml</em> and web application contexts that use a Local and Global jdbc…</p></li>
<li><p>When DSpace can’t find the JNDI context (for whatever reason) you will see this in the dspace logs:</p>
<li><p>Oh that’s fantastic, now at least Tomcat doesn’t print an error during startup so I guess it succeeds in creating the JNDI pool</p></li>
<li><p>DSpace starts up but I have no idea if it’s using the JNDI configuration because I see this in the logs:</p>
<li><p>Let’s try again, but this time explicitly blank the PostgreSQL connection parameters in dspace.cfg and see if DSpace starts…</p></li>
<li><p>Wow, ok, that works, but having to copy the PostgreSQL JDBC JAR to Tomcat’s lib folder totally blows</p></li>
<li><p>Also, it’s likely this is only a problem on my local macOS + Tomcat test environment</p></li>
<li><p>Ubuntu’s Tomcat distribution will probably handle this differently</p></li>
<li><p>Wow, I think that actually works…</p></li>
<li><p>I wonder if I could get the JDBC driver from postgresql.org instead of relying on the one from the DSpace build: <a href="https://jdbc.postgresql.org/">https://jdbc.postgresql.org/</a></p></li>
<li><p>I notice our version is 9.1-901, which isn’t even available anymore! The latest in the archived versions is 9.1-903</p></li>
<li><p>Also, since I commented out all the db parameters in dspace.cfg, how does the command line <code>dspace</code> tool work?</p></li>
<li><p>Let’s try the upstream JDBC driver first:</p>
<pre><code>javax.naming.NoInitialContextException: Need to specify class name in environment or system property, or as an applet parameter, or in an application resource file: java.naming.factory.initial
</code></pre>
<li><p>If I add the db values back to dspace.cfg the <code>dspace database info</code> command succeeds but the log still shows errors retrieving the JNDI connection</p></li>
<li><p>Perhaps something to report to the dspace-tech mailing list when I finally send my comments</p></li>
<li><p>Oh cool! <code>select * from pg_stat_activity</code> shows “PostgreSQL JDBC Driver” for the application name! That’s how you know it’s working!</p></li>
<li><p>If you monitor the <code>pg_stat_activity</code> while you run <code>dspace database info</code> you can see that it doesn’t use the JNDI and creates ~9 extra PostgreSQL connections!</p></li>
<li><p>And in the middle of all of this Linode sends an alert that CGSpace has high CPU usage from 2 to 4 PM</p></li>
<li><p>The final code for the JNDI work in the Ansible infrastructure scripts is here: <a href="https://github.com/ilri/rmg-ansible-public/commit/1959d9cb7a0e7a7318c77f769253e5e029bdfa3b">https://github.com/ilri/rmg-ansible-public/commit/1959d9cb7a0e7a7318c77f769253e5e029bdfa3b</a></p></li>
<li><p>Looking at some old notes for metadata to clean up, I found a few hundred corrections in <code>cg.fulltextstatus</code> and <code>dc.language.iso</code>:</p>
<pre><code># update metadatavalue set text_value='Formally Published' where resource_type_id=2 and metadata_field_id=214 and text_value like 'Formally published';
UPDATE 5
# delete from metadatavalue where resource_type_id=2 and metadata_field_id=214 and text_value like 'NO';
DELETE 17
# update metadatavalue set text_value='en' where resource_type_id=2 and metadata_field_id=38 and text_value ~ '(En|English)';
UPDATE 49
# update metadatavalue set text_value='fr' where resource_type_id=2 and metadata_field_id=38 and text_value ~ '(fre|frn|French)';
UPDATE 4
# update metadatavalue set text_value='es' where resource_type_id=2 and metadata_field_id=38 and text_value ~ '(Spanish|spa)';
UPDATE 16
# update metadatavalue set text_value='vi' where resource_type_id=2 and metadata_field_id=38 and text_value='Vietnamese';
UPDATE 9
# update metadatavalue set text_value='ru' where resource_type_id=2 and metadata_field_id=38 and text_value='Ru';
UPDATE 1
# update metadatavalue set text_value='in' where resource_type_id=2 and metadata_field_id=38 and text_value ~ '(IN|In)';
UPDATE 5
# delete from metadatavalue where resource_type_id=2 and metadata_field_id=38 and text_value ~ '(dc.language.iso|CGIAR Challenge Program on Water and Food)';
</code></pre></li>
<li><p>Looks pretty normal actually, but I don’t know who 54.175.208.220 is</p></li>
<li><p>They identify as “com.plumanalytics”, which Google says is associated with Elsevier</p></li>
<li><p>They only seem to have used one Tomcat session so that’s good, I guess I don’t need to add them to the Tomcat Crawler Session Manager valve:</p>