<li>Update <a href="https://github.com/ilri/dspace-statistics-api">dspace-statistics-api</a> for DSpace 6+ UUIDs
<ul>
<li>Tag version 1.2.0 on GitHub</li>
</ul>
</li>
<li>Test migrating legacy Solr statistics to UUIDs with the as-yet-unreleased <a href="https://github.com/DSpace/DSpace/commit/184f2b2153479045fba6239342c63e7f8564b8b6#diff-0350ce2e13b28d5d61252b7a8f50a059">SolrUpgradePre6xStatistics.java</a>
<ul>
<li>You need to download this into the DSpace 6.x source and compile it (a sketch follows this list)</li>
</ul>
</li>
<li>Skype with Peter and Abenet to discuss the CG Core survey
<ul>
<li>We also discussed some other CGSpace issues</li>
</ul>
</li>
</ul>
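<p>Roughly, that workflow would be something like this, assuming the class lives under <code>org.dspace.util</code> as in the linked commit and can be invoked with the <code>dspace dsrun</code> launcher (paths and invocation here are assumptions, since I didn’t record the exact commands):</p>
<pre><code># copy the class from the linked commit into the DSpace 6.x source tree
$ cp SolrUpgradePre6xStatistics.java ~/src/DSpace/dspace-api/src/main/java/org/dspace/util/
# rebuild and redeploy DSpace, then run the utility via the command launcher
$ cd ~/src/DSpace
$ mvn -U clean package
$ ~/dspace/bin/dspace dsrun org.dspace.util.SolrUpgradePre6xStatistics
</code></pre>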
<h2id="2020-03-04">2020-03-04</h2>
<ul>
<li>Abenet asked me to add some new ILRI subjects to CGSpace
<ul>
<li>I <a href="https://github.com/ilri/DSpace/commit/b51a242e773bd8658d3cab4ac883975708b00386">updated the input-forms.xml</a> in our <code>5_x-prod</code> branch on GitHub</li>
<li>Abenet said we are changing <code>HEALTH</code> to <code>HUMAN HEALTH</code> so I need to fix those using my <code>fix-metadata-values.py</code> script (a sketch of that run follows the quote below)</li>
</ul>
</li>
<li>I found a very <a href="https://lucene.apache.org/solr/guide/8_1/solr-system-requirements.html#lucene-solr-prior-to-7-0">interesting comment in the Solr 8.1 guide</a> about Java compatibility:</li>
</ul>
<blockquote>
<p>Lucene/Solr 7.0 was the first version that successfully passed our tests using Java 9 and higher. You should avoid Java 9 or later for Lucene/Solr 6.x or earlier.</p>
</blockquote>
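<p>For reference, the <code>fix-metadata-values.py</code> run for the <code>HEALTH</code> to <code>HUMAN HEALTH</code> change would look something like this (the CSV name and layout, credentials, and metadata field ID are assumptions, since I didn’t record the exact command):</p>
<pre><code>$ cat /tmp/2020-03-04-fix-ilri-subjects.csv
cg.subject.ilri,correct
HEALTH,HUMAN HEALTH
$ ./fix-metadata-values.py -i /tmp/2020-03-04-fix-ilri-subjects.csv -db dspace -u dspace -p 'fuuu' -f cg.subject.ilri -t correct -m 203
</code></pre>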
<h2id="2020-03-08">2020-03-08</h2>
<ul>
<li>I want to try to consolidate our yearly Solr statistics cores back into one <code>statistics</code> core using the solr-import-export-json tool</li>
<li>I will try it on DSpace Test, doing one year at a time:</li>
</ul>
<pre><code>$ ./run.sh -s http://localhost:8081/solr/statistics-2010 -a export -o /tmp/statistics-2010.json -k uid
$ ./run.sh -s http://localhost:8081/solr/statistics -a import -o /tmp/statistics-2010.json -k uid
</code></pre><ul>
<li>Upgrade PostgreSQL from 9.6 to 10 on DSpace Test (linode19)
<ul>
<li>I’ve been running it for one month in my local environment, and others have reported on the dspace-tech mailing list that they are using 10 and 11 (a sketch of the upgrade steps follows this list)</li>
</ul>
</li>
<li>Peter noticed that the Solr stats were not showing anything before 2020
<ul>
<li>I had to restart Tomcat three times before all cores loaded properly…</li>
</ul>
</li>
</ul>
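<p>I didn’t record the PostgreSQL upgrade steps here, but on Ubuntu with the packaged PostgreSQL the cluster migration is roughly the following (a sketch assuming the stock <code>pg_upgradecluster</code> tooling and the default <code>main</code> cluster):</p>
<pre><code># install the new version alongside the old one (contrib is needed for pgcrypto)
$ sudo apt install postgresql-10 postgresql-contrib-10
# drop the empty cluster the package creates, then migrate the 9.6 cluster to 10
$ sudo pg_dropcluster 10 main --stop
$ sudo pg_upgradecluster 9.6 main
# once everything checks out, remove the old cluster
$ sudo pg_dropcluster 9.6 main
</code></pre>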
<h2id="2020-03-10">2020-03-10</h2>
<ul>
<li>Fix some logic issues in the nginx config
<ul>
<li>Use generic blocking of <code>[Bb]ot</code> and <code>[Cc]rawl</code> and <code>[Ss]pider</code> in the “badbots” rate limiting logic instead of trying to list them all one by one (bots should not be trying to index dynamic pages <em>no matter what</em> so we punish hard here)</li>
<li>We were not properly forwarding the remote IP address to Tomcat in all nginx location blocks, which led some locations to log a hit from 127.0.0.1 (because we need to explicitly add the global proxy params when setting other headers in location blocks)</li>
<li>Unfortunately this affected the REST API and there are a few hundred thousand requests from the user agent shown below (after the config sketch):</li>
</ul>
</li>
</ul>
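<p>A minimal sketch of the bot rate-limiting and proxy-params changes described above (this is illustrative, not the actual CGSpace configuration; the zone name, rate, include file, and upstream are assumptions):</p>
<pre><code># in the http context: classify any user agent matching Bot/Crawl/Spider;
# requests with an empty key are not counted against the rate limit
map $http_user_agent $badbots {
    default     '';
    ~[Bb]ot     $http_user_agent;
    ~[Cc]rawl   $http_user_agent;
    ~[Ss]pider  $http_user_agent;
}
limit_req_zone $badbots zone=badbots:10m rate=1r/m;

# in the server context: any proxy_set_header in a location block discards the
# ones inherited from above, so the shared proxy params must be re-included
location /rest {
    limit_req zone=badbots;
    include /etc/nginx/snippets/proxy-params.conf; # hypothetical shared snippet
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_pass http://tomcat_http;                 # hypothetical upstream
}
</code></pre>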
<pre><code>Mozilla/5.0 (Windows; U; Windows NT 6.1; en-GB; rv:1.9.2.13) Gecko/20101203 Firefox/3.6.13 (.NET CLR 3.5.30729)
</code></pre><ul>
<li>It seems to only be a problem in the last week:</li>
<li>It is making 10,000 to 40,000 requests to XMLUI per day…</li>
</ul>
<pre><code># zgrep -c 'Mozilla/5.0 ((Windows; U; Windows NT 6.1; fr; rv:1.9.2) Gecko/20100115 Firefox/3.6)' /var/log/nginx/access.log.{1..9}
/var/log/nginx/access.log.30.gz:18687
/var/log/nginx/access.log.31.gz:28936
/var/log/nginx/access.log.32.gz:36402
/var/log/nginx/access.log.33.gz:38886
/var/log/nginx/access.log.34.gz:30607
/var/log/nginx/access.log.35.gz:19040
/var/log/nginx/access.log.36.gz:10780
/var/log/nginx/access.log.37.gz:5808
/var/log/nginx/access.log.38.gz:3100
/var/log/nginx/access.log.39.gz:1485
/var/log/nginx/access.log.3.gz:2898
/var/log/nginx/access.log.40.gz:373
/var/log/nginx/access.log.41.gz:3909
/var/log/nginx/access.log.42.gz:4729
/var/log/nginx/access.log.43.gz:3906
</code></pre><ul>
<li>I will purge those hits too!</li>
</ul>
<pre><code>$ curl -s "http://localhost:8081/solr/statistics/update?softCommit=true" -H "Content-Type: text/xml" --data-binary '<delete><query>userAgent:"Mozilla/5.0 ((Windows; U; Windows NT 6.1; fr; rv:1.9.2) Gecko/20100115 Firefox/3.6)"</query></delete>'
</code></pre><ul>
<li>Shit, something happened and a few thousand hits from user agents with “Bot” in their user agent got through
<ul>
<li>I need to re-run the <code>check-bot-hits.sh</code> script with the standard COUNTER-Robots list again, but add my own versions of a few because the script/Solr doesn’t support case-insensitive regular expressions (a rough stand-in follows this list)</li>
</ul>
</li>
<li>Ask Michael Victor for permission to create a new Linode server for DSpace Test</li>
</ul>
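<p>I didn’t record the script’s exact invocation, but as a rough stand-in you can count a few capitalised and lowercase variants directly against the Solr API (the core URL and patterns are assumptions):</p>
<pre><code>$ for agent in "*Bot*" "*bot*" "*Crawl*" "*crawl*" "*Spider*" "*spider*"; do
    echo -n "$agent: "
    curl -s "http://localhost:8081/solr/statistics/select?q=userAgent:$agent&rows=0&wt=json" | python3 -c 'import json,sys; print(json.load(sys.stdin)["response"]["numFound"])'
  done
</code></pre>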
<h2id="2020-3-12">2020-3-12</h2>
<ul>
<li>I’m finally working on the 170 IITA records on <a href="https://dspacetest.cgiar.org/handle/10568/106567">DSpace Test</a> from January
<ul>
<li>It’s been two months since I last looked, and I want to do a thorough check to make sure Bosede didn’t introduce any new issues, but first I want to consolidate all the text languages for these records so it’s easier to check them in OpenRefine</li>
<li>First I got a list of IDs with <code>csvcut</code> (sketched below) and then I updated the text languages for only those records:</li>
</ul>
</li>
</ul>
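<p>A sketch of getting the ID list from the exported CSV (the filename is an assumption):</p>
<pre><code>$ csvcut -c id /tmp/2020-01-iita.csv | sed '1d' | paste -sd, -
</code></pre>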
<pre><code>dspace=# SELECT DISTINCT text_lang, COUNT(*) FROM metadatavalue WHERE resource_type_id=2 AND resource_id in (111295,111294,111293,111292,111291,111290,111288,111286,111285,111284,111283,111282,111281,111280,111279,111278,111277,111276,111275,111274,111273,111272,111271,111270,111269,111268,111267,111266,111265,111264,111263,111262,111261,111260,111259,111258,111257,111256,111255,111254,111253,111252,111251,111250,111249,111248,111247,111246,111245,111244,111243,111242,111241,111240,111238,111237,111236,111235,111234,111233,111232,111231,111230,111229,111228,111227,111226,111225,111224,111223,111222,111221,111220,111219,111218,111217,111216,111215,111214,111213,111212,111211,111209,111208,111207,111206,111205,111204,111203,111202,111201,111200,111199,111198,111197,111196,111195,111194,111193,111192,111191,111190,111189,111188,111187,111186,111185,111184,111183,111182,111181,111180,111179,111178,111177,111176,111175,111174,111173,111172,111171,111170,111169,111168,111299,111298,111297,111296,111167,111166,111165,111164,111163,111162,111161,111160,111159,111158,111157,111156,111155,111154,111153,111152,111151,111150,111149,111148,111147,111146,111145,111144,111143,111142,111141,111140,111139,111138,111137,111136,111135,111134,111133,111132,111131,111129,111128,111127,111126,111125) GROUP BY text_lang ORDER BY count;
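-- (sketch, not the exact command I ran) then normalize the text language for
-- those same records, assuming en_US is the desired value and reusing the same
-- resource_id list as above:
dspace=# UPDATE metadatavalue SET text_lang='en_US' WHERE resource_type_id=2 AND resource_id IN (...);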
</code></pre><ul>
<li>Then I exported the metadata from DSpace Test and imported it into OpenRefine
<ul>
<li>I corrected one invalid AGROVOC subject using my <code>csv-metadata-quality</code> script</li>
</ul>
</li>
<li>I exported a new list of affiliations from the database, added line numbers with <code>csvcut</code>, and then validated them in OpenRefine using <code>reconcile-csv</code>:</li>
</ul>
<pre><code>dspace=# \COPY (SELECT DISTINCT text_value, count(*) FROM metadatavalue WHERE resource_type_id = 2 AND metadata_field_id = 211 GROUP BY text_value ORDER BY count DESC LIMIT 1500) to /tmp/2020-03-12-affiliations.csv WITH CSV HEADER;
</code></pre><ul>
<li>I always forget how to copy the reconciled values in OpenRefine, but you need to make a new column and populate it using this GREL: <code>if(cell.recon.matched, cell.recon.match.name, value)</code></li>
<li>I mapped all 170 items to their appropriate collections based on type and uploaded them to CGSpace</li>
<li>I’m looking at the CPU usage of CGSpace (linode18) over the past year and I see we <em>rarely</em> even go over two CPUs on average sustained usage:</li>
</ul>
<p><img src="/cgspace-notes/2020/03/cgspace-cpu-year.png" alt="linode18 CPU usage year"></p>
<ul>
<li>Also clearly visible is the effect of CPU steal in 2019-03</li>
<li>At most we have 10GB of RAM committed; the rest is used opportunistically by the filesystem cache, likely for Solr
<ul>
<li>There was a huge drop in 2019-07 when I changed the JVM settings</li>
<li>I think we should re-evaluate our deployment and perhaps target a different instance type and add block storage for assetstore (as we determined Linode’s block storage to be too slow for Solr)</li>