<!DOCTYPE html>
<html lang="en" >
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no">
<meta property="og:title" content="February, 2019" />
<meta property="og:description" content="2019-02-01
Linode has alerted a few times since last night that the CPU usage on CGSpace (linode18) was high despite me increasing the alert threshold last week from 250% to 275%—I might need to increase it again!
The top IPs before, during, and after this latest alert tonight were:
# zcat --force /var/log/nginx/*.log /var/log/nginx/*.log.1 | grep -E &#34;01/Feb/2019:(17|18|19|20|21)&#34; | awk &#39;{print $1}&#39; | sort | uniq -c | sort -n | tail -n 10
245 207.46.13.5
332 54.70.40.11
385 5.143.231.38
405 207.46.13.173
405 207.46.13.75
1117 66.249.66.219
1121 35.237.175.180
1546 5.9.6.51
2474 45.5.186.2
5490 85.25.237.71
85.25.237.71 is the &ldquo;Linguee Bot&rdquo; that I first saw last month
The Solr statistics the past few months have been very high and I was wondering if the web server logs also showed an increase
There were just over 3 million accesses in the nginx logs last month:
# time zcat --force /var/log/nginx/* | grep -cE &#34;[0-9]{1,2}/Jan/2019&#34;
3018243
real 0m19.873s
user 0m22.203s
sys 0m1.979s
" />
<meta property="og:type" content="article" />
<meta property="og:url" content="https://alanorth.github.io/cgspace-notes/2019-02/" />
<meta property="article:published_time" content="2019-02-01T21:37:30+02:00" />
<meta property="article:modified_time" content="2019-10-28T13:39:25+02:00" />
<meta name="twitter:card" content="summary"/>
<meta name="twitter:title" content="February, 2019"/>
<meta name="twitter:description" content="2019-02-01
Linode has alerted a few times since last night that the CPU usage on CGSpace (linode18) was high despite me increasing the alert threshold last week from 250% to 275%—I might need to increase it again!
The top IPs before, during, and after this latest alert tonight were:
# zcat --force /var/log/nginx/*.log /var/log/nginx/*.log.1 | grep -E &#34;01/Feb/2019:(17|18|19|20|21)&#34; | awk &#39;{print $1}&#39; | sort | uniq -c | sort -n | tail -n 10
245 207.46.13.5
332 54.70.40.11
385 5.143.231.38
405 207.46.13.173
405 207.46.13.75
1117 66.249.66.219
1121 35.237.175.180
1546 5.9.6.51
2474 45.5.186.2
5490 85.25.237.71
85.25.237.71 is the &ldquo;Linguee Bot&rdquo; that I first saw last month
The Solr statistics the past few months have been very high and I was wondering if the web server logs also showed an increase
There were just over 3 million accesses in the nginx logs last month:
# time zcat --force /var/log/nginx/* | grep -cE &#34;[0-9]{1,2}/Jan/2019&#34;
3018243
real 0m19.873s
user 0m22.203s
sys 0m1.979s
"/>
<meta name="generator" content="Hugo 0.101.0" />
<script type="application/ld+json">
{
"@context": "http://schema.org",
"@type": "BlogPosting",
"headline": "February, 2019",
"url": "https://alanorth.github.io/cgspace-notes/2019-02/",
"wordCount": "7700",
"datePublished": "2019-02-01T21:37:30+02:00",
"dateModified": "2019-10-28T13:39:25+02:00",
"author": {
"@type": "Person",
"name": "Alan Orth"
},
"keywords": "Notes"
}
</script>
<link rel="canonical" href="https://alanorth.github.io/cgspace-notes/2019-02/">
<title>February, 2019 | CGSpace Notes</title>
<!-- combined, minified CSS -->
<link href="https://alanorth.github.io/cgspace-notes/css/style.beb8012edc08ba10be012f079d618dc243812267efe62e11f22fe49618f976a4.css" rel="stylesheet" integrity="sha256-vrgBLtwIuhC&#43;AS8HnWGNwkOBImfv5i4R8i/klhj5dqQ=" crossorigin="anonymous">
<!-- minified Font Awesome for SVG icons -->
<script defer src="https://alanorth.github.io/cgspace-notes/js/fontawesome.min.f5072c55a0721857184db93a50561d7dc13975b4de2e19db7f81eb5f3fa57270.js" integrity="sha256-9QcsVaByGFcYTbk6UFYdfcE5dbTeLhnbf4HrXz&#43;lcnA=" crossorigin="anonymous"></script>
<!-- RSS 2.0 feed -->
</head>
<body>
<div class="blog-masthead">
<div class="container">
<nav class="nav blog-nav">
<a class="nav-link " href="https://alanorth.github.io/cgspace-notes/">Home</a>
</nav>
</div>
</div>
<header class="blog-header">
<div class="container">
<h1 class="blog-title" dir="auto"><a href="https://alanorth.github.io/cgspace-notes/" rel="home">CGSpace Notes</a></h1>
<p class="lead blog-description" dir="auto">Documenting day-to-day work on the <a href="https://cgspace.cgiar.org">CGSpace</a> repository.</p>
</div>
</header>
<div class="container">
<div class="row">
<div class="col-sm-8 blog-main">
<article class="blog-post">
<header>
<h2 class="blog-post-title" dir="auto"><a href="https://alanorth.github.io/cgspace-notes/2019-02/">February, 2019</a></h2>
<p class="blog-post-meta">
<time datetime="2019-02-01T21:37:30+02:00">Fri Feb 01, 2019</time>
in
<span class="fas fa-folder" aria-hidden="true"></span>&nbsp;<a href="/categories/notes/" rel="category tag">Notes</a>
</p>
</header>
<h2 id="2019-02-01">2019-02-01</h2>
<ul>
<li>Linode has alerted a few times since last night that the CPU usage on CGSpace (linode18) was high despite me increasing the alert threshold last week from 250% to 275%—I might need to increase it again!</li>
<li>The top IPs before, during, and after this latest alert tonight were:</li>
</ul>
<pre tabindex="0"><code># zcat --force /var/log/nginx/*.log /var/log/nginx/*.log.1 | grep -E &#34;01/Feb/2019:(17|18|19|20|21)&#34; | awk &#39;{print $1}&#39; | sort | uniq -c | sort -n | tail -n 10
245 207.46.13.5
332 54.70.40.11
385 5.143.231.38
405 207.46.13.173
405 207.46.13.75
1117 66.249.66.219
1121 35.237.175.180
1546 5.9.6.51
2474 45.5.186.2
5490 85.25.237.71
</code></pre><ul>
<li><code>85.25.237.71</code> is the &ldquo;Linguee Bot&rdquo; that I first saw last month</li>
<li>The Solr statistics the past few months have been very high and I was wondering if the web server logs also showed an increase</li>
<li>There were just over 3 million accesses in the nginx logs last month:</li>
</ul>
<pre tabindex="0"><code># time zcat --force /var/log/nginx/* | grep -cE &#34;[0-9]{1,2}/Jan/2019&#34;
3018243
real 0m19.873s
user 0m22.203s
sys 0m1.979s
</code></pre><ul>
<li>Normally I&rsquo;d say this was very high, but <a href="/cgspace-notes/2018-02/">about this time last year</a> I remember thinking the same thing when we had 3.1 million&hellip;</li>
<li>I will have to keep an eye on this to see if there is some error in Solr&hellip; (see the query sketch at the end of this section)</li>
<li>Atmire sent their <a href="https://github.com/ilri/DSpace/pull/407">pull request to re-enable the Metadata Quality Module (MQM) on our <code>5_x-dev</code> branch</a> today
<ul>
<li>I will test it next week and send them feedback</li>
</ul>
</li>
</ul>
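<ul>
<li>For comparison with the Solr statistics mentioned above, one way to count the hits recorded in the statistics core for January is to query Solr directly (a sketch, assuming Solr is reachable on localhost:8081; the <code>numFound</code> value in the response is the total, with no bot filtering applied):</li>
</ul>
<pre tabindex="0"><code>$ curl -s &#39;http://localhost:8081/solr/statistics/select&#39; --data-urlencode &#39;q=time:[2019-01-01T00:00:00Z TO 2019-01-31T23:59:59Z]&#39; -d &#39;rows=0&#39; -d &#39;wt=json&#39;
</code></pre>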
<h2 id="2019-02-02">2019-02-02</h2>
<ul>
<li>Another alert from Linode about CGSpace (linode18) this morning, here are the top IPs in the web server logs before, during, and after that time:</li>
</ul>
<pre tabindex="0"><code># zcat --force /var/log/nginx/*.log /var/log/nginx/*.log.1 | grep -E &#34;02/Feb/2019:0(1|2|3|4|5)&#34; | awk &#39;{print $1}&#39; | sort | uniq -c | sort -n | tail -n 10
284 18.195.78.144
329 207.46.13.32
417 35.237.175.180
448 34.218.226.147
694 2a01:4f8:13b:1296::2
718 2a01:4f8:140:3192::2
786 137.108.70.14
1002 5.9.6.51
6077 85.25.237.71
8726 45.5.184.2
</code></pre><ul>
<li><code>45.5.184.2</code> is CIAT and <code>85.25.237.71</code> is the new Linguee bot that I first noticed a few days ago</li>
<li>I will increase the Linode alert threshold from 275 to 300% because this is becoming too much!</li>
<li>I tested the Atmire Metadata Quality Module (MQM)&rsquo;s duplicate checker on some <a href="https://dspacetest.cgiar.org/handle/10568/81268">WLE items</a> that I helped Udana with a few months ago on DSpace Test (linode19) and indeed it found many duplicates!</li>
</ul>
<h2 id="2019-02-03">2019-02-03</h2>
<ul>
<li>This is seriously getting annoying, Linode sent another alert this morning that CGSpace (linode18) load was 377%!</li>
<li>Here are the top IPs before, during, and after that time:</li>
</ul>
<pre tabindex="0"><code># zcat --force /var/log/nginx/*.log /var/log/nginx/*.log.1 | grep -E &#34;03/Feb/2019:0(5|6|7|8|9)&#34; | awk &#39;{print $1}&#39; | sort | uniq -c | sort -n | tail -n 10
325 85.25.237.71
340 45.5.184.72
431 5.143.231.8
756 5.9.6.51
1048 34.218.226.147
1203 66.249.66.219
1496 195.201.104.240
4658 205.186.128.185
4658 70.32.83.92
4852 45.5.184.2
</code></pre><ul>
<li><code>45.5.184.2</code> is CIAT, <code>70.32.83.92</code> and <code>205.186.128.185</code> are Macaroni Bros harvesters for CCAFS I think</li>
<li><code>195.201.104.240</code> is a new IP address in Germany with the following user agent:</li>
</ul>
<pre tabindex="0"><code>Mozilla/5.0 (Macintosh; Intel Mac OS X 10.12; rv:62.0) Gecko/20100101 Firefox/62.0
</code></pre><ul>
<li>This user was making 2060 requests per minute this morning&hellip; seems like I should try to block this type of behavior heuristically, regardless of user agent!</li>
</ul>
<pre tabindex="0"><code># zcat --force /var/log/nginx/*.log /var/log/nginx/*.log.1 | grep -E &#34;03/Feb/2019&#34; | grep 195.201.104.240 | grep -o -E &#39;03/Feb/2019:0[0-9]:[0-9][0-9]&#39; | uniq -c | sort -n | tail -n 20
19 03/Feb/2019:07:42
20 03/Feb/2019:07:12
21 03/Feb/2019:07:27
21 03/Feb/2019:07:28
25 03/Feb/2019:07:23
25 03/Feb/2019:07:29
26 03/Feb/2019:07:33
28 03/Feb/2019:07:38
30 03/Feb/2019:07:31
33 03/Feb/2019:07:35
33 03/Feb/2019:07:37
38 03/Feb/2019:07:40
43 03/Feb/2019:07:24
43 03/Feb/2019:07:32
46 03/Feb/2019:07:36
47 03/Feb/2019:07:34
47 03/Feb/2019:07:39
47 03/Feb/2019:07:41
51 03/Feb/2019:07:26
59 03/Feb/2019:07:25
</code></pre><ul>
<li>At least they re-used their Tomcat session!</li>
</ul>
<pre tabindex="0"><code>$ grep -o -E &#39;session_id=[A-Z0-9]{32}:ip_addr=195.201.104.240&#39; dspace.log.2019-02-03 | sort | uniq | wc -l
1
</code></pre><ul>
<li>This user was making requests to <code>/browse</code>, which is not currently under the existing rate limiting of dynamic pages in our nginx config
<ul>
<li>I <a href="https://github.com/ilri/rmg-ansible-public/commit/36dfb072d6724fb5cdc81ef79cab08ed9ce427ad">extended the existing <code>dynamicpages</code> (12/m) rate limit to <code>/browse</code> and <code>/discover</code></a> with an allowance for bursting of up to five requests for &ldquo;real&rdquo; users (see the nginx sketch at the end of this section)</li>
</ul>
</li>
<li>Run all system updates on linode20 and reboot it
<ul>
<li>This will be the new AReS repository explorer server soon</li>
</ul>
</li>
</ul>
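<ul>
<li>The nginx rate limit change mentioned above looks roughly like this (a sketch only; the real zone definition, rate, and location blocks live in our Ansible templates and may differ):</li>
</ul>
<pre tabindex="0"><code># the dynamicpages zone is defined once in the http context, for example:
limit_req_zone $binary_remote_addr zone=dynamicpages:16m rate=12r/m;

# then applied to the browse and discover pages, allowing short bursts
# from &#34;real&#34; users before requests are delayed or rejected:
location ~ ^/(browse|discover) {
    limit_req zone=dynamicpages burst=5;
    # proxied to Tomcat here as in the other XMLUI location blocks
}
</code></pre>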
<h2 id="2019-02-04">2019-02-04</h2>
<ul>
<li>Generate a list of CTA subjects from CGSpace for Peter:</li>
</ul>
<pre tabindex="0"><code>dspace=# \copy (SELECT DISTINCT text_value, count(*) FROM metadatavalue WHERE resource_type_id=2 AND metadata_field_id=124 GROUP BY text_value ORDER BY COUNT DESC) to /tmp/cta-subjects.csv with csv header;
COPY 321
</code></pre><ul>
<li>Skype with Michael Victor about CKM and CGSpace</li>
<li>Discuss the new IITA research theme field with Abenet and decide that we should use <code>cg.identifier.iitatheme</code></li>
<li>This morning there was another alert from Linode about the high load on CGSpace (linode18), here are the top IPs in the web server logs before, during, and after that time:</li>
</ul>
<pre tabindex="0"><code># zcat --force /var/log/nginx/*.log /var/log/nginx/*.log.1 | grep -E &#34;04/Feb/2019:0(5|6|7|8|9)&#34; | awk &#39;{print $1}&#39; | sort | uniq -c | sort -n | tail -n 10
589 2a01:4f8:140:3192::2
762 66.249.66.219
889 35.237.175.180
1332 34.218.226.147
1393 5.9.6.51
1940 50.116.102.77
3578 85.25.237.71
4311 45.5.184.2
4658 205.186.128.185
4658 70.32.83.92
</code></pre><ul>
<li>At this rate I think I just need to stop paying attention to these alerts—DSpace gets thrashed when people use the APIs properly and there&rsquo;s nothing we can do to improve REST API performance!</li>
<li>Perhaps I just need to keep increasing the Linode alert threshold (currently 300%) for this host?</li>
</ul>
<h2 id="2019-02-05">2019-02-05</h2>
<ul>
<li>Peter sent me corrections and deletions for the CTA subjects and as usual, there were encoding errors with some accents in his file</li>
<li>In other news, it seems that the GREL syntax regarding booleans changed in OpenRefine recently, so I need to update some expressions like the one I use to detect encoding errors to use <code>toString()</code>:</li>
</ul>
<pre tabindex="0"><code>or(
isNotNull(value.match(/.*\uFFFD.*/)),
isNotNull(value.match(/.*\u00A0.*/)),
isNotNull(value.match(/.*\u200A.*/)),
isNotNull(value.match(/.*\u2019.*/)),
isNotNull(value.match(/.*\u00b4.*/)),
isNotNull(value.match(/.*\u007e.*/))
).toString()
</code></pre><ul>
<li>Testing the corrections for sixty-five items and sixteen deletions using my <a href="https://gist.github.com/alanorth/df92cbfb54d762ba21b28f7cd83b6897">fix-metadata-values.py</a> and <a href="https://gist.github.com/alanorth/bd7d58c947f686401a2b1fadc78736be">delete-metadata-values.py</a> scripts:</li>
</ul>
<pre tabindex="0"><code>$ ./fix-metadata-values.py -i 2019-02-04-Correct-65-CTA-Subjects.csv -f cg.subject.cta -t CORRECT -m 124 -db dspace -u dspace -p &#39;fuu&#39; -d
$ ./delete-metadata-values.py -i 2019-02-04-Delete-16-CTA-Subjects.csv -f cg.subject.cta -m 124 -db dspace -u dspace -p &#39;fuu&#39; -d
</code></pre><ul>
<li>I applied them on DSpace Test and CGSpace and started a full Discovery re-index:</li>
</ul>
<pre tabindex="0"><code>$ export JAVA_OPTS=&#34;-Dfile.encoding=UTF-8 -Xmx1024m&#34;
$ time schedtool -D -e ionice -c2 -n7 nice -n19 dspace index-discovery -b
</code></pre><ul>
<li>Peter had marked several terms with <code>||</code> to indicate multiple values in his corrections so I will have to go back and do those manually:</li>
</ul>
<pre tabindex="0"><code>EMPODERAMENTO DE JOVENS,EMPODERAMENTO||JOVENS
ENVIRONMENTAL PROTECTION AND NATURAL RESOURCES MANAGEMENT,NATURAL RESOURCES MANAGEMENT||ENVIRONMENT
FISHERIES AND AQUACULTURE,FISHERIES||AQUACULTURE
MARKETING AND TRADE,MARKETING||TRADE
MARKETING ET COMMERCE,MARKETING||COMMERCE
NATURAL RESOURCES AND ENVIRONMENT,NATURAL RESOURCES MANAGEMENT||ENVIRONMENT
PÊCHES ET AQUACULTURE,PÊCHES||AQUACULTURE
PESCAS E AQUACULTURE,PISCICULTURA||AQUACULTURE
</code></pre><h2 id="2019-02-06">2019-02-06</h2>
<ul>
<li>I dumped the CTA community so I can try to fix the subjects with multiple subjects that Peter indicated in his corrections:</li>
</ul>
<pre tabindex="0"><code>$ dspace metadata-export -i 10568/42211 -f /tmp/cta.csv
</code></pre><ul>
<li>Then I used <code>csvcut</code> to get only the CTA subject columns:</li>
</ul>
<pre tabindex="0"><code>$ csvcut -c &#34;id,collection,cg.subject.cta,cg.subject.cta[],cg.subject.cta[en_US]&#34; /tmp/cta.csv &gt; /tmp/cta-subjects.csv
</code></pre><ul>
<li>After that I imported the CSV into OpenRefine where I could properly identify and edit the subjects as multiple values</li>
<li>Then I imported it back into CGSpace:</li>
</ul>
<pre tabindex="0"><code>$ dspace metadata-import -f /tmp/2019-02-06-CTA-multiple-subjects.csv
</code></pre><ul>
<li>Another day, another alert about high load on CGSpace (linode18) from Linode</li>
<li>This time the load average was 370% and the top ten IPs before, during, and after that time were:</li>
</ul>
<pre tabindex="0"><code># zcat --force /var/log/nginx/*.log /var/log/nginx/*.log.1 | grep -E &#34;06/Feb/2019:0(5|6|7|8|9)&#34; | awk &#39;{print $1}&#39; | sort | uniq -c | sort -n | tail -n 10
689 35.237.175.180
1236 5.9.6.51
1305 34.218.226.147
1580 66.249.66.219
1939 50.116.102.77
2313 108.212.105.35
4666 205.186.128.185
4666 70.32.83.92
4950 85.25.237.71
5158 45.5.186.2
</code></pre><ul>
<li>Looking closer at the top users, I see <code>45.5.186.2</code> is in Brazil and was making over 100 requests per minute to the REST API:</li>
</ul>
<pre tabindex="0"><code># zcat --force /var/log/nginx/rest.log /var/log/nginx/rest.log.1 | grep 45.5.186.2 | grep -o -E &#39;06/Feb/2019:0[0-9]:[0-9][0-9]&#39; | uniq -c | sort -n | tail -n 10
118 06/Feb/2019:05:46
119 06/Feb/2019:05:37
119 06/Feb/2019:05:47
120 06/Feb/2019:05:43
120 06/Feb/2019:05:44
121 06/Feb/2019:05:38
122 06/Feb/2019:05:39
125 06/Feb/2019:05:42
126 06/Feb/2019:05:40
126 06/Feb/2019:05:41
</code></pre><ul>
<li>I was thinking of rate limiting those because I assumed most of them would be errors, but actually most are HTTP 200 OK!</li>
</ul>
<pre tabindex="0"><code># zcat --force /var/log/nginx/*.log /var/log/nginx/*.log.1 | grep -E &#39;06/Feb/2019&#39; | grep 45.5.186.2 | awk &#39;{print $9}&#39; | sort | uniq -c
10411 200
1 301
7 302
3 404
18 499
2 500
</code></pre><ul>
<li>I should probably start looking at the top IPs for web (XMLUI) and for API (REST and OAI) separately:</li>
</ul>
<pre tabindex="0"><code># zcat --force /var/log/nginx/{access,error,library-access}.log /var/log/nginx/{access,error,library-access}.log.1 | grep -E &#34;06/Feb/2019:0(5|6|7|8|9)&#34; | awk &#39;{print $1}&#39; | sort | uniq -c | sort -n | tail -n 10
328 220.247.212.35
372 66.249.66.221
380 207.46.13.2
519 2a01:4f8:140:3192::2
572 5.143.231.8
689 35.237.175.180
771 108.212.105.35
1236 5.9.6.51
1554 66.249.66.219
4942 85.25.237.71
# zcat --force /var/log/nginx/{oai,rest,statistics}.log /var/log/nginx/{oai,rest,statistics}.log.1 | grep -E &#34;06/Feb/2019:0(5|6|7|8|9)&#34; | awk &#39;{print $1}&#39; | sort | uniq -c | sort -n | tail -n 10
10 66.249.66.221
26 66.249.66.219
69 5.143.231.8
340 45.5.184.72
1040 34.218.226.147
1542 108.212.105.35
1937 50.116.102.77
4661 205.186.128.185
4661 70.32.83.92
5102 45.5.186.2
</code></pre><h2 id="2019-02-07">2019-02-07</h2>
<ul>
<li>Linode sent an alert last night that the load on CGSpace (linode18) was over 300%</li>
<li>Here are the top IPs in the web server and API logs before, during, and after that time, respectively:</li>
</ul>
<pre tabindex="0"><code># zcat --force /var/log/nginx/{oai,rest,statistics}.log /var/log/nginx/{oai,rest,statistics}.log.1 | grep -E &#34;06/Feb/2019:(17|18|19|20|23)&#34; | awk &#39;{print $1}&#39; | sort | uniq -c | sort -n | tail -n 10
5 66.249.66.209
6 2a01:4f8:210:51ef::2
6 40.77.167.75
9 104.198.9.108
9 157.55.39.192
10 157.55.39.244
12 66.249.66.221
20 95.108.181.88
27 66.249.66.219
2381 45.5.186.2
# zcat --force /var/log/nginx/{access,error,library-access}.log /var/log/nginx/{access,error,library-access}.log.1 | grep -E &#34;06/Feb/2019:(17|18|19|20|23)&#34; | awk &#39;{print $1}&#39; | sort | uniq -c | sort -n | tail -n 10
455 45.5.186.2
506 40.77.167.75
559 54.70.40.11
825 157.55.39.244
871 2a01:4f8:140:3192::2
938 157.55.39.192
1058 85.25.237.71
1416 5.9.6.51
1606 66.249.66.219
1718 35.237.175.180
</code></pre><ul>
<li>Then again this morning another alert:</li>
</ul>
<pre tabindex="0"><code># zcat --force /var/log/nginx/{oai,rest,statistics}.log /var/log/nginx/{oai,rest,statistics}.log.1 | grep -E &#34;07/Feb/2019:0(5|6|7|8|9)&#34; | awk &#39;{print $1}&#39; | sort | uniq -c | sort -n | tail -n 10
5 66.249.66.223
8 104.198.9.108
13 110.54.160.222
24 66.249.66.219
25 175.158.217.98
214 34.218.226.147
346 45.5.184.72
4529 45.5.186.2
4661 205.186.128.185
4661 70.32.83.92
# zcat --force /var/log/nginx/{access,error,library-access}.log /var/log/nginx/{access,error,library-access}.log.1 | grep -E &#34;07/Feb/2019:0(5|6|7|8|9)&#34; | awk &#39;{print $1}&#39; | sort | uniq -c | sort -n | tail -n 10
145 157.55.39.237
154 66.249.66.221
214 34.218.226.147
261 35.237.175.180
273 2a01:4f8:140:3192::2
300 169.48.66.92
487 5.143.231.39
766 5.9.6.51
771 85.25.237.71
848 66.249.66.219
</code></pre><ul>
<li>So it seems that the load issue comes from the REST API, not the XMLUI</li>
<li>I could probably rate limit the REST API, or maybe just keep increasing the alert threshold so I don&rsquo;t get alert spam (this is probably the correct approach because it seems like the REST API can keep up with the requests and is returning HTTP 200 status as far as I can tell)</li>
<li>Bosede from IITA sent a message that a colleague is having problems submitting to some collections in their community:</li>
</ul>
<pre tabindex="0"><code>Authorization denied for action WORKFLOW_STEP_1 on COLLECTION:1056 by user 1759
</code></pre><ul>
<li>Collection 1056 appears to be <a href="https://cgspace.cgiar.org/handle/10568/68741">IITA Posters and Presentations</a> and I see that its workflow step 1 (Accept/Reject) is empty:</li>
</ul>
<p><img src="/cgspace-notes/2019/02/iita-workflow-step1-empty.png" alt="IITA Posters and Presentations workflow step 1 empty"></p>
<ul>
<li>IITA editors or approvers should be added to that step (though I&rsquo;m curious why nobody is in that group currently)</li>
<li>Abenet says we are not using the &ldquo;Accept/Reject&rdquo; step so this group should be deleted</li>
<li>Bizuwork asked about the &ldquo;DSpace Submission Approved and Archived&rdquo; emails that stopped working last month</li>
<li>I tried the <code>test-email</code> command on DSpace and it indeed is not working:</li>
</ul>
<pre tabindex="0"><code>$ dspace test-email
About to send test email:
- To: aorth@mjanja.ch
- Subject: DSpace test email
- Server: smtp.serv.cgnet.com
Error sending email:
- Error: javax.mail.MessagingException: Could not connect to SMTP host: smtp.serv.cgnet.com, port: 25;
nested exception is:
java.net.ConnectException: Connection refused (Connection refused)
Please see the DSpace documentation for assistance.
</code></pre><ul>
<li>I can&rsquo;t connect to TCP port 25 on that server so I sent a mail to CGNET support to ask what&rsquo;s up</li>
<li>CGNET said these servers were discontinued in 2018-01 and that I should use <a href="https://docs.microsoft.com/en-us/exchange/mail-flow-best-practices/how-to-set-up-a-multifunction-device-or-application-to-send-email-using-office-3">Office 365</a></li>
</ul>
<h2 id="2019-02-08">2019-02-08</h2>
<ul>
<li>I re-configured CGSpace to use the email/password for cgspace-support, but I get this error when I try the <code>test-email</code> script:</li>
</ul>
<pre tabindex="0"><code>Error sending email:
- Error: com.sun.mail.smtp.SMTPSendFailedException: 530 5.7.57 SMTP; Client was not authenticated to send anonymous mail during MAIL FROM [AM6PR10CA0028.EURPRD10.PROD.OUTLOOK.COM]
</code></pre><ul>
<li>I tried to log into Outlook 365 with the credentials but I think the ones I have must be wrong, so I will ask ICT to reset the password</li>
</ul>
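<ul>
<li>For reference, the mail settings I am working with in <code>dspace.cfg</code> look roughly like this (a sketch, assuming the standard DSpace 5 property names and the usual Office 365 SMTP submission host and port; the account name and password below are placeholders, not the real credentials):</li>
</ul>
<pre tabindex="0"><code># hypothetical values for illustration only
mail.server = smtp.office365.com
mail.server.port = 587
mail.server.username = cgspace-support@example.org
mail.server.password = xxxxx
mail.from.address = cgspace-support@example.org
mail.extraproperties = mail.smtp.starttls.enable=true
</code></pre>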
<h2 id="2019-02-09">2019-02-09</h2>
<ul>
<li>Linode sent alerts about CPU load yesterday morning, yesterday night, and this morning! All over 300% CPU load!</li>
<li>This is just for this morning:</li>
</ul>
<pre tabindex="0"><code># zcat --force /var/log/nginx/{access,error,library-access}.log /var/log/nginx/{access,error,library-access}.log.1 | grep -E &#34;09/Feb/2019:(07|08|09|10|11)&#34; | awk &#39;{print $1}&#39; | sort | uniq -c | sort -n | tail -n 10
289 35.237.175.180
290 66.249.66.221
296 18.195.78.144
312 207.46.13.201
393 207.46.13.64
526 2a01:4f8:140:3192::2
580 151.80.203.180
742 5.143.231.38
1046 5.9.6.51
1331 66.249.66.219
# zcat --force /var/log/nginx/{oai,rest,statistics}.log /var/log/nginx/{oai,rest,statistics}.log.1 | grep -E &#34;09/Feb/2019:(07|08|09|10|11)&#34; | awk &#39;{print $1}&#39; | sort | uniq -c | sort -n | tail -n 10
4 66.249.83.30
5 49.149.10.16
8 207.46.13.64
9 207.46.13.201
11 105.63.86.154
11 66.249.66.221
31 66.249.66.219
297 2001:41d0:d:1990::
908 34.218.226.147
1947 50.116.102.77
</code></pre><ul>
<li>I know 66.249.66.219 is Google, 5.9.6.51 is MegaIndex, and 5.143.231.38 is SputnikBot</li>
<li>Ooh, but 151.80.203.180 is some malicious bot making requests for <code>/etc/passwd</code> like this:</li>
</ul>
<pre tabindex="0"><code>/bitstream/handle/10568/68981/Identifying%20benefit%20flows%20studies%20on%20the%20potential%20monetary%20and%20non%20monetary%20benefits%20arising%20from%20the%20International%20Treaty%20on%20Plant%20Genetic_1671.pdf?sequence=1&amp;amp;isAllowed=../etc/passwd
</code></pre><ul>
<li>151.80.203.180 is on OVH so I sent a message to their abuse email&hellip;</li>
</ul>
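<ul>
<li>For reference, one way to count how many of these requests that IP sent (using the same nginx logs as above):</li>
</ul>
<pre tabindex="0"><code># zcat --force /var/log/nginx/*.log /var/log/nginx/*.log.1 | grep 151.80.203.180 | grep -c &#39;etc/passwd&#39;
</code></pre>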
<h2 id="2019-02-10">2019-02-10</h2>
<ul>
<li>Linode sent another alert about CGSpace (linode18) CPU load this morning, here are the top IPs in the web server XMLUI and API logs before, during, and after that time:</li>
</ul>
<pre tabindex="0"><code># zcat --force /var/log/nginx/{access,error,library-access}.log /var/log/nginx/{access,error,library-access}.log.1 | grep -E &#34;10/Feb/2019:0(5|6|7|8|9)&#34; | awk &#39;{print $1}&#39; | sort | uniq -c | sort -n | tail -n 10
232 18.195.78.144
238 35.237.175.180
281 66.249.66.221
314 151.80.203.180
319 34.218.226.147
326 40.77.167.178
352 157.55.39.149
444 2a01:4f8:140:3192::2
1171 5.9.6.51
1196 66.249.66.219
# zcat --force /var/log/nginx/{oai,rest,statistics}.log /var/log/nginx/{oai,rest,statistics}.log.1 | grep -E &#34;10/Feb/2019:0(5|6|7|8|9)&#34; | awk &#39;{print $1}&#39; | sort | uniq -c | sort -n | tail -n 10
6 112.203.241.69
7 157.55.39.149
9 40.77.167.178
15 66.249.66.219
368 45.5.184.72
432 50.116.102.77
971 34.218.226.147
4403 45.5.186.2
4668 205.186.128.185
4668 70.32.83.92
</code></pre><ul>
<li>Another interesting thing might be the total number of requests for web and API services during that time:</li>
</ul>
<pre tabindex="0"><code># zcat --force /var/log/nginx/{access,error,library-access}.log /var/log/nginx/{access,error,library-access}.log.1 | grep -cE &#34;10/Feb/2019:0(5|6|7|8|9)&#34;
16333
# zcat --force /var/log/nginx/{oai,rest,statistics}.log /var/log/nginx/{oai,rest,statistics}.log.1 | grep -cE &#34;10/Feb/2019:0(5|6|7|8|9)&#34;
15964
</code></pre><ul>
<li>Also, the number of unique IPs served during that time:</li>
</ul>
<pre tabindex="0"><code># zcat --force /var/log/nginx/{access,error,library-access}.log /var/log/nginx/{access,error,library-access}.log.1 | grep -E &#34;10/Feb/2019:0(5|6|7|8|9)&#34; | awk &#39;{print $1}&#39; | sort | uniq | wc -l
1622
# zcat --force /var/log/nginx/{oai,rest,statistics}.log /var/log/nginx/{oai,rest,statistics}.log.1 | grep -E &#34;10/Feb/2019:0(5|6|7|8|9)&#34; | awk &#39;{print $1}&#39; | sort | uniq | wc -l
95
</code></pre><ul>
<li>It&rsquo;s very clear to me now that the API requests are the heaviest!</li>
<li>I think I need to increase the Linode alert threshold from 300 to 350% now so I stop getting some of these alerts—it&rsquo;s becoming a bit of <em>the boy who cried wolf</em> because it alerts like clockwork twice per day!</li>
<li>Add my Python- and shell-based metadata workflow helper scripts as well as the environment settings for pipenv to our DSpace repository (<a href="https://github.com/ilri/DSpace/pull/408">#408</a>) so I can track changes and distribute them more formally instead of just keeping them <a href="https://github.com/ilri/DSpace/wiki/Scripts">collected on the wiki</a></li>
<li>Started adding IITA research theme (<code>cg.identifier.iitatheme</code>) to CGSpace
<ul>
<li>I&rsquo;m still waiting for feedback from IITA whether they actually want to use &ldquo;SOCIAL SCIENCE &amp; AGRIC BUSINESS&rdquo; because it is listed as <a href="http://www.iita.org/project-discipline/social-science-and-agribusiness/">&ldquo;Social Science and Agribusiness&rdquo;</a> on their website</li>
<li>Also, I think they want to do some mappings of items with existing subjects to these new themes</li>
</ul>
</li>
<li>Update ILRI author name style in the controlled vocabulary (Domelevo Entfellner, Jean-Baka) (<a href="https://github.com/ilri/DSpace/pull/409">#409</a>)
<ul>
<li>I&rsquo;m still waiting to hear from Bizuwork whether we&rsquo;ll batch update all existing items with the old name style</li>
<li>No, there is only one entry and Bizu already fixed it</li>
</ul>
</li>
<li>Last week Hector Tobon from CCAFS asked me about the Creative Commons 3.0 Intergovernmental Organizations (IGO) license because it is not in the list of SPDX licenses
<ul>
<li>Today I made <a href="http://13.57.134.254/app/license_requests/15/">a request</a> to the <a href="https://github.com/spdx/license-list-XML/blob/master/CONTRIBUTING.md">SPDX using their web form</a> to include this <a href="https://wiki.creativecommons.org/wiki/Intergovernmental_Organizations">class of Creative Commons licenses</a></li>
</ul>
</li>
<li>Testing the <code>mail.server.disabled</code> property that I noticed in <code>dspace.cfg</code> recently
<ul>
<li>Setting it to true results in the following message when I try the <code>dspace test-email</code> helper on DSpace Test:</li>
</ul>
</li>
</ul>
<pre tabindex="0"><code>Error sending email:
- Error: cannot test email because mail.server.disabled is set to true
</code></pre><ul>
<li>I&rsquo;m not sure why I didn&rsquo;t know about this configuration option before, and always maintained multiple configurations for development and production
<ul>
<li>I will modify the <a href="https://github.com/ilri/rmg-ansible-public">Ansible DSpace role</a> to use this in its <code>build.properties</code> template</li>
</ul>
</li>
<li>I updated my local Sonatype nexus Docker image and had an issue with the volume for some reason so I decided to just start from scratch:</li>
</ul>
<pre tabindex="0"><code># docker rm nexus
# docker pull sonatype/nexus3
# mkdir -p /home/aorth/.local/lib/containers/volumes/nexus_data
# chown 200:200 /home/aorth/.local/lib/containers/volumes/nexus_data
# docker run --name nexus --network dspace-build -d -v /home/aorth/.local/lib/containers/volumes/nexus_data:/nexus-data -p 8081:8081 sonatype/nexus3
</code></pre><ul>
<li>For some reason my <code>mvn package</code> for DSpace is not working now&hellip; I might go back to <a href="https://mjanja.ch/2018/02/cache-maven-artifacts-with-artifactory/">using Artifactory for caching</a> instead:</li>
</ul>
<pre tabindex="0"><code># docker pull docker.bintray.io/jfrog/artifactory-oss:latest
# mkdir -p /home/aorth/.local/lib/containers/volumes/artifactory5_data
# chown 1030 /home/aorth/.local/lib/containers/volumes/artifactory5_data
# docker run --name artifactory --network dspace-build -d -v /home/aorth/.local/lib/containers/volumes/artifactory5_data:/var/opt/jfrog/artifactory -p 8081:8081 docker.bintray.io/jfrog/artifactory-oss
</code></pre><h2 id="2019-02-11">2019-02-11</h2>
<ul>
<li>Bosede from IITA said we can use &ldquo;SOCIAL SCIENCE &amp; AGRIBUSINESS&rdquo; in their new IITA theme field to be consistent with other places they are using it</li>
<li>Run all system updates on DSpace Test (linode19) and reboot it</li>
</ul>
<h2 id="2019-02-12">2019-02-12</h2>
<ul>
<li>I notice that <a href="https://jira.duraspace.org/browse/DS-3052">DSpace 6 has included a new JAR-based PDF thumbnailer based on PDFBox</a>, I wonder how good its thumbnails are and how it handles CMYK PDFs</li>
<li>On a similar note, I wonder if we could use the performance-focused <a href="https://libvips.github.io/libvips/">libvips</a> and the third-party <a href="https://github.com/codecitizen/jlibvips/">jlibvips Java library</a> in DSpace</li>
<li>Testing the <code>vipsthumbnail</code> command line tool with <a href="https://cgspace.cgiar.org/handle/10568/51999">this CGSpace item that uses CMYK</a>:</li>
</ul>
<pre tabindex="0"><code>$ vipsthumbnail alc_contrastes_desafios.pdf -s 300 -o &#39;%s.jpg[Q=92,optimize_coding,strip]&#39;
</code></pre><ul>
<li>(DSpace 5 appears to use JPEG 92 quality so I do the same)</li>
<li>Thinking about making &ldquo;top items&rdquo; endpoints in my <a href="https://github.com/ilri/dspace-statistics-api">dspace-statistics-api</a></li>
<li>I could use the following SQL queries very easily to get the top items by views or downloads:</li>
</ul>
<pre tabindex="0"><code>dspacestatistics=# SELECT * FROM items WHERE views &gt; 0 ORDER BY views DESC LIMIT 10;
dspacestatistics=# SELECT * FROM items WHERE downloads &gt; 0 ORDER BY downloads DESC LIMIT 10;
</code></pre><ul>
<li>I&rsquo;d have to think about what to make the REST API endpoints, perhaps: <code>/statistics/top/items?limit=10</code></li>
<li>But how do I do top items by views / downloads separately? (see the sketch at the end of this section)</li>
<li>I re-deployed DSpace 6.3 locally to test the PDFBox thumbnails, especially to see if they handle CMYK files properly
<ul>
<li>The quality is JPEG 75 and I don&rsquo;t see a way to set the thumbnail dimensions, but the resulting image is indeed sRGB:</li>
</ul>
</li>
</ul>
<pre tabindex="0"><code>$ identify -verbose alc_contrastes_desafios.pdf.jpg
...
Colorspace: sRGB
</code></pre><ul>
<li>I will read the PDFBox thumbnailer documentation to see if I can change the size and quality</li>
</ul>
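<ul>
<li>Regarding the top items endpoints above, one option is to keep views and downloads as separate queries and expose them as separate endpoints, for example (a sketch only; both the paths and the one-endpoint-per-metric idea are hypothetical, and each endpoint would simply run the corresponding SQL query shown above and return the rows as JSON):</li>
</ul>
<pre tabindex="0"><code>$ curl -s &#39;https://dspacetest.cgiar.org/rest/statistics/top/items/views?limit=10&#39;
$ curl -s &#39;https://dspacetest.cgiar.org/rest/statistics/top/items/downloads?limit=10&#39;
</code></pre>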
<h2 id="2019-02-13">2019-02-13</h2>
<ul>
<li>ILRI ICT reset the password for the CGSpace mail account, but I still can&rsquo;t get it to send mail from DSpace&rsquo;s <code>test-email</code> utility</li>
<li>I even added extra mail properties to <code>dspace.cfg</code> as suggested by someone on the dspace-tech mailing list:</li>
</ul>
<pre tabindex="0"><code>mail.extraproperties = mail.smtp.starttls.required = true, mail.smtp.auth=true
</code></pre><ul>
<li>But the result is still:</li>
</ul>
<pre tabindex="0"><code>Error sending email:
- Error: com.sun.mail.smtp.SMTPSendFailedException: 530 5.7.57 SMTP; Client was not authenticated to send anonymous mail during MAIL FROM [AM6PR06CA0001.eurprd06.prod.outlook.com]
</code></pre><ul>
<li>I tried to log into the Outlook 365 web mail and it doesn&rsquo;t work so I&rsquo;ve emailed ILRI ICT again</li>
<li>After reading the <a href="https://javaee.github.io/javamail/FAQ#commonmistakes">common mistakes in the JavaMail FAQ</a> I reconfigured the extra properties in DSpace&rsquo;s mail configuration to be simply:</li>
</ul>
<pre tabindex="0"><code>mail.extraproperties = mail.smtp.starttls.enable=true
</code></pre><ul>
<li>&hellip; and then I was able to send a mail using my personal account where I know the credentials work</li>
<li>The CGSpace account still gets this error message:</li>
</ul>
<pre tabindex="0"><code>Error sending email:
- Error: javax.mail.AuthenticationFailedException
</code></pre><ul>
<li>I updated the <a href="https://github.com/ilri/DSpace/pull/410">DSpace SMTP settings in <code>dspace.cfg</code></a> as well as the <a href="https://github.com/ilri/rmg-ansible-public/commit/ab5fe4d10e16413cd04ffb1bc3179dc970d6d47c">variables in the DSpace role of the Ansible infrastructure scripts</a></li>
<li>Thierry from CTA is having issues with his account on DSpace Test, and there is no admin password reset function on DSpace (only via email, which is disabled on DSpace Test), so I have to delete and re-create his account:</li>
</ul>
<pre tabindex="0"><code>$ dspace user --delete --email blah@cta.int
$ dspace user --add --givenname Thierry --surname Lewyllie --email blah@cta.int --password &#39;blah&#39;
</code></pre><ul>
<li>On this note, I saw a thread on the dspace-tech mailing list that says this functionality exists if you enable <code>webui.user.assumelogin = true</code></li>
<li>I will enable this on CGSpace (<a href="https://github.com/ilri/DSpace/pull/411">#411</a>)</li>
<li>Test re-creating my local PostgreSQL and Artifactory containers with podman instead of Docker (using the volumes from my old Docker containers though):</li>
</ul>
<pre tabindex="0"><code># podman pull postgres:9.6-alpine
# podman run --name dspacedb -v /home/aorth/.local/lib/containers/volumes/dspacedb_data:/var/lib/postgresql/data -e POSTGRES_PASSWORD=postgres -p 5432:5432 -d postgres:9.6-alpine
# podman pull docker.bintray.io/jfrog/artifactory-oss
# podman run --name artifactory -d -v /home/aorth/.local/lib/containers/volumes/artifactory5_data:/var/opt/jfrog/artifactory -p 8081:8081 docker.bintray.io/jfrog/artifactory-oss
</code></pre><ul>
<li>Totally works&hellip; awesome!</li>
<li>Then I tried with rootless containers by creating the subuid and subgid mappings for aorth:</li>
</ul>
<pre tabindex="0"><code>$ sudo touch /etc/subuid /etc/subgid
$ usermod --add-subuids 10000-75535 aorth
$ usermod --add-subgids 10000-75535 aorth
$ sudo sysctl kernel.unprivileged_userns_clone=1
$ podman pull postgres:9.6-alpine
$ podman run --name dspacedb -v /home/aorth/.local/lib/containers/volumes/dspacedb_data:/var/lib/postgresql/data -e POSTGRES_PASSWORD=postgres -p 5432:5432 -d postgres:9.6-alpine
</code></pre><ul>
<li>Which totally works, but Podman&rsquo;s rootless support doesn&rsquo;t work with port mappings yet&hellip;</li>
<li>Deploy the Tomcat-7-from-tarball branch on CGSpace (linode18), but first stop the Ubuntu Tomcat 7 and do some basic prep before running the Ansible playbook:</li>
</ul>
<pre tabindex="0"><code># systemctl stop tomcat7
# apt remove tomcat7 tomcat7-admin
# useradd -m -r -s /bin/bash dspace
# mv /usr/share/tomcat7/.m2 /home/dspace
# mv /usr/share/tomcat7/src /home/dspace
# chown -R dspace:dspace /home/dspace
# chown -R dspace:dspace /home/cgspace.cgiar.org
# dpkg -P tomcat7-admin tomcat7-common
</code></pre><ul>
<li>After running the playbook CGSpace came back up, but I had an issue with some Solr cores not being loaded (similar to last month) and this was in the Solr log:</li>
</ul>
<pre tabindex="0"><code>2019-02-14 18:17:31,304 ERROR org.apache.solr.core.SolrCore @ org.apache.solr.common.SolrException: Error CREATEing SolrCore &#39;statistics-2018&#39;: Unable to create core [statistics-2018] Caused by: Lock obtain timed out: NativeFSLock@/home/cgspace.cgiar.org/solr/statistics-2018/data/index/write.lock
</code></pre><ul>
<li>The issue last month was address space, which is now set as <code>LimitAS=infinity</code> in <code>tomcat7.service</code>&hellip;</li>
<li>I re-ran the Ansible playbook to make sure all configs etc were the same, then rebooted the server</li>
<li>Still the error persists after reboot</li>
<li>I will try to stop Tomcat and then remove the locks manually:</li>
</ul>
<pre tabindex="0"><code># find /home/cgspace.cgiar.org/solr/ -iname &#34;write.lock&#34; -delete
</code></pre><ul>
<li>After restarting Tomcat the usage statistics are back</li>
<li>Interestingly, many of the locks were from last month, last year, and even 2015! I&rsquo;m pretty sure that&rsquo;s not supposed to be how locks work&hellip;</li>
<li>Help Sarah Kasyoka finish an item submission that she was having issues with due to the file size</li>
<li>I increased the nginx upload limit, but she said she was having problems and couldn&rsquo;t really tell me why</li>
<li>I logged in as her and completed the submission with no problems&hellip;</li>
</ul>
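<ul>
<li>For reference, the nginx upload limit is controlled by <code>client_max_body_size</code>; the change was conceptually like this (a sketch; the actual value and the context it lives in may differ in our config):</li>
</ul>
<pre tabindex="0"><code>server {
    # allow larger files in XMLUI item submissions (value is an example)
    client_max_body_size 512m;
}
</code></pre>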
<h2 id="2019-02-15">2019-02-15</h2>
<ul>
<li>Tomcat was killed around 3AM by the kernel&rsquo;s OOM killer according to <code>dmesg</code>:</li>
</ul>
<pre tabindex="0"><code>[Fri Feb 15 03:10:42 2019] Out of memory: Kill process 12027 (java) score 670 or sacrifice child
[Fri Feb 15 03:10:42 2019] Killed process 12027 (java) total-vm:14108048kB, anon-rss:5450284kB, file-rss:0kB, shmem-rss:0kB
[Fri Feb 15 03:10:43 2019] oom_reaper: reaped process 12027 (java), now anon-rss:0kB, file-rss:0kB, shmem-rss:0kB
</code></pre><ul>
<li>The <code>tomcat7</code> service shows:</li>
</ul>
<pre tabindex="0"><code>Feb 15 03:10:44 linode19 systemd[1]: tomcat7.service: Main process exited, code=killed, status=9/KILL
</code></pre><ul>
<li>I suspect it was related to the media-filter cron job that runs at 3AM but I don&rsquo;t see anything particular in the log files</li>
<li>I want to try to normalize the <code>text_lang</code> values to make working with metadata easier</li>
<li>We currently have a bunch of weird values that DSpace uses like <code>NULL</code>, <code>en_US</code>, and <code>en</code> and others that have been entered manually by editors:</li>
</ul>
<pre tabindex="0"><code>dspace=# SELECT DISTINCT text_lang, count(*) FROM metadatavalue WHERE resource_type_id=2 GROUP BY text_lang ORDER BY count DESC;
text_lang | count
-----------+---------
| 1069539
en_US | 577110
| 334768
en | 133501
es | 12
* | 11
es_ES | 2
fr | 2
spa | 2
E. | 1
ethnob | 1
</code></pre><ul>
<li>The majority are <code>NULL</code>, <code>en_US</code>, the blank string, and <code>en</code>—the rest are not enough to be significant</li>
<li>Theoretically this field could help if you wanted to search for Spanish-language fields in the API or something, but even for the English fields there are two different values (and those are from DSpace itself)!</li>
<li>I&rsquo;m going to normalize these to <code>NULL</code> at least on DSpace Test for now:</li>
</ul>
<pre tabindex="0"><code>dspace=# UPDATE metadatavalue SET text_lang = NULL WHERE resource_type_id=2 AND text_lang IS NOT NULL;
UPDATE 1045410
</code></pre><ul>
<li>I started proofing IITA&rsquo;s 2019-01 records that Sisay uploaded this week
<ul>
<li>There were 259 records in IITA&rsquo;s original spreadsheet, but there are 276 in Sisay&rsquo;s collection</li>
2019-02-15 16:30:02 +01:00
<li>Also, I found that there are at least twenty duplicates in these records that we will need to address</li>
</ul>
</li>
<li>ILRI ICT fixed the password for the CGSpace support email account and I tested it on Outlook 365 web and DSpace and it works</li>
<li>Re-create my local PostgreSQL container for the new PostgreSQL version and to use podman&rsquo;s volumes:</li>
</ul>
<pre tabindex="0"><code>$ podman pull postgres:9.6-alpine
$ podman volume create dspacedb_data
$ podman run --name dspacedb -v dspacedb_data:/var/lib/postgresql/data -e POSTGRES_PASSWORD=postgres -p 5432:5432 -d postgres:9.6-alpine
$ createuser -h localhost -U postgres --pwprompt dspacetest
$ createdb -h localhost -U postgres -O dspacetest --encoding=UNICODE dspacetest
$ psql -h localhost -U postgres dspacetest -c &#39;alter user dspacetest superuser;&#39;
$ pg_restore -h localhost -U postgres -d dspacetest -O --role=dspacetest -h localhost dspace_2019-02-11.backup
$ psql -h localhost -U postgres -f ~/src/git/DSpace/dspace/etc/postgres/update-sequences.sql dspacetest
$ psql -h localhost -U postgres dspacetest -c &#39;alter user dspacetest nosuperuser;&#39;
</code></pre><ul>
<li>And it&rsquo;s all running without root!</li>
<li>Then re-create my Artifactory container as well, taking into account ulimit open file requirements by Artifactory as well as the user limitations caused by rootless subuid mappings:</li>
</ul>
<pre tabindex="0"><code>$ podman volume create artifactory_data
artifactory_data
$ podman create --ulimit nofile=32000:32000 --name artifactory -v artifactory_data:/var/opt/jfrog/artifactory -p 8081:8081 docker.bintray.io/jfrog/artifactory-oss
$ buildah unshare
$ chown -R 1030:1030 ~/.local/share/containers/storage/volumes/artifactory_data
$ exit
$ podman start artifactory
</code></pre><ul>
<li>More on the <a href="https://podman.io/blogs/2018/10/03/podman-remove-content-homedir.html">subuid permissions issue with rootless containers here</a></li>
</ul>
<h2 id="2019-02-17">2019-02-17</h2>
<ul>
<li>I ran DSpace&rsquo;s cleanup task on CGSpace (linode18) and there were errors:</li>
</ul>
<pre tabindex="0"><code>$ dspace cleanup -v
Error: ERROR: update or delete on table &#34;bitstream&#34; violates foreign key constraint &#34;bundle_primary_bitstream_id_fkey&#34; on table &#34;bundle&#34;
Detail: Key (bitstream_id)=(162844) is still referenced from table &#34;bundle&#34;.
</code></pre><ul>
<li>The solution is, as always:</li>
</ul>
<pre tabindex="0"><code>$ psql dspace -c &#39;update bundle set primary_bitstream_id=NULL where primary_bitstream_id in (162844);&#39;
UPDATE 1
</code></pre><ul>
<li>I merged the Atmire Metadata Quality Module (MQM) changes to the <code>5_x-prod</code> branch and deployed it on CGSpace (<a href="https://github.com/ilri/DSpace/pull/407">#407</a>)</li>
<li>Then I ran all system updates on CGSpace server and rebooted it</li>
</ul>
<h2 id="2019-02-18">2019-02-18</h2>
<ul>
<li>Jesus fucking Christ, Linode sent an alert that CGSpace (linode18) was using 421% CPU for a few hours this afternoon (server time):</li>
<li>There seems to have been a lot of activity in XMLUI:</li>
</ul>
<pre tabindex="0"><code># zcat --force /var/log/nginx/{access,error,library-access}.log /var/log/nginx/{access,error,library-access}.log.1 | grep -E &#34;18/Feb/2019:1(2|3|4|5|6)&#34; | awk &#39;{print $1}&#39; | sort | uniq -c | sort -n | tail -n 10
1236 18.212.208.240
1276 54.164.83.99
1277 3.83.14.11
1282 3.80.196.188
1296 3.84.172.18
1299 100.24.48.177
1299 34.230.15.139
1327 52.54.252.47
1477 5.9.6.51
1861 94.71.244.172
# zcat --force /var/log/nginx/{oai,rest,statistics}.log /var/log/nginx/{oai,rest,statistics}.log.1 | grep -E &#34;18/Feb/2019:1(2|3|4|5|6)&#34; | awk &#39;{print $1}&#39; | sort | uniq -c | sort -n | tail -n 10
8 42.112.238.64
9 121.52.152.3
9 157.55.39.50
10 110.54.151.102
10 194.246.119.6
10 66.249.66.221
15 190.56.193.94
28 66.249.66.219
43 34.209.213.122
178 50.116.102.77
# zcat --force /var/log/nginx/{access,error,library-access}.log /var/log/nginx/{access,error,library-access}.log.1 | grep -E &#34;18/Feb/2019:1(2|3|4|5|6)&#34; | awk &#39;{print $1}&#39; | sort | uniq | wc -l
2727
# zcat --force /var/log/nginx/{oai,rest,statistics}.log /var/log/nginx/{oai,rest,statistics}.log.1 | grep -E &#34;18/Feb/2019:1(2|3|4|5|6)&#34; | awk &#39;{print $1}&#39; | sort | uniq | wc -l
186
</code></pre><ul>
<li>94.71.244.172 is in Greece and uses the user agent &ldquo;Indy Library&rdquo;</li>
<li>At least they are re-using their Tomcat session:</li>
</ul>
<pre tabindex="0"><code>$ grep -o -E &#39;session_id=[A-Z0-9]{32}:ip_addr=94.71.244.172&#39; dspace.log.2019-02-18 | sort | uniq | wc -l
</code></pre><ul>
<li>
<p>The following IPs were all hitting the server hard simultaneously and are located on Amazon and use the user agent &ldquo;Mozilla/5.0 (X11; Linux x86_64; rv:45.0) Gecko/20100101 Firefox/45.0&rdquo;:</p>
<ul>
<li>52.54.252.47</li>
<li>34.230.15.139</li>
<li>100.24.48.177</li>
<li>3.84.172.18</li>
<li>3.80.196.188</li>
<li>3.83.14.11</li>
<li>54.164.83.99</li>
<li>18.212.208.240</li>
</ul>
</li>
<li>
<p>Actually, even up to the top 30 IPs are almost all on Amazon and use the same user agent!</p>
</li>
<li>
<p>For reference most of these IPs hitting the XMLUI this afternoon are on Amazon:</p>
</li>
</ul>
<pre tabindex="0"><code># zcat --force /var/log/nginx/{access,error,library-access}.log /var/log/nginx/{access,error,library-access}.log.1 | grep -E &#34;18/Feb/2019:1(2|3|4|5|6)&#34; | awk &#39;{print $1}&#39; | sort | uniq -c | sort -n | tail -n 30
1173 52.91.249.23
1176 107.22.118.106
1178 3.88.173.152
1179 3.81.136.184
1183 34.201.220.164
1183 3.89.134.93
1184 54.162.66.53
1187 3.84.62.209
1188 3.87.4.140
1189 54.158.27.198
1190 54.209.39.13
1192 54.82.238.223
1208 3.82.232.144
1209 3.80.128.247
1214 54.167.64.164
1219 3.91.17.126
1220 34.201.108.226
1221 3.84.223.134
1222 18.206.155.14
1231 54.210.125.13
1236 18.212.208.240
1276 54.164.83.99
1277 3.83.14.11
1282 3.80.196.188
1296 3.84.172.18
1299 100.24.48.177
1299 34.230.15.139
1327 52.54.252.47
1477 5.9.6.51
1861 94.71.244.172
</code></pre><ul>
<li>In the case of 52.54.252.47 they are only making about 10 requests per minute during this time (albeit from dozens of concurrent IPs):</li>
</ul>
<pre tabindex="0"><code># zcat --force /var/log/nginx/*.log /var/log/nginx/*.log.1 | grep 52.54.252.47 | grep -o -E &#39;18/Feb/2019:1[0-9]:[0-9][0-9]&#39; | uniq -c | sort -n | tail -n 10
10 18/Feb/2019:17:20
10 18/Feb/2019:17:22
10 18/Feb/2019:17:31
11 18/Feb/2019:13:21
11 18/Feb/2019:15:18
11 18/Feb/2019:16:43
11 18/Feb/2019:16:57
11 18/Feb/2019:16:58
11 18/Feb/2019:18:34
12 18/Feb/2019:14:37
</code></pre><ul>
<li>As this user agent is not recognized as a bot by DSpace this will definitely fuck up the usage statistics</li>
<li>There were 92,000 requests from these IPs alone today!</li>
</ul>
<pre tabindex="0"><code># zcat --force /var/log/nginx/{access,error,library-access}.log /var/log/nginx/{access,error,library-access}.log.1 | grep -c &#39;Mozilla/5.0 (X11; Linux x86_64; rv:45.0) Gecko/20100101 Firefox/45.0&#39;
92756
</code></pre><ul>
<li>I will add this user agent to the <a href="https://github.com/ilri/rmg-ansible-public/blob/master/roles/dspace/templates/nginx/default.conf.j2">&ldquo;badbots&rdquo; rate limiting in our nginx configuration</a></li>
<li>I realized that I had effectively only been applying the &ldquo;badbots&rdquo; rate limiting to requests at the root, so I added it to the other blocks that match Discovery, Browse, etc as well</li>
<li>IWMI sent a few new ORCID identifiers for us to add to our controlled vocabulary</li>
<li>I will merge them with our existing list and then resolve their names using my <code>resolve-orcids.py</code> script:</li>
</ul>
<pre tabindex="0"><code>$ cat ~/src/git/DSpace/dspace/config/controlled-vocabularies/cg-creator-id.xml 2019-02-18-IWMI-ORCID-IDs.txt | grep -oE &#39;[A-Z0-9]{4}-[A-Z0-9]{4}-[A-Z0-9]{4}-[A-Z0-9]{4}&#39; | sort | uniq &gt; /tmp/2019-02-18-combined-orcids.txt
$ ./resolve-orcids.py -i /tmp/2019-02-18-combined-orcids.txt -o /tmp/2019-02-18-combined-names.txt -d
# sort names, copy to cg-creator-id.xml, add XML formatting, and then format with tidy (preserving accents)
$ tidy -xml -utf8 -iq -m -w 0 dspace/config/controlled-vocabularies/cg-creator-id.xml
</code></pre><ul>
<li>I merged the changes to the <code>5_x-prod</code> branch and they will go live the next time we re-deploy CGSpace (<a href="https://github.com/ilri/DSpace/pull/412">#412</a>)</li>
</ul>
<h2 id="2019-02-19">2019-02-19</h2>
<ul>
<li>Linode sent another alert about CPU usage on CGSpace (linode18) averaging 417% this morning</li>
<li>Unfortunately, I don&rsquo;t see any strange activity in the web server API or XMLUI logs at that time in particular</li>
<li>So far today the top ten IPs in the XMLUI logs are:</li>
</ul>
<pre tabindex="0"><code># zcat --force /var/log/nginx/{access,error,library-access}.log /var/log/nginx/{access,error,library-access}.log.1 | grep -E &#34;19/Feb/2019:&#34; | awk &#39;{print $1}&#39; | sort | uniq -c | sort -n | tail -n 10
11541 18.212.208.240
11560 3.81.136.184
11562 3.88.237.84
11569 34.230.15.139
11572 3.80.128.247
11573 3.91.17.126
11586 54.82.89.217
11610 54.209.39.13
11657 54.175.90.13
14686 143.233.242.130
</code></pre><ul>
<li>143.233.242.130 is in Greece and using the user agent &ldquo;Indy Library&rdquo;, like the top IP yesterday (94.71.244.172)</li>
<li>That user agent is in our Tomcat list of crawlers so at least its resource usage is controlled by forcing it to use a single Tomcat session, but I don&rsquo;t know if DSpace recognizes if this is a bot or not, so the logs are probably skewed because of this</li>
<li>The user is requesting only things like <code>/handle/10568/56199?show=full</code> so it&rsquo;s nothing malicious, only annoying</li>
<li>Otherwise there are still shit loads of IPs from Amazon hammering the server, though I see HTTP 503 errors now after yesterday&rsquo;s nginx rate limiting updates
<ul>
<li>I should really try to script something around <a href="https://ipapi.co/api/">ipapi.co</a> to get these quickly and easily</li>
</ul>
</li>
<li>The top requests in the API logs today are:</li>
</ul>
<pre tabindex="0"><code># zcat --force /var/log/nginx/{oai,rest,statistics}.log /var/log/nginx/{oai,rest,statistics}.log.1 | grep -E &#34;19/Feb/2019:&#34; | awk &#39;{print $1}&#39; | sort | uniq -c | sort -n | tail -n 10
42 66.249.66.221
44 156.156.81.215
55 3.85.54.129
76 66.249.66.219
87 34.209.213.122
1550 34.218.226.147
2127 50.116.102.77
4684 205.186.128.185
11429 45.5.186.2
12360 2a01:7e00::f03c:91ff:fe0a:d645
</code></pre><ul>
<li><code>2a01:7e00::f03c:91ff:fe0a:d645</code> is on Linode, and I can see from the XMLUI access logs that it is Drupal, so I assume it is part of the new ILRI website harvester&hellip;</li>
<li>Jesus, Linode just sent another alert as we speak that the load on CGSpace (linode18) has been at 450% the last two hours! I&rsquo;m so fucking sick of this</li>
<li>Our usage stats have exploded the last few months:</li>
</ul>
<p><img src="/cgspace-notes/2019/02/usage-stats.png" alt="Usage stats"></p>
<ul>
<li>I need to follow up with the DSpace developers and Atmire to see how they classify which requests are bots, so we can try to estimate the impact caused by these users and perhaps update the list to make the stats more accurate</li>
<li>I found one IP address in Nigeria that has an Android user agent and has requested a bitstream from <a href="https://hdl.handle.net/10568/96140">10568/96140</a> almost 200 times:</li>
</ul>
<pre tabindex="0"><code># grep 41.190.30.105 /var/log/nginx/access.log | grep -c &#39;acgg_progress_report.pdf&#39;
185
</code></pre><ul>
<li>Wow, and another IP in Nigeria made a bunch more yesterday from the same user agent:</li>
</ul>
<pre tabindex="0"><code># grep 41.190.3.229 /var/log/nginx/access.log.1 | grep -c &#39;acgg_progress_report.pdf&#39;
346
</code></pre><ul>
<li>In the last two days alone there were 1,000 requests for this PDF, mostly from Nigeria!</li>
</ul>
<pre tabindex="0"><code># zcat --force /var/log/nginx/{access,error,library-access}.log /var/log/nginx/{access,error,library-access}.log.1 | grep acgg_progress_report.pdf | grep -v &#39;upstream response is buffered&#39; | awk &#39;{print $1}&#39; | sort | uniq -c | sort -n
1 139.162.146.60
1 157.55.39.159
1 196.188.127.94
1 196.190.127.16
1 197.183.33.222
1 66.249.66.221
2 104.237.146.139
2 175.158.209.61
2 196.190.63.120
2 196.191.127.118
2 213.55.99.121
2 82.145.223.103
3 197.250.96.248
4 196.191.127.125
4 197.156.77.24
5 105.112.75.237
185 41.190.30.105
346 41.190.3.229
503 41.190.31.73
</code></pre><ul>
<li>That is so weird, they are all using this Android user agent:</li>
</ul>
<pre tabindex="0"><code>Mozilla/5.0 (Linux; Android 7.0; TECNO Camon CX Build/NRD90M) AppleWebKit/537.36 (KHTML, like Gecko) Version/4.0 Chrome/33.0.0.0 Mobile Safari/537.36
</code></pre><ul>
<li>I wrote a quick and dirty Python script called <code>resolve-addresses.py</code> to resolve IP addresses to their owning organization&rsquo;s name, ASN, and country using the <a href="https://ipapi.co">IPAPI.co API</a></li>
</ul>
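<ul>
<li>The core of that script is a simple JSON lookup, roughly like this sketch (illustrative only, not the exact script; it assumes the ipapi.co <code>/json/</code> endpoint and the Python <code>requests</code> module):</li>
</ul>
<pre tabindex="0"><code>#!/usr/bin/env python
# Sketch: resolve IP addresses to organization, ASN, and country via ipapi.co
# (the real resolve-addresses.py handles input/output files and errors)
import sys

import requests

def resolve(ip):
    # ipapi.co returns JSON with fields like &#34;org&#34;, &#34;asn&#34;, and &#34;country_name&#34;
    r = requests.get(&#39;https://ipapi.co/{0}/json/&#39;.format(ip))
    r.raise_for_status()
    data = r.json()
    return data.get(&#39;org&#39;), data.get(&#39;asn&#39;), data.get(&#39;country_name&#39;)

if __name__ == &#39;__main__&#39;:
    for ip in sys.argv[1:]:
        org, asn, country = resolve(ip)
        print(&#39;{0}: {1} ({2}), {3}&#39;.format(ip, org, asn, country))
</code></pre><ul>
<li>Then I can feed it the top IPs from the nginx logs instead of looking each one up by hand on ipapi.co</li>
</ul>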
<h2 id="2019-02-20">2019-02-20</h2>
<ul>
<li>Ben Hack was asking about getting authors publications programmatically from CGSpace for the new ILRI website</li>
<li>I told him that they should probably try to use the REST API&rsquo;s <code>find-by-metadata-field</code> endpoint</li>
<li>The annoying thing is that you have to match the text language attribute of the field exactly, but it does work:</li>
</ul>
<pre tabindex="0"><code>$ curl -s -H &#34;accept: application/json&#34; -H &#34;Content-Type: application/json&#34; -X POST &#34;https://cgspace.cgiar.org/rest/items/find-by-metadata-field&#34; -d &#39;{&#34;key&#34;: &#34;cg.creator.id&#34;,&#34;value&#34;: &#34;Alan S. Orth: 0000-0002-1735-7458&#34;, &#34;language&#34;: &#34;&#34;}&#39;
$ curl -s -H &#34;accept: application/json&#34; -H &#34;Content-Type: application/json&#34; -X POST &#34;https://cgspace.cgiar.org/rest/items/find-by-metadata-field&#34; -d &#39;{&#34;key&#34;: &#34;cg.creator.id&#34;,&#34;value&#34;: &#34;Alan S. Orth: 0000-0002-1735-7458&#34;, &#34;language&#34;: null}&#39;
$ curl -s -H &#34;accept: application/json&#34; -H &#34;Content-Type: application/json&#34; -X POST &#34;https://cgspace.cgiar.org/rest/items/find-by-metadata-field&#34; -d &#39;{&#34;key&#34;: &#34;cg.creator.id&#34;,&#34;value&#34;: &#34;Alan S. Orth: 0000-0002-1735-7458&#34;, &#34;language&#34;: &#34;en_US&#34;}&#39;
</code></pre><ul>
<li>This returns six items for me, which is the <a href="https://cgspace.cgiar.org/discover?filtertype_1=orcid&amp;filter_relational_operator_1=contains&amp;filter_1=Alan+S.+Orth%3A+0000-0002-1735-7458&amp;submit_apply_filter=&amp;query=">same I see in a Discovery search</a></li>
<li>Hector Tobon from CIAT asked if it was possible to get item statistics from CGSpace so I told him to use my <a href="https://github.com/ilri/dspace-statistics-api">dspace-statistics-api</a></li>
<li>I was playing with <a href="http://yasgui.org/">YasGUI</a> to query AGROVOC&rsquo;s SPARQL endpoint, but they must have a cached version or something because I get an HTTP 404 if I try to go to the endpoint manually</li>
<li>I think I want to stick to the regular <a href="http://aims.fao.org/agrovoc/webservices">web services</a> to validate AGROVOC terms</li>
</ul>
<p><img src="/cgspace-notes/2019/02/yasgui-agrovoc.png" alt="YasGUI querying AGROVOC"></p>
<ul>
<li>There seems to be a REST API for AGROVOC here: <a href="http://agrovoc.uniroma2.it/agrovoc/rest/v1/search?query=FISH&amp;lang=en">http://agrovoc.uniroma2.it/agrovoc/rest/v1/search?query=FISH&amp;lang=en</a></li>
<li>See this <a href="https://jira.duraspace.org/browse/VIVO-1655">issue on the VIVO tracker</a> for more information about this endpoint</li>
<li>The old-school AGROVOC SOAP WSDL works with the <a href="https://python-zeep.readthedocs.io/en/master/">Zeep Python library</a>, but in my tests the results are way too broad despite trying to use an &ldquo;exact match&rdquo; search</li>
</ul>
<h2 id="2019-02-21">2019-02-21</h2>
<ul>
<li>I wrote a script <a href="https://github.com/ilri/DSpace/blob/5_x-prod/agrovoc-lookup.py">agrovoc-lookup.py</a> to resolve subject terms against the public AGROVOC REST API</li>
<li>It allows specifying the language the term should be queried in as well as output files to save the matched and unmatched terms to</li>
<li>I ran our top 1500 subjects through English, Spanish, and French and saved the matched and unmatched terms to separate files:</li>
</ul>
<pre tabindex="0"><code>$ ./agrovoc-lookup.py -l en -i /tmp/top-1500-subjects.txt -om /tmp/matched-subjects-en.txt -or /tmp/rejected-subjects-en.txt
$ ./agrovoc-lookup.py -l es -i /tmp/top-1500-subjects.txt -om /tmp/matched-subjects-es.txt -or /tmp/rejected-subjects-es.txt
$ ./agrovoc-lookup.py -l fr -i /tmp/top-1500-subjects.txt -om /tmp/matched-subjects-fr.txt -or /tmp/rejected-subjects-fr.txt
</code></pre><ul>
<li>Then I generated a list of all the unique matched terms:</li>
</ul>
<pre tabindex="0"><code>$ cat /tmp/matched-subjects-* | sort | uniq &gt; /tmp/2019-02-21-matched-subjects.txt
</code></pre><ul>
<li>And then a list of all the unique <em>unmatched</em> terms, using a utility I&rsquo;d never heard of before called <code>comm</code>, or alternatively with <code>diff</code> (<code>comm -13</code> prints only the lines unique to the second file, in this case the subjects with no AGROVOC match):</li>
</ul>
<pre tabindex="0"><code>$ sort /tmp/top-1500-subjects.txt &gt; /tmp/subjects-sorted.txt
$ comm -13 /tmp/2019-02-21-matched-subjects.txt /tmp/subjects-sorted.txt &gt; /tmp/2019-02-21-unmatched-subjects.txt
$ diff --new-line-format=&#34;&#34; --unchanged-line-format=&#34;&#34; /tmp/subjects-sorted.txt /tmp/2019-02-21-matched-subjects.txt &gt; /tmp/2019-02-21-unmatched-subjects.txt
</code></pre><ul>
<li>Generate a list of countries and regions from CGSpace for Sisay to look through:</li>
</ul>
<pre tabindex="0"><code>dspace=# \COPY (SELECT DISTINCT text_value, count(*) FROM metadatavalue WHERE metadata_field_id = 228 AND resource_type_id = 2 GROUP BY text_value ORDER BY count DESC) to /tmp/2019-02-21-countries.csv WITH CSV HEADER;
COPY 202
dspace=# \COPY (SELECT DISTINCT text_value, count(*) FROM metadatavalue WHERE metadata_field_id = 227 AND resource_type_id = 2 GROUP BY text_value ORDER BY count DESC) to /tmp/2019-02-21-regions.csv WITH CSV HEADER;
COPY 33
</code></pre><ul>
<li>I did a bit more work on the IITA research theme (adding it to Discovery search filters) and it&rsquo;s almost ready so I created a pull request (<a href="https://github.com/ilri/DSpace/pull/413">#413</a>)</li>
<li>I still need to test the batch tagging of IITA items with themes based on their IITA subjects:
<ul>
<li>NATURAL RESOURCE MANAGEMENT research theme to items with NATURAL RESOURCE MANAGEMENT subject</li>
<li>BIOTECH &amp; PLANT BREEDING research theme to items with PLANT BREEDING subject</li>
<li>SOCIAL SCIENCE &amp; AGRIBUSINESS research theme to items with AGRIBUSINESS subject</li>
<li>PLANT PRODUCTION &amp; HEALTH research theme to items with PLANT PRODUCTION subject</li>
<li>PLANT PRODUCTION &amp; HEALTH research theme to items with PLANT HEALTH subject</li>
<li>NUTRITION &amp; HUMAN HEALTH research theme to items with NUTRITION subject</li>
</ul>
</li>
</ul>
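<ul>
<li>A quick way to preview the scope of each of those mappings would be a query like this sketch against the metadata table (the <code>metadata_field_id</code> of <code>XXX</code> is a placeholder for the IITA subject field, not the real value):</li>
</ul>
<pre tabindex="0"><code>dspace=# SELECT text_value, count(*) FROM metadatavalue WHERE resource_type_id = 2 AND metadata_field_id = XXX AND text_value IN (&#39;NATURAL RESOURCE MANAGEMENT&#39;, &#39;PLANT BREEDING&#39;, &#39;AGRIBUSINESS&#39;, &#39;PLANT PRODUCTION&#39;, &#39;PLANT HEALTH&#39;, &#39;NUTRITION&#39;) GROUP BY text_value ORDER BY count DESC;
</code></pre><ul>
<li>That counts values across all of CGSpace though, so it is only an upper bound for the IITA items themselves</li>
</ul>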
<h2 id="2019-02-22">2019-02-22</h2>
<ul>
<li>
<p>Help Udana from WLE with some issues related to CGSpace items on their <a href="https://www.wle.cgiar.org/publications">Publications website</a></p>
<ul>
<li>He wanted some IWMI items to show up in their publications website</li>
<li>The items were mapped into WLE collections, but still weren&rsquo;t showing up on the publications website</li>
<li>I told him that he needs to add the <code>cg.identifier.wletheme</code> to the items so that the website indexer finds them</li>
<li>A few days ago he added the metadata to <a href="https://cgspace.cgiar.org/handle/10568/93011">10568/93011</a> and now I see that the item is present on the <a href="https://www.wle.cgiar.org/resource-recovery-waste-business-models-energy-nutrient-and-water-reuse-low-and-middle-income">WLE publications website</a></li>
</ul>
</li>
<li>
<p>Start looking at IITA&rsquo;s latest round of batch uploads called <a href="https://dspacetest.cgiar.org/handle/10568/108684">&ldquo;IITA_Feb_14&rdquo; on DSpace Test</a></p>
<ul>
<li>One misspelled authorship type</li>
<li>A few dozen incorrect or inconsistent affiliations (I dumped a list of the top 1500 affiliations and reconciled against it, but it was still a lot of work)</li>
<li>One issue with smart quotes in countries</li>
<li>A few IITA subjects with syntax errors</li>
<li>Some whitespace and consistency issues in sponsorships</li>
<li>Eight items with invalid ISBN: 0-471-98560-3</li>
<li>Two incorrectly formatted ISSNs</li>
<li>Lots of incorrect values in subjects, but that&rsquo;s a difficult problem to fix in an automated way</li>
</ul>
</li>
<li>
<p>I figured out how to query AGROVOC from OpenRefine using Jython by creating a custom text facet:</p>
</li>
</ul>
<pre tabindex="0"><code>import json
2019-02-25 01:58:00 +01:00
import re
import urllib
import urllib2
2022-03-04 13:30:06 +01:00
pattern = re.compile(&#39;^S[A-Z ]+$&#39;)
2019-02-25 01:58:00 +01:00
if pattern.match(value):
2022-03-04 13:30:06 +01:00
url = &#39;http://agrovoc.uniroma2.it/agrovoc/rest/v1/search?query=&#39; + urllib.quote_plus(value) + &#39;&amp;lang=en&#39;
2019-11-28 16:30:45 +01:00
get = urllib2.urlopen(url)
data = json.load(get)
2022-03-04 13:30:06 +01:00
if len(data[&#39;results&#39;]) == 1:
return &#34;matched&#34;
2019-02-25 01:58:00 +01:00
2022-03-04 13:30:06 +01:00
return &#34;unmatched&#34;
2019-11-28 16:30:45 +01:00
</code></pre><ul>
<li>You have to make sure to URL encode the value with <code>quote_plus()</code>, and it totally works, but it seems to refresh the facets (and therefore re-query everything) when you select a facet, which makes it basically unusable</li>
<li>There is a <a href="https://programminghistorian.org/en/lessons/fetch-and-parse-data-with-openrefine#example-2-url-queries-and-parsing-json">good resource discussing OpenRefine, Jython, and web scraping</a></li>
</ul>
<h2 id="2019-02-24">2019-02-24</h2>
<ul>
<li>I decided to try to validate the AGROVOC subjects in IITA&rsquo;s recent batch upload by dumping all their terms, checking them in en/es/fr with <code>agrovoc-lookup.py</code>, then reconciling against the final list using reconcile-csv with OpenRefine</li>
<li>I&rsquo;m not sure how to deal with terms like &ldquo;CORN&rdquo; that are alternative labels (<code>altLabel</code>) in AGROVOC where the preferred label (<code>prefLabel</code>) would be &ldquo;MAIZE&rdquo;</li>
<li>For example, <a href="http://agrovoc.uniroma2.it/agrovoc/rest/v1/search?query=CORN*&amp;lang=en">a query</a> for <code>CORN*</code> returns:</li>
</ul>
<pre tabindex="0"><code> &#34;results&#34;: [
2019-11-28 16:30:45 +01:00
{
2022-03-04 13:30:06 +01:00
&#34;altLabel&#34;: &#34;corn (maize)&#34;,
&#34;lang&#34;: &#34;en&#34;,
&#34;prefLabel&#34;: &#34;maize&#34;,
&#34;type&#34;: [
&#34;skos:Concept&#34;
2019-11-28 16:30:45 +01:00
],
2022-03-04 13:30:06 +01:00
&#34;uri&#34;: &#34;http://aims.fao.org/aos/agrovoc/c_12332&#34;,
&#34;vocab&#34;: &#34;agrovoc&#34;
2019-11-28 16:30:45 +01:00
},
</code></pre><ul>
<li>There are dozens of other entries like &ldquo;corn (soft wheat)&rdquo;, &ldquo;corn (zea)&rdquo;, &ldquo;corn bran&rdquo;, &ldquo;Cornales&rdquo;, etc. that could potentially match, and determining whether they are related programmatically is difficult (a rough exact-matching sketch is below, after this list)</li>
<li>Shit, and then there are terms like &ldquo;GENETIC DIVERSITY&rdquo; that should <a href="http://agrovoc.uniroma2.it/agrovoc/agrovoc/en/page/c_33952">technically be</a> &ldquo;genetic diversity (as resource)&rdquo;</li>
<li>I applied all changes to the IITA Feb 14 batch data except the affiliations and sponsorships because I think I made some mistakes with the copying of reconciled values so I will try to look at those again separately</li>
<li>I went back and re-did the affiliations and sponsorships and then applied them on the IITA Feb 14 collection on DSpace Test</li>
<li>I did a duplicate check of the IITA Feb 14 records on DSpace Test and there were about fifteen or twenty items reported
<ul>
<li>A few of them are actually in previous IITA batch updates, which means they have not been uploaded to CGSpace yet, so I worry that there would be many more</li>
<li>I want to re-synchronize CGSpace to DSpace Test to make sure that the duplicate checking is accurate, but I&rsquo;m not sure I can because the Earlham guys are still testing COPO actively on DSpace Test</li>
</ul>
</li>
</ul>
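<ul>
<li>A rough sketch of the kind of exact label matching I have in mind (plain Python with the <code>requests</code> module, not part of <code>agrovoc-lookup.py</code>): only accept a result whose <code>prefLabel</code> or <code>altLabel</code> is exactly the query term, which would at least exclude things like &ldquo;corn bran&rdquo; and &ldquo;Cornales&rdquo;:</li>
</ul>
<pre tabindex="0"><code>import requests

def agrovoc_match(term, lang=&#39;en&#39;):
    # search AGROVOC and keep only results whose prefLabel or altLabel
    # is exactly the term (case-insensitive)
    url = &#39;http://agrovoc.uniroma2.it/agrovoc/rest/v1/search&#39;
    data = requests.get(url, params={&#39;query&#39;: term, &#39;lang&#39;: lang}).json()
    for result in data[&#39;results&#39;]:
        for label in (result.get(&#39;prefLabel&#39;), result.get(&#39;altLabel&#39;)):
            if label and label.lower() == term.lower():
                return result[&#39;uri&#39;]
    return None
</code></pre><ul>
<li>Even then, &ldquo;CORN&rdquo; (whose altLabel comes back as &ldquo;corn (maize)&rdquo;) and &ldquo;GENETIC DIVERSITY&rdquo; versus &ldquo;genetic diversity (as resource)&rdquo; would still not match exactly, so some manual review seems unavoidable</li>
</ul>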
<h2 id="2019-02-25">2019-02-25</h2>
<ul>
<li>There seems to be something going on with Solr on CGSpace (linode18) because statistics on communities and collections are blank for January and February this year</li>
<li>I see some errors started recently in Solr (yesterday):</li>
</ul>
<pre tabindex="0"><code>$ grep -c ERROR /home/cgspace.cgiar.org/log/solr.log.2019-02-*
/home/cgspace.cgiar.org/log/solr.log.2019-02-11.xz:0
/home/cgspace.cgiar.org/log/solr.log.2019-02-12.xz:0
/home/cgspace.cgiar.org/log/solr.log.2019-02-13.xz:0
/home/cgspace.cgiar.org/log/solr.log.2019-02-14.xz:0
/home/cgspace.cgiar.org/log/solr.log.2019-02-15.xz:0
/home/cgspace.cgiar.org/log/solr.log.2019-02-16.xz:0
/home/cgspace.cgiar.org/log/solr.log.2019-02-17.xz:0
/home/cgspace.cgiar.org/log/solr.log.2019-02-18.xz:0
/home/cgspace.cgiar.org/log/solr.log.2019-02-19.xz:0
/home/cgspace.cgiar.org/log/solr.log.2019-02-20.xz:0
/home/cgspace.cgiar.org/log/solr.log.2019-02-21.xz:0
/home/cgspace.cgiar.org/log/solr.log.2019-02-22.xz:0
/home/cgspace.cgiar.org/log/solr.log.2019-02-23.xz:0
/home/cgspace.cgiar.org/log/solr.log.2019-02-24:34
</code></pre><ul>
<li>But I don&rsquo;t see anything interesting in yesterday&rsquo;s Solr log&hellip;</li>
<li>I see this in the Tomcat 7 logs yesterday:</li>
</ul>
<pre tabindex="0"><code>Feb 25 21:09:29 linode18 tomcat7[1015]: Error while updating
Feb 25 21:09:29 linode18 tomcat7[1015]: java.lang.UnsupportedOperationException: Multiple update components target the same field:solr_update_time_stamp
Feb 25 21:09:29 linode18 tomcat7[1015]: at org.dspace.statistics.SolrLogger$9.visit(SourceFile:1241)
Feb 25 21:09:29 linode18 tomcat7[1015]: at org.dspace.statistics.SolrLogger.visitEachStatisticShard(SourceFile:268)
Feb 25 21:09:29 linode18 tomcat7[1015]: at org.dspace.statistics.SolrLogger.update(SourceFile:1225)
Feb 25 21:09:29 linode18 tomcat7[1015]: at org.dspace.statistics.SolrLogger.update(SourceFile:1220)
Feb 25 21:09:29 linode18 tomcat7[1015]: at org.dspace.statistics.StatisticsLoggingConsumer.consume(SourceFile:103)
...
</code></pre><ul>
<li>In the Solr admin GUI I see we have the following error: &ldquo;statistics-2011: org.apache.solr.common.SolrException:org.apache.solr.common.SolrException: Error opening new searcher&rdquo;</li>
<li>I restarted Tomcat and upon startup I see lots of errors in the systemd journal, like:</li>
</ul>
<pre tabindex="0"><code>Feb 25 21:37:49 linode18 tomcat7[28363]: SEVERE: IOException while loading persisted sessions: java.io.StreamCorruptedException: invalid type code: 00
Feb 25 21:37:49 linode18 tomcat7[28363]: java.io.StreamCorruptedException: invalid type code: 00
Feb 25 21:37:49 linode18 tomcat7[28363]: at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1601)
Feb 25 21:37:49 linode18 tomcat7[28363]: at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2287)
Feb 25 21:37:49 linode18 tomcat7[28363]: at java.io.ObjectInputStream.defaultReadObject(ObjectInputStream.java:561)
Feb 25 21:37:49 linode18 tomcat7[28363]: at java.lang.Throwable.readObject(Throwable.java:914)
Feb 25 21:37:49 linode18 tomcat7[28363]: at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
Feb 25 21:37:49 linode18 tomcat7[28363]: at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
</code></pre><ul>
<li>I don&rsquo;t think that&rsquo;s related&hellip;</li>
<li>Also, now the Solr admin UI says &ldquo;statistics-2015: org.apache.solr.common.SolrException:org.apache.solr.common.SolrException: Error opening new searcher&rdquo;</li>
<li>In the Solr log I see:</li>
</ul>
2021-09-13 15:21:16 +02:00
<pre tabindex="0"><code>2019-02-25 21:38:14,246 ERROR org.apache.solr.core.CoreContainer @ Error creating core [statistics-2015]: Error opening new searcher
org.apache.solr.common.SolrException: Error opening new searcher
at org.apache.solr.core.SolrCore.&lt;init&gt;(SolrCore.java:873)
at org.apache.solr.core.SolrCore.&lt;init&gt;(SolrCore.java:646)
...
Caused by: org.apache.solr.common.SolrException: Error opening new searcher
at org.apache.solr.core.SolrCore.openNewSearcher(SolrCore.java:1565)
at org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:1677)
at org.apache.solr.core.SolrCore.&lt;init&gt;(SolrCore.java:845)
... 31 more
Caused by: org.apache.lucene.store.LockObtainFailedException: Lock obtain timed out: NativeFSLock@/home/cgspace.cgiar.org/solr/statistics-2015/data/index/write.lock
at org.apache.lucene.store.Lock.obtain(Lock.java:89)
at org.apache.lucene.index.IndexWriter.&lt;init&gt;(IndexWriter.java:753)
at org.apache.solr.update.SolrIndexWriter.&lt;init&gt;(SolrIndexWriter.java:77)
at org.apache.solr.update.SolrIndexWriter.create(SolrIndexWriter.java:64)
at org.apache.solr.update.DefaultSolrCoreState.createMainIndexWriter(DefaultSolrCoreState.java:279)
at org.apache.solr.update.DefaultSolrCoreState.getIndexWriter(DefaultSolrCoreState.java:111)
at org.apache.solr.core.SolrCore.openNewSearcher(SolrCore.java:1528)
... 33 more
2019-02-25 21:38:14,250 ERROR org.apache.solr.core.SolrCore @ org.apache.solr.common.SolrException: Error CREATEing SolrCore &#39;statistics-2015&#39;: Unable to create core [statistics-2015] Caused by: Lock obtain timed out: NativeFSLock@/home/cgspace.cgiar.org/solr/statistics-2015/data/index/write.lock
</code></pre><ul>
<li>I tried to shutdown Tomcat and remove the locks:</li>
</ul>
<pre tabindex="0"><code># systemctl stop tomcat7
# find /home/cgspace.cgiar.org/solr -iname &#34;*.lock&#34; -delete
# systemctl start tomcat7
</code></pre><ul>
<li>&hellip; but the problem still occurs</li>
<li>I can see that there are still hits being recorded for items (in the Solr admin UI as well as my statistics API), so the main stats core is working at least!</li>
<li>On a hunch I tried adding <code>ulimit -v unlimited</code> to the Tomcat <code>catalina.sh</code> and now Solr starts up with no core errors and I actually have statistics for January and February on <a href="https://cgspace.cgiar.org/handle/10568/16814">some communities</a>, but not <a href="https://cgspace.cgiar.org/handle/10568/1">others</a></li>
<li>I wonder if the address space limits that I added via <code>LimitAS=infinity</code> in the systemd service are somehow not working?</li>
<li>I did some tests with calling a shell script from systemd on DSpace Test (linode19) and the <code>LimitAS</code> setting does work, and the <code>infinity</code> setting in systemd does get translated to &ldquo;unlimited&rdquo; on the service</li>
<li>I thought it might be open file limit, but it seems we&rsquo;re nowhere near the current limit of 16384:</li>
</ul>
<pre tabindex="0"><code># lsof -u dspace | wc -l
3016
</code></pre><ul>
<li>For what it&rsquo;s worth I see the same errors about <code>solr_update_time_stamp</code> on DSpace Test (linode19)</li>
<li>Update DSpace Test to <a href="https://tomcat.apache.org/tomcat-7.0-doc/changelog.html#Tomcat_7.0.93_(violetagg)">Tomcat 7.0.93</a></li>
<li>Something seems to have happened (some Atmire scheduled task, perhaps the CUA one at 7AM?) on CGSpace because I checked a few communities and collections on CGSpace and there are now statistics for January and February</li>
</ul>
<p><img src="/cgspace-notes/2019/02/statlets-working.png" alt="CGSpace statlets working again"></p>
<ul>
<li>I still have not figured out what the <em>real</em> cause for the Solr cores to not load was, though</li>
</ul>
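<ul>
<li>For next time, a quick way to list which Solr cores actually loaded after a restart is the CoreAdmin STATUS API (the port here is a guess; adjust it to wherever Solr is listening):</li>
</ul>
<pre tabindex="0"><code>$ curl -s &#39;http://localhost:8081/solr/admin/cores?action=STATUS&amp;wt=json&#39; | python -m json.tool | grep &#39;&#34;name&#34;&#39;
</code></pre>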
<h2 id="2019-02-26">2019-02-26</h2>
<ul>
<li>I sent a mail to the dspace-tech mailing list about the &ldquo;solr_update_time_stamp&rdquo; error</li>
<li>A CCAFS user sent a message saying they got this error when submitting to CGSpace:</li>
</ul>
<pre tabindex="0"><code>Authorization denied for action WORKFLOW_STEP_1 on COLLECTION:1021 by user 3049
</code></pre><ul>
<li>According to the <a href="https://cgspace.cgiar.org/rest/collections/1021">REST API</a> collection 1021 appears to be <a href="https://cgspace.cgiar.org/handle/10568/66581">CCAFS Tools, Maps, Datasets and Models</a></li>
<li>I looked at the <code>WORKFLOW_STEP_1</code> (Accept/Reject) and the group is of course empty</li>
<li>As we&rsquo;ve seen several times recently, we are not using this step so it should simply be deleted</li>
</ul>
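<ul>
<li>For reference, in DSpace 5 the workflow step groups are referenced directly from the <code>collection</code> table, so a query like this sketch (illustrative, not one I actually ran) should show which group is assigned before removing the role via the collection&rsquo;s &ldquo;Assign Roles&rdquo; page:</li>
</ul>
<pre tabindex="0"><code>dspace=# SELECT collection_id, workflow_step_1, workflow_step_2, workflow_step_3 FROM collection WHERE collection_id = 1021;
</code></pre>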
<h2 id="2019-02-27">2019-02-27</h2>
<ul>
<li>Discuss batch uploads with Sisay</li>
<li>He&rsquo;s trying to upload some CTA records, but it&rsquo;s not possible to do collection mapping when using the web UI
<ul>
<li>I sent a mail to the dspace-tech mailing list to ask about the inability to perform mappings when uploading via the XMLUI batch upload</li>
</ul>
</li>
<li>He asked me to upload the files for him via the command line, but the file he referenced (<code>Thumbnails_feb_2019.zip</code>) doesn&rsquo;t exist</li>
<li>I noticed that the command line batch import functionality is a bit weird when using zip files because you have to specify the directory where the zip file is located as well as the zip file&rsquo;s name:</li>
</ul>
<pre tabindex="0"><code>$ ~/dspace/bin/dspace import -a -e aorth@stfu.com -m mapfile -s /home/aorth/Downloads/2019-02-27-test/ -z SimpleArchiveFormat.zip
</code></pre><ul>
<li>Why don&rsquo;t they just derive the directory from the path to the zip file?</li>
<li>Working on Udana&rsquo;s Restoring Degraded Landscapes (RDL) WLE records that we originally started in 2018-11, fixing many of the same problems that I fixed back then
<ul>
<li>I also added a few regions because they are obvious for the countries</li>
<li>Also I added some rights fields that I noticed were easily available from the publications pages</li>
<li>I imported the records into my local environment with a fresh snapshot of the CGSpace database and ran the Atmire duplicate checker against them and it didn&rsquo;t find any</li>
<li>I uploaded fifty-two records to the <a href="https://cgspace.cgiar.org/handle/10568/81592">Restoring Degraded Landscapes collection</a> on CGSpace</li>
</ul>
</li>
</ul>
<h2 id="2019-02-28">2019-02-28</h2>
<ul>
<li>I helped Sisay upload the nineteen CTA records from last week via the command line because they required mappings (which is not possible to do via the batch upload web interface)</li>
</ul>
<pre tabindex="0"><code>$ dspace import -a -e swebshet@stfu.org -s /home/swebshet/Thumbnails_feb_2019 -m 2019-02-28-CTA-Thumbnails.map
</code></pre><ul>
<li>Mails from CGSpace stopped working; it looks like ICT changed the password again or we got locked out <em>sigh</em></li>
<li>Now I&rsquo;m getting this message when trying to use DSpace&rsquo;s <code>test-email</code> script:</li>
</ul>
<pre tabindex="0"><code>$ dspace test-email
About to send test email:
- To: stfu@google.com
- Subject: DSpace test email
- Server: smtp.office365.com
Error sending email:
- Error: javax.mail.AuthenticationFailedException
Please see the DSpace documentation for assistance.
</code></pre><ul>
<li>I&rsquo;ve tried to log in with the last two passwords that ICT reset it to earlier this month, but they are not working</li>
<li>I sent a mail to ILRI ICT to check if we&rsquo;re locked out or reset the password again</li>
</ul>
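<ul>
<li>A quick way to check whether it&rsquo;s the credentials themselves (independent of DSpace) would be to test the SMTP login directly, for example with <code>swaks</code> if it&rsquo;s installed (a hypothetical invocation; the addresses here are placeholders):</li>
</ul>
<pre tabindex="0"><code>$ swaks --server smtp.office365.com:587 --tls --auth LOGIN --auth-user cgspace@stfu.org --from cgspace@stfu.org --to aorth@stfu.com --quit-after RCPT
</code></pre><ul>
<li>If that also fails with an authentication error then it&rsquo;s the account and not DSpace&rsquo;s mail configuration</li>
</ul>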
<!-- raw HTML omitted -->
</article>
</div> <!-- /.blog-main -->
<aside class="col-sm-3 ml-auto blog-sidebar">
<section class="sidebar-module">
<h4>Recent Posts</h4>
<ol class="list-unstyled">
<li><a href="/cgspace-notes/2022-06/">June, 2022</a></li>
<li><a href="/cgspace-notes/2022-05/">May, 2022</a></li>
<li><a href="/cgspace-notes/2022-04/">April, 2022</a></li>
2022-04-27 08:58:45 +02:00
<li><a href="/cgspace-notes/2022-03/">March, 2022</a></li>
2022-04-04 18:15:58 +02:00
2022-02-10 18:35:40 +01:00
<li><a href="/cgspace-notes/2022-02/">February, 2022</a></li>
2019-02-01 20:45:50 +01:00
</ol>
</section>
<section class="sidebar-module">
<h4>Links</h4>
<ol class="list-unstyled">
<li><a href="https://cgspace.cgiar.org">CGSpace</a></li>
<li><a href="https://dspacetest.cgiar.org">DSpace Test</a></li>
<li><a href="https://github.com/ilri/DSpace">CGSpace @ GitHub</a></li>
</ol>
</section>
</aside>
</div> <!-- /.row -->
</div> <!-- /.container -->
<footer class="blog-footer">
<p dir="auto">
Blog template created by <a href="https://twitter.com/mdo">@mdo</a>, ported to Hugo by <a href='https://twitter.com/mralanorth'>@mralanorth</a>.
</p>
<p>
<a href="#">Back to top</a>
</p>
</footer>
</body>
</html>