<!DOCTYPE html>
<html lang="en" >
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no">
<meta property="og:title" content="January, 2018" />
<meta property="og:description" content="2018-01-02
Uptime Robot noticed that CGSpace went down and up a few times last night, for a few minutes each time
I didn&rsquo;t get any load alerts from Linode and the REST and XMLUI logs don&rsquo;t show anything out of the ordinary
The nginx logs show HTTP 200s until 02/Jan/2018:11:27:17 &#43;0000 when Uptime Robot got an HTTP 500
In dspace.log around that time I see many errors like &ldquo;Client closed the connection before file download was complete&rdquo;
And just before that I see this:
Caused by: org.apache.tomcat.jdbc.pool.PoolExhaustedException: [http-bio-127.0.0.1-8443-exec-980] Timeout: Pool empty. Unable to fetch a connection in 5 seconds, none available[size:50; busy:50; idle:0; lastwait:5000].
Ah hah! So the pool was actually empty!
I need to increase that, let&rsquo;s try to bump it up from 50 to 75
After that one client got an HTTP 499 but then the rest were HTTP 200, so I don&rsquo;t know what the hell Uptime Robot saw
I notice this error quite a few times in dspace.log:
2018-01-02 01:21:19,137 ERROR org.dspace.app.xmlui.aspect.discovery.SidebarFacetsTransformer @ Error while searching for sidebar facets
org.dspace.discovery.SearchServiceException: org.apache.solr.search.SyntaxError: Cannot parse &#39;dateIssued_keyword:[1976&#43;TO&#43;1979]&#39;: Encountered &#34; &#34;]&#34; &#34;] &#34;&#34; at line 1, column 32.
And there are many of these errors every day for the past month:
$ grep -c &#34;Error while searching for sidebar facets&#34; dspace.log.*
dspace.log.2017-11-21:4
dspace.log.2017-11-22:1
dspace.log.2017-11-23:4
dspace.log.2017-11-24:11
dspace.log.2017-11-25:0
dspace.log.2017-11-26:1
dspace.log.2017-11-27:7
dspace.log.2017-11-28:21
dspace.log.2017-11-29:31
dspace.log.2017-11-30:15
dspace.log.2017-12-01:15
dspace.log.2017-12-02:20
dspace.log.2017-12-03:38
dspace.log.2017-12-04:65
dspace.log.2017-12-05:43
dspace.log.2017-12-06:72
dspace.log.2017-12-07:27
dspace.log.2017-12-08:15
dspace.log.2017-12-09:29
dspace.log.2017-12-10:35
dspace.log.2017-12-11:20
dspace.log.2017-12-12:44
dspace.log.2017-12-13:36
dspace.log.2017-12-14:59
dspace.log.2017-12-15:104
dspace.log.2017-12-16:53
dspace.log.2017-12-17:66
dspace.log.2017-12-18:83
dspace.log.2017-12-19:101
dspace.log.2017-12-20:74
dspace.log.2017-12-21:55
dspace.log.2017-12-22:66
dspace.log.2017-12-23:50
dspace.log.2017-12-24:85
dspace.log.2017-12-25:62
dspace.log.2017-12-26:49
dspace.log.2017-12-27:30
dspace.log.2017-12-28:54
dspace.log.2017-12-29:68
dspace.log.2017-12-30:89
dspace.log.2017-12-31:53
dspace.log.2018-01-01:45
dspace.log.2018-01-02:34
Danny wrote to ask for help renewing the wildcard ilri.org certificate and I advised that we should probably use Let&rsquo;s Encrypt if it&rsquo;s just a handful of domains
" />
<meta property="og:type" content="article" />
<meta property="og:url" content="https://alanorth.github.io/cgspace-notes/2018-01/" />
<meta property="article:published_time" content="2018-01-02T08:35:54-08:00" />
<meta property="article:modified_time" content="2020-04-13T15:30:24+03:00" />
<meta name="twitter:card" content="summary"/>
<meta name="twitter:title" content="January, 2018"/>
<meta name="twitter:description" content="2018-01-02
Uptime Robot noticed that CGSpace went down and up a few times last night, for a few minutes each time
I didn&rsquo;t get any load alerts from Linode and the REST and XMLUI logs don&rsquo;t show anything out of the ordinary
The nginx logs show HTTP 200s until 02/Jan/2018:11:27:17 &#43;0000 when Uptime Robot got an HTTP 500
In dspace.log around that time I see many errors like &ldquo;Client closed the connection before file download was complete&rdquo;
And just before that I see this:
Caused by: org.apache.tomcat.jdbc.pool.PoolExhaustedException: [http-bio-127.0.0.1-8443-exec-980] Timeout: Pool empty. Unable to fetch a connection in 5 seconds, none available[size:50; busy:50; idle:0; lastwait:5000].
Ah hah! So the pool was actually empty!
I need to increase that, let&rsquo;s try to bump it up from 50 to 75
After that one client got an HTTP 499 but then the rest were HTTP 200, so I don&rsquo;t know what the hell Uptime Robot saw
I notice this error quite a few times in dspace.log:
2018-01-02 01:21:19,137 ERROR org.dspace.app.xmlui.aspect.discovery.SidebarFacetsTransformer @ Error while searching for sidebar facets
org.dspace.discovery.SearchServiceException: org.apache.solr.search.SyntaxError: Cannot parse &#39;dateIssued_keyword:[1976&#43;TO&#43;1979]&#39;: Encountered &#34; &#34;]&#34; &#34;] &#34;&#34; at line 1, column 32.
And there are many of these errors every day for the past month:
$ grep -c &#34;Error while searching for sidebar facets&#34; dspace.log.*
dspace.log.2017-11-21:4
dspace.log.2017-11-22:1
dspace.log.2017-11-23:4
dspace.log.2017-11-24:11
dspace.log.2017-11-25:0
dspace.log.2017-11-26:1
dspace.log.2017-11-27:7
dspace.log.2017-11-28:21
dspace.log.2017-11-29:31
dspace.log.2017-11-30:15
dspace.log.2017-12-01:15
dspace.log.2017-12-02:20
dspace.log.2017-12-03:38
dspace.log.2017-12-04:65
dspace.log.2017-12-05:43
dspace.log.2017-12-06:72
dspace.log.2017-12-07:27
dspace.log.2017-12-08:15
dspace.log.2017-12-09:29
dspace.log.2017-12-10:35
dspace.log.2017-12-11:20
dspace.log.2017-12-12:44
dspace.log.2017-12-13:36
dspace.log.2017-12-14:59
dspace.log.2017-12-15:104
dspace.log.2017-12-16:53
dspace.log.2017-12-17:66
dspace.log.2017-12-18:83
dspace.log.2017-12-19:101
dspace.log.2017-12-20:74
dspace.log.2017-12-21:55
dspace.log.2017-12-22:66
dspace.log.2017-12-23:50
dspace.log.2017-12-24:85
dspace.log.2017-12-25:62
dspace.log.2017-12-26:49
dspace.log.2017-12-27:30
dspace.log.2017-12-28:54
dspace.log.2017-12-29:68
dspace.log.2017-12-30:89
dspace.log.2017-12-31:53
dspace.log.2018-01-01:45
dspace.log.2018-01-02:34
Danny wrote to ask for help renewing the wildcard ilri.org certificate and I advised that we should probably use Let&rsquo;s Encrypt if it&rsquo;s just a handful of domains
"/>
<meta name="generator" content="Hugo 0.109.0">
<script type="application/ld+json">
{
"@context": "http://schema.org",
"@type": "BlogPosting",
"headline": "January, 2018",
"url": "https://alanorth.github.io/cgspace-notes/2018-01/",
"wordCount": "7940",
"datePublished": "2018-01-02T08:35:54-08:00",
"dateModified": "2020-04-13T15:30:24+03:00",
"author": {
"@type": "Person",
"name": "Alan Orth"
},
"keywords": "Notes"
}
</script>
<link rel="canonical" href="https://alanorth.github.io/cgspace-notes/2018-01/">
<title>January, 2018 | CGSpace Notes</title>
<!-- combined, minified CSS -->
<link href="https://alanorth.github.io/cgspace-notes/css/style.c6ba80bc50669557645abe05f86b73cc5af84408ed20f1551a267bc19ece8228.css" rel="stylesheet" integrity="sha256-xrqAvFBmlVdkWr4F&#43;GtzzFr4RAjtIPFVGiZ7wZ7Ogig=" crossorigin="anonymous">
<!-- minified Font Awesome for SVG icons -->
<script defer src="https://alanorth.github.io/cgspace-notes/js/fontawesome.min.f5072c55a0721857184db93a50561d7dc13975b4de2e19db7f81eb5f3fa57270.js" integrity="sha256-9QcsVaByGFcYTbk6UFYdfcE5dbTeLhnbf4HrXz&#43;lcnA=" crossorigin="anonymous"></script>
<!-- RSS 2.0 feed -->
</head>
<body>
<div class="blog-masthead">
<div class="container">
<nav class="nav blog-nav">
<a class="nav-link " href="https://alanorth.github.io/cgspace-notes/">Home</a>
</nav>
</div>
</div>
<header class="blog-header">
<div class="container">
<h1 class="blog-title" dir="auto"><a href="https://alanorth.github.io/cgspace-notes/" rel="home">CGSpace Notes</a></h1>
<p class="lead blog-description" dir="auto">Documenting day-to-day work on the <a href="https://cgspace.cgiar.org">CGSpace</a> repository.</p>
</div>
</header>
<div class="container">
<div class="row">
<div class="col-sm-8 blog-main">
<article class="blog-post">
<header>
<h2 class="blog-post-title" dir="auto"><a href="https://alanorth.github.io/cgspace-notes/2018-01/">January, 2018</a></h2>
<p class="blog-post-meta">
<time datetime="2018-01-02T08:35:54-08:00">Tue Jan 02, 2018</time>
in
<span class="fas fa-folder" aria-hidden="true"></span>&nbsp;<a href="/categories/notes/" rel="category tag">Notes</a>
</p>
</header>
<h2 id="2018-01-02">2018-01-02</h2>
<ul>
<li>Uptime Robot noticed that CGSpace went down and up a few times last night, for a few minutes each time</li>
<li>I didn&rsquo;t get any load alerts from Linode and the REST and XMLUI logs don&rsquo;t show anything out of the ordinary</li>
<li>The nginx logs show HTTP 200s until <code>02/Jan/2018:11:27:17 +0000</code> when Uptime Robot got an HTTP 500</li>
<li>In dspace.log around that time I see many errors like &ldquo;Client closed the connection before file download was complete&rdquo;</li>
<li>And just before that I see this:</li>
</ul>
<pre tabindex="0"><code>Caused by: org.apache.tomcat.jdbc.pool.PoolExhaustedException: [http-bio-127.0.0.1-8443-exec-980] Timeout: Pool empty. Unable to fetch a connection in 5 seconds, none available[size:50; busy:50; idle:0; lastwait:5000].
</code></pre><ul>
<li>Ah hah! So the pool was actually empty!</li>
<li>I need to increase that, let&rsquo;s try to bump it up from 50 to 75</li>
<li>After that one client got an HTTP 499 but then the rest were HTTP 200, so I don&rsquo;t know what the hell Uptime Robot saw</li>
<li>I notice this error quite a few times in dspace.log:</li>
</ul>
<pre tabindex="0"><code>2018-01-02 01:21:19,137 ERROR org.dspace.app.xmlui.aspect.discovery.SidebarFacetsTransformer @ Error while searching for sidebar facets
org.dspace.discovery.SearchServiceException: org.apache.solr.search.SyntaxError: Cannot parse &#39;dateIssued_keyword:[1976+TO+1979]&#39;: Encountered &#34; &#34;]&#34; &#34;] &#34;&#34; at line 1, column 32.
</code></pre><ul>
<li>And there are many of these errors every day for the past month:</li>
</ul>
<pre tabindex="0"><code>$ grep -c &#34;Error while searching for sidebar facets&#34; dspace.log.*
dspace.log.2017-11-21:4
dspace.log.2017-11-22:1
dspace.log.2017-11-23:4
dspace.log.2017-11-24:11
dspace.log.2017-11-25:0
dspace.log.2017-11-26:1
dspace.log.2017-11-27:7
dspace.log.2017-11-28:21
dspace.log.2017-11-29:31
dspace.log.2017-11-30:15
dspace.log.2017-12-01:15
dspace.log.2017-12-02:20
dspace.log.2017-12-03:38
dspace.log.2017-12-04:65
dspace.log.2017-12-05:43
dspace.log.2017-12-06:72
dspace.log.2017-12-07:27
dspace.log.2017-12-08:15
dspace.log.2017-12-09:29
dspace.log.2017-12-10:35
dspace.log.2017-12-11:20
dspace.log.2017-12-12:44
dspace.log.2017-12-13:36
dspace.log.2017-12-14:59
dspace.log.2017-12-15:104
dspace.log.2017-12-16:53
dspace.log.2017-12-17:66
dspace.log.2017-12-18:83
dspace.log.2017-12-19:101
dspace.log.2017-12-20:74
dspace.log.2017-12-21:55
dspace.log.2017-12-22:66
dspace.log.2017-12-23:50
dspace.log.2017-12-24:85
dspace.log.2017-12-25:62
dspace.log.2017-12-26:49
dspace.log.2017-12-27:30
dspace.log.2017-12-28:54
dspace.log.2017-12-29:68
dspace.log.2017-12-30:89
dspace.log.2017-12-31:53
dspace.log.2018-01-01:45
dspace.log.2018-01-02:34
</code></pre><ul>
<li>Danny wrote to ask for help renewing the wildcard ilri.org certificate and I advised that we should probably use Let&rsquo;s Encrypt if it&rsquo;s just a handful of domains</li>
</ul>
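<ul>
<li>For a handful of explicit domains that would be something like this with certbot (just a sketch; the webroot path and exact domain list here are placeholders, not the real ILRI setup):</li>
</ul>
<pre tabindex="0"><code># certbot certonly --webroot -w /var/www/html -d ilri.org -d www.ilri.org
</code></pre>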
<h2 id="2018-01-03">2018-01-03</h2>
<ul>
<li>I woke up to more up and down of CGSpace, this time UptimeRobot noticed a few rounds of up and down of a few minutes each and Linode also notified of high CPU load from 12 to 2 PM</li>
<li>Looks like I need to increase the database pool size again:</li>
</ul>
<pre tabindex="0"><code>$ grep -c &#34;Timeout: Pool empty.&#34; dspace.log.2018-01-*
dspace.log.2018-01-01:0
dspace.log.2018-01-02:1972
dspace.log.2018-01-03:1909
</code></pre><ul>
<li>For some reason there were a lot of &ldquo;active&rdquo; connections last night:</li>
</ul>
<p><img src="/cgspace-notes/2018/01/postgres_connections-day.png" alt="CGSpace PostgreSQL connections"></p>
<ul>
<li>The active IPs in XMLUI are:</li>
</ul>
<pre tabindex="0"><code># cat /var/log/nginx/access.log /var/log/nginx/access.log.1 /var/log/nginx/library-access.log /var/log/nginx/library-access.log.1 | grep -E &#34;3/Jan/2018&#34; | awk &#39;{print $1}&#39; | sort -n | uniq -c | sort -h | tail
607 40.77.167.141
611 2a00:23c3:8c94:7800:392c:a491:e796:9c50
663 188.226.169.37
759 157.55.39.245
887 68.180.229.254
1037 157.55.39.175
1068 216.244.66.245
1495 66.249.64.91
1934 104.196.152.243
2219 134.155.96.78
</code></pre><ul>
<li>134.155.96.78 appears to be at the University of Mannheim in Germany</li>
<li>They identify as: Mozilla/5.0 (compatible; heritrix/3.2.0 +http://ifm.uni-mannheim.de)</li>
<li>This appears to be the <a href="https://github.com/internetarchive/heritrix3">Internet Archive&rsquo;s open source bot</a></li>
<li>They seem to be re-using their Tomcat session so I don&rsquo;t need to do anything to them just yet:</li>
</ul>
<pre tabindex="0"><code>$ grep 134.155.96.78 dspace.log.2018-01-03 | grep -o -E &#39;session_id=[A-Z0-9]{32}&#39; | sort -n | uniq | wc -l
2
</code></pre><ul>
<li>The API logs show the normal users:</li>
</ul>
<pre tabindex="0"><code># cat /var/log/nginx/rest.log /var/log/nginx/rest.log.1 /var/log/nginx/oai.log /var/log/nginx/oai.log.1 | grep -E &#34;3/Jan/2018&#34; | awk &#39;{print $1}&#39; | sort -n | uniq -c | sort -h | tail
32 207.46.13.182
38 40.77.167.132
38 68.180.229.254
43 66.249.64.91
46 40.77.167.141
49 157.55.39.245
79 157.55.39.175
1533 50.116.102.77
4069 70.32.83.92
9355 45.5.184.196
</code></pre><ul>
<li>In other related news I see a sizeable number of requests coming from python-requests</li>
<li>For example, just in the last day there were 1700!</li>
</ul>
<pre tabindex="0"><code># cat /var/log/nginx/access.log /var/log/nginx/access.log.1 /var/log/nginx/library-access.log /var/log/nginx/library-access.log.1 | grep -c python-requests
1773
</code></pre><ul>
<li>But they come from hundreds of IPs, many of which are 54.x.x.x:</li>
</ul>
<pre tabindex="0"><code># cat /var/log/nginx/access.log /var/log/nginx/access.log.1 /var/log/nginx/library-access.log /var/log/nginx/library-access.log.1 | grep python-requests | awk &#39;{print $1}&#39; | sort -n | uniq -c | sort -h | tail -n 30
9 54.144.87.92
9 54.146.222.143
9 54.146.249.249
9 54.158.139.206
9 54.161.235.224
9 54.163.41.19
9 54.163.4.51
9 54.196.195.107
9 54.198.89.134
9 54.80.158.113
10 54.198.171.98
10 54.224.53.185
10 54.226.55.207
10 54.227.8.195
10 54.242.234.189
10 54.242.238.209
10 54.80.100.66
11 54.161.243.121
11 54.205.154.178
11 54.234.225.84
11 54.87.23.173
11 54.90.206.30
12 54.196.127.62
12 54.224.242.208
12 54.226.199.163
13 54.162.149.249
13 54.211.182.255
19 50.17.61.150
21 54.211.119.107
139 164.39.7.62
</code></pre><ul>
<li>I have no idea what these are but they seem to be coming from Amazon&hellip;</li>
<li>I guess for now I just have to increase the database connection pool&rsquo;s max active</li>
<li>It&rsquo;s currently 75 and normally I&rsquo;d just bump it by 25 but let me be a bit daring and push it by 50 to 125, because I used to see at least 121 connections in pg_stat_activity before when we were using the shitty default pooling</li>
</ul>
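<ul>
<li>A quick way to spot check how many connections are actually open while watching this (a minimal sketch, assuming local psql access to the DSpace database):</li>
</ul>
<pre tabindex="0"><code>$ psql -d dspace -c &#39;SELECT state, count(*) FROM pg_stat_activity GROUP BY state;&#39;
</code></pre>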
<h2 id="2018-01-04">2018-01-04</h2>
<ul>
<li>CGSpace went down and up a bunch of times last night and ILRI staff were complaining a lot last night</li>
<li>The XMLUI logs show this activity:</li>
</ul>
<pre tabindex="0"><code># cat /var/log/nginx/access.log /var/log/nginx/access.log.1 /var/log/nginx/library-access.log /var/log/nginx/library-access.log.1 | grep -E &#34;4/Jan/2018&#34; | awk &#39;{print $1}&#39; | sort -n | uniq -c | sort -h | tail
968 197.211.63.81
981 213.55.99.121
1039 66.249.64.93
1258 157.55.39.175
1273 207.46.13.182
1311 157.55.39.191
1319 157.55.39.197
1775 66.249.64.78
2216 104.196.152.243
3366 66.249.64.91
</code></pre><ul>
<li>Again we ran out of PostgreSQL database connections, even after bumping the pool max active limit from 50 to 75 to 125 yesterday!</li>
</ul>
<pre tabindex="0"><code>2018-01-04 07:36:08,089 ERROR org.dspace.storage.rdbms.DatabaseManager @ SQL connection Error -
org.apache.tomcat.jdbc.pool.PoolExhaustedException: [http-bio-127.0.0.1-8443-exec-256] Timeout: Pool empty. Unable to fetch a connection in 5 seconds, none available[size:125; busy:125; idle:0; lastwait:5000].
</code></pre><ul>
<li>So for this week that is the number one problem!</li>
</ul>
<pre tabindex="0"><code>$ grep -c &#34;Timeout: Pool empty.&#34; dspace.log.2018-01-*
dspace.log.2018-01-01:0
dspace.log.2018-01-02:1972
dspace.log.2018-01-03:1909
dspace.log.2018-01-04:1559
</code></pre><ul>
<li>I will just bump the connection limit to 300 because I&rsquo;m fucking fed up with this shit</li>
<li>Once I get back to Amman I will have to try to create different database pools for different web applications, like recently discussed on the dspace-tech mailing list</li>
<li>Create accounts on CGSpace for two CTA staff <a href="mailto:km4ard@cta.int">km4ard@cta.int</a> and <a href="mailto:bheenick@cta.int">bheenick@cta.int</a></li>
</ul>
<h2 id="2018-01-05">2018-01-05</h2>
<ul>
<li>Peter said that CGSpace was down last night and Tsega restarted Tomcat</li>
<li>I don&rsquo;t see any alerts from Linode or UptimeRobot, and there are no PostgreSQL connection errors in the dspace logs for today:</li>
</ul>
<pre tabindex="0"><code>$ grep -c &#34;Timeout: Pool empty.&#34; dspace.log.2018-01-*
dspace.log.2018-01-01:0
dspace.log.2018-01-02:1972
dspace.log.2018-01-03:1909
dspace.log.2018-01-04:1559
dspace.log.2018-01-05:0
</code></pre><ul>
<li>Daniel asked for help with their DAGRIS server (linode2328112) that has no disk space</li>
<li>I had a look and there is one Apache 2 log file that is 73GB, with lots of this:</li>
</ul>
<pre tabindex="0"><code>[Fri Jan 05 09:31:22.965398 2018] [:error] [pid 9340] [client 213.55.99.121:64476] WARNING: Unable to find a match for &#34;9-16-1-RV.doc&#34; in &#34;/home/files/journals/6//articles/9/&#34;. Skipping this file., referer: http://dagris.info/reviewtool/index.php/index/install/upgrade
</code></pre><ul>
<li>I will delete the log file for now and tell Danny</li>
<li>Also, I&rsquo;m still seeing a hundred or so of the &ldquo;ERROR org.dspace.app.xmlui.aspect.discovery.SidebarFacetsTransformer&rdquo; errors in dspace logs, I need to search the dspace-tech mailing list to see what the cause is</li>
<li>I will run a full Discovery reindex in the mean time to see if it&rsquo;s something wrong with the Discovery Solr core</li>
</ul>
<pre tabindex="0"><code>$ export JAVA_OPTS=&#34;-Dfile.encoding=UTF-8 -Xmx512m -XX:+TieredCompilation -XX:TieredStopAtLevel=1&#34;
$ time schedtool -D -e ionice -c2 -n7 nice -n19 [dspace]/bin/dspace index-discovery -b
real 110m43.985s
user 15m24.960s
sys 3m14.890s
</code></pre><ul>
<li>Reboot CGSpace and DSpace Test for new kernels (4.14.12-x86_64-linode92) that partially mitigate the <a href="https://blog.linode.com/2018/01/03/cpu-vulnerabilities-meltdown-spectre/">Spectre and Meltdown CPU vulnerabilities</a></li>
</ul>
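<ul>
<li>After the reboots the running kernel can be confirmed with, for example:</li>
</ul>
<pre tabindex="0"><code>$ uname -r
4.14.12-x86_64-linode92
</code></pre>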
<h2 id="2018-01-06">2018-01-06</h2>
<ul>
<li>I&rsquo;m still seeing Solr errors in the DSpace logs even after the full reindex yesterday:</li>
</ul>
<pre tabindex="0"><code>org.dspace.discovery.SearchServiceException: org.apache.solr.search.SyntaxError: Cannot parse &#39;dateIssued_keyword:[1983+TO+1989]&#39;: Encountered &#34; &#34;]&#34; &#34;] &#34;&#34; at line 1, column 32.
</code></pre><ul>
<li>I posted a message to the dspace-tech mailing list to see if anyone can help</li>
</ul>
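<ul>
<li>In the meantime, one way to check whether the range itself parses when the spaces are URL encoded properly (a sketch, assuming the Discovery core is named <code>search</code> and Solr is listening on port 8081):</li>
</ul>
<pre tabindex="0"><code>$ curl -s -G &#39;http://localhost:8081/solr/search/select&#39; --data-urlencode &#39;q=dateIssued_keyword:[1983 TO 1989]&#39; --data-urlencode &#39;rows=0&#39; --data-urlencode &#39;wt=json&#39; | grep numFound
</code></pre>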
<h2 id="2018-01-09">2018-01-09</h2>
<ul>
<li>Advise Sisay about blank lines in some IITA records</li>
<li>Generate a list of author affiliations for Peter to clean up:</li>
</ul>
<pre tabindex="0"><code>dspace=# \copy (select distinct text_value, count(*) as count from metadatavalue where metadata_field_id = (select metadata_field_id from metadatafieldregistry where element = &#39;contributor&#39; and qualifier = &#39;affiliation&#39;) AND resource_type_id = 2 group by text_value order by count desc) to /tmp/affiliations.csv with csv;
COPY 4515
</code></pre><h2 id="2018-01-10">2018-01-10</h2>
<ul>
<li>I looked to see what happened to this year&rsquo;s Solr statistics sharding task that should have run on 2018-01-01 and of course it failed:</li>
</ul>
<pre tabindex="0"><code>Moving: 81742 into core statistics-2010
Exception: IOException occured when talking to server at: http://localhost:8081/solr//statistics-2010
org.apache.solr.client.solrj.SolrServerException: IOException occured when talking to server at: http://localhost:8081/solr//statistics-2010
at org.apache.solr.client.solrj.impl.HttpSolrServer.executeMethod(HttpSolrServer.java:566)
at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:210)
at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:206)
at org.dspace.statistics.SolrLogger.shardSolrIndex(SourceFile:2243)
at org.dspace.statistics.util.StatisticsClient.main(StatisticsClient.java:106)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.dspace.app.launcher.ScriptLauncher.runOneCommand(ScriptLauncher.java:226)
at org.dspace.app.launcher.ScriptLauncher.main(ScriptLauncher.java:78)
Caused by: org.apache.http.client.ClientProtocolException
at org.apache.http.impl.client.AbstractHttpClient.doExecute(AbstractHttpClient.java:867)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:106)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:57)
at org.apache.solr.client.solrj.impl.HttpSolrServer.executeMethod(HttpSolrServer.java:448)
... 10 more
Caused by: org.apache.http.client.NonRepeatableRequestException: Cannot retry request with a non-repeatable request entity. The cause lists the reason the original request failed.
at org.apache.http.impl.client.DefaultRequestDirector.tryExecute(DefaultRequestDirector.java:659)
at org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:487)
at org.apache.http.impl.client.AbstractHttpClient.doExecute(AbstractHttpClient.java:863)
... 14 more
Caused by: java.net.SocketException: Connection reset
at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:115)
at java.net.SocketOutputStream.write(SocketOutputStream.java:155)
at org.apache.http.impl.io.AbstractSessionOutputBuffer.flushBuffer(AbstractSessionOutputBuffer.java:159)
at org.apache.http.impl.io.AbstractSessionOutputBuffer.write(AbstractSessionOutputBuffer.java:179)
at org.apache.http.impl.io.ChunkedOutputStream.flushCacheWithAppend(ChunkedOutputStream.java:124)
at org.apache.http.impl.io.ChunkedOutputStream.write(ChunkedOutputStream.java:181)
at org.apache.http.entity.InputStreamEntity.writeTo(InputStreamEntity.java:132)
at org.apache.http.entity.HttpEntityWrapper.writeTo(HttpEntityWrapper.java:89)
at org.apache.http.impl.client.EntityEnclosingRequestWrapper$EntityWrapper.writeTo(EntityEnclosingRequestWrapper.java:108)
at org.apache.http.impl.entity.EntitySerializer.serialize(EntitySerializer.java:117)
at org.apache.http.impl.AbstractHttpClientConnection.sendRequestEntity(AbstractHttpClientConnection.java:265)
at org.apache.http.impl.conn.ManagedClientConnectionImpl.sendRequestEntity(ManagedClientConnectionImpl.java:203)
at org.apache.http.protocol.HttpRequestExecutor.doSendRequest(HttpRequestExecutor.java:236)
at org.apache.http.protocol.HttpRequestExecutor.execute(HttpRequestExecutor.java:121)
at org.apache.http.impl.client.DefaultRequestDirector.tryExecute(DefaultRequestDirector.java:685)
... 16 more
</code></pre><ul>
<li>DSpace Test has the same error but with creating the 2017 core:</li>
</ul>
<pre tabindex="0"><code>Moving: 2243021 into core statistics-2017
Exception: IOException occured when talking to server at: http://localhost:8081/solr//statistics-2017
org.apache.solr.client.solrj.SolrServerException: IOException occured when talking to server at: http://localhost:8081/solr//statistics-2017
at org.apache.solr.client.solrj.impl.HttpSolrServer.executeMethod(HttpSolrServer.java:566)
at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:210)
at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:206)
at org.dspace.statistics.SolrLogger.shardSolrIndex(SourceFile:2243)
at org.dspace.statistics.util.StatisticsClient.main(StatisticsClient.java:106)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.dspace.app.launcher.ScriptLauncher.runOneCommand(ScriptLauncher.java:226)
at org.dspace.app.launcher.ScriptLauncher.main(ScriptLauncher.java:78)
Caused by: org.apache.http.client.ClientProtocolException
at org.apache.http.impl.client.AbstractHttpClient.doExecute(AbstractHttpClient.java:867)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:106)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:57)
at org.apache.solr.client.solrj.impl.HttpSolrServer.executeMethod(HttpSolrServer.java:448)
... 10 more
</code></pre><ul>
<li>There is interesting documentation about this on the DSpace Wiki: <a href="https://wiki.lyrasis.org/display/DSDOC5x/SOLR+Statistics+Maintenance#SOLRStatisticsMaintenance-SolrShardingByYear">https://wiki.lyrasis.org/display/DSDOC5x/SOLR+Statistics+Maintenance#SOLRStatisticsMaintenance-SolrShardingByYear</a></li>
<li>I&rsquo;m looking to see maybe if we&rsquo;re hitting the issues mentioned in <a href="https://jira.duraspace.org/browse/DS-2212">DS-2212</a> that were apparently fixed in DSpace 5.2</li>
<li>I can apparently search for records in the Solr stats core that have an empty <code>owningColl</code> field using this in the Solr admin query: <code>-owningColl:*</code></li>
<li>On CGSpace I see 48,000,000 records that have an <code>owningColl</code> field and 34,000,000 that don&rsquo;t:</li>
</ul>
<pre tabindex="0"><code>$ http &#39;http://localhost:3000/solr/statistics/select?q=owningColl%3A*&amp;wt=json&amp;indent=true&#39; | grep numFound
&#34;response&#34;:{&#34;numFound&#34;:48476327,&#34;start&#34;:0,&#34;docs&#34;:[
$ http &#39;http://localhost:3000/solr/statistics/select?q=-owningColl%3A*&amp;wt=json&amp;indent=true&#39; | grep numFound
&#34;response&#34;:{&#34;numFound&#34;:34879872,&#34;start&#34;:0,&#34;docs&#34;:[
</code></pre><ul>
<li>I tested the <code>dspace stats-util -s</code> process on my local machine and it failed the same way</li>
<li>It doesn&rsquo;t seem to be helpful, but the dspace log shows this:</li>
</ul>
<pre tabindex="0"><code>2018-01-10 10:51:19,301 INFO org.dspace.statistics.SolrLogger @ Created core with name: statistics-2016
2018-01-10 10:51:19,301 INFO org.dspace.statistics.SolrLogger @ Moving: 3821 records into core statistics-2016
</code></pre><ul>
<li>Terry Brady has written some notes on the DSpace Wiki about Solr sharding issues: <a href="https://wiki.lyrasis.org/display/%7Eterrywbrady/Statistics+Import+Export+Issues">https://wiki.lyrasis.org/display/%7Eterrywbrady/Statistics+Import+Export+Issues</a></li>
<li>Uptime Robot said that CGSpace went down at around 9:43 AM</li>
<li>I looked at PostgreSQL&rsquo;s <code>pg_stat_activity</code> table and saw 161 active connections, but no pool errors in the DSpace logs:</li>
</ul>
<pre tabindex="0"><code>$ grep -c &#34;Timeout: Pool empty.&#34; dspace.log.2018-01-10
0
</code></pre><ul>
<li>The XMLUI logs show quite a bit of activity today:</li>
</ul>
<pre tabindex="0"><code># cat /var/log/nginx/access.log /var/log/nginx/access.log.1 /var/log/nginx/library-access.log /var/log/nginx/library-access.log.1 | grep &#34;10/Jan/2018&#34; | awk &#39;{print $1}&#39; | sort -n | uniq -c | sort -h | tail
951 207.46.13.159
954 157.55.39.123
1217 95.108.181.88
1503 104.196.152.243
6455 70.36.107.50
11412 70.36.107.190
16730 70.36.107.49
17386 2607:fa98:40:9:26b6:fdff:feff:1c96
21566 2607:fa98:40:9:26b6:fdff:feff:195d
45384 2607:fa98:40:9:26b6:fdff:feff:1888
</code></pre><ul>
<li>The user agent for the top six or so IPs are all the same:</li>
</ul>
<pre tabindex="0"><code>&#34;Mozilla/5.0 (Windows NT 6.3; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/38.0.2125.111 Safari/537.36&#34;
</code></pre><ul>
<li><code>whois</code> says they come from <a href="http://www.perfectip.net/">Perfect IP</a></li>
<li>I&rsquo;ve never seen those top IPs before, but they have created 50,000 Tomcat sessions today:</li>
</ul>
<pre tabindex="0"><code>$ grep -E &#39;(2607:fa98:40:9:26b6:fdff:feff:1888|2607:fa98:40:9:26b6:fdff:feff:195d|2607:fa98:40:9:26b6:fdff:feff:1c96|70.36.107.49|70.36.107.190|70.36.107.50)&#39; /home/cgspace.cgiar.org/log/dspace.log.2018-01-10 | grep -o -E &#39;session_id=[A-Z0-9]{32}&#39; | sort -n | uniq | wc -l
49096
</code></pre><ul>
<li>Rather than blocking their IPs, I think I might just add their user agent to the &ldquo;badbots&rdquo; zone with Baidu, because they seem to be the only ones using that user agent:</li>
</ul>
<pre tabindex="0"><code># cat /var/log/nginx/access.log /var/log/nginx/access.log.1 /var/log/nginx/library-access.log /var/log/nginx/library-access.log.1 | grep &#34;Mozilla/5.0 (Windows NT 6.3; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/38.0.2125.111 Safari
/537.36&#34; | awk &#39;{print $1}&#39; | sort -n | uniq -c | sort -h | tail
6796 70.36.107.50
11870 70.36.107.190
17323 70.36.107.49
19204 2607:fa98:40:9:26b6:fdff:feff:1c96
23401 2607:fa98:40:9:26b6:fdff:feff:195d
47875 2607:fa98:40:9:26b6:fdff:feff:1888
</code></pre><ul>
<li>I added the user agent to nginx&rsquo;s badbots limit req zone but upon testing the config I got an error:</li>
</ul>
<pre tabindex="0"><code># nginx -t
nginx: [emerg] could not build map_hash, you should increase map_hash_bucket_size: 64
nginx: configuration file /etc/nginx/nginx.conf test failed
</code></pre><ul>
<li>According to nginx docs the <a href="https://nginx.org/en/docs/hash.html">bucket size should be a multiple of the CPU&rsquo;s cache alignment</a>, which is 64 for us:</li>
</ul>
<pre tabindex="0"><code># cat /proc/cpuinfo | grep cache_alignment | head -n1
cache_alignment : 64
</code></pre><ul>
<li>On our servers that is 64, so I increased this parameter to 128 and deployed the changes to nginx</li>
<li>Almost immediately the PostgreSQL connections dropped back down to 40 or so, and UptimeRobot said the site was back up</li>
<li>So that&rsquo;s interesting that we&rsquo;re not out of PostgreSQL connections (current pool maxActive is 300!) but the system is &ldquo;down&rdquo; to UptimeRobot and very slow to use</li>
<li>Linode continues to test mitigations for Meltdown and Spectre: <a href="https://blog.linode.com/2018/01/03/cpu-vulnerabilities-meltdown-spectre/">https://blog.linode.com/2018/01/03/cpu-vulnerabilities-meltdown-spectre/</a></li>
<li>I rebooted DSpace Test to see if the kernel will be updated (currently Linux 4.14.12-x86_64-linode92)&hellip; nope.</li>
<li>It looks like Linode will reboot the KVM hosts later this week, though</li>
<li>Udana from WLE asked if we could give him permission to upload CSVs to CGSpace (which would require super admin access)</li>
<li>Citing concerns with metadata quality, I suggested adding him on DSpace Test first</li>
<li>I opened a ticket with Atmire to ask them about DSpace 5.8 compatibility: <a href="https://tracker.atmire.com/tickets-cgiar-ilri/view-ticket?id=560">https://tracker.atmire.com/tickets-cgiar-ilri/view-ticket?id=560</a></li>
</ul>
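<ul>
<li>Going back to the statistics sharding failure from earlier, the Solr core admin API is a quick way to see which yearly statistics cores actually exist (a sketch, assuming Solr is listening on port 8081 as in the error above):</li>
</ul>
<pre tabindex="0"><code>$ curl -s &#39;http://localhost:8081/solr/admin/cores?action=STATUS&amp;wt=json&#39; | grep -o -E &#39;statistics-[0-9]{4}&#39; | sort -u
</code></pre>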
<h2 id="2018-01-11">2018-01-11</h2>
<ul>
<li>The PostgreSQL and firewall graphs from this week show clearly the load from the new bot from PerfectIP.net yesterday:</li>
</ul>
<p><img src="/cgspace-notes/2018/01/postgres_connections-day-perfectip.png" alt="PostgreSQL load">
<img src="/cgspace-notes/2018/01/firewall-perfectip.png" alt="Firewall load"></p>
<ul>
<li>Linode rebooted DSpace Test and CGSpace for their host hypervisor kernel updates</li>
<li>Following up with the Solr sharding issue on the dspace-tech mailing list, I noticed this interesting snippet in the Tomcat <code>localhost_access_log</code> at the time of my sharding attempt on my test machine:</li>
</ul>
<pre tabindex="0"><code>127.0.0.1 - - [10/Jan/2018:10:51:19 +0200] &#34;GET /solr/statistics/select?q=type%3A2+AND+id%3A1&amp;wt=javabin&amp;version=2 HTTP/1.1&#34; 200 107
127.0.0.1 - - [10/Jan/2018:10:51:19 +0200] &#34;GET /solr/statistics/select?q=*%3A*&amp;rows=0&amp;facet=true&amp;facet.range=time&amp;facet.range.start=NOW%2FYEAR-18YEARS&amp;facet.range.end=NOW%2FYEAR%2B0YEARS&amp;facet.range.gap=%2B1YEAR&amp;facet.mincount=1&amp;wt=javabin&amp;version=2 HTTP/1.1&#34; 200 447
127.0.0.1 - - [10/Jan/2018:10:51:19 +0200] &#34;GET /solr/admin/cores?action=STATUS&amp;core=statistics-2016&amp;indexInfo=true&amp;wt=javabin&amp;version=2 HTTP/1.1&#34; 200 76
127.0.0.1 - - [10/Jan/2018:10:51:19 +0200] &#34;GET /solr/admin/cores?action=CREATE&amp;name=statistics-2016&amp;instanceDir=statistics&amp;dataDir=%2FUsers%2Faorth%2Fdspace%2Fsolr%2Fstatistics-2016%2Fdata&amp;wt=javabin&amp;version=2 HTTP/1.1&#34; 200 63
127.0.0.1 - - [10/Jan/2018:10:51:19 +0200] &#34;GET /solr/statistics/select?csv.mv.separator=%7C&amp;q=*%3A*&amp;fq=time%3A%28%5B2016%5C-01%5C-01T00%5C%3A00%5C%3A00Z+TO+2017%5C-01%5C-01T00%5C%3A00%5C%3A00Z%5D+NOT+2017%5C-01%5C-01T00%5C%3A00%5C%3A00Z%29&amp;rows=10000&amp;wt=csv HTTP/1.1&#34; 200 2137630
127.0.0.1 - - [10/Jan/2018:10:51:19 +0200] &#34;GET /solr/statistics/admin/luke?show=schema&amp;wt=javabin&amp;version=2 HTTP/1.1&#34; 200 16253
127.0.0.1 - - [10/Jan/2018:10:51:19 +0200] &#34;POST /solr//statistics-2016/update/csv?commit=true&amp;softCommit=false&amp;waitSearcher=true&amp;f.previousWorkflowStep.split=true&amp;f.previousWorkflowStep.separator=%7C&amp;f.previousWorkflowStep.encapsulator=%22&amp;f.actingGroupId.split=true&amp;f.actingGroupId.separator=%7C&amp;f.actingGroupId.encapsulator=%22&amp;f.containerCommunity.split=true&amp;f.containerCommunity.separator=%7C&amp;f.containerCommunity.encapsulator=%22&amp;f.range.split=true&amp;f.range.separator=%7C&amp;f.range.encapsulator=%22&amp;f.containerItem.split=true&amp;f.containerItem.separator=%7C&amp;f.containerItem.encapsulator=%22&amp;f.p_communities_map.split=true&amp;f.p_communities_map.separator=%7C&amp;f.p_communities_map.encapsulator=%22&amp;f.ngram_query_search.split=true&amp;f.ngram_query_search.separator=%7C&amp;f.ngram_query_search.encapsulator=%22&amp;f.containerBitstream.split=true&amp;f.containerBitstream.separator=%7C&amp;f.containerBitstream.encapsulator=%22&amp;f.owningItem.split=true&amp;f.owningItem.separator=%7C&amp;f.owningItem.encapsulator=%22&amp;f.actingGroupParentId.split=true&amp;f.actingGroupParentId.separator=%7C&amp;f.actingGroupParentId.encapsulator=%22&amp;f.text.split=true&amp;f.text.separator=%7C&amp;f.text.encapsulator=%22&amp;f.simple_query_search.split=true&amp;f.simple_query_search.separator=%7C&amp;f.simple_query_search.encapsulator=%22&amp;f.owningComm.split=true&amp;f.owningComm.separator=%7C&amp;f.owningComm.encapsulator=%22&amp;f.owner.split=true&amp;f.owner.separator=%7C&amp;f.owner.encapsulator=%22&amp;f.filterquery.split=true&amp;f.filterquery.separator=%7C&amp;f.filterquery.encapsulator=%22&amp;f.p_group_map.split=true&amp;f.p_group_map.separator=%7C&amp;f.p_group_map.encapsulator=%22&amp;f.actorMemberGroupId.split=true&amp;f.actorMemberGroupId.separator=%7C&amp;f.actorMemberGroupId.encapsulator=%22&amp;f.bitstreamId.split=true&amp;f.bitstreamId.separator=%7C&amp;f.bitstreamId.encapsulator=%22&amp;f.group_name.split=true&amp;f.group_name.separator=%7C&amp;f.group_name.encapsulator=%22&amp;f.p_communities_name.split=true&amp;f.p_communities_name.separator=%7C&amp;f.p_communities_name.encapsulator=%22&amp;f.query.split=true&amp;f.query.separator=%7C&amp;f.query.encapsulator=%22&amp;f.workflowStep.split=true&amp;f.workflowStep.separator=%7C&amp;f.workflowStep.encapsulator=%22&amp;f.containerCollection.split=true&amp;f.containerCollection.separator=%7C&amp;f.containerCollection.encapsulator=%22&amp;f.complete_query_search.split=true&amp;f.complete_query_search.separator=%7C&amp;f.complete_query_search.encapsulator=%22&amp;f.p_communities_id.split=true&amp;f.p_communities_id.separator=%7C&amp;f.p_communities_id.encapsulator=%22&amp;f.rangeDescription.split=true&amp;f.rangeDescription.separator=%7C&amp;f.rangeDescription.encapsulator=%22&amp;f.group_id.split=true&amp;f.group_id.separator=%7C&amp;f.group_id.encapsulator=%22&amp;f.bundleName.split=true&amp;f.bundleName.separator=%7C&amp;f.bundleName.encapsulator=%22&amp;f.ngram_simplequery_search.split=true&amp;f.ngram_simplequery_search.separator=%7C&amp;f.ngram_simplequery_search.encapsulator=%22&amp;f.group_map.split=true&amp;f.group_map.separator=%7C&amp;f.group_map.encapsulator=%22&amp;f.owningColl.split=true&amp;f.owningColl.separator=%7C&amp;f.owningColl.encapsulator=%22&amp;f.p_group_id.split=true&amp;f.p_group_id.separator=%7C&amp;f.p_group_id.encapsulator=%22&amp;f.p_group_name.split=true&amp;f.p_group_name.separator=%7C&amp;f.p_group_name.encapsulator=%22&amp;wt=javabin
&amp;version=2 HTTP/1.1&#34; 409 156
</code></pre><ul>
<li>The new core is created but when DSpace attempts to POST to it there is an HTTP 409 error</li>
<li>This is apparently a common Solr error code that means &ldquo;version conflict&rdquo;: <a href="http://yonik.com/solr/optimistic-concurrency/">http://yonik.com/solr/optimistic-concurrency/</a></li>
<li>Looks like that bot from the PerfectIP.net host ended up making about 450,000 requests to XMLUI alone yesterday:</li>
</ul>
<pre tabindex="0"><code># cat /var/log/nginx/access.log /var/log/nginx/access.log.1 /var/log/nginx/library-access.log /var/log/nginx/library-access.log.1 | grep &#34;Mozilla/5.0 (Windows NT 6.3; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/38.0.2125.111 Safari/537.36&#34; | grep &#34;10/Jan/2018&#34; | awk &#39;{print $1}&#39; | sort -n | uniq -c | sort -h | tail
21572 70.36.107.50
30722 70.36.107.190
34566 70.36.107.49
101829 2607:fa98:40:9:26b6:fdff:feff:195d
111535 2607:fa98:40:9:26b6:fdff:feff:1c96
161797 2607:fa98:40:9:26b6:fdff:feff:1888
</code></pre><ul>
<li>Wow, I just figured out how to set the application name of each database pool in the JNDI config of Tomcat&rsquo;s <code>server.xml</code>:</li>
</ul>
<pre tabindex="0"><code>&lt;Resource name=&#34;jdbc/dspaceWeb&#34; auth=&#34;Container&#34; type=&#34;javax.sql.DataSource&#34;
driverClassName=&#34;org.postgresql.Driver&#34;
url=&#34;jdbc:postgresql://localhost:5432/dspacetest?ApplicationName=dspaceWeb&#34;
username=&#34;dspace&#34;
password=&#34;dspace&#34;
initialSize=&#39;5&#39;
maxActive=&#39;75&#39;
maxIdle=&#39;15&#39;
minIdle=&#39;5&#39;
maxWait=&#39;5000&#39;
validationQuery=&#39;SELECT 1&#39;
testOnBorrow=&#39;true&#39; /&gt;
</code></pre><ul>
<li>So theoretically I could name each connection &ldquo;xmlui&rdquo; or &ldquo;dspaceWeb&rdquo; or something meaningful and it would show up in PostgreSQL&rsquo;s <code>pg_stat_activity</code> table!</li>
<li>This would be super helpful for figuring out where load was coming from (now I wonder if I could figure out how to graph this)</li>
<li>Also, I realized that the <code>db.jndi</code> parameter in dspace.cfg needs to match the <code>name</code> value in your application&rsquo;s context, not the <code>global</code> one</li>
<li>Ah hah! Also, I can name the default DSpace connection pool in dspace.cfg as well, like:</li>
</ul>
<pre tabindex="0"><code>db.url = jdbc:postgresql://localhost:5432/dspacetest?ApplicationName=dspaceDefault
</code></pre><ul>
<li>With that it is super easy to see where PostgreSQL connections are coming from in <code>pg_stat_activity</code></li>
</ul>
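<ul>
<li>For example, once the <code>ApplicationName</code> parameters are set, something like this shows the connections per pool (a minimal sketch, assuming local psql access):</li>
</ul>
<pre tabindex="0"><code>$ psql -d dspacetest -c &#39;SELECT application_name, state, count(*) FROM pg_stat_activity GROUP BY application_name, state ORDER BY count(*) DESC;&#39;
</code></pre>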
<h2 id="2018-01-12">2018-01-12</h2>
<ul>
<li>I&rsquo;m looking at the <a href="https://wiki.lyrasis.org/display/DSDOC6x/Installing+DSpace#InstallingDSpace-ServletEngine(ApacheTomcat7orlater,Jetty,CauchoResinorequivalent)">DSpace 6.0 Install docs</a> and notice they tweak the number of threads in their Tomcat connector:</li>
</ul>
<pre tabindex="0"><code>&lt;!-- Define a non-SSL HTTP/1.1 Connector on port 8080 --&gt;
&lt;Connector port=&#34;8080&#34;
maxThreads=&#34;150&#34;
minSpareThreads=&#34;25&#34;
maxSpareThreads=&#34;75&#34;
enableLookups=&#34;false&#34;
redirectPort=&#34;8443&#34;
acceptCount=&#34;100&#34;
connectionTimeout=&#34;20000&#34;
disableUploadTimeout=&#34;true&#34;
URIEncoding=&#34;UTF-8&#34;/&gt;
</code></pre><ul>
<li>In Tomcat 8.5 the <code>maxThreads</code> defaults to 200 which is probably fine, but tweaking <code>minSpareThreads</code> could be good</li>
<li>I don&rsquo;t see a setting for <code>maxSpareThreads</code> in the docs so that might be an error</li>
<li>Looks like in Tomcat 8.5 the default URIEncoding for Connectors is UTF-8, so we don&rsquo;t need to specify that manually anymore: <a href="https://tomcat.apache.org/tomcat-8.5-doc/config/http.html">https://tomcat.apache.org/tomcat-8.5-doc/config/http.html</a></li>
<li>Ooh, I just saw the <code>acceptorThreadCount</code> setting (in Tomcat 7 and 8.5):</li>
</ul>
<pre tabindex="0"><code>The number of threads to be used to accept connections. Increase this value on a multi CPU machine, although you would never really need more than 2. Also, with a lot of non keep alive connections, you might want to increase this value as well. Default value is 1.
</code></pre><ul>
<li>That could be very interesting</li>
2018-02-11 17:28:23 +01:00
</ul>
<h2 id="2018-01-13">2018-01-13</h2>
<ul>
<li>Still testing DSpace 6.2 on Tomcat 8.5.24</li>
<li>Catalina errors at Tomcat 8.5 startup:</li>
</ul>
<pre tabindex="0"><code>13-Jan-2018 13:59:05.245 WARNING [main] org.apache.tomcat.dbcp.dbcp2.BasicDataSourceFactory.getObjectInstance Name = dspace6 Property maxActive is not used in DBCP2, use maxTotal instead. maxTotal default value is 8. You have set value of &#34;35&#34; for &#34;maxActive&#34; property, which is being ignored.
13-Jan-2018 13:59:05.245 WARNING [main] org.apache.tomcat.dbcp.dbcp2.BasicDataSourceFactory.getObjectInstance Name = dspace6 Property maxWait is not used in DBCP2 , use maxWaitMillis instead. maxWaitMillis default value is -1. You have set value of &#34;5000&#34; for &#34;maxWait&#34; property, which is being ignored.
</code></pre><ul>
<li>I looked in my Tomcat 7.0.82 logs and I don&rsquo;t see anything about DBCP2 errors, so I guess this a Tomcat 8.0.x or 8.5.x thing</li>
<li>DBCP2 appears to be Tomcat 8.0.x and up according to the <a href="https://tomcat.apache.org/migration-8.html">Tomcat 8.0 migration guide</a></li>
<li>I have updated our <a href="https://github.com/ilri/rmg-ansible-public/commit/246f9d7b06d53794f189f0cc57ad5ddd80f0b014">Ansible infrastructure scripts</a> so that it will be ready whenever we switch to Tomcat 8 (probably with Ubuntu 18.04 later this year)</li>
<li>When I enable the ResourceLink in the ROOT.xml context I get the following error in the Tomcat localhost log:</li>
</ul>
<pre tabindex="0"><code>13-Jan-2018 14:14:36.017 SEVERE [localhost-startStop-1] org.apache.catalina.core.StandardContext.listenerStart Exception sending context initialized event to listener instance of class [org.dspace.app.util.DSpaceWebappListener]
java.lang.ExceptionInInitializerError
at org.dspace.app.util.AbstractDSpaceWebapp.register(AbstractDSpaceWebapp.java:74)
at org.dspace.app.util.DSpaceWebappListener.contextInitialized(DSpaceWebappListener.java:31)
at org.apache.catalina.core.StandardContext.listenerStart(StandardContext.java:4745)
at org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5207)
at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:150)
at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:752)
at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:728)
at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:734)
at org.apache.catalina.startup.HostConfig.deployDescriptor(HostConfig.java:629)
at org.apache.catalina.startup.HostConfig$DeployDescriptor.run(HostConfig.java:1839)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.NullPointerException
at org.dspace.storage.rdbms.DatabaseUtils.updateDatabase(DatabaseUtils.java:547)
at org.dspace.core.Context.&lt;clinit&gt;(Context.java:103)
... 15 more
</code></pre><ul>
<li>Interesting blog post benchmarking Tomcat JDBC vs Apache Commons DBCP2, with configuration snippets: <a href="http://www.tugay.biz/2016/07/tomcat-connection-pool-vs-apache.html">http://www.tugay.biz/2016/07/tomcat-connection-pool-vs-apache.html</a></li>
<li>The Tomcat vs Apache pool thing is confusing, but apparently we&rsquo;re using Apache Commons DBCP2 because we don&rsquo;t specify <code>factory=&quot;org.apache.tomcat.jdbc.pool.DataSourceFactory&quot;</code> in our global resource</li>
<li>So at least I know that I&rsquo;m not looking for documentation or troubleshooting on the Tomcat JDBC pool!</li>
<li>I looked at <code>pg_stat_activity</code> during Tomcat&rsquo;s startup and I see that the pool created in server.xml is indeed connecting, just that nothing uses it</li>
<li>Also, the fallback connection parameters specified in local.cfg (not dspace.cfg) are used</li>
<li>Shit, this might actually be a DSpace error: <a href="https://jira.duraspace.org/browse/DS-3434">https://jira.duraspace.org/browse/DS-3434</a></li>
<li>I&rsquo;ll comment on that issue</li>
</ul>
<h2 id="2018-01-14">2018-01-14</h2>
<ul>
<li>Looking at the authors Peter had corrected</li>
<li>Some had multiple and he&rsquo;s corrected them by adding <code>||</code> in the correction column, but I can&rsquo;t process those this way so I will just have to flag them and do those manually later</li>
<li>Also, I can flag the values that have &ldquo;DELETE&rdquo;</li>
<li>Then I need to facet the correction column on isBlank(value) and not flagged</li>
</ul>
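<ul>
<li>A rough count of how many corrections contain multiple values separated by <code>||</code> (and will therefore need manual handling) is easy to get from the corrections CSV, for example:</li>
</ul>
<pre tabindex="0"><code>$ grep -c &#39;||&#39; /tmp/2018-01-14-Authors-1300-Corrections.csv
</code></pre>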
<h2 id="2018-01-15">2018-01-15</h2>
<ul>
<li>Help Udana from IWMI export a CSV from DSpace Test so he can start trying a batch upload</li>
<li>I&rsquo;m going to apply these ~130 corrections on CGSpace:</li>
</ul>
<pre tabindex="0"><code>update metadatavalue set text_value=&#39;Formally Published&#39; where resource_type_id=2 and metadata_field_id=214 and text_value like &#39;Formally published&#39;;
delete from metadatavalue where resource_type_id=2 and metadata_field_id=214 and text_value like &#39;NO&#39;;
update metadatavalue set text_value=&#39;en&#39; where resource_type_id=2 and metadata_field_id=38 and text_value ~ &#39;(En|English)&#39;;
update metadatavalue set text_value=&#39;fr&#39; where resource_type_id=2 and metadata_field_id=38 and text_value ~ &#39;(fre|frn|French)&#39;;
update metadatavalue set text_value=&#39;es&#39; where resource_type_id=2 and metadata_field_id=38 and text_value ~ &#39;(Spanish|spa)&#39;;
update metadatavalue set text_value=&#39;vi&#39; where resource_type_id=2 and metadata_field_id=38 and text_value=&#39;Vietnamese&#39;;
update metadatavalue set text_value=&#39;ru&#39; where resource_type_id=2 and metadata_field_id=38 and text_value=&#39;Ru&#39;;
update metadatavalue set text_value=&#39;in&#39; where resource_type_id=2 and metadata_field_id=38 and text_value ~ &#39;(IN|In)&#39;;
delete from metadatavalue where resource_type_id=2 and metadata_field_id=38 and text_value ~ &#39;(dc.language.iso|CGIAR Challenge Program on Water and Food)&#39;;
</code></pre><ul>
<li>Continue proofing Peter&rsquo;s author corrections that I started yesterday, faceting on non blank, non flagged, and briefly scrolling through the values of the corrections to find encoding errors for French and Spanish names</li>
</ul>
<p><img src="/cgspace-notes/2018/01/openrefine-authors.png" alt="OpenRefine Authors"></p>
<ul>
<li>Apply corrections using <a href="https://gist.github.com/alanorth/df92cbfb54d762ba21b28f7cd83b6897">fix-metadata-values.py</a>:</li>
</ul>
<pre tabindex="0"><code>$ ./fix-metadata-values.py -i /tmp/2018-01-14-Authors-1300-Corrections.csv -f dc.contributor.author -t correct -m 3 -d dspace -u dspace -p &#39;fuuu&#39;
</code></pre><ul>
<li>In looking at some of the values to delete or check I found some metadata values whose handles I could not resolve via SQL:</li>
</ul>
<pre tabindex="0"><code>dspace=# select * from metadatavalue where resource_type_id=2 and metadata_field_id=3 and text_value=&#39;Tarawali&#39;;
metadata_value_id | resource_id | metadata_field_id | text_value | text_lang | place | authority | confidence | resource_type_id
-------------------+-------------+-------------------+------------+-----------+-------+-----------+------------+------------------
2757936 | 4369 | 3 | Tarawali | | 9 | | 600 | 2
(1 row)
dspace=# select handle from item, handle where handle.resource_id = item.item_id AND item.item_id = &#39;4369&#39;;
handle
--------
(0 rows)
</code></pre><ul>
<li>Even searching in the DSpace advanced search for author equals &ldquo;Tarawali&rdquo; produces nothing&hellip;</li>
<li>Otherwise, the <a href="https://wiki.lyrasis.org/display/DSPACE/Helper+SQL+functions+for+DSpace+5">DSpace 5 SQL Helper Functions</a> provide <code>ds5_item2itemhandle()</code>, which is much easier than my long query above that I always have to go search for</li>
<li>For example, to find the Handle for an item that has the author &ldquo;Erni&rdquo;:</li>
</ul>
<pre tabindex="0"><code>dspace=# select * from metadatavalue where resource_type_id=2 and metadata_field_id=3 and text_value=&#39;Erni&#39;;
metadata_value_id | resource_id | metadata_field_id | text_value | text_lang | place | authority | confidence | resource_type_id
-------------------+-------------+-------------------+------------+-----------+-------+--------------------------------------+------------+------------------
2612150 | 70308 | 3 | Erni | | 9 | 3fe10c68-6773-49a7-89cc-63eb508723f2 | -1 | 2
(1 row)
dspace=# select ds5_item2itemhandle(70308);
ds5_item2itemhandle
---------------------
10568/68609
(1 row)
</code></pre><ul>
<li>Next I apply the author deletions:</li>
</ul>
<pre tabindex="0"><code>$ ./delete-metadata-values.py -i /tmp/2018-01-14-Authors-5-Deletions.csv -f dc.contributor.author -m 3 -d dspace -u dspace -p &#39;fuuu&#39;
</code></pre><ul>
<li>Now working on the affiliation corrections from Peter:</li>
</ul>
<pre tabindex="0"><code>$ ./fix-metadata-values.py -i /tmp/2018-01-15-Affiliations-888-Corrections.csv -f cg.contributor.affiliation -t correct -m 211 -d dspace -u dspace -p &#39;fuuu&#39;
$ ./delete-metadata-values.py -i /tmp/2018-01-15-Affiliations-11-Deletions.csv -f cg.contributor.affiliation -m 211 -d dspace -u dspace -p &#39;fuuu&#39;
</code></pre><ul>
<li>Now I made a new list of affiliations for Peter to look through:</li>
</ul>
<pre tabindex="0"><code>dspace=# \copy (select distinct text_value, count(*) as count from metadatavalue where metadata_field_id = (select metadata_field_id from metadatafieldregistry where metadata_schema_id = 2 and element = &#39;contributor&#39; and qualifier = &#39;affiliation&#39;) AND resource_type_id = 2 group by text_value order by count desc) to /tmp/affiliations.csv with csv;
COPY 4552
</code></pre><ul>
<li>Looking over the affiliations again I see dozens of CIAT ones with their affiliation formatted like: International Center for Tropical Agriculture (CIAT)</li>
<li>For example, this one is from just last month: <a href="https://cgspace.cgiar.org/handle/10568/89930">https://cgspace.cgiar.org/handle/10568/89930</a></li>
<li>Our controlled vocabulary has this in the format without the abbreviation: International Center for Tropical Agriculture</li>
<li>So some submitters don&rsquo;t know to use the controlled vocabulary lookup</li>
<li>Help Sisay with some thumbnails for book chapters in Open Refine and SAFBuilder</li>
<li>CGSpace users were having problems logging in, I think something&rsquo;s wrong with LDAP because I see this in the logs:</li>
</ul>
<pre tabindex="0"><code>2018-01-15 12:53:15,810 WARN org.dspace.authenticate.LDAPAuthentication @ anonymous:session_id=2386749547D03E0AA4EC7E44181A7552:ip_addr=x.x.x.x:ldap_authentication:type=failed_auth javax.naming.AuthenticationException\colon; [LDAP\colon; error code 49 - 80090308\colon; LdapErr\colon; DSID-0C090400, comment\colon; AcceptSecurityContext error, data 775, v1db1^@]
</code></pre><ul>
<li>Looks like we processed 2.9 million requests on CGSpace in 2017-12:</li>
</ul>
<pre tabindex="0"><code># time zcat --force /var/log/nginx/* | grep -cE &#34;[0-9]{1,2}/Dec/2017&#34;
2890041
real 0m25.756s
user 0m28.016s
sys 0m2.210s
</code></pre><h2 id="2018-01-16">2018-01-16</h2>
<ul>
<li>Meeting with CGSpace team, a few action items:
<ul>
<li>Discuss standardized names for CRPs and centers with ICARDA (don&rsquo;t wait for CG Core)</li>
<li>Re-send DC rights implementation and forward to everyone so we can move forward with it (without the URI field for now)</li>
<li>Start looking at where I was with the AGROVOC API</li>
<li>Have a controlled vocabulary for CGIAR authors&rsquo; names and ORCIDs? Perhaps values like: Orth, Alan S. (0000-0002-1735-7458) (a rough sketch is at the end of today&rsquo;s notes)</li>
<li>Need to find the metadata field name that ICARDA is using for their ORCIDs</li>
<li>Update text for DSpace version plan on wiki</li>
<li>Come up with an SLA, something like: <em>In return for your contribution we will, to the best of our ability, ensure 99.5% (&ldquo;two and a half nines&rdquo;) uptime of CGSpace, ensure data is stored in open formats and safely backed up, follow CG Core metadata standards, &hellip;</em></li>
<li>Add Sisay and Danny to Uptime Robot and allow them to restart Tomcat on CGSpace ✔</li>
</ul>
</li>
<li>I removed Tsega&rsquo;s SSH access to the web and DSpace servers, and asked Danny to check whether there is anything he needs from Tsega&rsquo;s home directories so we can delete the accounts completely</li>
<li>I removed Tsega&rsquo;s access to Linode dashboard as well</li>
<li>I ended up creating a Jira issue for my <code>db.jndi</code> documentation fix: <a href="https://jira.duraspace.org/browse/DS-3803">DS-3803</a></li>
<li>The DSpace developers said they wanted each pull request to be associated with a Jira issue</li>
</ul>
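<ul>
<li>To capture the controlled vocabulary idea for author names and ORCID iDs from the meeting notes above, here is a rough sketch of what such value-pairs could look like in <code>input-forms.xml</code> (the value-pairs name, field, and format are hypothetical, as nothing is decided yet):</li>
</ul>
<pre tabindex="0"><code>&lt;!-- sketch only: a controlled vocabulary entry pairing an author name with their ORCID iD --&gt;
&lt;value-pairs value-pairs-name=&quot;cgiar_author_orcids&quot; dc-term=&quot;contributor_author&quot;&gt;
  &lt;pair&gt;
    &lt;displayed-value&gt;Orth, Alan S. (0000-0002-1735-7458)&lt;/displayed-value&gt;
    &lt;stored-value&gt;Orth, Alan S. (0000-0002-1735-7458)&lt;/stored-value&gt;
  &lt;/pair&gt;
&lt;/value-pairs&gt;
</code></pre>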
<h2 id="2018-01-17">2018-01-17</h2>
<ul>
<li>Abenet asked me to proof and upload 54 records for LIVES</li>
<li>A few records were missing countries (even though they&rsquo;re all from Ethiopia)</li>
<li>Also, there are whitespace issues in many columns, and the items are mapped to the LIVES and ILRI articles collections, not Theses</li>
<li>In any case, importing them like this:</li>
</ul>
<pre tabindex="0"><code>$ export JAVA_OPTS=&#34;-Dfile.encoding=UTF-8 -Xmx512m -XX:+TieredCompilation -XX:TieredStopAtLevel=1&#34;
$ dspace import -a -e aorth@mjanja.ch -s /tmp/2018-01-16\ LIVES/SimpleArchiveFormat -m lives.map &amp;&gt; lives.log
</code></pre><ul>
<li>And fantastic, before I started the import there were 10 PostgreSQL connections, and then CGSpace crashed during the upload</li>
<li>When I looked there were 210 PostgreSQL connections!</li>
<li>I don&rsquo;t see any high load in XMLUI or REST/OAI:</li>
</ul>
<pre tabindex="0"><code># cat /var/log/nginx/access.log /var/log/nginx/access.log.1 /var/log/nginx/library-access.log /var/log/nginx/library-access.log.1 | grep -E &#34;17/Jan/2018&#34; | awk &#39;{print $1}&#39; | sort -n | uniq -c | sort -h | tail
381 40.77.167.124
403 213.55.99.121
431 207.46.13.60
445 157.55.39.113
445 157.55.39.231
449 95.108.181.88
453 68.180.229.254
593 54.91.48.104
757 104.196.152.243
776 66.249.66.90
# cat /var/log/nginx/rest.log /var/log/nginx/rest.log.1 /var/log/nginx/oai.log /var/log/nginx/oai.log.1 | grep -E &#34;17/Jan/2018&#34; | awk &#39;{print $1}&#39; | sort -n | uniq -c | sort -h | tail
11 205.201.132.14
11 40.77.167.124
15 35.226.23.240
16 157.55.39.231
16 66.249.64.155
18 66.249.66.90
22 95.108.181.88
58 104.196.152.243
4106 70.32.83.92
9229 45.5.184.196
</code></pre><ul>
<li>But I do see this strange message in the dspace log:</li>
</ul>
<pre tabindex="0"><code>2018-01-17 07:59:25,856 INFO org.apache.http.impl.client.SystemDefaultHttpClient @ I/O exception (org.apache.http.NoHttpResponseException) caught when processing request to {}-&gt;http://localhost:8081: The target server failed to respond
2018-01-17 07:59:25,856 INFO org.apache.http.impl.client.SystemDefaultHttpClient @ Retrying request to {}-&gt;http://localhost:8081
</code></pre><ul>
<li>I have NEVER seen this error before, and there is no error before or after that in DSpace&rsquo;s solr.log</li>
<li>Tomcat&rsquo;s catalina.out does show something interesting, though, right at that time:</li>
</ul>
<pre tabindex="0"><code>[====================&gt; ]40% time remaining: 7 hour(s) 14 minute(s) 45 seconds. timestamp: 2018-01-17 07:57:02
[====================&gt; ]40% time remaining: 7 hour(s) 14 minute(s) 45 seconds. timestamp: 2018-01-17 07:57:11
[====================&gt; ]40% time remaining: 7 hour(s) 14 minute(s) 44 seconds. timestamp: 2018-01-17 07:57:37
[====================&gt; ]40% time remaining: 7 hour(s) 16 minute(s) 5 seconds. timestamp: 2018-01-17 07:57:49
Exception in thread &#34;http-bio-127.0.0.1-8081-exec-627&#34; java.lang.OutOfMemoryError: Java heap space
at org.apache.lucene.util.FixedBitSet.clone(FixedBitSet.java:576)
at org.apache.solr.search.BitDocSet.andNot(BitDocSet.java:222)
at org.apache.solr.search.SolrIndexSearcher.getProcessedFilter(SolrIndexSearcher.java:1067)
at org.apache.solr.search.SolrIndexSearcher.getDocListNC(SolrIndexSearcher.java:1557)
at org.apache.solr.search.SolrIndexSearcher.getDocListC(SolrIndexSearcher.java:1433)
at org.apache.solr.search.SolrIndexSearcher.search(SolrIndexSearcher.java:514)
at org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:485)
at org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:218)
at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:1967)
at org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:777)
at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:418)
at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:207)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:241)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:208)
at org.dspace.solr.filters.LocalHostRestrictionFilter.doFilter(LocalHostRestrictionFilter.java:50)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:241)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:208)
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:221)
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:122)
at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:505)
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:169)
at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:103)
at org.apache.catalina.valves.CrawlerSessionManagerValve.invoke(CrawlerSessionManagerValve.java:180)
at org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:956)
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:116)
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:436)
at org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:1078)
at org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:625)
at org.apache.tomcat.util.net.JIoEndpoint$SocketProcessor.run(JIoEndpoint.java:318)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
</code></pre><ul>
<li>You can see the timestamp above, which is some Atmire nightly task I think, but I can&rsquo;t figure out which one</li>
<li>So I restarted Tomcat and tried the import again, which finished very quickly and without errors!</li>
</ul>
<pre tabindex="0"><code>$ dspace import -a -e aorth@mjanja.ch -s /tmp/2018-01-16\ LIVES/SimpleArchiveFormat -m lives2.map &amp;&gt; lives2.log
</code></pre><ul>
<li>Looking at the JVM graphs from Munin it does look like the heap ran out of memory (see the blue dip just before the green spike when I restarted Tomcat):</li>
</ul>
<p><img src="/cgspace-notes/2018/01/tomcat-jvm-day.png" alt="Tomcat JVM Heap"></p>
<ul>
<li>I&rsquo;m playing with maven repository caching using Artifactory in a Docker instance: <a href="https://www.jfrog.com/confluence/display/RTF/Installing+with+Docker">https://www.jfrog.com/confluence/display/RTF/Installing+with+Docker</a></li>
</ul>
<pre tabindex="0"><code>$ docker pull docker.bintray.io/jfrog/artifactory-oss:latest
$ docker volume create --name artifactory5_data
$ docker network create dspace-build
$ docker run --network dspace-build --name artifactory -d -v artifactory5_data:/var/opt/jfrog/artifactory -p 8081:8081 docker.bintray.io/jfrog/artifactory-oss:latest
</code></pre><ul>
<li>Then configure the local maven to use it in settings.xml with the settings from &ldquo;Set Me Up&rdquo;: <a href="https://www.jfrog.com/confluence/display/RTF/Using+Artifactory">https://www.jfrog.com/confluence/display/RTF/Using+Artifactory</a></li>
<li>This could be a game changer for testing and running the Docker DSpace image</li>
<li>Wow, I even managed to add the Atmire repository as a remote and map it into the <code>libs-release</code> virtual repository, then tell maven to use it for <code>atmire.com-releases</code> in settings.xml!</li>
<li>Hmm, some maven dependencies for the SWORDv2 web application in DSpace 5.5 are broken:</li>
</ul>
<pre tabindex="0"><code>[ERROR] Failed to execute goal on project dspace-swordv2: Could not resolve dependencies for project org.dspace:dspace-swordv2:war:5.5: Failed to collect dependencies at org.swordapp:sword2-server:jar:classes:1.0 -&gt; org.apache.abdera:abdera-client:jar:1.1.1 -&gt; org.apache.abdera:abdera-core:jar:1.1.1 -&gt; org.apache.abdera:abdera-i18n:jar:1.1.1 -&gt; org.apache.geronimo.specs:geronimo-activation_1.0.2_spec:jar:1.1: Failed to read artifact descriptor for org.apache.geronimo.specs:geronimo-activation_1.0.2_spec:jar:1.1: Could not find artifact org.apache.geronimo.specs:specs:pom:1.1 in central (http://localhost:8081/artifactory/libs-release) -&gt; [Help 1]
</code></pre><ul>
<li>I never noticed because I build with that web application disabled:</li>
</ul>
<pre tabindex="0"><code>$ mvn -U -Dmirage2.on=true -Dmirage2.deps.included=false -Denv=localhost -P \!dspace-sword,\!dspace-swordv2 clean package
</code></pre><ul>
<li>UptimeRobot said CGSpace went down for a few minutes</li>
<li>I didn&rsquo;t do anything but it came back up on its own</li>
<li>I don&rsquo;t see anything unusual in the XMLUI or REST/OAI logs</li>
<li>Now Linode alert says the CPU load is high, <em>sigh</em></li>
<li>Regarding the heap space error earlier today, it looks like it does happen a few times a week or month (I&rsquo;m not sure how far these logs go back, as they are not strictly daily):</li>
</ul>
<pre tabindex="0"><code># zgrep -c java.lang.OutOfMemoryError /var/log/tomcat7/catalina.out* | grep -v :0
/var/log/tomcat7/catalina.out:2
/var/log/tomcat7/catalina.out.10.gz:7
/var/log/tomcat7/catalina.out.11.gz:1
/var/log/tomcat7/catalina.out.12.gz:2
/var/log/tomcat7/catalina.out.15.gz:1
/var/log/tomcat7/catalina.out.17.gz:2
/var/log/tomcat7/catalina.out.18.gz:3
/var/log/tomcat7/catalina.out.20.gz:1
/var/log/tomcat7/catalina.out.21.gz:4
/var/log/tomcat7/catalina.out.25.gz:1
/var/log/tomcat7/catalina.out.28.gz:1
/var/log/tomcat7/catalina.out.2.gz:6
/var/log/tomcat7/catalina.out.30.gz:2
/var/log/tomcat7/catalina.out.31.gz:1
/var/log/tomcat7/catalina.out.34.gz:1
/var/log/tomcat7/catalina.out.38.gz:1
/var/log/tomcat7/catalina.out.39.gz:1
/var/log/tomcat7/catalina.out.4.gz:3
/var/log/tomcat7/catalina.out.6.gz:2
/var/log/tomcat7/catalina.out.7.gz:14
</code></pre><ul>
<li>Overall the heap space usage in the munin graph seems ok, though I usually increase it by 512MB over the average a few times per year as usage grows</li>
<li>But maybe I should increase it by more, like 1024MB, to give a bit more head room</li>
</ul>
<h2 id="2018-01-18">2018-01-18</h2>
<ul>
<li>UptimeRobot said CGSpace was down for 1 minute last night</li>
<li>I don&rsquo;t see any errors in the nginx or catalina logs, so I guess UptimeRobot just got impatient and closed the request, which caused nginx to send an HTTP 499</li>
<li>I realize I never did a full re-index after the SQL author and affiliation updates last week, so I should force one now:</li>
</ul>
<pre tabindex="0"><code>$ export JAVA_OPTS=&#34;-Dfile.encoding=UTF-8 -Xmx1024m -XX:+TieredCompilation -XX:TieredStopAtLevel=1&#34;
$ time schedtool -D -e ionice -c2 -n7 nice -n19 /home/cgspace.cgiar.org/bin/dspace index-discovery -b
</code></pre><ul>
<li>Maria from Bioversity asked if I could remove the abstracts from all of their Limited Access items in the <a href="https://cgspace.cgiar.org/handle/10568/35501">Bioversity Journal Articles</a> collection</li>
<li>It&rsquo;s easy enough to do in OpenRefine, but you have to be careful to only get those items that are uploaded into Bioversity&rsquo;s collection, not the ones that are mapped from others!</li>
<li>Use this GREL in OpenRefine after isolating all the Limited Access items: <code>value.startsWith(&quot;10568/35501&quot;)</code></li>
<li>UptimeRobot said CGSpace went down AGAIN and both Sisay and Danny immediately logged in and restarted Tomcat without talking to me <em>or</em> each other!</li>
</ul>
<pre tabindex="0"><code>Jan 18 07:01:22 linode18 sudo[10805]: dhmichael : TTY=pts/5 ; PWD=/home/dhmichael ; USER=root ; COMMAND=/bin/systemctl restart tomcat7
Jan 18 07:01:22 linode18 sudo[10805]: pam_unix(sudo:session): session opened for user root by dhmichael(uid=0)
Jan 18 07:01:22 linode18 systemd[1]: Stopping LSB: Start Tomcat....
Jan 18 07:01:22 linode18 sudo[10812]: swebshet : TTY=pts/3 ; PWD=/home/swebshet ; USER=root ; COMMAND=/bin/systemctl restart tomcat7
Jan 18 07:01:22 linode18 sudo[10812]: pam_unix(sudo:session): session opened for user root by swebshet(uid=0)
</code></pre><ul>
<li>I had to cancel the Discovery indexing and I&rsquo;ll have to re-try it another time when the server isn&rsquo;t so busy (it had already taken two hours and wasn&rsquo;t even close to being done)</li>
<li>For now I&rsquo;ve increased the Tomcat JVM heap from 5632 to 6144m, to give ~1GB of free memory over the average usage to hopefully account for spikes caused by load or background jobs</li>
</ul>
<h2 id="2018-01-19">2018-01-19</h2>
<ul>
<li>Linode alerted and said that the CPU load was 264.1% on CGSpace</li>
<li>Start the Discovery indexing again:</li>
</ul>
<pre tabindex="0"><code>$ export JAVA_OPTS=&#34;-Dfile.encoding=UTF-8 -Xmx1024m -XX:+TieredCompilation -XX:TieredStopAtLevel=1&#34;
$ time schedtool -D -e ionice -c2 -n7 nice -n19 /home/cgspace.cgiar.org/bin/dspace index-discovery -b
</code></pre><ul>
<li>Linode alerted again and said that CGSpace was using 301% CPU</li>
<li>Peter emailed to ask why <a href="https://cgspace.cgiar.org/handle/10568/88090">this item</a> doesn&rsquo;t have an Altmetric badge on CGSpace but does have one on the <a href="https://www.altmetric.com/details/26709041">Altmetric dashboard</a></li>
<li>Looks like our badge code calls the <code>handle</code> endpoint which doesn&rsquo;t exist:</li>
</ul>
<pre tabindex="0"><code>https://api.altmetric.com/v1/handle/10568/88090
</code></pre><ul>
<li>I told Peter we should keep an eye out and try again next week</li>
</ul>
<h2 id="2018-01-20">2018-01-20</h2>
<ul>
<li>Run the authority indexing script on CGSpace and of course it died:</li>
</ul>
<pre tabindex="0"><code>$ time schedtool -D -e ionice -c2 -n7 nice -n19 /home/cgspace.cgiar.org/bin/dspace index-authority
Retrieving all data
Initialize org.dspace.authority.indexer.DSpaceAuthorityIndexer
Exception: null
java.lang.NullPointerException
at org.dspace.authority.AuthorityValueGenerator.generateRaw(AuthorityValueGenerator.java:82)
at org.dspace.authority.AuthorityValueGenerator.generate(AuthorityValueGenerator.java:39)
at org.dspace.authority.indexer.DSpaceAuthorityIndexer.prepareNextValue(DSpaceAuthorityIndexer.java:201)
at org.dspace.authority.indexer.DSpaceAuthorityIndexer.hasMore(DSpaceAuthorityIndexer.java:132)
at org.dspace.authority.indexer.DSpaceAuthorityIndexer.hasMore(DSpaceAuthorityIndexer.java:144)
at org.dspace.authority.indexer.DSpaceAuthorityIndexer.hasMore(DSpaceAuthorityIndexer.java:144)
at org.dspace.authority.indexer.DSpaceAuthorityIndexer.hasMore(DSpaceAuthorityIndexer.java:159)
at org.dspace.authority.indexer.DSpaceAuthorityIndexer.hasMore(DSpaceAuthorityIndexer.java:144)
at org.dspace.authority.indexer.DSpaceAuthorityIndexer.hasMore(DSpaceAuthorityIndexer.java:144)
at org.dspace.authority.indexer.AuthorityIndexClient.main(AuthorityIndexClient.java:61)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.dspace.app.launcher.ScriptLauncher.runOneCommand(ScriptLauncher.java:226)
at org.dspace.app.launcher.ScriptLauncher.main(ScriptLauncher.java:78)
real 7m2.241s
user 1m33.198s
sys 0m12.317s
</code></pre><ul>
<li>I continued testing the abstract cleanups on Bioversity&rsquo;s Journal Articles collection that I had started a few days ago</li>
<li>In the end there were 324 items in the collection that were Limited Access, but only 199 had abstracts</li>
<li>I want to document the workflow of adding a production PostgreSQL database to a development instance of <a href="https://github.com/alanorth/docker-dspace">DSpace in Docker</a>:</li>
</ul>
<pre tabindex="0"><code>$ docker exec dspace_db dropdb -U postgres dspace
$ docker exec dspace_db createdb -U postgres -O dspace --encoding=UNICODE dspace
$ docker exec dspace_db psql -U postgres dspace -c &#39;alter user dspace createuser;&#39;
$ docker cp test.dump dspace_db:/tmp/test.dump
$ docker exec dspace_db pg_restore -U postgres -d dspace /tmp/test.dump
$ docker exec dspace_db psql -U postgres dspace -c &#39;alter user dspace nocreateuser;&#39;
$ docker exec dspace_db vacuumdb -U postgres dspace
$ docker cp ~/src/git/DSpace/dspace/etc/postgres/update-sequences.sql dspace_db:/tmp
$ docker exec dspace_db psql -U dspace -f /tmp/update-sequences.sql dspace
</code></pre><h2 id="2018-01-22">2018-01-22</h2>
<ul>
<li>Look over Udana&rsquo;s CSV of 25 WLE records from last week</li>
<li>I sent him some corrections:
<ul>
<li>The file encoding is Windows-1252</li>
<li>There were whitespace issues in the dc.identifier.citation field (spaces at the beginning and end, and multiple spaces in between some words)</li>
<li>Also, the authors listed in the citation need to be in normal format, separated by commas or colons (however you prefer), not with ||</li>
<li>There were spaces in the beginning and end of some cg.identifier.doi fields</li>
<li>Make sure that the cg.coverage.countries field is just countries: ie, no &ldquo;SOUTH ETHIOPIA&rdquo; or &ldquo;EAST AFRICA&rdquo; (the first should just be ETHIOPIA, the second should be in cg.coverage.region instead)</li>
<li>The current list of regions we use is here: <a href="https://github.com/ilri/DSpace/blob/5_x-prod/dspace/config/input-forms.xml#L5162">https://github.com/ilri/DSpace/blob/5_x-prod/dspace/config/input-forms.xml#L5162</a></li>
<li>You have a syntax error in your cg.coverage.regions (extra ||)</li>
<li>The value of dc.identifier.issn should just be the ISSN but you have: eISSN: 1479-487X</li>
</ul>
</li>
<li>I wrote a quick Python script to use the DSpace REST API to find all collections under a given community</li>
<li>The source code is here: <a href="https://gist.github.com/alanorth/ddd7f555f0e487fe0e9d3eb4ff26ce50">rest-find-collections.py</a></li>
<li>Peter had said that he found a bunch of ILRI collections that were called &ldquo;untitled&rdquo;, but I don&rsquo;t see any:</li>
</ul>
<pre tabindex="0"><code>$ ./rest-find-collections.py 10568/1 | wc -l
308
$ ./rest-find-collections.py 10568/1 | grep -i untitled
</code></pre><ul>
<li>Looking at the <a href="https://tomcat.apache.org/tomcat-7.0-doc/config/http.html">Tomcat connector docs</a> I think we really need to increase <code>maxThreads</code></li>
<li>The default is 200, which can easily be taken up by bots, considering that Google and Bing sometimes browse with fifty (50) connections each!</li>
<li>Before I increase this I want to see if I can measure and graph this, and then benchmark</li>
<li>I&rsquo;ll probably also increase <code>minSpareThreads</code> to 20 (its default is 10)</li>
<li>I still want to bump up <code>acceptorThreadCount</code> from 1 to 2 as well, as the documentation says this should be increased on multi-core systems (see the connector sketch at the end of this list)</li>
<li>I spent quite a bit of time looking at <code>jvisualvm</code> and <code>jconsole</code> today</li>
<li>Run system updates on DSpace Test and reboot it</li>
<li>I see I can monitor the number of Tomcat threads and some detailed JVM memory stuff if I install <code>munin-plugins-java</code></li>
<li>I&rsquo;d still like to get arbitrary mbeans like activeSessions etc, though</li>
<li>I can&rsquo;t remember if I had to configure the jmx settings in <code>/etc/munin/plugin-conf.d/munin-node</code> or not—I think all I did was re-run the <code>munin-node-configure</code> script and of course enable JMX in Tomcat&rsquo;s JVM options</li>
</ul>
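<ul>
<li>For reference, a sketch of what the HTTP <code>&lt;Connector&gt;</code> in Tomcat&rsquo;s server.xml might look like with the changes I&rsquo;m considering (the values are illustrative, not what is currently deployed):</li>
</ul>
<pre tabindex="0"><code>&lt;!-- sketch only: the connector that nginx proxies to --&gt;
&lt;Connector port=&quot;8443&quot;
           address=&quot;127.0.0.1&quot;
           protocol=&quot;HTTP/1.1&quot;
           maxThreads=&quot;400&quot;
           minSpareThreads=&quot;20&quot;
           processorCache=&quot;400&quot;
           acceptorThreadCount=&quot;2&quot;
           connectionTimeout=&quot;20000&quot;
           URIEncoding=&quot;UTF-8&quot;/&gt;
</code></pre>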
<h2 id="2018-01-23">2018-01-23</h2>
<ul>
<li>Thinking about generating a jmeter test plan for DSpace, along the lines of <a href="https://github.com/Georgetown-University-Libraries/dspace-performance-test">Georgetown&rsquo;s dspace-performance-test</a></li>
<li>I got a list of all the GET requests on CGSpace for January 21st (the last time Linode complained the load was high), excluding admin calls:</li>
</ul>
<pre tabindex="0"><code># zcat --force /var/log/nginx/access.log.2.gz /var/log/nginx/access.log.3.gz /var/log/nginx/library-access.log.2.gz /var/log/nginx/library-access.log.3.gz /var/log/nginx/rest.log.2.gz /var/log/nginx/rest.log.3.gz /var/log/nginx/oai.log.2.gz /var/log/nginx/oai.log.3.gz /var/log/nginx/error.log.2.gz /var/log/nginx/error.log.3.gz | grep &#34;21/Jan/2018&#34; | grep &#34;GET &#34; | grep -c -v &#34;/admin&#34;
56405
</code></pre><ul>
<li>Apparently about 28% of these requests were for bitstreams, 30% for the REST API, and 30% for handles:</li>
</ul>
<pre tabindex="0"><code># zcat --force /var/log/nginx/access.log.2.gz /var/log/nginx/access.log.3.gz /var/log/nginx/library-access.log.2.gz /var/log/nginx/library-access.log.3.gz /var/log/nginx/rest.log.2.gz /var/log/nginx/rest.log.3.gz /var/log/nginx/oai.log.2.gz /var/log/nginx/oai.log.3.gz /var/log/nginx/error.log.2.gz /var/log/nginx/error.log.3.gz | grep &#34;21/Jan/2018&#34; | grep &#34;GET &#34; | grep -v &#34;/admin&#34; | awk &#39;{print $7}&#39; | grep -Eo &#34;^/(handle|bitstream|rest|oai)/&#34; | sort | uniq -c | sort -n
38 /oai/
14406 /bitstream/
15179 /rest/
15191 /handle/
</code></pre><ul>
<li>And 3% were to the homepage or search:</li>
</ul>
<pre tabindex="0"><code># zcat --force /var/log/nginx/access.log.2.gz /var/log/nginx/access.log.3.gz /var/log/nginx/library-access.log.2.gz /var/log/nginx/library-access.log.3.gz /var/log/nginx/rest.log.2.gz /var/log/nginx/rest.log.3.gz /var/log/nginx/oai.log.2.gz /var/log/nginx/oai.log.3.gz /var/log/nginx/error.log.2.gz /var/log/nginx/error.log.3.gz | grep &#34;21/Jan/2018&#34; | grep &#34;GET &#34; | grep -v &#34;/admin&#34; | awk &#39;{print $7}&#39; | grep -Eo &#39;^/($|open-search|discover)&#39; | sort | uniq -c
1050 /
413 /discover
170 /open-search
</code></pre><ul>
<li>The last 10% or so seem to be for static assets that would be served by nginx anyways:</li>
</ul>
<pre tabindex="0"><code># zcat --force /var/log/nginx/access.log.2.gz /var/log/nginx/access.log.3.gz /var/log/nginx/library-access.log.2.gz /var/log/nginx/library-access.log.3.gz /var/log/nginx/rest.log.2.gz /var/log/nginx/rest.log.3.gz /var/log/nginx/oai.log.2.gz /var/log/nginx/oai.log.3.gz /var/log/nginx/error.log.2.gz /var/log/nginx/error.log.3.gz | grep &#34;21/Jan/2018&#34; | grep &#34;GET &#34; | grep -v &#34;/admin&#34; | awk &#39;{print $7}&#39; | grep -v bitstream | grep -Eo &#39;\.(js|css|png|jpg|jpeg|php|svg|gif|txt|map)$&#39; | sort | uniq -c | sort -n
2 .gif
7 .css
84 .js
433 .php
882 .txt
2551 .png
</code></pre><ul>
<li>I can definitely design a test plan on this!</li>
</ul>
<h2 id="2018-01-24">2018-01-24</h2>
<ul>
<li>Looking at the REST requests, most of them are to expand all or metadata, but 5% are for retrieving bitstreams:</li>
</ul>
<pre tabindex="0"><code># zcat --force /var/log/nginx/access.log.3.gz /var/log/nginx/access.log.4.gz /var/log/nginx/library-access.log.3.gz /var/log/nginx/library-access.log.4.gz /var/log/nginx/rest.log.3.gz /var/log/nginx/rest.log.4.gz /var/log/nginx/oai.log.3.gz /var/log/nginx/oai.log.4.gz /var/log/nginx/error.log.3.gz /var/log/nginx/error.log.4.gz | grep &#34;21/Jan/2018&#34; | grep &#34;GET &#34; | grep -v &#34;/admin&#34; | awk &#39;{print $7}&#39; | grep -E &#34;^/rest&#34; | grep -Eo &#34;(retrieve|expand=[a-z].*)&#34; | sort | uniq -c | sort -n
1 expand=collections
16 expand=all&amp;limit=1
45 expand=items
775 retrieve
5675 expand=all
8633 expand=metadata
</code></pre><ul>
<li>I finished creating the test plan for DSpace Test and ran it from my Linode with:</li>
</ul>
<pre tabindex="0"><code>$ jmeter -n -t DSpacePerfTest-dspacetest.cgiar.org.jmx -l 2018-01-24-1.jtl
</code></pre><ul>
<li>Atmire responded to <a href="https://tracker.atmire.com/tickets-cgiar-ilri/view-ticket?id=560">my issue from two weeks ago</a> and said they will start looking into DSpace 5.8 compatibility for CGSpace</li>
<li>I set up a new Arch Linux Linode instance with 8192 MB of RAM and ran the test plan a few times to get a baseline:</li>
</ul>
<pre tabindex="0"><code># lscpu
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 4
On-line CPU(s) list: 0-3
Thread(s) per core: 1
Core(s) per socket: 1
Socket(s): 4
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 63
Model name: Intel(R) Xeon(R) CPU E5-2680 v3 @ 2.50GHz
Stepping: 2
CPU MHz: 2499.994
BogoMIPS: 5001.32
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 32K
L1i cache: 32K
L2 cache: 4096K
L3 cache: 16384K
NUMA node0 CPU(s): 0-3
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good nopl xtopology cpuid pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm cpuid_fault invpcid_single pti retpoline fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid xsaveopt arat
# free -m
total used free shared buff/cache available
Mem: 7970 107 7759 1 103 7771
Swap: 255 0 255
# pacman -Syu
# pacman -S git wget jre8-openjdk-headless mosh htop tmux
# useradd -m test
# su - test
$ git clone -b ilri https://github.com/alanorth/dspace-performance-test.git
$ wget http://www-us.apache.org/dist//jmeter/binaries/apache-jmeter-3.3.tgz
$ tar xf apache-jmeter-3.3.tgz
$ cd apache-jmeter-3.3/bin
$ ./jmeter -n -t ~/dspace-performance-test/DSpacePerfTest-dspacetest.cgiar.org.jmx -l ~/dspace-performance-test/2018-01-24-linode5451120-baseline.jtl -j ~/dspace-performance-test/2018-01-24-linode5451120-baseline.log
$ ./jmeter -n -t ~/dspace-performance-test/DSpacePerfTest-dspacetest.cgiar.org.jmx -l ~/dspace-performance-test/2018-01-24-linode5451120-baseline2.jtl -j ~/dspace-performance-test/2018-01-24-linode5451120-baseline2.log
$ ./jmeter -n -t ~/dspace-performance-test/DSpacePerfTest-dspacetest.cgiar.org.jmx -l ~/dspace-performance-test/2018-01-24-linode5451120-baseline3.jtl -j ~/dspace-performance-test/2018-01-24-linode5451120-baseline3.log
</code></pre><ul>
<li>Then I generated reports for these runs like this:</li>
</ul>
<pre tabindex="0"><code>$ jmeter -g 2018-01-24-linode5451120-baseline.jtl -o 2018-01-24-linode5451120-baseline
</code></pre><h2 id="2018-01-25">2018-01-25</h2>
<ul>
<li>Run another round of tests on DSpace Test with jmeter after changing Tomcat&rsquo;s <code>minSpareThreads</code> to 20 (default is 10) and <code>acceptorThreadCount</code> to 2 (default is 1):</li>
</ul>
<pre tabindex="0"><code>$ ./jmeter -n -t ~/dspace-performance-test/DSpacePerfTest-dspacetest.cgiar.org.jmx -l ~/dspace-performance-test/2018-01-25-linode5451120-tomcat-threads.jtl -j ~/dspace-performance-test/2018-01-25-linode5451120-tomcat-threads.log
$ ./jmeter -n -t ~/dspace-performance-test/DSpacePerfTest-dspacetest.cgiar.org.jmx -l ~/dspace-performance-test/2018-01-25-linode5451120-tomcat-threads2.jtl -j ~/dspace-performance-test/2018-01-25-linode5451120-tomcat-threads2.log
$ ./jmeter -n -t ~/dspace-performance-test/DSpacePerfTest-dspacetest.cgiar.org.jmx -l ~/dspace-performance-test/2018-01-25-linode5451120-tomcat-threads3.jtl -j ~/dspace-performance-test/2018-01-25-linode5451120-tomcat-threads3.log
</code></pre><ul>
<li>I changed the parameters back to the baseline ones and switched the Tomcat JVM garbage collector to G1GC and re-ran the tests</li>
<li>JVM options for Tomcat changed from <code>-Xms3072m -Xmx3072m -XX:+UseConcMarkSweepGC</code> to <code>-Xms3072m -Xmx3072m -XX:+UseG1GC -XX:+PerfDisableSharedMem</code></li>
</ul>
<pre tabindex="0"><code>$ ./jmeter -n -t ~/dspace-performance-test/DSpacePerfTest-dspacetest.cgiar.org.jmx -l ~/dspace-performance-test/2018-01-25-linode5451120-g1gc.jtl -j ~/dspace-performance-test/2018-01-25-linode5451120-g1gc.log
$ ./jmeter -n -t ~/dspace-performance-test/DSpacePerfTest-dspacetest.cgiar.org.jmx -l ~/dspace-performance-test/2018-01-25-linode5451120-g1gc2.jtl -j ~/dspace-performance-test/2018-01-25-linode5451120-g1gc2.log
$ ./jmeter -n -t ~/dspace-performance-test/DSpacePerfTest-dspacetest.cgiar.org.jmx -l ~/dspace-performance-test/2018-01-25-linode5451120-g1gc3.jtl -j ~/dspace-performance-test/2018-01-25-linode5451120-g1gc3.log
</code></pre><ul>
<li>I haven&rsquo;t had time to look at the results yet</li>
</ul>
<h2 id="2018-01-26">2018-01-26</h2>
<ul>
<li>Peter followed up about some of the points from the Skype meeting last week</li>
<li>Regarding the ORCID field issue, I see <a href="http://repo.mel.cgiar.org/handle/20.500.11766/7668?show=full">ICARDA&rsquo;s MELSpace is using <code>cg.creator.ID</code></a>: 0000-0001-9156-7691</li>
<li>I had floated the idea of using a controlled vocabulary with values formatted something like: Orth, Alan S. (0000-0002-1735-7458)</li>
<li>Update PostgreSQL JDBC driver version from 42.1.4 to 42.2.1 on DSpace Test, see: <a href="https://jdbc.postgresql.org/">https://jdbc.postgresql.org/</a></li>
<li>Reboot DSpace Test to get new Linode kernel (Linux 4.14.14-x86_64-linode94)</li>
<li>I am testing my old work on the <code>dc.rights</code> field, I had added a branch for it a few months ago</li>
<li>I added a list of Creative Commons and other licenses in <code>input-forms.xml</code></li>
<li>The problem is that Peter wanted to use two questions, one for CG centers and one for other parties, but using the same metadata value, which isn&rsquo;t possible (?)</li>
<li>So I used some creativity and made several value-pairs that display a label but don&rsquo;t store any value, ie:</li>
</ul>
<pre tabindex="0"><code>&lt;pair&gt;
&lt;displayed-value&gt;For products published by another party:&lt;/displayed-value&gt;
&lt;stored-value&gt;&lt;/stored-value&gt;
&lt;/pair&gt;
</code></pre><ul>
<li>I was worried that if a user selected this field for some reason DSpace would store an empty value, but it simply doesn&rsquo;t register that as a valid option:</li>
</ul>
<p><img src="/cgspace-notes/2018/01/dc-rights-submission.png" alt="Rights"></p>
<ul>
<li>I submitted a test item with ORCiDs and dc.rights from a controlled vocabulary on DSpace Test: <a href="https://dspacetest.cgiar.org/handle/10568/97703">https://dspacetest.cgiar.org/handle/10568/97703</a></li>
<li>I will send it to Peter to check and give feedback (ie, about the ORCiD field name as well as allowing users to add ORCiDs manually or not)</li>
</ul>
<h2 id="2018-01-28">2018-01-28</h2>
<ul>
<li>Assist Udana from WLE again to proof his 25 records and upload them to DSpace Test</li>
<li>I am playing with the <code>startStopThreads=&quot;0&quot;</code> parameter in Tomcat <code>&lt;Engine&gt;</code> and <code>&lt;Host&gt;</code> configuration (sketched below)</li>
<li>It reduces the start up time of Catalina by using multiple threads to start web applications in parallel</li>
<li>On my local test machine the startup time went from 70 to 30 seconds</li>
<li>See: <a href="https://tomcat.apache.org/tomcat-7.0-doc/config/host.html">https://tomcat.apache.org/tomcat-7.0-doc/config/host.html</a></li>
</ul>
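<ul>
<li>A minimal sketch of where that attribute goes in server.xml (the surrounding attributes are just the Tomcat defaults, not our actual configuration):</li>
</ul>
<pre tabindex="0"><code>&lt;!-- sketch only: start web applications in parallel during Catalina startup --&gt;
&lt;Engine name=&quot;Catalina&quot; defaultHost=&quot;localhost&quot; startStopThreads=&quot;0&quot;&gt;
  &lt;Host name=&quot;localhost&quot; appBase=&quot;webapps&quot; unpackWARs=&quot;true&quot; autoDeploy=&quot;true&quot; startStopThreads=&quot;0&quot;&gt;
    ...
  &lt;/Host&gt;
&lt;/Engine&gt;
</code></pre>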
<h2 id="2018-01-29">2018-01-29</h2>
<ul>
<li>CGSpace went down this morning for a few minutes, according to UptimeRobot</li>
<li>Looking at the DSpace logs I see this error happened just before UptimeRobot noticed it going down:</li>
</ul>
<pre tabindex="0"><code>2018-01-29 05:30:22,226 INFO org.dspace.usage.LoggerUsageEventListener @ anonymous:session_id=3775D4125D28EF0C691B08345D905141:ip_addr=68.180.229.254:view_item:handle=10568/71890
2018-01-29 05:30:22,322 ERROR org.dspace.app.xmlui.aspect.discovery.AbstractSearch @ org.apache.solr.search.SyntaxError: Cannot parse &#39;dateIssued_keyword:[1994+TO+1999]&#39;: Encountered &#34; &#34;]&#34; &#34;] &#34;&#34; at line 1, column 32.
Was expecting one of:
&#34;TO&#34; ...
&lt;RANGE_QUOTED&gt; ...
&lt;RANGE_GOOP&gt; ...
org.dspace.discovery.SearchServiceException: org.apache.solr.search.SyntaxError: Cannot parse &#39;dateIssued_keyword:[1994+TO+1999]&#39;: Encountered &#34; &#34;]&#34; &#34;] &#34;&#34; at line 1, column 32.
Was expecting one of:
&#34;TO&#34; ...
&lt;RANGE_QUOTED&gt; ...
&lt;RANGE_GOOP&gt; ...
</code></pre><ul>
<li>So is this an error caused by this particular client (which happens to be <code>Yahoo! Slurp</code>)?</li>
<li>I see a few dozen HTTP 499 errors in the nginx access log for a few minutes before this happened, but HTTP 499 is just when nginx says that the client closed the request early</li>
<li>Perhaps this from the nginx error log is relevant?</li>
</ul>
<pre tabindex="0"><code>2018/01/29 05:26:34 [warn] 26895#26895: *944759 an upstream response is buffered to a temporary file /var/cache/nginx/proxy_temp/6/16/0000026166 while reading upstream, client: 180.76.15.34, server: cgspace.cgiar.org, request: &#34;GET /bitstream/handle/10947/4658/FISH%20Leaflet.pdf?sequence=12 HTTP/1.1&#34;, upstream: &#34;http://127.0.0.1:8443/bitstream/handle/10947/4658/FISH%20Leaflet.pdf?sequence=12&#34;, host: &#34;cgspace.cgiar.org&#34;
</code></pre><ul>
<li>I think that must be unrelated, probably the client closed the request to nginx because DSpace (Tomcat) was taking too long</li>
<li>An interesting <a href="https://gist.github.com/magnetikonline/11312172">snippet to get the maximum and average nginx responses</a>:</li>
</ul>
<pre tabindex="0"><code># awk &#39;($9 ~ /200/) { i++;sum+=$10;max=$10&gt;max?$10:max; } END { printf(&#34;Maximum: %d\nAverage: %d\n&#34;,max,i?sum/i:0); }&#39; /var/log/nginx/access.log
Maximum: 2771268
Average: 210483
</code></pre><ul>
<li>I guess responses that don&rsquo;t fit in RAM get saved to disk (a default of 1024M), so this is definitely not the issue here, and that warning is totally unrelated</li>
<li>My best guess is that the Solr search error is related somehow but I can&rsquo;t figure it out</li>
<li>We definitely have enough database connections, as I haven&rsquo;t seen a pool error in weeks:</li>
</ul>
<pre tabindex="0"><code>$ grep -c &#34;Timeout: Pool empty.&#34; dspace.log.2018-01-2*
dspace.log.2018-01-20:0
dspace.log.2018-01-21:0
dspace.log.2018-01-22:0
dspace.log.2018-01-23:0
dspace.log.2018-01-24:0
dspace.log.2018-01-25:0
dspace.log.2018-01-26:0
dspace.log.2018-01-27:0
dspace.log.2018-01-28:0
dspace.log.2018-01-29:0
</code></pre><ul>
<li>Adam Hunt from WLE complained that pages take &ldquo;1-2 minutes&rdquo; to load each, from France and Sri Lanka</li>
<li>I asked him which particular pages, as right now pages load in 2 or 3 seconds for me</li>
<li>UptimeRobot said CGSpace went down again, and I looked at PostgreSQL and saw 211 active database connections</li>
<li>If it&rsquo;s not memory and it&rsquo;s not the database, it&rsquo;s gotta be Tomcat threads; seeing as the default <code>maxThreads</code> is only 200 anyways, it actually makes sense</li>
<li>I decided to change the Tomcat thread settings on CGSpace:
<ul>
<li><code>maxThreads</code> from 200 (default) to 400</li>
<li><code>processorCache</code> from 200 (default) to 400, <a href="https://tomcat.apache.org/tomcat-7.0-doc/config/http.html">recommended to be the same as <code>maxThreads</code></a></li>
<li><code>minSpareThreads</code> from 10 (default) to 20</li>
<li><code>acceptorThreadCount</code> from 1 (default) to 2, <a href="https://tomcat.apache.org/tomcat-7.0-doc/config/http.html">recommended to be 2 on multi-CPU systems</a></li>
</ul>
</li>
<li>Looks like I only enabled the new thread stuff on the connector used internally by Solr, so I probably need to match that by increasing them on the other connector that nginx proxies to</li>
<li>Jesus Christ I need to fucking fix the Munin monitoring so that I can tell how many fucking threads I have running</li>
<li>Wow, so apparently you need to specify which connector to check if you want any of the Munin Tomcat plugins besides &ldquo;tomcat_jvm&rdquo; to work (the connector name can be seen in the Catalina logs)</li>
<li>I modified <em>/etc/munin/plugin-conf.d/tomcat</em> to add the connector (with surrounding quotes!) and now the other plugins work (obviously the credentials are incorrect):</li>
</ul>
<pre tabindex="0"><code>[tomcat_*]
env.host 127.0.0.1
env.port 8081
env.connector &#34;http-bio-127.0.0.1-8443&#34;
env.user munin
env.password munin
</code></pre><ul>
<li>For example, I can see the threads:</li>
</ul>
<pre tabindex="0"><code># munin-run tomcat_threads
busy.value 0
idle.value 20
max.value 400
</code></pre><ul>
<li>Apparently you can&rsquo;t monitor more than one connector, so I guess the most important to monitor would be the one that nginx is sending stuff to</li>
<li>So for now I think I&rsquo;ll just monitor these and skip trying to configure the jmx plugins</li>
<li>Although following the logic of <em>/usr/share/munin/plugins/jmx_tomcat_dbpools</em> could be useful for getting the active Tomcat sessions</li>
<li>From debugging the <code>jmx_tomcat_db_pools</code> script from the <code>munin-plugins-java</code> package, I see that this is how you call arbitrary mbeans:</li>
</ul>
<pre tabindex="0"><code># port=5400 ip=&#34;127.0.0.1&#34; /usr/bin/java -cp /usr/share/munin/munin-jmx-plugins.jar org.munin.plugin.jmx.Beans Catalina:type=DataSource,class=javax.sql.DataSource,name=* maxActive
Catalina:type=DataSource,class=javax.sql.DataSource,name=&#34;jdbc/dspace&#34; maxActive 300
</code></pre><ul>
<li>More notes here: <a href="https://github.com/munin-monitoring/contrib/tree/master/plugins/jmx">https://github.com/munin-monitoring/contrib/tree/master/plugins/jmx</a></li>
<li>Looking at the Munin graphs, I see that the load is 200% every morning from 03:00 to almost 08:00</li>
<li>Tomcat&rsquo;s catalina.out log file is full of spam from this thing too, with lines like this:</li>
</ul>
<pre tabindex="0"><code>[===================&gt; ]38% time remaining: 5 hour(s) 21 minute(s) 47 seconds. timestamp: 2018-01-29 06:25:16
</code></pre><ul>
<li>There are millions of these status lines, for example in just this one log file:</li>
</ul>
<pre tabindex="0"><code># zgrep -c &#34;time remaining&#34; /var/log/tomcat7/catalina.out.1.gz
1084741
</code></pre><ul>
<li>I filed a ticket with Atmire: <a href="https://tracker.atmire.com/tickets-cgiar-ilri/view-ticket?id=566">https://tracker.atmire.com/tickets-cgiar-ilri/view-ticket?id=566</a></li>
</ul>
<h2 id="2018-01-31">2018-01-31</h2>
<ul>
<li>UptimeRobot says CGSpace went down at 7:57 AM, and indeed I see a lot of HTTP 499 codes in nginx logs</li>
<li>PostgreSQL activity shows 222 database connections</li>
<li>Now PostgreSQL activity shows 265 database connections!</li>
<li>I don&rsquo;t see any errors anywhere&hellip;</li>
<li>Now PostgreSQL activity shows 308 connections!</li>
<li>Well this is interesting, there are 400 Tomcat threads busy:</li>
</ul>
<pre tabindex="0"><code># munin-run tomcat_threads
busy.value 400
idle.value 0
max.value 400
</code></pre><ul>
<li>And wow, we finally exhausted the database connections, from dspace.log:</li>
</ul>
<pre tabindex="0"><code>2018-01-31 08:05:28,964 ERROR org.dspace.storage.rdbms.DatabaseManager @ SQL connection Error -
org.apache.tomcat.jdbc.pool.PoolExhaustedException: [http-bio-127.0.0.1-8443-exec-451] Timeout: Pool empty. Unable to fetch a connection in 5 seconds, none available[size:300; busy:300; idle:0; lastwait:5000].
</code></pre><ul>
<li>Now even the nightly Atmire background thing is getting HTTP 500 error:</li>
</ul>
<pre tabindex="0"><code>Jan 31, 2018 8:16:05 AM com.sun.jersey.spi.container.ContainerResponse logException
SEVERE: Mapped exception to response: 500 (Internal Server Error)
javax.ws.rs.WebApplicationException
</code></pre><ul>
<li>For now I will restart Tomcat to clear this shit and bring the site back up</li>
<li>The top IPs from this morning, during 7 and 8AM in XMLUI and REST/OAI:</li>
</ul>
<pre tabindex="0"><code># cat /var/log/nginx/access.log /var/log/nginx/access.log.1 /var/log/nginx/library-access.log /var/log/nginx/library-access.log.1 /var/log/nginx/error.log /var/log/nginx/error.log.1 | grep -E &#34;31/Jan/2018:(07|08)&#34; | awk &#39;{print $1}&#39; | sort | uniq -c | sort -n | tail -n 10
67 66.249.66.70
70 207.46.13.12
71 197.210.168.174
83 207.46.13.13
85 157.55.39.79
89 207.46.13.14
123 68.180.228.157
198 66.249.66.90
219 41.204.190.40
255 2405:204:a208:1e12:132:2a8e:ad28:46c0
# cat /var/log/nginx/rest.log /var/log/nginx/rest.log.1 /var/log/nginx/oai.log /var/log/nginx/oai.log.1 | grep -E &#34;31/Jan/2018:(07|08)&#34; | awk &#39;{print $1}&#39; | sort | uniq -c | sort -n | tail -n 10
2 65.55.210.187
2 66.249.66.90
3 157.55.39.79
4 197.232.39.92
4 34.216.252.127
6 104.196.152.243
6 213.55.85.89
15 122.52.115.13
16 213.55.107.186
596 45.5.184.196
</code></pre><ul>
<li>This looks reasonable to me, so I have no idea why we ran out of Tomcat threads</li>
</ul>
<p><img src="/cgspace-notes/2018/01/tomcat-threads-day.png" alt="Tomcat threads"></p>
<ul>
<li>We need to start graphing the Tomcat sessions as well, though that requires JMX</li>
<li>Also, I wonder if I could disable the nightly Atmire thing</li>
<li>God, I don&rsquo;t know where this load is coming from</li>
<li>Since I bumped up the Tomcat threads from 200 to 400 the load on the server has been sustained at about 200% for almost a whole day:</li>
</ul>
<p><img src="/cgspace-notes/2018/01/cpu-week.png" alt="CPU usage week"></p>
<ul>
<li>I should make separate database pools for the web applications and the API applications like REST and OAI (a rough sketch is at the end of these notes)</li>
<li>Ok, so this is interesting: I figured out how to get the MBean path to query Tomcat&rsquo;s activeSessions from JMX (using <code>munin-plugins-java</code>):</li>
</ul>
<pre tabindex="0"><code># port=5400 ip=&#34;127.0.0.1&#34; /usr/bin/java -cp /usr/share/munin/munin-jmx-plugins.jar org.munin.plugin.jmx.Beans Catalina:type=Manager,context=/,host=localhost activeSessions
Catalina:type=Manager,context=/,host=localhost activeSessions 8
</code></pre><ul>
<li>If you connect to Tomcat in <code>jvisualvm</code> it&rsquo;s pretty obvious when you hover over the elements</li>
</ul>
<p><img src="/cgspace-notes/2018/01/jvisualvm-mbeans-path.png" alt="MBeans in JVisualVM"></p>
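<ul>
<li>Coming back to the separate database pools idea above, a rough sketch of what two JNDI pools in Tomcat&rsquo;s server.xml could look like (the resource names, credentials, and pool sizes here are hypothetical):</li>
</ul>
<pre tabindex="0"><code>&lt;!-- hypothetical: one pool for the XMLUI web application and one for REST/OAI, instead of a single jdbc/dspace --&gt;
&lt;Resource name=&quot;jdbc/dspaceWeb&quot; auth=&quot;Container&quot; type=&quot;javax.sql.DataSource&quot;
          factory=&quot;org.apache.tomcat.jdbc.pool.DataSourceFactory&quot;
          driverClassName=&quot;org.postgresql.Driver&quot;
          url=&quot;jdbc:postgresql://localhost:5432/dspace&quot;
          username=&quot;dspace&quot; password=&quot;dspace&quot;
          maxActive=&quot;250&quot; maxIdle=&quot;20&quot;/&gt;
&lt;Resource name=&quot;jdbc/dspaceApi&quot; auth=&quot;Container&quot; type=&quot;javax.sql.DataSource&quot;
          factory=&quot;org.apache.tomcat.jdbc.pool.DataSourceFactory&quot;
          driverClassName=&quot;org.postgresql.Driver&quot;
          url=&quot;jdbc:postgresql://localhost:5432/dspace&quot;
          username=&quot;dspace&quot; password=&quot;dspace&quot;
          maxActive=&quot;50&quot; maxIdle=&quot;10&quot;/&gt;
</code></pre>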
</article>
</div> <!-- /.blog-main -->
<aside class="col-sm-3 ml-auto blog-sidebar">
<section class="sidebar-module">
<h4>Recent Posts</h4>
<ol class="list-unstyled">
<li><a href="/cgspace-notes/2023-01/">January, 2023</a></li>
<li><a href="/cgspace-notes/2022-12/">December, 2022</a></li>
<li><a href="/cgspace-notes/2022-11/">November, 2022</a></li>
<li><a href="/cgspace-notes/2022-10/">October, 2022</a></li>
<li><a href="/cgspace-notes/2022-09/">September, 2022</a></li>
</ol>
</section>
<section class="sidebar-module">
<h4>Links</h4>
<ol class="list-unstyled">
<li><a href="https://cgspace.cgiar.org">CGSpace</a></li>
<li><a href="https://dspacetest.cgiar.org">DSpace Test</a></li>
<li><a href="https://github.com/ilri/DSpace">CGSpace @ GitHub</a></li>
</ol>
</section>
</aside>
</div> <!-- /.row -->
</div> <!-- /.container -->
<footer class="blog-footer">
<p dir="auto">
Blog template created by <a href="https://twitter.com/mdo">@mdo</a>, ported to Hugo by <a href='https://twitter.com/mralanorth'>@mralanorth</a>.
</p>
<p>
<a href="#">Back to top</a>
</p>
</footer>
</body>
</html>