Update notes

Alan Orth 2017-09-10 18:17:25 +03:00
parent 870ebda6a3
commit 184f0628b3
Signed by: alanorth
GPG Key ID: 0FB860CC9C45B1B9
6 changed files with 54 additions and 13 deletions

View File

@@ -169,7 +169,7 @@ $ grep -rsI SQLException dspace-xmlui | wc -l
- Unfortunately I don't have the breakdown of which DSpace apps are making those connections (I'll assume XMLUI)
- So I guess a limit of 30 (DSpace default) is too low, but 70 causes problems when the load increases and the system's PostgreSQL `max_connections` is too low
- For now I think maybe setting DSpace's `db.maxconnections` to 40 and adjusting the system's `max_connections` might be a good starting point: 40 * 3 + 4 = 123
- For now I think maybe setting DSpace's `db.maxconnections` to 40 and adjusting the system's `max_connections` might be a good starting point: 40 * 3 + 3 = 123
- Apply 223 more author corrections from Peter on CGIAR Library
- Help Magdalena from CCAFS with some CUA statistics questions
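
As an aside, the per-webapp breakdown that is missing from the DSpace logs can usually be pulled from PostgreSQL itself. A minimal sketch, assuming the pool connects as a `dspace` database user (with a database of the same name) and that each Tomcat webapp reports a distinguishable `application_name`; by default they may all show up under the generic JDBC driver name:

```
$ psql -U dspace -c 'SELECT application_name, count(*) FROM pg_stat_activity GROUP BY application_name ORDER BY count(*) DESC;'
```

If they all report the same name, adding an `ApplicationName` parameter to each webapp's JDBC URL is one way to tell XMLUI, REST, and OAI apart.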

View File

@@ -34,3 +34,23 @@ DELETE 58
- Create pull request for Phase I and II changes to CCAFS Project Tags: [#336](https://github.com/ilri/DSpace/pull/336)
- We've been discussing with Macaroni Bros and CCAFS for the past month or so and the list of tags was recently finalized
- There will need to be some metadata updates (though if I recall correctly it is only about seven records) for that as well, I had made some notes about it in [2017-07](/cgspace-notes/2017-07), but I've asked for more clarification from Lili just in case
- Looking at the DSpace logs to see if we've had a change in the "Cannot get a connection" errors since last month when we adjusted the `db.maxconnections` parameter on CGSpace:
```
# grep -c "Cannot get a connection, pool error Timeout waiting for idle object" dspace.log.2017-09-*
dspace.log.2017-09-01:0
dspace.log.2017-09-02:0
dspace.log.2017-09-03:9
dspace.log.2017-09-04:17
dspace.log.2017-09-05:752
dspace.log.2017-09-06:0
dspace.log.2017-09-07:0
dspace.log.2017-09-08:10
dspace.log.2017-09-09:0
dspace.log.2017-09-10:0
```
- Also, since last month (2017-08) Macaroni Bros no longer runs their REST API scraper every hour, so I'm sure that helped
- There are still some errors, though, so maybe I should bump the connection limit up a bit
- I remember seeing that Munin shows that the average number of connections is 50 (which is probably mostly from the XMLUI) and we're currently allowing 40 connections per app, so maybe it would be good to bump that value up to 50 or 60 along with the system's PostgreSQL `max_connections` (formula should be: webapps * 60 + 3, or 3 * 60 + 3 = 183 in our case)
- I updated both CGSpace and DSpace Test to use these new settings (60 connections per web app and 183 for system PostgreSQL limit)
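
For reference, a sketch of how those two settings end up looking (the file locations vary by install and PostgreSQL version, so treat the comments as assumptions); the extra 3 on top of 3 * 60 presumably accounts for PostgreSQL's `superuser_reserved_connections`, which defaults to 3:

```
# dspace.cfg: connection pool size per DSpace webapp
db.maxconnections = 60

# postgresql.conf: system-wide limit, 3 webapps * 60 + 3 = 183
max_connections = 183
```

Note that changing `max_connections` requires a full PostgreSQL restart rather than just a reload.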

View File

@@ -13,7 +13,7 @@
<meta property="article:published_time" content="2017-05-01T16:21:52&#43;02:00"/>
<meta property="article:modified_time" content="2017-09-10T13:35:51&#43;03:00"/>
<meta property="article:modified_time" content="2017-09-10T17:46:54&#43;03:00"/>
@@ -39,7 +39,7 @@
"url": "https://alanorth.github.io/cgspace-notes/2017-05/",
"wordCount": "2398",
"datePublished": "2017-05-01T16:21:52&#43;02:00",
"dateModified": "2017-09-10T13:35:51&#43;03:00",
"dateModified": "2017-09-10T17:46:54&#43;03:00",
"author": {
"@type": "Person",
"name": "Alan Orth"

View File

@@ -352,7 +352,7 @@ $ grep -rsI SQLException dspace-xmlui | wc -l
<ul>
<li>Unfortunately I don&rsquo;t have the breakdown of which DSpace apps are making those connections (I&rsquo;ll assume XMLUI)</li>
<li>So I guess a limit of 30 (DSpace default) is too low, but 70 causes problems when the load increases and the system&rsquo;s PostgreSQL <code>max_connections</code> is too low</li>
<li>For now I think maybe setting DSpace&rsquo;s <code>db.maxconnections</code> to 40 and adjusting the system&rsquo;s <code>max_connections</code> might be a good starting point: 40 * 3 + 4 = 123</li>
<li>For now I think maybe setting DSpace&rsquo;s <code>db.maxconnections</code> to 40 and adjusting the system&rsquo;s <code>max_connections</code> might be a good starting point: 40 * 3 + 3 = 123</li>
<li>Apply 223 more author corrections from Peter on CGIAR Library</li>
<li>Help Magdalena from CCAFS with some CUA statistics questions</li>
</ul>

View File

@@ -19,7 +19,7 @@ Linode sent an alert that CGSpace (linode18) was using 261% CPU for the past two
<meta property="article:published_time" content="2017-09-07T16:54:52&#43;07:00"/>
<meta property="article:modified_time" content="2017-09-10T13:35:51&#43;03:00"/>
<meta property="article:modified_time" content="2017-09-10T17:46:54&#43;03:00"/>
@@ -49,9 +49,9 @@ Linode sent an alert that CGSpace (linode18) was using 261% CPU for the past two
"@type": "BlogPosting",
"headline": "September, 2017",
"url": "https://alanorth.github.io/cgspace-notes/2017-09/",
"wordCount": "258",
"wordCount": "443",
"datePublished": "2017-09-07T16:54:52&#43;07:00",
"dateModified": "2017-09-10T13:35:51&#43;03:00",
"dateModified": "2017-09-10T17:46:54&#43;03:00",
"author": {
"@type": "Person",
"name": "Alan Orth"
@@ -149,6 +149,27 @@ DELETE 58
<li>Create pull request for Phase I and II changes to CCAFS Project Tags: <a href="https://github.com/ilri/DSpace/pull/336">#336</a></li>
<li>We&rsquo;ve been discussing with Macaroni Bros and CCAFS for the past month or so and the list of tags was recently finalized</li>
<li>There will need to be some metadata updates (though if I recall correctly it is only about seven records) for that as well, I had made some notes about it in <a href="/cgspace-notes/2017-07">2017-07</a>, but I&rsquo;ve asked for more clarification from Lili just in case</li>
<li>Looking at the DSpace logs to see if we&rsquo;ve had a change in the &ldquo;Cannot get a connection&rdquo; errors since last month when we adjusted the <code>db.maxconnections</code> parameter on CGSpace:</li>
</ul>
<pre><code># grep -c &quot;Cannot get a connection, pool error Timeout waiting for idle object&quot; dspace.log.2017-09-*
dspace.log.2017-09-01:0
dspace.log.2017-09-02:0
dspace.log.2017-09-03:9
dspace.log.2017-09-04:17
dspace.log.2017-09-05:752
dspace.log.2017-09-06:0
dspace.log.2017-09-07:0
dspace.log.2017-09-08:10
dspace.log.2017-09-09:0
dspace.log.2017-09-10:0
</code></pre>
<ul>
<li>Also, since last month (2017-08) Macaroni Bros no longer runs their REST API scraper every hour, so I&rsquo;m sure that helped</li>
<li>There are still some errors, though, so maybe I should bump the connection limit up a bit</li>
<li>I remember seeing that Munin shows that the average number of connections is 50 (which is probably mostly from the XMLUI) and we&rsquo;re currently allowing 40 connections per app, so maybe it would be good to bump that value up to 50 or 60 along with the system&rsquo;s PostgreSQL <code>max_connections</code> (formula should be: webapps * 60 + 3, or 3 * 60 + 3 = 183 in our case)</li>
<li>I updated both CGSpace and DSpace Test to use these new settings (60 connections per web app and 183 for system PostgreSQL limit)</li>
</ul>

View File

@@ -4,7 +4,7 @@
<url>
<loc>https://alanorth.github.io/cgspace-notes/2017-09/</loc>
<lastmod>2017-09-10T13:35:51+03:00</lastmod>
<lastmod>2017-09-10T17:46:54+03:00</lastmod>
</url>
<url>
@@ -24,7 +24,7 @@
<url>
<loc>https://alanorth.github.io/cgspace-notes/2017-05/</loc>
<lastmod>2017-09-10T13:35:51+03:00</lastmod>
<lastmod>2017-09-10T17:46:54+03:00</lastmod>
</url>
<url>
@@ -119,7 +119,7 @@
<url>
<loc>https://alanorth.github.io/cgspace-notes/</loc>
<lastmod>2017-09-10T13:35:51+03:00</lastmod>
<lastmod>2017-09-10T17:46:54+03:00</lastmod>
<priority>0</priority>
</url>
@@ -130,19 +130,19 @@
<url>
<loc>https://alanorth.github.io/cgspace-notes/tags/notes/</loc>
<lastmod>2017-09-10T13:35:51+03:00</lastmod>
<lastmod>2017-09-10T17:46:54+03:00</lastmod>
<priority>0</priority>
</url>
<url>
<loc>https://alanorth.github.io/cgspace-notes/post/</loc>
<lastmod>2017-09-10T13:35:51+03:00</lastmod>
<lastmod>2017-09-10T17:46:54+03:00</lastmod>
<priority>0</priority>
</url>
<url>
<loc>https://alanorth.github.io/cgspace-notes/tags/</loc>
<lastmod>2017-09-10T13:35:51+03:00</lastmod>
<lastmod>2017-09-10T17:46:54+03:00</lastmod>
<priority>0</priority>
</url>