<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no">
<meta property="og:title" content="November, 2015" />
<meta property="og:description" content="2015-11-22
CGSpace went down
Looks like DSpace exhausted its PostgreSQL connection pool
Last week I had increased the limit from 30 to 60, which seemed to help, but now there are many more idle connections:
$ psql -c 'SELECT * from pg_stat_activity;' | grep idle | grep -c cgspace
78
" />
<meta property="og:type" content="article" />
<meta property="og:url" content="https://alanorth.github.io/cgspace-notes/2015-11/" />
<meta property="article:published_time" content="2015-11-23T17:00:57+03:00" />
<meta property="article:modified_time" content="2018-03-09T22:10:33+02:00" />
<meta name="twitter:card" content="summary" />
<meta name="twitter:title" content="November, 2015" />
<meta name="twitter:description" content="2015-11-22
CGSpace went down
Looks like DSpace exhausted its PostgreSQL connection pool
Last week I had increased the limit from 30 to 60, which seemed to help, but now there are many more idle connections:
$ psql -c 'SELECT * from pg_stat_activity;' | grep idle | grep -c cgspace
78
"/>
<meta name="generator" content="Hugo 0.78.1" />
<script type="application/ld+json">
{
  "@context": "http://schema.org",
  "@type": "BlogPosting",
  "headline": "November, 2015",
  "url": "https://alanorth.github.io/cgspace-notes/2015-11/",
  "wordCount": "798",
  "datePublished": "2015-11-23T17:00:57+03:00",
  "dateModified": "2018-03-09T22:10:33+02:00",
  "author": {
    "@type": "Person",
    "name": "Alan Orth"
  },
  "keywords": "Notes"
}
</script>
<link rel="canonical" href="https://alanorth.github.io/cgspace-notes/2015-11/">
<title>November, 2015 | CGSpace Notes</title>
<!-- combined, minified CSS -->
<link href="https://alanorth.github.io/cgspace-notes/css/style.d20c61b183eb27beb5b2c48f70a38b91c8bb5fb929e77b447d5f77c7285221ad.css" rel="stylesheet" integrity="sha256-0gxhsYPrJ761ssSPcKOLkci7X7kp53tEfV93xyhSIa0=" crossorigin="anonymous">

<!-- minified Font Awesome for SVG icons -->
<script defer src="https://alanorth.github.io/cgspace-notes/js/fontawesome.min.4ed405d7c7002b970d34cbe6026ff44a556b0808cb98a9db4008752110ed964b.js" integrity="sha256-TtQF18cAK5cNNMvmAm/0SlVrCAjLmKnbQAh1IRDtlks=" crossorigin="anonymous"></script>

<!-- RSS 2.0 feed -->
</head>
<body>
<div class="blog-masthead">
<div class="container">
<nav class="nav blog-nav">
<a class="nav-link " href="https://alanorth.github.io/cgspace-notes/">Home</a>
</nav>
</div>
</div>

<header class="blog-header">
<div class="container">
<h1 class="blog-title" dir="auto"><a href="https://alanorth.github.io/cgspace-notes/" rel="home">CGSpace Notes</a></h1>
<p class="lead blog-description" dir="auto">Documenting day-to-day work on the <a href="https://cgspace.cgiar.org">CGSpace</a> repository.</p>
</div>
</header>
<div class="container">
<div class="row">
<div class="col-sm-8 blog-main">
<article class="blog-post">
<header>
<h2 class="blog-post-title" dir="auto"><a href="https://alanorth.github.io/cgspace-notes/2015-11/">November, 2015</a></h2>
<p class="blog-post-meta">
<time datetime="2015-11-23T17:00:57+03:00">Mon Nov 23, 2015</time>
in
<span class="fas fa-tag" aria-hidden="true"></span> <a href="/cgspace-notes/tags/notes/" rel="tag">Notes</a>
</p>
</header>
<h2 id="2015-11-22">2015-11-22</h2>
<ul>
<li>CGSpace went down</li>
<li>Looks like DSpace exhausted its PostgreSQL connection pool</li>
<li>Last week I had increased the limit from 30 to 60, which seemed to help, but now there are many more idle connections:</li>
</ul>
<pre><code>$ psql -c 'SELECT * from pg_stat_activity;' | grep idle | grep -c cgspace
78
</code></pre>
<ul>
<li>For now I have increased the limit from 60 to 90, run updates, and rebooted the server</li>
</ul>
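<p>The double-grep count above can be collapsed into a single awk pass. A minimal sketch over hypothetical <code>pg_stat_activity</code> output (the sample rows below are illustrative, not captured from the server):</p>

```shell
# Illustrative sample standing in for:
#   psql -c 'SELECT datname, state FROM pg_stat_activity;'
sample=' cgspace | idle
 cgspace | active
 cgspace | idle
 postgres | idle'

# Count rows that are both for the cgspace database and idle, in one pass
printf '%s\n' "$sample" | awk -F'|' '$1 ~ /cgspace/ && $2 ~ /idle/ {n++} END {print n+0}'
# prints: 2
```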
<h2 id="2015-11-24">2015-11-24</h2>
<ul>
<li>CGSpace went down again</li>
<li>Getting emails from uptimeRobot and uptimeButler that it’s down, and Google Webmaster Tools is sending emails that there is an increase in crawl errors</li>
<li>Looks like there are still a bunch of idle PostgreSQL connections:</li>
</ul>
<pre><code>$ psql -c 'SELECT * from pg_stat_activity;' | grep idle | grep -c cgspace
96
</code></pre>
<ul>
<li>For some reason the number of idle connections is very high since we upgraded to DSpace 5</li>
</ul>
<h2 id="2015-11-25">2015-11-25</h2>
<ul>
<li>Troubleshoot the DSpace 5 OAI breakage caused by nginx routing config</li>
<li>The OAI application requests stylesheets and javascript files with the path <code>/oai/static/css</code>, which gets matched here:</li>
</ul>
<pre><code># static assets we can load from the file system directly with nginx
location ~ /(themes|static|aspects/ReportingSuite) {
    try_files $uri @tomcat;
...
</code></pre>
<ul>
<li>The document root is relative to the xmlui app, so this gets a 404—I’m not sure why it doesn’t pass to <code>@tomcat</code></li>
<li>Anyways, I can’t find any URIs with path <code>/static</code>, and the more important point is to handle all the static theme assets, so we can just remove <code>static</code> from the regex for now (who cares if we can’t use nginx to send Etags for OAI CSS!)</li>
<li>Also, I noticed we aren’t setting CSP headers on the static assets, because in nginx headers are inherited in child blocks, but if you use <code>add_header</code> in a child block it doesn’t inherit the others</li>
<li>We simply need to add <code>include extra-security.conf;</code> to the above location block (but research and test first)</li>
<li>We should add WOFF assets to the list of things to set expires for:</li>
</ul>
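<p>Putting those notes together, the location block might end up looking something like this. An untested sketch; it assumes an <code>extra-security.conf</code> snippet containing the security <code>add_header</code> directives already exists in the nginx config directory:</p>

```nginx
# static assets we can load from the file system directly with nginx
# (sketch: "static" already dropped from the regex)
location ~ /(themes|aspects/ReportingSuite) {
    try_files $uri @tomcat;
    # add_header in a child block suppresses headers inherited from the
    # parent, so re-include the security headers explicitly here
    include extra-security.conf;
}
```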
<pre><code>location ~* \.(?:ico|css|js|gif|jpe?g|png|woff)$ {
</code></pre>
<ul>
<li>We should also add <code>aspects/Statistics</code> to the location block for static assets (minus <code>static</code> from above):</li>
</ul>
<pre><code>location ~ /(themes|aspects/ReportingSuite|aspects/Statistics) {
</code></pre>
<ul>
<li>Need to check <code>/about</code> on CGSpace, as it’s blank on my local test server and we might need to add something there</li>
<li>CGSpace has been up and down all day due to PostgreSQL idle connections (current DSpace pool is 90):</li>
</ul>
<pre><code>$ psql -c 'SELECT * from pg_stat_activity;' | grep idle | grep -c cgspace
93
</code></pre>
<ul>
<li>I looked closer at the idle connections and saw that many have been idle for hours (current time on server is <code>2015-11-25T20:20:42+0000</code>):</li>
</ul>
<pre><code>$ psql -c 'SELECT * from pg_stat_activity;' | less -S
 datid | datname | pid | usesysid | usename | application_name | client_addr | client_hostname | client_port | backend_start | xact_start |
-------+----------+-------+----------+----------+------------------+-------------+-----------------+-------------+-------------------------------+-------------------------------+---
 20951 | cgspace | 10966 | 18205 | cgspace | | 127.0.0.1 | | 37731 | 2015-11-25 13:13:02.837624+00 | | 20
 20951 | cgspace | 10967 | 18205 | cgspace | | 127.0.0.1 | | 37737 | 2015-11-25 13:13:03.069421+00 | | 20
...
</code></pre>
<ul>
<li>There is a relevant Jira issue about this: <a href="https://jira.duraspace.org/browse/DS-1458">https://jira.duraspace.org/browse/DS-1458</a></li>
<li>It seems there is some sense in changing DSpace’s default <code>db.maxidle</code> from unlimited (-1) to something like 8 (Tomcat default) or 10 (Confluence default)</li>
<li>Change <code>db.maxidle</code> from -1 to 10, reduce <code>db.maxconnections</code> from 90 to 50, and restart postgres and tomcat7</li>
<li>Also redeploy DSpace Test with a clean sync of CGSpace and mirror these database settings there as well</li>
<li>Also deploy the nginx fixes for the <code>try_files</code> location block as well as the expires block</li>
</ul>
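<p>One way to see how long each connection has been sitting idle is to sort on <code>pg_stat_activity</code>’s <code>state_change</code> column. A query sketch (column names per PostgreSQL 9.2+; not tested against this server):</p>

```sql
-- Show idle backends and how long they have been idle, longest first
SELECT pid, datname, usename, now() - state_change AS idle_for
FROM pg_stat_activity
WHERE state = 'idle'
ORDER BY idle_for DESC;
```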
<h2 id="2015-11-26">2015-11-26</h2>
<ul>
<li>CGSpace behaving much better since changing <code>db.maxidle</code> yesterday, but still two up/down notices from monitoring this morning (better than 50!)</li>
<li>CCAFS colleagues mentioned that the REST API is very slow, 24 seconds for one item</li>
<li>Not as bad for me, but still unsustainable if you have to get many:</li>
</ul>
<pre><code>$ curl -o /dev/null -s -w %{time_total}\\n https://cgspace.cgiar.org/rest/handle/10568/32802?expand=all
8.415
</code></pre>
<ul>
<li>Monitoring e-mailed in the evening to say CGSpace was down</li>
<li>Idle connections in PostgreSQL again:</li>
</ul>
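<p>A single <code>time_total</code> sample is noisy; averaging a few runs gives a steadier number. A sketch over hypothetical timings (the values below are made up for illustration):</p>

```shell
# Hypothetical samples from repeated runs of:
#   curl -o /dev/null -s -w '%{time_total}\n' <url>
timings='8.415
7.912
9.103'

# Average the samples with awk
printf '%s\n' "$timings" | awk '{sum += $1; n++} END {printf "%.3f\n", sum/n}'
# prints: 8.477
```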
<pre><code>$ psql -c 'SELECT * from pg_stat_activity;' | grep cgspace | grep -c idle
66
</code></pre>
<ul>
<li>At the time, the current DSpace pool size was 50…</li>
<li>I reduced the pool back to the default of 30, and reduced the <code>db.maxidle</code> setting from 10 to 8</li>
</ul>
<h2 id="2015-11-29">2015-11-29</h2>
<ul>
<li>Still more alerts that CGSpace has been up and down all day</li>
<li>Current database settings for DSpace:</li>
</ul>
<pre><code>db.maxconnections = 30
db.maxwait = 5000
db.maxidle = 8
db.statementpool = true
</code></pre>
<ul>
<li>And idle connections:</li>
</ul>
<pre><code>$ psql -c 'SELECT * from pg_stat_activity;' | grep cgspace | grep -c idle
49
</code></pre>
<ul>
<li>Perhaps I need to start drastically increasing the connection limits—like to 300—to see if DSpace’s thirst can ever be quenched</li>
<li>On another note, SUNScholar’s notes suggest adjusting some other postgres variables: <a href="http://wiki.lib.sun.ac.za/index.php/SUNScholar/Optimisations/Database">http://wiki.lib.sun.ac.za/index.php/SUNScholar/Optimisations/Database</a></li>
<li>This might help with REST API speed (which I mentioned above and still need to do real tests on)</li>
</ul>
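<p>For reference, the kind of server-side knobs the SUNScholar notes discuss live in <code>postgresql.conf</code>. A placeholder sketch; the values below are illustrative assumptions, not recommendations for this server:</p>

```ini
# Illustrative postgresql.conf memory settings (values are placeholders)
shared_buffers = 256MB          # PostgreSQL's own cache; default is small
effective_cache_size = 1GB      # planner hint about OS cache, not an allocation
work_mem = 8MB                  # per-sort/per-hash memory
```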
</article>
</div><!-- /.blog-main -->
<aside class="col-sm-3 ml-auto blog-sidebar">
<section class="sidebar-module">
<h4>Recent Posts</h4>
<ol class="list-unstyled">
<li><a href="/cgspace-notes/2020-11/">November, 2020</a></li>
<li><a href="/cgspace-notes/2020-10/">October, 2020</a></li>
<li><a href="/cgspace-notes/2020-09/">September, 2020</a></li>
<li><a href="/cgspace-notes/2020-08/">August, 2020</a></li>
<li><a href="/cgspace-notes/2020-07/">July, 2020</a></li>
</ol>
</section>
<section class="sidebar-module">
<h4>Links</h4>
<ol class="list-unstyled">
<li><a href="https://cgspace.cgiar.org">CGSpace</a></li>
<li><a href="https://dspacetest.cgiar.org">DSpace Test</a></li>
<li><a href="https://github.com/ilri/DSpace">CGSpace @ GitHub</a></li>
</ol>
</section>
</aside>
</div><!-- /.row -->
</div><!-- /.container -->
<footer class="blog-footer">
<p dir="auto">
Blog template created by <a href="https://twitter.com/mdo">@mdo</a>, ported to Hugo by <a href='https://twitter.com/mralanorth'>@mralanorth</a>.
</p>
<p>
<a href="#">Back to top</a>
</p>
</footer>
</body>
</html>