<h1>December, 2015</h1>

<h2 id="2015-12-02">2015-12-02</h2>
<ul>
<li>Replace <code>lzop</code> with <code>xz</code> in log compression cron jobs on DSpace Test—it uses less space:</li>
</ul>
<pre><code># cd /home/dspacetest.cgiar.org/log
# ls -lh dspace.log.2015-11-18*
-rw-rw-r-- 1 tomcat7 tomcat7 2.0M Nov 18 23:59 dspace.log.2015-11-18
-rw-rw-r-- 1 tomcat7 tomcat7 387K Nov 18 23:59 dspace.log.2015-11-18.lzo
-rw-rw-r-- 1 tomcat7 tomcat7 169K Nov 18 23:59 dspace.log.2015-11-18.xz
</code></pre>
<ul>
<li>I had used lrzip once, but it needs more memory and is harder to use because it requires the lrztar wrapper</li>
<li>Need to remember to check that everything is still OK in a few days and then make the same change on CGSpace</li>
<li>CGSpace went down again (due to PostgreSQL idle connections, of course)</li>
<li>Current database settings for DSpace are <code>db.maxconnections = 30</code> and <code>db.maxidle = 8</code>, yet the idle connections exceed this:</li>
</ul>
<pre><code>$ psql -c 'SELECT * from pg_stat_activity;' | grep cgspace | grep -c idle
39
</code></pre>
<ul>
<li>I restarted PostgreSQL and Tomcat and it's back</li>
<li>On a related note about why CGSpace is so slow, I decided to finally try the <code>pgtune</code> script to tune the postgres settings:</li>
</ul>
<pre><code># apt-get install pgtune
# pgtune -i /etc/postgresql/9.3/main/postgresql.conf -o postgresql.conf-pgtune
# mv /etc/postgresql/9.3/main/postgresql.conf /etc/postgresql/9.3/main/postgresql.conf.orig
# mv postgresql.conf-pgtune /etc/postgresql/9.3/main/postgresql.conf
</code></pre>
<ul>
<li>It introduced the following new settings:</li>
</ul>
<pre><code>default_statistics_target = 50
maintenance_work_mem = 480MB
constraint_exclusion = on
checkpoint_completion_target = 0.9
effective_cache_size = 5632MB
work_mem = 48MB
wal_buffers = 8MB
checkpoint_segments = 16
shared_buffers = 1920MB
max_connections = 80
</code></pre>
<ul>
<li>Now I need to go read the PostgreSQL docs about these options, and watch the memory graphs in Munin etc.</li>
<li>For what it's worth, the REST API should now be faster because of these PostgreSQL tweaks:</li>
</ul>
<pre><code>$ curl -o /dev/null -s -w %{time_total}\\n https://cgspace.cgiar.org/rest/handle/10568/32802?expand=all
1.474
$ curl -o /dev/null -s -w %{time_total}\\n https://cgspace.cgiar.org/rest/handle/10568/32802?expand=all
2.141
$ curl -o /dev/null -s -w %{time_total}\\n https://cgspace.cgiar.org/rest/handle/10568/32802?expand=all
1.685
$ curl -o /dev/null -s -w %{time_total}\\n https://cgspace.cgiar.org/rest/handle/10568/32802?expand=all
1.995
$ curl -o /dev/null -s -w %{time_total}\\n https://cgspace.cgiar.org/rest/handle/10568/32802?expand=all
1.786
</code></pre>
<ul>
<li>Last week it was an average of 8 seconds... now this is about 1/4 of that (a quick loop to average several runs is sketched at the end of today's notes)</li>
<li>CCAFS noticed that one of their items displays only the Atmire statlets: <a href="https://cgspace.cgiar.org/handle/10568/42445">https://cgspace.cgiar.org/handle/10568/42445</a></li>
</ul>
<p><img src="../images/2015/12/ccafs-item-no-metadata.png" alt="CCAFS item" /></p>
<ul>
<li>The authorizations for the item are all public READ, and I don't see any errors in dspace.log when browsing that item</li>
<li>I filed a ticket on Atmire's issue tracker</li>
<li>I also filed a ticket on Atmire's issue tracker for the PostgreSQL stuff</li>
</ul>
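<ul>
<li>For future reference, a quick way to average several of those REST API timings instead of eyeballing them; just a sketch, the handle and the number of requests are whatever I happen to be testing:</li>
</ul>
<pre><code># run the same request five times and print the mean response time in seconds
$ for i in $(seq 1 5); do curl -o /dev/null -s -w '%{time_total}\n' 'https://cgspace.cgiar.org/rest/handle/10568/32802?expand=all'; done | awk '{sum += $1} END {print sum/NR}'
</code></pre>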
<h2 id="2015-12-03">2015-12-03</h2>
<ul>
<li>CGSpace is very slow, and monitoring is emailing me to say it's down, even though I can load the page (very slowly)</li>
<li>Idle postgres connections look like this (with no recent change in the DSpace db settings):</li>
</ul>
<pre><code>$ psql -c 'SELECT * from pg_stat_activity;' | grep cgspace | grep -c idle
29
</code></pre>
<ul>
<li>I restarted Tomcat and postgres...</li>
<li>Atmire commented that we should raise the JVM heap size by ~500M, so it is now <code>-Xms3584m -Xmx3584m</code> (a sketch of where that is set for Tomcat is at the end of today's notes)</li>
<li>We weren't out of heap yet, but it's fair to assume that the DSpace 5 upgrade (and the new Atmire modules) needs more memory, so this is OK</li>
<li>A possible side effect is that the REST API now appears twice as fast for the request above:</li>
</ul>
<pre><code>$ curl -o /dev/null -s -w %{time_total}\\n https://cgspace.cgiar.org/rest/handle/10568/32802?expand=all
1.368
$ curl -o /dev/null -s -w %{time_total}\\n https://cgspace.cgiar.org/rest/handle/10568/32802?expand=all
0.968
$ curl -o /dev/null -s -w %{time_total}\\n https://cgspace.cgiar.org/rest/handle/10568/32802?expand=all
1.006
$ curl -o /dev/null -s -w %{time_total}\\n https://cgspace.cgiar.org/rest/handle/10568/32802?expand=all
0.849
$ curl -o /dev/null -s -w %{time_total}\\n https://cgspace.cgiar.org/rest/handle/10568/32802?expand=all
0.806
$ curl -o /dev/null -s -w %{time_total}\\n https://cgspace.cgiar.org/rest/handle/10568/32802?expand=all
0.854
</code></pre>
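<ul>
<li>The heap change itself is just a <code>JAVA_OPTS</code> tweak for Tomcat; a minimal sketch, assuming Tomcat 7 was installed from the Ubuntu package and reads its options from <code>/etc/default/tomcat7</code> (adjust the file if Tomcat lives somewhere else):</li>
</ul>
<pre><code># /etc/default/tomcat7 (assumption: Ubuntu tomcat7 package; merge with whatever options are already set)
JAVA_OPTS="-Xms3584m -Xmx3584m"

# then restart Tomcat so the new heap size takes effect
$ sudo service tomcat7 restart
</code></pre>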
<h2 id="2015-12-05">2015-12-05</h2>
<ul>
<li>CGSpace has been up and down all day and the REST API is completely unresponsive</li>
<li>PostgreSQL idle connections are currently:</li>
</ul>
<pre><code>postgres@linode01:~$ psql -c 'SELECT * from pg_stat_activity;' | grep cgspace | grep -c idle
28
</code></pre>
<ul>
<li>I have reverted all the pgtune tweaks from the other day, as they didn't fix the stability issues, so I'd rather not have them introducing more variables into the equation</li>
<li>The PostgreSQL stats from Munin all point to something database-related with the DSpace 5 upgrade around mid–late November</li>
</ul>
<p>
<img src="../images/2015/12/postgres_bgwriter-year.png" alt="PostgreSQL bgwriter (year)" />
<img src="../images/2015/12/postgres_cache_cgspace-year.png" alt="PostgreSQL cache (year)" />
<img src="../images/2015/12/postgres_locks_cgspace-year.png" alt="PostgreSQL locks (year)" />
<img src="../images/2015/12/postgres_scans_cgspace-year.png" alt="PostgreSQL scans (year)" />
</p>

<h1>November, 2015</h1>

<h2 id="2015-11-22">2015-11-22</h2>
<ul>
<li>CGSpace went down</li>
<li>Looks like DSpace exhausted its PostgreSQL connection pool</li>
<li>Last week I had increased the limit from 30 to 60, which seemed to help, but now there are many more idle connections:</li>
</ul>
<pre><code>$ psql -c 'SELECT * from pg_stat_activity;' | grep idle | grep -c cgspace
78
</code></pre>
<ul>
<li>For now I have increased the limit from 60 to 90, run updates, and rebooted the server</li>
</ul>

<h2 id="2015-11-24">2015-11-24</h2>
<ul>
<li>CGSpace went down again</li>
<li>Getting emails from uptimeRobot and uptimeButler that it's down, and Google Webmaster Tools is sending emails that there is an increase in crawl errors</li>
<li>Looks like there are still a bunch of idle PostgreSQL connections:</li>
</ul>
<pre><code>$ psql -c 'SELECT * from pg_stat_activity;' | grep idle | grep -c cgspace
96
</code></pre>
<ul>
<li>For some reason the number of idle connections is very high since we upgraded to DSpace 5</li>
</ul>

<h2 id="2015-11-25">2015-11-25</h2>
<ul>
<li>Troubleshoot the DSpace 5 OAI breakage caused by the nginx routing config</li>
<li>The OAI application requests stylesheets and javascript files with the path <code>/oai/static/css</code>, which gets matched here:</li>
</ul>
<pre><code># static assets we can load from the file system directly with nginx
location ~ /(themes|static|aspects/ReportingSuite) {
    try_files $uri @tomcat;
...
</code></pre>
<ul>
<li>The document root is relative to the xmlui app, so this gets a 404—I'm not sure why it doesn't pass to <code>@tomcat</code></li>
<li>Anyways, I can't find any URIs with path <code>/static</code>, and the more important point is to handle all the static theme assets, so we can just remove <code>static</code> from the regex for now (who cares if we can't use nginx to send Etags for OAI CSS!)</li>
<li>Also, I noticed we aren't setting CSP headers on the static assets, because in nginx headers are inherited in child blocks, but if you use <code>add_header</code> in a child block it doesn't inherit the others</li>
<li>We simply need to add <code>include extra-security.conf;</code> to the above location block (but research and test first)</li>
<li>We should add WOFF assets to the list of things to set expires for:</li>
</ul>
<pre><code>location ~* \.(?:ico|css|js|gif|jpe?g|png|woff)$ {
</code></pre>
<ul>
<li>We should also add <code>aspects/Statistics</code> to the location block for static assets (minus <code>static</code> from above):</li>
</ul>
<pre><code>location ~ /(themes|aspects/ReportingSuite|aspects/Statistics) {
</code></pre>
<ul>
<li>Need to check <code>/about</code> on CGSpace, as it's blank on my local test server and we might need to add something there</li>
<li>CGSpace has been up and down all day due to PostgreSQL idle connections (current DSpace pool is 90):</li>
</ul>
<pre><code>$ psql -c 'SELECT * from pg_stat_activity;' | grep idle | grep -c cgspace
93
</code></pre>
<ul>
<li>I looked closer at the idle connections and saw that many have been idle for hours (current time on the server is <code>2015-11-25T20:20:42+0000</code>):</li>
</ul>
<pre><code>$ psql -c 'SELECT * from pg_stat_activity;' | less -S
 datid | datname  |  pid  | usesysid | usename  | application_name | client_addr | client_hostname | client_port |         backend_start         |          xact_start           |
-------+----------+-------+----------+----------+------------------+-------------+-----------------+-------------+-------------------------------+-------------------------------+---
 20951 | cgspace  | 10966 |    18205 | cgspace  |                  | 127.0.0.1   |                 |       37731 | 2015-11-25 13:13:02.837624+00 |                               | 20
 20951 | cgspace  | 10967 |    18205 | cgspace  |                  | 127.0.0.1   |                 |       37737 | 2015-11-25 13:13:03.069421+00 |                               | 20
...
</code></pre>
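<ul>
<li>A more direct way to see how long each connection has actually been idle, rather than just counting them; a sketch using the <code>state</code> and <code>state_change</code> columns that <code>pg_stat_activity</code> has had since PostgreSQL 9.2:</li>
</ul>
<pre><code># show the longest-idle connections first, with how long each has been idle
$ psql -c "SELECT pid, usename, client_port, now() - state_change AS idle_for FROM pg_stat_activity WHERE state = 'idle' ORDER BY idle_for DESC;" | head
</code></pre>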
<ul>
<li>There is a relevant Jira issue about this: <a href="https://jira.duraspace.org/browse/DS-1458">https://jira.duraspace.org/browse/DS-1458</a></li>
<li>It seems there is some sense in changing DSpace's default <code>db.maxidle</code> from unlimited (-1) to something like 8 (Tomcat default) or 10 (Confluence default)</li>
<li>Change <code>db.maxidle</code> from -1 to 10, reduce <code>db.maxconnections</code> from 90 to 50, and restart postgres and tomcat7</li>
<li>Also redeploy DSpace Test with a clean sync of CGSpace and mirror these database settings there as well</li>
<li>Also deploy the nginx fixes for the <code>try_files</code> location block as well as the expires block</li>
</ul>

<h2 id="2015-11-26">2015-11-26</h2>
<ul>
<li>CGSpace behaving much better since changing <code>db.maxidle</code> yesterday, but still two up/down notices from monitoring this morning (better than 50!)</li>
<li>CCAFS colleagues mentioned that the REST API is very slow, 24 seconds for one item</li>
<li>Not as bad for me, but still unsustainable if you have to get many:</li>
</ul>
<pre><code>$ curl -o /dev/null -s -w %{time_total}\\n https://cgspace.cgiar.org/rest/handle/10568/32802?expand=all
8.415
</code></pre>
<ul>
<li>Monitoring e-mailed in the evening to say CGSpace was down</li>
<li>Idle connections in PostgreSQL again:</li>
</ul>
<pre><code>$ psql -c 'SELECT * from pg_stat_activity;' | grep cgspace | grep -c idle
66
</code></pre>
<ul>
<li>At the time, the current DSpace pool size was 50...</li>
<li>I reduced the pool back to the default of 30, and reduced the <code>db.maxidle</code> setting from 10 to 8</li>
</ul>

<h2 id="2015-11-29">2015-11-29</h2>
<ul>
<li>Still more alerts that CGSpace has been up and down all day</li>
<li>Current database settings for DSpace:</li>
</ul>
<pre><code>db.maxconnections = 30
db.maxwait = 5000
db.maxidle = 8
db.statementpool = true
</code></pre>
<ul>
<li>And idle connections:</li>
</ul>
<pre><code>$ psql -c 'SELECT * from pg_stat_activity;' | grep cgspace | grep -c idle
49
</code></pre>
<ul>
<li>Perhaps I need to start drastically increasing the connection limits—like to 300—to see if DSpace's thirst can ever be quenched</li>
<li>On another note, SUNScholar's notes suggest adjusting some other postgres variables: <a href="http://wiki.lib.sun.ac.za/index.php/SUNScholar/Optimisations/Database">http://wiki.lib.sun.ac.za/index.php/SUNScholar/Optimisations/Database</a></li>
<li>This might help with REST API speed (which I mentioned above and still need to test properly)</li>
</ul>
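<ul>
<li>Since I keep counting idle connections by hand every time monitoring complains, it might be worth logging the count on a schedule so I can correlate spikes with the up/down alerts; a hypothetical cron sketch for the postgres user's crontab (the log path is made up):</li>
</ul>
<pre><code># crontab -e as the postgres user (hypothetical helper; adjust the log path to taste)
# every five minutes, append a timestamp and the current idle connection count
*/5 * * * * echo "$(date -Is) $(psql -c 'SELECT * from pg_stat_activity;' | grep cgspace | grep -c idle)" >> /var/log/postgresql/cgspace-idle.log
</code></pre>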