<li>For now I have increased the database connection limit (<code>db.maxconnections</code>) from 60 to 90, run updates, and rebooted the server</li>
</ul>
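<p>To keep an eye on how close we get to the new limit, a quick count of open PostgreSQL connections (a rough check; it counts every backend, not just DSpace’s pool):</p>
<pre tabindex="0"><code>$ psql -c 'SELECT count(*) FROM pg_stat_activity;'
</code></pre>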
<h2id="2015-11-24">2015-11-24</h2>
<ul>
<li>CGSpace went down again</li>
<li>Getting emails from uptimeRobot and uptimeButler saying that it’s down, and Google Webmaster Tools is sending emails about an increase in crawl errors</li>
<li>Looks like there are still a bunch of idle PostgreSQL connections (see the <code>pg_stat_activity</code> output below)</li>
<li>The document root is relative to the xmlui app, so this gets a 404—I’m not sure why it doesn’t pass to <code>@tomcat</code></li>
<li>Anyways, I can’t find any URIs with path <code>/static</code>, and the more important point is to handle all the static theme assets, so we can just remove <code>static</code> from the regex for now (who cares if we can’t use nginx to send Etags for OAI CSS!)</li>
<li>Also, I noticed we aren’t setting CSP headers on the static assets, because in nginx headers are inherited in child blocks, but if you use <code>add_header</code> in a child block it doesn’t inherit the others</li>
<li>We simply need to add <code>include extra-security.conf;</code> to the above location block (but research and test first)</li>
<li>We should add WOFF assets to the list of things to set expires for (see the nginx sketch below)</li>
<li>I looked closer at the idle connections and saw that many have been idle for hours (current time on server is <code>2015-11-25T20:20:42+0000</code>):</li>
</ul>
<pretabindex="0"><code>$ psql -c 'SELECT * from pg_stat_activity;' | less -S
<ul>
<li>There is a relevant Jira issue about this: <a href="https://jira.duraspace.org/browse/DS-1458">https://jira.duraspace.org/browse/DS-1458</a></li>
<li>It seems there is some sense in changing DSpace’s default <code>db.maxidle</code> from unlimited (-1) to something like 8 (the Tomcat default) or 10 (the Confluence default)</li>
<li>Change <code>db.maxidle</code> from -1 to 10, reduce <code>db.maxconnections</code> from 90 to 50, and restart postgres and tomcat7</li>
<li>Also redeploy DSpace Test with a clean sync of CGSpace and mirror these database settings there as well</li>
<li>Also deploy the nginx fixes for the <code>try_files</code> location block as well as the expires block</li>
</ul>
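<p>For reference, a sketch of the relevant pool settings in <code>dspace.cfg</code> after this change (values as described above):</p>
<pre tabindex="0"><code>db.maxconnections = 50
db.maxidle = 10
</code></pre>
<p>And a rough sketch of what the combined nginx location block for the theme assets could look like, folding in the WOFF/expires handling and the <code>extra-security.conf</code> include mentioned above (the regex, expiry time, and Cache-Control header here are assumptions, not the deployed config):</p>
<pre tabindex="0"><code># hypothetical location block for static XMLUI theme assets
location ~* \.(?:ico|css|js|gif|jpe?g|png|woff)$ {
    try_files $uri @tomcat;
    expires 1M;
    add_header Cache-Control "public";
    # using add_header here stops inheritance of headers from the parent block,
    # so the security headers have to be included explicitly
    include extra-security.conf;
}
</code></pre>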
<h2id="2015-11-26">2015-11-26</h2>
<ul>
<li>CGSpace behaving much better since changing <code>db.maxidle</code> yesterday, but still two up/down notices from monitoring this morning (better than 50!)</li>
<li>CCAFS colleagues mentioned that the REST API is very slow, 24 seconds for one item</li>
<li>Not as bad for me, but still unsustainable if you have to fetch many items (see the timing sketch below)</li>
<li>Perhaps I need to start drastically increasing the connection limits—like to 300—to see if DSpace’s thirst can ever be quenched</li>
<li>On another note, SUNScholar’s notes suggest adjusting some other postgres variables: <a href="http://wiki.lib.sun.ac.za/index.php/SUNScholar/Optimisations/Database">http://wiki.lib.sun.ac.za/index.php/SUNScholar/Optimisations/Database</a></li>
<li>This might help with REST API speed (which I mentioned above and still need to do real tests)</li>
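</ul>
<p>A rough way to time a single REST API request while testing this (a sketch; the handle in the URL is just a placeholder, not a real item):</p>
<pre tabindex="0"><code>$ time curl -s -o /dev/null 'https://cgspace.cgiar.org/rest/handle/10568/12345'
</code></pre>
<ul>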