Mirror of https://github.com/alanorth/cgspace-notes.git
Commit: Add notes for 2019-12-17
@@ -31,7 +31,7 @@ Last week I had increased the limit from 30 to 60, which seemed to help, but now
 $ psql -c 'SELECT * from pg_stat_activity;' | grep idle | grep -c cgspace
 78
 "/>
-<meta name="generator" content="Hugo 0.60.1" />
+<meta name="generator" content="Hugo 0.61.0" />
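The idle-connection check quoted in the hunk above can also be done directly in SQL rather than grepping the full pg_stat_activity output. A minimal sketch, assuming PostgreSQL 9.2 or newer (where pg_stat_activity has a state column) and a DSpace database named cgspace:

$ psql -c "SELECT count(*) FROM pg_stat_activity WHERE datname = 'cgspace' AND state LIKE 'idle%';"

Filtering on the state column avoids counting rows where "idle" merely happens to appear somewhere in the query text.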
@@ -112,7 +112,7 @@ $ psql -c 'SELECT * from pg_stat_activity;' | grep idle | grep -c cgspac
 </p>
 </header>
-<h2 id="20151122">2015-11-22</h2>
+<h2 id="2015-11-22">2015-11-22</h2>
 <ul>
 <li>CGSpace went down</li>
 <li>Looks like DSpace exhausted its PostgreSQL connection pool</li>
@@ -123,7 +123,7 @@ $ psql -c 'SELECT * from pg_stat_activity;' | grep idle | grep -c cgspac
 </code></pre><ul>
 <li>For now I have increased the limit from 60 to 90, run updates, and rebooted the server</li>
 </ul>
-<h2 id="20151124">2015-11-24</h2>
+<h2 id="2015-11-24">2015-11-24</h2>
 <ul>
 <li>CGSpace went down again</li>
 <li>Getting emails from uptimeRobot and uptimeButler that it's down, and Google Webmaster Tools is sending emails that there is an increase in crawl errors</li>
@@ -134,7 +134,7 @@ $ psql -c 'SELECT * from pg_stat_activity;' | grep idle | grep -c cgspac
 </code></pre><ul>
 <li>For some reason the number of idle connections is very high since we upgraded to DSpace 5</li>
 </ul>
-<h2 id="20151125">2015-11-25</h2>
+<h2 id="2015-11-25">2015-11-25</h2>
 <ul>
 <li>Troubleshoot the DSpace 5 OAI breakage caused by nginx routing config</li>
 <li>The OAI application requests stylesheets and javascript files with the path <code>/oai/static/css</code>, which gets matched here:</li>
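For the OAI static-asset routing issue mentioned in the hunk above, a quick way to confirm whether requests under /oai/static reach the right nginx location block is to check the response status from the command line. A rough sketch; the hostname and the exact asset filename are assumptions, shown only to illustrate the check:

$ curl -sI https://cgspace.cgiar.org/oai/static/css/style.css | head -n 1

A 200 status line means the stylesheet is being served, while a 404 or an HTML error page suggests the request is still falling through to the wrong location block.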
@@ -177,7 +177,7 @@ datid | datname | pid | usesysid | usename | application_name | client_addr
 <li>Also redeploy DSpace Test with a clean sync of CGSpace and mirror these database settings there as well</li>
 <li>Also deploy the nginx fixes for the <code>try_files</code> location block as well as the expires block</li>
 </ul>
-<h2 id="20151126">2015-11-26</h2>
+<h2 id="2015-11-26">2015-11-26</h2>
 <ul>
 <li>CGSpace behaving much better since changing <code>db.maxidle</code> yesterday, but still two up/down notices from monitoring this morning (better than 50!)</li>
 <li>CCAFS colleagues mentioned that the REST API is very slow, 24 seconds for one item</li>
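The 24-second REST response mentioned in the hunk above is easy to measure from the command line. A minimal sketch using curl's timing output; the hostname and item identifier are assumptions, and the DSpace 5 REST API is assumed to be mounted under /rest:

$ curl -s -o /dev/null -w 'total: %{time_total}s\n' https://cgspace.cgiar.org/rest/items/12345

Comparing this number before and after the pool and nginx changes gives a rough check on whether the tuning helped.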
@@ -195,7 +195,7 @@ datid | datname | pid | usesysid | usename | application_name | client_addr
 <li>At the time, the current DSpace pool size was 50…</li>
 <li>I reduced the pool back to the default of 30, and reduced the <code>db.maxidle</code> settings from 10 to 8</li>
 </ul>
-<h2 id="20151129">2015-11-29</h2>
+<h2 id="2015-11-29">2015-11-29</h2>
 <ul>
 <li>Still more alerts that CGSpace has been up and down all day</li>
 <li>Current database settings for DSpace:</li>
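The database settings referenced in the last bullet above live in dspace.cfg. A sketch of how to inspect the relevant properties, assuming DSpace 5's default property names, with $DSPACE standing in for the installation directory:

$ grep -E '^db\.(maxconnections|maxidle|maxwait)' "$DSPACE"/config/dspace.cfg

db.maxconnections is the pool size discussed in the notes (default 30), and db.maxidle caps how many idle connections the pool keeps open.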