Add notes for 2019-11-28

2019-11-28 17:30:45 +02:00
parent 1f2be05583
commit 6bae7849e6
90 changed files with 14955 additions and 21478 deletions


@@ -8,7 +8,6 @@
<meta property="og:title" content="December, 2015" />
<meta property="og:description" content="2015-12-02
Replace lzop with xz in log compression cron jobs on DSpace Test—it uses less space:
# cd /home/dspacetest.cgiar.org/log
@@ -16,7 +15,6 @@ Replace lzop with xz in log compression cron jobs on DSpace Test—it uses less
-rw-rw-r-- 1 tomcat7 tomcat7 2.0M Nov 18 23:59 dspace.log.2015-11-18
-rw-rw-r-- 1 tomcat7 tomcat7 387K Nov 18 23:59 dspace.log.2015-11-18.lzo
-rw-rw-r-- 1 tomcat7 tomcat7 169K Nov 18 23:59 dspace.log.2015-11-18.xz
" />
<meta property="og:type" content="article" />
<meta property="og:url" content="https://alanorth.github.io/cgspace-notes/2015-12/" />
@@ -27,7 +25,6 @@ Replace lzop with xz in log compression cron jobs on DSpace Test—it uses less
<meta name="twitter:title" content="December, 2015"/>
<meta name="twitter:description" content="2015-12-02
Replace lzop with xz in log compression cron jobs on DSpace Test—it uses less space:
# cd /home/dspacetest.cgiar.org/log
@@ -35,9 +32,8 @@ Replace lzop with xz in log compression cron jobs on DSpace Test—it uses less
-rw-rw-r-- 1 tomcat7 tomcat7 2.0M Nov 18 23:59 dspace.log.2015-11-18
-rw-rw-r-- 1 tomcat7 tomcat7 387K Nov 18 23:59 dspace.log.2015-11-18.lzo
-rw-rw-r-- 1 tomcat7 tomcat7 169K Nov 18 23:59 dspace.log.2015-11-18.xz
"/>
<meta name="generator" content="Hugo 0.59.1" />
<meta name="generator" content="Hugo 0.60.0" />
@@ -118,42 +114,34 @@ Replace lzop with xz in log compression cron jobs on DSpace Test—it uses less
</p>
</header>
<h2 id="2015-12-02">2015-12-02</h2>
<h2 id="20151202">2015-12-02</h2>
<ul>
<li><p>Replace <code>lzop</code> with <code>xz</code> in log compression cron jobs on DSpace Test—it uses less space:</p>
<li>Replace <code>lzop</code> with <code>xz</code> in log compression cron jobs on DSpace Test—it uses less space:</li>
</ul>
<pre><code># cd /home/dspacetest.cgiar.org/log
# ls -lh dspace.log.2015-11-18*
-rw-rw-r-- 1 tomcat7 tomcat7 2.0M Nov 18 23:59 dspace.log.2015-11-18
-rw-rw-r-- 1 tomcat7 tomcat7 387K Nov 18 23:59 dspace.log.2015-11-18.lzo
-rw-rw-r-- 1 tomcat7 tomcat7 169K Nov 18 23:59 dspace.log.2015-11-18.xz
</code></pre></li>
</ul>
<ul>
</code></pre><ul>
<li>I had used lrzip once, but it needs more memory and is harder to use as it requires the lrztar wrapper</li>
<li>Need to remember to go check if everything is ok in a few days and then change CGSpace</li>
<li>CGSpace went down again (due to PostgreSQL idle connections of course)</li>
<li><p>Current database settings for DSpace are <code>db.maxconnections = 30</code> and <code>db.maxidle = 8</code>, yet idle connections are exceeding this:</p>
<li>Current database settings for DSpace are <code>db.maxconnections = 30</code> and <code>db.maxidle = 8</code>, yet idle connections are exceeding this:</li>
</ul>
<pre><code>$ psql -c 'SELECT * from pg_stat_activity;' | grep cgspace | grep -c idle
39
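# (a sketch, assuming PostgreSQL 9.2+ so pg_stat_activity has a state column) break the connections down by user and state:
$ psql -c 'SELECT usename, state, count(*) FROM pg_stat_activity GROUP BY usename, state ORDER BY count(*) DESC;'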
</code></pre></li>
<li><p>I restarted PostgreSQL and Tomcat and it&rsquo;s back</p></li>
<li><p>On a related note of why CGSpace is so slow, I decided to finally try the <code>pgtune</code> script to tune the postgres settings:</p>
</code></pre><ul>
<li>I restarted PostgreSQL and Tomcat and it's back</li>
<li>On a related note of why CGSpace is so slow, I decided to finally try the <code>pgtune</code> script to tune the postgres settings:</li>
</ul>
<pre><code># apt-get install pgtune
# pgtune -i /etc/postgresql/9.3/main/postgresql.conf -o postgresql.conf-pgtune
# mv /etc/postgresql/9.3/main/postgresql.conf /etc/postgresql/9.3/main/postgresql.conf.orig
# mv postgresql.conf-pgtune /etc/postgresql/9.3/main/postgresql.conf
</code></pre></li>
<li><p>It introduced the following new settings:</p>
</code></pre><ul>
<li>It introduced the following new settings:</li>
</ul>
<pre><code>default_statistics_target = 50
maintenance_work_mem = 480MB
constraint_exclusion = on
@@ -164,12 +152,10 @@ wal_buffers = 8MB
checkpoint_segments = 16
shared_buffers = 1920MB
max_connections = 80
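# (a sketch, assuming local psql access) the values actually in effect can be confirmed after a restart with SHOW, e.g.:
#   psql -c 'SHOW shared_buffers;'
#   psql -c 'SHOW max_connections;'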
</code></pre></li>
<li><p>Now I need to go read PostgreSQL docs about these options, and watch memory settings in munin etc</p></li>
<li><p>For what it&rsquo;s worth, now the REST API should be faster (because of these PostgreSQL tweaks):</p>
</code></pre><ul>
<li>Now I need to go read PostgreSQL docs about these options, and watch memory settings in munin etc</li>
<li>For what it's worth, now the REST API should be faster (because of these PostgreSQL tweaks):</li>
</ul>
<pre><code>$ curl -o /dev/null -s -w %{time_total}\\n https://cgspace.cgiar.org/rest/handle/10568/32802?expand=all
1.474
$ curl -o /dev/null -s -w %{time_total}\\n https://cgspace.cgiar.org/rest/handle/10568/32802?expand=all
@@ -180,40 +166,29 @@ $ curl -o /dev/null -s -w %{time_total}\\n https://cgspace.cgiar.org/rest/handle
1.995
$ curl -o /dev/null -s -w %{time_total}\\n https://cgspace.cgiar.org/rest/handle/10568/32802?expand=all
1.786
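# (a sketch, assuming GNU seq and awk) average a handful of runs instead of eyeballing them:
$ for i in $(seq 1 5); do curl -o /dev/null -s -w '%{time_total}\n' https://cgspace.cgiar.org/rest/handle/10568/32802?expand=all; done | awk '{sum+=$1} END {print sum/NR}'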
</code></pre></li>
<li><p>Last week it was an average of 8 seconds&hellip; now this is <sup>1</sup>&frasl;<sub>4</sub> of that</p></li>
<li><p>CCAFS noticed that one of their items displays only the Atmire statlets: <a href="https://cgspace.cgiar.org/handle/10568/42445">https://cgspace.cgiar.org/handle/10568/42445</a></p></li>
</code></pre><ul>
<li>Last week it was an average of 8 seconds&hellip; now this is 1/4 of that</li>
<li>CCAFS noticed that one of their items displays only the Atmire statlets: <a href="https://cgspace.cgiar.org/handle/10568/42445">https://cgspace.cgiar.org/handle/10568/42445</a></li>
</ul>
<p><img src="/cgspace-notes/2015/12/ccafs-item-no-metadata.png" alt="CCAFS item" /></p>
<p><img src="/cgspace-notes/2015/12/ccafs-item-no-metadata.png" alt="CCAFS item"></p>
<ul>
<li>The authorizations for the item are all public READ, and I don&rsquo;t see any errors in dspace.log when browsing that item</li>
<li>I filed a ticket on Atmire&rsquo;s issue tracker</li>
<li>I also filed a ticket on Atmire&rsquo;s issue tracker for the PostgreSQL stuff</li>
<li>The authorizations for the item are all public READ, and I don't see any errors in dspace.log when browsing that item</li>
<li>I filed a ticket on Atmire's issue tracker</li>
<li>I also filed a ticket on Atmire's issue tracker for the PostgreSQL stuff</li>
</ul>
<h2 id="2015-12-03">2015-12-03</h2>
<h2 id="20151203">2015-12-03</h2>
<ul>
<li>CGSpace very slow, and monitoring emailing me to say it's down, even though I can load the page (very slowly)</li>
<li><p>Idle postgres connections look like this (with no change in DSpace db settings lately):</p>
<li>Idle postgres connections look like this (with no change in DSpace db settings lately):</li>
</ul>
<pre><code>$ psql -c 'SELECT * from pg_stat_activity;' | grep cgspace | grep -c idle
29
</code></pre></li>
<li><p>I restarted Tomcat and postgres&hellip;</p></li>
<li><p>Atmire commented that we should raise the JVM heap size by ~500M, so it is now <code>-Xms3584m -Xmx3584m</code></p></li>
<li><p>We weren&rsquo;t out of heap yet, but it&rsquo;s probably fair enough that the DSpace 5 upgrade (and new Atmire modules) requires more memory so it&rsquo;s ok</p></li>
<li><p>A possible side effect is that I see that the REST API is twice as fast for the request above now:</p>
</code></pre><ul>
<li>I restarted Tomcat and postgres&hellip;</li>
<li>Atmire commented that we should raise the JVM heap size by ~500M, so it is now <code>-Xms3584m -Xmx3584m</code></li>
<li>We weren't out of heap yet, but it's probably fair enough that the DSpace 5 upgrade (and new Atmire modules) requires more memory so it's ok</li>
<li>A possible side effect is that I see that the REST API is twice as fast for the request above now:</li>
</ul>
<pre><code>$ curl -o /dev/null -s -w %{time_total}\\n https://cgspace.cgiar.org/rest/handle/10568/32802?expand=all
1.368
$ curl -o /dev/null -s -w %{time_total}\\n https://cgspace.cgiar.org/rest/handle/10568/32802?expand=all
@@ -226,37 +201,26 @@ $ curl -o /dev/null -s -w %{time_total}\\n https://cgspace.cgiar.org/rest/handle
0.806
$ curl -o /dev/null -s -w %{time_total}\\n https://cgspace.cgiar.org/rest/handle/10568/32802?expand=all
0.854
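# (a sketch, assuming a Debian-style Tomcat 7 install) confirm the heap flags the running JVM was actually started with:
$ ps -ef | grep '[t]omcat' | grep -o -- '-Xm[sx][0-9]*m'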
</code></pre></li>
</ul>
<h2 id="2015-12-05">2015-12-05</h2>
</code></pre><h2 id="20151205">2015-12-05</h2>
<ul>
<li>CGSpace has been up and down all day and REST API is completely unresponsive</li>
<li><p>PostgreSQL idle connections are currently:</p>
<li>PostgreSQL idle connections are currently:</li>
</ul>
<pre><code>postgres@linode01:~$ psql -c 'SELECT * from pg_stat_activity;' | grep cgspace | grep -c idle
28
</code></pre></li>
<li><p>I have reverted all the pgtune tweaks from the other day, as they didn&rsquo;t fix the stability issues, so I&rsquo;d rather not have them introducing more variables into the equation</p></li>
<li><p>The PostgreSQL stats from Munin all point to something database-related with the DSpace 5 upgrade around mid-to-late November</p></li>
</code></pre><ul>
<li>I have reverted all the pgtune tweaks from the other day, as they didn't fix the stability issues, so I'd rather not have them introducing more variables into the equation (a revert sketch follows the Munin graphs below)</li>
<li>The PostgreSQL stats from Munin all point to something database-related with the DSpace 5 upgrade around mid-to-late November</li>
</ul>
<p><img src="/cgspace-notes/2015/12/postgres_bgwriter-year.png" alt="PostgreSQL bgwriter (year)" />
<img src="/cgspace-notes/2015/12/postgres_cache_cgspace-year.png" alt="PostgreSQL cache (year)" />
<img src="/cgspace-notes/2015/12/postgres_locks_cgspace-year.png" alt="PostgreSQL locks (year)" />
<img src="/cgspace-notes/2015/12/postgres_scans_cgspace-year.png" alt="PostgreSQL scans (year)" /></p>
<h2 id="2015-12-07">2015-12-07</h2>
<p><img src="/cgspace-notes/2015/12/postgres_bgwriter-year.png" alt="PostgreSQL bgwriter (year)">
<img src="/cgspace-notes/2015/12/postgres_cache_cgspace-year.png" alt="PostgreSQL cache (year)">
<img src="/cgspace-notes/2015/12/postgres_locks_cgspace-year.png" alt="PostgreSQL locks (year)">
<img src="/cgspace-notes/2015/12/postgres_scans_cgspace-year.png" alt="PostgreSQL scans (year)"></p>
<h2 id="20151207">2015-12-07</h2>
<ul>
<li>Atmire sent <a href="https://github.com/ilri/DSpace/pull/161">some fixes</a> to DSpace&rsquo;s REST API code that was leaving contexts open (causing the slow performance and database issues)</li>
<li><p>After deploying the fix to CGSpace the REST API is consistently faster:</p>
<li>Atmire sent <a href="https://github.com/ilri/DSpace/pull/161">some fixes</a> to DSpace's REST API code that was leaving contexts open (causing the slow performance and database issues)</li>
<li>After deploying the fix to CGSpace the REST API is consistently faster:</li>
</ul>
<pre><code>$ curl -o /dev/null -s -w %{time_total}\\n https://cgspace.cgiar.org/rest/handle/10568/32802?expand=all
0.675
$ curl -o /dev/null -s -w %{time_total}\\n https://cgspace.cgiar.org/rest/handle/10568/32802?expand=all
@@ -267,14 +231,10 @@ $ curl -o /dev/null -s -w %{time_total}\\n https://cgspace.cgiar.org/rest/handle
0.566
$ curl -o /dev/null -s -w %{time_total}\\n https://cgspace.cgiar.org/rest/handle/10568/32802?expand=all
0.497
</code></pre></li>
</ul>
<h2 id="2015-12-08">2015-12-08</h2>
</code></pre><h2 id="20151208">2015-12-08</h2>
<ul>
<li>Switch CGSpace log compression cron jobs from using lzop to xz—the compression isn&rsquo;t as good, but it&rsquo;s much faster and causes less IO/CPU load</li>
<li>Since we figured out (and fixed) the cause of the performance issue, I reverted Google Bot&rsquo;s crawl rate to the &ldquo;Let Google optimize&rdquo; setting</li>
<li>Switch CGSpace log compression cron jobs from using lzop to xz—the compression isn't as good, but it's much faster and causes less IO/CPU load (see the sketch below)</li>
<li>Since we figured out (and fixed) the cause of the performance issue, I reverted Google Bot's crawl rate to the &ldquo;Let Google optimize&rdquo; setting</li>
</ul>
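<p>A minimal sketch of one of these compression jobs (the log path and schedule are assumptions, following the DSpace Test layout shown above):</p>
<pre><code>0 0 * * * xz /home/cgspace.cgiar.org/log/dspace.log.$(date --date='1 day ago' +\%Y-\%m-\%d)
</code></pre>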