Add notes for 2019-05-05
@@ -15,7 +15,7 @@
 <meta name="twitter:card" content="summary"/>
 <meta name="twitter:title" content="Categories"/>
 <meta name="twitter:description" content="Documenting day-to-day work on the [CGSpace](https://cgspace.cgiar.org) repository."/>
-<meta name="generator" content="Hugo 0.55.3" />
+<meta name="generator" content="Hugo 0.55.5" />
@@ -108,15 +108,14 @@
 <li>Apparently if the item is in the <code>workflowitem</code> table it is submitted to a workflow</li>
 <li>And if it is in the <code>workspaceitem</code> table it is in the pre-submitted state</li>
 </ul></li>
-<li>The item seems to be in a pre-submitted state, so I tried to delete it from there:</li>
-</ul>
+<li><p>The item seems to be in a pre-submitted state, so I tried to delete it from there:</p>
 <pre><code>dspace=# DELETE FROM workspaceitem WHERE item_id=74648;
 DELETE 1
-</code></pre>
+</code></pre></li>
-<ul>
-<li>But after this I tried to delete the item from the XMLUI and it is <em>still</em> present…</li>
+<li><p>But after this I tried to delete the item from the XMLUI and it is <em>still</em> present…</p></li>
 </ul>
 <a href='https://alanorth.github.io/cgspace-notes/2019-05/'>Read more →</a>
 </article>
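A quick way to see which state an item is in is to query the two tables named above directly. The sketch below is illustrative only; it assumes the legacy DSpace 5.x schema referenced in these notes and reuses item 74648 from the excerpt:

<pre><code># Sketch: check where item 74648 lives before deleting anything
# (assumes the legacy DSpace 5.x tables named in the notes above)
psql -d dspace -c 'SELECT * FROM workspaceitem WHERE item_id=74648;'
psql -d dspace -c 'SELECT * FROM workflowitem WHERE item_id=74648;'
# Deleting the workspaceitem row does not touch the row in the item table,
# which may explain why the item was still visible in the XMLUI afterwards
psql -d dspace -c 'SELECT item_id, in_archive, withdrawn FROM item WHERE item_id=74648;'
</code></pre>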
@@ -143,27 +142,27 @@ DELETE 1
 <ul>
 <li>They asked if we had plans to enable RDF support in CGSpace</li>
 </ul></li>
-<li>There have been 4,400 more downloads of the CTA Spore publication from those strange Amazon IP addresses today
+<li><p>There have been 4,400 more downloads of the CTA Spore publication from those strange Amazon IP addresses today</p>
 <ul>
-<li>I suspected that some might not be successful, because the stats show less, but today they were all HTTP 200!</li>
-</ul></li>
-</ul>
+<li><p>I suspected that some might not be successful, because the stats show less, but today they were all HTTP 200!</p>
 <pre><code># cat /var/log/nginx/access.log /var/log/nginx/access.log.1 | grep 'Spore-192-EN-web.pdf' | grep -E '(18.196.196.108|18.195.78.144|18.195.218.6)' | awk '{print $9}' | sort | uniq -c | sort -n | tail -n 5
-4432 200
-</code></pre>
+4432 200
+</code></pre></li>
+</ul></li>
-<ul>
-<li>In the last two weeks there have been 47,000 downloads of this <em>same exact PDF</em> by these three IP addresses</li>
-<li>Apply country and region corrections and deletions on DSpace Test and CGSpace:</li>
-</ul>
+<li><p>In the last two weeks there have been 47,000 downloads of this <em>same exact PDF</em> by these three IP addresses</p></li>
+<li><p>Apply country and region corrections and deletions on DSpace Test and CGSpace:</p>
 <pre><code>$ ./fix-metadata-values.py -i /tmp/2019-02-21-fix-9-countries.csv -db dspace -u dspace -p 'fuuu' -f cg.coverage.country -m 228 -t ACTION -d
 $ ./fix-metadata-values.py -i /tmp/2019-02-21-fix-4-regions.csv -db dspace -u dspace -p 'fuuu' -f cg.coverage.region -m 231 -t action -d
 $ ./delete-metadata-values.py -i /tmp/2019-02-21-delete-2-countries.csv -db dspace -u dspace -p 'fuuu' -m 228 -f cg.coverage.country -d
 $ ./delete-metadata-values.py -i /tmp/2019-02-21-delete-1-region.csv -db dspace -u dspace -p 'fuuu' -m 231 -f cg.coverage.region -d
-</code></pre>
+</code></pre></li>
+</ul>
 <a href='https://alanorth.github.io/cgspace-notes/2019-04/'>Read more →</a>
 </article>
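The Spore pipeline above is easy to adapt to show daily totals instead of status codes. This is only a sketch, assuming the stock combined nginx log format in which the fourth field is the request timestamp:

<pre><code># Sketch: daily request counts for the same PDF from the same three IPs
# (combined log format assumed, so $4 is "[dd/Mon/yyyy:HH:MM:SS")
zcat --force /var/log/nginx/access.log* \
  | grep 'Spore-192-EN-web.pdf' \
  | grep -E '(18.196.196.108|18.195.78.144|18.195.218.6)' \
  | awk '{print substr($4, 2, 11)}' | sort | uniq -c
</code></pre>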
@@ -218,27 +217,27 @@ $ ./delete-metadata-values.py -i /tmp/2019-02-21-delete-1-region.csv -db dspace
 <ul>
 <li>Linode has alerted a few times since last night that the CPU usage on CGSpace (linode18) was high despite me increasing the alert threshold last week from 250% to 275%—I might need to increase it again!</li>
-<li>The top IPs before, during, and after this latest alert tonight were:</li>
-</ul>
+<li><p>The top IPs before, during, and after this latest alert tonight were:</p>
 <pre><code># zcat --force /var/log/nginx/*.log /var/log/nginx/*.log.1 | grep -E "01/Feb/2019:(17|18|19|20|21)" | awk '{print $1}' | sort | uniq -c | sort -n | tail -n 10
-245 207.46.13.5
-332 54.70.40.11
-385 5.143.231.38
-405 207.46.13.173
-405 207.46.13.75
-1117 66.249.66.219
-1121 35.237.175.180
-1546 5.9.6.51
-2474 45.5.186.2
-5490 85.25.237.71
-</code></pre>
+245 207.46.13.5
+332 54.70.40.11
+385 5.143.231.38
+405 207.46.13.173
+405 207.46.13.75
+1117 66.249.66.219
+1121 35.237.175.180
+1546 5.9.6.51
+2474 45.5.186.2
+5490 85.25.237.71
+</code></pre></li>
-<ul>
-<li><code>85.25.237.71</code> is the “Linguee Bot” that I first saw last month</li>
-<li>The Solr statistics the past few months have been very high and I was wondering if the web server logs also showed an increase</li>
-<li>There were just over 3 million accesses in the nginx logs last month:</li>
-</ul>
+<li><p><code>85.25.237.71</code> is the “Linguee Bot” that I first saw last month</p></li>
+<li><p>The Solr statistics the past few months have been very high and I was wondering if the web server logs also showed an increase</p></li>
+<li><p>There were just over 3 million accesses in the nginx logs last month:</p>
 <pre><code># time zcat --force /var/log/nginx/* | grep -cE "[0-9]{1,2}/Jan/2019"
 3018243
@@ -246,7 +245,8 @@ $ ./delete-metadata-values.py -i /tmp/2019-02-21-delete-1-region.csv -db dspace
 real 0m19.873s
 user 0m22.203s
 sys 0m1.979s
-</code></pre>
+</code></pre></li>
+</ul>
 <a href='https://alanorth.github.io/cgspace-notes/2019-02/'>Read more →</a>
 </article>
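To double-check that a heavy client such as 85.25.237.71 really is a bot, its user agents are as telling as its request counts. A rough sketch in the same style as the pipelines above, again assuming the combined log format (splitting on double quotes puts the user agent in the sixth field):

<pre><code># Sketch: which user agents does 85.25.237.71 send?
zcat --force /var/log/nginx/*.log /var/log/nginx/*.log.1 \
  | grep '^85.25.237.71 ' \
  | awk -F'"' '{print $6}' | sort | uniq -c | sort -n | tail -n 5
</code></pre>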
@@ -268,21 +268,22 @@ sys 0m1.979s
 <ul>
 <li>Linode alerted that CGSpace (linode18) had a higher outbound traffic rate than normal early this morning</li>
-<li>I don’t see anything interesting in the web server logs around that time though:</li>
-</ul>
+<li><p>I don’t see anything interesting in the web server logs around that time though:</p>
 <pre><code># zcat --force /var/log/nginx/*.log /var/log/nginx/*.log.1 | grep -E "02/Jan/2019:0(1|2|3)" | awk '{print $1}' | sort | uniq -c | sort -n | tail -n 10
-92 40.77.167.4
-99 210.7.29.100
-120 38.126.157.45
-177 35.237.175.180
-177 40.77.167.32
-216 66.249.75.219
-225 18.203.76.93
-261 46.101.86.248
-357 207.46.13.1
-903 54.70.40.11
-</code></pre>
+92 40.77.167.4
+99 210.7.29.100
+120 38.126.157.45
+177 35.237.175.180
+177 40.77.167.32
+216 66.249.75.219
+225 18.203.76.93
+261 46.101.86.248
+357 207.46.13.1
+903 54.70.40.11
+</code></pre></li>
+</ul>
 <a href='https://alanorth.github.io/cgspace-notes/2019-01/'>Read more →</a>
 </article>
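For an outbound-traffic alert, request counts per IP only tell part of the story; summing response sizes per client is closer to what the alert measures. A sketch, assuming the combined log format where the tenth field is $body_bytes_sent:

<pre><code># Sketch: total response bytes per client IP for the early-morning window
# (combined log format assumed: $1 is the IP, $10 is the body bytes sent)
zcat --force /var/log/nginx/*.log /var/log/nginx/*.log.1 \
  | grep -E "02/Jan/2019:0(1|2|3)" \
  | awk '{bytes[$1] += $10} END {for (ip in bytes) print bytes[ip], ip}' \
  | sort -n | tail -n 10
</code></pre>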
@@ -411,21 +412,24 @@ sys 0m1.979s
 <h2 id="2018-08-01">2018-08-01</h2>
 <ul>
-<li>DSpace Test had crashed at some point yesterday morning and I see the following in <code>dmesg</code>:</li>
-</ul>
+<li><p>DSpace Test had crashed at some point yesterday morning and I see the following in <code>dmesg</code>:</p>
 <pre><code>[Tue Jul 31 00:00:41 2018] Out of memory: Kill process 1394 (java) score 668 or sacrifice child
 [Tue Jul 31 00:00:41 2018] Killed process 1394 (java) total-vm:15601860kB, anon-rss:5355528kB, file-rss:0kB, shmem-rss:0kB
 [Tue Jul 31 00:00:41 2018] oom_reaper: reaped process 1394 (java), now anon-rss:0kB, file-rss:0kB, shmem-rss:0kB
-</code></pre>
+</code></pre></li>
-<ul>
-<li>Judging from the time of the crash it was probably related to the Discovery indexing that starts at midnight</li>
-<li>From the DSpace log I see that eventually Solr stopped responding, so I guess the <code>java</code> process that was OOM killed above was Tomcat’s</li>
-<li>I’m not sure why Tomcat didn’t crash with an OutOfMemoryError…</li>
-<li>Anyways, perhaps I should increase the JVM heap from 5120m to 6144m like we did a few months ago when we tried to run the whole CGSpace Solr core</li>
-<li>The server only has 8GB of RAM so we’ll eventually need to upgrade to a larger one because we’ll start starving the OS, PostgreSQL, and command line batch processes</li>
-<li>I ran all system updates on DSpace Test and rebooted it</li>
+<li><p>Judging from the time of the crash it was probably related to the Discovery indexing that starts at midnight</p></li>
+<li><p>From the DSpace log I see that eventually Solr stopped responding, so I guess the <code>java</code> process that was OOM killed above was Tomcat’s</p></li>
+<li><p>I’m not sure why Tomcat didn’t crash with an OutOfMemoryError…</p></li>
+<li><p>Anyways, perhaps I should increase the JVM heap from 5120m to 6144m like we did a few months ago when we tried to run the whole CGSpace Solr core</p></li>
+<li><p>The server only has 8GB of RAM so we’ll eventually need to upgrade to a larger one because we’ll start starving the OS, PostgreSQL, and command line batch processes</p></li>
+<li><p>I ran all system updates on DSpace Test and rebooted it</p></li>
 </ul>
 <a href='https://alanorth.github.io/cgspace-notes/2018-08/'>Read more →</a>
 </article>
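Two quick checks when chasing an OOM kill like the one above are whether the kernel has reaped anything recently and which heap flags the JVM was actually started with. A sketch using only standard tools:

<pre><code># Sketch: look for recent OOM kills and the JVM heap flags currently in effect
dmesg -T | grep -E 'Out of memory|oom_reaper' | tail -n 5
ps -ef | grep -o -- '-Xm[sx][0-9]*[mg]' | sort | uniq -c
</code></pre>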