<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no">
<meta property="og:title" content="November, 2017" />
<meta property="og:description" content="2017-11-01
The CORE developers responded to say they are looking into their bot not respecting our robots.txt
2017-11-02
Today there have been no hits by CORE and no alerts from Linode (coincidence?)
# grep -c &quot;CORE&quot; /var/log/nginx/access.log
0
Generate list of authors on CGSpace for Peter to go through and correct:
dspace=# \copy (select distinct text_value, count(*) as count from metadatavalue where metadata_field_id = (select metadata_field_id from metadatafieldregistry where element = &#39;contributor&#39; and qualifier = &#39;author&#39;) AND resource_type_id = 2 group by text_value order by count desc) to /tmp/authors.csv with csv;
COPY 54701
" />
<meta property="og:type" content="article" />
<meta property="og:url" content="https://alanorth.github.io/cgspace-notes/2017-11/" />
<meta property="article:published_time" content="2017-11-02T09:37:54+02:00" />
<meta property="article:modified_time" content="2018-04-10T08:27:55+03:00" />
<meta name="twitter:card" content="summary"/>
<meta name="twitter:title" content="November, 2017"/>
<meta name="twitter:description" content="2017-11-01
The CORE developers responded to say they are looking into their bot not respecting our robots.txt
2017-11-02
Today there have been no hits by CORE and no alerts from Linode (coincidence?)
# grep -c &quot;CORE&quot; /var/log/nginx/access.log
0
Generate list of authors on CGSpace for Peter to go through and correct:
dspace=# \copy (select distinct text_value, count(*) as count from metadatavalue where metadata_field_id = (select metadata_field_id from metadatafieldregistry where element = &#39;contributor&#39; and qualifier = &#39;author&#39;) AND resource_type_id = 2 group by text_value order by count desc) to /tmp/authors.csv with csv;
COPY 54701
"/>
<meta name="generator" content="Hugo 0.56.3" />
<script type="application/ld+json">
{
"@context": "http://schema.org",
"@type": "BlogPosting",
"headline": "November, 2017",
"url": "https:\/\/alanorth.github.io\/cgspace-notes\/2017-11\/",
"wordCount": "5428",
"datePublished": "2017-11-02T09:37:54\x2b02:00",
"dateModified": "2018-04-10T08:27:55\x2b03:00",
"author": {
"@type": "Person",
"name": "Alan Orth"
},
"keywords": "Notes"
}
</script>
<link rel="canonical" href="https://alanorth.github.io/cgspace-notes/2017-11/">
<title>November, 2017 | CGSpace Notes</title>
<!-- combined, minified CSS -->
<link href="https://alanorth.github.io/cgspace-notes/css/style.css" rel="stylesheet" integrity="sha384-G5B34w7DFTumWTswxYzTX7NWfbvQEg1HbFFEg6ItN03uTAAoS2qkPS/fu3LhuuSA" crossorigin="anonymous">
<!-- RSS 2.0 feed -->
</head>
<body>
<div class="blog-masthead">
<div class="container">
<nav class="nav blog-nav">
<a class="nav-link " href="https://alanorth.github.io/cgspace-notes/">Home</a>
</nav>
</div>
</div>
<header class="blog-header">
<div class="container">
<h1 class="blog-title"><a href="https://alanorth.github.io/cgspace-notes/" rel="home">CGSpace Notes</a></h1>
<p class="lead blog-description">Documenting day-to-day work on the <a href="https://cgspace.cgiar.org">CGSpace</a> repository.</p>
</div>
</header>
<div class="container">
<div class="row">
<div class="col-sm-8 blog-main">
<article class="blog-post">
<header>
<h2 class="blog-post-title"><a href="https://alanorth.github.io/cgspace-notes/2017-11/">November, 2017</a></h2>
<p class="blog-post-meta"><time datetime="2017-11-02T09:37:54&#43;02:00">Thu Nov 02, 2017</time> by Alan Orth in
<i class="fa fa-tag" aria-hidden="true"></i>&nbsp;<a href="/cgspace-notes/tags/notes" rel="tag">Notes</a>
</p>
</header>
<h2 id="2017-11-01">2017-11-01</h2>
<ul>
<li>The CORE developers responded to say they are looking into their bot not respecting our robots.txt</li>
</ul>
<h2 id="2017-11-02">2017-11-02</h2>
<ul>
<li><p>Today there have been no hits by CORE and no alerts from Linode (coincidence?)</p>
<pre><code># grep -c &quot;CORE&quot; /var/log/nginx/access.log
0
</code></pre></li>
<li><p>Generate list of authors on CGSpace for Peter to go through and correct:</p>
<pre><code>dspace=# \copy (select distinct text_value, count(*) as count from metadatavalue where metadata_field_id = (select metadata_field_id from metadatafieldregistry where element = 'contributor' and qualifier = 'author') AND resource_type_id = 2 group by text_value order by count desc) to /tmp/authors.csv with csv;
COPY 54701
</code></pre></li>
</ul>
<ul>
<li>Abenet asked if it would be possible to generate a report of items in Listing and Reports that had &ldquo;International Fund for Agricultural Development&rdquo; as the <em>only</em> investor</li>
<li>I opened a ticket with Atmire to ask if this was possible: <a href="https://tracker.atmire.com/tickets-cgiar-ilri/view-ticket?id=540">https://tracker.atmire.com/tickets-cgiar-ilri/view-ticket?id=540</a></li>
<li>Work on making the thumbnails in the item view clickable</li>
<li><p>Basically, once you read the METS XML for an item it becomes easy to trace the structure to find the bitstream link</p>
<pre><code>//mets:fileSec/mets:fileGrp[@USE='CONTENT']/mets:file/mets:FLocat[@LOCTYPE='URL']/@xlink:href
</code></pre></li>
<li><p>METS XML is available for all items with this pattern: /metadata/handle/10568/95947/mets.xml (see the quick check after this list)</p></li>
<li><p>I whipped up a quick hack to print a clickable link with this URL on the thumbnail but it needs to check a few corner cases, like when there is a thumbnail but no content bitstream!</p></li>
<li><p>Help proof fifty-three CIAT records for Sisay: <a href="https://dspacetest.cgiar.org/handle/10568/95895">https://dspacetest.cgiar.org/handle/10568/95895</a></p></li>
<li><p>A handful of issues with <code>cg.place</code> using a format like &ldquo;Lima, PE&rdquo; instead of &ldquo;Lima, Peru&rdquo;</p></li>
<li><p>Also, some dates with completely invalid formats like &ldquo;2010- 06&rdquo; and &ldquo;2011-3-28&rdquo;</p></li>
<li><p>I also collapsed some consecutive whitespace on a handful of fields</p></li>
</ul>
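<p>As a quick check of that XPath from the shell (a sketch, assuming <code>xmllint</code> is installed; the handle is just an example, and <code>local-name()</code> avoids having to register the <code>mets</code> and <code>xlink</code> namespaces):</p>
<pre><code>$ curl -s 'https://cgspace.cgiar.org/metadata/handle/10568/95947/mets.xml' \
  | xmllint --xpath &quot;string(//*[local-name()='fileGrp'][@USE='CONTENT']//*[local-name()='FLocat']/@*[local-name()='href'])&quot; -
</code></pre>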
<h2 id="2017-11-03">2017-11-03</h2>
<ul>
<li>Atmire got back to us to say that they estimate it will take two days of labor to implement the change to Listings and Reports</li>
<li>I said I&rsquo;d ask Abenet if she wants that feature</li>
</ul>
<h2 id="2017-11-04">2017-11-04</h2>
<ul>
<li>I finished looking through Sisay&rsquo;s CIAT records for the &ldquo;Alianzas de Aprendizaje&rdquo; data</li>
<li>I corrected about half of the authors to standardize them</li>
<li>Linode emailed this morning to say that the CPU usage was high again, this time at 6:14AM</li>
<li>It&rsquo;s the first time in a few days that this has happened</li>
<li><p>I had a look to see what was going on, but it isn&rsquo;t the CORE bot:</p>
<pre><code># awk '{print $1}' /var/log/nginx/access.log | sort -n | uniq -c | sort -h | tail
306 68.180.229.31
323 61.148.244.116
414 66.249.66.91
507 40.77.167.16
618 157.55.39.161
652 207.46.13.103
666 157.55.39.254
1173 104.196.152.243
1737 66.249.66.90
23101 138.201.52.218
</code></pre></li>
<li><p>138.201.52.218 is from some Hetzner server, and I see it making 40,000 requests yesterday too, but none before that:</p>
<pre><code># zgrep -c 138.201.52.218 /var/log/nginx/access.log*
/var/log/nginx/access.log:24403
/var/log/nginx/access.log.1:45958
/var/log/nginx/access.log.2.gz:0
/var/log/nginx/access.log.3.gz:0
/var/log/nginx/access.log.4.gz:0
/var/log/nginx/access.log.5.gz:0
/var/log/nginx/access.log.6.gz:0
</code></pre></li>
<li><p>It&rsquo;s clearly a bot as it&rsquo;s making tens of thousands of requests, but it&rsquo;s using a &ldquo;normal&rdquo; user agent:</p>
<pre><code>Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/41.0.2227.0 Safari/537.36
</code></pre></li>
<li><p>For now I don&rsquo;t know what this user is!</p></li>
</ul>
<h2 id="2017-11-05">2017-11-05</h2>
<ul>
<li>Peter asked if I could fix the appearance of &ldquo;International Livestock Research Institute&rdquo; in the author lookup during item submission</li>
<li>It looks to be just an issue with the user interface expecting authors to have both a first and last name:</li>
</ul>
<p><img src="/cgspace-notes/2017/11/author-lookup.png" alt="Author lookup" />
<img src="/cgspace-notes/2017/11/add-author.png" alt="Add author" /></p>
<ul>
<li><p>But in the database the authors are correct (none with weird <code>, /</code> characters):</p>
<pre><code>dspace=# select distinct text_value, authority, confidence from metadatavalue value where resource_type_id=2 and metadata_field_id=3 and text_value like 'International Livestock Research Institute%';
text_value | authority | confidence
--------------------------------------------+--------------------------------------+------------
International Livestock Research Institute | 8f3865dc-d056-4aec-90b7-77f49ab4735c | 0
International Livestock Research Institute | f4db1627-47cd-4699-b394-bab7eba6dadc | 0
International Livestock Research Institute | | -1
International Livestock Research Institute | 8f3865dc-d056-4aec-90b7-77f49ab4735c | 600
International Livestock Research Institute | f4db1627-47cd-4699-b394-bab7eba6dadc | -1
International Livestock Research Institute | | 600
International Livestock Research Institute | 8f3865dc-d056-4aec-90b7-77f49ab4735c | -1
International Livestock Research Institute | 8f3865dc-d056-4aec-90b7-77f49ab4735c | 500
(8 rows)
</code></pre></li>
<li><p>So I&rsquo;m not sure if this is just a graphical glitch or if editors have to edit this metadata field prior to approval</p></li>
<li><p>Looking at monitoring Tomcat&rsquo;s JVM heap with Prometheus, it looks like we need to use JMX + <a href="https://github.com/prometheus/jmx_exporter">jmx_exporter</a></p></li>
<li><p>This guide shows how to <a href="https://geekflare.com/enable-jmx-tomcat-to-monitor-administer/">enable JMX in Tomcat</a> by modifying <code>CATALINA_OPTS</code> (see the sketch after this list)</p></li>
<li><p>I was able to successfully connect to my local Tomcat with jconsole!</p></li>
</ul>
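<p>Roughly what I added to <code>CATALINA_OPTS</code> for local testing (a sketch; the port is arbitrary, and SSL/authentication are only disabled here because the connection is local or tunneled over SSH):</p>
<pre><code>CATALINA_OPTS=&quot;$CATALINA_OPTS -Dcom.sun.management.jmxremote \
  -Dcom.sun.management.jmxremote.port=9000 \
  -Dcom.sun.management.jmxremote.ssl=false \
  -Dcom.sun.management.jmxremote.authenticate=false&quot;
</code></pre>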
<h2 id="2017-11-07">2017-11-07</h2>
<ul>
<li>CGSpace went down and up a few times this morning, first around 3AM, then around 7AM</li>
<li>Tsega had to restart Tomcat 7 to fix it temporarily</li>
<li><p>I will start by looking at bot usage (access.log.1 includes usage until 6AM today):</p>
<pre><code># cat /var/log/nginx/access.log.1 | awk '{print $1}' | sort -n | uniq -c | sort -h | tail
619 65.49.68.184
840 65.49.68.199
924 66.249.66.91
1131 68.180.229.254
1583 66.249.66.90
1953 207.46.13.103
1999 207.46.13.80
2021 157.55.39.161
2034 207.46.13.36
4681 104.196.152.243
</code></pre></li>
<li><p>104.196.152.243 seems to be a top scraper for a few weeks now:</p>
<pre><code># zgrep -c 104.196.152.243 /var/log/nginx/access.log*
/var/log/nginx/access.log:336
/var/log/nginx/access.log.1:4681
/var/log/nginx/access.log.2.gz:3531
/var/log/nginx/access.log.3.gz:3532
/var/log/nginx/access.log.4.gz:5786
/var/log/nginx/access.log.5.gz:8542
/var/log/nginx/access.log.6.gz:6988
/var/log/nginx/access.log.7.gz:7517
/var/log/nginx/access.log.8.gz:7211
/var/log/nginx/access.log.9.gz:2763
</code></pre></li>
<li><p>This user is responsible for hundreds and sometimes thousands of Tomcat sessions:</p>
<pre><code>$ grep 104.196.152.243 dspace.log.2017-11-07 | grep -o -E 'session_id=[A-Z0-9]{32}' | sort -n | uniq | wc -l
954
$ grep 104.196.152.243 dspace.log.2017-11-03 | grep -o -E 'session_id=[A-Z0-9]{32}' | sort -n | uniq | wc -l
6199
$ grep 104.196.152.243 dspace.log.2017-11-01 | grep -o -E 'session_id=[A-Z0-9]{32}' | sort -n | uniq | wc -l
7051
</code></pre></li>
<li><p>The worst thing is that this user never specifies a user agent string so we can&rsquo;t lump it in with the other bots using the Tomcat Crawler Session Manager Valve</p></li>
<li><p>They don&rsquo;t request dynamic URLs like &ldquo;/discover&rdquo; but they seem to be fetching handles from XMLUI instead of REST (and some with <code>//handle</code>, note the regex below):</p>
<pre><code># grep -c 104.196.152.243 /var/log/nginx/access.log.1
4681
# grep 104.196.152.243 /var/log/nginx/access.log.1 | grep -c -P 'GET //?handle'
4618
</code></pre></li>
<li><p>I just realized that <code>ciat.cgiar.org</code> points to 104.196.152.243, so I should contact Leroy from CIAT to see if we can change their scraping behavior</p></li>
<li><p>The next IP (207.46.13.36) seems to be Microsoft&rsquo;s bingbot, but all its requests specify the &ldquo;bingbot&rdquo; user agent and there are no requests for dynamic URLs that are forbidden, like &ldquo;/discover&rdquo; (a reverse DNS lookup, sketched at the end of today&rsquo;s notes, is another way to double check such claims):</p>
<pre><code>$ grep -c 207.46.13.36 /var/log/nginx/access.log.1
2034
# grep 207.46.13.36 /var/log/nginx/access.log.1 | grep -c &quot;GET /discover&quot;
0
</code></pre></li>
<li><p>The next IP (157.55.39.161) also seems to be bingbot, and none of its requests are for URLs forbidden by robots.txt either:</p>
<pre><code># grep 157.55.39.161 /var/log/nginx/access.log.1 | grep -c &quot;GET /discover&quot;
0
</code></pre></li>
<li><p>The next few seem to be bingbot as well, and they declare a proper user agent and do not request dynamic URLs like &ldquo;/discover&rdquo;:</p>
<pre><code># grep -c -E '207.46.13.[0-9]{2,3}' /var/log/nginx/access.log.1
5997
# grep -E '207.46.13.[0-9]{2,3}' /var/log/nginx/access.log.1 | grep -c &quot;bingbot&quot;
5988
# grep -E '207.46.13.[0-9]{2,3}' /var/log/nginx/access.log.1 | grep -c &quot;GET /discover&quot;
0
</code></pre></li>
<li><p>The next few seem to be Googlebot, and they declare a proper user agent and do not request dynamic URLs like &ldquo;/discover&rdquo;:</p>
<pre><code># grep -c -E '66.249.66.[0-9]{2,3}' /var/log/nginx/access.log.1
3048
# grep -E '66.249.66.[0-9]{2,3}' /var/log/nginx/access.log.1 | grep -c Google
3048
# grep -E '66.249.66.[0-9]{2,3}' /var/log/nginx/access.log.1 | grep -c &quot;GET /discover&quot;
0
</code></pre></li>
<li><p>The next seems to be Yahoo, which declares a proper user agent and does not request dynamic URLs like &ldquo;/discover&rdquo;:</p>
<pre><code># grep -c 68.180.229.254 /var/log/nginx/access.log.1
1131
# grep 68.180.229.254 /var/log/nginx/access.log.1 | grep -c &quot;GET /discover&quot;
0
</code></pre></li>
<li><p>The last of the top ten IPs seems to be some bot with a weird user agent, but they are not behaving too well:</p>
<pre><code># grep -c -E '65.49.68.[0-9]{3}' /var/log/nginx/access.log.1
2950
# grep -E '65.49.68.[0-9]{3}' /var/log/nginx/access.log.1 | grep -c &quot;GET /discover&quot;
330
</code></pre></li>
<li><p>Their user agents vary, ie:</p>
<ul>
<li><code>Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/61.0.3163.100 Safari/537.36</code></li>
<li><code>Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.11 (KHTML, like Gecko) Chrome/23.0.1271.97 Safari/537.11</code></li>
<li><code>Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.1; WOW64; Trident/7.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; .NET4.0C; .NET4.0E)</code></li>
</ul></li>
<li><p>I&rsquo;ll just keep an eye on that one for now, as it only made a few hundred requests to dynamic discovery URLs</p></li>
<li><p>While it&rsquo;s not in the top ten, Baidu is one bot that seems to not give a fuck:</p>
<pre><code># cat /var/log/nginx/access.log /var/log/nginx/access.log.1 | grep &quot;7/Nov/2017&quot; | grep -c Baiduspider
8912
# cat /var/log/nginx/access.log /var/log/nginx/access.log.1 | grep &quot;7/Nov/2017&quot; | grep Baiduspider | grep -c -E &quot;GET /(browse|discover|search-filter)&quot;
2521
</code></pre></li>
<li><p>According to their documentation their bot <a href="http://www.baidu.com/search/robots_english.html">respects <code>robots.txt</code></a>, but I don&rsquo;t see this being the case</p></li>
<li><p>I think I will end up blocking Baidu as well&hellip;</p></li>
<li><p>Next is for me to look and see what was happening specifically at 3AM and 7AM when the server crashed</p></li>
<li><p>I should look in nginx access.log, rest.log, oai.log, and DSpace&rsquo;s dspace.log.2017-11-07</p></li>
<li><p>Here are the top IPs making requests to XMLUI from 2 to 8 AM:</p>
<pre><code># cat /var/log/nginx/access.log /var/log/nginx/access.log.1 | grep -E '07/Nov/2017:0[2-8]' | awk '{print $1}' | sort -n | uniq -c | sort -h | tail
279 66.249.66.91
373 65.49.68.199
446 68.180.229.254
470 104.196.152.243
470 197.210.168.174
598 207.46.13.103
603 157.55.39.161
637 207.46.13.80
703 207.46.13.36
724 66.249.66.90
</code></pre></li>
<li><p>Of those, most are Google, Bing, Yahoo, etc, except 63.143.42.244 and 63.143.42.242 which are Uptime Robot</p></li>
<li><p>Here are the top IPs making requests to REST from 2 to 8 AM:</p>
<pre><code># cat /var/log/nginx/rest.log /var/log/nginx/rest.log.1 | grep -E '07/Nov/2017:0[2-8]' | awk '{print $1}' | sort -n | uniq -c | sort -h | tail
8 207.241.229.237
10 66.249.66.90
16 104.196.152.243
25 41.60.238.61
26 157.55.39.161
27 207.46.13.103
27 207.46.13.80
31 207.46.13.36
1498 50.116.102.77
</code></pre></li>
<li><p>The OAI requests during that same time period are nothing to worry about:</p>
<pre><code># cat /var/log/nginx/oai.log /var/log/nginx/oai.log.1 | grep -E '07/Nov/2017:0[2-8]' | awk '{print $1}' | sort -n | uniq -c | sort -h | tail
1 66.249.66.92
4 66.249.66.90
6 68.180.229.254
</code></pre></li>
<li><p>The top IPs from dspace.log during the 2 to 8 AM period:</p>
<pre><code>$ grep -E '2017-11-07 0[2-8]' dspace.log.2017-11-07 | grep -o -E 'ip_addr=[0-9.]+' | sort -n | uniq -c | sort -h | tail
143 ip_addr=213.55.99.121
181 ip_addr=66.249.66.91
223 ip_addr=157.55.39.161
248 ip_addr=207.46.13.80
251 ip_addr=207.46.13.103
291 ip_addr=207.46.13.36
297 ip_addr=197.210.168.174
312 ip_addr=65.49.68.199
462 ip_addr=104.196.152.243
488 ip_addr=66.249.66.90
</code></pre></li>
<li><p>These aren&rsquo;t actually very interesting, as the top few are Google, CIAT, Bingbot, and a few other unknown scrapers</p></li>
<li><p>The number of requests isn&rsquo;t even that high to be honest</p></li>
<li><p>As I was looking at these logs I noticed another heavy user (124.17.34.59) that was not active during this time period, but made many requests today alone:</p>
<pre><code># zgrep -c 124.17.34.59 /var/log/nginx/access.log*
/var/log/nginx/access.log:22581
/var/log/nginx/access.log.1:0
/var/log/nginx/access.log.2.gz:14
/var/log/nginx/access.log.3.gz:0
/var/log/nginx/access.log.4.gz:0
/var/log/nginx/access.log.5.gz:3
/var/log/nginx/access.log.6.gz:0
/var/log/nginx/access.log.7.gz:0
/var/log/nginx/access.log.8.gz:0
/var/log/nginx/access.log.9.gz:1
</code></pre></li>
<li><p>The whois data shows the IP is from China, but the user agent doesn&rsquo;t really give any clues:</p>
<pre><code># grep 124.17.34.59 /var/log/nginx/access.log | awk -F'&quot; ' '{print $3}' | sort | uniq -c | sort -h
210 &quot;Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/61.0.3163.100 Safari/537.36&quot;
22610 &quot;Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.2; Win64; x64; Trident/7.0; LCTE)&quot;
</code></pre></li>
<li><p>A Google search for &ldquo;LCTE bot&rdquo; doesn&rsquo;t return anything interesting, but this <a href="https://stackoverflow.com/questions/42500881/what-is-lcte-in-user-agent">Stack Overflow discussion</a> references the lack of information</p></li>
<li><p>So basically after a few hours of looking at the log files I am not closer to understanding what is going on!</p></li>
<li><p>I do know that we want to block Baidu, though, as it does not respect <code>robots.txt</code></p></li>
<li><p>And as we speak Linode alerted that the outbound traffic rate is very high for the past two hours (about 12&ndash;14 hours)</p></li>
<li><p>At least for now it seems to be that new Chinese IP (124.17.34.59):</p>
<pre><code># grep -E &quot;07/Nov/2017:1[234]:&quot; /var/log/nginx/access.log | awk '{print $1}' | sort -n | uniq -c | sort -h | tail
198 207.46.13.103
203 207.46.13.80
205 207.46.13.36
218 157.55.39.161
249 45.5.184.221
258 45.5.187.130
386 66.249.66.90
410 197.210.168.174
1896 104.196.152.243
11005 124.17.34.59
</code></pre></li>
<li><p>Seems 124.17.34.59 is really downloading all our PDFs, compared to the next top active IPs during this time!</p>
<pre><code># grep -E &quot;07/Nov/2017:1[234]:&quot; /var/log/nginx/access.log | grep 124.17.34.59 | grep -c pdf
5948
# grep -E &quot;07/Nov/2017:1[234]:&quot; /var/log/nginx/access.log | grep 104.196.152.243 | grep -c pdf
0
</code></pre></li>
<li><p>About CIAT, I think I need to encourage them to specify a user agent string for their requests, because they are not reusing their Tomcat session and they are creating thousands of sessions per day</p></li>
<li><p>All CIAT requests vs unique ones:</p>
<pre><code>$ grep -Io -E 'session_id=[A-Z0-9]{32}:ip_addr=104.196.152.243' dspace.log.2017-11-07 | wc -l
3506
$ grep -Io -E 'session_id=[A-Z0-9]{32}:ip_addr=104.196.152.243' dspace.log.2017-11-07 | sort | uniq | wc -l
3506
</code></pre></li>
<li><p>I emailed CIAT about the session issue and the user agent issue, and told them they should not scrape the HTML contents of communities and should use the REST API instead</p></li>
<li><p>About Baidu, I found a link to their <a href="http://ziyuan.baidu.com/robots/">robots.txt tester tool</a></p></li>
<li><p>It seems like our robots.txt file is valid, and they claim to recognize that URLs like <code>/discover</code> should be forbidden (不允许, aka &ldquo;not allowed&rdquo;):</p></li>
</ul>
<p><img src="/cgspace-notes/2017/11/baidu-robotstxt.png" alt="Baidu robots.txt tester" /></p>
<ul>
<li><p>But they literally just made this request today:</p>
<pre><code>180.76.15.136 - - [07/Nov/2017:06:25:11 +0000] &quot;GET /discover?filtertype_0=crpsubject&amp;filter_relational_operator_0=equals&amp;filter_0=WATER%2C+LAND+AND+ECOSYSTEMS&amp;filtertype=subject&amp;filter_relational_operator=equals&amp;filter=WATER+RESOURCES HTTP/1.1&quot; 200 82265 &quot;-&quot; &quot;Mozilla/5.0 (compatible; Baiduspider/2.0; +http://www.baidu.com/search/spider.html)&quot;
</code></pre></li>
<li><p>Along with another thousand or so requests to URLs that are forbidden in robots.txt today alone:</p>
<pre><code># grep -c Baiduspider /var/log/nginx/access.log
3806
# grep Baiduspider /var/log/nginx/access.log | grep -c -E &quot;GET /(browse|discover|search-filter)&quot;
1085
</code></pre></li>
<li><p>I will think about blocking their IPs but they have 164 of them!</p>
<pre><code># grep &quot;Baiduspider/2.0&quot; /var/log/nginx/access.log | awk '{print $1}' | sort -n | uniq | wc -l
164
</code></pre></li>
</ul>
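<p>As an aside on the bingbot and Googlebot IPs above: a quick way to double check whether such an IP really belongs to Microsoft or Google is a reverse DNS lookup, since both document that their crawlers&rsquo; IPs reverse-resolve to their own domains (a sketch using two IPs from the lists above):</p>
<pre><code># genuine bingbot IPs resolve under search.msn.com
$ host 207.46.13.36
# genuine Googlebot IPs resolve under googlebot.com
$ host 66.249.66.90
# then forward-resolve the returned hostname and check that it matches the original IP
</code></pre>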
<h2 id="2017-11-08">2017-11-08</h2>
<ul>
<li>Linode sent several alerts last night about CPU usage and outbound traffic rate at 6:13PM</li>
<li>Linode sent another alert about CPU usage in the morning at 6:12AM</li>
<li><p>Jesus, the new Chinese IP (124.17.34.59) has downloaded 24,000 PDFs in the last 24 hours:</p>
<pre><code># cat /var/log/nginx/access.log /var/log/nginx/access.log.1 | grep -E &quot;0[78]/Nov/2017:&quot; | grep 124.17.34.59 | grep -v pdf.jpg | grep -c pdf
24981
</code></pre></li>
<li><p>This is about 20,000 Tomcat sessions:</p>
<pre><code>$ cat dspace.log.2017-11-07 dspace.log.2017-11-08 | grep -Io -E 'session_id=[A-Z0-9]{32}:ip_addr=124.17.34.59' | sort | uniq | wc -l
20733
</code></pre></li>
<li><p>I&rsquo;m getting really sick of this</p></li>
<li><p>Sisay re-uploaded the CIAT records that I had already corrected earlier this week, erasing all my corrections</p></li>
<li><p>I had to re-correct all the publishers, places, names, dates, etc and apply the changes on DSpace Test</p></li>
<li><p>Run system updates on DSpace Test and reboot the server</p></li>
<li><p>Magdalena had written to say that two of their Phase II project tags were missing on CGSpace, so I added them (<a href="https://github.com/ilri/DSpace/pull/346">#346</a>)</p></li>
<li><p>I figured out a way to use nginx&rsquo;s map function to assign a &ldquo;bot&rdquo; user agent to misbehaving clients who don&rsquo;t define a user agent</p></li>
<li><p>Most bots are automatically lumped into one generic session by <a href="https://tomcat.apache.org/tomcat-7.0-doc/config/valve.html#Crawler_Session_Manager_Valve">Tomcat&rsquo;s Crawler Session Manager Valve</a> but this only works if their user agent matches a pre-defined regular expression like <code>.*[bB]ot.*</code> (see the example Valve configuration after this list)</p></li>
<li><p>Some clients send thousands of requests without a user agent which ends up creating thousands of Tomcat sessions, wasting precious memory, CPU, and database resources in the process</p></li>
<li><p>Basically, we modify the nginx config to add a mapping with a modified user agent <code>$ua</code>:</p>
<pre><code>map $remote_addr $ua {
# 2017-11-08 Random Chinese host grabbing 20,000 PDFs
124.17.34.59 'ChineseBot';
default $http_user_agent;
}
</code></pre></li>
<li><p>If the client&rsquo;s address matches then the user agent is set, otherwise the default <code>$http_user_agent</code> variable is used</p></li>
<li><p>Then, in the server&rsquo;s <code>/</code> block we pass this header to Tomcat:</p>
<pre><code>proxy_pass http://tomcat_http;
proxy_set_header User-Agent $ua;
</code></pre></li>
<li><p>Note to self: the <code>$ua</code> variable won&rsquo;t show up in nginx access logs because the default <code>combined</code> log format doesn&rsquo;t show it, so don&rsquo;t run around pulling your hair out wondering why the modified user agents aren&rsquo;t showing in the logs!</p></li>
<li><p>If a client matching one of these IPs connects without a session, it will be assigned one by the Crawler Session Manager Valve</p></li>
<li><p>You can verify by cross referencing nginx&rsquo;s <code>access.log</code> and DSpace&rsquo;s <code>dspace.log.2017-11-08</code>, for example</p></li>
<li><p>I will deploy this on CGSpace later this week</p></li>
<li><p>I am interested to check how this affects the number of sessions used by the CIAT and Chinese bots (see above on <a href="#2017-11-07">2017-11-07</a> for example)</p></li>
<li><p>I merged the clickable thumbnails code to <code>5_x-prod</code> (<a href="https://github.com/ilri/DSpace/pull/347">#347</a>) and will deploy it later along with the new bot mapping stuff (and re-run the Ansible <code>nginx</code> and <code>tomcat</code> tags)</p></li>
<li><p>I was thinking about Baidu again and decided to see how many requests they have versus Google to URL paths that are explicitly forbidden in <code>robots.txt</code>:</p>
<pre><code># zgrep Baiduspider /var/log/nginx/access.log* | grep -c -E &quot;GET /(browse|discover|search-filter)&quot;
22229
# zgrep Googlebot /var/log/nginx/access.log* | grep -c -E &quot;GET /(browse|discover|search-filter)&quot;
0
</code></pre></li>
<li><p>It seems that they rarely even bother checking <code>robots.txt</code>, but Google does multiple times per day!</p>
<pre><code># zgrep Baiduspider /var/log/nginx/access.log* | grep -c robots.txt
14
# zgrep Googlebot /var/log/nginx/access.log* | grep -c robots.txt
1134
</code></pre></li>
<li><p>I have been looking for a reason to ban Baidu and this is definitely a good one</p></li>
<li><p>Disallowing <code>Baiduspider</code> in <code>robots.txt</code> probably won&rsquo;t work because this bot doesn&rsquo;t seem to respect the robot exclusion standard anyways!</p></li>
<li><p>I will whip up something in nginx later</p></li>
<li><p>Run system updates on CGSpace and reboot the server</p></li>
<li><p>Re-deploy latest <code>5_x-prod</code> branch on CGSpace and DSpace Test (includes the clickable thumbnails, CCAFS phase II project tags, and updated news text)</p></li>
</ul>
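<p>For reference, the Crawler Session Manager Valve is configured in Tomcat&rsquo;s <code>server.xml</code> and looks roughly like this (a sketch using the default crawler regex and timeout; our values may differ):</p>
<pre><code>&lt;Valve className=&quot;org.apache.catalina.valves.CrawlerSessionManagerValve&quot;
       crawlerUserAgents=&quot;.*[bB]ot.*|.*Yahoo! Slurp.*|.*Feedfetcher-Google.*&quot;
       sessionInactiveInterval=&quot;60&quot;/&gt;
</code></pre>
<p>With the nginx mapping above, requests from the misbehaving IP reach Tomcat with a user agent that matches <code>.*[bB]ot.*</code>, so the Valve folds them into one shared session instead of creating thousands.</p>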
<h2 id="2017-11-09">2017-11-09</h2>
<ul>
<li><p>Awesome, it seems my bot mapping stuff in nginx actually reduced the number of Tomcat sessions used by the CIAT scraper today, total requests and unique sessions:</p>
<pre><code># zcat -f -- /var/log/nginx/access.log.1 /var/log/nginx/access.log.2.gz | grep '09/Nov/2017' | grep -c 104.196.152.243
8956
$ grep 104.196.152.243 dspace.log.2017-11-09 | grep -o -E 'session_id=[A-Z0-9]{32}' | sort -n | uniq | wc -l
223
</code></pre></li>
<li><p>Versus the same stats for yesterday and the day before:</p>
<pre><code># zcat -f -- /var/log/nginx/access.log.1 /var/log/nginx/access.log.2.gz | grep '08/Nov/2017' | grep -c 104.196.152.243
10216
$ grep 104.196.152.243 dspace.log.2017-11-08 | grep -o -E 'session_id=[A-Z0-9]{32}' | sort -n | uniq | wc -l
2592
# zcat -f -- /var/log/nginx/access.log.2.gz /var/log/nginx/access.log.3.gz | grep '07/Nov/2017' | grep -c 104.196.152.243
8120
$ grep 104.196.152.243 dspace.log.2017-11-07 | grep -o -E 'session_id=[A-Z0-9]{32}' | sort -n | uniq | wc -l
3506
</code></pre></li>
<li><p>The number of sessions is over <em>ten times less</em>!</p></li>
<li><p>This gets me thinking, I wonder if I can use something like nginx&rsquo;s rate limiter to automatically change the user agent of clients who make too many requests</p></li>
<li><p>Perhaps using a combination of geo and map, like illustrated here: <a href="https://www.nginx.com/blog/rate-limiting-nginx/">https://www.nginx.com/blog/rate-limiting-nginx/</a></p></li>
</ul>
<h2 id="2017-11-11">2017-11-11</h2>
<ul>
<li>I was looking at the Google index and noticed there are 4,090 search results for dspace.ilri.org but only seven for mahider.ilri.org</li>
<li>Search with something like: inurl:dspace.ilri.org inurl:https</li>
<li>I want to get rid of those legacy domains eventually!</li>
</ul>
<h2 id="2017-11-12">2017-11-12</h2>
<ul>
<li>Update the <a href="https://github.com/ilri/rmg-ansible-public">Ansible infrastructure templates</a> to be a little more modular and flexible</li>
<li><p>Looking at the top client IPs on CGSpace so far this morning, even though it&rsquo;s only been eight hours:</p>
<pre><code># cat /var/log/nginx/access.log /var/log/nginx/access.log.1 | grep &quot;12/Nov/2017&quot; | awk '{print $1}' | sort -n | uniq -c | sort -h | tail
243 5.83.120.111
335 40.77.167.103
424 66.249.66.91
529 207.46.13.36
554 40.77.167.129
604 207.46.13.53
754 104.196.152.243
883 66.249.66.90
1150 95.108.181.88
1381 5.9.6.51
</code></pre></li>
<li><p>5.9.6.51 seems to be a Russian bot:</p>
<pre><code># grep 5.9.6.51 /var/log/nginx/access.log | tail -n 1
5.9.6.51 - - [12/Nov/2017:08:13:13 +0000] &quot;GET /handle/10568/16515/recent-submissions HTTP/1.1&quot; 200 5097 &quot;-&quot; &quot;Mozilla/5.0 (compatible; MegaIndex.ru/2.0; +http://megaindex.com/crawler)&quot;
</code></pre></li>
<li><p>What&rsquo;s amazing is that it seems to reuse its Java session across all requests:</p>
<pre><code>$ grep -c -E 'session_id=[A-Z0-9]{32}:ip_addr=5.9.6.51' dspace.log.2017-11-12
1558
$ grep 5.9.6.51 dspace.log.2017-11-12 | grep -o -E 'session_id=[A-Z0-9]{32}' | sort -n | uniq | wc -l
1
</code></pre></li>
<li><p>Bravo to MegaIndex.ru!</p></li>
<li><p>The same cannot be said for 95.108.181.88, which appears to be YandexBot, even though Tomcat&rsquo;s Crawler Session Manager valve regex should match &lsquo;YandexBot&rsquo;:</p>
<pre><code># grep 95.108.181.88 /var/log/nginx/access.log | tail -n 1
95.108.181.88 - - [12/Nov/2017:08:33:17 +0000] &quot;GET /bitstream/handle/10568/57004/GenebankColombia_23Feb2015.pdf HTTP/1.1&quot; 200 972019 &quot;-&quot; &quot;Mozilla/5.0 (compatible; YandexBot/3.0; +http://yandex.com/bots)&quot;
$ grep -c -E 'session_id=[A-Z0-9]{32}:ip_addr=95.108.181.88' dspace.log.2017-11-12
991
</code></pre></li>
<li><p>Move some items and collections on CGSpace for Peter Ballantyne, running <a href="https://gist.github.com/alanorth/e60b530ed4989df0c731afbb0c640515"><code>move_collections.sh</code></a> with the following configuration:</p>
<pre><code>10947/6 10947/1 10568/83389
10947/34 10947/1 10568/83389
10947/2512 10947/1 10568/83389
</code></pre></li>
<li><p>I explored nginx rate limits as a way to aggressively throttle Baidu bot which doesn&rsquo;t seem to respect disallowed URLs in robots.txt</p></li>
<li><p>There&rsquo;s an interesting <a href="https://www.nginx.com/blog/rate-limiting-nginx/">blog post from Nginx&rsquo;s team about rate limiting</a> as well as a <a href="https://gist.github.com/arosenhagen/8aaf5d7f94171778c0e9">clever use of mapping with rate limits</a></p></li>
<li><p>The solution <a href="https://github.com/ilri/rmg-ansible-public/commit/f0646991772660c505bea9c5ac586490e7c86156">I came up with</a> uses tricks from both of those (a rough sketch of the idea follows at the end of this list)</p></li>
<li><p>I deployed the limit on CGSpace and DSpace Test and it seems to work well:</p>
<pre><code>$ http --print h https://cgspace.cgiar.org/handle/10568/1 User-Agent:'Mozilla/5.0 (compatible; Baiduspider/2.0; +http://www.baidu.com/search/spider.html)'
HTTP/1.1 200 OK
Connection: keep-alive
Content-Encoding: gzip
Content-Language: en-US
Content-Type: text/html;charset=utf-8
Date: Sun, 12 Nov 2017 16:30:19 GMT
Server: nginx
Strict-Transport-Security: max-age=15768000
Transfer-Encoding: chunked
Vary: Accept-Encoding
X-Cocoon-Version: 2.2.0
X-Content-Type-Options: nosniff
X-Frame-Options: SAMEORIGIN
X-XSS-Protection: 1; mode=block
$ http --print h https://cgspace.cgiar.org/handle/10568/1 User-Agent:'Mozilla/5.0 (compatible; Baiduspider/2.0; +http://www.baidu.com/search/spider.html)'
HTTP/1.1 503 Service Temporarily Unavailable
Connection: keep-alive
Content-Length: 206
Content-Type: text/html
Date: Sun, 12 Nov 2017 16:30:21 GMT
Server: nginx
</code></pre></li>
<li><p>The first request works, second is denied with an HTTP 503!</p></li>
<li><p>I need to remember to check the Munin graphs for PostgreSQL and JVM next week to see how this affects them</p></li>
</ul>
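<p>The gist of the rate limiting configuration is to map abusive user agents to a non-empty key and feed that key to <code>limit_req_zone</code>, because requests whose key is empty are not limited at all (a simplified sketch of the idea, not the exact production config; the zone name and rate are illustrative):</p>
<pre><code>map $http_user_agent $bot_limit_key {
    default         '';
    ~*Baiduspider   $binary_remote_addr;
}

limit_req_zone $bot_limit_key zone=badbots:10m rate=12r/m;

server {
    location / {
        limit_req zone=badbots burst=5;
        # existing proxy_pass configuration stays as it is
    }
}
</code></pre>
<p>Normal clients get an empty key and are never limited, while Baidu is throttled per IP and receives HTTP 503 (nginx&rsquo;s default <code>limit_req_status</code>) once it exceeds the rate, which matches the test above.</p>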
<h2 id="2017-11-13">2017-11-13</h2>
<ul>
<li><p>At the end of the day I checked the logs and it really looks like the Baidu rate limiting is working, HTTP 200 vs 503:</p>
<pre><code># zcat -f -- /var/log/nginx/access.log.1 /var/log/nginx/access.log.2.gz | grep &quot;13/Nov/2017&quot; | grep &quot;Baiduspider&quot; | grep -c &quot; 200 &quot;
1132
# zcat -f -- /var/log/nginx/access.log.1 /var/log/nginx/access.log.2.gz | grep &quot;13/Nov/2017&quot; | grep &quot;Baiduspider&quot; | grep -c &quot; 503 &quot;
10105
</code></pre></li>
<li><p>Helping Sisay proof 47 records for IITA: <a href="https://dspacetest.cgiar.org/handle/10568/97029">https://dspacetest.cgiar.org/handle/10568/97029</a></p></li>
<li><p>From looking at the data in OpenRefine I found:</p>
<ul>
<li>Errors in <code>cg.authorship.types</code></li>
<li>Errors in <code>cg.coverage.country</code> (smart quote in &ldquo;COTE DIVOIRE&rdquo;, &ldquo;HAWAII&rdquo; is not a country)</li>
<li>Whitespace issues in some <code>cg.contributor.affiliation</code></li>
<li>Whitespace issues in some <code>cg.identifier.doi</code> fields and most values are using HTTP instead of HTTPS</li>
<li>Whitespace issues in some <code>dc.contributor.author</code> fields</li>
<li>Issue with invalid <code>dc.date.issued</code> value &ldquo;2011-3&rdquo;</li>
<li>Description fields are poorly copypasted</li>
<li>Whitespace issues in <code>dc.description.sponsorship</code></li>
<li>Lots of inconsistency in <code>dc.format.extent</code> (mixed dash style, periods at the end of values)</li>
<li>Whitespace errors in <code>dc.identifier.citation</code></li>
<li>Whitespace errors in <code>dc.subject</code></li>
<li>Whitespace errors in <code>dc.title</code></li>
</ul></li>
<li><p>After uploading and looking at the data in DSpace Test I saw more errors with CRPs, subjects (one item had four copies of all of its subjects, another had a &ldquo;.&rdquo; in it), affiliations, sponsors, etc.</p></li>
<li><p>Atmire responded to the <a href="https://tracker.atmire.com/tickets-cgiar-ilri/view-ticket?id=510">ticket about ORCID stuff</a> a few days ago, today I told them that I need to talk to Peter and the partners to see what we would like to do</p></li>
</ul>
<h2 id="2017-11-14">2017-11-14</h2>
<ul>
<li>Deploy some nginx configuration updates to CGSpace</li>
<li>They had been waiting on a branch for a few months and I think I just forgot about them</li>
<li>I have been running them on DSpace Test for a few days and haven&rsquo;t seen any issues there</li>
<li>Started testing DSpace 6.2 and a few things have changed</li>
<li><p>Now PostgreSQL needs <code>pgcrypto</code>:</p>
<pre><code>$ psql dspace6
dspace6=# CREATE EXTENSION pgcrypto;
</code></pre></li>
<li><p>Also, local settings are no longer in <code>build.properties</code>, they are now in <code>local.cfg</code></p></li>
<li><p>I&rsquo;m not sure if we can use separate profiles like we did before with <code>mvn -Denv=blah</code> to use blah.properties</p></li>
<li><p>It seems we need to use &ldquo;system properties&rdquo; to override settings, ie: <code>-Ddspace.dir=/Users/aorth/dspace6</code></p></li>
</ul>
<h2 id="2017-11-15">2017-11-15</h2>
<ul>
<li>Send Adam Hunt an invite to the DSpace Developers network on Yammer</li>
<li>He is the new head of communications at WLE, since Michael left</li>
<li>Merge changes to item view&rsquo;s wording of link metadata (<a href="https://github.com/ilri/DSpace/pull/348">#348</a>)</li>
</ul>
<h2 id="2017-11-17">2017-11-17</h2>
<ul>
<li>Uptime Robot said that CGSpace went down today and I see lots of <code>Timeout waiting for idle object</code> errors in the DSpace logs</li>
<li>I looked in PostgreSQL using <code>SELECT * FROM pg_stat_activity;</code> and saw that there were 73 active connections</li>
<li>After a few minutes the connections went down to 44 and CGSpace was kinda back up, it seems like Tsega restarted Tomcat</li>
<li><p>Looking at the REST and XMLUI log files, I don&rsquo;t see anything too crazy:</p>
<pre><code># cat /var/log/nginx/rest.log /var/log/nginx/rest.log.1 | grep &quot;17/Nov/2017&quot; | awk '{print $1}' | sort -n | uniq -c | sort -h | tail
13 66.249.66.223
14 207.46.13.36
17 207.46.13.137
22 207.46.13.23
23 66.249.66.221
92 66.249.66.219
187 104.196.152.243
1400 70.32.83.92
1503 50.116.102.77
6037 45.5.184.196
# cat /var/log/nginx/access.log /var/log/nginx/access.log.1 | grep &quot;17/Nov/2017&quot; | awk '{print $1}' | sort -n | uniq -c | sort -h | tail
325 139.162.247.24
354 66.249.66.223
422 207.46.13.36
434 207.46.13.23
501 207.46.13.137
647 66.249.66.221
662 34.192.116.178
762 213.55.99.121
1867 104.196.152.243
2020 66.249.66.219
</code></pre></li>
<li><p>I need to look into using JMX to analyze active sessions I think, rather than looking at log files</p></li>
<li><p>After adding appropriate <a href="https://geekflare.com/enable-jmx-tomcat-to-monitor-administer/">JMX listener options to Tomcat&rsquo;s JAVA_OPTS</a> and restarting Tomcat, I can connect remotely using an SSH dynamic port forward (SOCKS) on port 7777 for example, and then start jconsole locally like:</p>
<pre><code>$ jconsole -J-DsocksProxyHost=localhost -J-DsocksProxyPort=7777 service:jmx:rmi:///jndi/rmi://localhost:9000/jmxrmi -J-DsocksNonProxyHosts=
</code></pre></li>
<li><p>Looking at the MBeans you can drill down in Catalina→Manager→webapp→localhost→Attributes and see active sessions, etc</p></li>
<li><p>I want to enable JMX listener on CGSpace but I need to do some more testing on DSpace Test and see if it causes any performance impact, for example</p></li>
<li><p>If I hit the server with some requests as a normal user I see the session counter increase, but if I specify a bot user agent then the sessions seem to be reused (meaning the Crawler Session Manager is working)</p></li>
<li><p>Here is the Jconsole screen after looping <code>http --print Hh https://dspacetest.cgiar.org/handle/10568/1</code> for a few minutes:</p></li>
</ul>
<p><img src="/cgspace-notes/2017/11/jconsole-sessions.png" alt="Jconsole sessions for XMLUI" /></p>
<ul>
<li>Switch DSpace Test to using the G1GC for JVM so I can see what the JVM graph looks like eventually, and start evaluating it for production (flag sketched below)</li>
</ul>
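<p>The switch itself is just a garbage collector flag in Tomcat&rsquo;s <code>JAVA_OPTS</code> (a sketch; heap sizes and the other options stay as they were):</p>
<pre><code>JAVA_OPTS=&quot;$JAVA_OPTS -XX:+UseG1GC&quot;
</code></pre>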
<h2 id="2017-11-19">2017-11-19</h2>
<ul>
<li>Linode sent an alert that CGSpace was using a lot of CPU around 4&ndash;6 AM</li>
<li><p>Looking in the nginx access logs I see the most active XMLUI users between 4 and 6 AM:</p>
<pre><code># cat /var/log/nginx/access.log /var/log/nginx/access.log.1 | grep -E &quot;19/Nov/2017:0[456]&quot; | awk '{print $1}' | sort -n | uniq -c | sort -h | tail
111 66.249.66.155
171 5.9.6.51
188 54.162.241.40
229 207.46.13.23
233 207.46.13.137
247 40.77.167.6
251 207.46.13.36
275 68.180.229.254
325 104.196.152.243
1610 66.249.66.153
</code></pre></li>
<li><p>66.249.66.153 appears to be Googlebot:</p>
<pre><code>66.249.66.153 - - [19/Nov/2017:06:26:01 +0000] &quot;GET /handle/10568/2203 HTTP/1.1&quot; 200 6309 &quot;-&quot; &quot;Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)&quot;
</code></pre></li>
<li><p>We know Googlebot is persistent but behaves well, so I guess it was just a coincidence that it came at a time when we had other traffic and server activity</p></li>
<li><p>In related news, I see an Atmire update process going for many hours and responsible for hundreds of thousands of log entries (two thirds of all log entries)</p>
<pre><code>$ wc -l dspace.log.2017-11-19
388472 dspace.log.2017-11-19
$ grep -c com.atmire.utils.UpdateSolrStatsMetadata dspace.log.2017-11-19
267494
</code></pre></li>
<li><p>WTF is this process doing every day, and for so many hours?</p></li>
<li><p>In unrelated news, when I was looking at the DSpace logs I saw a bunch of errors like this:</p>
<pre><code>2017-11-19 03:00:32,806 INFO org.apache.pdfbox.pdfparser.PDFParser @ Document is encrypted
2017-11-19 03:00:32,807 ERROR org.apache.pdfbox.filter.FlateFilter @ FlateFilter: stop reading corrupt stream due to a DataFormatException
</code></pre></li>
<li><p>It&rsquo;s been a few days since I enabled the G1GC on DSpace Test and the JVM graph definitely changed:</p></li>
</ul>
<p><img src="/cgspace-notes/2017/11/tomcat-jvm-g1gc.png" alt="Tomcat G1GC" /></p>
<h2 id="2017-11-20">2017-11-20</h2>
<ul>
<li>I found <a href="https://www.cakesolutions.net/teamblogs/low-pause-gc-on-the-jvm">an article about JVM tuning</a> that gives some pointers how to enable logging and tools to analyze logs for you</li>
<li>Also notes on <a href="https://blog.gceasy.io/2016/11/15/rotating-gc-log-files/">rotating GC logs</a> (example flags after this list)</li>
<li>I decided to switch DSpace Test back to the CMS garbage collector because it is designed for low pauses and high throughput (like G1GC!) and because we haven&rsquo;t even tried to monitor or tune it</li>
</ul>
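<p>For reference, the switch back to CMS plus the standard HotSpot (Java 8) flags for GC logging with rotation look roughly like this (a sketch; the log path and file sizes are illustrative):</p>
<pre><code>JAVA_OPTS=&quot;$JAVA_OPTS -XX:+UseConcMarkSweepGC \
  -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:/var/log/tomcat7/gc.log \
  -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=5 -XX:GCLogFileSize=20M&quot;
</code></pre>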
<h2 id="2017-11-21">2017-11-21</h2>
<ul>
<li><p>Magdalena was having problems logging in via LDAP and it seems to be a problem with the CGIAR LDAP server:</p>
<pre><code>2017-11-21 11:11:09,621 WARN org.dspace.authenticate.LDAPAuthentication @ anonymous:session_id=2FEC0E5286C17B6694567FFD77C3171C:ip_addr=77.241.141.58:ldap_authentication:type=failed_auth javax.naming.CommunicationException\colon; simple bind failed\colon; svcgroot2.cgiarad.org\colon;3269 [Root exception is javax.net.ssl.SSLHandshakeException\colon; sun.security.validator.ValidatorException\colon; PKIX path validation failed\colon; java.security.cert.CertPathValidatorException\colon; validity check failed]
</code></pre></li>
</ul>
<h2 id="2017-11-22">2017-11-22</h2>
<ul>
<li>Linode sent an alert that the CPU usage on the CGSpace server was very high around 4 to 6 AM</li>
<li><p>The logs don&rsquo;t show anything particularly abnormal between those hours:</p>
<pre><code># cat /var/log/nginx/access.log /var/log/nginx/access.log.1 | grep -E &quot;22/Nov/2017:0[456]&quot; | awk '{print $1}' | sort -n | uniq -c | sort -h | tail
136 31.6.77.23
174 68.180.229.254
217 66.249.66.91
256 157.55.39.79
268 54.144.57.183
281 207.46.13.137
282 207.46.13.36
290 207.46.13.23
696 66.249.66.90
707 104.196.152.243
</code></pre></li>
<li><p>I haven&rsquo;t seen 54.144.57.183 before, it is apparently the CCBot from commoncrawl.org</p></li>
<li><p>In other news, it looks like the JVM garbage collection pattern is back to its standard jigsaw pattern after switching back to CMS a few days ago:</p></li>
</ul>
<p><img src="/cgspace-notes/2017/11/tomcat-jvm-cms.png" alt="Tomcat JVM with CMS GC" /></p>
<h2 id="2017-11-23">2017-11-23</h2>
<ul>
<li>Linode alerted again that CPU usage was high on CGSpace from 4:13 to 6:13 AM</li>
<li><p>I see a lot of Googlebot (66.249.66.90) in the XMLUI access logs</p>
<pre><code># cat /var/log/nginx/access.log /var/log/nginx/access.log.1 | grep -E &quot;23/Nov/2017:0[456]&quot; | awk '{print $1}' | sort -n | uniq -c | sort -h | tail
88 66.249.66.91
140 68.180.229.254
155 54.196.2.131
182 54.224.164.166
301 157.55.39.79
315 207.46.13.36
331 207.46.13.23
358 207.46.13.137
565 104.196.152.243
1570 66.249.66.90
</code></pre></li>
<li><p>&hellip; and the usual REST scrapers from CIAT (45.5.184.196) and CCAFS (70.32.83.92):</p>
<pre><code># cat /var/log/nginx/rest.log /var/log/nginx/rest.log.1 | grep -E &quot;23/Nov/2017:0[456]&quot; | awk '{print $1}' | sort -n | uniq -c | sort -h | tail
5 190.120.6.219
6 104.198.9.108
14 104.196.152.243
21 112.134.150.6
22 157.55.39.79
22 207.46.13.137
23 207.46.13.36
26 207.46.13.23
942 45.5.184.196
3995 70.32.83.92
</code></pre></li>
<li><p>These IPs crawling the REST API don&rsquo;t specify user agents and I&rsquo;d assume they are creating many Tomcat sessions</p></li>
<li><p>I would catch them in nginx to assign a &ldquo;bot&rdquo; user agent to them so that the Tomcat Crawler Session Manager valve could deal with them, but they don&rsquo;t seem to create any sessions really, at least not in the dspace.log:</p>
<pre><code>$ grep 70.32.83.92 dspace.log.2017-11-23 | grep -o -E 'session_id=[A-Z0-9]{32}' | sort -n | uniq | wc -l
2
</code></pre></li>
<li><p>I&rsquo;m wondering if REST works differently, or just doesn&rsquo;t log these sessions?</p></li>
<li><p>I wonder if they are measurable via JMX MBeans?</p></li>
<li><p>I did some tests locally and I don&rsquo;t see the sessionCounter incrementing after making requests to REST, but it does with XMLUI and OAI</p></li>
<li><p>I came across some interesting PostgreSQL tuning advice for SSDs: <a href="https://amplitude.engineering/how-a-single-postgresql-config-change-improved-slow-query-performance-by-50x-85593b8991b0">https://amplitude.engineering/how-a-single-postgresql-config-change-improved-slow-query-performance-by-50x-85593b8991b0</a></p></li>
<li><p>Apparently setting <code>random_page_cost</code> to 1 is &ldquo;common&rdquo; advice for systems running PostgreSQL on SSD (the default is 4)</p></li>
<li><p>So I deployed this on DSpace Test (sketched after this list) and will check the Munin PostgreSQL graphs in a few days to see if anything changes</p></li>
</ul>
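<p>The setting can be changed without restarting PostgreSQL, roughly like this (a sketch; it could equally be set directly in <code>postgresql.conf</code> and reloaded):</p>
<pre><code>postgres=# ALTER SYSTEM SET random_page_cost = 1;
postgres=# SELECT pg_reload_conf();
</code></pre>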
<h2 id="2017-11-24">2017-11-24</h2>
<ul>
<li>It&rsquo;s too early to tell for sure, but after I made the <code>random_page_cost</code> change on DSpace Test&rsquo;s PostgreSQL yesterday the number of connections dropped drastically:</li>
</ul>
<p><img src="/cgspace-notes/2017/11/postgres-connections-week.png" alt="PostgreSQL connections after tweak (week)" /></p>
<ul>
<li>There have been other temporary drops before, but if I look at the past month and actually the whole year, the trend is that connections are four or five times higher on average:</li>
</ul>
<p><img src="/cgspace-notes/2017/11/postgres-connections-month.png" alt="PostgreSQL connections after tweak (month)" /></p>
<ul>
<li>I just realized that we&rsquo;re not logging access requests to other vhosts on CGSpace, so it&rsquo;s possible I have no idea that we&rsquo;re getting slammed at 4AM on another domain that we&rsquo;re just silently redirecting to cgspace.cgiar.org</li>
<li>I&rsquo;ve enabled logging on the CGIAR Library on CGSpace so I can check to see if there are many requests there</li>
<li>In just a few seconds I already see a dozen requests from Googlebot (of course they get HTTP 301 redirects to cgspace.cgiar.org)</li>
<li><p>I also noticed that CGNET appears to be monitoring the old domain every few minutes:</p>
<pre><code>192.156.137.184 - - [24/Nov/2017:20:33:58 +0000] &quot;HEAD / HTTP/1.1&quot; 301 0 &quot;-&quot; &quot;curl/7.19.7 (x86_64-redhat-linux-gnu) libcurl/7.19.7 NSS/3.27.1 zlib/1.2.3 libidn/1.18 libssh2/1.4.2&quot;
</code></pre></li>
<li><p>I should probably tell CGIAR people to have CGNET stop that</p></li>
</ul>
<h2 id="2017-11-26">2017-11-26</h2>
<ul>
<li>Linode alerted that CGSpace server was using too much CPU from 5:18 to 7:18 AM</li>
<li><p>Yet another mystery because the load for all domains looks fine at that time:</p>
<pre><code># cat /var/log/nginx/access.log /var/log/nginx/access.log.1 /var/log/nginx/library-access.log /var/log/nginx/library-access.log.1 | grep -E &quot;26/Nov/2017:0[567]&quot; | awk '{print $1}' | sort -n | uniq -c | sort -h | tail
190 66.249.66.83
195 104.196.152.243
220 40.77.167.82
246 207.46.13.137
247 68.180.229.254
257 157.55.39.214
289 66.249.66.91
298 157.55.39.206
379 66.249.66.70
1855 66.249.66.90
</code></pre></li>
</ul>
<h2 id="2017-11-29">2017-11-29</h2>
<ul>
<li>Linode alerted that CGSpace was using 279% CPU from 6 to 8 AM this morning</li>
<li>About an hour later Uptime Robot said that the server was down</li>
<li><p>Here are all the top XMLUI and REST users from today:</p>
<pre><code># cat /var/log/nginx/rest.log /var/log/nginx/rest.log.1 /var/log/nginx/access.log /var/log/nginx/access.log.1 /var/log/nginx/library-access.log /var/log/nginx/library-access.log.1 | grep -E &quot;29/Nov/2017&quot; | awk '{print $1}' | sort -n | uniq -c | sort -h | tail
540 66.249.66.83
659 40.77.167.36
663 157.55.39.214
681 157.55.39.206
733 157.55.39.158
850 66.249.66.70
1311 66.249.66.90
1340 104.196.152.243
4008 70.32.83.92
6053 45.5.184.196
</code></pre></li>
<li><p>PostgreSQL activity shows 69 connections</p></li>
<li><p>I don&rsquo;t have time to troubleshoot more as I&rsquo;m in Nairobi working on the HPC so I just restarted Tomcat for now</p></li>
<li><p>A few hours later Uptime Robot says the server is down again</p></li>
<li><p>I don&rsquo;t see much activity in the logs but there are 87 PostgreSQL connections</p></li>
<li><p>But shit, there were 10,000 unique Tomcat sessions today:</p>
<pre><code>$ cat dspace.log.2017-11-29 | grep -o -E 'session_id=[A-Z0-9]{32}' | sort -n | uniq | wc -l
10037
</code></pre></li>
<li><p>Although maybe that&rsquo;s not much, as the previous two days had more:</p>
<pre><code>$ cat dspace.log.2017-11-27 | grep -o -E 'session_id=[A-Z0-9]{32}' | sort -n | uniq | wc -l
12377
$ cat dspace.log.2017-11-28 | grep -o -E 'session_id=[A-Z0-9]{32}' | sort -n | uniq | wc -l
16984
</code></pre></li>
<li><p>I think we just need to start increasing the number of allowed PostgreSQL connections instead of fighting this, as it&rsquo;s the most common source of crashes we have</p></li>
<li><p>I will bump DSpace&rsquo;s <code>db.maxconnections</code> from 60 to 90, and PostgreSQL&rsquo;s <code>max_connections</code> from 183 to 273 (which is using my loose formula of 90 * webapps + 3; see the sketch after this list)</p></li>
<li><p>I really need to figure out how to get DSpace to use a PostgreSQL connection pool</p></li>
</ul>
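<p>Roughly what the two changes look like (a sketch; the exact file locations depend on how DSpace and PostgreSQL are deployed):</p>
<pre><code># dspace.cfg
db.maxconnections = 90

# postgresql.conf, using the loose formula above: 90 * 3 webapps + 3
max_connections = 273
</code></pre>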
<h2 id="2017-11-30">2017-11-30</h2>
<ul>
<li>Linode alerted about high CPU usage on CGSpace again around 6 to 8 AM</li>
<li>Then Uptime Robot said CGSpace was down a few minutes later, but it resolved itself I think (or Tsega restarted Tomcat, I don&rsquo;t know)</li>
</ul>
</article>
</div> <!-- /.blog-main -->
<aside class="col-sm-3 ml-auto blog-sidebar">
<section class="sidebar-module">
<h4>Recent Posts</h4>
<ol class="list-unstyled">
<li><a href="/cgspace-notes/2019-08/">August, 2019</a></li>
<li><a href="/cgspace-notes/posts/">Posts</a></li>
<li><a href="/cgspace-notes/2019-07/">July, 2019</a></li>
<li><a href="/cgspace-notes/2019-06/">June, 2019</a></li>
<li><a href="/cgspace-notes/2019-05/">May, 2019</a></li>
</ol>
</section>
<section class="sidebar-module">
<h4>Links</h4>
<ol class="list-unstyled">
<li><a href="https://cgspace.cgiar.org">CGSpace</a></li>
<li><a href="https://dspacetest.cgiar.org">DSpace Test</a></li>
<li><a href="https://github.com/ilri/DSpace">CGSpace @ GitHub</a></li>
</ol>
</section>
</aside>
</div> <!-- /.row -->
</div> <!-- /.container -->
<footer class="blog-footer">
<p>
Blog template created by <a href="https://twitter.com/mdo">@mdo</a>, ported to Hugo by <a href='https://twitter.com/mralanorth'>@mralanorth</a>.
</p>
<p>
<a href="#">Back to top</a>
</p>
</footer>
</body>
</html>