Update notes for 2019-03-25

This commit is contained in:
Alan Orth 2019-03-25 23:47:00 +02:00
parent b30dab348d
commit b8af480098
Signed by: alanorth
GPG Key ID: 0FB860CC9C45B1B9
3 changed files with 147 additions and 8 deletions

View File

@@ -734,5 +734,70 @@ $ grep 'Can not load requested doc' cocoon.log.2019-03-25 | grep -oE '2019-03-25
- By default you get the Commons DBCP one unless you specify factory `org.apache.tomcat.jdbc.pool.DataSourceFactory`
- Now I see all my interceptor settings etc in jconsole, where I didn't see them before (also a new `tomcat.jdbc` mbean)!
- No wonder our settings didn't quite match the ones in the [Tomcat DBCP Pool docs](https://tomcat.apache.org/tomcat-7.0-doc/jdbc-pool.html)
- Uptime Robot reported that CGSpace went down and I see the load is very high
- The top IPs around the time in the nginx API and web logs were:
```
# zcat --force /var/log/nginx/{oai,rest,statistics}.log /var/log/nginx/{oai,rest,statistics}.log.1 | grep -E "25/Mar/2019:(18|19|20|21)" | awk '{print $1}' | sort | uniq -c | sort -n | tail -n 10
9 190.252.43.162
12 157.55.39.140
18 157.55.39.54
21 66.249.66.211
27 40.77.167.185
29 138.220.87.165
30 157.55.39.168
36 157.55.39.9
50 52.23.239.229
2380 45.5.186.2
# zcat --force /var/log/nginx/{access,error,library-access}.log /var/log/nginx/{access,error,library-access}.log.1 | grep -E "25/Mar/2019:(18|19|20|21)" | awk '{print $1}' | sort | uniq -c | sort -n | tail -n 10
354 18.195.78.144
363 190.216.179.100
386 40.77.167.185
484 157.55.39.168
507 157.55.39.9
536 2a01:4f8:140:3192::2
1123 66.249.66.211
1186 93.179.69.74
1222 35.174.184.209
1720 2a01:4f8:13b:1296::2
```
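- In cases like this I usually check who owns an unfamiliar address with `whois`, for example the top one here (a sketch; the grep just pulls the org/name fields, which vary by registry):

```
$ whois 45.5.186.2 | grep -iE '^(org-?name|netname|descr)'
```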
- The IPs look pretty normal except we've never seen `93.179.69.74` before, and it uses the following user agent:
```
Mozilla/5.0 (Windows NT 5.1) AppleWebKit/535.1 (KHTML, like Gecko) Chrome/13.0.782.20 Safari/535.1
```
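- For the record, this is roughly how to pull the user agents for a single IP out of the nginx access logs (a sketch assuming the default combined log format, where the user agent is the sixth quote-delimited field):

```
# zcat --force /var/log/nginx/access.log /var/log/nginx/access.log.1 | grep '93.179.69.74' | awk -F'"' '{print $6}' | sort | uniq -c | sort -rn
```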
- Surprisingly they are re-using their Tomcat session:
```
$ grep -o -E 'session_id=[A-Z0-9]{32}:ip_addr=93.179.69.74' dspace.log.2019-03-25 | sort | uniq | wc -l
1
```
- That's weird because the total number of sessions today seems low compared to recent days:
```
$ grep -o -E 'session_id=[A-Z0-9]{32}' dspace.log.2019-03-25 | sort -u | wc -l
5657
$ grep -o -E 'session_id=[A-Z0-9]{32}' dspace.log.2019-03-24 | sort -u | wc -l
17710
$ grep -o -E 'session_id=[A-Z0-9]{32}' dspace.log.2019-03-23 | sort -u | wc -l
17179
$ grep -o -E 'session_id=[A-Z0-9]{32}' dspace.log.2019-03-22 | sort -u | wc -l
7904
```
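- The same comparison as a one-liner, for future reference (a sketch using bash brace expansion):

```
$ for day in 2019-03-{22..25}; do echo -n "$day: "; grep -oE 'session_id=[A-Z0-9]{32}' dspace.log.$day | sort -u | wc -l; done
```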
- PostgreSQL seems to be pretty busy:
```
$ psql -c 'select * from pg_stat_activity' | grep -o -E '(dspaceWeb|dspaceApi|dspaceCli)' | sort | uniq -c
11 dspaceApi
10 dspaceCli
67 dspaceWeb
```
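- It might be more telling to break those down by state too, since a pile of connections stuck in `idle in transaction` would point at a leak; something like this (a sketch, using `pg_stat_activity` columns present in PostgreSQL 9.2+):

```
$ psql -c 'SELECT application_name, state, count(*) FROM pg_stat_activity GROUP BY application_name, state ORDER BY count(*) DESC;'
```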
- I restarted Tomcat and deployed the new Tomcat JDBC settings on CGSpace since I had to restart the server anyway
- I need to watch this carefully, though, because I've read in a few places that Tomcat's JDBC pool doesn't track statements and might leak memory if an application doesn't close statements before a connection is returned to the pool (one quick check is sketched below)
- According to Uptime Robot, the server was up and down a few more times over the next hour, so I restarted Tomcat again
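- One quick check is to count occurrences of the `PSQLException` from earlier today in the DSpace log, since a rising count after the pool change would be a red flag (a sketch):

```
$ grep -c 'This statement has been closed' dspace.log.2019-03-25
```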
<!-- vim: set sw=2 ts=2: -->

View File

@@ -25,7 +25,7 @@ I think I will need to ask Udana to re-copy and paste the abstracts with more ca
<meta property="og:type" content="article" />
<meta property="og:url" content="https://alanorth.github.io/cgspace-notes/2019-03/" />
<meta property="article:published_time" content="2019-03-01T12:16:30&#43;01:00"/>
-<meta property="article:modified_time" content="2019-03-25T12:39:22&#43;02:00"/>
+<meta property="article:modified_time" content="2019-03-25T12:59:24&#43;02:00"/>
<meta name="twitter:card" content="summary"/>
<meta name="twitter:title" content="March, 2019"/>
@@ -55,9 +55,9 @@ I think I will need to ask Udana to re-copy and paste the abstracts with more ca
"@type": "BlogPosting",
"headline": "March, 2019",
"url": "https://alanorth.github.io/cgspace-notes/2019-03/",
-"wordCount": "4722",
+"wordCount": "5067",
"datePublished": "2019-03-01T12:16:30&#43;01:00",
-"dateModified": "2019-03-25T12:39:22&#43;02:00",
+"dateModified": "2019-03-25T12:59:24&#43;02:00",
"author": {
"@type": "Person",
"name": "Alan Orth"
@@ -1000,6 +1000,80 @@ org.postgresql.util.PSQLException: This statement has been closed.
<li>Now I see all my interceptor settings etc in jconsole, where I didn&rsquo;t see them before (also a new <code>tomcat.jdbc</code> mbean)!</li>
<li>No wonder our settings didn&rsquo;t quite match the ones in the <a href="https://tomcat.apache.org/tomcat-7.0-doc/jdbc-pool.html">Tomcat DBCP Pool docs</a></li>
</ul></li>
<li>Uptime Robot reported that CGSpace went down and I see the load is very high</li>
<li>The top IPs around the time in the nginx API and web logs were:</li>
</ul>
<pre><code># zcat --force /var/log/nginx/{oai,rest,statistics}.log /var/log/nginx/{oai,rest,statistics}.log.1 | grep -E &quot;25/Mar/2019:(18|19|20|21)&quot; | awk '{print $1}' | sort | uniq -c | sort -n | tail -n 10
9 190.252.43.162
12 157.55.39.140
18 157.55.39.54
21 66.249.66.211
27 40.77.167.185
29 138.220.87.165
30 157.55.39.168
36 157.55.39.9
50 52.23.239.229
2380 45.5.186.2
# zcat --force /var/log/nginx/{access,error,library-access}.log /var/log/nginx/{access,error,library-access}.log.1 | grep -E &quot;25/Mar/2019:(18|19|20|21)&quot; | awk '{print $1}' | sort | uniq -c | sort -n | tail -n 10
354 18.195.78.144
363 190.216.179.100
386 40.77.167.185
484 157.55.39.168
507 157.55.39.9
536 2a01:4f8:140:3192::2
1123 66.249.66.211
1186 93.179.69.74
1222 35.174.184.209
1720 2a01:4f8:13b:1296::2
</code></pre>
<ul>
<li>The IPs look pretty normal except we&rsquo;ve never seen <code>93.179.69.74</code> before, and it uses the following user agent:</li>
</ul>
<pre><code>Mozilla/5.0 (Windows NT 5.1) AppleWebKit/535.1 (KHTML, like Gecko) Chrome/13.0.782.20 Safari/535.1
</code></pre>
<ul>
<li>Surprisingly they are re-using their Tomcat session:</li>
</ul>
<pre><code>$ grep -o -E 'session_id=[A-Z0-9]{32}:ip_addr=93.179.69.74' dspace.log.2019-03-25 | sort | uniq | wc -l
1
</code></pre>
<ul>
<li>That&rsquo;s weird because the total number of sessions today seems low compared to recent days:</li>
</ul>
<pre><code>$ grep -o -E 'session_id=[A-Z0-9]{32}' dspace.log.2019-03-25 | sort -u | wc -l
5657
$ grep -o -E 'session_id=[A-Z0-9]{32}' dspace.log.2019-03-24 | sort -u | wc -l
17710
$ grep -o -E 'session_id=[A-Z0-9]{32}' dspace.log.2019-03-23 | sort -u | wc -l
17179
$ grep -o -E 'session_id=[A-Z0-9]{32}' dspace.log.2019-03-22 | sort -u | wc -l
7904
</code></pre>
<ul>
<li>PostgreSQL seems to be pretty busy:</li>
</ul>
<pre><code>$ psql -c 'select * from pg_stat_activity' | grep -o -E '(dspaceWeb|dspaceApi|dspaceCli)' | sort | uniq -c
11 dspaceApi
10 dspaceCli
67 dspaceWeb
</code></pre>
<ul>
<li>I restarted Tomcat and deployed the new Tomcat JDBC settings on CGSpace since I had to restart the server anyway
<ul>
<li>I need to watch this carefully, though, because I&rsquo;ve read in a few places that Tomcat&rsquo;s JDBC pool doesn&rsquo;t track statements and might leak memory if an application doesn&rsquo;t close statements before a connection is returned to the pool</li>
</ul></li>
<li>According to Uptime Robot, the server was up and down a few more times over the next hour, so I restarted Tomcat again</li>
</ul>
<!-- vim: set sw=2 ts=2: -->

View File

@@ -4,7 +4,7 @@
<url>
<loc>https://alanorth.github.io/cgspace-notes/2019-03/</loc>
-<lastmod>2019-03-25T12:39:22+02:00</lastmod>
+<lastmod>2019-03-25T12:59:24+02:00</lastmod>
</url>
<url>
@@ -214,7 +214,7 @@
<url>
<loc>https://alanorth.github.io/cgspace-notes/</loc>
-<lastmod>2019-03-25T12:39:22+02:00</lastmod>
+<lastmod>2019-03-25T12:59:24+02:00</lastmod>
<priority>0</priority>
</url>
@@ -225,7 +225,7 @@
<url>
<loc>https://alanorth.github.io/cgspace-notes/tags/notes/</loc>
-<lastmod>2019-03-25T12:39:22+02:00</lastmod>
+<lastmod>2019-03-25T12:59:24+02:00</lastmod>
<priority>0</priority>
</url>
@@ -237,13 +237,13 @@
<url>
<loc>https://alanorth.github.io/cgspace-notes/posts/</loc>
-<lastmod>2019-03-25T12:39:22+02:00</lastmod>
+<lastmod>2019-03-25T12:59:24+02:00</lastmod>
<priority>0</priority>
</url>
<url>
<loc>https://alanorth.github.io/cgspace-notes/tags/</loc>
-<lastmod>2019-03-25T12:39:22+02:00</lastmod>
+<lastmod>2019-03-25T12:59:24+02:00</lastmod>
<priority>0</priority>
</url>