- By default you get the Commons DBCP one unless you specify the factory `org.apache.tomcat.jdbc.pool.DataSourceFactory` (example resource definition sketched below)
- Now I see all my interceptor settings etc. in jconsole, where I didn't see them before (also a new `tomcat.jdbc` mbean)!
- No wonder our settings didn't quite match the ones in the [Tomcat DBCP Pool docs](https://tomcat.apache.org/tomcat-7.0-doc/jdbc-pool.html)
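
For reference, this is roughly what the resource definition looks like when selecting the Tomcat JDBC pool via the `factory` attribute; the resource name, URL, credentials, and sizing values below are placeholders, not our actual settings:

```
<!-- minimal sketch with placeholder values; the factory attribute is what selects the Tomcat JDBC pool instead of Commons DBCP -->
<Resource name="jdbc/dspace" auth="Container" type="javax.sql.DataSource"
          factory="org.apache.tomcat.jdbc.pool.DataSourceFactory"
          driverClassName="org.postgresql.Driver"
          url="jdbc:postgresql://localhost:5432/dspace"
          username="dspace" password="CHANGEME"
          maxActive="50" maxIdle="10"/>
```
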
- Uptime Robot reported that CGSpace went down and I see the load is very high
- The top IPs around that time in the nginx API and web logs were:

```
# zcat --force /var/log/nginx/{oai,rest,statistics}.log /var/log/nginx/{oai,rest,statistics}.log.1 | grep -E "25/Mar/2019:(18|19|20|21)" | awk '{print $1}' | sort | uniq -c | sort -n | tail -n 10
      9 190.252.43.162
     12 157.55.39.140
     18 157.55.39.54
     21 66.249.66.211
     27 40.77.167.185
     29 138.220.87.165
     30 157.55.39.168
     36 157.55.39.9
     50 52.23.239.229
   2380 45.5.186.2
# zcat --force /var/log/nginx/{access,error,library-access}.log /var/log/nginx/{access,error,library-access}.log.1 | grep -E "25/Mar/2019:(18|19|20|21)" | awk '{print $1}' | sort | uniq -c | sort -n | tail -n 10
    354 18.195.78.144
    363 190.216.179.100
    386 40.77.167.185
    484 157.55.39.168
    507 157.55.39.9
    536 2a01:4f8:140:3192::2
   1123 66.249.66.211
   1186 93.179.69.74
   1222 35.174.184.209
   1720 2a01:4f8:13b:1296::2
```

- The IPs look pretty normal except we've never seen `93.179.69.74` before, and it uses the following user agent:

```
Mozilla/5.0 (Windows NT 5.1) AppleWebKit/535.1 (KHTML, like Gecko) Chrome/13.0.782.20 Safari/535.1
```
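
Splitting the log line on double quotes puts the user agent in field six with nginx's default combined log format, so a quick sanity check of everything that IP is sending might look like this (a sketch, reusing the web logs from above):

```
# zcat --force /var/log/nginx/access.log /var/log/nginx/access.log.1 | grep -F '93.179.69.74' | awk -F'"' '{print $6}' | sort | uniq -c | sort -rn
```
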
- Surprisingly they are re-using their Tomcat session:

```
$ grep -o -E 'session_id=[A-Z0-9]{32}:ip_addr=93.179.69.74' dspace.log.2019-03-25 | sort | uniq | wc -l
1
```
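
It might also be worth counting how many log events that IP generated on its one session, something like this (a sketch; the `ip_addr` field is the same one matched above):

```
$ grep -c 'ip_addr=93.179.69.74' dspace.log.2019-03-25
```
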

- That's weird because the total number of sessions today seems low compared to recent days:

```
$ grep -o -E 'session_id=[A-Z0-9]{32}' dspace.log.2019-03-25 | sort -u | wc -l
5657
$ grep -o -E 'session_id=[A-Z0-9]{32}' dspace.log.2019-03-24 | sort -u | wc -l
17710
$ grep -o -E 'session_id=[A-Z0-9]{32}' dspace.log.2019-03-23 | sort -u | wc -l
17179
$ grep -o -E 'session_id=[A-Z0-9]{32}' dspace.log.2019-03-22 | sort -u | wc -l
7904
```
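
The same comparison can be scripted as a quick loop (a sketch; it assumes bash brace expansion and that the daily logs are in the current directory):

```
$ for day in 2019-03-{22..25}; do echo -n "$day: "; grep -o -E 'session_id=[A-Z0-9]{32}' dspace.log.$day | sort -u | wc -l; done
```
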

- PostgreSQL seems to be pretty busy:

```
$ psql -c 'select * from pg_stat_activity' | grep -o -E '(dspaceWeb|dspaceApi|dspaceCli)' | sort | uniq -c
     11 dspaceApi
     10 dspaceCli
     67 dspaceWeb
```
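
To see whether those connections are actually doing work or just sitting idle, grouping by state in SQL is cleaner than grepping (a sketch; `usename` and `state` are standard `pg_stat_activity` columns):

```
$ psql -c "SELECT usename, state, count(*) FROM pg_stat_activity GROUP BY usename, state ORDER BY 3 DESC;"
```
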

- I restarted Tomcat and deployed the new Tomcat JDBC pool settings on CGSpace since I had to restart the server anyways
- I need to watch this carefully though, because I've read in some places that Tomcat's JDBC pool doesn't track statements and might create memory leaks if an application doesn't close them before a connection gets returned to the pool (see the interceptor sketch after this list)
- According to Uptime Robot the server was up and down a few more times over the next hour, so I restarted Tomcat again
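
One mitigation noted in the Tomcat JDBC pool docs is the `StatementFinalizer` interceptor, which closes any statements the application left open when a connection is returned to the pool. A sketch of how it might be added to the resource definition above (the short interceptor name resolves under `org.apache.tomcat.jdbc.pool.interceptor`; other attributes omitted for brevity):

```
<!-- sketch: same placeholder resource as above, with statement tracking enabled -->
<Resource name="jdbc/dspace" auth="Container" type="javax.sql.DataSource"
          factory="org.apache.tomcat.jdbc.pool.DataSourceFactory"
          jdbcInterceptors="StatementFinalizer"/>
```

It adds a bit of bookkeeping overhead per statement, but that is probably cheaper than leaking memory.
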
<!-- vim: set sw=2 ts=2: -->