---
title: "November, 2017"
date: 2017-11-02T09:37:54+02:00
author: "Alan Orth"
tags: ["Notes"]
---
## 2017-11-01
- The CORE developers responded to say they are looking into their bot not respecting our robots.txt
## 2017-11-02
- Today there have been no hits by CORE and no alerts from Linode (coincidence?)
```
# grep -c "CORE" /var/log/nginx/access.log
0
```
- Generate list of authors on CGSpace for Peter to go through and correct:
```
dspace=# \copy (select distinct text_value, count(*) as count from metadatavalue where metadata_field_id = (select metadata_field_id from metadatafieldregistry where element = 'contributor' and qualifier = 'author') AND resource_type_id = 2 group by text_value order by count desc) to /tmp/authors.csv with csv;
COPY 54701
```
<!--more-->
- Abenet asked if it would be possible to generate a report of items in Listings and Reports that had "International Fund for Agricultural Development" as the *only* investor
- I opened a ticket with Atmire to ask if this was possible: https://tracker.atmire.com/tickets-cgiar-ilri/view-ticket?id=540
- Work on making the thumbnails in the item view clickable
- Basically, once you read the METS XML for an item it becomes easy to trace the structure to find the bitstream link
```
//mets:fileSec/mets:fileGrp[@USE='CONTENT']/mets:file/mets:FLocat[@LOCTYPE='URL']/@xlink:href
```
- METS XML is available for all items with this pattern: /metadata/handle/10568/95947/mets.xml
- I whipped up a quick hack to print a clickable link with this URL on the thumbnail but it needs to check a few corner cases, like when there is a thumbnail but no content bitstream!
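- For reference, a rough way to pull that link out on the command line, assuming `xmllint` is available (just a sketch, using the item above):
```
$ curl -s 'https://cgspace.cgiar.org/metadata/handle/10568/95947/mets.xml' | \
  xmllint --xpath "string(//*[local-name()='fileGrp'][@USE='CONTENT']//*[local-name()='FLocat']/@*[local-name()='href'])" -
```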
- Help proof fifty-three CIAT records for Sisay: https://dspacetest.cgiar.org/handle/10568/95895
- A handful of issues with `cg.place` using format like "Lima, PE" instead of "Lima, Peru"
- Also, some dates with completely invalid formats like "2010- 06" and "2011-3-28"
- I also collapsed some consecutive whitespace on a handful of fields
## 2017-11-03
- Atmire got back to us to say that they estimate it will take two days of labor to implement the change to Listings and Reports
- I said I'd ask Abenet if she wants that feature
## 2017-11-04
- I finished looking through Sisay's CIAT records for the "Alianzas de Aprendizaje" data
- I corrected about half of the authors to standardize them
- Linode emailed this morning to say that the CPU usage was high again, this time at 6:14AM
- It's the first time in a few days that this has happened
- I had a look to see what was going on, but it isn't the CORE bot:
```
# awk '{print $1}' /var/log/nginx/access.log | sort -n | uniq -c | sort -h | tail
306 68.180.229.31
323 61.148.244.116
414 66.249.66.91
507 40.77.167.16
618 157.55.39.161
652 207.46.13.103
666 157.55.39.254
1173 104.196.152.243
1737 66.249.66.90
23101 138.201.52.218
```
- 138.201.52.218 is from some Hetzner server, and I see it making 40,000 requests yesterday too, but none before that:
```
# zgrep -c 138.201.52.218 /var/log/nginx/access.log*
/var/log/nginx/access.log:24403
/var/log/nginx/access.log.1:45958
/var/log/nginx/access.log.2.gz:0
/var/log/nginx/access.log.3.gz:0
/var/log/nginx/access.log.4.gz:0
/var/log/nginx/access.log.5.gz:0
/var/log/nginx/access.log.6.gz:0
```
- It's clearly a bot as it's making tens of thousands of requests, but it's using a "normal" user agent:
```
Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/41.0.2227.0 Safari/537.36
```
- For now I don't know what this user is!
## 2017-11-05
- Peter asked if I could fix the appearance of "International Livestock Research Institute" in the author lookup during item submission
- It looks to be just an issue with the user interface expecting authors to have both a first and last name:
![Author lookup](/cgspace-notes/2017/11/author-lookup.png)
![Add author](/cgspace-notes/2017/11/add-author.png)
- But in the database the authors are correct (none with weird `, /` characters):
```
dspace=# select distinct text_value, authority, confidence from metadatavalue value where resource_type_id=2 and metadata_field_id=3 and text_value like 'International Livestock Research Institute%';
text_value | authority | confidence
--------------------------------------------+--------------------------------------+------------
International Livestock Research Institute | 8f3865dc-d056-4aec-90b7-77f49ab4735c | 0
International Livestock Research Institute | f4db1627-47cd-4699-b394-bab7eba6dadc | 0
International Livestock Research Institute | | -1
International Livestock Research Institute | 8f3865dc-d056-4aec-90b7-77f49ab4735c | 600
International Livestock Research Institute | f4db1627-47cd-4699-b394-bab7eba6dadc | -1
International Livestock Research Institute | | 600
International Livestock Research Institute | 8f3865dc-d056-4aec-90b7-77f49ab4735c | -1
International Livestock Research Institute | 8f3865dc-d056-4aec-90b7-77f49ab4735c | 500
(8 rows)
```
- So I'm not sure if this is just a graphical glitch or if editors have to edit this metadata field prior to approval
- Looking at monitoring Tomcat's JVM heap with Prometheus, it looks like we need to use JMX + [jmx_exporter](https://github.com/prometheus/jmx_exporter)
- This guide shows how to [enable JMX in Tomcat](https://geekflare.com/enable-jmx-tomcat-to-monitor-administer/) by modifying `CATALINA_OPTS`
- I was able to successfully connect to my local Tomcat with jconsole!
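- For local testing I put something like this in Tomcat's `bin/setenv.sh` (port 9000, no SSL, no authentication, so only suitable for localhost):
```
# local testing only: enable JMX with no SSL and no authentication
CATALINA_OPTS="$CATALINA_OPTS -Dcom.sun.management.jmxremote \
  -Dcom.sun.management.jmxremote.port=9000 \
  -Dcom.sun.management.jmxremote.ssl=false \
  -Dcom.sun.management.jmxremote.authenticate=false"
```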
## 2017-11-07
- CGSpace went down and up a few times this morning, first around 3AM, then again around 7AM
- Tsega had to restart Tomcat 7 to fix it temporarily
- I will start by looking at bot usage (access.log.1 includes usage until 6AM today):
```
# cat /var/log/nginx/access.log.1 | awk '{print $1}' | sort -n | uniq -c | sort -h | tail
619 65.49.68.184
840 65.49.68.199
924 66.249.66.91
1131 68.180.229.254
1583 66.249.66.90
1953 207.46.13.103
1999 207.46.13.80
2021 157.55.39.161
2034 207.46.13.36
4681 104.196.152.243
```
- 104.196.152.243 has been a top scraper for a few weeks now:
```
# zgrep -c 104.196.152.243 /var/log/nginx/access.log*
/var/log/nginx/access.log:336
/var/log/nginx/access.log.1:4681
/var/log/nginx/access.log.2.gz:3531
/var/log/nginx/access.log.3.gz:3532
/var/log/nginx/access.log.4.gz:5786
/var/log/nginx/access.log.5.gz:8542
/var/log/nginx/access.log.6.gz:6988
/var/log/nginx/access.log.7.gz:7517
/var/log/nginx/access.log.8.gz:7211
/var/log/nginx/access.log.9.gz:2763
```
- This user is responsible for hundreds and sometimes thousands of Tomcat sessions:
```
$ grep 104.196.152.243 dspace.log.2017-11-07 | grep -o -E 'session_id=[A-Z0-9]{32}' | sort -n | uniq | wc -l
954
$ grep 104.196.152.243 dspace.log.2017-11-03 | grep -o -E 'session_id=[A-Z0-9]{32}' | sort -n | uniq | wc -l
6199
$ grep 104.196.152.243 dspace.log.2017-11-01 | grep -o -E 'session_id=[A-Z0-9]{32}' | sort -n | uniq | wc -l
7051
```
- The worst thing is that this user never specifies a user agent string, so we can't lump it in with the other bots using Tomcat's Crawler Session Manager Valve
- They don't request dynamic URLs like "/discover" but they seem to be fetching handles from XMLUI instead of REST (and some with `//handle`, note the regex below):
```
# grep -c 104.196.152.243 /var/log/nginx/access.log.1
4681
# grep 104.196.152.243 /var/log/nginx/access.log.1 | grep -c -P 'GET //?handle'
4618
```
- I just realized that `ciat.cgiar.org` points to 104.196.152.243, so I should contact Leroy from CIAT to see if we can change their scraping behavior
- The next IP (207.46.13.36) seems to be Microsoft's bingbot, but all its requests specify the "bingbot" user agent and there are no requests for dynamic URLs that are forbidden, like "/discover":
```
$ grep -c 207.46.13.36 /var/log/nginx/access.log.1
2034
# grep 207.46.13.36 /var/log/nginx/access.log.1 | grep -c "GET /discover"
0
```
- The next IP (157.55.39.161) also seems to be bingbot, and none of its requests are for URLs forbidden by robots.txt either:
```
# grep 157.55.39.161 /var/log/nginx/access.log.1 | grep -c "GET /discover"
0
```
- The next few seem to be bingbot as well, and they declare a proper user agent and do not request dynamic URLs like "/discover":
```
# grep -c -E '207.46.13.[0-9]{2,3}' /var/log/nginx/access.log.1
5997
# grep -E '207.46.13.[0-9]{2,3}' /var/log/nginx/access.log.1 | grep -c "bingbot"
5988
# grep -E '207.46.13.[0-9]{2,3}' /var/log/nginx/access.log.1 | grep -c "GET /discover"
0
```
- The next few seem to be Googlebot, and they declare a proper user agent and do not request dynamic URLs like "/discover":
```
# grep -c -E '66.249.66.[0-9]{2,3}' /var/log/nginx/access.log.1
3048
# grep -E '66.249.66.[0-9]{2,3}' /var/log/nginx/access.log.1 | grep -c Google
3048
# grep -E '66.249.66.[0-9]{2,3}' /var/log/nginx/access.log.1 | grep -c "GET /discover"
0
```
- The next seems to be Yahoo, which declares a proper user agent and does not request dynamic URLs like "/discover":
```
# grep -c 68.180.229.254 /var/log/nginx/access.log.1
1131
# grep 68.180.229.254 /var/log/nginx/access.log.1 | grep -c "GET /discover"
0
```
- The last of the top ten IPs seems to be some bot with a weird user agent, but they are not behaving too well:
```
# grep -c -E '65.49.68.[0-9]{3}' /var/log/nginx/access.log.1
2950
# grep -E '65.49.68.[0-9]{3}' /var/log/nginx/access.log.1 | grep -c "GET /discover"
330
```
- Their user agents vary, ie:
- `Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/61.0.3163.100 Safari/537.36`
- `Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.11 (KHTML, like Gecko) Chrome/23.0.1271.97 Safari/537.11`
- `Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.1; WOW64; Trident/7.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; .NET4.0C; .NET4.0E)`
- I'll just keep an eye on that one for now, as it only made a few hundred requests to dynamic discovery URLs
- While it's not in the top ten, Baidu is one bot that seems to not give a fuck:
```
# cat /var/log/nginx/access.log /var/log/nginx/access.log.1 | grep "7/Nov/2017" | grep -c Baiduspider
8912
# cat /var/log/nginx/access.log /var/log/nginx/access.log.1 | grep "7/Nov/2017" | grep Baiduspider | grep -c -E "GET /(browse|discover|search-filter)"
2521
```
- According to their documentation their bot [respects `robots.txt`](http://www.baidu.com/search/robots_english.html), but I don't see this being the case
- I think I will end up blocking Baidu as well...
- Next is for me to look and see what was happening specifically at 3AM and 7AM when the server crashed
- I should look in nginx access.log, rest.log, oai.log, and DSpace's dspace.log.2017-11-07
- Here are the top IPs making requests to XMLUI from 2 to 8 AM:
```
# cat /var/log/nginx/access.log /var/log/nginx/access.log.1 | grep -E '07/Nov/2017:0[2-8]' | awk '{print $1}' | sort -n | uniq -c | sort -h | tail
279 66.249.66.91
373 65.49.68.199
446 68.180.229.254
470 104.196.152.243
470 197.210.168.174
598 207.46.13.103
603 157.55.39.161
637 207.46.13.80
703 207.46.13.36
724 66.249.66.90
```
- Of those, most are Google, Bing, Yahoo, etc, except 63.143.42.244 and 63.143.42.242 which are Uptime Robot
- Here are the top IPs making requests to REST from 2 to 8 AM:
```
# cat /var/log/nginx/rest.log /var/log/nginx/rest.log.1 | grep -E '07/Nov/2017:0[2-8]' | awk '{print $1}' | sort -n | uniq -c | sort -h | tail
8 207.241.229.237
10 66.249.66.90
16 104.196.152.243
25 41.60.238.61
26 157.55.39.161
27 207.46.13.103
27 207.46.13.80
31 207.46.13.36
1498 50.116.102.77
```
- The OAI requests during that same time period are nothing to worry about:
```
# cat /var/log/nginx/oai.log /var/log/nginx/oai.log.1 | grep -E '07/Nov/2017:0[2-8]' | awk '{print $1}' | sort -n | uniq -c | sort -h | tail
1 66.249.66.92
4 66.249.66.90
6 68.180.229.254
```
- The top IPs from dspace.log during the 2 to 8 AM period:
```
$ grep -E '2017-11-07 0[2-8]' dspace.log.2017-11-07 | grep -o -E 'ip_addr=[0-9.]+' | sort -n | uniq -c | sort -h | tail
143 ip_addr=213.55.99.121
181 ip_addr=66.249.66.91
223 ip_addr=157.55.39.161
248 ip_addr=207.46.13.80
251 ip_addr=207.46.13.103
291 ip_addr=207.46.13.36
297 ip_addr=197.210.168.174
312 ip_addr=65.49.68.199
462 ip_addr=104.196.152.243
488 ip_addr=66.249.66.90
```
- These aren't actually very interesting, as the top few are Google, CIAT, Bingbot, and a few other unknown scrapers
- The number of requests isn't even that high to be honest
- As I was looking at these logs I noticed another heavy user (124.17.34.59) that was not active during this time period, but made many requests today alone:
```
# zgrep -c 124.17.34.59 /var/log/nginx/access.log*
/var/log/nginx/access.log:22581
/var/log/nginx/access.log.1:0
/var/log/nginx/access.log.2.gz:14
/var/log/nginx/access.log.3.gz:0
/var/log/nginx/access.log.4.gz:0
/var/log/nginx/access.log.5.gz:3
/var/log/nginx/access.log.6.gz:0
/var/log/nginx/access.log.7.gz:0
/var/log/nginx/access.log.8.gz:0
/var/log/nginx/access.log.9.gz:1
```
- The whois data shows the IP is from China, but the user agent doesn't really give any clues:
```
# grep 124.17.34.59 /var/log/nginx/access.log | awk -F'" ' '{print $3}' | sort | uniq -c | sort -h
210 "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/61.0.3163.100 Safari/537.36"
22610 "Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.2; Win64; x64; Trident/7.0; LCTE)"
```
- A Google search for "LCTE bot" doesn't return anything interesting, but this [Stack Overflow discussion](https://stackoverflow.com/questions/42500881/what-is-lcte-in-user-agent) references the lack of information
- So basically after a few hours of looking at the log files I am no closer to understanding what is going on!
- I do know that we want to block Baidu, though, as it does not respect `robots.txt`
- And as we speak Linode alerted that the outbound traffic rate is very high for the past two hours (roughly 12:00 to 14:00)
- At least for now it seems to be that new Chinese IP (124.17.34.59):
```
# grep -E "07/Nov/2017:1[234]:" /var/log/nginx/access.log | awk '{print $1}' | sort -n | uniq -c | sort -h | tail
198 207.46.13.103
203 207.46.13.80
205 207.46.13.36
218 157.55.39.161
249 45.5.184.221
258 45.5.187.130
386 66.249.66.90
410 197.210.168.174
1896 104.196.152.243
11005 124.17.34.59
```
- Seems 124.17.34.59 is really downloading all our PDFs, compared to the next most active IPs during this time!
```
# grep -E "07/Nov/2017:1[234]:" /var/log/nginx/access.log | grep 124.17.34.59 | grep -c pdf
5948
# grep -E "07/Nov/2017:1[234]:" /var/log/nginx/access.log | grep 104.196.152.243 | grep -c pdf
0
```
- About CIAT, I think I need to encourage them to specify a user agent string for their requests, because they are not reusing their Tomcat sessions and they are creating thousands of sessions per day
- All CIAT requests vs unique ones:
```
$ grep -Io -E 'session_id=[A-Z0-9]{32}:ip_addr=104.196.152.243' dspace.log.2017-11-07 | wc -l
3506
$ grep -Io -E 'session_id=[A-Z0-9]{32}:ip_addr=104.196.152.243' dspace.log.2017-11-07 | sort | uniq | wc -l
3506
```
- I emailed CIAT about the session issue, user agent issue, and told them they should not scrape the HTML contents of communities, instead using the REST API
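- Something like this is roughly what I have in mind for them (the handle and collection ID here are just placeholders):
```
# resolve a handle to its community/collection/item record
$ curl -s -H 'Accept: application/json' 'https://cgspace.cgiar.org/rest/handle/10568/1'
# then page through a collection's items instead of scraping the XMLUI pages
$ curl -s -H 'Accept: application/json' 'https://cgspace.cgiar.org/rest/collections/{collectionId}/items?limit=100&offset=0'
```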
- About Baidu, I found a link to their [robots.txt tester tool](http://ziyuan.baidu.com/robots/)
- It seems like our robots.txt file is valid, and they claim to recognize that URLs like `/discover` should be forbidden (不允许, aka "not allowed"):
![Baidu robots.txt tester](/cgspace-notes/2017/11/baidu-robotstxt.png)
- But they literally just made this request today:
```
180.76.15.136 - - [07/Nov/2017:06:25:11 +0000] "GET /discover?filtertype_0=crpsubject&filter_relational_operator_0=equals&filter_0=WATER%2C+LAND+AND+ECOSYSTEMS&filtertype=subject&filter_relational_operator=equals&filter=WATER+RESOURCES HTTP/1.1" 200 82265 "-" "Mozilla/5.0 (compatible; Baiduspider/2.0; +http://www.baidu.com/search/spider.html)"
```
- Along with another thousand or so requests to URLs that are forbidden in robots.txt today alone:
```
# grep -c Baiduspider /var/log/nginx/access.log
3806
# grep Baiduspider /var/log/nginx/access.log | grep -c -E "GET /(browse|discover|search-filter)"
1085
```
- I will think about blocking their IPs but they have 164 of them!
```
# grep "Baiduspider/2.0" /var/log/nginx/access.log | awk '{print $1}' | sort -n | uniq | wc -l
164
```
## 2017-11-08
- Linode sent several alerts last night about CPU usage and outbound traffic rate at 6:13PM
- Linode sent another alert about CPU usage in the morning at 6:12AM
- Jesus, the new Chinese IP (124.17.34.59) has downloaded 24,000 PDFs in the last 24 hours:
```
# cat /var/log/nginx/access.log /var/log/nginx/access.log.1 | grep -E "0[78]/Nov/2017:" | grep 124.17.34.59 | grep -v pdf.jpg | grep -c pdf
24981
```
- This is about 20,000 Tomcat sessions:
```
$ cat dspace.log.2017-11-07 dspace.log.2017-11-08 | grep -Io -E 'session_id=[A-Z0-9]{32}:ip_addr=124.17.34.59' | sort | uniq | wc -l
20733
```
- I'm getting really sick of this
- Sisay re-uploaded the CIAT records that I had already corrected earlier this week, erasing all my corrections
- I had to re-correct all the publishers, places, names, dates, etc and apply the changes on DSpace Test
- Run system updates on DSpace Test and reboot the server
- Magdalena had written to say that two of their Phase II project tags were missing on CGSpace, so I added them ([#346](https://github.com/ilri/DSpace/pull/346))
- I figured out a way to use nginx's map function to assign a "bot" user agent to misbehaving clients who don't define a user agent
- Most bots are automatically lumped into one generic session by [Tomcat's Crawler Session Manager Valve](https://tomcat.apache.org/tomcat-7.0-doc/config/valve.html#Crawler_Session_Manager_Valve) but this only works if their user agent matches a pre-defined regular expression like `.*[bB]ot.*`
- Some clients send thousands of requests without a user agent which ends up creating thousands of Tomcat sessions, wasting precious memory, CPU, and database resources in the process
- Basically, we modify the nginx config to add a mapping with a modified user agent `$ua`:
```
map $remote_addr $ua {
# 2017-11-08 Random Chinese host grabbing 20,000 PDFs
124.17.34.59 'ChineseBot';
default $http_user_agent;
}
```
- If the client's address matches then the user agent is set, otherwise the default `$http_user_agent` variable is used
- Then, in the server's `/` block we pass this header to Tomcat:
```
proxy_pass http://tomcat_http;
proxy_set_header User-Agent $ua;
```
- Note to self: the `$ua` variable won't show up in nginx access logs because the default `combined` log format doesn't include it, so don't run around pulling your hair out wondering why the modified user agents aren't showing up in the logs!
- If a client matching one of these IPs connects without a session, it will be assigned one by the Crawler Session Manager Valve
- You can verify by cross referencing nginx's `access.log` and DSpace's `dspace.log.2017-11-08`, for example
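- For example, something like this is the kind of cross check I mean (using the Chinese IP mapped above):
```
# requests from the mapped IP on the nginx side...
# grep -c 124.17.34.59 /var/log/nginx/access.log
# ...versus unique Tomcat sessions for the same IP on the DSpace side, which should
# stay very low once the Crawler Session Manager Valve lumps it into one session
$ grep -Io -E 'session_id=[A-Z0-9]{32}:ip_addr=124.17.34.59' dspace.log.2017-11-08 | sort | uniq | wc -l
```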
- I will deploy this on CGSpace later this week
- I am interested to check how this affects the number of sessions used by the CIAT and Chinese bots (see above on [2017-11-07](#2017-11-07) for example)
- I merged the clickable thumbnails code to `5_x-prod` ([#347](https://github.com/ilri/DSpace/pull/347)) and will deploy it later along with the new bot mapping stuff (and re-run the Ansible `nginx` and `tomcat` tags)
- I was thinking about Baidu again and decided to see how many requests they have versus Google to URL paths that are explicitly forbidden in `robots.txt`:
```
# zgrep Baiduspider /var/log/nginx/access.log* | grep -c -E "GET /(browse|discover|search-filter)"
22229
# zgrep Googlebot /var/log/nginx/access.log* | grep -c -E "GET /(browse|discover|search-filter)"
0
```
- It seems that they rarely even bother checking `robots.txt`, but Google does multiple times per day!
```
# zgrep Baiduspider /var/log/nginx/access.log* | grep -c robots.txt
14
# zgrep Googlebot /var/log/nginx/access.log* | grep -c robots.txt
1134
```
- I have been looking for a reason to ban Baidu and this is definitely a good one
- Disallowing `Baiduspider` in `robots.txt` probably won't work because this bot doesn't seem to respect the robot exclusion standard anyways!
- I will whip up something in nginx later
- Run system updates on CGSpace and reboot the server
- Re-deploy latest `5_x-prod` branch on CGSpace and DSpace Test (includes the clickable thumbnails, CCAFS phase II project tags, and updated news text)
## 2017-11-09
- Awesome, it seems my bot mapping stuff in nginx actually reduced the number of Tomcat sessions used by the CIAT scraper today; here are the total requests versus unique sessions:
```
# zcat -f -- /var/log/nginx/access.log.1 /var/log/nginx/access.log.2.gz | grep '09/Nov/2017' | grep -c 104.196.152.243
8956
$ grep 104.196.152.243 dspace.log.2017-11-09 | grep -o -E 'session_id=[A-Z0-9]{32}' | sort -n | uniq | wc -l
223
```
- Versus the same stats for yesterday and the day before:
```
# zcat -f -- /var/log/nginx/access.log.1 /var/log/nginx/access.log.2.gz | grep '08/Nov/2017' | grep -c 104.196.152.243
10216
$ grep 104.196.152.243 dspace.log.2017-11-08 | grep -o -E 'session_id=[A-Z0-9]{32}' | sort -n | uniq | wc -l
2592
# zcat -f -- /var/log/nginx/access.log.2.gz /var/log/nginx/access.log.3.gz | grep '07/Nov/2017' | grep -c 104.196.152.243
8120
$ grep 104.196.152.243 dspace.log.2017-11-07 | grep -o -E 'session_id=[A-Z0-9]{32}' | sort -n | uniq | wc -l
3506
```
- The number of sessions is over *ten times less*!
- This gets me thinking: I wonder if I can use something like nginx's rate limiter to automatically change the user agent of clients who make too many requests
- Perhaps using a combination of geo and map, like illustrated here: https://www.nginx.com/blog/rate-limiting-nginx/
## 2017-11-11
- I was looking at the Google index and noticed there are 4,090 search results for dspace.ilri.org but only seven for mahider.ilri.org
- Search with something like: `inurl:dspace.ilri.org inurl:https`
- I want to get rid of those legacy domains eventually!
## 2017-11-12
- Update the [Ansible infrastructure templates](https://github.com/ilri/rmg-ansible-public) to be a little more modular and flexible
- Looking at the top client IPs on CGSpace so far this morning, even though it's only been eight hours:
```
# cat /var/log/nginx/access.log /var/log/nginx/access.log.1 | grep "12/Nov/2017" | awk '{print $1}' | sort -n | uniq -c | sort -h | tail
243 5.83.120.111
335 40.77.167.103
424 66.249.66.91
529 207.46.13.36
554 40.77.167.129
604 207.46.13.53
754 104.196.152.243
883 66.249.66.90
1150 95.108.181.88
1381 5.9.6.51
```
- 5.9.6.51 seems to be a Russian bot:
```
# grep 5.9.6.51 /var/log/nginx/access.log | tail -n 1
5.9.6.51 - - [12/Nov/2017:08:13:13 +0000] "GET /handle/10568/16515/recent-submissions HTTP/1.1" 200 5097 "-" "Mozilla/5.0 (compatible; MegaIndex.ru/2.0; +http://megaindex.com/crawler)"
```
- What's amazing is that it seems to reuse its Java session across all requests:
```
$ grep -c -E 'session_id=[A-Z0-9]{32}:ip_addr=5.9.6.51' /home/cgspace.cgiar.org/log/dspace.log.2017-11-12
1558
$ grep 5.9.6.51 /home/cgspace.cgiar.org/log/dspace.log.2017-11-12 | grep -o -E 'session_id=[A-Z0-9]{32}' | sort -n | uniq | wc -l
1
```
- Bravo to MegaIndex.ru!
- The same cannot be said for 95.108.181.88, which appears to be YandexBot, even though Tomcat's Crawler Session Manager valve regex should match 'YandexBot':
```
# grep 95.108.181.88 /var/log/nginx/access.log | tail -n 1
95.108.181.88 - - [12/Nov/2017:08:33:17 +0000] "GET /bitstream/handle/10568/57004/GenebankColombia_23Feb2015.pdf HTTP/1.1" 200 972019 "-" "Mozilla/5.0 (compatible; YandexBot/3.0; +http://yandex.com/bots)"
$ grep -c -E 'session_id=[A-Z0-9]{32}:ip_addr=95.108.181.88' /home/cgspace.cgiar.org/log/dspace.log.2017-11-12
991
```
- Move some items and collections on CGSpace for Peter Ballantyne, running [`move_collections.sh`](https://gist.github.com/alanorth/e60b530ed4989df0c731afbb0c640515) with the following configuration:
```
10947/6 10947/1 10568/83389
10947/34 10947/1 10568/83389
10947/2512 10947/1 10568/83389
```
- I explored nginx rate limits as a way to aggressively throttle the Baidu bot, which doesn't seem to respect disallowed URLs in `robots.txt`
- There's an interesting [blog post from Nginx's team about rate limiting](https://www.nginx.com/blog/rate-limiting-nginx/) as well as a [clever use of mapping with rate limits](https://gist.github.com/arosenhagen/8aaf5d7f94171778c0e9)
- The solution [I came up with](https://github.com/ilri/rmg-ansible-public/commit/f0646991772660c505bea9c5ac586490e7c86156) uses tricks from both of those
- I deployed the limit on CGSpace and DSpace Test and it seems to work well:
```
$ http --print h https://cgspace.cgiar.org/handle/10568/1 User-Agent:'Mozilla/5.0 (compatible; Baiduspider/2.0; +http://www.baidu.com/search/spider.html)'
HTTP/1.1 200 OK
Connection: keep-alive
Content-Encoding: gzip
Content-Language: en-US
Content-Type: text/html;charset=utf-8
Date: Sun, 12 Nov 2017 16:30:19 GMT
Server: nginx
Strict-Transport-Security: max-age=15768000
Transfer-Encoding: chunked
Vary: Accept-Encoding
X-Cocoon-Version: 2.2.0
X-Content-Type-Options: nosniff
X-Frame-Options: SAMEORIGIN
X-XSS-Protection: 1; mode=block
$ http --print h https://cgspace.cgiar.org/handle/10568/1 User-Agent:'Mozilla/5.0 (compatible; Baiduspider/2.0; +http://www.baidu.com/search/spider.html)'
HTTP/1.1 503 Service Temporarily Unavailable
Connection: keep-alive
Content-Length: 206
Content-Type: text/html
Date: Sun, 12 Nov 2017 16:30:21 GMT
Server: nginx
```
- The first request works, second is denied with an HTTP 503!
- I need to remember to check the Munin graphs for PostgreSQL and JVM next week to see how this affects them
## 2017-11-13
- At the end of the day I checked the logs and it really looks like the Baidu rate limiting is working, HTTP 200 vs 503:
```
# zcat -f -- /var/log/nginx/access.log.1 /var/log/nginx/access.log.2.gz | grep "13/Nov/2017" | grep "Baiduspider" | grep -c " 200 "
1132
# zcat -f -- /var/log/nginx/access.log.1 /var/log/nginx/access.log.2.gz | grep "13/Nov/2017" | grep "Baiduspider" | grep -c " 503 "
10105
```
- Helping Sisay proof 47 records for IITA: https://dspacetest.cgiar.org/handle/10568/97029
- From looking at the data in OpenRefine I found:
- Errors in `cg.authorship.types`
- Errors in `cg.coverage.country` (smart quote in "COTE DIVOIRE", "HAWAII" is not a country)
- Whitespace issues in some `cg.contributor.affiliation`
- Whitespace issues in some `cg.identifier.doi` fields and most values are using HTTP instead of HTTPS
- Whitespace issues in some `dc.contributor.author` fields
- Issue with invalid `dc.date.issued` value "2011-3"
- Description fields are poorly copy-pasted
- Whitespace issues in `dc.description.sponsorship`
- Lots of inconsistency in `dc.format.extent` (mixed dash style, periods at the end of values)
- Whitespace errors in `dc.identifier.citation`
- Whitespace errors in `dc.subject`
- Whitespace errors in `dc.title`
- After uploading and looking at the data in DSpace Test I saw more errors with CRPs, subjects (one item had four copies of all of its subjects, another had a "." in it), affiliations, sponsors, etc.
- Atmire responded to the [ticket about ORCID stuff](https://tracker.atmire.com/tickets-cgiar-ilri/view-ticket?id=510) a few days ago, today I told them that I need to talk to Peter and the partners to see what we would like to do
## 2017-11-14
- Deploy some nginx configuration updates to CGSpace
- They had been waiting on a branch for a few months and I think I just forgot about them
- I have been running them on DSpace Test for a few days and haven't seen any issues there
- Started testing DSpace 6.2 and a few things have changed
- Now PostgreSQL needs `pgcrypto`:
```
$ psql dspace6
dspace6=# CREATE EXTENSION pgcrypto;
```
- Also, local settings are no longer in `build.properties`, they are now in `local.cfg`
- I'm not sure if we can use separate profiles like we did before with `mvn -Denv=blah` to use `blah.properties`
- It seems we need to use "system properties" to override settings, ie: `-Ddspace.dir=/Users/aorth/dspace7`
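- For my own reference, the rough local flow I've been testing (the paths, database name, and location of `local.cfg.EXAMPLE` are just assumptions about my local setup):
```
$ createdb dspace6
$ psql dspace6 -c 'CREATE EXTENSION pgcrypto;'
$ cd ~/src/git/DSpace
# copy the example local.cfg and set dspace.dir, db.url, etc in it
$ cp dspace/config/local.cfg.EXAMPLE dspace/config/local.cfg
$ mvn package
$ cd dspace/target/dspace-installer && ant fresh_install
```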
## 2017-11-15
- Send Adam Hunt an invite to the DSpace Developers network on Yammer
- He is the new head of communications at WLE, since Michael left
- Merge changes to item view's wording of link metadata ([#348](https://github.com/ilri/DSpace/pull/348))
## 2017-11-17
- Uptime Robot said that CGSpace went down today and I see lots of `Timeout waiting for idle object` errors in the DSpace logs
- I looked in PostgreSQL using `SELECT * FROM pg_stat_activity;` and saw that there were 73 active connections
- After a few minutes the connections went down to 44 and CGSpace was kinda back up; it seems like Tsega restarted Tomcat
- Looking at the REST and XMLUI log files, I don't see anything too crazy:
```
# cat /var/log/nginx/rest.log /var/log/nginx/rest.log.1 | grep "17/Nov/2017" | awk '{print $1}' | sort -n | uniq -c | sort -h | tail
13 66.249.66.223
14 207.46.13.36
17 207.46.13.137
22 207.46.13.23
23 66.249.66.221
92 66.249.66.219
187 104.196.152.243
1400 70.32.83.92
1503 50.116.102.77
6037 45.5.184.196
# cat /var/log/nginx/access.log /var/log/nginx/access.log.1 | grep "17/Nov/2017" | awk '{print $1}' | sort -n | uniq -c | sort -h | tail
325 139.162.247.24
354 66.249.66.223
422 207.46.13.36
434 207.46.13.23
501 207.46.13.137
647 66.249.66.221
662 34.192.116.178
762 213.55.99.121
1867 104.196.152.243
2020 66.249.66.219
```
- I need to look into using JMX to analyze active sessions I think, rather than looking at log files
- After adding appropriate [JMX listener options to Tomcat's JAVA_OPTS](https://geekflare.com/enable-jmx-tomcat-to-monitor-administer/) and restarting Tomcat, I can connect remotely using an SSH dynamic port forward (SOCKS) on port 7777 for example, and then start jconsole locally like:
```
$ jconsole -J-DsocksProxyHost=localhost -J-DsocksProxyPort=7777 service:jmx:rmi:///jndi/rmi://localhost:9000/jmxrmi -J-DsocksNonProxyHosts=
```
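- The dynamic port forward itself is just a standard SSH SOCKS proxy, something like (user and host are my own setup):
```
# SOCKS proxy on localhost:7777
$ ssh -D 7777 -N aorth@dspacetest.cgiar.org
```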
- Looking at the MBeans you can drill down in `Catalina→Manager→<webapp>→localhost→Attributes` and see active sessions, etc
- I want to enable JMX listener on CGSpace but I need to do some more testing on DSpace Test and see if it causes any performance impact, for example
- If I hit the server with some requests as a normal user I see the session counter increase, but if I specify a bot user agent then the sessions seem to be reused (meaning the Crawler Session Manager is working)
- Here is the Jconsole screen after looping `http --print Hh https://dspacetest.cgiar.org/handle/10568/1` for a few minutes:
![Jconsole sessions for XMLUI](/cgspace-notes/2017/11/jconsole-sessions.png)
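- The loop itself was nothing fancy, roughly:
```
# hit the handle page repeatedly as a "normal" client (no bot user agent)
$ while true; do http --print Hh https://dspacetest.cgiar.org/handle/10568/1; sleep 2; done
```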