<li><p>Generate a list of authors on CGSpace for Peter to go through and correct:</p>
<pre><code>dspace=# \copy (select distinct text_value, count(*) as count from metadatavalue where metadata_field_id = (select metadata_field_id from metadatafieldregistry where element = 'contributor' and qualifier = 'author') AND resource_type_id = 2 group by text_value order by count desc) to /tmp/authors.csv with csv;
</code></pre></li>
<li>Abenet asked if it would be possible to generate a report of items in Listings and Reports that had “International Fund for Agricultural Development” as the <em>only</em> investor</li>
<li>I opened a ticket with Atmire to ask if this was possible: <a href="https://tracker.atmire.com/tickets-cgiar-ilri/view-ticket?id=540">https://tracker.atmire.com/tickets-cgiar-ilri/view-ticket?id=540</a></li>
<li>Work on making the thumbnails in the item view clickable</li>
<li><p>METS XML is available for all items with this pattern: /metadata/handle/10568/95947/mets.xml</p></li>
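<li><p>For example, the METS for that item can be fetched like this (using httpie; any item handle should work the same way):</p>

<pre><code>$ http 'https://dspacetest.cgiar.org/metadata/handle/10568/95947/mets.xml'
</code></pre></li>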
<li><p>I whipped up a quick hack to print a clickable link with this URL on the thumbnail but it needs to check a few corner cases, like when there is a thumbnail but no content bitstream!</p></li>
<li><p>Help proof fifty-three CIAT records for Sisay: <a href="https://dspacetest.cgiar.org/handle/10568/95895">https://dspacetest.cgiar.org/handle/10568/95895</a></p></li>
<li><p>A handful of issues with <code>cg.place</code> using format like “Lima, PE” instead of “Lima, Peru”</p></li>
<li><p>Also, some dates with completely invalid formats, like “2010- 06” and “2011-3-28”</p></li>
<li><p>I also collapsed some consecutive whitespace on a handful of fields</p></li>
<li>Peter asked if I could fix the appearance of “International Livestock Research Institute” in the author lookup during item submission</li>
<li>It looks to be just an issue with the user interface expecting authors to have both a first and last name:</li>
<pre><code>dspace=# select distinct text_value, authority, confidence from metadatavalue where resource_type_id=2 and metadata_field_id=3 and text_value like 'International Livestock Research Institute%';
</code></pre>
<li><p>So I’m not sure if this is just a graphical glitch or if editors have to edit this metadata field prior to approval</p></li>
<li><p>Looking at monitoring Tomcat’s JVM heap with Prometheus, it looks like we need to use JMX + <a href="https://github.com/prometheus/jmx_exporter">jmx_exporter</a></p></li>
<li><p>This guide shows how to <a href="https://geekflare.com/enable-jmx-tomcat-to-monitor-administer/">enable JMX in Tomcat</a> by modifying <code>CATALINA_OPTS</code></p></li>
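<li><p>A minimal sketch of the relevant options (the port is arbitrary, and a production setup should enable authentication):</p>

<pre><code>CATALINA_OPTS="$CATALINA_OPTS -Dcom.sun.management.jmxremote \
    -Dcom.sun.management.jmxremote.port=9010 \
    -Dcom.sun.management.jmxremote.ssl=false \
    -Dcom.sun.management.jmxremote.authenticate=false"
</code></pre></li>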
<li><p>I was able to successfully connect to my local Tomcat with jconsole!</p></li>
<li><p>The worst thing is that this user never specifies a user agent string, so we can’t lump it in with the other bots using the Tomcat Crawler Session Manager Valve</p></li>
<li><p>They don’t request dynamic URLs like “/discover” but they seem to be fetching handles from XMLUI instead of REST (and some with <code>//handle</code>, note the regex below):</p>
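<p>A sketch of the kind of query I ran against the nginx logs (log path and exact regex from memory):</p>

<pre><code># grep -E 'GET //?handle/' /var/log/nginx/access.log | awk '{print $1}' | sort | uniq -c | sort -rn | head
</code></pre></li>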
<li><p>I just realized that <code>ciat.cgiar.org</code> points to 104.196.152.243, so I should contact Leroy from CIAT to see if we can change their scraping behavior</p></li>
<li><p>The next IP (207.46.13.36) seems to be Microsoft’s bingbot, but all its requests specify the “bingbot” user agent and there are no requests for dynamic URLs that are forbidden, like “/discover”:</p>
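<p>For example, a quick check that this IP never requests “/discover” (log path assumed):</p>

<pre><code># grep 207.46.13.36 /var/log/nginx/access.log | grep -c '/discover'
</code></pre></li>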
<li><code>Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/61.0.3163.100 Safari/537.36</code></li>
<li><code>Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.11 (KHTML, like Gecko) Chrome/23.0.1271.97 Safari/537.11</code></li>
<li><code>Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.1; WOW64; Trident/7.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; .NET4.0C; .NET4.0E)</code></li>
<li><p>According to their documentation their bot <a href="http://www.baidu.com/search/robots_english.html">respects <code>robots.txt</code></a>, but I don’t see this being the case</p></li>
<li><p>These aren’t actually very interesting, as the top few are Google, CIAT, Bingbot, and a few other unknown scrapers</p></li>
<li><p>The number of requests isn’t even that high to be honest</p></li>
<li><p>As I was looking at these logs I noticed another heavy user (124.17.34.59) that was not active during this time period, but made many requests today alone:</p>
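<p>A rough way to see the day’s request count for that IP (log path assumed):</p>

<pre><code># grep -c 124.17.34.59 /var/log/nginx/access.log
</code></pre></li>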
<li><p>A Google search for “LCTE bot” doesn’t return anything interesting, but this <a href="https://stackoverflow.com/questions/42500881/what-is-lcte-in-user-agent">Stack Overflow discussion</a> references the lack of information</p></li>
<li><p>About CIAT, I think I need to encourage them to specify a user agent string for their requests, because they are not reusing their Tomcat session and they are creating thousands of sessions per day</p></li>
<li><p>I emailed CIAT about the session issue and the user agent issue, and told them they should not scrape the HTML contents of communities, but use the REST API instead</p></li>
<li><p>About Baidu, I found a link to their <a href="http://ziyuan.baidu.com/robots/">robots.txt tester tool</a></p></li>
<li><p>It seems like our robots.txt file is valid, and they claim to recognize that URLs like <code>/discover</code> should be forbidden (不允许, aka “not allowed”):</p></li>
<li><p>I’m getting really sick of this</p></li>
<li><p>Sisay re-uploaded the CIAT records that I had already corrected earlier this week, erasing all my corrections</p></li>
<li><p>I had to re-correct all the publishers, places, names, dates, etc and apply the changes on DSpace Test</p></li>
<li><p>Run system updates on DSpace Test and reboot the server</p></li>
<li><p>Magdalena had written to say that two of their Phase II project tags were missing on CGSpace, so I added them (<a href="https://github.com/ilri/DSpace/pull/346">#346</a>)</p></li>
<li><p>I figured out a way to use nginx’s map function to assign a “bot” user agent to misbehaving clients who don’t define a user agent</p></li>
<li><p>Most bots are automatically lumped into one generic session by <a href="https://tomcat.apache.org/tomcat-7.0-doc/config/valve.html#Crawler_Session_Manager_Valve">Tomcat’s Crawler Session Manager Valve</a> but this only works if their user agent matches a pre-defined regular expression like <code>.*[bB]ot.*</code></p></li>
<li><p>Some clients send thousands of requests without a user agent which ends up creating thousands of Tomcat sessions, wasting precious memory, CPU, and database resources in the process</p></li>
<li><p>Basically, we modify the nginx config to add a mapping that sets a modified user agent variable <code>$ua</code>:</p>
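<p>A minimal sketch of the mapping, using the CIAT scraper’s IP from above (the production config covers more addresses):</p>

<pre><code># Assign a generic "bot" user agent to known misbehaving IPs so that Tomcat's
# Crawler Session Manager Valve will lump their requests into one session
map $remote_addr $ua {
    default         $http_user_agent;
    104.196.152.243 'bot';
}

# ... then in the location block that proxies to Tomcat:
proxy_set_header User-Agent $ua;
</code></pre></li>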
<li><p>Note to self: the <code>$ua</code> variable won’t show up in nginx access logs because the default <code>combined</code> log format doesn’t show it, so don’t run around pulling your hair out wondering why the modified user agents aren’t showing in the logs!</p></li>
<li><p>If a client matching one of these IPs connects without a session, it will be assigned one by the Crawler Session Manager Valve</p></li>
<li><p>You can verify by cross referencing nginx’s <code>access.log</code> and DSpace’s <code>dspace.log.2017-11-08</code>, for example</p></li>
<li><p>I will deploy this on CGSpace later this week</p></li>
<li><p>I am interested to check how this affects the number of sessions used by the CIAT and Chinese bots (see above on <a href="#2017-11-07">2017-11-07</a> for example)</p></li>
<li><p>I merged the clickable thumbnails code to <code>5_x-prod</code> (<a href="https://github.com/ilri/DSpace/pull/347">#347</a>) and will deploy it later along with the new bot mapping stuff (and re-run the Ansible <code>nginx</code> and <code>tomcat</code> tags)</p></li>
<li><p>I was thinking about Baidu again and decided to see how many requests they have versus Google to URL paths that are explicitly forbidden in <code>robots.txt</code>:</p>
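<p>The comparison boils down to grepping each bot’s requests for the disallowed paths (<code>/discover</code> is definitely disallowed in our robots.txt; the other paths are from memory):</p>

<pre><code># zgrep Baiduspider /var/log/nginx/access.log* | grep -c -E 'GET /(browse|discover|search-filter)'
# zgrep Googlebot /var/log/nginx/access.log* | grep -c -E 'GET /(browse|discover|search-filter)'
</code></pre></li>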
<li><p>I have been looking for a reason to ban Baidu and this is definitely a good one</p></li>
<li><p>Disallowing <code>Baiduspider</code> in <code>robots.txt</code> probably won’t work because this bot doesn’t seem to respect the robot exclusion standard anyways!</p></li>
<li><p>I will whip up something in nginx later</p></li>
<li><p>Run system updates on CGSpace and reboot the server</p></li>
<li><p>Re-deploy latest <code>5_x-prod</code> branch on CGSpace and DSpace Test (includes the clickable thumbnails, CCAFS phase II project tags, and updated news text)</p></li>
<li><p>Awesome, it seems my bot mapping stuff in nginx actually reduced the number of Tomcat sessions used by the CIAT scraper today; here are the total requests and unique sessions:</p>
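<p>The kind of check I use for this (total requests from the nginx access log, unique sessions from dspace.log; paths and session regex from memory):</p>

<pre><code># grep -c 104.196.152.243 /var/log/nginx/access.log
$ grep 104.196.152.243 dspace.log.2017-11-08 | grep -o -E 'session_id=[A-Z0-9]{32}' | sort | uniq | wc -l
</code></pre></li>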
<li><p>The number of sessions is over <em>ten times less</em>!</p></li>
<li><p>This gets me thinking, I wonder if I can use something like nginx’s rate limiter to automatically change the user agent of clients who make too many requests</p></li>
<li><p>Perhaps using a combination of geo and map, like illustrated here: <a href="https://www.nginx.com/blog/rate-limiting-nginx/">https://www.nginx.com/blog/rate-limiting-nginx/</a></p></li>
<li><p>The same cannot be said for 95.108.181.88, which appears to be YandexBot, even though Tomcat’s Crawler Session Manager Valve regex should match ‘YandexBot’:</p></li>
<li><p>Move some items and collections on CGSpace for Peter Ballantyne, running <a href="https://gist.github.com/alanorth/e60b530ed4989df0c731afbb0c640515"><code>move_collections.sh</code></a> with the following configuration:</p></li>
<li><p>I explored nginx rate limits as a way to aggressively throttle Baidu bot which doesn’t seem to respect disallowed URLs in robots.txt</p></li>
<li><p>There’s an interesting <a href="https://www.nginx.com/blog/rate-limiting-nginx/">blog post from Nginx’s team about rate limiting</a> as well as a <a href="https://gist.github.com/arosenhagen/8aaf5d7f94171778c0e9">clever use of mapping with rate limits</a></p></li>
<li><p>The solution <a href="https://github.com/ilri/rmg-ansible-public/commit/f0646991772660c505bea9c5ac586490e7c86156">I came up with</a> uses tricks from both of those</p></li>
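<li><p>A rough sketch of the approach (zone name and rate are illustrative; see the commit for the real config):</p>

<pre><code># Only rate limit clients whose user agent matches a known bad bot;
# everyone else maps to an empty string and is not limited
map $http_user_agent $limit_bots {
    default '';
    ~*baiduspider $binary_remote_addr;
}

limit_req_zone $limit_bots zone=badbots:10m rate=2r/s;

# ... then in the server or location block:
limit_req zone=badbots;
</code></pre></li>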
<li><p>I deployed the limit on CGSpace and DSpace Test and it seems to work well:</p>
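<p>For example, hammering DSpace Test with a Baidu user agent should start returning HTTP 503 once the rate is exceeded (httpie again):</p>

<pre><code>$ for i in {1..10}; do http --print h 'https://dspacetest.cgiar.org/handle/10568/1' User-Agent:'Baiduspider'; done
</code></pre></li>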
<li><p>Helping Sisay proof 47 records for IITA: <ahref="https://dspacetest.cgiar.org/handle/10568/97029">https://dspacetest.cgiar.org/handle/10568/97029</a></p></li>
<li><p>From looking at the data in OpenRefine I found:</p></li>
<li><p>After uploading and looking at the data in DSpace Test I saw more errors with CRPs, subjects (one item had four copies of all of its subjects, another had a “.” in it), affiliations, sponsors, etc.</p></li>
<li><p>Atmire responded to the <a href="https://tracker.atmire.com/tickets-cgiar-ilri/view-ticket?id=510">ticket about ORCID stuff</a> a few days ago, today I told them that I need to talk to Peter and the partners to see what we would like to do</p></li>
<li><p>I think I need to look into using JMX to analyze active sessions, rather than looking at log files</p></li>
<li><p>After adding appropriate <a href="https://geekflare.com/enable-jmx-tomcat-to-monitor-administer/">JMX listener options to Tomcat’s JAVA_OPTS</a> and restarting Tomcat, I can connect remotely using an SSH dynamic port forward (SOCKS) on port 7777 for example, and then start jconsole locally like:</p>
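<p>For example (assuming the JMX listener is on port 9010, as in the sketch above):</p>

<pre><code>$ jconsole -J-DsocksProxyHost=localhost -J-DsocksProxyPort=7777 service:jmx:rmi:///jndi/rmi://localhost:9010/jmxrmi
</code></pre></li>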
<li><p>Looking at the MBeans you can drill down in Catalina→Manager→webapp→localhost→Attributes and see active sessions, etc</p></li>
<li><p>I want to enable JMX listener on CGSpace but I need to do some more testing on DSpace Test and see if it causes any performance impact, for example</p></li>
<li><p>If I hit the server with some requests as a normal user I see the session counter increase, but if I specify a bot user agent then the sessions seem to be reused (meaning the Crawler Session Manager is working)</p></li>
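<li><p>To test the bot case I just override the user agent in the request loop, e.g.:</p>

<pre><code>$ while true; do http --print Hh 'https://dspacetest.cgiar.org/handle/10568/1' User-Agent:'GoogleBot'; done
</code></pre></li>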
<li><p>Here is the Jconsole screen after looping <code>http --print Hh https://dspacetest.cgiar.org/handle/10568/1</code> for a few minutes:</p></li>
<li><p>We know Googlebot is persistent but behaves well, so I guess it was just a coincidence that it came at a time when we had other traffic and server activity</p></li>
<li><p>In related news, I see an Atmire update process that has been running for many hours and is responsible for hundreds of thousands of log entries (two thirds of all log entries)</p></li>
<li>I found <a href="https://www.cakesolutions.net/teamblogs/low-pause-gc-on-the-jvm">an article about JVM tuning</a> that gives some pointers on how to enable logging and tools to analyze logs for you</li>
<li>Also notes on <a href="https://blog.gceasy.io/2016/11/15/rotating-gc-log-files/">rotating GC logs</a></li>
<li>I decided to switch DSpace Test back to the CMS garbage collector because it is designed for low pauses and high throughput (like G1GC!) and because we haven’t even tried to monitor or tune it</li>
<li><p>I haven’t seen 54.144.57.183 before, it is apparently the CCBot from commoncrawl.org</p></li>
<li><p>In other news, it looks like the JVM garbage collection pattern is back to its standard jigsaw pattern after switching back to CMS a few days ago:</p></li>
<li><p>These IPs crawling the REST API don’t specify user agents and I’d assume they are creating many Tomcat sessions</p></li>
<li><p>I would catch them in nginx to assign a “bot” user agent to them so that the Tomcat Crawler Session Manager Valve could deal with them, but they don’t seem to create any sessions, at least not in the dspace.log:</p></li>
<li><p>I’m wondering if REST works differently, or just doesn’t log these sessions?</p></li>
<li><p>I wonder if they are measurable via JMX MBeans?</p></li>
<li><p>I did some tests locally and I don’t see the sessionCounter incrementing after making requests to REST, but it does with XMLUI and OAI</p></li>
<li><p>I came across some interesting PostgreSQL tuning advice for SSDs: <a href="https://amplitude.engineering/how-a-single-postgresql-config-change-improved-slow-query-performance-by-50x-85593b8991b0">https://amplitude.engineering/how-a-single-postgresql-config-change-improved-slow-query-performance-by-50x-85593b8991b0</a></p></li>
<li><p>Apparently setting <code>random_page_cost</code> to 1 is “common” advice for systems running PostgreSQL on SSD (the default is 4)</p></li>
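<li><p>The change itself is one line in postgresql.conf (or the equivalent <code>ALTER SYSTEM</code> statement):</p>

<pre><code># the default is 4, which assumes spinning disks
random_page_cost = 1
</code></pre></li>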
<li><p>So I deployed this on DSpace Test and will check the Munin PostgreSQL graphs in a few days to see if anything changes</p></li>
<li>It’s too early to tell for sure, but after I made the <code>random_page_cost</code> change on DSpace Test’s PostgreSQL yesterday the number of connections dropped drastically:</li>
</ul>
<p><imgsrc="/cgspace-notes/2017/11/postgres-connections-week.png"alt="PostgreSQL connections after tweak (week)"/></p>
<ul>
<li>There have been other temporary drops before, but if I look at the past month and actually the whole year, the trend is that connections are four or five times higher on average:</li>
</ul>
<p><imgsrc="/cgspace-notes/2017/11/postgres-connections-month.png"alt="PostgreSQL connections after tweak (month)"/></p>
<ul>
<li>I just realized that we’re not logging access requests to other vhosts on CGSpace, so it’s possible I have no idea that we’re getting slammed at 4AM on another domain that we’re just silently redirecting to cgspace.cgiar.org</li>
<li>I’ve enabled logging on the CGIAR Library on CGSpace so I can check to see if there are many requests there</li>
<li>In just a few seconds I already see a dozen requests from Googlebot (of course they get HTTP 301 redirects to cgspace.cgiar.org)</li>
<li><p>I think we just need to start increasing the number of allowed PostgreSQL connections instead of fighting this, as it’s the most common source of crashes we have</p></li>
<li><p>I will bump DSpace’s <code>db.maxconnections</code> from 60 to 90, and PostgreSQL’s <code>max_connections</code> from 183 to 273 (which is using my loose formula of 90 * webapps + 3; with three webapps that is 90 * 3 + 3 = 273)</p></li>
<li><p>I really need to figure out how to get DSpace to use a PostgreSQL connection pool</p></li>